Talk:Embedded Open Modular Architecture/EOMA-68


Confusion over definition of I2C Addresses

There has been considerable confusion, even while developing the EOMA68 specification, after finding datasheets that do not correctly describe their devices' addresses. The confusion stems from treating the first 8 bits as an 8-bit number (MSB first), and many datasheets - including that of the recommended I2C EEPROM part (AT24C64) - propagate this confusion in direct violation of the I2C specification.

To correctly identify an I2C address, the first 7 bits must be treated as a 7-bit number (MSB first). The 8th bit is a read/write indicator. Many datasheets incorrectly report two separate I2C "addresses": one is double the correct address, the other is double plus one.

Please ensure, when complying with the EOMA68 specification, that an EEPROM with the correct I2C address of 0x51 (not 0xA2 and 0xA3) is used.
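The relationship between the mislabeled 8-bit datasheet "addresses" and the true 7-bit address can be sketched as follows (a minimal illustration; `split_8bit` is a hypothetical helper, not part of any spec or library):

```python
# Sketch: how a datasheet-style 8-bit value decomposes into the true
# 7-bit I2C address plus the R/W bit (hypothetical helper for illustration).

def split_8bit(byte):
    """Split an 8-bit datasheet value into (7-bit address, R/W bit)."""
    return byte >> 1, byte & 0x01

# Both 0xA2 and 0xA3 are really address 0x51; only the R/W bit differs
# (R/W = 0 means write, R/W = 1 means read).
assert split_8bit(0xA2) == (0x51, 0)  # "write address" in the datasheet
assert split_8bit(0xA3) == (0x51, 1)  # "read address" in the datasheet
```

So the AT24C64's two datasheet "addresses" collapse to the single address 0x51 once the direction bit is separated out.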

>  so what you're saying is that just because (if you were reading the 8
> bits in sequence), the bit that comes *after* the I2Caddress (its LSB)
> is in the place that, if it *was* an 8-bit address, you'd call it bit
> 0.  but you should never consider this to be so, instead should read
> the 7 bits only then treat the 8th bit as completely separate, right?
>  so that would explain how i managed to read 0xA2 as being the I2C
> EEPROM read address and 0xA3 as the I2C EEPROM write address, when in
> fact they're *both* 0x51 and the R/W bit has absolutely nothing to do
> with the actual address, would that be right?

Yes. I don't remember where I first read that, but I remembered it well
enough not to be too surprised when I later hit actual cases of such
confusion a few times. I didn't really do my own due diligence then, but
this is a good chance to do it now. Wikipedia links to the "Official I2C
Specification, NXP", which clearly states in section 3.1.10 that "This
address is seven bits long followed by an eighth bit which is a data
direction bit (R/W)". Googling for "i2c address confusion" also finds
pages which go over the 7/8-bit confusion and which convention is
correct.
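Going in the other direction, the first byte on the wire is composed exactly as section 3.1.10 describes: 7 address bits followed by the direction bit (a sketch; `first_byte` is a hypothetical helper):

```python
# Sketch: composing the first byte of an I2C transaction from a 7-bit
# address and a direction flag, per the NXP spec (section 3.1.10).

def first_byte(addr7, read):
    """Return the byte sent on the bus: 7 address bits, then R/W.
    R/W = 1 requests a read, R/W = 0 a write."""
    assert 0 <= addr7 <= 0x7F, "I2C addresses are 7 bits"
    return (addr7 << 1) | (1 if read else 0)

# The single address 0x51 yields both datasheet "addresses":
assert first_byte(0x51, read=False) == 0xA2
assert first_byte(0x51, read=True) == 0xA3
```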

I2C EEPROM read/write

  • aseigo 2013nov05

there are three options:

  • a) the ID EEPROM must be writable
  • b) the ID EEPROM must not be writable
  • c) the ID EEPROM may be writable, but this should not be relied on by any portable software


c) may be writable, but portable software may not rely on this

this is a middle ground: devices which would violate the standard under (b) remain compliant, while devices which really are best served by (a) are not only compliant but also avoid having theoretically portable software misbehave on them.

basically, (c) implies that the EEPROM must be treated as not-writable in the general case. any user actions or software that attempt to write to the EEPROM are classified as non-portable and device-specific, and the EOMA68 specification provides zero guarantees as to the expected behaviour in such cases.

as this allows for compliant generic chassis *and* mass-market consumer devices, i highly recommend (c)

  • lkcl2013nov06: agreed.


Threads (replies, last modified):

  • Requirements for SD/MMC and SPI (1 reply, 09:57, 31 July 2016)
  • Requirements for RGB/TTL - 16 vs 18 bit bus (1 reply, 09:57, 31 July 2016)

Requirements for SD/MMC and SPI

"In essence, the SD/MMC committee have caused a bit of trouble, here, but it may be best to trust their experience in that SD/MMC Cards have probably not, for some considerable time, been actually using SPI mode, but have been offering the 2, 4 (and now 8) lane capability for a long, long time."

By 2-bit, do we mean the SD/MMC mode where data is transferred one bit at a time over either command or data lines? Just to get the terminology right, I believe SD calls it 1-bit SD.

Rsaxvc (talk)06:59, 31 July 2016

honestly not sure - what would you suggest (editing-wise)?

Lkcl (talk)09:57, 31 July 2016

Requirements for RGB/TTL - 16 vs 18 bit bus

"EOMA-68's RGB/TTL interface is 18-bit-wide. If a particular SoC only has e.g. 16-bit or 15-bit RGB/TTL then the LSB (lower) bits MUST be set to logic output level 0 when the LCD interface is enabled: they must NOT be left floating or tri-state. This ensures that devices which are expecting the full 18-bits do not receive noise on the lower bits of each of the R,G and B 8-bit inputs."

While I agree floating is bad, I'm not certain requiring those pins to be grounded is the best solution, as it will prevent the display from ever showing full red, green, blue, or white. Instead, if enough drive strength is available, any extra LSBs can be tied to the corresponding MSBs for each channel. Or, they could be tied to other pins driven by actual data.

Rsaxvc (talk)06:57, 31 July 2016
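The MSB-replication idea above can be sketched numerically: widening a channel by copying its top bits into the new LSBs maps full-scale to full-scale, whereas zero-padding does not (a hedged illustration; `expand_channel` is a hypothetical helper, not part of the spec):

```python
# Sketch: widening a colour channel by replicating MSBs into the new LSBs
# (e.g. a 5-bit channel of RGB565 driven onto a 6-bit RGB666 bus).

def expand_channel(value, src_bits, dst_bits):
    """Widen 'value' from src_bits to dst_bits by MSB replication."""
    extra = dst_bits - src_bits
    out = value << extra                      # shift into the wider field
    out |= value >> (src_bits - extra)        # copy top bits into new LSBs
    return out

# Full-scale 5-bit red (31) becomes full-scale 6-bit red (63);
# zero-padding the LSB would instead give only 62.
assert expand_channel(0b11111, 5, 6) == 0b111111
assert expand_channel(0, 5, 6) == 0
```

This is why tying the extra LSBs to the corresponding MSBs (rather than to ground) preserves the full brightness range of each channel.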

thanks, appreciated.

Lkcl (talk)09:57, 31 July 2016