Hi Luke,
sorry for the few months' delay - I was stuck with moving, family, job applications, etc.
I would like to bring up again the painful topic of a long-sustainable graphical interface in the EOMA68 specification ( http://rhombus-tech.net/whitepapers/ecocomputing_07sep2015/ ), as the RGB/TTL solution is such an utterly bad choice.
RGB/TTL is way too slow and highly limited (on its own, but even more so within the EOMA specification):
a) maximum resolution 1366x768 (even though the current EOMA should be capable of 1440x900)
b) maximum pixel clock about 148.5 MHz (~ max 71.6 fps at the mainstream 1920x1080, with visual artifacts and high EMI disqualifying EOMA for mobile devices with radios)
c) maximum color depth 18 bits (3x6)
d) eats a lot of pins (18) already in this lowest-quality set-up
e) requires HW changes (i.e. disallows sustainable plug & play) in case EOMA should mitigate the limitations above (parallel interfaces require the addition of many pins) - this is impossible anyway, because all pins are already in use
f) easy (not requiring a chip) conversion to VGA output; conversion chips to all (!) other interfaces (which are modern, serial, and ubiquitous) are needed (and are more expensive than serial->RGB/TTL ones because of very low purchaser interest)
g) easy implementation in FPGA (a few hundred LUTs)
Yet EOMA does not even allow adding any better display interface, because there are not enough "free" pins for any modern serial interface (MIPI DSI, eDP, HDMI, ...). This effectively totally (!) disallows manufacturers of EOMA cards from adding SoC <-> MIPI/eDP/HDMI circuitry on an EOMA card to provide at least mainstream (!) resolution output with 24 bit colors for internal displays (not to mention 2017, when it will supposedly be 4K at 30 fps with 24 bit colors).
In other words, this one fact alone degrades the whole EOMA to the category of yet another toy (as every other libre general-purpose user computer HW has failed up until now). By the way, I had to publicly admit this during my talk at the OpenAlt 2016 conference (https://openalt.cz/2016 , there is a full recording).
After reading through all the important emails from Arm-netbook beginning in 2010 (yeah, 571509 lines of text), many of your posts on different web servers, and watching nearly all your videos, I did some research on display interfaces. A few quick facts based on my findings follow (yes, I focus on the lower-mainstream and mainstream "fat" embedded and mobile segment, not on the total low-end, because there we have zillions of existing PCBs all offering basically the same HW interfaces - some of them even libre).
From panelook.com (size >= 7.0", px density >= 160 PPI):
* LVDS: 382 panels in MP YEAR 2016 (2015: 53) => ratio (the higher the better) 53/382 = 0.138
* MIPI DSI: 239 panels in MP YEAR 2016 (2015: 50) => ratio 50/239 = 0.209
* eDP: 233 panels in MP YEAR 2016 (2015: 66) => ratio 66/233 = 0.283
* RGB/TTL: 15 panels in MP YEAR 2016 (2015: 2) => ratio 2/15 = 0.133
(the ratio shows how much a given interface is on the rise)
Video interfaces from the data sheets of a few tens of the more performant (i.e. having more computing power) mobile SoCs (no AMDs, no Intels) in 2015 & 2016:
* LVDS: nearly nowhere
* MIPI DSI: everywhere (!)
* eDP: nearly nowhere (in contrast to big chips like Intel i5/i6/i7, where eDP is largely prevalent)
* RGB/TTL: nearly everywhere (but especially on smaller SoCs)
We can see a strong trend of LVDS disappearing (though still holding the major position in 2015 & 2016), MIPI DSI and eDP on a fast rise, and RGB/TTL in total decline, having reached its physical limits. Add the fact that MIPI DSI has been present on basically every mobile SoC since circa 2014 (in contrast to 2012, when EOMA was looking for "the ultimate video interface" and RGB/TTL was really the only portable option) and is moreover easily and cheaply convertible to eDP or to LVDS, and we have a clear winner. By the way, even Intel recommends external MIPI DSI to LVDS and MIPI DSI to eDP bridges for its SoCs. Based on all that I'm confident that in the upcoming 10 years the SoC market will use MIPI DSI everywhere as the main standard, and eDP for the few biggest chips.
MIPI DSI also doesn't have the issue LVDS had, where the specification did not cover the chosen width and properties of implementations and thus prevented bundling universal conversion chips.
MIPI DSI offers:
a) highest state-of-the-art resolutions (not limited to, but supporting 4096x2160)
b) highest state-of-the-art refresh rates (not limited to, but supporting 120 fps at 1920x1200, i.e. a pixel clock of about 276.5 MHz - see the rough bandwidth sketch after this list) without visual artifacts and while remaining low-power
c) 24 bit color depth (3x8)
d) eats just 4 pins in the minimal configuration with one lane (in practice 4 lanes are most common, so 10 pins will be needed)
e) requires no HW changes (increasing frequency) or only very minimal ones (adding two pins as a new data lane) - MIPI DSI is a serial interface
f) VGA output (hell, it should die out already!) requires a conversion chip (which is not so expensive, so it shouldn't influence the Micro Desktop PC PCB nor the notebook PCB price; actually I would not provide VGA at all on these consumer devices, but rather (e)DP or HDMI, because consumers do not want VGA any more, and external eDP -> VGA converters are about $5 with free shipping world-wide for those who need one or want to be really eco-friendly and use the few surviving VGA devices at the expense of higher electrical consumption)
g) easy implementation in FPGA (there are fully functional existing implementations with just about 2000 LUTs)
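(To make item b less abstract, here is a rough back-of-envelope sketch - purely illustrative, ignoring blanking intervals and DSI protocol overhead, so a real link needs extra headroom on top:)

    # rough raw video bandwidth estimate; blanking and protocol overhead
    # are deliberately ignored, so a real link needs extra headroom
    def raw_bandwidth(width, height, fps, bpp=24, lanes=4):
        pixel_clock_hz = width * height * fps      # active pixels only
        total_bps = pixel_clock_hz * bpp           # raw colour payload
        return pixel_clock_hz, total_bps / lanes   # clock, per-lane bit rate

    for mode in [(1920, 1080, 60), (1920, 1200, 120), (4096, 2160, 30)]:
        clk, lane = raw_bandwidth(*mode)
        print(mode, "pixel clock %.1f MHz," % (clk / 1e6),
              "%.2f Gbit/s per lane" % (lane / 1e9))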
But how to cope with this when there is already an EOMA card in manufacturing?
Let me boldly demonstrate some "thinking outside the box" (kicked off by Luke's email from Sat, 13 Jul 2013 17:22:41 +0100 and supported by Luke's "take advantage of the MIPI / eDP" statement from Thu, 10 Apr 2014 10:48:46 +0100).
Can we provide both interfaces (RGB/TTL + MIPI DSI) on the same pins while having a HW way to choose from these?
Yes we can! EOMA already allows for several types of PC Cards (originally called PCMCIA) - at least two: a thinner one (Type I - 3.3 mm) and a thicker one (Type II - 5 mm). Let's declare the thicker cards to be high-end and offer only MIPI DSI, while the thinner cards are low-end with just RGB/TTL. Problem solved!
The "high-end" specification shall then also be extended allowing higher thermal dissipation (5W is too low - maybe 15W would be OK as it's still easy to cool passively) etc. to accommodate "high-end" (actually mainstream, but in this context it's high-end) requirements.
Speaking about thermal dissipation, I'm not sure that, in the case of the high-end card type, this limit should be a fixed one. I would probably prefer the 15W value as a strong recommendation instead of a firm requirement - while always (regardless of whether it's below 15W or not) requiring the maximum dissipation of the particular card under heavy load to be quoted readably and fully visibly to the end user (ideally on the outermost coating). Why? Because there will be a manufacturer offering a special super hyper mega powerful card implementing a "turbo" mechanical switch that automatically overclocks the by-default heavily underclocked 8-core beast.
This could then be finally called useful and sustainable for 10 years.
Recap of solved issues:
* No more copyleft-like/religious constraints ("we require you to only use libre and eco-friendly stuff, not the new fast non-libre SoCs with non-refurbished eco-unfriendly displays"), but rather a permissive approach.
* No more issues with non-tiny display panels.
* No issues with users putting the wrong card into the slot.
* No more issues with dissipation.
* No more issues with the libre world being old, slow, ugly-looking, etc. (as I'm often told, because it unfortunately currently holds).
* No more issues with decision makers, who will finally get the freedom of choice among several cards (keep in mind what marketing/portfolio research says - a set of similar but still diversified products/services sells significantly better than one product/service, regardless of whether it's high-quality or not).
* Useful for the upcoming hybrid phone ( http://rhombus-tech.net/community_ideas/hybrid_phone/ ) - lower number of pins, lower consumption.
Enjoy the new life in China with your family, and good luck discovering how the world's real HW production and market works,
-- Jan
On 12/13/16, dumblob dumblob@gmail.com wrote:
Hi Luke,
sorry for the few months' delay - I was stuck with moving, family, job applications, etc.
I would like to bring up again the painful topic of a long-sustainable graphical interface in the EOMA68 specification ( http://rhombus-tech.net/whitepapers/ecocomputing_07sep2015/ ), as the RGB/TTL solution is such an utterly bad choice.
nope. been over this many many times.
RGB/TTL is way too slow
no it's not. most mid-end SoCs can do up to 2048x2048 @ 30fps, and can certainly do 1920x1080p60 over RGB/TTL.
it's nonsense to say that it's too slow.
and highly limited (on its own, but even more so within the EOMA specification):
no it's not. the 3.3mm card height is reserved for 1920x1080p60.
c) maximum color depth 18 bits (3x6)
that's because if you look at most mid-end LCDs they only do 18 bpp anyway. it also means that the extra 6 pins which were saved could be dedicated to SPI and two more EINTs, which turned out to be crucial.
most people's eyes are incapable of telling the difference. really.
f) easy (not requiring a chip) conversion to VGA output; conversion chips to all (!) other interfaces (which are modern, serial, and ubiquitous) are needed (and are more expensive than serial->RGB/TTL ones because of very low purchaser interest)
yep. very easy. or, for the low-cost products, it's not even needed at all: 320x240, 480x320, 640x480, 800x480 and 800x600 LCDs are all RGB/TTL.
go look it up on panelook.com.
g) easy implementation in FPGA (a few hundred LUTs)
Yet EOMA does not even allow adding any better display interface, because there are not enough "free" pins for any modern serial interface (MIPI DSI, eDP, HDMI, ...). This effectively totally (!) disallows manufacturers of EOMA cards from adding SoC <-> MIPI/eDP/HDMI circuitry on an EOMA card
nonsense. they can always add a MIPI-to-RGB converter or an eDP-to-RGB converter IC on-board the Card.
to provide at least mainstream (!) resolution output with 24 bit colors for internal displays (not to mention 2017, when it will supposedly be 4K at 30 fps with 24 bit colors).
the processors you're referring to are so power-hungry that they require special cooling and/or fans. in other words, they're *well* beyond the thermal design capability of EOMA68 anyway.
also, the attention to design when dealing with 4k displays is EXTREME. not even rockchip's own EVB for the RK3288 is capable of driving a 4k HDMI display... because there's too much noise, degrading the signal.
the data rates you're talking about here are ... well, let's work it out. it's 1920*2*1080*2*60*8 = 3981312000. that's close to FOUR gigahertz.
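(here's that back-of-envelope spelled out, so nobody has to trust my mental arithmetic - it counts raw bits per colour channel per second, and TMDS 8b/10b encoding on HDMI would add roughly another 25% on top:)

    # raw per-channel bit rate: active pixels * fps * 8 bits per colour channel
    def per_channel_rate(width, height, fps, bits=8):
        return width * height * fps * bits

    print(per_channel_rate(1920, 1080, 60))   # 995,328,000   -> roughly 1 ghz
    print(per_channel_rate(3840, 2160, 60))   # 3,981,312,000 -> close to 4 ghz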
to create boards where the data rates are that high requires extreme special care and attention to layout. it's radio frequencies, basically.
now, here's the thing: at this early stage of the project, i can cope with designing boards that are up to a maximum frequency of... 100mhz for RGB/TTL, and even 1ghz for HDMI 1.4 (1920x1080x60*8), by following some strict design rules that have taken me a long while to learn...
... but 3.7 ghz? fuck no. you must be joking. with the upcoming RK3288 prototype of course i'm going to try it out, but if it actually works i'm literally going to end up laughing on the floor, manically and hysterically at the complete fluke.
bear in mind that for a standard to be successful its interface capability must be MANDATORY. if you start FORCING the standard to support 3.7 ghz data transfer rates, you just raised the bar - the cost of development - to a whooole new level.
EOMA68 is designed to be *affordably* implementable (by even someone like me who is self-taught).
In other words, this one fact alone degrades the whole EOMA
repeat after me: there is NO SUCH THING as an EOMA standard. there is only a FAMILY of EOMA standards, of which EOMA68 was chosen as the first "easily and affordably implementable" one.
to the category of yet another toy (as every other libre general-purpose user computer HW has failed up until now). By the way, I had to publicly admit this during my talk at the OpenAlt 2016 conference (https://openalt.cz/2016 , there is a full recording).
After reading through all the important emails from Arm-netbook beginning in 2010 (yeah, 571509 lines of text),
ye gods man!
many of your posts on different web servers, and watching nearly all your videos, I did some research on display interfaces. A few quick facts based on my findings follow (yes, I focus on the lower-mainstream and mainstream "fat" embedded and mobile segment, not on the total low-end, because there we have zillions of existing PCBs all offering basically the same HW interfaces - some of them even libre).
yeahyeah. the 72mhz ECs. none of them are capable of driving LCDs - the framebuffer alone overwhelms their capacity both for internal memory, internal bus rate and external data transfer rate.
many of the low-cost ECs actually use SPI-based or MCU-based (8080) LCDs (with something like an HX8357D controller IC) because these have their own internal framebuffer RAM (on-board)... quite cool, basically. it's like a tiny embedded version of the x86 IBM PC with its AT Bus....
...but i digress.
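(a quick back-of-envelope on why those little ECs can't drive a bare RGB/TTL panel - the framebuffer alone dwarfs the few tens of KiB of on-chip SRAM a typical 72mhz EC has, which is exactly why the HX8357D-style controllers keep the framebuffer on the LCD side:)

    # framebuffer size for a bare (controller-less) RGB/TTL panel
    def framebuffer_kib(width, height, bits_per_pixel):
        return width * height * bits_per_pixel / 8 / 1024

    for w, h in [(320, 240), (480, 320), (800, 480)]:
        print((w, h), "16bpp framebuffer: %.0f KiB" % framebuffer_kib(w, h, 16))
    # 320x240 alone is already 150 KiB - several times the SRAM budget,
    # before even thinking about the sustained refresh bandwidth.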
From panelook.com (size >= 7.0", px density >= 160 PPI):
- LVDS: 382 panels in MP YEAR 2016 (2015: 53) => ratio (the higher the better) 53/382 = 0.138
- MIPI DSI: 239 panels in MP YEAR 2016 (2015: 50) => ratio 50/239 = 0.209
- eDP: 233 panels in MP YEAR 2016 (2015: 66) => ratio 66/233 = 0.283
- RGB/TTL: 15 panels in MP YEAR 2016 (2015: 2) => ratio 2/15 = 0.133
great: you _did_ look it up! errr... except you didn't divide them up by resolution. that throws your enquiries off completely. basically, RGB/TTL is only common for 320x240 up to 800x600 (which is the low-cost end). above 800x600 the parallel data bus skew is too much... which is why these differential-pair serial buses were created in the first place.
(the ratio shows how much a given interface is on the rise)
Video interfaces from data sheets of few tens of more performant (i.e. having more computing power) mobile SoCs (no AMDs, no Intels) in 2015 & 2016:
- LVDS: nearly nowhere
- MIPI DSI: everywhere (!)
- eDP: nearly nowhere (in contrast to big chips like Intel i5/i6/i7, where eDP is largely prevalent)
- RGB/TTL: nearly everywhere (but especially on smaller SoCs)
yes. so, as predicted, the decision to go with RGB/TTL makes sense... and still stands.
lower-cost SoC manufacturers don't like spending the money on royalty licenses for MIPI or eDP, plus it makes absolutely no sense to use MIPI or eDP for 320x240, 480x320 or 640x480 LCDs.
plus, the cost of a converter IC is a far greater ratio of the BOM at the lower-cost end than it is at the higher-end... and RGB/TTL is the de-facto interface of choice at the lower end.
... you _did_ read the "interface selection" section in the white paper, right?
bridges for its SoCs. Based on all that I'm confident that in the upcoming 10 years the SoC market will use MIPI DSI everywhere as the main standard, and eDP for the few biggest chips.
... which we'll get to.... *with another standard*... with the 3.3mm card height variant being an interim stand-in to cover the intervening time... where profits from sales of designs based around the *CURRENT* standard will help fund the next one.
... you didn't think i was going to stop at just the one standard, did you?
Can we provide both interfaces (RGB/TTL + MIPI DSI) on the same pins while having a HW way to choose from these?
NO.
Yes we can!
NO. absolutely not.
EOMA already allows for several types of PC Cards (originally called PCMCIA) - at least two: a thinner one (Type I - 3.3 mm) and a thicker one (Type II - 5 mm). Let's declare the thicker cards to be high-end and offer only MIPI DSI, while the thinner cards are low-end with just RGB/TTL. Problem solved!
massive problem *created* which DESTROYS the standard even before it's implemented.
allow me to go through it.
(updated: i spotted another problem, which is that it would be impossible to prevent the 3.3mm "low end" cards from being inserted into 5.0mm "high end" slots... potentially electrically damaging both Card and Housing. on this point alone what you're suggesting is a non-starter. even if you tried to make them interoperable it would have to be 3.3mm which was the "high" end and 5.0mm the low end, because 5.0mm low-end Cards would NOT PHYSICALLY FIT into a 3.3mm slot.. so now we proceed to explain why *that* idea does not work, either... ).
at first glance, it seems like a great idea: add two different video interfaces on the same pins. so.
which pins do you use for which functionality?
* pin 1: RGB Red 0 *AND* MIPI lane 0 tx-
* pin 2: RGB Red 1 *AND* MIPI lane 0 tx+
.... ....
now let's design the housing boards around that. let's make one which has RGB/TTL, and the other which has MIPI.
so, we've got hard-wired pins for RGB on say a 7in tablet
and we've got hard-wired pins for MIPI on say a 14in laptop.
great!
ok, so now let's select some SoCs.
right. first SoC we pick... we find that it has dedicated pins for MIPI, and dedicated pins for RGB/TTL. so we are forced to find an ultra-high-speed multiplexer IC. problem "solved".
second SoC we pick... we find that it has shared pins (multiplexed) for MIPI and RGB/TTL. they DO NOT MATCH OUR ARBITRARILY CHOSEN ARRANGEMENT. now we try to use the same high-speed multiplexer IC... only to find that it requires SEPARATE inputs... and... err... now we have to wire the same pins from the SoC to two sets of pins on the multiplexer IC... now we have signal-bounce to deal with (dual path)... and double-impedance-matching.... and it's getting alarmingly complex.
third SoC we pick... we find that it has shared pins which are COMPLETELY DIFFERENT FROM THE SECOND SoC.
fourth SoC we pick... has MIPI but does not even have RGB/TTL.
fifth SoC we pick... has RGB/TTL but does not have MIPI.
in ALL of these instances, it's simply flat-out impossible to do the board layout... why? because there's simply not enough room to fit the converter IC onto the board. have you _seen_ how tiny the EOMA68 PCBs are? 78.1 x 47.3mm with a height limit of 1.9mm on TOP and 1.6mm on BOTTOM, that's with a 1.2mm PCB. i don't even want to know how much 0.8mm PCBs cost.
can you see how this quickly gets into absolute hell on earth, with costs rapidly escalating?
what converter IC do you pick? does one even exist? is it common enough so that it has alternative competition so that we don't end up with the entire standard being critically dependent on ONE company's IC and them going out of business?
can you also see how it would create total confusion for end-users, thus DESTROYING all and any possibility of being a successful and simple mass-volume standard?
can you imagine the conversations of the sales people, "oh i'm sorry, you bought that 7in tablet which only does RGB/TTL? i'm sorry, you can't use the more modern MIPI Computer Card, you have to THROW AWAY that 7in tablet housing".
it's total nonsense... and in direct violation of the purpose of the standard: to reduce e-waste.
so... NO. not going to happen.
what *is* going to have to happen however is that a new standard gets created [and it will either use MIPI or it will use eDP... or it will have both on separate pins]. this would be much more suitable for the incoming mid-end SoCs (intel and amd are going to have to take a back seat until they sort out their backdoor co-processors and proprietary hardware drivers).
but, it is necessary first to find a suitable connector. funnily enough there's a guy who has located games cartridge connectors... i forget which one... NES? Game Boy? which has 100 pins....
http://old.pinouts.ru/Game/CartridgeGameBoy_pinout.shtml
nope, not that one, it's only 32...
https://wiki.nesdev.com/w/index.php/Cartridge_connector#Pinout_of_72-pin_NES...
NES - 72, ah HA! err... except they're enormous:
https://en.wikipedia.org/wiki/Nintendo_Entertainment_System_Game_Pak#60-pin_...
13.3 x 12 x 2cm or something mad.
so, back to the drawing board on that one.
basically it's not as simple as it sounds. just add an extra interface, right? no problem, right? wrong. there's *really* good reasons why the functions on EOMA68 only have GPIO multiplexing, and it's because GPIO is the lowest common denominator function that can reasonably be expected of any arbitrarily-chosen pin.
we can't even multiplex say SPI and UART onto the same pins, because there's absolutely no guarantee that any arbitrarily-chosen SoC *has* SPI and UART multiplexed onto *EXACTLY* the same arbitrarily-chosen pins [for a standard]. that leaves the burden on *software* to do bit-banging of either UART or SPI... and god help you if the SoC doesn't support EINT capability on the chosen pins (because the others had to be used for other purposes), and you have to do high-frequency CPU-intensive polling.
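(to make the "burden on software" bit concrete, here's a minimal bit-banged SPI byte transfer - gpio_set() / gpio_get() are just hypothetical stand-ins for whatever GPIO API the platform exposes. note that every single bit costs several GPIO accesses, which is exactly the CPU-intensive busy-work you'd rather not do:)

    # minimal mode-0 SPI master, bit-banged over plain GPIO (sketch only)
    def gpio_set(pin, value): pass   # placeholder: drive a pin high/low
    def gpio_get(pin): return 0      # placeholder: sample a pin

    SCK, MOSI, MISO = 1, 2, 3        # arbitrary pin numbers, for illustration

    def spi_transfer_byte(tx):
        rx = 0
        for bit in range(7, -1, -1):
            gpio_set(MOSI, (tx >> bit) & 1)  # present the data bit, MSB first
            gpio_set(SCK, 1)                 # clock it out
            rx = (rx << 1) | gpio_get(MISO)  # sample the slave's reply
            gpio_set(SCK, 0)
        return rx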
and the moment you start saying "ohh it's okay to have one function but not the other on any given card" you've just fucked the entire standard in the "salesman" scenario above. nobody would EVER trust the standard to be "eco-conscious" if you told them that they were forced to throw out perfectly good Housings.
i appreciate your thoughts, but there's far more to take into consideration here than whether a particular interface is "up-to-date". at least six completely different inter-related criteria had to be satisfied, with *none* of them being "open for negotiation".
Speaking about thermal dissipation, I'm not sure that, in the case of the high-end card type, this limit should be a fixed one.
again it comes down to what the boards (chassis manufacturers) can cope with. remember, whatever is picked *has* to be covered by *ALL* chassis manufacturers.
so if you picked up to 15W, *ALL* chassis manufacturers *MUST* provide up to a maximum of 15W...
... or it is necessary to put in the EOMA68 I2C EEPROM, "this chassis can provide up to N watts power if needed, and has the thermal dissipation capability to deal with it".
it gets complicated, quite quickly, but would be doable.
hmmm, and the nice thing is, it's an upwards-compatible expansion of the EOMA68 standard which doesn't interfere with the existing release.
i like it.
l.
On Tue, Dec 13, 2016 at 9:51 AM, dumblob dumblob@gmail.com wrote:
Can we provide both interfaces (RGB/TTL + MIPI DSI) on the same pins while having a HW way to choose from these?
Yes we can! EOMA already allows for several types of PC Cards (originally called PCMCIA) - at least two: a thinner one (Type I - 3.3 mm) and a thicker one (Type II - 5 mm). Let's declare the thicker cards to be high-end and offer only MIPI DSI, while the thinner cards are low-end with just RGB/TTL. Problem solved!
To be specific, EOMA68 includes Type I, Type II and Type III already. There are differences in permitted power, and RGB resolution. These differences run in opposite directions, in order to make sure that any combination that fits will work.
WRT resolution, Type I is the high-end, with card-minimum/housing-maximum 1920x1080, because a Type I housing will accept _only_ Type I cards. Thus anything with a 1920x1080 screen has a Type I slot, and any card that physically fits will drive that display.
Type II and Type III both have a card-minimum/housing-maximum 1366x768 -- but of course they'll accept a high-end Type I card, it'll just run contentedly at less-than-max resolution.
(AIUI, a Type II housing can have a 1920x1080 display, as long as that display will accept and upscale a 1366x768 signal. Then either a Type I card or a Type II card exceeding the minimum specs can output full 1920x1080, but it will work with even the minimum Type II card's 1366x768.)
The point is, any card _must_ work in any housing it physically fits in. So if you want Type I to support MIPI, that's great -- but that Type I still fits in a Type II, so it must also be able to output RGB/TTL on the same pins, and there must be a mechanism to autonegotiate this depending on the housing. (Or if, as in your proposal, Type II has MIPI, then a Type II housing must accept both MIPI and the RGB/TTL signals from a Type I card -- again, with autonegotiation.)
I don't think adding autonegotiation here is particularly hard (basically just defining a flag in the I2C EEPROM), but the need to support both interfaces negates some of the benefits of MIPI, and adds complexity to all cards supporting MIPI, and I'm not sure that complexity (multiplexer, and in some cases, MIPI->RGB conversion IC) can actually fit on a crowded EOMA68 card.
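(Roughly what I have in mind, as a sketch only -- the EEPROM offset and flag values below are invented for illustration, they're not part of any spec:)

    # Card-side logic: read a hypothetical "video interface" flag from the
    # housing's I2C EEPROM and configure the matching output on the shared pins.
    VIDEO_IF_OFFSET = 0x10                   # illustrative offset
    IF_RGB_TTL, IF_MIPI_DSI = 0x00, 0x01     # illustrative values

    def select_video_interface(read_eeprom_byte):
        flag = read_eeprom_byte(VIDEO_IF_OFFSET)
        if flag == IF_MIPI_DSI:
            return "mipi-dsi"   # housing wants the shared pins driven as DSI lanes
        return "rgb-ttl"        # safe default: every card must support this anyway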
The "high-end" specification shall then also be extended allowing higher thermal dissipation (5W is too low - maybe 15W would be OK as it's still easy to cool passively) etc. to accommodate "high-end" (actually mainstream, but in this context it's high-end) requirements.
Type I and Type II are currently limited to 5W, while Type III is limited to 10W. Again, these have to be in this order, because any card must work in any housing it physically fits. So a 5W card can be powered by a 10W power supply, but not the other way around.
I think 10W is a hardware limit of the connector (4 pins at 0.5A per pin); but even if I'm wrong on that, keep in mind that _every_ Type III (and in your proposal, Type II) housing _must_ provide the maximum allowed power. So when you say 15W, you're saying that _every_ housing must have a 3A supply, even if 90% of cards only need 2A.
(Some of this could be extended with autonegotiation -- e.g. all Type I/II cards start in 5W maximum, but _if_ the housing permits it, can switch to an optional high-performance 10W mode. It's tricky, because there's both electrical and thermal considerations involved, but I think it may be worthwhile.)
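(As a sketch of what that card-side negotiation could look like -- the field name is purely illustrative; the 10W ceiling comes from the connector itself, 4 power pins x 0.5A x 5V:)

    # Card starts in the universally-safe 5W envelope and only raises its budget
    # if the housing's EEPROM explicitly advertises more headroom.
    BASELINE_WATTS  = 5.0
    CONNECTOR_LIMIT = 10.0    # 4 power pins x 0.5 A x 5 V

    def negotiate_power_budget(housing_max_watts):
        if housing_max_watts is None:          # old or unlabelled housing
            return BASELINE_WATTS
        return min(CONNECTOR_LIMIT, max(BASELINE_WATTS, housing_max_watts))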
Speaking about thermal dissipation, I'm not sure that, in the case of the high-end card type, this limit should be a fixed one.
Obviously, technically skilled people will overclock and overwatt specific EOMA68 cards in housings that they know can supply more power. But the minute you change this behavior from "hackers breaking the rules" to "there are no rules", you've made it so that the whole promise of EOMA ("Just plug it in; it will work") can no longer be kept.
I would probably prefer the 15W value as a strong recommendation instead of a firm requirement - while always (regardless of whether it's below 15W or not) requiring the maximum dissipation of the particular card under heavy load to be quoted readably and fully visibly to the end user (ideally on the outermost coating).
I personally would be fine with that -- in fact, I would be fine with a lot of things that are defined in the EOMA68 standard being just a matter of labeling, and leave it on the user to choose compatible parts.
But Luke's not designing this standard for me; he's designing it for people who would get confused and buy a 25W card and a 10W-max tablet housing, and not understand why they don't work. If you're targeting those people (which you have to, to get volume), you have to make it work for them.
Benson Mitchell
On 12/14/16, Benson Mitchell benson.mitchell+arm-netbook@gmail.com wrote:
On Tue, Dec 13, 2016 at 9:51 AM, dumblob dumblob@gmail.com wrote:
Can we provide both interfaces (RGB/TTL + MIPI DSI) on the same pins while having a HW way to choose from these?
Yes we can! EOMA already allows for several types of PC Cards (originally called PCMCIA) - at least two: a thinner one (Type I - 3.3 mm) and a thicker one (Type II - 5 mm). Let's declare the thicker cards to be high-end and offer only MIPI DSI, while the thinner cards are low-end with just RGB/TTL. Problem solved!
To be specific, EOMA68 includes Type I, Type II and Type III already. There are differences in permitted power, and RGB resolution. These differences run in opposite directions, in order to make sure that any combination that fits will work.
WRT resolution, Type I is the high-end, with card-minimum/housing-maximum 1920x1080, because a Type I housing will accept _only_ Type I cards. Thus anything with a 1920x1080 screen has a Type I slot, and any card that physically fits will drive that display.
nooo, 5.0mm is the 1366x768 because the 5.0mm needs to be prevented and prohibited from being physically inserted into incompatible 3.3mm (1920x1080) slots.
(AIUI, a Type II housing can have a 1920x1080 display, as long as that display will accept and upscale a 1366x768 signal.
NO. absolutely not. that is a completely unacceptable technical burden on the manufacturers of the housings, forcing them to have additional circuitry which may or may not be used.... and may or may not be actually available on the open market.... and may actually end up being far more costly than the processor utilised in the Card.
any kind of resolution scaling at these framerates and buffer sizes actually needs a full processor - with several hundred megabytes of DDR2 / DDR3 RAM - to perform the conversion.
so no - absolutely not. you connect the LCD to the EOMA68 bus on the Housing, the LCD's resolution is fixed as decided by the manufacturer of the Housing, and that's the end of it.
Then either a Type I card or a Type II card exceeding the minimum specs can output full 1920x1080, but it will work with even the minimum Type II card's 1366x768.)
The point is, any card _must_ work in any housing it physically fits in. So if you want Type I to support MIPI, that's great -- but that Type I still fits in a Type II, so it must also be able to output RGB/TTL on the same pins,
... correct.... but worse than that it must be on the *exact* dual-function pins as arbitrarily specified in the proposed [completely not-thought-through] standard.
and there must be a mechanism to autonegotiate this depending on the housing.
correct.
I don't think adding autonegotiation here is particularly hard (basically just defining a flag in the I2C EEPROM),
correct.
but the need to support both interfaces negates some of the benefits of MIPI,
not really relevant
and adds complexity to all cards supporting MIPI,
"insane, hard to implement and with ICs that probably don't actually exist" complexity
and I'm not sure that complexity (multiplexer, and in some cases, MIPI->RGB conversion IC) can actually fit on a crowded EOMA68 card.
correct.
The "high-end" specification shall then also be extended allowing higher thermal dissipation (5W is too low - maybe 15W would be OK as it's still easy to cool passively) etc. to accommodate "high-end" (actually mainstream, but in this context it's high-end) requirements.
Type I and Type II are currently limited to 5W, while Type III is limited to 10W. Again, these have to be in this order, because any card must work in any housing it physically fits. So a 5W card can be powered by a 10W power supply, but not the other way around.
I think 10W is a hardware limit of the connector (4 pins at 0.5A per pin);
correct. ah, i'd forgotten about that. yeah you do not want to be overheating the pins. so, 10W limit it is.
Speaking about thermal dissipation, I'm not sure that, in the case of the high-end card type, this limit should be a fixed one.
Obviously, technically skilled people will overclock and overwatt specific EOMA68 cards in housings that they know can supply more power.
outside of the standard.... probably.
But the minute you change this behavior from "hackers breaking the rules" to "there are no rules", you've made it so that the whole promise of EOMA ("Just plug it in; it will work") can no longer be kept.
correct. and that's why these things have to be thought through very, very carefully, and simply not permitted - outright banned - if there is even the slightest possibility of confusion or harm.
remember the goal is 100 million units and above per year. even the SLIGHTEST chance of confusion could result in millions of units returned, resulting in a catastrophic loss of confidence in the standard.
there is NO WAY the standard can be "sacrificed" just for the benefit of some arbitrary "nice-to-have" decision or short-term profit.
I personally would be fine with that -- in fact, I would be fine with a lot of things that are defined in the EOMA68 standard being just a matter of labeling, and leave it on the user to choose compatible parts.
as long as the compatibility is "everything, always works [even if it's a bit slower]" i don't care.
the MOMENT that becomes "it MIGHT work, but it might not" then the standard's fucked and six years (and counting) have been utterly wasted, irrevocably destroyed in an instant.
But Luke's not designing this standard for me; he's designing it for people who would get confused and buy a 25W card and a 10W-max tablet housing, and not understand why they don't work.
absolutely correct.
If you're targeting those people (which you have to, to get volume), you have to make it work for them.
absolutely correct.
this isn't a "techie standard wanna plug in yer favurit memry upgrad just undo da scruz n read duh instrucshunns on da in'ur'ne' "
just as that pc journalist said a couple months back (about gaming pcs being too hard), it's for people with big fat fingers who are afraid to drop the screwdriver and damage things.
one button.
press it.
card comes out.
put new one in.
it will work.
that simple.
and it stays that simple.
this is not negotiable.
l.
On Tue, Dec 13, 2016 at 8:55 PM, Luke Kenneth Casson Leighton lkcl@lkcl.net wrote:
On 12/14/16, Benson Mitchell benson.mitchell+arm-netbook@gmail.com wrote:
On Tue, Dec 13, 2016 at 9:51 AM, dumblob dumblob@gmail.com wrote:
Can we provide both interfaces (RGB/TTL + MIPI DSI) on the same pins while having a HW way to choose from these?
Yes we can! EOMA already allows for several types of PC Cards (originally called PCMCIA) - at least two: a thinner one (Type I - 3.3 mm) and a thicker one (Type II - 5 mm). Let's declare the thicker cards to be high-end and offer only MIPI DSI, while the thinner cards are low-end with just RGB/TTL. Problem solved!
To be specific, EOMA68 includes Type I, Type II and Type III already. There are differences in permitted power, and RGB resolution. These differences run in opposite directions, in order to make sure that any combination that fits will work.
WRT resolution, Type I is the high-end, with card-minimum/housing-maximum 1920x1080, because a Type I housing will accept _only_ Type I cards. Thus anything with a 1920x1080 screen has a Type I slot, and any card that physically fits will drive that display.
nooo, 5.0mm is the 1366x768 because the 5.0mm needs to be prevented and prohibited from being physically inserted into incompatible 3.3mm (1920x1080) slots.
I think we're failing to communicate here. Type I is 3.3mm, Type II is 5.0mm. As far as I can tell, we're saying the same thing so far.
(AIUI, a Type II housing can have a 1920x1080 display, as long as that display will accept and upscale a 1366x768 signal.
NO. absolutely not. that is a completely unacceptable technical burden on the manufacturers of the housings, forcing them to have additional circuitry which may or may not be used.... and may or may not be actually available on the open market.... and may actually end up being far more costly than the processor utilised in the Card.
I'm _not_ saying to require it, I'm saying I thought a housing _can_ have it. There's no "forcing" them to, and I get that it makes no economic sense in almost all cases. But _if_ a particular manufacturer wants to make a particular housing with a high-resolution display, built-in upscaler, and Type II (5.0mm) slot, that's not a problem, is it?
any kind of resolution scaling at these framerates and buffer sizes actually needs a full processor - with several hundred megabytes of DDR2 / DDR3 RAM - to perform the conversion.
Or ASICs, such as are in most ordinary LCD monitors and TVs.
The specific application I was thinking of is "Smart TV": a 1080p TV (or projector) with an EOMA68 slot in the side of it. It should already be able to handle 1080p, 720p, and other resolutions from the DVI/VGA/etc. inputs, so for little additional cost, it could handle both 1366x768 and 1920x1080, from either Type II or Type I cards respectively.
The options I see are:
A Support _only_ 1920x1080 from EOMA68, and have a Type I 3.3mm slot; anyone with a spare Type II card can't run a lower resolution, but has to buy a new card.
B Support _only_ 1366x768 from EOMA68, and have a Type II 5.0mm slot; anyone with a spare Type I card can use it, but has to live with the lower resolution and upscaling artifacts, even though their card can do better.
C Support 1366x768 and 1920x1080, and have a Type II 5.0mm slot; anyone with a spare Type II card can use it with upscaling, while anyone with a Type I card can use it at full resolution.
Are you saying the _only_ ways for such a "Smart TV" to be EOMA68 compliant are options A and B, and there's no way to do option C?
If so, can you help me understand why?
I know EOMA68 actually uses device-tree files rather than DDC like a VGA or DVI connection, but it just seems like it should be easy to do the conceptual equivalent of DDC -- the monitor (EOMA68: housing) sends a list of modes, the PC (EOMA68: CPU card) picks the largest one it can handle, and you're done. For single-resolution housings, the list has one entry. If it can't/doesn't work this way... why not?
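(In code, the conceptual handshake I'm describing is nothing more exotic than this -- a rough sketch, purely illustrative:)

    # Housing advertises the modes it supports; the card picks the largest one
    # it can actually drive. Single-resolution housings just advertise one mode.
    def pick_mode(housing_modes, card_max):
        usable = [m for m in housing_modes
                  if m[0] <= card_max[0] and m[1] <= card_max[1]]
        return max(usable, key=lambda m: m[0] * m[1]) if usable else None

    smart_tv = [(1366, 768), (1920, 1080)]             # option C above
    print(pick_mode(smart_tv, card_max=(1920, 1080)))  # -> (1920, 1080)
    print(pick_mode(smart_tv, card_max=(1366, 768)))   # -> (1366, 768)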
Benson Mitchell
On 12/14/16, Benson Mitchell benson.mitchell+arm-netbook@gmail.com wrote:
On Tue, Dec 13, 2016 at 8:55 PM, Luke Kenneth Casson Leighton lkcl@lkcl.net wrote:
nooo, 5.0mm is the 1366x768 because the 5.0mm needs to be prevented and prohibited from being physically inserted into incompatible 3.3mm (1920x1080) slots.
I think we're failing to communicate here. Type I is 3.3mm, Type II is 5.0mm. As far as I can tell, we're saying the same thing so far.
wheels turn a bit slowly in my head at the moment... :) *click* yes you're right.
I'm _not_ saying to require it, I'm saying I thought a housing _can_ have it. There's no "forcing" them to, and I get that it makes no economic sense in almost all cases. But _if_ a particular manufacturer wants to make a particular housing with a high-resolution display, built-in upscaler, and Type II (5.0mm) slot, that's not a problem, is it?
ah.... ah.... oo! as long as the housing was absolutely guaranteed to work at 1366x768 *and* 1920x1080... you're absolutely right, it would be fine!
any kind of resolution scaling at these framerates and buffer sizes actually needs a full processor - with several hundred megabytes of DDR2 / DDR3 RAM - to perform the conversion.
Or ASICs, such as are in most ordinary LCD monitors and TVs.
these days most of them are actually custom full processors (you remember that vulnerability report recently about LCDs being hackable over their HDMI interface?) the framebuffer is so large - 1920x1080x8x4 = 64mb (!!)
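(spelling that calculation out - 8 bits per channel, 4 channels per pixel:)

    # one full 1080p frame at 8 bits x 4 channels (e.g. RGB plus alpha/padding)
    bits = 1920 * 1080 * 8 * 4                             # 66,355,200 bits, ~63 Mibit
    print(bits, "bits = %.1f MByte" % (bits / 8 / 2**20))  # ~7.9 MByte per frame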
The specific application I was thinking of is "Smart TV": a 1080p TV (or projector) with an EOMA68 slot in the side of it. It should already be able to handle 1080p, 720p, and other resolutions from the DVI/VGA/etc. inputs, so for little additional cost, it could handle both 1366x768 and 1920x1080, from either Type II or Type I cards respectively.
The options I see are:
... remember there's two cases (at the moment) where up to 1920x1080 is supported by 3.3mm cards and housings, and up to 1366x768 is supported by 5mm cards and housings. what you're proposing below is incomplete but i get the general idea
A Support _only_ 1920x1080 from EOMA68, and have a Type I 3.3mm slot; anyone with a spare Type II card can't run a lower resolution, but has to buy a new card.
baaad.
B Support _only_ 1366x768 from EOMA68, and have a Type II 5.0mm slot; anyone with a spare Type I card can use it, but has to live with the lower resolution and upscaling artifacts, even though their card can do better.
blegh.
C Support 1366x768 and 1920x1080, and have a Type II 5.0mm slot; anyone with a spare Type II card can use it with upscaling, while anyone with a Type I card can use it at full resolution.
better.
Are you saying the _only_ ways for such a "Smart TV" to be EOMA68 compliant are options A and B, and there's no way to do option C?
no, you've come up with a really good suggestion (thanks to the OP for the question and the discussion opportunity).
it's a hell of an extra technical cost - to have a full custom ASIC/processor and some DDR2/3 RAM to do the upscaling.... but if that's what it takes - and it's not unreasonable - then that's what it takes.
ha.
okay.
so we have, in the resolution department:
* type II 5.0 mm cards may expect up to 1366x768 as guaranteed and given "native" resolution(s)
* type I 3.3mm cards may expect up to 1920x1080 as guaranteed and given "native" resolution(s)
* type II 5.0mm cards may expect housings to provide "upscaling" support for resolutions beyond 1366x768, a notification of the full list of available resolutions in the I2C EEPROM (in the form of DDC data) and, hmmm.... we'll need some form of auto-detection or standards-compliant means and method of communicating the desired resolution. hmmm....
and in the power-provision department:
* all housings MUST supply up to 5.0 watts
* all housings MAY indicate in the I2C EEPROM that they have the capability to extract sufficient heat away from the card in order to support up to 10.0 watts. that'll probably involve a fan in the housing, pointing directly at the Card casework.
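(just to make the shape of it concrete, a sketch of what a housing might advertise and how a card would use it - every field name here is made up for illustration, nothing below is spec'd yet:)

    # hypothetical housing descriptor as it might be stored in the I2C EEPROM
    housing = {
        "native_mode":     (1920, 1080),                  # the panel's real resolution
        "accepted_modes":  [(1366, 768), (1920, 1080)],   # what the upscaler will take
        "max_power_watts": 10.0,                          # housing can supply/cool this
    }

    def card_plan(card_max_mode, card_wanted_watts, housing):
        fits = [m for m in housing["accepted_modes"]
                if m[0] <= card_max_mode[0] and m[1] <= card_max_mode[1]]
        mode = max(fits, key=lambda m: m[0] * m[1]) if fits else None
        return mode, min(card_wanted_watts, housing["max_power_watts"])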
oo this is actually quite exciting.
l.
Glad we're on the same page.
On Wed, Dec 14, 2016 at 5:36 AM, Luke Kenneth Casson Leighton lkcl@lkcl.net wrote:
- type II 5.0mm cards may expect housings to provide "upscaling"
support for resolutions beyond 1366x768, a notification of the full list of available resolutions in the I2C EEPROM (in the form of DDC data) and, hmmm.... we'll need some form of auto-detection or standards-compliant means and method of communicating the desired resolution. hmmm....
I finally found the documentation for display-timings in devicetree. As I suspected, it already supports an arbitrary number of modes, and specifying one of them as native/preferred.
https://git.kernel.org/cgit/linux/kernel/git/torvalds/linux.git/tree/Documen...
So it looks like no EOMA68-specific data is needed to make it work.
Benson Mitchell
On 12/14/16, Benson Mitchell benson.mitchell+arm-netbook@gmail.com wrote:
Glad we're on the same page.
:)
On Wed, Dec 14, 2016 at 5:36 AM, Luke Kenneth Casson Leighton lkcl@lkcl.net wrote:
- type II 5.0mm cards may expect housings to provide "upscaling"
support for resolutions beyond 1366x768, a notification of the full list of available resolutions in the I2C EEPROM (in the form of DDC data) and, hmmm.... we'll need some form of auto-detection or standards-compliant means and method of communicating the desired resolution. hmmm....
I finally found the documentation for display-timings in devicetree. As I suspected, it already supports an arbitrary number of modes, and specifying one of them as native/preferred.
ah. rrright. okay. two things:
(1) the way it's going to have to work is: you read the EOMA68 I2C EEPROM @ addr 0x51, get the housing "id", and from there *decide which dtb fragments to load*. there's a patch related to beagleboards and so on which is being worked on that allows numbered pointer-references in devicetree to be replaced with arbitrary pre-compiled binary dtb fragments.
(2) getting the various settings isn't the problem (load DDC data over I2C like you would from any VESA-compliant monitor), picking the preferred one according to the devicetree specification isn't the problem, it's *TELLING* the hardware which one *HAS* been picked that's the problem.
now, if this was HDMI, it would not be a problem. HDMI: auto-detection is part of the protocol. VGA? not a problem (at least with modern monitors) - auto-detection can be designed in. but RGB/TTL? naah. there's absolutely no meta-data.
even in instances where LCDs contain DDC data, it's often completely and utterly wrong, so you have to use it as a "guide", experiment, then completely ignore it and hard-code the values that *WORKED* into the linux kernel source / dts.
now you have two possible (or potentially even more) resolutions to pick from... how the hell do you tell the IC (whatever it is) which one to use?
no idea - so that needs to be resolved, first, before the EOMA68 standard is to be augmented.
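(fwiw, point (1) above might look something like this on the card side - the sysfs path assumes an at24-style EEPROM driver bound at bus 0, address 0x51, and the id offset, ids and fragment names are all invented for illustration; none of this is in the released standard:)

    # read a (hypothetical) housing id byte from the EOMA68 EEPROM at 0x51 and
    # map it to a pre-compiled devicetree fragment to load.
    EEPROM_SYSFS = "/sys/bus/i2c/devices/0-0051/eeprom"
    HOUSING_ID_OFFSET = 0x00          # illustrative offset

    FRAGMENTS = {                     # illustrative ids and file names
        0x01: "eoma68-housing-microdesktop.dtbo",
        0x02: "eoma68-housing-laptop15in.dtbo",
    }

    def housing_fragment():
        with open(EEPROM_SYSFS, "rb") as f:
            f.seek(HOUSING_ID_OFFSET)
            housing_id = f.read(1)[0]
        return FRAGMENTS.get(housing_id)   # None -> fall back to safe defaults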
i'm not going to put something into the standard which hasn't at least been thoroughly researched.
l.
http://www.chrontel.com/index.php/products/display-interface/ch7018-lvds-tra...
need to find the datasheet for that.
http://www.silicondevice.com/file.upload/images/Gid1300Pdf_CH7018A%20Datashe...
oh look - the CH7018A datasheet.
blegh. only does up to 1024x768 input resolution. however what i was referring to is in section 4.0 (p32) of the datasheet, you use the serial port to tell it what modes its input is set to, as well as what the output mode is.
so, you get the general idea - it's just a pity that chrontel don't have any more modern products, but there'll quite likely be other manufacturers... i just want to make absolutely absolutely sure that some ICs actually exist that aren't EOL.
anyone got time to go over TI's web site and others? it's too frickin awkward to do this kind of research from china... unless it's a chinese web site (which of course is bloody hard to find because it's not properly indexed / search-engine-referenced) but if you _do_ find one let me know as the speed is quite quick from here.
l.
http://www.chrontel.com/index.php/products/display-interface/ch7034-hdtv-vga...
oo! that one!
hey i recognise that one, i'm certain it was the IC used in the GPL-violating CTPC89e, all those years ago.
http://download.csdn.net/detail/jangel_lee/6854539
*hurls*. anyone know how to extract files from that crapsite?
l.
Using the F12 console in Chromium... I don't see that there's even a PDF behind that thing. I honestly cannot tell how that flash player document viewer thing is generating or retrieving its content.
...never mind, I'm blind as a bat.
There's a blue download button underneath the black magic document viewer, which takes you to --> http://download.csdn.net/download/jangel_lee/6854539
If you click the ORANGE button on that download page, the one with the star-in-circle icon, you may be able to get in through there. Clicking it brings up a dialog box for a login. In there, there's a set of four tiny icons. Second-in-from-right tiny icon is a Github logo. Click that and, if you've a Github account, you can probably get through. I have one such account, but (as is often the case with me) I've long since forgotten my login credentials entirely... so, good luck.
There also appears to be a pay-us-money option... the "VIP" GREEN button below the other three.
78sk7zgfb6@dispostable.com http://www.dispostable.com/inbox/78sk7zgfb6/
user: popooegoo pass: poopsoup
dont understand the crap one is asked after creating account. hope maybe ya can just login and download and thats it?
jan: y'did ok. you started a great conversation where it'll result in improvements to the EOMA68 standard. the subject line is telling, however: "the only way". that's inflexibility which is going to end in tears. there's... well... there's _usually_ more than one way :) you did ok taking in a huge number of factors... just not enough of them. still appears to be my role to do that... *sigh*... take it easy, ok?
l.
On 12/14/16, Christopher Havel laserhawk64@gmail.com wrote:
Using the F12 console in Chromium... I don't see that there's even a PDF behind that thing. I honestly cannot tell how that flash player document viewer thing is generating or retrieving its content.
ah: you reminded me, there might be a way to examine the network traffic, see what the fuckwit adobe plugin is doing, and extract the specific file / request.
l.
Hi guys,
I waited for the thread to settle down - so, first, thank you for calming down.
This thread has simply confirmed what I've been most afraid of - namely that all the information channels regarding EOMA add up to one big mess.
I'll follow now this process:
1) I'll gather all valid information about the current EOMA (as quite a few pieces are invalid, incomplete or at least misleading) through thorough questioning on this list.
2) I'll add (new) information from my side and make absolutely sure you all understand it (including context and background).
3) I'll restate, rewrite and propose again a solution to what I see as a bottleneck.
4) We'll all finally react upon point 3 and discuss the steps to resolve each particular bottleneck.
Regards,
-- Jan
arm-netbook@lists.phcomp.co.uk