ok, been quite busy, 3 to 4 different things going on.
first one: i've been working on getting the eoma68-a20 board up and running (booting) from NAND flash, and that's been quite hair-raising. using a sunxi 3.15-10-rc5 kernel i was able to use the standard mtd-utils package to erase and write to the hynix TSOP48 NAND chip, placing an SPL-enabled recent version of u-boot onto it. this all worked fine... except it turns out that MLC NAND can self-destruct just from being read! all i can say is, no wonder allwinner's bootloader process is so damn complex: it's a 4-stage boot - boot0 (minimum bring-up), boot1 (capable of reading NAND as well as SD/MMC FAT partitions), then u-boot (modified to read allwinner's strange partition format) and finally the linux kernel.
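(for anyone curious what that looks like at the ioctl level, here's a minimal sketch of roughly what mtd-utils' flash_erase and nandwrite boil down to - untested as pasted here, and /dev/mtd0 plus the offset and dummy data are placeholders rather than the actual layout used on the board:)

    /* rough sketch (untested): erase the first eraseblock of an MTD
     * partition and write one page to it, via the same ioctls that
     * mtd-utils' flash_erase / nandwrite use.  /dev/mtd0, the offset
     * and the dummy 0xA5 data are placeholders, not the real layout. */
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>
    #include <fcntl.h>
    #include <unistd.h>
    #include <sys/ioctl.h>
    #include <mtd/mtd-user.h>

    int main(void)
    {
        int fd = open("/dev/mtd0", O_RDWR);
        if (fd < 0) { perror("open"); return 1; }

        struct mtd_info_user info;
        if (ioctl(fd, MEMGETINFO, &info) < 0) { perror("MEMGETINFO"); return 1; }
        printf("erasesize %u, writesize %u\n", info.erasesize, info.writesize);

        struct erase_info_user ei = { .start = 0, .length = info.erasesize };
        if (ioctl(fd, MEMERASE, &ei) < 0) { perror("MEMERASE"); return 1; }

        /* NAND writes must be whole, writesize-aligned pages */
        unsigned char *page = malloc(info.writesize);
        memset(page, 0xA5, info.writesize);
        if (pwrite(fd, page, info.writesize, 0) != (ssize_t)info.writesize)
            perror("pwrite");

        free(page);
        close(fd);
        return 0;
    }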
second, the jz4775 CPU Card is finally underway: http://rhombus-tech.net/ingenic/jz4775/news/
this is the first FSF-Endorseable CPU Card, using the 1.2ghz low-power Ingenic MIPS. it seems strange to put a $3 processor alongside $2.50 of NAND Flash and then put in almost $12 of DDR3 RAM ICs (2 GB RAM) but the threshold where SoCs were no longer the most expensive part of a BOM was passed a loong time ago.
third, the laptop main board: i've got everything working except the LCD. i've blown up 2 LCDs already, so it was necessary to buy 2 more. the first mistake was that the datasheet was hard to read, so the connector ended up reversed: that resulted in -20V being shoved up the backside of the +3.3v-sensitive ICs on the LCD, with the end result that by the time i was finally able to make up a reversed cable, the "magic smoke" had already left that LCD. also i hadn't noticed that i'd shorted the 5V rail to the 3.3v rail, which didn't help, and may have damaged some of the GPIOs on one of the EOMA68-A20 boards i have here. still quite a bit to investigate there.
fourth, i've managed to smoke about 3-4 Power ICs already - various MOSFETs, two 3 Amp RT8288 PMICs from microdesktop boards i'm using for test purposes - all to get the laptop power / charger board up and running. annoying! but finally a couple of hours ago, by making up a 2nd board with minimal components, i managed to get the LTC4155 up and recognised on the I2C bus, as well as check that it was outputting stable 5V from USB-style input. the next phase is to populate the over-voltage and reverse-voltage protection components.
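(for reference, a minimal sketch of the kind of i2c sanity-check involved in getting a chip like the LTC4155 "recognised" on the bus - untested as written, and the bus number, the 7-bit address 0x09 and the register offset are assumptions/placeholders to be checked against the LTC4155 datasheet and the board's actual i2c wiring:)

    /* minimal sketch (untested): check that a device answers on the i2c
     * bus and read back one register via /dev/i2c-N.  the bus number,
     * the 7-bit address 0x09 and register 0x00 are placeholders - check
     * the LTC4155 datasheet / board schematic for the real values. */
    #include <stdio.h>
    #include <fcntl.h>
    #include <unistd.h>
    #include <sys/ioctl.h>
    #include <linux/i2c-dev.h>

    int main(void)
    {
        int fd = open("/dev/i2c-1", O_RDWR);   /* bus number: placeholder */
        if (fd < 0) { perror("open"); return 1; }

        if (ioctl(fd, I2C_SLAVE, 0x09) < 0) {  /* 7-bit address: assumed */
            perror("I2C_SLAVE"); return 1;
        }

        /* simple write-register-pointer-then-read transfer */
        unsigned char reg = 0x00, val;
        if (write(fd, &reg, 1) != 1 || read(fd, &val, 1) != 1) {
            perror("i2c transfer"); return 1;
        }
        printf("reg 0x%02x = 0x%02x\n", reg, val);

        close(fd);
        return 0;
    }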
it's getting there... it's just slower than i would like.
l.
On Tuesday 24. November 2015 05.03.16 Luke Kenneth Casson Leighton wrote:
second, the jz4775 CPU Card is finally underway: http://rhombus-tech.net/ingenic/jz4775/news/
this is the first FSF-Endorseable CPU Card, using the 1.2ghz low-power Ingenic MIPS. it seems strange to put a $3 processor alongside $2.50 of NAND Flash and then put in almost $12 of DDR3 RAM ICs (2 GB RAM) but the threshold where SoCs were no longer the most expensive part of a BOM was passed a loong time ago.
This is great news! I also like the 2GB RAM support - with many ARM single-board computers still only supporting 1GB, it makes sense to go for the maximum here - and the USB boot mode support, which is great for experimenting with new boot payloads and is one of the nice things about the Ben NanoNote.
Paul
On Tue, Nov 24, 2015 at 9:06 AM, Paul Boddie paul@boddie.org.uk wrote:
On Tuesday 24. November 2015 05.03.16 Luke Kenneth Casson Leighton wrote:
second, the jz4775 CPU Card is finally underway: http://rhombus-tech.net/ingenic/jz4775/news/
this is the first FSF-Endorseable CPU Card, using the 1.2ghz low-power Ingenic MIPS. it seems strange to put a $3 processor alongside $2.50 of NAND Flash and then put in almost $12 of DDR3 RAM ICs (2 GB RAM) but the threshold where SoCs were no longer the most expensive part of a BOM was passed a loong time ago.
This is great news! I also like the 2GB RAM support - with many ARM single-board computers still only supporting 1GB, it makes sense to go for the maximum here -
those kinds of decisions i believe are usually made based on the faulty logical analysis which is summarised as "but... but.... the processor costs less than the memory!!!"
and the USB boot mode support, which is great for experimenting with new boot payloads and is one of the nice things about the Ben NanoNote.
same company. nanonote is the jz4720.
l.
On Tuesday 24. November 2015 14.42.44 Luke Kenneth Casson Leighton wrote:
On Tue, Nov 24, 2015 at 9:06 AM, Paul Boddie paul@boddie.org.uk wrote:
This is great news! I also like the 2GB RAM support - with many ARM single-board computers still only supporting 1GB, it makes sense to go for the maximum here -
those kinds of decisions i believe are usually made based on the faulty logical analysis which is summarised as "but... but.... the processor costs less than the memory!!!"
RAM is (or will soon be) the inhibiting factor around the adoption of single-board computers. People forgave the first model of the Raspberry Pi for having 256MB RAM because it was only supposed to be used for "lightweight" tasks, but people's expectations tend to increase when they realise that these devices could replace their primary computer.
I suppose that you could cluster several of them if they each only provided, say, 1GB RAM, but then you need the convenient infrastructure to make that easy. Indeed, a multi-slot cluster unit would be nice with EOMA-68, and I think someone mentioned something similar earlier in the year. The software architecture would also need reconsidering, however.
As I understand it, 32-bit MIPS divides the addressable memory space in two, meaning that 2GB of actual memory is the limit. I haven't paid enough attention to 32-bit ARM recently to say whether similar limits apply there, but a move to 64-bit architectures is necessary to address over 4GB, anyway, and that would be the motivation for doing it. (Which is why I was baffled by the Olimex "64-bit" device that was only going to provide 2GB RAM.)
and the USB boot mode support, which is great for experimenting with new boot payloads and is one of the nice things about the Ben NanoNote.
same company. nanonote is the jz4720.
Yes. The NanoNote deliberately exposed the USB Boot pins to facilitate experimentation, whereas the Letux/Trendtac Minibook/MiniPC (which uses the mystery jz4730) apparently requires more work to mess around at the booting stage. I also noticed that the MIPS Creator CI20 has USB Boot (using a jz4780) as well. And the (micro)SD Boot mode is also very handy, of course.
Paul
On Tue, Nov 24, 2015 at 3:30 PM, Paul Boddie paul@boddie.org.uk wrote:
On Tuesday 24. November 2015 14.42.44 Luke Kenneth Casson Leighton wrote:
On Tue, Nov 24, 2015 at 9:06 AM, Paul Boddie paul@boddie.org.uk wrote:
This is great news! I also like the 2GB RAM support - with many ARM single-board computers still only supporting 1GB, it makes sense to go for the maximum here -
those kinds of decisions i believe are usually made based on the faulty logical analysis which is summarised as "but... but.... the processor costs less than the memory!!!"
I suppose that you could cluster several of them if they each only provided, say, 1GB RAM, but then you need the convenient infrastructure to make that easy. Indeed, a multi-slot cluster unit would be nice with EOMA-68, and I think someone mentioned something similar earlier in the year.
yes. primary justification for this is to enable rack-mount low-power modular servers.
As I understand it, 32-bit MIPS divides the addressable memory space in two, meaning that 2GB of actual memory is the limit.
rright. as ingenic's MIPS is home-grown, they inform me that the jz4775 can actually address up to 3GB of RAM. however that would involve either a... rather complex board arrangement with a mixture of ICs, or simply putting in 4GB of DDR in total and ignoring 1GB of it.
I haven't paid enough attention to 32-bit ARM recently to say whether similar limits apply there,
absolutely they do - because the peripherals are all memory-mapped, and the boot ROM also has to be addressed somehow.
to make life a bit simpler (and save some silicon space), usually only a few of the address bits are used to make the decision "address this peripheral: yes/no". so, hilariously, you can sometimes address the exact same peripheral at regularly-spaced (large) memory intervals. this was, i believe, the case with at least the intel pxa270: it was over 10 years ago, but i do seem to recall seeing the exact same peripherals at different addresses when reverse-engineering an XDA smartphone.
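(a toy illustration of the effect, with invented numbers - not the pxa270's actual memory map: if the chip-select logic only looks at a few high address bits, and only the low bits reach the peripheral's own register decoder, then any two addresses that agree in those bits hit the same register:)

    /* toy illustration (invented decode, not a real SoC's memory map):
     * the chip-select only examines bits [31:28] and the peripheral's
     * register decoder only sees bits [11:0], so every address with the
     * same top nibble and low 12 bits lands on the same register -
     * hence the "mirrored" peripherals at regular intervals. */
    #include <stdio.h>
    #include <stdint.h>

    static uint32_t effective_register(uint32_t addr)
    {
        return (addr & 0xF0000000u) | (addr & 0x00000FFFu);
    }

    int main(void)
    {
        uint32_t a = 0x40000010u;
        uint32_t b = 0x4002B010u;  /* "different" address, same register */
        printf("%08x -> %08x\n", (unsigned)a, (unsigned)effective_register(a));
        printf("%08x -> %08x\n", (unsigned)b, (unsigned)effective_register(b));
        return 0;
    }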
in short... yeah :)
l.
Hello,
On Tue, 24 Nov 2015 15:53:10 +0000 Luke Kenneth Casson Leighton lkcl@lkcl.net wrote:
[]
RAM is (or will soon be) the inhibiting factor around the adoption of single-board computers.
I haven't paid enough attention to 32-bit ARM recently to say whether similar limits apply there,
absolutely they do - because the peripherals are all memory-mapped, and the boot ROM also has to be addressed somehow.
Just like x86-32, ARMv7 has a physical address extension http://infocenter.arm.com/help/index.jsp?topic=/com.arm.doc.ddi0438i/CHDCGIB... , so it can address more than 4Gb of physical memory. That still leaves 4Gb of virtual memory per process, and thank god - bloating memory size doesn't mean growing its speed, so the more memory, the slower it all works.
Generally, it's pretty depressing to read this memory FUD on the mailing list of a "sustainable computing" project. What would mere people need more memory for? Watching movies? Almost nobody puts in more than 1Gb because *it's not really needed*. And for sh%tty software, no matter if you have 1, 2, or 8GB - it will devour it and sh%t it all around, making the system overall work slower and slower with more memory. (I'm currently sitting on a 16Gb box with constant 100% cpu load - it's Firefox collecting garbage in its 6Gb javascript heap - forever and ever).
For comparison, my latest discovery is relational database engines which can execute queries in a few *kilobytes* of RAM - https://github.com/graemedouglas/LittleD and Contiki Antelope http://dunkels.com/adam/tsiftes11database.pdf
On Tuesday 24. November 2015 19.36.14 Paul Sokolovsky wrote:
Just like x86-32, ARMv7 has a physical address extension http://infocenter.arm.com/help/index.jsp?topic=/com.arm.doc.ddi0438i/CHDCGIBF.html , so it can address more than 4Gb of physical memory. That still leaves 4Gb of virtual memory per process, and thank god - bloating memory size doesn't mean growing its speed, so the more memory, the slower it all works.
4GB or 4Gb? I guess you mean the former. Again, I haven't kept up with this, so it's useful to know. I remember that the i386 architecture had support for larger address spaces, but I guess that it was more convenient to move towards the amd64 variant in the end.
Generally, it's pretty depressing to read this memory FUD on the mailing list of a "sustainable computing" project. What would mere people need more memory for? Watching movies? Almost nobody puts in more than 1Gb because *it's not really needed*. And for sh%tty software, no matter if you have 1, 2, or 8GB - it will devour it and sh%t it all around, making the system overall work slower and slower with more memory. (I'm currently sitting on a 16Gb box with constant 100% cpu load - it's Firefox collecting garbage in its 6Gb javascript heap - forever and ever).
FUD? Ouch! Thanks for classifying some pretty innocent remarks in such a negative way. For your information, my primary machine for the last ten years has only ever had 1GB - and I was making do with 128MB for years before that - and at times I have felt (or have been made to feel) behind the times by people who think everybody went amd64 and that nobody develops on 32-bit Intel any more.
Yes, software is bloated and we can often do what we need with less. Another interest of mine is old microcomputers where people used to do stuff with a few KB, just as you've discovered...
For comparison, my latest discovery is relational database engines which can execute queries in a few *kilobytes* of RAM - https://github.com/graemedouglas/LittleD and Contiki Antelope http://dunkels.com/adam/tsiftes11database.pdf
It's worth bearing in mind that PostgreSQL was (and maybe still is) delivered with a conservative configuration that was aimed at systems of the mid- to late-1990s. Contrary to what people would have you believe, multi-GB query caches are usually a luxury, not a necessity.
Anyway, my point about memory still stands: you can get by with 256MB for sure, but people are wanting to run stuff that happens to use more than that. It's not their fault that distributions are steadily bloating themselves, and these users don't necessarily have the time or expertise to either seek out or rework more efficient distributions. Moreover, some of the leaner distributions are less capable to the extent that it really is worth evaluating what a bit of extra memory can deliver.
Finally, there are also genuine reasons for wanting more RAM: not for programs but for manipulating data efficiently. And on sustainability, after a while it will become rather more difficult to source lower-capacity components, and they may well be less energy-efficient than their replacements, too.
Some progress is actually worth having, you know.
Paul
Hello,
On Tue, 24 Nov 2015 22:49:06 +0100 Paul Boddie paul@boddie.org.uk wrote:
On Tuesday 24. November 2015 19.36.14 Paul Sokolovsky wrote:
Just like x86-32, ARMv7 has a physical address extension http://infocenter.arm.com/help/index.jsp?topic=/com.arm.doc.ddi0438i/CHDCGIBF.html , so it can address more than 4Gb of physical memory. That still leaves 4Gb of virtual memory per process, and thank god - bloating memory size doesn't mean growing its speed, so the more memory, the slower it all works.
4GB or 4Gb? I guess you mean the former.
Yep, 4GiB, can't live up to 21st century standards, yuck.
Again, I haven't kept up with this, so it's useful to know. I remember that the i386 architecture had support for larger address spaces, but I guess that it was more convenient to move towards the amd64 variant in the end.
The way it was pushed on everyone, yeah. And we'll see if the same is happening now with arm64 - just like you, I skipped the x86_64 "revolution", so I can't judge for sure, but as far as I can tell, yeah, it's being pushed pretty hard. Which is only sad for projects like EOMA68, because it's an endless race, and all the careful selection of nice 32-bit SoCs risks going down /dev/null, as consumers will soon greet such "classic" stuff with "wwwwhat? it's not 64-bit?"
Generally, it's pretty depressing to read this memory FUD on the mailing list of a "sustainable computing" project. What would mere people need more memory for? Watching movies? Almost nobody puts in more than 1Gb because *it's not really needed*. And for sh%tty software, no matter if you have 1, 2, or 8GB - it will devour it and sh%t it all around, making the system overall work slower and slower with more memory. (I'm currently sitting on a 16Gb box with constant 100% cpu load - it's Firefox collecting garbage in its 6Gb javascript heap - forever and ever).
FUD? Ouch! Thanks for classifying some pretty innocent remarks in such a negative way.
Perhaps it was a bit strong, but we all know that the EOMA68 project is rather overdue, and it feels that maybe - just maybe - something will finally materialize anytime soon. And maybe - just maybe - coming up with a high-end 2Gb module is a good idea to show that the project can deliver a "bleeding edge" spec and not perceivably lag behind the market midline. But marketing it with "RAM is (or will soon be) the inhibiting factor around the adoption of single-board computers" will IMHO only hurt the project, as its main goal (unless my memory plays tricks on me) is to deliver commodity hardware into the hands of people to do real-world things (and to allow that hardware to be sustainably reused for doing even more real-world things).
So, one scenario for how it all may turn out is that all this sustainability talk proves to be a marketing gimmick, and there won't be much more sustainability in EOMA than there is freedom in some fairphone. It will be a toy for a particular kind of hipster, delivered to them in recycled denim bags. Luke gets rich and will drive a sustainable, personally tuned Tesla, but not rich enough to produce an open-hardware SoC. All of that would be a pretty sad finale after all these years.
So, I hope the project will continue to educate people on how cool it is to run a home router with server capabilities on 256MB RAM instead of the typical 32MB, even if it costs 3x (for starters; hopefully it gets more sustainable over time), rather than ship luxuries and proclaim the demise of 1GB single-board computers.
Thanks for the other comments - insightful.
[]
Some progress is actually worth having, you know.
Paul
On Wednesday 25. November 2015 21.16.42 Paul Sokolovsky wrote:
Paul Boddie paul@boddie.org.uk wrote:
Again, I haven't kept up with this, so it's useful to know. I remember that the i386 architecture had support for larger address spaces, but I guess that it was more convenient to move towards the amd64 variant in the end.
The way it was pushed on everyone, yeah. And we'll see if the same is happening now with arm64 - just like you, I skipped the x86_64 "revolution", so I can't judge for sure, but as far as I can tell, yeah, it's being pushed pretty hard. Which is only sad for projects like EOMA68, because it's an endless race, and all the careful selection of nice 32-bit SoCs risks going down /dev/null, as consumers will soon greet such "classic" stuff with "wwwwhat? it's not 64-bit?"
Well, I used Alpha back in the 1990s and that was a 64-bit transition worth pursuing, given how much faster the architecture was than virtually everything else that was available to me at the time. But it's precisely the "it's not 64-bit?" thinking that leads to products like that Olimex one (and also the Qualcomm arm64 development board [*]) where, as I noted, the most significant current motivation - more memory headroom for those who need it - is completely ignored. I guess we'd both ask what the point of such products is, other than "preparing for the future" or whatever the usual sales pitch is.
[*] https://developer.qualcomm.com/hardware/dragonboard-410c
("Free graphics possible" according to the Debian Wiki's ARM 64 port page...
https://wiki.debian.org/Arm64Port
...which lists the gold-plated options for "preparing for the future" offered by vendors who don't really seem to be certain that it is the future at the moment.)
FUD? Ouch! Thanks for classifying some pretty innocent remarks in such a negative way.
Perhaps it was a bit strong, but we all know that the EOMA68 project is rather overdue, and it feels that maybe - just maybe - something will finally materialize anytime soon. And maybe - just maybe - coming up with a high-end 2Gb module is a good idea to show that the project can deliver a "bleeding edge" spec and not perceivably lag behind the market midline.
At this point 2GB is pretty reasonable. Indeed, I'd be happy with 2GB for my own purposes: I did consider upgrading my desktop machine to 2GB a few months ago, but it would be much nicer to go with EOMA-68 devices instead. My personal justification for more than 1GB involves developing and testing stuff in User Mode Linux which isn't happy running with a small tmpfs, and although I could switch to other virtualisation technologies (distribution support just wasn't there when I started to use UML), I imagine that I'd still need more memory to make it all happy, anyway.
I could imagine using a couple of 1GB devices instead for such activities. Going down to 512MB could work, too, but might involve more attention to the distribution-level stuff. At some point, the extra work required in tailoring the software to work well with the hardware is work I don't really have time for, however. So, 256MB would present some awkward choices, given the state of various distributions today.
I'm not saying that everyone needs 2GB or even 1GB (or maybe even 512MB), but then again, I'm not a big media "consumer" on my hardware, so I do wonder what amount of memory would be the minimum to satisfy "most people".
But marketing it with "RAM is (or will soon be) the inhibiting factor around the adoption of single-board computers" will IMHO only hurt the project, as its main goal (unless my memory plays tricks on me) is to deliver commodity hardware into the hands of people to do real-world things (and to allow that hardware to be sustainably reused for doing even more real-world things).
Oh, it wasn't a marketing suggestion at all. I don't suggest playing the game of who can offer the most RAM, but it is clear that the amount of RAM does influence potential buyers and their impressions of what they might be able to use the device for. One can certainly get away with offering something very cheap and saying that it is good enough for "lightweight tasks" or, given the shenanigans in the "educational" scene at the moment, as an almost "throwaway" piece of gear that occupies the attention of certain kinds of users for a short while - especially with things like the Raspberry Pi Zero that was announced very recently. But I think that it serves people better to have something that can address more than their absolute basic needs. Otherwise, they'll say that "this cheap thing is all very well, but I want a proper laptop", or whatever.
So, one scenario for how it all may turn out is that all this sustainability talk proves to be a marketing gimmick, and there won't be much more sustainability in EOMA than there is freedom in some fairphone. It will be a toy for a particular kind of hipster, delivered to them in recycled denim bags. Luke gets rich and will drive a sustainable, personally tuned Tesla, but not rich enough to produce an open-hardware SoC. All of that would be a pretty sad finale after all these years.
Actually, Fairphone has come some way in terms of freedom: the Qualcomm SoC that's in the second product may even be supportable by Free Software. Meanwhile, Luke's Tesla would have to be personalised to stand out where I live. :-)
So, I hope the project will continue to educate people on how cool it is to run a home router with server capabilities on 256MB RAM instead of the typical 32MB, even if it costs 3x (for starters; hopefully it gets more sustainable over time), rather than ship luxuries and proclaim the demise of 1GB single-board computers.
Well, a lot of these computers have to get to 1GB first before their demise. ;-) But it is certainly the case that you can deliver better (and better-supported) hardware to various kinds of devices, and for many of them a modest amount of memory (by desktop/laptop standards) would go a long way. I don't think I would ever dispute that.
Paul
Actually, Fairphone has come some way in terms of freedom: the Qualcomm SoC that's in the second product may even be supportable by Free Software. Meanwhile, Luke's Tesla would have to be personalised to stand out where I live. :-)
i like what fairphone are attempting - providing people with the means to repair their own devices. it's a pity that they haven't really thought it through properly: if they had, they would have made the parts *properly* modular - i.e. in ESD-protective robust and independent cases such that either a 4-year-old or an 80-year-old could swap out parts in a few seconds.... *safely*.
and i won't be buying a tesla, i will be making my own hybrid - including designing the engine (@ 40% more fuel-efficient than existing 2-stroke and 4-stroke engines, by using "detonation", i.e. over 1800F combustion temperature) - and including *reversing* the sizes of the hydrocarbon engine and the electrical engine: a 6kW hydrocarbon engine alongside a 12kW (20kW peak, going into over-temperature) electrical engine.
the over-temperature characteristics are now permitted in Europe under EU vehicle category L7E [heavy quadricycle] for up to 30 seconds of "boost power" at up to 25kW so that the vehicle may accelerate at the same rate as other, more powerful (yet heavier) vehicles.
long story - been working on a design of ultra-efficient vehicle for several years.
l.
(I'm currently sitting on a 16Gb box with constant 100% cpu load - it's Firefox collecting garbage in its 6Gb javascript heap - forever and ever).
I'm glad I'm not the only one suffering from this...
On Wed, Nov 25, 2015 at 2:22 AM, Jean-Luc Aufranc cnxsoft@cnx-software.com wrote:
(I'm currently sitting on a 16Gb box with constant 100% cpu load - it's Firefox collecting garbage in its 6Gb javascript heap - forever and ever).
I'm glad I'm not the only one suffering from this...
... you're not - it's a known flaw in web browser design: a single global process runs all tabs/windows in order to provide a global javascript execution context. chrome fixed that by running entirely separate processes per window, which brings its own set of problems.
i run over 100 tabs on a regular basis, so every now and then i end up with 100% cpu usage and occasionally well over 45 gb of virtual memory gets allocated. solution: kill firefox and restart it, allowing it to reopen all tabs. recent versions leave the tabs "unopened" until you *actually* open them by explicitly clicking on them.
l.
Luke Kenneth Casson Leighton lkcl@lkcl.net writes:
On Wed, Nov 25, 2015 at 2:22 AM, Jean-Luc Aufranc cnxsoft@cnx-software.com wrote:
(I'm currently sitting on a 16Gb box with constant 100% cpu load - it's Firefox collecting garbage in its 6Gb javascript heap - forever and ever).
I'm glad I'm not the only one suffering from this...
... you're not - it's a known flaw in web browser design: a single global process runs all tabs/windows in order to provide a global javascript execution context. chrome fixed that by running entirely separate processes per window, which brings its own set of problems.
i run over 100 tabs on a regular basis, so every now and then i end up with 100% cpu usage and occasionally well over 45 gb of virtual memory gets allocated. solution: kill firefox and restart it, allowing it to reopen all tabs. recent versions leave the tabs "unopened" until you *actually* open them by explicitly clicking on them.
As a pentadactyl user, I find this procedure particularly simple:
:restart<return>
I only mention this because I have a feeling that pentadactyl might be something that some of the folk here would like but may not have heard of -- it makes Firefox/Iceweasel behave like vi -- it sounds mad, but many people find it instantly addictive.
The only real downside is that it's often a bit behind the latest Firefox release, so doesn't yet work with 42, but if you're OK with packaged versions, Debian's currently got 38.2.1esr available in everything since oldstable (wheezy), and that works with the current version ... which is packaged for testing (a.k.a. "stretch") as xul-ext-pentadactyl
Cheers, Phil.
On Wed, Nov 25, 2015 at 8:44 AM, Philip Hands phil@hands.com wrote:
As a pentadactyl user, I find this procedure particularly simple:
:restart<return>
I only mention this because I have a feeling that pentadactyl might be something that some of the folk here would like but may not have heard of -- it makes Firefox/Iceweasel behave like vi -- it sounds mad, but many people find it instantly addictive.
cool! :)
... you're not - it's a known flaw in web browser design: a single global process runs all tabs/windows in order to provide a global javascript execution context. chrome fixed that by running entirely separate processes per window, which brings its own set of problems.
It's also a design flaw in the HTML5 system itself: every webpage is basically its own program, so you end up running lots of programs that are under the control of people whose interest is not in reducing your CPU or memory usage.
Most real applications try to be careful not to use up CPU resources at all as long as there's no user interaction, but many webpages have way too much commercial interest in constantly jumping up and down to grab your attention and/or staying in touch with a whole bunch of servers to maximize the amount of data they collect on you.
I almost miss the time when flash was the only game in town for "active" web pages, and I could just run a cron job to periodically kill all flash processes.
Stefan