I have seen that several EOMA-68 cards are being developed, but all have very little power or are old (like the A20, JZ4775 or IC1T). This means that many people who are looking for powerful hardware are left out of the EOMA-68 market.
What is the future of EOMA-68? Will there be any EOMA-68 with powerful hardware (like the Tegra X1 or Intel Bay Trail)?
Hi
What is the future of EOMA-68? Will there be any EOMA-68 with powerful hardware (like the Tegra X1 or Intel Bay Trail)?
Or the Loongson[1]: it is "free-er". An octa-core costs 2500 RMB and draws around 30W, with nearly twice the computational power of an Intel Sandy Bridge. So before you go for power-eaters like Intel or Tegra (which even seem to enjoy the ride with proprietary GPU designs), let's just use the dragon core.
Cheers

[1] http://loongson.cn
On Sat, May 2, 2015 at 10:20 AM, David Lanzendörfer david.lanzendoerfer@o2s.ch wrote:
Hi
What is the future of EOMA-68? Will there be any EOMA-68 with powerful hardware (like the Tegra X1 or Intel Bay Trail)?
Or the Loongson[1]: it is "free-er". An octa-core costs 2500 RMB and draws around 30W, with nearly twice the computational power of an Intel Sandy Bridge. So before you go for power-eaters like Intel or Tegra (which even seem to enjoy the ride with proprietary GPU designs), let's just use the dragon core.
the current Loongson is, as you say, 30W. EOMA68 (Type I) is a maximum of 5W, where the comfortable operating thermal limit is actually 3.5W - literally a tenth of the Loongson's power budget (and that's just the processor).
also, the Loongson processors are designed around a Northbridge-Southbridge architecture: ironically the best chip to use is the AMD CS5536 (the same IC that was used in the OLPC XO-1).
also, they require a 64-bit RAM interface (minimum) - i believe they might even require dual 64-bit-wide RAM interfaces. the power consumption from the memory alone is therefore going to be something of the order of 8 to 10 watts at 1333MHz or so.
so for a full Loongson system you would not be looking at a 30W budget, you would be looking at more like a 40 to 45W budget.
by contrast, for example with the A20 PCB we're looking at 32-bit wide 800MHz DDR3 RAM and that uses only around 350mW (0.35W), and the A20 itself only uses about 2.5 watts, flat-out.
the difference then, david, is absolutely enormous, making them totally unsuitable for EOMA68.
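to put numbers on that, here's a back-of-envelope budget using only the estimates quoted above (not datasheet values - the northbridge/southbridge figure in particular is a rough guess):

```python
# back-of-envelope system power budgets, using only the estimates
# quoted in this thread (not datasheet values).

EOMA68_MAX_W = 5.0        # EOMA68 Type I hard limit
EOMA68_COMFORT_W = 3.5    # comfortable operating thermal limit

# Loongson: ~30W CPU, plus an estimated 8-10W for dual 64-bit DDR3,
# plus northbridge/southbridge and support circuitry (rough guess).
loongson_w = 30.0 + 9.0 + 4.0

# A20 card: ~2.5W CPU flat-out, plus ~0.35W for 32-bit 800MHz DDR3.
a20_w = 2.5 + 0.35

print(f"Loongson system: ~{loongson_w:.0f}W")   # ~43W, well over the 5W limit
print(f"A20 CPU card:    ~{a20_w:.2f}W")        # inside the 3.5W comfort zone
print(f"ratio: ~{loongson_w / a20_w:.0f}x")
```

even being generous with the estimates, the Loongson system lands an order of magnitude outside what an EOMA68 Type I card can dissipate.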
Loongson processors would however be *perfectly* suited to the EOMA-200 standard, which is designed *exactly* for this type of higher-powered processor.
however, as i am currently focussed on the easier-to-fund EOMA68, as a way to bootstrap up to profitability by which i can then focus on the *other* standards later, it would be foolish of me - unless you can find the funding and provide it to me - to change direction at this exact minute, especially when things are close to crowd-funding and take-off.
does that help clarify?
l.
On Saturday 2. May 2015 09.51.11 gacuest@gmail.com wrote:
I have seen that several EOMA-68 cards are being developed, but all have very little power or are old (like the A20, JZ4775 or IC1T). This means that many people who are looking for powerful hardware are left out of the EOMA-68 market.
I personally regard low power consumption as a good thing, and I guess the reason for focusing on that is the original intention to use EOMA-68 in, well, an "ARM netbook". ;-)
As for things being old, I suppose the A20 is "old" if you regard anything that isn't the manufacturer's current generation of products as old. Then again, maybe I don't care as much as you about this: my desktop computer is ten years old, and running GNU/Linux means that I've not been forced to upgrade to bail out various large corporations repeatedly over that period.
What is the future of EOMA-68? Will there be any EOMA-68 with powerful hardware (like the Tegra X1 or Intel Bay Trail)?
Another consideration is openness. Are either of these technologies sufficiently open? Nvidia have traditionally had a bad reputation for this, perhaps only courting openness when they've struggled to attract customers, as I remember being the case with their SoCs: I think the summary was that they promised a lot and delivered comparatively little, and the customers all switched their future designs to other SoCs in disgust.
Paul
On Sat, May 2, 2015 at 10:44 AM, Paul Boddie paul@boddie.org.uk wrote:
On Saturday 2. May 2015 09.51.11 gacuest@gmail.com wrote:
I have seen that several EOMA-68 cards are being developed, but all have very little power or are old (like the A20, JZ4775 or IC1T). This means that many people who are looking for powerful hardware are left out of the EOMA-68 market.
I personally regard low power consumption as a good thing, and I guess the reason for focusing on that is the original intention to use EOMA-68 in, well, an "ARM netbook". ;-)
:)
As for things being old, I suppose the A20 is "old" if you regard anything that isn't the manufacturer's current generation of products as old. Then again, maybe I don't care as much as you about this: my desktop computer is ten years old, and running GNU/Linux means that I've not been forced to upgrade to bail out various large corporations repeatedly over that period.
it's something like 17 seconds to boot a full desktop GUI on the A20.
What is the future of EOMA-68? Will there be any EOMA-68 with powerful hardware (like the Tegra X1 or Intel Bay Trail)?
Another consideration is openness. Are either of these technologies sufficiently open? Nvidia have traditionally had a bad reputation for this, perhaps only courting openness when they've struggled to attract customers, as I remember being the case with their SoCs: I think the summary was that they promised a lot and delivered comparatively little, and the customers all switched their future designs to other SoCs in disgust.
that, and the pricing: at $17 vs a $5 allwinner processor, it becomes hard to justify the price differential when you can't really tell whether the game you're running on a 7in screen is at 30fps instead of 120fps (i.e. beyond the refresh rate of the LCD)
the whole point is that there's a threshold of "good enough" that all these lower-priced (and lower-power) processors fit perfectly well.
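the refresh-rate point can be made concrete with a toy calculation (the 60Hz panel figure here is an assumption; the point is only that frames rendered beyond the panel's refresh rate are never displayed):

```python
# a panel can only show min(render_fps, refresh_hz) distinct frames per
# second, so GPU throughput beyond the refresh rate is invisible to the user.

def visible_fps(render_fps: float, refresh_hz: float = 60.0) -> float:
    return min(render_fps, refresh_hz)

# hypothetical $17 part rendering at 120fps vs a $5 part at 60fps,
# both driving an assumed 60Hz panel:
print(visible_fps(120.0))   # the extra frames are simply thrown away
print(visible_fps(60.0))    # indistinguishable on the same panel
```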
that and the fact that the IC3128 and the JZ4775 are FSF-Endorseable means that there are people willing to buy them irrespective of the slightly lower performance. the JZ4775 CPU Card will come with 2GB of RAM, so the fact that it's only a 1GHz single-core MIPS will be less of an issue.
l.
2015-05-02 12:33 Luke Kenneth Casson Leighton:
On Sat, May 2, 2015 at 10:44 AM, Paul Boddie paul@boddie.org.uk wrote:
Another consideration is openness. Are either of these technologies sufficiently open? Nvidia have traditionally had a bad reputation for this, perhaps only courting openness when they've struggled to attract customers, as I remember being the case with their SoCs: I think the summary was that they promised a lot and delivered comparatively little, and the customers all switched their future designs to other SoCs in disgust.
[...]
that and the fact that the IC3128 and the JZ4775 are FSF-Endorseable means that there are people willing to buy them irrespective of the slightly lower performance. the JZ4775 CPU Card will come with 2GB of RAM, so the fact that it's only a 1GHz single-core MIPS will be less of an issue.
Speaking of openness/FSF-endorsability, and taking into account that the current focus is to go ahead with what is already planned, like the A20, with which I fully agree (so please don't take this as a demand, just as showing interest) -- would it be feasible in the near future to have OpenRISC or RISC-V (or the RISC-V-based lowRISC, when ready)?
Almost none of the disadvantages cited for Intel or NVIDIA SoCs apply (NDAs, binary blobs, power issues). Everything is fully open in the case of those processors, and the toolchains are based on the usual GNU/Linux ones (GNU GCC, glibc, etc.) and are mostly ready (they could use some help with upstreaming, but that's another issue). Unless there is a problem with finding factories able to build them, I don't know of any disadvantage compared to the ICubeCorp IC3128 and Ingenic JZ4775.
In the case of OpenRISC, there is even a Debian port half-ready [1]. I guess that Ingenic's will already work with the Debian mips/mipsel port, but I think that for ICubeCorp's all of the software distribution would have to be created from scratch.
[1] https://people.debian.org/~mafm/posts/2015/20150421_about-the-debian-gnulinu...
Cheers. -- Manuel A. Fernandez Montecelo manuel.montezelo@gmail.com
On Saturday 2. May 2015 17.07.57 Manuel A. Fernandez Montecelo wrote:
Speaking of openness/FSF-endorsability, and taking into account that the current focus is to go ahead with what is already planned, like the A20, with which I fully agree (so please don't take this as a demand, just as showing interest) -- would it be feasible in the near future to have OpenRISC or RISC-V (or the RISC-V-based lowRISC, when ready)?
I imagine that it depends on things like availability of hardware versions of CPUs for these architectures. I recall that there was a board for OpenRISC that used an FPGA, and there was also a fundraising campaign for an ASIC version of, I think, the OR1200 which didn't succeed. So it may be the case that an FPGA solution is the only remotely near-future option, and that brings a lot of other issues.
There's also the distinction between plain CPUs and SoCs. I don't really follow what goes on around these things, but I recall that the people who were developing the Milkymist One hardware were aiming to deliver an SoC solution based on the LatticeMico32 core:
https://en.wikipedia.org/wiki/M-Labs
I don't know what the situation is with OpenRISC and SoCs.
With the FPGA stuff, a significant concern has been the need for proprietary tools to actually deploy the hardware designs. It appears that some work is coming to fruition to eliminate such tools for some FPGA products:
http://www.clifford.at/yosys/about.html
Almost none of the disadvantages cited for Intel or NVIDIA SoCs apply (NDAs, binary blobs, power issues). Everything is fully open in the case of those processors, and the toolchains are based on the usual GNU/Linux ones (GNU GCC, glibc, etc.) and are mostly ready (they could use some help with upstreaming, but that's another issue). Unless there is a problem with finding factories able to build them, I don't know of any disadvantage compared to the ICubeCorp IC3128 and Ingenic JZ4775.
Maybe performance might be an issue if the FPGA route is chosen. Otherwise, it's the availability of "proper silicon" versions, I guess.
In the case of OpenRISC, there is even a Debian port half-ready [1]. I guess that Ingenic's will already work with the Debian mips/mipsel port, but I think that for ICubeCorp's all of the software distribution would have to be created from scratch.
[1] https://people.debian.org/~mafm/posts/2015/20150421_about-the-debian-gnulinux-port-for-openrisc-or1k/
That's an interesting report! Certainly, OpenRISC can hopefully use a lot of the existing MIPS-targeted software without significant modification.
Paul
2015-05-02 17:00 Paul Boddie:
On Saturday 2. May 2015 17.07.57 Manuel A. Fernandez Montecelo wrote:
Speaking of openness/FSF-endorsability, and taking into account that the current focus is to go ahead with what is already planned, like the A20, with which I fully agree (so please don't take this as a demand, just as showing interest) -- would it be feasible in the near future to have OpenRISC or RISC-V (or the RISC-V-based lowRISC, when ready)?
I imagine that it depends on things like availability of hardware versions of CPUs for these architectures. I recall that there was a board for OpenRISC that used an FPGA, and there was also a fundraising campaign for an ASIC version of, I think, the OR1200 which didn't succeed. So it may be the case that an FPGA solution is the only remotely near-future option, and that brings a lot of other issues.
Basically, what I was wondering (because I don't have any idea about hardware manufacturing) is whether, instead of asking companies to manufacture EOMA-68 A20 CPU cards, they could be asked to manufacture the same but with OpenRISC or RISC-V cores instead.
There's also the distinction between plain CPUs and SoCs. I don't really follow what goes on around these things, but I recall that the people who were developing the Milkymist One hardware were aiming to deliver an SoC solution based on the LatticeMico32 core:
https://en.wikipedia.org/wiki/M-Labs
I don't know what the situation is with OpenRISC and SoCs.
I also don't know much about this, but there are efforts in that direction (both in OpenRISC and RISC-V camps), for example:
http://opencores.org/or1k/ORPSoC
lowRISC themselves want to go ahead with a SoC based on RISC-V, with additional CPU features.
Cheers. -- Manuel A. Fernandez Montecelo manuel.montezelo@gmail.com
On Sat, May 2, 2015 at 5:24 PM, Manuel A. Fernandez Montecelo manuel.montezelo@gmail.com wrote:
2015-05-02 17:00 Paul Boddie:
On Saturday 2. May 2015 17.07.57 Manuel A. Fernandez Montecelo wrote:
Speaking of openness/FSF-endorsability, and taking into account that the current focus is to go ahead with what is already planned, like the A20, with which I fully agree (so please don't take this as a demand, just as showing interest) -- would it be feasible in the near future to have OpenRISC or RISC-V (or the RISC-V-based lowRISC, when ready)?
I imagine that it depends on things like availability of hardware versions of CPUs for these architectures. I recall that there was a board for OpenRISC that used an FPGA, and there was also a fundraising campaign for an ASIC version of, I think, the OR1200 which didn't succeed. So it may be the case that an FPGA solution is the only remotely near-future option, and that brings a lot of other issues.
Basically, what I was wondering (because I don't have any idea about hardware manufacturing) is whether, instead of asking companies to manufacture EOMA-68 A20 CPU cards, they could be asked to manufacture the same but with OpenRISC or RISC-V cores instead.
yeeess... but to do so requires those steps (1) through (6) i told you about. you can't just drop a processor onto a board and hope for the best, you actually have to custom-design the *entire* PCB - 300 components usually, thousands of individual wires (each one with rules).... it's not as straightforward as "yeah just put a processor down, it'll work".
There's also the distinction between plain CPUs and SoCs. I don't really follow what goes on around these things, but I recall that the people who were developing the Milkymist One hardware were aiming to deliver an SoC solution based on the LatticeMico32 core:
plain CPUs are just that: a Central Processing Unit - nothing more.
it means that you then have to have a monstrously-fast (meaning "power hungry") I/O bus, which was fine back in 1986 when it was an 8-bit 8MHz bus called "XT", but now, peripherals are so f****g fast you need about 2 watts just to drive the damn I/O signals up and down!
i don't know if you're aware, but a *single* I/O pad on a SoC is *bigger* than the RISC core itself. it's completely mad.
so to avoid that, SoCs integrate all the I/O onto the chip, so that the "communication cost" can at least be cut out of the system's equation.
the problem then becomes that SoCs end up being extremely specialist.
https://en.wikipedia.org/wiki/M-Labs
I don't know what the situation is with OpenRISC and SoCs.
I also don't know much about this, but there are efforts in that direction (both in OpenRISC and RISC-V camps), for example:
http://opencores.org/or1k/ORPSoC
lowRISC themselves want to go ahead with a SoC based on RISC-V, with additional CPU features.
*sigh*. academics. absolutely no clue about the real world. the cost of getting the chip made is by far and above the largest part of the exercise of getting a chip to market. so they're going to make something which has no graphics, no video acceleration - nothing. why would *anyone* want to buy such a chip??? madness...
anyway, good luck to them - they have a lot to learn.
l.
2015-05-02 18:40 Luke Kenneth Casson Leighton:
On Sat, May 2, 2015 at 5:24 PM, Manuel A. Fernandez Montecelo manuel.montezelo@gmail.com wrote:
2015-05-02 17:00 Paul Boddie:
On Saturday 2. May 2015 17.07.57 Manuel A. Fernandez Montecelo wrote:
Speaking of openness/FSF-endorsability, and taking into account that the current focus is to go ahead with what is already planned, like the A20, with which I fully agree (so please don't take this as a demand, just as showing interest) -- would it be feasible in the near future to have OpenRISC or RISC-V (or the RISC-V-based lowRISC, when ready)?
I imagine that it depends on things like availability of hardware versions of CPUs for these architectures. I recall that there was a board for OpenRISC that used an FPGA, and there was also a fundraising campaign for an ASIC version of, I think, the OR1200 which didn't succeed. So it may be the case that an FPGA solution is the only remotely near-future option, and that brings a lot of other issues.
Basically, what I was wondering (because I don't have any idea about hardware manufacturing) is whether, instead of asking companies to manufacture EOMA-68 A20 CPU cards, they could be asked to manufacture the same but with OpenRISC or RISC-V cores instead.
yeeess... but to do so requires those steps (1) through (6) i told you about. you can't just drop a processor onto a board and hope for the best, you actually have to custom-design the *entire* PCB - 300 components usually, thousands of individual wires (each one with rules).... it's not as straightforward as "yeah just put a processor down, it'll work".
Erm, I wonder if you are confusing me with another person, because I don't remember any conversation with you about PCBs or any steps, at least recently???
Anyway, I already suspected that it was not a matter of dropping a CPU and everything else falling into place into what it was already designed.
What I don't know --and that's why I was asking-- is whether the reply to "Would it be somehow possible to have this in the near future?" is closer to one of these:
a) "Impossible!"
b) "Perhaps could do, but not interested for the time being because I don't know if they will sell"
c) "Yeah, I had already planned to look into this in early summer, and it will take 6-15 months after that --if funding comes-- to get the samples".
lowRISC themselves want to go ahead with a SoC based on RISC-V, with additional CPU features.
*sigh*. academics. absolutely no clue about the real world. the cost of getting the chip made is by far and above the largest part of the exercise of getting a chip to market. so they're going to make something which has no graphics, no video acceleration - nothing. why would *anyone* want to buy such a chip??? madness...
anyway, good luck to them - they have a lot to learn.
Apart from being academics, the founders of the project are co-founders of RaspberryPi, and they have as advisors "bunnie" of Novena laptop fame --among others-- and Google's Project Ara, so I think that it's not a typical academic project.
Cheers. -- Manuel A. Fernandez Montecelo manuel.montezelo@gmail.com
On Sat, May 2, 2015 at 7:26 PM, Manuel A. Fernandez Montecelo manuel.montezelo@gmail.com wrote:
2015-05-02 18:40 Luke Kenneth Casson Leighton:
On Sat, May 2, 2015 at 5:24 PM, Manuel A. Fernandez Montecelo manuel.montezelo@gmail.com wrote:
2015-05-02 17:00 Paul Boddie:
On Saturday 2. May 2015 17.07.57 Manuel A. Fernandez Montecelo wrote:
Speaking of openness/FSF-endorsability, and taking into account that the current focus is to go ahead with what is already planned, like the A20, with which I fully agree (so please don't take this as a demand, just as showing interest) -- would it be feasible in the near future to have OpenRISC or RISC-V (or the RISC-V-based lowRISC, when ready)?
I imagine that it depends on things like availability of hardware versions of CPUs for these architectures. I recall that there was a board for OpenRISC that used an FPGA, and there was also a fundraising campaign for an ASIC version of, I think, the OR1200 which didn't succeed. So it may be the case that an FPGA solution is the only remotely near-future option, and that brings a lot of other issues.
Basically, what I was wondering (because I don't have any idea about hardware manufacturing) is whether, instead of asking companies to manufacture EOMA-68 A20 CPU cards, they could be asked to manufacture the same but with OpenRISC or RISC-V cores instead.
yeeess... but to do so requires those steps (1) through (6) i told you about. you can't just drop a processor onto a board and hope for the best, you actually have to custom-design the *entire* PCB - 300 components usually, thousands of individual wires (each one with rules).... it's not as straightforward as "yeah just put a processor down, it'll work".
Erm, I wonder if you are confusing me with another person, because I don't remember any conversation with you about PCBs or any steps, at least recently???
manufacture. steps prior to that: design the PCB. source the components. guarantee supply. steps (1) through (6) which i outlined earlier in this thread, not six hours previously, today.
What I don't know --and that's why I was asking-- is whether the reply to "Would it be somehow possible to have this in the near future?" is closer to one of these:
a) "Impossible!"
b) "Perhaps could do, but not interested for the time being because I don't know if they will sell"
c) "Yeah, I had already planned to look into this in early summer, and it will take 6-15 months after that --if funding comes-- to get the samples".
none of those. the answer remains as i said: steps (1) through (6) have to be satisfied, in addition to there being sufficient end-user interest to justify the investment of time and money.
Apart from being academics, the founders of the project are co-founders of RaspberryPi, and they have as advisors "bunnie" of Novena laptop fame --among others-- and Google's Project Ara, so I think that it's not a typical academic project.
none of those people have _actually_ designed a processor, nor have they the commercial experience in designing a processor to be targeted at a specific market, nor have they *actually* been through the process of sourcing and licensing (or designing) the hard macros and associated test vectors, nor have they been through the costings and project management aspects associated with bringing a processor to market.
in other words, each and every one of the people you mentioned has absolutely zero experience in processor design and manufacturing.
l.
On Saturday 2. May 2015 21.01.10 Luke Kenneth Casson Leighton wrote:
On Sat, May 2, 2015 at 7:26 PM, Manuel A. Fernandez Montecelo
manuel.montezelo@gmail.com wrote:
2015-05-02 18:40 Luke Kenneth Casson Leighton:
yeeess... but to do so requires those steps (1) through (6) i told you about. you can't just drop a processor onto a board and hope for the best, you actually have to custom-design the *entire* PCB - 300 components usually, thousands of individual wires (each one with rules).... it's not as straightforward as "yeah just put a processor down, it'll work".
Erm, I wonder if you are confusing me with another person, because I don't remember any conversation with you about PCBs or any steps, at least recently???
I think Manuel interpreted this as a reply to him directly: the singular "you" rather than the plural "you".
[...]
none of those. the answer remains as i said: steps (1) through (6) have to be satisfied, in addition to there being sufficient end-user interest to justify the investment of time and money.
Well, if we were just talking about FPGAs (wasn't that proposed at some point?), then we could probably run through steps (1) through (6) relatively quickly. ;-)
Apart from being academics, the founders of the project are co-founders of RaspberryPi, and they have as advisors "bunnie" of Novena laptop fame --among others-- and Google's Project Ara, so I think that it's not a typical academic project.
none of those people have _actually_ designed a processor, nor have they the commercial experience in designing a processor to be targeted at a specific market, nor have they *actually* been through the process of sourcing and licensing (or designing) the hard macros and associated test vectors, nor have they been through the costings and project management aspects associated with bringing a processor to market.
in other words, each and every one of the people you mentioned has absolutely zero experience in processor design and manufacturing.
I think the project also has more experienced people on board, as opposed to mere figureheads and supporters with considerable experience in somewhat different fields. For example, Julius Baxter (one of the named figures) does have a "soft-CPU" design to his name already and considerable experience with OpenRISC. I mentioned FPGA tools previously, and the Yosys suite is participating in some way in Google Summer of Code under the lowRISC umbrella, as are various other people associated with OpenRISC.
Moreover, the RISC-V architecture on which lowRISC is based has David Patterson on board, who was the originator of Berkeley RISC which was developed further into SPARC, so we're not talking about a group of pundits waiting for other people to do the work. Indeed, there's a RISC-V core that has supposedly been "proven" on/for various manufacturing processes, so those people aren't messing around.
We aren't going to see anything ready-to-use from lowRISC this year, according to their own schedule, but it could be interesting to watch.
Paul
P.S. I imagine the reason why Imagination Technologies launched some academic FPGA initiative or other recently is because freely-licensed cores based on unencumbered architectures could easily steal the academic/educational show.
On Sat, May 2, 2015 at 8:32 PM, Paul Boddie paul@boddie.org.uk wrote:
Moreover, the RISC-V architecture on which lowRISC is based has David Patterson on board, who was the originator of Berkeley RISC which was developed further into SPARC, so we're not talking about a group of pundits waiting for other people to do the work. Indeed, there's a RISC-V core that has supposedly been "proven" on/for various manufacturing processes, so those people aren't messing around.
(1) the goal is clearly stated as "to produce a working core that may be used by anybody".
(2) in order to restrict the amount of work that is to be done, they are *not* going to add any kind of "accelerated" instructions. no SIMD, no special decoder instructions, no video acceleration instructions, no 3D acceleration instructions - nothing.
in other words they're going to spend $USD 5m to produce a chip that contains *NO* VPU, *NO* GPU, no hardware acceleration of any kind.
now, when they say "yeah sure anybody may use the resultant core we design", that's completely and utterly useless because it's the *spending $USD 5m* that's the primary barrier!!
with $USD 10m, i can - right now - go and contact ICubeCorp, and re-activate the plans that were set up 2 years ago to produce a mass-volume SoC that would be *far more commercially viable* than what the RISC-V team are attempting to do. at least ICubeCorp's processor design has been made by someone who worked for ATI, NVIDIA _and_ AMD in their 3D / VPU division.
you want a SoC that doesn't have any kind of accelerated video or graphics? go get one of the TI beaglebone ARM Cortex A8s for goodness sake. or go get the latest dual-core ARM Cortex A5 from Atmel; even the Zynq 7030 would be better, as at least you could use the FPGA to do... *something*.
honestly, then, unless someone has beaten some sense into them, the RISC-V project is pretty much guaranteed to be yet another "Open Flop", where money is poured into yet another expensive lesson.
kinda annoying, but there you go.
l.
On Saturday 2. May 2015 22.05.00 Luke Kenneth Casson Leighton wrote:
On Sat, May 2, 2015 at 8:32 PM, Paul Boddie paul@boddie.org.uk wrote:
Moreover, the RISC-V architecture on which lowRISC is based has David Patterson on board, who was the originator of Berkeley RISC which was developed further into SPARC, so we're not talking about a group of pundits waiting for other people to do the work. Indeed, there's a RISC-V core that has supposedly been "proven" on/for various manufacturing processes, so those people aren't messing around.
(1) the goal is clearly stated as "to produce a working core that may be used by anybody".
Indeed. I believe that even the OR1200 core - "even" because people don't always have nice things to say about the OpenRISC design - was used by Samsung in various products, so there would certainly be some interest (and thus money) from various companies.
(2) in order to restrict the amount of work that is to be done, they are *not* going to add any kind of "accelerated" instructions. no SIMD, no special decoder instructions, no video acceleration instructions, no 3D acceleration instructions - nothing.
in other words they're going to spend $USD 5m to produce a chip that contains *NO* VPU, *NO* GPU, no hardware acceleration of any kind.
I haven't looked at anybody's plans for RISC-V or lowRISC in any detail. I only remarked about the people involved because they do have a pedigree here: it's not just people who want to do something; some of these people have been there, done that, and appear to want to do it again.
Whether they'll produce something that would make sense for netbooks, notebooks, tablets or other such things is another matter. I can envisage the likes of Google wanting to put this kind of thing in servers, and that probably isn't going to be particularly relevant to us here.
[...]
honestly, then, unless someone has beaten some sense into them, the RISC-V project is pretty much guaranteed to be yet another "Open Flop", where money is poured into yet another expensive lesson.
I can see it being useful for certain companies for sure: they get to collaborate on a core architecture which doesn't have a gatekeeper, and they also get to add their favourite features. Some of those companies will happily add GPU, VPU or whatever because that's what they've been doing all along with whatever other architectures they've been using. (An example of an unencumbered architecture getting enhanced is, of course, MIPS by the likes of Ingenic, at least until very recently when they acquired a MIPS licence.)
Not that we will necessarily be able to buy these fancy RISC-V variants, however. Then again, maybe we'll be proven wrong and the participants will get together and develop various fancy features that even appear in readily available products that system manufacturers can buy. But again, I don't expect anything of significance for the foreseeable future, anyway.
Paul
2015-05-02 20:32 Paul Boddie:
On Saturday 2. May 2015 21.01.10 Luke Kenneth Casson Leighton wrote:
On Sat, May 2, 2015 at 7:26 PM, Manuel A. Fernandez Montecelo
manuel.montezelo@gmail.com wrote:
2015-05-02 18:40 Luke Kenneth Casson Leighton:
yeeess... but to do so requires those steps (1) through (6) i told you about. you can't just drop a processor onto a board and hope for the best, you actually have to custom-design the *entire* PCB - 300 components usually, thousands of individual wires (each one with rules).... it's not as straightforward as "yeah just put a processor down, it'll work".
Erm, I wonder if you are confusing me with another person, because I don't remember any conversation with you about PCBs or any steps, at least recently???
I think Manuel interpreted this as a reply to him directly: the singular "you" rather than the plural "you".
Yes, I did interpret it that way :) , sorry that I missed the original message that Luke was referring to.
Apart from being academics, the founders of the project are co-founders of RaspberryPi, and they have as advisors "bunnie" of Novena laptop fame --among others-- and Google's Project Ara, so I think that it's not a typical academic project.
none of those people have _actually_ designed a processor, nor do they have the commercial experience in designing a processor targeted at a specific market, nor have they *actually* been through the process of sourcing and licensing (or designing) the hard macros and associated test vectors, nor have they been through the costings and project management aspects associated with bringing a processor to market.
in other words, each and every one of the people you mentioned has absolutely zero experience in processor design and manufacturing.
I think the project also has more experienced people on board, as opposed to mere figureheads and supporters with considerable experience in somewhat different fields. For example, Julius Baxter (one of the named figures) does have a "soft-CPU" design to his name already and considerable experience with OpenRISC. I mentioned FPGA tools previously, and the Yosys suite is participating in some way in Google Summer of Code under the lowRISC umbrella, as are various other people associated with OpenRISC.
Moreover, the RISC-V architecture on which lowRISC is based has David Patterson on board, who was the originator of Berkeley RISC which was developed further into SPARC, so we're not talking about a group of pundits waiting for other people to do the work. Indeed, there's a RISC-V core that has supposedly been "proven" on/for various manufacturing processes, so those people aren't messing around.
We aren't going to see anything ready-to-use from lowRISC this year, according to their own schedule, but it could be interesting to watch.
+1
Also I think that they will have funding available more easily to get to a final product, if everything goes well. I don't know if any intermediate result that they create (e.g. lowrisc SoC spec) will be usable by EOMA.
In any case, I am interested in EOMA-68 with A20 or mips-based processors, but I think that OpenRISC or RISC-V will be definitely cool and an ultimate goal of freely licensed hardware. Perhaps not for end-users only concerned with using the hardware, though; but in that case I am also not sure about ICubeCorp's processors unless/until there are distros/OSs which support them.
P.S. I imagine the reason why Imagination Technologies launched some academic FPGA initiative or other recently is because freely-licensed cores based on unencumbered architectures could easily steal the academic/educational show.
My thoughts exactly :-)
-- Manuel A. Fernandez Montecelo manuel.montezelo@gmail.com
Luke, I just want to poke in here and thank you for mentioning something -- I myself was thinking of poking you on the subject of Intel's Bay Trail stuff -- now I know :) things are never as simple as one wishes they were...
Oh well, maybe someday we'll get x86 stuff into EOMA68... that *would* be huge, IMO -- basically every standard PC (and Mac, now) runs x86... but I think I understand why it's not in the cards, yet.
(I hate to ask, because of how dead-dog-slow their stuff is -- but what about VIA? Their Esther-core Eden ULV CPU is 3.5w @ 1GHz... of course then you stuff in the chipset and the RAM and the everything else, and you're probably way over budget, but I thought I'd mention it.)
On Sat, May 2, 2015 at 11:07 AM, Manuel A. Fernandez Montecelo <manuel.montezelo@gmail.com> wrote:
2015-05-02 12:33 Luke Kenneth Casson Leighton:
On Sat, May 2, 2015 at 10:44 AM, Paul Boddie paul@boddie.org.uk wrote:
Another consideration is openness. Are either of these technologies sufficiently open? Nvidia have traditionally had a bad reputation for this, perhaps only courting openness when they've struggled to attract customers, as I remember being the case with their SoCs: I think the summary was that they promised a lot and delivered comparatively little, and the customers all switched their future designs to other SoCs in disgust.
[...]
that and the fact that the IC3128 and the JZ4775 are FSF-Endorseable means that there are people willing to buy them irrespective of the slightly lower performance. the JZ4775 CPU Card will come with 2gb of RAM, so the fact that it's only a 1ghz single-core MIPS will be less of an issue.
Speaking of openness/FSF-endorsability, and taking into account that the current focus is to go ahead with what is already planned like the A20, with which I fully agree (so please don't take this as a demand, just as showing interest) -- would it be feasible in the near future to have OpenRISC or RISC-V (or RISC-V-based lowrisc, when ready)?
Almost none of the disadvantages cited for Intel or NVIDIA SoCs apply (NDAs, binary blobs, power issues). Everything is fully open in the case of those processors, and the toolchains are based on the usual GNU/Linux ones (GNU GCC, glibc, etc) and mostly ready (they could use some help with upstreaming, but that's another issue). Unless there is a problem with finding factories able to build them, I don't know if there is any disadvantage compared to ICubeCorp IC3128 and Ingenic JZ4775?
In the case of OpenRISC, there is even a Debian port half-ready [1]. I guess that Ingenic's will already work with the Debian mips/mipsel port, but I think that for ICubeCorp's all of the software distribution would have to be created from scratch.
[1] https://people.debian.org/~mafm/posts/2015/20150421_about-the-debian-gnulinu...
Cheers.
Manuel A. Fernandez Montecelo manuel.montezelo@gmail.com
arm-netbook mailing list arm-netbook@lists.phcomp.co.uk http://lists.phcomp.co.uk/mailman/listinfo/arm-netbook Send large attachments to arm-netbook@files.phcomp.co.uk
On Sat, May 2, 2015 at 5:01 PM, Christopher Havel laserhawk64@gmail.com wrote:
Luke, I just want to poke in here and thank you for mentioning something -- I myself was thinking of poking you on the subject of Intel's Bay Trail stuff -- now I know :) things are never as simple as one wishes they were...
Oh well, maybe someday we'll get x86 stuff into EOMA68... that *would* be huge, IMO -- basically every standard PC (and Mac, now) runs x86... but I think I understand why it's not in the cards, yet.
it'll get there.
(I hate to ask, because of how dead-dog-slow their stuff is -- but what about VIA? Their Esther-core Eden ULV CPU is 3.5w @ 1GHz... of course then you stuff in the chipset and the RAM and the everything else, and you're probably way over budget, but I thought I'd mention it.)
correct. it's not just about the processor: you have to take into account the support ICs, in this case the southbridge IC (another 2 watts at least), then the 64-bit-wide DDR3 RAM (maybe even 128-bit-wide, i'd have to check)... just the CPU and the southbridge chip alone take you to 5.5 watts... 0.5 watts beyond the external hard cut-off limit of 5.0 watts.
it really really is incredibly challenging to pick qualifying SoCs.
l.
On Sat, May 2, 2015 at 1:05 PM, Luke Kenneth Casson Leighton lkcl@lkcl.net wrote:
it really really is incredibly challenging to pick qualifying SoCs.
I'm starting to figure that out ;) Probably just as well about VIA -- their stuff is descended from Cyrix (remember them?) and it shows -- performance is questionable at best. Better than a Transmeta Crusoe, if only because, due to the way the Crusoe works (it's not natively an x86 CPU but soft-emulates one) it's basically the lowest rung on the ladder, bar none.
On Sat, May 2, 2015 at 6:10 PM, Christopher Havel laserhawk64@gmail.com wrote:
On Sat, May 2, 2015 at 1:05 PM, Luke Kenneth Casson Leighton lkcl@lkcl.net wrote:
it really really is incredibly challenging to pick qualifying SoCs.
I'm starting to figure that out ;)
:)
due to the way the Crusoe works (it's not natively an x86 CPU but soft-emulates one) it's basically the lowest rung on the ladder, bar none.
i really liked the transmeta idea. much lower power (higher performance/watt ratio). pity they couldn't continue.
don't the latest loongsons have hard-emulation of the top 200 most common x86 instructions, meaning that they can emulate x86 code on a MIPS-based processor at 70% of the speed of an equivalent x86 processor?
l.
On Sat, May 2, 2015 at 1:44 PM, Luke Kenneth Casson Leighton lkcl@lkcl.net wrote:
due to the way the Crusoe works (it's not natively an x86 CPU but soft-emulates one) it's basically the lowest rung on the ladder, bar
none.
i really liked the transmeta idea. much lower power (higher performance/watt ratio). pity they couldn't continue.
*shudders violently*
The Crusoe makes a Geode GX500 look like the i5-540M in my Dell laptop. (I've used a GX500 based system. It was a truly awful experience that I do not intend to repeat.) The Crusoe is the Corvair of the CPU world, minus the book by Ralph Nader -- if the Corvair in question is running on three cylinders instead of six! Poorly thought out, to a fault, slower than a dead sloth in quicksand -- and it shows. Transmeta's designs are actually *worse* than Cyrix's trash -- in my mind, at least. (Cyrix would've been at least decent if their in-house ALU design hadn't been nearly so pathetic.)
The problem, of course, is that it soft-emulates the entire processor architecture it's supposed to be a part of -- so you get a minimum 50% performance penalty just by way of operation -- and that estimation only works if the instruction translation is perfect and takes exactly as long as each x86 instruction it has to translate. IRL it's going to be tremendously worse -- although I don't have exact figures. (Probably just as well.)
Don't get me wrong, innovation is cool, and the idea of the Crusoe is indeed innovative -- but innovation isn't everything, to put it mildly.
That one's an Edsel.
don't the latest loongson's have hard-emulation of the top 200 most common x86 instructions, meaning that they can emulate x86 code on a MIPS-based processor at 70% of the speed of an equivalent x86 processor?
Afraid I can't speak to that. I don't follow Loongson stuff -- or, really, much of anything in the CPU/SoC realm, these days. Too much going on to follow, really -- particularly since I either don't care about it or can't afford it. That said, a ~30% performance penalty is going to be noticeable, I would think.
On Sat, May 2, 2015 at 4:07 PM, Manuel A. Fernandez Montecelo manuel.montezelo@gmail.com wrote:
2015-05-02 12:33 Luke Kenneth Casson Leighton:
On Sat, May 2, 2015 at 10:44 AM, Paul Boddie paul@boddie.org.uk wrote:
Another consideration is openness. Are either of these technologies sufficiently open? Nvidia have traditionally had a bad reputation for this, perhaps only courting openness when they've struggled to attract customers, as I remember being the case with their SoCs: I think the summary was that they promised a lot and delivered comparatively little, and the customers all switched their future designs to other SoCs in disgust.
[...]
that and the fact that the IC3128 and the JZ4775 are FSF-Endorseable means that there are people willing to buy them irrespective of the slightly lower performance. the JZ4775 CPU Card will come with 2gb of RAM, so the fact that it's only a 1ghz single-core MIPS will be less of an issue.
Speaking of openness/FSF-endorsability, and taking into account that the current focus is to go ahead with what is already planned like the A20, with which I fully agree (so please don't take this as a demand, just as showing interest) -- would it be feasible in the near future to have OpenRISC or RISC-V (or RISC-V-based lowrisc, when ready)?
if they're any good, such that people want to buy enough of them, thus justifying the outlay of expenditure on the PCB development and assembly, then yes, of course.
Almost none of the disadvantages cited for Intel or NVIDIA SoCs apply (NDAs, binary blobs, power issues). Everything is fully open in the case of those processors, and the toolchains are based on the usual GNU/Linux ones (GNU GCC, glibc, etc) and mostly ready (they could use some help with upstreaming, but that's another issue). Unless there is a problem with finding factories able to build them, I don't know if there is any disadvantage compared to ICubeCorp IC3128 and Ingenic JZ4775?
In the case of OpenRISC, there is even a Debian port half-ready [1].
cool!
the only problem being: they've designed OpenRISC with pipelines that aren't long enough. it's *going* to stall above a certain clock speed, and that speed really isn't going to be very high. certainly not enough to be commercially competitive.
I guess that Ingenic's will already work with the Debian mips/mipsel port,
yes.
but I think that for ICubeCorp's all of the software distribution would have to be created from scratch.
correct.
On Sat, May 2, 2015 at 10:44 AM, Paul Boddie paul@boddie.org.uk wrote:
What is the future of EOMA-68? Any EOMA-68 with a powerful hardware (like Tegra X1 or Intel Bay-Trail)?
Another consideration is openness. Are either of these technologies sufficiently open? Nvidia have traditionally had a bad reputation for this, perhaps only courting openness when they've struggled to attract customers, as I remember being the case with their SoCs: I think the summary was that they promised a lot and delivered comparatively little, and the customers all switched their future designs to other SoCs in disgust.
so, in essence we have:
* NDAs which prevent and prohibit access to critical information, making even assessment of the SoC much harder than the competition
* Non-free binaries and firmware which, from bitter experience, are known to cause such complete and utter aggravation - support issues, upgrade issues, reliability issues - and such harm to a customer's reputation that end-users will reject that customer's products outright (look at what happened to OCZ as an example)
* pricing that's far too high for what it is esp. compared to entry-level successful readily-available mass-volume china-based SoCs
* features that are so far advanced of "good enough computing" that nobody notices the difference *anyway*
* not that many customers anyway because there's not that much buy-in (a chicken-and-egg problem), so the MOQ has to be set at 1,000,000 units... thus *increasing* the chicken-and-egg problem rather than decreasing it...
all in all, manuel, it's very hard to justify the expenditure of time and effort, and there are in fact sound business reasons why the "more powerful" SoCs should *not* be used - it's simply too much of a risk.
at some point, someone in a non-china-based company *is* going to "Get It". they'll produce a long-term (5+ year) SoC, with between 300 and 600 pins, at a price between $3 and $5, that is at least quad core, and, because it will be in 14nm or 10nm, will be absolutely great performance and extremely good value for money.
the problem that any such company faces however is that the yields on anything below 28nm are beginning to get very dicey. 28nm can easily manage a 95% yield, and at this low end of the market you can easily get 4,000 ICs on a 10in wafer. one wafer costs $USD 4,000, yield is 95%, packaging is $1, testing is $1, now you start to see why Allwinner can sell a quad core 64-bit processor for $5: they're making $1 profit per chip!
by contrast: intel puts such vast amounts of cache onto their processors (2mbytes instead of 128k for an ARM SoC) that they can only fit say 500 ICs onto a 10in wafer... one wafer still costs $USD 4,000... you're now looking at a $12 per IC.
*but*, let's say that they use 22nm, now you have 50% more ICs... *but*, the yield is say only 60% because it's a lower geometry.... the cost now just went *up*, not down...
at 14nm the yield is only say 40% but you don't actually get 2x the number of ICs because you have to allow for space between them in order to cut the individual ICs off the wafer.... cost went up even more...
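the arithmetic above can be sketched as a quick back-of-the-envelope calculation. this is purely illustrative, using the rough figures from this post (wafer price, die counts and yields are Luke's examples, not real fab quotes):

```python
def cost_per_good_die(wafer_cost, dies_per_wafer, yield_fraction,
                      packaging=1.0, testing=1.0):
    """Cost per sellable IC: wafer cost spread over the good dies,
    plus per-chip packaging and test charges."""
    good_dies = dies_per_wafer * yield_fraction
    return wafer_cost / good_dies + packaging + testing

# 28nm low-end SoC: ~4,000 dies per wafer, 95% yield
low_end = cost_per_good_die(4000.0, 4000, 0.95)

# large-cache (intel-style) die: only ~500 dies fit per wafer
big_cache = cost_per_good_die(4000.0, 500, 0.95)

# shrink the big die to 22nm: ~50% more dies, but yield drops to ~60%
shrunk = cost_per_good_die(4000.0, 750, 0.60)

print(f"low-end 28nm:  ${low_end:.2f}")
print(f"big-cache:     ${big_cache:.2f}")
print(f"22nm shrink:   ${shrunk:.2f}")
```

the low-end part comes out at around $3 per chip, while the 22nm shrink of the big die costs *more* per good chip than the 28nm version - exactly the "cost went *up*, not down" effect described above.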
... start to make sense?
l.
On Sat, May 2, 2015 at 8:51 AM, gacuest@gmail.com wrote:
I have seen that several EOMA-68 cards are being developed, but all have very little power or are old (like the A20, JZ4775 or IC1T). This means that many people who are looking for powerful hardware are left out of the EOMA-68 market.
that's why there is a 10 watt version planned (10mm high CPU Card).
also, it is why EOMA-200 exists. "more powerful" hardware means "more power".
What is the future of EOMA-68?
the future of EOMA-68 is a decade-long standard. anything and everything *will* happen, in good time, as long as it fits within the specification (including the power budget).
Any EOMA-68 with a powerful hardware (like Tegra X1 or Intel Bay-Trail)?
hi gacuest,
powerful hardware means the following:
* 10 to 200 Watts (EOMA68's entire budget, because it is passively cooled, is an absolute maximum of 5 watts - and that has to include the DDR RAM and anything else)
* lower geometry (22nm, 14nm, 10nm) so that the speed is faster for less power.
so there are a couple of things that can be done. a couple of "cheats" so to speak. the first is that you can get this "more powerful" hardware and run it for a short duration flat-out, monitoring the amperage and the temperature, and if either limit is exceeded, STOP running flat-out.
the second is to simply lower the clock speed.
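combining both "cheats" gives you what is essentially a governor loop: sprint while under budget, back the clock off when power or temperature goes over the limit. a minimal sketch of one decision step - the 5-watt limit is the EOMA68 figure from this thread, but the frequencies, step size and temperature limit are made-up illustrative numbers, not a real driver:

```python
# Illustrative governor parameters (only the 5 W hard cut-off comes
# from the EOMA68 spec discussed in this thread; the rest are made up).
MAX_FREQ_MHZ = 1200
MIN_FREQ_MHZ = 300
STEP_MHZ = 100
POWER_LIMIT_W = 5.0
TEMP_LIMIT_C = 85.0

def governor_step(power_w, temp_c, freq_mhz):
    """One iteration: given current power/temperature readings and the
    current clock, decide the next clock frequency."""
    if power_w > POWER_LIMIT_W or temp_c > TEMP_LIMIT_C:
        # over budget: stop running flat-out, drop the clock a step
        return max(MIN_FREQ_MHZ, freq_mhz - STEP_MHZ)
    # comfortably under budget: allowed to sprint again
    return min(MAX_FREQ_MHZ, freq_mhz + STEP_MHZ)

# a burst that pushes past 5 W gets throttled back, then recovers:
freq = 1200
freq = governor_step(power_w=5.6, temp_c=70.0, freq_mhz=freq)  # throttled to 1100
freq = governor_step(power_w=4.2, temp_c=60.0, freq_mhz=freq)  # back up to 1200
```

in a real system the readings would come from a PMIC current sensor and the SoC thermal sensor, and the clock change would go through something like Linux's cpufreq interface.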
now, before we even get there, we have to gain access to the NVIDIA X1 and to the Intel Bay Trail SoCs.
if you would like to find out for yourself what that's like, can i suggest that you do the experiment for yourself. i set you the exercise of contacting NVIDIA and asking for the following:
(1) a full datasheet and all other technical documentation, including power management
(2) a Reference Design (including PMIC and DDR RAM layout already done)
(3) samples (qty 10 to 25)
(4) pricing in volumes of 250, 2,500 and 30,000, for starters
(5) lead times on supply for the above quantities
(6) lifetime of the product (i.e. how long it's going to be manufactured for)
once you have this information - *if* you can get this information - and it is reasonable i.e. the Tegra X1 is not going to be discontinued within the next 2-3 months, i will be absolutely delighted, and you can have an EOMA68-X1 CPU Card.
second: dealing with Intel is.... interesting, shall we say. they have the market "subdivided" into segments: consumer, embedded, media and now *shudder* Internet-of-Things.
the different departments are *ACTIVELY* prevented and prohibited from competing with each other!
now, the "consumer" processors are typically only available for up to 9-12 months, and the MOQs are typically 1 million units and above. they're also cheaper: the Z3735F is for example $USD 17 (which btw is *TRIPLE* the price of the Allwinner A20 or the new 64-bit A64!!!)
the "embedded" processors are typically 1100 pins, the PCBs therefore are extraordinarily complex (and costly) - $12 for a 45x78mm PCB would not be an unreasonable estimate (compared to $1.50 for a 6-layer 5mil pitch).
and not only that but the cost of the actual processor itself is *double* (minimum) that of the equivalent "consumer" processor. how could it not be: the yields on a 1,100 pin processor are much less than on a 600-pin one, and the demand is less so they can't optimise the fab to get better yields. or, if they do, they make lots, but then have to store them for 5 years in order to guarantee the "embedded" status.
now can you see why these "more powerful" processors are not yet available in EOMA68 form-factor? it's purely the logistics of dealing with these companies.
by complete contrast, the A20 and all the Allwinner processors, by way of being extremely popular, and by way of Allwinner being extremely helpful in giving me Reference Designs with the DDR3 RAM layout already done, and by way of it being possible to get hold of samples from random china suppliers....
can you see and appreciate the difference?
now, Ingenic, Allwinner and ICubeCorp have been *cooperative*. Texas Instruments and Freescale could also be said to be cooperative - after all they provide things like the iMXQSB Reference Design and BeagleBoard Reference Designs, and 1200 page reference manuals. BUT, unlike Texas Instruments and Freescale, the cost of the processors is:
* ICubeCorp IC3128: $2 (yes, USD 2.00 exactly)
* Ingenic JZ4775: $3 (yes, USD 3.00)
* Allwinner A20: $5 (yes, USD 5.00)
contrast this with:
* Freescale iMX6 Quad: ***THIRTY FIVE*** (35) US DOLLARS
* TI single-core Beagle Black processor: $5... but its performance is *LESS* than that of the Ingenic JZ4775.
does that help explain, then, why the processors that are available in EOMA68 form-factor are available, and why those processors that you have listed are not yet available?
i would _love_ to be able to make the processors that you have listed available, and if _you_ would like them to be made available, you can help by finding out the information (1) through (6) listed above, finding the funding required for the prototypes (which will be somewhere around an estimated $USD 20,000 each) and then finding a customer or customers that are willing to place the MOQ quantities all at the same time.
am i making it clear that this stuff is really quite challenging?!!! :)
l.
On Sat, May 2, 2015 at 6:33 PM, gacuest@gmail.com wrote:
- ICubeCorp IC3128: $2 (yes, USD 2.00 exactly)
- Ingenic JZ4775: $3. (yes, USD 3.00)
- Allwinner A20: $5 (yes, USD 5.00)
Is there an estimated price for the EOMA-68 cards with those SoCs?
for the A20 i'm looking at around $USD 38 for an order of 200 units from the contract manufacturer in the USA. the JZ4775 one if done the same way same quantity would be about $32 (it also will have 2GB of RAM which is one of the higher costs). the IC3128 one... mmm... mayyyybe around $25?
that's *SUPPLIER* pricing, ok, *NOT* repeat *NOT* i repeat **NOT** sale price. ok?
l.
Thanks.
On 05/02/15 08:51, gacuest@gmail.com wrote:
I have seen that several EOMA-68 cards are being developed, but all have very little power or are old (like the A20, JZ4775 or IC1T). This means that many people who are looking for powerful hardware are left out of the EOMA-68 market.
What is the future of EOMA-68? Any EOMA-68 with a powerful hardware (like Tegra X1 or Intel Bay-Trail)?
this is NOT a troll
i think it was sometime in 2011 that the eoma-68 concept was defined.
i do realise that luke and others have put a lot of effort into this but it has been a very long time and i still can't buy anything
is it not time to reevaluate the project, its goals and objectives?
On 05/05/2015 10:54 AM, Simon Kenyon wrote:
this is NOT a troll
i think it was sometime in 2011 that the eoma-68 concept was defined.
i do realise that luke and others have put a lot of effort into this but it has been a very long time and i still can't buy anything
That's how long it takes for (mostly) one person to do something this complex.
is it not time to reevaluate the project, its goals and objectives?
That's happened several times.
Money would speed things up a lot - suggestions or contributions welcome.
On Tue, May 5, 2015 at 10:11 AM, Boris Barbour boris.barbour@ens.fr wrote:
On 05/05/2015 10:54 AM, Simon Kenyon wrote:
this is NOT a troll
i think it was sometime in 2011 that the eoma-68 concept was defined.
i do realise that luke and others have put a lot of effort into this but it has been a very long time and i still can't buy anything
That's how long it takes for (mostly) one person to do something this complex.
is it not time to reevaluate the project, its goals and objectives?
That's happened several times.
Money would speed things up a lot
yes it would. and word-of-mouth marketing, as well as inviting people to contribute here, even $1 a week would help:
https://gratipay.com/luke.leighton/
one thing, when compared to say the OpenPandora and other projects that have been crowd-funded *in advance*, is that the approach i've taken has much more integrity and far less risk.
let's look at, say, the OpenPandora. they were very *very* close to becoming unprofitable, esp. after paying $USD 50,000 to an R.F. Engineer who ended up giving them unsuccessful advice. in the end they had to actually assemble the units with volunteer labour, in a shed (literally), because the cost of assembly was beyond the budget - they'd spent it all.
... if there had been any other unsuccessful R&D efforts, the entire project would have been jeopardised.
and we know of several crowd-funded "from scratch" projects that have actually gone that way, and failed.
those projects are presented as "pay in advance, you'll probably get your product... which we haven't even designed or prototyped yet so we don't know very much or if this will work".
...which strikes me as incredibly dishonest.
so this is why kickstarter and indiegogo both changed their rules, and why crowdsupply has been set up *specifically* so that you *must* have a working prototype *before* the campaign can start.
i complained one hell of a lot about these changes of rules, initially (what's the damn point when you want to crowd-fund a product's R&D so you can *get* working prototypes???)
but, in hindsight, i realised that actually it's much more honest to ask "help fund the R&D phase" as *completely separate* from "help fund the up-front manufacturing" which although it's a heck of a lot of details is actually much more straightforward and a lot less risky.
so for those people who *know* that they're funding an R&D phase, they can go "hmm, yep i can spare $1 per week for that, it's nothing to me, but i realise that enough people giving $1 per week would make a huge difference overall. i don't even want anything in return - just knowing that i'm funding these efforts is enough payment for me".
*instead* of "here's $100, now GIVE ME MY CROWD-FUNDED PRODUCT or ELSE" which... yeah, you know the drill - the number of people who are disappointed to find that they've paid for an expensive open and publicised learning experience instead of an actual product.
l.
On Tuesday 5. May 2015 10.54.30 Simon Kenyon wrote:
On 05/02/15 08:51, gacuest@gmail.com wrote:
I have seen that several EOMA-68 cards are being developed, but all have very little power or are old (like the A20, JZ4775 or IC1T). This means that many people who are looking for powerful hardware are left out of the EOMA-68 market.
What is the future of EOMA-68? Any EOMA-68 with a powerful hardware (like Tegra X1 or Intel Bay-Trail)?
this is NOT a troll
Careful! That's how they usually start. ;-)
i think it was sometime in 2011 that the eoma-68 concept was defined.
i do realise that luke and others have put a lot of effort into this but it has been a very long time and i still can't buy anything
is it not time to reevaluate the project, its goals and objectives?
Well, I wrote up a summary of the history in an article that you can read here:
http://blogs.fsfe.org/pboddie/?p=933
I'm sure you know most of it, but I felt that it was necessary for me to remind myself of what had been going on most of this time, and at what point different things had happened. I haven't been interacting with this list - only reading it - for most of the project's history, so it was easy for me to think that progress had been limited.
In fact, quite a bit has happened, but there have obviously been setbacks, mostly to do with the choices made in getting hardware into people's hands. Had different choices been made in, say, 2013 then a broader audience might have had something to use by now, but it seems that a strategy was followed that might have seemed the best and fastest route to market at the time, but which in hindsight proved to be a dead end.
The one thing that I wanted to emphasise in that article is that hardware has been produced, which means that the hard part should be out of the way. But I guess only Luke knows the status of the crowd-funding campaign and the "last mile".
Paul
On Tue, May 5, 2015 at 10:20 AM, Paul Boddie paul@boddie.org.uk wrote:
In fact, quite a bit has happened, but there have obviously been setbacks, mostly to do with the choices made in getting hardware into people's hands. Had different choices been made in, say, 2013 then a broader audience might have had something to use by now, but it seems that a strategy was followed that might have seemed the best and fastest route to market at the time, but which in hindsight proved to be a dead end.
and that right there is valuable. just like edison [finding 1,999 ways *not* to make a lightbulb] - success happens mostly through commitment and persistence in trying different things...
*re-evaluating all the time* exactly as you say, simon.
The one thing that I wanted to emphasise in that article is that hardware has been produced, which means that the hard part should be out of the way. But I guess only Luke knows the status of the crowd-funding campaign and the "last mile".
yep. costings. it's the details. laser-cut steel masks: $1,000. having to do 5 prototypes with the contract manufacturers: $2,500. MOQs for some of the components: $3,000.
all of this has to be taken into account, based on a MOQ (which i'm still likely to set at 250), *all* those NREs have to be taken into account and paid for out of the up-front committed money from people.
bearing in mind that the unit cost of assembly for the EOMA68-A20 board is to be around $60 (in qty 250) and the micro-engineering board around $30 (in qty 250), those NREs above (total $6500) divided by 250 comes to an extra $26, the unit *COST* comes to an amazingly high $USD 116.
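the costing logic here is just amortising one-off NRE charges over the MOQ; as a quick sanity check, using exactly the figures from this post:

```python
def unit_cost(assembly_costs, nre_costs, moq):
    """Per-unit cost: assembly charges plus one-off NRE charges
    spread evenly over the minimum order quantity."""
    return sum(assembly_costs) + sum(nre_costs) / moq

# steel masks $1,000, CM prototype runs $2,500, component MOQs $3,000
nres = [1000, 2500, 3000]
# EOMA68-A20 board $60 + micro-engineering board $30, at qty 250
cost = unit_cost([60, 30], nres, moq=250)
print(cost)  # 90 + 6500/250 = 116.0
```

which confirms the $26-per-unit NRE share and the $116 unit cost quoted above.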
.... nothing like the $35 for a wandboard or whatever it's called, is it?
but that's just how it is.
l.
On Tuesday 5. May 2015 12.00.54 Luke Kenneth Casson Leighton wrote:
bearing in mind that the unit cost of assembly for the EOMA68-A20 board is to be around $60 (in qty 250) and the micro-engineering board around $30 (in qty 250), those NREs above (total $6500) divided by 250 comes to an extra $26, the unit *COST* comes to an amazingly high $USD 116.
.... nothing like the $35 for a wandboard or whatever it's called, is it?
but that's just how it is.
That's why it is so worthwhile pitching this for the freedom-related benefits rather than the cost-related benefits at this point in time. After all, the unit cost of the Neo900 is probably going to end up being 700 EUR, but there wasn't a shortage of people willing to pay that amount for something that prioritises things like Free Software capabilities, openness and privacy.
Sure, there is also the stream of people comparing the Neo900 to the latest locked-down Android device that supposedly costs half as much on a subsidy but which actually costs more if those people had bothered to look at the total outlay, which of course they don't. And those people will happily whine about not getting updates to the software or that Google are mining their data, as they check their GMail and otherwise make full use of Google services. (One could remark similarly about disdain from Apple users and from both of the Windows Phone users.)
If Bunnie can sell "heirloom" Novena laptops for $5000 or so apiece, there has to be some scope for headroom on the price amongst people who appreciate the nature of the effort, even amongst those of more modest means. :-)
Paul
On Tue, May 5, 2015 at 11:16 AM, Paul Boddie paul@boddie.org.uk wrote:
On Tuesday 5. May 2015 12.00.54 Luke Kenneth Casson Leighton wrote:
bearing in mind that the unit cost of assembly for the EOMA68-A20 board is to be around $60 (in qty 250) and the micro-engineering board around $30 (in qty 250), those NREs above (total $6500) divided by 250 comes to an extra $26, the unit *COST* comes to an amazingly high $USD 116.
.... nothing like the $35 for a wandboard or whatever it's called, is it?
but that's just how it is.
That's why it is so worthwhile pitching this for the freedom-related benefits rather than the cost-related benefits at this point in time. After all, the unit cost of the Neo900 is probably going to end up being 700 EUR, but there wasn't a shortage of people willing to pay that amount for something that prioritises things like Free Software capabilities, openness and privacy.
If Bunnie can sell "heirloom" Novena laptops for $5000 or so apiece, there has to be some scope for headroom on the price amongst people who appreciate the nature of the effort, even amongst those of more modest means. :-)
very much appreciated the reminder of this, paul.
On Tue, May 5, 2015 at 9:54 AM, Simon Kenyon simon@koala.ie wrote:
On 05/02/15 08:51, gacuest@gmail.com wrote:
I have seen that several EOMA-68 cards are being developed, but all of them have very little power or are old (like the A20, JZ4775 or IC1T). This means that many people who are looking for powerful hardware are left out of the EOMA-68 market.
What is the future of EOMA-68? Any EOMA-68 with a powerful hardware (like Tegra X1 or Intel Bay-Trail)?
arm-netbook mailing list arm-netbook@lists.phcomp.co.uk http://lists.phcomp.co.uk/mailman/listinfo/arm-netbook Send large attachments to arm-netbook@files.phcomp.co.uk
this is NOT a troll
understood and accepted. you're asking hard questions, and that's appreciated.
i think it was sometime in 2011 that the eoma-68 concept was defined.
i do realise that luke and others have put a lot of effort into this but it has been a very long time and i still can't buy anything
is it not time to reevaluate the project, its goals and objectives?
i'm tempted to say that a decade-long project by definition doesn't need its goals and objectives to be re-evaluated; however, that hasn't been the case.
the first - key - mistake made was to add SATA to EOMA68, not just because the allwinner A10 had it, but also because the idea was, at the time, to make EOMA68 more of a "computer" standard. part-server, part-desktop, part-embedded, part-portable device and so on.
then as the market in low-cost SoCs progressed, and more were evaluated - Texas Instruments AM Sitara beaglebone/black SoCs, Ingenic SoCs, Allwinner's "tablet" specialised SoCs, Rockchip SoCs - it was noted that *absolutely none of them* had SATA, and even fewer had Ethernet.
so given that the cost of some of these SoCs is as low as $USD 2.00 (literally $2.00 - as in $2 and zero cents), if you have to add a USB Hub IC ($1) and a USB-to-SATA IC ($1.50) it *entirely* defeats the object of putting down the low-cost $USD 2 SoC in the first place.
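to put numbers on that (a sketch using only the IC prices quoted above):

```python
# why bolting SATA back on defeats the point of a $2 SoC
SOC = 2.00        # low-cost SoC ($USD, as quoted)
USB_HUB = 1.00    # USB Hub IC ($USD)
USB_SATA = 1.50   # USB-to-SATA bridge IC ($USD)

glue = USB_HUB + USB_SATA   # the extra "glue" silicon SATA support would need
print(glue)                 # 2.5
print(glue > SOC)           # True - the glue alone costs more than the SoC
```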
so... SATA had to go. that was the first specification change.
then, somewhere along the line - about 2 years ago - joe and henrik kindly had a discussion about UART and TTL voltage reference levels. i still to this day don't quite see why a simple zener diode won't do the job (henrik "Gets It") but hey, rather than have the argument, i decided it was safer to change the spec... again... rather than have it fail BEFORE IT EVEN STARTED...
... and replaced one of the 4 0.5A 5V *input* pins with a "TTL ref voltage" *OUTPUT* pin.
then i realised that, actually, many people who'd said "what about SD/MMC" and "what about SPI" and "what about UART" were actually right, so i made *ANOTHER* change to the spec, this time:
* reducing the RGB/TTL from 24 to 18 pins (freeing up 3 pins)
* putting in SPI in place of the RGB/TTL pins removed
* replacing SATA (4 pin) with USB2 (2 pin), leaving 2 further pins free
* one of those was turned into a PWM
* the other was turned into a 2nd EINT (IRQ-capable) GPIO.
so now we have something that, instead of being "general computer", "general server", "general desktop", is more "embedded devices", "portable devices"...
... but how long did it take - and how much money - to make these decisions?
*four years*, simon, and it's cost so far around $USD 25,000 possibly as much as $USD 30,000. which is a hell of a lot of money to spend as a learning experience on something as "simple" as a standard.
so the short answer is, i've done a *hell* of a lot of thinking, and am *constantly* re-evaluating the project, its goals and objectives, and, fortunately, each time i do that, the answer comes up "yep it's on track".
the one thing i do have to do is take a deep breath and commit to this first crowd-funding campaign. i'm... not hugely happy that the cost of USA-based manufacturing is a whopping $25 per product extra ($12 for assembly of the micro-desktop instead of $3, $12 for assembly of the eoma68-a20 pcb instead of $3, and $5 for the PCBs themselves instead of $1.50, though to be fair the contract manufacturer has amortised the $1,000 laser-cut steel mask for solder pasting into the quote).
but, thinking about it over the past couple of days, i figured "well, if that's the cost then that's the cost", and we go ahead anyway.
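for the record, the itemised deltas quoted above add up like this (a sketch using only the per-item figures stated; the message cites roughly $25 per product overall once the amortised stencil is folded in):

```python
# itemised USA-vs-overseas cost deltas from the message ($USD per product)
micro_desktop_assembly = 12 - 3    # $12 instead of $3
eoma68_a20_assembly = 12 - 3       # $12 instead of $3
pcbs = 5 - 1.50                    # $5 instead of $1.50

delta = micro_desktop_assembly + eoma68_a20_assembly + pcbs
print(delta)   # 21.5 itemised; quoted as "a whopping $25" overall
```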
l.