ok so it would seem that the huge amount of work going into RISC-V means that it's on track to becoming a steamroller that will squash proprietary SoCs, so i'm quite happy to make sure that it's not-so-subtly nudged in the right direction.
i've started a page where i am keeping notes: http://rhombus-tech.net/riscv/libre_riscv/ and the general goal is to create a desirable mass-volume low-cost SoC, meaning that it will need to at least do 1080p60 video decode and have 3D graphics capability. oh... and be entirely libre.
the plan is:
* to create an absolute basic SoC, starting from lowRISC (64-bit), ORGFX (3D graphics) and MIAOW (OpenCL engine), in at least 90nm as a low-cost proof-of-concept where mistakes can be iterated through
* provide the end-result to software developers so that they can have actual real silicon to work with
* begin a first crowd-funding phase to create a 28nm (or better) multi-core SMP SoC
for this first phase the interfaces that i've tracked down so far are almost entirely from opencores.org, meaning that there really should be absolutely no need to license any costly hard macros. that *includes* a DDR3 controller (but does not include a DDR3 PHY, which will need to be designed):
* DDR3 controller (not including PHY)
* lowRISC contains "minion cores" so can be soft-programmed to do any GPIO
* boot and debug through ZipCPU's UART (use an existing EC's on-board FLASH)
* OpenCores VGA controller (actually it's an LCD RGB/TTL controller)
* OpenCores ULPI USB 2.0 controller
* OpenCores USB-OTG 1.1 PHY
note that there are NO ANALOG INTERFACES in that. this is *really* important to avoid, because mixed analog and digital is incredibly hard to get right. also note that things like HDMI, SATA, and even ethernet are quite deliberately NOT on the list. Ethernet RMII (which is digital) could be implemented in software using a minion core. the advantage of using the opencores VGA (actually LCD) controller is: i already have the full source for a *complete* linux driver.
I2C, SPI, SD/MMC, UART, EINT and GPIO - all of these can be software-programmed as bit-banging in the minion cores.
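to give an idea of what "bit-banging" on a minion core actually looks like, here's a minimal sketch of a software SPI byte transfer (mode 0) driven purely from GPIO - I2C and UART are done the same way. the register addresses and pin assignments are hypothetical placeholders, not the real lowRISC minion-core GPIO map:

    #include <stdint.h>

    /* hypothetical memory-mapped GPIO registers on a minion core */
    #define GPIO_OUT  (*(volatile uint32_t *)0x40000000)
    #define GPIO_IN   (*(volatile uint32_t *)0x40000004)
    #define PIN_SCLK  (1u << 0)
    #define PIN_MOSI  (1u << 1)
    #define PIN_MISO  (1u << 2)

    /* clock one byte out (and in) over software SPI, mode 0, MSB first */
    static uint8_t spi_xfer_byte(uint8_t out)
    {
        uint8_t in = 0;
        for (int bit = 7; bit >= 0; bit--) {
            if (out & (1u << bit))          /* set MOSI while SCLK is low */
                GPIO_OUT |= PIN_MOSI;
            else
                GPIO_OUT &= ~PIN_MOSI;
            GPIO_OUT |= PIN_SCLK;           /* rising edge: slave samples MOSI */
            in = (uint8_t)((in << 1) | ((GPIO_IN & PIN_MISO) ? 1 : 0));
            GPIO_OUT &= ~PIN_SCLK;          /* falling edge: prepare next bit */
        }
        return in;
    }

the point being: none of these protocols need dedicated silicon if a minion core is fast enough to toggle the pins at the required rate.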
these interfaces, amazingly, are enough to do an SoC that, if put into 40nm, would easily compete with some of TI's offerings, as well as the Allwinner R8 (aka A13).
i've also managed to get alliance and coriolis2 compiled on debian/testing (took a while) so it *might* not be necessary even to pay for the ASIC design tooling (the cost of which is insane). coriolis2 includes a reasonable auto-router. i still have yet to go through the tutorials to see how it works. for design rules: 90nm design rules (stacks etc.) are actually publicly available, which would potentially mean that a clock rate of at least 300mhz would be achievable: interestingly 800mhz DDR3 RAM from 2012 used 90nm geometry. 65 down to 40nm would be much more preferable but may be hard to get.
graphics: i'm going through the list of people who have done GPUs (or parts of one). MIAOW, Nyuzi, ORGFX. the gplgpu isn't gpl. it's been modified to "the text of the GPL license plus an additional clause which is that if you want to use this for commercial purposes then... you can't". which is *NOT* a GPL license, it's a proprietary commercial license!
MIAOW is just an OpenCL engine but a stonking good one that's compatible with AMD's software. nyuzi is an experimental GPU where i hope its developer believes in its potential. ORGFX i am currently evaluating but it looks pretty damn good, and i think it is slightly underestimated. i could really use some help evaluating it properly. my feeling is that a combination of MIAOW to handle shading and ORGFX for the rendering would be a really powerful combination.
so.
it's basically doable. comments and ideas welcome, please do edit the page to keep track of notes http://rhombus-tech.net/riscv/libre_riscv/
--- crowd-funded eco-conscious hardware: https://www.crowdsupply.com/eoma68
Yes. Do it. DO IT.
I'm really, really excited about a possible 100% libre RISC-V based computer. Though I'm a backer of the most recent campaign (and can't wait to get it! :)), I lack the knowledge/skills to actually help with the technical development of this upcoming RISC-V card. Is there anything I/we can do to help get this RISC-V initiative going?
Print off some of the RISC-V technical documentation, write in the link "http://lists.phcomp.co.uk/pipermail/arm-netbook/2017-April/013457.html" and leave the copies in cafes, coffee shops, computer labs, etc. Only leave one copy per place. Someone will get curious I'm sure, and the rest is up to them.
On Wed, May 3, 2017 at 2:05 AM, Pen-Yuan Hsing penyuanhsing@gmail.com wrote:
I'm really really excited about a possible 100% libre RISC-V based computer. Though I'm a backer of the most recent campaign (and can't wait to get it! :)), I lack the knowledge/skills to actually help with the technical development of this upcoming RISC card. Is there anything I/we can do to help get this RISC initiative going?
just spread the word (particularly when the new campaign comes up) - also if you know of any business people willing to invest, or meet any investors, particularly those with an ethical or social focus, do put them in touch.
also: universities. if you happen to know any university professors ask their Electrical Engineering Dept to consider research into RISC-V. it'll sort-of happen anyway (happening already) because it's easy to get hold of the RISC-V design.
l.
On 03/05/17 06:00, Luke Kenneth Casson Leighton wrote:
just spread the word (particularly when the new campaign comes up) - also if you know of any business people willing to invest, meet any investors particularly those with an ethical or social focus, do put them in touch.
also: universities. if you happen to know any university professors ask their Electrical Engineering Dept to consider research into RISC-V. it'll sort-of happen anyway (happening already) because it's easy to get hold of the RISC-V design.
l.
Thanks Luke. This just made me realise that I should bring this up the next time I run into the local maker space. I only occasionally see them, but will be sure to remember this!
Is the webpage designed for an interested maker/hacker or computer science academic to easily understand?
On Wed, May 3, 2017 at 4:24 PM, Pen-Yuan Hsing penyuanhsing@gmail.com wrote:
Thanks Luke. This just made me realise that I should bring this up the next time I run into the local maker space. I only occasionally see them, but will be sure to remember this!
Is the webpage designed for an interested maker/hacker or computer science academic to easily understand?
which one? the one i'm maintaining on rhombus-tech is purely for taking notes, so i don't lose track of the contacts / links that i find.
also it depends on what you'd like to help with. if you'd like to help with *this* project's efforts to create a riscv-64 SoC, then this list and the rhombus tech wiki's a good starting point
if however you'd like to simply make people aware of risc-v in general then the riscv.org web site's the best place to refer them to.
l.
On 05/03/2017 11:30 AM, Luke Kenneth Casson Leighton wrote:
which one? the one i'm maintaining on rhombus-tech is purely for taking notes, so i don't lose track of the contacts / links that i find.
also it depends on what you'd like to help with. if you'd like to help with *this* project's efforts to create a riscv-64 SoC, then this list and the rhombus tech wiki's a good starting point
if however you'd like to simply make people aware of risc-v in general then the riscv.org web site's the best place to refer them to.
isn't lowrisc a better starting point though?
On 03/05/17 16:30, Luke Kenneth Casson Leighton wrote:
which one? the one i'm maintaining on rhombus-tech is purely for taking notes, so i don't lose track of the contacts / links that i find.
also it depends on what you'd like to help with. if you'd like to help with *this* project's efforts to create a riscv-64 SoC, then this list and the rhombus tech wiki's a good starting point
if however you'd like to simply make people aware of risc-v in general then the riscv.org web site's the best place to refer them to.
l.
Sorry I wasn't clear. I was just wondering if the rhombus-tech page can be a "landing page" that I can forward people to. But if you think riscv.org is fine I can do that, too! That said, riscv.org probably doesn't emphasise the libre nature of it, does it? Therefore would it be helpful to have some sort of accessible introductory page that talks about how RISC-V is "fun" for hacking AND its importance in 100% libre computing?
As for me, I have no technical skills to actually help with development. That's why I originally asked if there are other ways to help!
On Thu, May 4, 2017 at 11:31 AM, Pen-Yuan Hsing penyuanhsing@gmail.com wrote:
Sorry I wasn't clear. I was just wondering if the rhombus-tech page can be a "landing page" that I can forward people to. But if you think riscv.org is fine I can do that, too! That said, riscv.org probably doesn't emphasise the libre nature of it, does it?
permissive licenses being what they are.... no, true, it doesn't. libre has a very specific meaning, and the goal is to create a *libre* SoC.
ha, bit of irony for you: gaisler research released the LEON3 SPARCv8 core a number of years ago under the GPLv2, so that people could use it for "academic and research purposes", the expectation being that for "commercial" use, they would seek a license from gaisler because you can't mix GPLv2 source with proprietary hard macro source.
the irony / beauty is: by seeking out *specifically* hard macros even for DDR3 that are compatible with the GPL, no proprietary license is needed :)
so... the source code which implements SMP cache coherency for a multi-core LEON3... i can pull that out and use it :)
l.
2017-05-04 12:51 GMT+02:00 Luke Kenneth Casson Leighton lkcl@lkcl.net:
ha, bit of irony for you: gaisler research released the LEON3 SPARCv8 core a number of years ago under the GPLv2, so that people could use it for "academic and research purposes", the expectation being that for "commercial" use, they would seek a license from gaisler because you can't mix GPLv2 source with proprietary hard macro source.
the irony / beauty is: by seeking out *specifically* hard macros even for DDR3 that are compatible with the GPL, no proprietary license is needed :)
so... the source code which implements SMP cache coherency for a multi-core LEON3... i can pull that out and use it :)
Uhhm. So you're going to use the GPL'ed macro for "SMP cache coherency" from the LEON3 SPARCv8 design in the RISC-V so you can build a multi-core (SMP) RISC-V?
Or are you considering a new SoC SPARC design?
According to wikipedia there is also a LEON4. If it is based on the LEON3 then the source code should be available, right? Or is a materialized HW design not obligated to ship with the source since it's not binary/machine code?
On Thu, May 4, 2017 at 3:29 PM, mike.valk@gmail.com mike.valk@gmail.com wrote:
Uhhm. So you're going to use the GPL'ed macro for "SMP cache coherency" from the LEON3 SPARCv8 design in the RISC-V so you can build a multi-core (SMP) RISC-V?
yyep!
Or are you considering a new SoC SPARC design?
no. not enough mind-share
According to wikipedia there is also a LEON4. If it is based on the LEON3 then the source code should be available right?
it's not.
l.
2017-04-27 13:21 GMT+02:00 Luke Kenneth Casson Leighton lkcl@lkcl.net:
ok so it would seem that the huge amount of work going into RISC-V means that it's on track to becoming a steamroller that will squash proprietary SoCs, so i'm quite happy to make sure that it's not-so-subtly nudged in the right direction.
i've started a page where i am keeping notes: http://rhombus-tech.net/riscv/libre_riscv/ and the general goal is to create a desirable mass-volume low-cost SoC, meaning that it will need to at least do 1080p60 video decode and have 3D graphics capability. oh... and be entirely libre.
That's one hornet nest you're going into. But I'd really like to see you pull it off.
for this first phase the interfaces that i've tracked down so far are almost entirely from opencores.org, meaning that there really should be absolutely no need to license any costly hard macros. that *includes* a DDR3 controller (but does not include a DDR3 PHY, which will need to be designed):
- DDR3 controller (not including PHY)
- lowRISC contains "minion cores" so can be soft-programmed to do any GPIO
- boot and debug through ZipCPU's UART (use an existing EC's on-board
FLASH)
- OpenCores VGA controller (actually it's an LCD RGB/TTL controller)
- OpenCores ULPI USB 2.0 controller
- OpenCores USB-OTG 1.1 PHY
I'm not much into HW design. But I think it would be wise to aim for USB-C connectivity.
USB-C does not imply USB 3.0, AFAICT.
USB-C has the option of channeling USB 2/3, HDMI and DP via the alternate modes, plus power. So a stack of USB-C connectors on the user-facing side would be awesome.
It would also limit the need for other connectors and PHYs.
The problem is MUXing all modes to a single output. New Apple laptops have USB-C but not all ports support all functions.
Perhaps a bit of FPGA could be the key?
Ethernet over USB-C is still being discussed. So the FPGA might be handy to have when/if that mode materialises.
A bit of FPGA would be nice to have anyway. Media codecs keep on changing, and it would extend the life of the SoC.
note that there are NO ANALOG INTERFACES in that. this is *really* important to avoid, because mixed analog and digital is incredibly hard to get right. also note that things like HDMI, SATA, and even ethernet are quite deliberately NOT on the list.
That's what PHYs are for, right?
VGA is on the decline; I wouldn't bother with it too much. But that's personal.
i've also managed to get alliance and coriolis2 compiled on debian/testing (took a while) so it *might* not be necessary even to pay for the ASIC design tooling (the cost of which is insane). coriolis2 includes a reasonable auto-router. i still have yet to go through the tutorials to see how it works. for design rules: 90nm design rules (stacks etc.) are actually publicly available, which would potentially mean that a clock rate of at least 300mhz would be achievable: interestingly 800mhz DDR3 RAM from 2012 used 90nm geometry. 65 down to 40nm would be much more preferable but may be hard to get.
I don't think speed is too much of an issue right now. Having something workable like this, even if only suitable for embedded use, would gain traction fast enough to get attention and help for new revisions with smaller and faster production.
Besides, the limit of silicon scaling is nearing. EUV is still not generally available.
Better architectures are needed. Just like better programming.
On Fri, Apr 28, 2017 at 11:29 AM, mike.valk@gmail.com mike.valk@gmail.com wrote:
That's one hornet nest you're going into.
yyyup. am tracking down the pieces.
But I'd really like to see you pull it off.
like a quantum electron it'll probably happen because i forget to look backwards :)
I'm not much into HW design. But I think it would be wise to aim for USB-C connectivity.
not for the first proof-of-concept SoC... unless the external PHY which will be wired to the ULPI (PHY) interface happens to support USB-C.
the *mass-volume* SoC: yes, great idea.
USB-C has to option of channeling USB2/3,HDMI,DP via the alternate modes and power. So a stack of USB-C connectors on the User Facing Side would be awesome.
remember that at 90nm the maximum clock rate, if you're really really lucky, is 400mhz: 300mhz is more realistic. at 65nm you get maybe 700mhz absolute max.
It would also limit the need for other connectors and PHY's.
that would be a big advantage.
The problem is MUXing all modes to a single output. New Apple laptops have USB-C but not all ports support all functions.
Perhaps a bit of FPGA could be the key?
yeah.
Ethernet over USB-C is still being discussed. So the FPGA might be handy to have when/if that mode materialises.
A bit of FPGA would be nice to have anyway. Media codecs keep on changing and would extend the life of the SoC.
at the expense of power consumption.
That's what phy's are for right?
it's not quite that simple, but yes :)
VGA is on the decline; I wouldn't bother with it too much. But that's personal.
yep it's out for this SoC.
I don't think speed is too much of an issue right now. Having something workable like this, even if only suitable for embedded use, would gain traction fast enough to get attention and help for new revisions with smaller and faster production.
yeahyeah. well, the embedded market is where the RV32* is already being targeted (sifive, pulpino) - there's nobody however who's started on RV64 because it's a whole different beast. 64-bit is usually deployed where performance is a priority (i.e. by definition space-saving, being diametrically the opposite, isn't) and that means DDR3 external RAM instead of e.g. 48k of *internal* SRAM... and many other things.
Besides, the limit of silicon scaling is nearing. EUV is still not generally available.
Better architectures are needed. Just like better programming.
40% better performance-per-watt is a good enough indicator to me.
l.
2017-04-28 13:23 GMT+02:00 Luke Kenneth Casson Leighton lkcl@lkcl.net:
On Fri, Apr 28, 2017 at 11:29 AM, mike.valk@gmail.com mike.valk@gmail.com wrote:
A bit of FPGA would be nice to have anyway. Media codecs keep on changing and would extend the life of the SoC.
at the expense of power consumption.
If you're trying to trans-code something that you don't have a co-processor/module for, you're forced to CPU/GPU trans-coding. Would an FPGA still be more power hungry then?
I think/hope FPGAs are more efficient for specific tasks than CPUs/GPUs.
We can always have evolution create an efficient decoder ;-) https://www.damninteresting.com/on-the-origin-of-circuits/ http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.50.9691&rep=rep...
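for anyone wondering what "evolving" a circuit actually looks like in code, here's a minimal sketch of the generic loop: random bitstreams, a fitness measurement, keep the best, mutate the rest. the genome size and the scoring function below are pure placeholders - in the article's experiment the score came from measuring the real chip's response:

    #include <stdlib.h>

    #define GENOME_BITS  1800      /* hypothetical FPGA config bitstream size */
    #define POP_SIZE     50
    #define GENERATIONS  10000

    typedef struct { unsigned char bits[GENOME_BITS / 8]; } genome_t;

    /* stand-in for the real measurement: in the article this was the actual
       chip's measured behaviour; here just a dummy score so the sketch runs */
    static double evaluate_fitness(const genome_t *g)
    {
        double s = 0.0;
        for (size_t j = 0; j < sizeof g->bits; j++)
            s += g->bits[j];
        return s;
    }

    static void mutate(genome_t *g)
    {
        int bit = rand() % GENOME_BITS;
        g->bits[bit / 8] ^= (unsigned char)(1u << (bit % 8));
    }

    int main(void)
    {
        genome_t pop[POP_SIZE];
        double score[POP_SIZE];

        for (int i = 0; i < POP_SIZE; i++)          /* random initial population */
            for (size_t j = 0; j < sizeof pop[i].bits; j++)
                pop[i].bits[j] = (unsigned char)rand();

        for (int gen = 0; gen < GENERATIONS; gen++) {
            int best = 0;
            for (int i = 0; i < POP_SIZE; i++) {
                score[i] = evaluate_fitness(&pop[i]);
                if (score[i] > score[best])
                    best = i;
            }
            for (int i = 0; i < POP_SIZE; i++) {    /* keep the fittest, mutate copies of it */
                if (i == best)
                    continue;
                pop[i] = pop[best];
                mutate(&pop[i]);
            }
        }
        return 0;
    }

the catch, as discussed further down the thread, is that whatever this converges on is a black box: nothing in the loop tells you *why* the winning bitstream works.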
On Fri, Apr 28, 2017 at 12:47 PM, mike.valk@gmail.com mike.valk@gmail.com wrote:
If you're trying to trans-code something that you don't have a co-processor/module for you're forced to CPU/GPU trans-coding.
you may be misunderstanding: the usual way to interact with a GPU is to use a memory buffer, drop some data in it, tell the GPU (again via a memory location) "here, get on with it" - basically a hardware version of an API - and it goes and executes its *OWN* instructions, completely independently and absolutely nothing to do with the CPU.
there's no "transcoding" involved because they share the same memory bus.
Would an FPGA still be more power hungry then?
yes.
I think/hope FPGAs are more efficient for specific tasks than CPUs/GPUs
you wouldn't give a general-purpose task to an FPGA, and you wouldn't give a specialist task for which they're not suited to a CPU, GPU _or_ an FPGA: you'd give it to a custom piece of silicon.
in the case where you have something that falls outside of the custom silicon (a newer CODEC for example) then yes, an FPGA would *possibly* help... if and only if you have enough bandwidth.
video is RIDICULOUSLY bandwidth-hungry. 1920x1080 @ 60fps 32bpp is... an insane data-rate. it's 470 MEGABYTES per second. that's what the framebuffer has to handle, so you not only have to have the HDMI (or other video) PHY capable of handling that but the CODEC hardware has to be able to *write* - simultaneously - on the exact same memory bus.
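for anyone who wants to check the arithmetic, it's just width x height x bytes-per-pixel x frames-per-second:

    #include <stdio.h>

    int main(void)
    {
        const double width = 1920, height = 1080;
        const double bytes_per_pixel = 4;      /* 32bpp */
        const double fps = 60;

        double bytes_per_sec = width * height * bytes_per_pixel * fps;

        /* ~497.7 million bytes/s, i.e. roughly 475 MiB/s - and that's just
           scanning the framebuffer out once, before the codec writes into it */
        printf("%.1f MB/s (%.1f MiB/s)\n",
               bytes_per_sec / 1e6, bytes_per_sec / (1024.0 * 1024.0));
        return 0;
    }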
the point is: if you're considering using an FPGA to accelerate video it's gonna be a *really* big and expensive FPGA, and you would need to implement something like PCIe just to cope with the communications between the two.
costs just escalated way beyond market value.
this is why companies just simply... abandon one SoC and do another one which has an improved custom CODEC silicon which *does* handle the newer CODEC(s).
We can always have evolution create a efficient decoder ;-) https://www.damninteresting.com/on-the-origin-of-circuits/
woooow.
"It seems that evolution had not merely selected the best code for the task, it had also advocated those programs which took advantage of the electromagnetic quirks of that specific microchip environment. The five separate logic cells were clearly crucial to the chip’s operation, but they were interacting with the main circuitry through some unorthodox method— most likely via the subtle magnetic fields that are created when electrons flow through circuitry, an effect known as magnetic flux. There was also evidence that the circuit was not relying solely on the transistors’ absolute ON and OFF positions like a typical chip; it was capitalizing upon analogue shades of gray along with the digital black and white."
that's incredible.
l.
This is the most interesting article I've read in a long time. Like machine learning, but on an FPGA... and analog!!! It goes to prove my hunch that the binary approach to computing is not the most optimal one. Analog might be hard, but with enough investment it can give better results in the long run. It's just that it's hard enough to be considered impossible right now.
On Fri, Apr 28, 2017 at 3:05 PM, Bill Kontos vkontogpls@gmail.com wrote:
This is the most interesting article I've read in a long time. Like machine learning but on an fpga... and analog!!!
more than that: analog on a *digital* chip! and using E.M. effects (cross-talk) to make up the circuit! i'm just amazed... but perhaps it should not be unexpected.
the rest of the article makes a really good point, which has me deeply concerned now that there are fuckwits out there making "driverless" cars, toying with people's lives in the process. you have *no idea* what unexpected decisions are being made, what has been "optimised out".
with aircraft it's a different matter: the skies are clear, it's a matter of physics and engineering, and the job of taking off, landing and changing direction is, if extremely complex, actually just a matter of programming. also, the PILOT IS ULTIMATELY IN CHARGE.
cars - where you could get thrown completely unanticipated scenarios involving life-and-death decisions - are a totally different matter.
the only truly ethical way to create "driverless" cars is to create an actual *conscious* machine intelligence with which you can have a conversation, and *TEACH* it - through a rational conversation - what the actual parameters are for (a) the laws of the road (b) moral decisions regarding life-and-death situations.
applying genetic algorithms to driving of vehicles is a stupid, stupid idea because you cannot tell what has been "optimised out" - just as the guy from this article says.
l.
2017-04-28 16:17 GMT+02:00 Luke Kenneth Casson Leighton lkcl@lkcl.net:
the rest of the article makes a really good point, which has me deeply concerned now that there are fuckwits out there making "driverless" cars, toying with people's lives in the process. you have *no idea* what unexpected decisions are being made, what has been "optimised out".
That's no different from regular "human" programming. If you employ AI programming you can still validate the code like you would that of a normal human.
Or build a second independent AI for the "four-eyes" principle.
with aircraft it's a different matter: the skies are clear, it's a matter of physics and engineering, and the job of taking off, landing and changing direction is, if extremely complex, actually just a matter of programming. also, the PILOT IS ULTIMATELY IN CHARGE.
cars - where you could get thrown unexpected completely unanticipated scenarios involving life-and-death decisions - are a totally different matter.
the only truly ethical way to create "driverless" cars is to create an actual *conscious* machine intelligence with which you can have a conversation, and *TEACH* it - through a rational conversation - what the actual parameters are for (a) the laws of the road (b) moral decisions regarding life-and-death situations.
The problem is nuance. If a cyclist crosses your path and escaping collision can only be done by driving into a group of people waiting to cross after you have passed them, the choice seems logical: hit the cyclist. Many are saved by killing/injuring/bumping one.
Humans are notoriously bad at taking those decisions themselves. We only consider the cyclist. That's our focus. The group becomes the second objective.
Many people are killed/injured by trying to avoid hitting animals. You try to avoid a collision only to find your vehicle becoming uncontrollable, or finding a new object on your new trajectory - mostly trees.
The real crisis comes from outside control. The car can be hacked and become weaponized. That works with humans as well but is more difficult and takes more time. Programming humans takes time.
Or some other Asimov related issue ;-)
On Fri, Apr 28, 2017 at 3:45 PM, mike.valk@gmail.com mike.valk@gmail.com wrote:
That's no different from regular "human" programming.
it's *massively* different. a human will follow their training, deploy algorithms and have an *understanding* of the code and what it does.
with monte-carlo-generated iterative algorithms you *literally* have no idea what the code does or how it does it. the only guarantee that you have is that *for the set of inputs CURRENTLY tested to date* you have "known behaviour".
but for the cases which you haven't catered for you *literally* have no way of knowing how the code is going to react.
now this sounds very very similar to the human case: yes you would expect human-written code to also have to pass test suites.
but the real difference is highlighted by the following question: when it comes to previously undiscovered bugs, how the heck are you supposed to "fix" bugs when you have *LITERALLY* no idea how the code even works?
and that's what it really boils down to:
(a) in unanticipated circumstances you have literally no idea what the code will do. it could do something incredibly dangerous.
(b) in unanticipated circumstances the chances of *fixing* the bug in the genetic-derived code are precisely: zero. the only option is to run the algorithm again but with a new set of criteria, generating an entirely new algorithm which *again* is in the same (dangerous) category.
l.
On self-driving cars atm the driver is required to sit in the driver's position ready to engage the controls. The moment the driver touches the gas pedal the car is under his control. So the system is designed in such a way that the driver is actually in control. In the only accident so far in the history of Tesla, the driver was actually sleeping instead of paying attention. Also the issue of preventing the AI from optimising out some edge cases can be solved by carefully planning the tests that the neural network is trained on, which includes hitting the cyclist instead of the folks at a bus stop, or hitting the tree instead of the animal, etc. I'm confident this stuff has already been taken care of, but of course I would love it if Tesla's code was open source. Although I fail to see how they could continue making revenue if they open sourced their code (as that is basically 50% of what they are selling).
Hi,
I've done some research into this in the last couple of weeks.
On Fri, 28 Apr 2017 20:21:00 +0300, Bill Kontos vkontogpls@gmail.com wrote:
On self driving cars atm the driver is required to sit on the driver's position ready to engage the controls. The moment the driver touches the gas pedal the car is under his control. So the system is designed in such a way that the driver is actually in control. In the only accident so far in the history of Tesla the driver was actually sleeping instead of paying attention.
This kind of car is what the SAE defined as a level 2 car. Full autonomy is level 5. See the standard J3016 (costs nothing, but needs an account) [1]
Also the issue of preventing the AI from optimising out some edge cases can be solved by carefully planning the tests that the neural network is trained on, which includes hitting the cycler instead of the folks in a bus stop or hitting the tree instead of the animal etc. I'm confident this stuff has already been taken care of, but of course I would love it if Tesla's code was open source. Although I fail to see how they could continue making revenue if they open sourced their code( as that is basically 50% of what they are selling).
I'm sad to say that it isn't even close to solved. The only two concrete ideas I found are: 1. Egoism. The driver of the car always wins. 2. Utilitarianism. "The Greater Good". The best outcome for the most people.
There is also another one which ignores most of the problem: 3. Random. Creating different possible crash scenarios and selecting one at random.
Even if we would find utilitarianism as a good choice, we would have to calculate the sums of the products of probabilities for harm times value of harmed participant in the accident. Our sensors aren't even close to good enough to calculate good probabilities and we have no idea which value to assign to a participant. And the sensors and computing would have to decide how many outcomes there could be and then calculate those value-sums for each and then take the best outcome.
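just to spell out the calculation being described: for each candidate outcome, sum over every participant the probability of harming them times the value assigned to them, then pick the outcome with the lowest total. a toy sketch - and of course the probabilities and "values" are exactly the inputs nobody knows how to obtain:

    #include <stddef.h>

    struct participant {
        double p_harm;    /* estimated probability this person is harmed */
        double value;     /* the "value" assigned to them - the unsolved part */
    };

    /* expected harm of one candidate outcome: sum of p_harm * value */
    double expected_harm(const struct participant *p, size_t n)
    {
        double sum = 0.0;
        for (size_t i = 0; i < n; i++)
            sum += p[i].p_harm * p[i].value;
        return sum;
    }

    /* utilitarian choice: the outcome whose expected harm is smallest */
    size_t choose_outcome(const struct participant *outcomes[],
                          const size_t counts[], size_t n_outcomes)
    {
        size_t best = 0;
        for (size_t i = 1; i < n_outcomes; i++)
            if (expected_harm(outcomes[i], counts[i]) <
                expected_harm(outcomes[best], counts[best]))
                best = i;
        return best;
    }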
Then you have to consider that in many countries the programming of such a targeting algorithm, one that decides who is killed, would count as planning a murder. And every casualty in an accident would be murdered by the people that created the algorithm. Because it isn't a reaction any more, as it is for human drivers, but precalculated and planned.
Then there is the problem of who would buy utilitarian cars. [2]
Greetings Hannes
[1] http://standards.sae.org/j3016_201609/ [2] https://arxiv.org/abs/1510.03346
On 4/28/17, Hannes Schnaitter hannes@schnaitter.de wrote:
Then you have to consider that in many countries the programming of such a targeting algorithm, one that decides who is killed, would count as planning a murder. And every casualty in an accident would be murdered by the people that created the algorithm. Because it isn't reacting anymore as it was for human drivers but precalculated and planned.
I wonder what type of prosecutor it would take to bring an AI to court?
On Sat, Apr 29, 2017 at 5:18 AM, John Luke Gibson eaterjolly@gmail.com wrote:
I wonder what type of prosecutor it would take to bring an ai to court?
https://en.wikipedia.org/wiki/The_Hour_of_the_Pig
:)
Came here for the 64-bit processor, stayed for the sci-fi.
On Sat, Apr 29, 2017 at 12:38 PM, Allan Mwenda allanitomwesh@gmail.com wrote:
Came here for the 64bit processor stayed for the sci-fi
:)
Hi,
On the topic of the ethics of driverless cars I'd recommend Patrick Lin's chapter 'Why ethics matters for autonomous cars' in a mostly German book about autonomous driving.
http://link.springer.com/chapter/10.1007/978-3-662-45854-9_4
Greetings Hannes
On Fri, Apr 28, 2017 at 3:56 PM, Hannes Schnaitter hannes@schnaitter.de wrote:
Hi,
On the topic of the ethics of driverless cars I'd recommend Patrick Lin's chapter 'Why ethics matters for autonomous cars' in a mostly German book about autonomous driving.
http://link.springer.com/chapter/10.1007/978-3-662-45854-9_4
thanks hannes
2017-04-28 14:58 GMT+02:00 Luke Kenneth Casson Leighton lkcl@lkcl.net:
in the case where you have something that falls outside of the custom silicon (a newer CODEC for example) then yes, an FPGA would *possibly* help... if and only if you have enough bandwidth.
That is what I was talking about.
video is RIDICULOUSLY bandwidth-hungry. 1920x1080 @ 60fps 32bpp is... an insane data-rate. it's 470 MEGABYTES per second. that's what the framebuffer has to handle, so you not only have to have the HDMI (or other video) PHY capable of handling that but the CODEC hardware has to be able to *write* - simultaneously - on the exact same memory bus.
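The arithmetic behind that figure, just to make it concrete:

# Back-of-the-envelope framebuffer bandwidth for 1080p60 at 32 bits per pixel.
width, height, bytes_per_pixel, fps = 1920, 1080, 4, 60

bytes_per_second = width * height * bytes_per_pixel * fps
print(bytes_per_second / 1e6)       # ~497.7 MB/s (decimal megabytes)
print(bytes_per_second / 2**20)     # ~474.6 MiB/s, i.e. the "~470 MB/s" above
# and the memory bus sees this twice if a codec writes while scan-out reads.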
I overestimated the capabilities of an FPGA. I've just read that you need two to four FPGAs linked together to do H.264 in realtime, or a brand-new high-end one. FPGAs are also usually very slow.
I found a nice presentation on using FPGAs for video codecs. https://www.ece.cmu.edu/~ece796/seminar/10/seminar/FPGA.ppt
The most fascinating option I found is to reconfigure the FPGA for each processing step.
the point is: if you're considering using an FPGA to accelerate video it's gonna be a *really* big and expensive FPGA, and you would need to implement something like PCIe just to cope with the communications between the two.
costs just escalated way beyond market value.
this is why companies just simply... abandon one SoC and do another one which has an improved custom CODEC silicon which *does* handle the newer CODEC(s).
Hmm. So for longevity the video decoder should be outside the SoC and be serviceable... Nah just buy a new EOMA card and keep the rest. ;-)
Speaking of which, any plans for an en/decoder module (IP block is the term, right?) in the new SoC? Or leaving that out?
On Fri, Apr 28, 2017 at 3:31 PM, mike.valk@gmail.com mike.valk@gmail.com wrote:
Speaking of which, any plans for an en/decoder module (IP block is the term, right?) in the new SoC? Or leaving that out?
opencores has some codecs and VP8 and VP9 are available for production SoCs.
Out of curiosity, has anyone ever attempted to prototype a hardware block based on evolution principles? Doing it on an FPGA is probably a bad idea since we won't be able to implement the results in more copies, but this could potentially also happen in a software simulation where the input and output interfaces of the hardware block are predefined.
On Apr 28, 2017 2:47 PM, "mike.valk@gmail.com" mike.valk@gmail.com wrote:
2017-04-28 13:23 GMT+02:00 Luke Kenneth Casson Leighton lkcl@lkcl.net:
On Fri, Apr 28, 2017 at 11:29 AM, mike.valk@gmail.com mike.valk@gmail.com wrote:
The problem is MUXing all modes to a single output. New Apple laptops have USB-C but not all ports support all functions.
Perhaps a bit of FPGA could be the key?
yeah.
Ethernet over USB-C is still being discussed. So the FPGA might be handy to have when/if that mode is materialized.
A bit of FPGA would be nice to have anyway. Media codecs keep on changing and it would extend the life of the SoC.
at the expense of power consumption.
If you're trying to transcode something that you don't have a co-processor/module for, you're forced to CPU/GPU transcoding. Would an FPGA still be more power-hungry then?
I think/hope FPGAs are more efficient for specific tasks than CPUs/GPUs.
We can always have evolution create an efficient decoder ;-) https://www.damninteresting.com/on-the-origin-of-circuits/ http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.50.9691&rep=rep1&type=pdf
On 04/28/2017 05:56 PM, Bill Kontos wrote:
Out of curiosity, has anyone ever attempted to prototype a hardware block based on evolution principles? Doing it on an FPGA is probably a bad idea since we won't be able to implement the results in more copies, but this could potentially also happen in a software simulation where the input and output interfaces of the hardware block are predefined
<snip>
I suspect that without the feature of it being an instruction set that only works on that one chip, due to it exploiting the quirks of that particular chip, some efficiency would be lost.
I'm imagining a system where traditional silicon grooms many FPGAs, each with a dedicated task, and the system is provided with some known-good instruction sets that work, but only slowly. So then either the OEM or the user sets up their fancy new system, and one of the steps is to plug it in and run a setup program for anywhere from a few hours to a couple of days which iterates the instructions to improve efficiency; then they can begin to use their system.
As for using this method in a software simulation, I wouldn't be surprised if some chip manufacturers already do that for certain sections of the chip, even if it's only during the early design phase. I would imagine the software guiding the evolution could be instructed to cull anything that isn't working with binary, thus allowing human engineers/programmers to more easily reverse engineer the instruction set and further edit it.
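For what it's worth, a toy skeleton of the "iterate and cull anything that isn't working" loop described above might look like this (the bit-string genome and fitness function are pure stand-ins; a real flow would score candidates in an HDL simulator or on the FPGA itself):

import random

# Toy genetic-algorithm loop: evolve a candidate configuration (a bit-string
# standing in for an FPGA bitstream or instruction sequence) against a
# predefined fitness function. Everything here is illustrative only.

GENOME_BITS = 64
TARGET = [1, 0] * (GENOME_BITS // 2)   # stand-in for "produces correct output fast"

def fitness(genome):
    return sum(1 for g, t in zip(genome, TARGET) if g == t)

def mutate(genome, rate=0.02):
    return [bit ^ (random.random() < rate) for bit in genome]

population = [[random.randint(0, 1) for _ in range(GENOME_BITS)] for _ in range(50)]

for generation in range(200):
    population.sort(key=fitness, reverse=True)
    survivors = population[:10]        # cull anything that isn't working
    population = survivors + [mutate(random.choice(survivors)) for _ in range(40)]

print(fitness(max(population, key=fitness)))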
On Fri, Apr 28, 2017 at 10:51:49PM -0500, ryan wrote:
On 04/28/2017 05:56 PM, Bill Kontos wrote:
Out of curiosity, has anyone ever attempted to prototype a hardware block based on evolution principles? Doing it on an FPGA is probably a bad idea since we won't be able to implement the results in more copies, but this could potentially also happen in a software simulation where the input and output interfaces of the hardware block are predefined
<snip>
I suspect that without the feature of it being an instruction set that only works on that one chip, due to it exploiting the quirks of that particular chip, some efficiency would be lost.
I'm imagining a system where traditional silicon grooms many FPGAs, each with a dedicated task, and the system is provided with some known-good instruction sets that work, but only slowly. So then either the OEM or the user sets up their fancy new system, and one of the steps is to plug it in and run a setup program for anywhere from a few hours to a couple of days which iterates the instructions to improve efficiency; then they can begin to use their system.
As for using this method in a software simulation, I wouldn't be surprised if some chip manufacturers already do that for certain sections of the chip, even if it's only during the early design phase. I would imagine the software guiding the evolution could be instructed to cull anything that isn't working with binary, thus allowing human engineers/programmers to more easily reverse engineer the instruction set and further edit it.
Let me remind you of a real-world situation. The hardware designers were working on the second version of their successful CPU. They attached some counters to measure how many times the various instructions were being executed. They discovered that the most common instructions were certain test and branch instructions. So they worked hard on making sure the next model had the most efficient implementation of those test and branch instructions they could achieve.
But when they finally put the new machine together and tried it out, they found no improvement at all.
Investigating, they discovered they had optimized the wait loop.
-- hendrik
On Sat, Apr 29, 2017 at 2:57 PM, Hendrik Boom hendrik@topoi.pooq.com wrote:
Let me remind you of a real-world situation. The hardware designers were working on the second version of their successful CPU. They attached some counters to measure how many times the various instructions were being executed. They discovered that the most common instructions were certain test and branch instructions. So they worked hard on making sure the next model had the most efficient implementation of those test and branch instructions they could achieve.
But when they finally put the new machine together and tried it out, they found no improvement at all.
Investigating, they discovered they had optimized the wait loop.
that is frickin funny. but also relevant, as i am aware of, for example, the ICT's efforts to add x86-accelerating instructions to the Loongson 2G architecture. although it's a MIPS64 design, they added hardware emulation of the "top" 200 x86 instructions to achieve qemu emulation running at around 70% of actual x86 clock-rates.
which got me thinking: how the heck would you gauge which actual instructions were "top"? would it be better instead to measure _power_ consumption per instruction, aiming for better performance/watt?
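a minimal sketch of what ranking by energy rather than raw count might look like (the counts and per-instruction energy costs are entirely made up for illustration):

# Toy example: rank instructions by total energy spent rather than by
# execution count. Numbers (counts, nanojoules) are invented for illustration.

profile_counts = {"beq": 9_000_000, "add": 5_000_000, "mul": 800_000, "div": 50_000}
energy_nj      = {"beq": 0.3,       "add": 0.2,       "mul": 1.5,     "div": 12.0}

by_count  = sorted(profile_counts, key=profile_counts.get, reverse=True)
by_energy = sorted(profile_counts,
                   key=lambda op: profile_counts[op] * energy_nj[op],
                   reverse=True)

print("top by count: ", by_count)   # beq comes out on top
print("top by energy:", by_energy)  # mul climbs above add once energy is counted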
l.
2017-04-27 13:21 GMT+02:00 Luke Kenneth Casson Leighton lkcl@lkcl.net:
ok so it would seem that the huge amount of work going into RISC-V means that it's on track to becoming a steamroller that will squash proprietary SoCs, so i'm quite happy to make sure that it's not-so-subtly nudged in the right direction.
i've started a page where i am keeping notes: http://rhombus-tech.net/riscv/libre_riscv/ and the general goal is to create a desirable mass-volume low-cost SoC, meaning that it will need to at least do 1080p60 video decode and have 3D graphics capability. oh... and be entirely libre.
the plan is:
- to create an absolute basic SoC, starting from lowRISC (64-bit),
ORGFX (3D graphics) and MIAOW (OpenCL engine), in at least 90nm as a low-cost proof-of-concept where mistakes can be iterated through
- provide the end-result to software developers so that they can have
actual real silicon to work with
- begin a first crowd-funding phase to create a 28nm (or better)
multi-core SMP SoC
for this first phase the interfaces that i've tracked down so far are almost entirely from opencores.org, meaning that there really should be absolutely no need to license any costly hard macros. that *includes* a DDR3 controller (but does not include a DDR3 PHY, which will need to be designed):
- DDR3 controller (not including PHY)
- lowRISC contains "minion cores" so can be soft-programmed to do any GPIO
- boot and debug through ZipCPU's UART (use an existing EC's on-board
FLASH)
Perhaps put it directly to a USB bridge. UARTs on debugging hardware are nonexistent. We all use FTDI dongles.
Looks like OpenCores has a module. https://opencores.org/project,usb2uart
- OpenCores VGA controller (actually it's an LCD RGB/TTL controller)
- OpenCores ULPI USB 2.0 controller
- OpenCores USB-OTG 1.1 PHY
note that there are NO ANALOG INTERFACES in that. this is *really* important to avoid, because mixed analog and digital is incredibly hard to get right. also note that things like HDMI, SATA, and even ethernet are quite deliberately NOT on the list. Ethernet RMII (which is digital) could be implemented in software using a minion core. the advantage of using the opencores VGA (actually LCD) controller is: i already have the full source for a *complete* linux driver.
I2C, SPI, SD/MMC, UART, EINT and GPIO - all of these can be software-programmed as bit-banging in the minion cores.
these interfaces, amazingly, are enough to do an SoC that, if put into 40nm, would easily compete with some of TI's offerings, as well as the Allwinner R8 (aka A13).
i've also managed to get alliance and coriolis2 compiled on debian/testing (took a while) so it *might* not be necessary even to pay for the ASIC design tooling (the cost of which is insane). coriolis2 includes a reasonable auto-router. i still have yet to go through the tutorials to see how it works. for design rules: 90nm design rules (stacks etc.) are actually publicly available, which would potentially mean that a clock rate of at least 300mhz would be achievable: interestingly 800mhz DDR3 RAM from 2012 used 90nm geometry. 65 down to 40nm would be much more preferable but may be hard to get.
graphics: i'm going through the list of people who have done GPUs (or parts of one). MIAOW, Nyuzi, ORGFX. the gplgpu isn't gpl. it's been modified to "the text of the GPL license plus an additional clause which is that if you want to use this for commercial purposes then... you can't". which is *NOT* a GPL license, it's a proprietary commercial license!
MIAOW is just an OpenCL engine but a stonking good one that's compatible with AMD's software. nyuzi is an experimental GPU where i hope its developer believes in its potential. ORGFX i am currently evaluating but it looks pretty damn good, and i think it is slightly underestimated. i could really use some help evaluating it properly. my feeling is that a combination of MIAOW to handle shading and ORGFX for the rendering would be a really powerful combination.
so.
it's basically doable. comments and ideas welcome, please do edit the page to keep track of notes http://rhombus-tech.net/riscv/libre_riscv/
crowd-funded eco-conscious hardware: https://www.crowdsupply.com/eoma68
--- crowd-funded eco-conscious hardware: https://www.crowdsupply.com/eoma68
On Fri, Apr 28, 2017 at 11:35 AM, mike.valk@gmail.com mike.valk@gmail.com wrote:
Perhaps put it directly to a USB bridge.
no. very important to keep it simple. by wiring something like an STM32 directly to the 2 UART wires the STM32 can do the job of an FTDI dongle, but it can also be reprogrammed into an openocd interface, as well as contain the bootloader (and plug that manually directly into SRAM over the debug interface).
i discussed this all with dan, the developer of zipcpu, it's what he *already* does.
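a rough host-side sketch of what that bring-up could look like (the "LOAD" command and the SRAM address here are entirely hypothetical, just to illustrate pushing a boot image into SRAM over the UART; the real ZipCPU/STM32 debug protocol may differ completely):

import serial  # pyserial

SRAM_BASE = 0x1000_0000          # made-up load address

def push_image(port, image_path):
    # stream a boot image over the UART to a debug monitor running on
    # something like the STM32 described above (protocol is illustrative only)
    with open(image_path, "rb") as f:
        image = f.read()
    with serial.Serial(port, 115200, timeout=1) as uart:
        uart.write(f"LOAD {SRAM_BASE:#x} {len(image)}\n".encode())
        uart.write(image)
        print(uart.readline().decode().strip())   # expect an acknowledgement

# push_image("/dev/ttyUSB0", "boot.bin")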
UARTs on debugging hardware are nonexistent. We all use FTDI dongles.
Looks like OpenCores has a module. https://opencores.org/project,usb2uart
spoke to the developer of zipcpu (dan). he's the one that has the DDR3 controller on opencores.
l.