This is the most interesting article I've read in a long time. Like machine learning, but on an FPGA... and analog! It confirms my hunch that the binary approach to computing is not the optimal one. Analog might be hard, but with enough investment it could give better results in the long run. It's just hard enough that right now it's considered impossible.
On Fri, Apr 28, 2017 at 3:58 PM, Luke Kenneth Casson Leighton <lkcl@lkcl.net> wrote:
On Fri, Apr 28, 2017 at 12:47 PM, mike.valk@gmail.com wrote:
If you're trying to transcode something that you don't have a co-processor/module for, you're forced into CPU/GPU transcoding.
you may be misunderstanding: the usual way to interact with a GPU is to use a memory buffer, drop some data in it, tell the GPU (again via a memory location) "here, get on with it" - basically a hardware version of an API - and it goes and executes its *OWN* instructions, completely independently, with absolutely nothing to do with the CPU.
there's no "transcoding" involved because they share the same memory bus.
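to make that concrete, here's a purely illustrative C sketch of the pattern: a shared command buffer plus a "doorbell" word that the CPU writes to say "get on with it". all the names are made up, and the fake_gpu_poll consumer is just a stand-in so the example runs - a real driver does this through mmap'd device registers and DMA-able memory.

/* illustrative sketch only: a CPU producer and a pretend "GPU" consumer
 * sharing one memory buffer plus a doorbell flag.  in a real driver the
 * buffer would be DMA-able memory and the doorbell an mmap'd device
 * register; here the consumer is just a function so the example runs. */
#include <stdint.h>
#include <stdio.h>
#include <string.h>

struct cmd_ring {
    volatile uint32_t doorbell;   /* CPU sets to 1: "work is ready"      */
    uint32_t          ncmds;      /* number of commands in the buffer    */
    uint32_t          cmds[16];   /* the "instructions" the device runs  */
};

/* stand-in for the device side: consumes whatever the CPU dropped in */
static void fake_gpu_poll(struct cmd_ring *ring)
{
    if (!ring->doorbell)
        return;
    for (uint32_t i = 0; i < ring->ncmds; i++)
        printf("gpu: executing cmd 0x%08x\n", ring->cmds[i]);
    ring->doorbell = 0;           /* signal completion back to the CPU   */
}

int main(void)
{
    struct cmd_ring ring;
    memset(&ring, 0, sizeof(ring));

    /* CPU side: fill the shared buffer, then ring the doorbell */
    ring.cmds[0]  = 0xdeadbeef;
    ring.cmds[1]  = 0xcafef00d;
    ring.ncmds    = 2;
    ring.doorbell = 1;

    fake_gpu_poll(&ring);         /* device picks the work up on its own */
    return 0;
}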
Would an FPGA still be more power hungry then?
yes.
I think/hope FPGAs are more efficient for specific tasks than CPUs/GPUs.
you wouldn't give a general-purpose task to an FPGA, and if it's a specialist task that a CPU, GPU _or_ an FPGA isn't suited to, you wouldn't give it to any of them: you'd give it to a custom piece of silicon.
in the case where you have something that falls outside of the custom silicon (a newer CODEC for example) then yes, an FPGA would *possibly* help... if and only if you have enough bandwidth.
video is RIDICULOUSLY bandwidth-hungry. 1920x1080 @ 60fps 32bpp is... an insane data-rate. it's 470 MEGABYTES per second. that's what the framebuffer has to handle, so you not only have to have the HDMI (or other video) PHY capable of handling that but the CODEC hardware has to be able to *write* - simultaneously - on the exact same memory bus.
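if anyone wants to check that figure, it's just pixels x bytes-per-pixel x frames-per-second - a quick sketch:

/* quick arithmetic check of the framebuffer data-rate quoted above */
#include <stdio.h>

int main(void)
{
    long long bytes_per_frame = 1920LL * 1080 * 4;      /* 32bpp = 4 bytes/pixel */
    long long bytes_per_sec   = bytes_per_frame * 60;   /* 60 frames per second  */
    printf("%lld bytes/s = %.1f MiB/s\n",
           bytes_per_sec, bytes_per_sec / (1024.0 * 1024.0));
    /* prints: 497664000 bytes/s = 474.6 MiB/s -- i.e. roughly 470 megabytes/s */
    return 0;
}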
the point is: if you're considering using an FPGA to accelerate video it's gonna be a *really* big and expensive FPGA, and you would need to implement something like PCIe just to cope with the communications between the two.
costs just escalated way beyond market value.
this is why companies just simply... abandon one SoC and do another one which has an improved custom CODEC silicon which *does* handle the newer CODEC(s).
We can always have evolution create an efficient decoder ;-) https://www.damninteresting.com/on-the-origin-of-circuits/
woooow.
"It seems that evolution had not merely selected the best code for the task, it had also advocated those programs which took advantage of the electromagnetic quirks of that specific microchip environment. The five separate logic cells were clearly crucial to the chip’s operation, but they were interacting with the main circuitry through some unorthodox method— most likely via the subtle magnetic fields that are created when electrons flow through circuitry, an effect known as magnetic flux. There was also evidence that the circuit was not relying solely on the transistors’ absolute ON and OFF positions like a typical chip; it was capitalizing upon analogue shades of gray along with the digital black and white."
that's incredible.
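for context, what the article describes is a plain generational loop: keep a population of candidate FPGA bitstreams, score each one by loading it onto the actual chip and measuring how well it does the task, then copy and mutate the best scorer. a minimal sketch of that loop is below - score_on_hardware is a hypothetical stand-in (a dummy bit-count here, so the example runs; in the real experiment it meant programming the chip and measuring the evolved circuit).

/* minimal sketch of the evolutionary loop described in the article:
 * mutate a population of candidate bitstreams, keep the fittest.
 * score_on_hardware() is a hypothetical stand-in -- in the real
 * experiment it meant loading the bitstream onto the FPGA and
 * measuring how well the evolved circuit performed the task. */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

#define POP         32     /* population size      */
#define GENOME      64     /* bytes of "bitstream" */
#define GENERATIONS 100

static double score_on_hardware(const unsigned char *genome)
{
    /* dummy fitness: count set bits (a real run measures the chip) */
    double s = 0;
    for (int i = 0; i < GENOME; i++)
        for (int b = 0; b < 8; b++)
            s += (genome[i] >> b) & 1;
    return s;
}

static void mutate(unsigned char *genome)
{
    genome[rand() % GENOME] ^= 1u << (rand() % 8);   /* flip one bit */
}

int main(void)
{
    unsigned char pop[POP][GENOME];
    for (int i = 0; i < POP; i++)
        for (int j = 0; j < GENOME; j++)
            pop[i][j] = rand() & 0xff;

    for (int g = 0; g < GENERATIONS; g++) {
        /* find the fittest individual this generation */
        int best = 0;
        double best_score = score_on_hardware(pop[0]);
        for (int i = 1; i < POP; i++) {
            double s = score_on_hardware(pop[i]);
            if (s > best_score) { best_score = s; best = i; }
        }
        /* next generation: copies of the winner, each with a mutation */
        for (int i = 0; i < POP; i++) {
            if (i == best) continue;
            memcpy(pop[i], pop[best], GENOME);
            mutate(pop[i]);
        }
        if (g % 10 == 0)
            printf("gen %3d: best fitness %.0f\n", g, best_score);
    }
    return 0;
}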
l.