Hello,
Nice to see the progress being made in the recent update:
https://www.crowdsupply.com/eoma68/micro-desktop/updates/what-do-1-000-eoma6...
It's a shame that you have to redo your Parabola bootstrapping. I experimented with bootstrapping Parabola on MIPS a few months ago using two different approaches - (1) trying to cross-compile packages, (2) trying to build natively in another environment - and it was rather frustrating either way. Ultimately, I didn't regard it as a great way to spend my time.
Paul
On Tue, Dec 4, 2018 at 8:12 PM Paul Boddie paul@boddie.org.uk wrote:
Hello,
Nice to see the progress being made in the recent update:
https://www.crowdsupply.com/eoma68/micro-desktop/updates/what-do-1-000-eoma6...
It's a shame that you have to redo your Parabola bootstrapping. I experimented with bootstrapping Parabola on MIPS a few months ago using two different approaches - (1) trying to cross-compile packages, (2) trying to build natively in another environment - and it was rather frustrating either way. Ultimately, I didn't regard it as a great way to spend my time.
:)
it was quite hilarious to get up and running. https://wiki.parabola.nu/MIPS_Installation - oh. discontinued. that's not going to help *sigh*.
basically it's a hell of a lot easier if you have a native x86 parabola install, as you can then use the equivalent of debian's foreign-arch debootstrap, and for the --second-stage you run a qemu emulator to finish off the install.
that's basically exactly what i did... except, because i didn't have native x86 parabola, i ran it in qemu.
it worked really well and makes for a hilarious story.
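(for anyone wanting to replicate it, a rough sketch of the equivalent debian flow - assuming an armhf target, qemu-user-static installed on the host, and an illustrative suite / target path:

  # first stage on the host: foreign arch, nothing executed in the chroot
  sudo debootstrap --arch=armhf --foreign stretch /mnt/target http://deb.debian.org/debian
  # drop the static qemu binary in so binfmt_misc can run armhf binaries
  sudo cp /usr/bin/qemu-arm-static /mnt/target/usr/bin/
  # second stage runs *inside* the target rootfs, transparently via qemu
  sudo chroot /mnt/target /debootstrap/debootstrap --second-stage

the parabola equivalent roughly substitutes pacstrap / librechroot for debootstrap.)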
l.
On Wednesday 5. December 2018 03.38.59 Luke Kenneth Casson Leighton wrote:
it was quite hilarious to get up and running. https://wiki.parabola.nu/MIPS_Installation - oh. discontinued. that's not going to help *sigh*.
Yes, it seems that people got enthusiastic when Stallman started using that Lemote laptop, but when he stopped using it, they all dropped MIPS as soon as they could. So, gNewSense and Parabola both supported the MIPS architecture on some level, and Guix still does, I think, although since the Lemote stuff supported mips64el, any remaining support in these distributions is not useful for 32-bit devices, amongst which is the Ingenic SoC that you were looking at.
Of course, Debian supports everything of interest, but then there has to be a process of weeding out non-free packages and content. Since your EOMA68 campaign, PureOS has become a candidate for a suitable Debian-based FSF-endorsed distribution (ignoring systemd concerns). If Trisquel hadn't switched to Ubuntu as its base, it would also have been a candidate, but instead it suffers from Ubuntu's arbitrary architecture selection policy.
basically it's a hell of a lot easier if you have a native x86 parabola install, as you can then use the equivalent of debian's foreign-arch debootstrap, and for the --second-stage you run a qemu emulator to finish off the install.
that's basically exactly what i did... except, because i didn't have native x86 parabola, i ran it in qemu.
it worked really well and makes for a hilarious story.
I guess you are able to rely on the existing ARM port of Arch, though. What I found was that for building packages from scratch you have to combine Parabola and Arch repositories, and the mechanisms for doing this are not very coherent.
I ended up having to write tools to look up packages in different packaging repositories, first trying one place, then another, and so on. Some of these tools I ran in an appropriate x86 Parabola installation because they won't work in a "portable" way in other environments.
What I learned is that there is a considerable difference between genuine multi-architecture distributions and the kind of architecture-and-a-half distribution that Arch seems to be, with Parabola being under that umbrella. I think Arch has already thrown i386 over the side, so I wonder whether Parabola will have to do so in time as well.
Generally, I think that the Arch maintainers make some pretty questionable decisions: switching the default version of Python to version 3 very early on in the 3.x lifespan being one notable example. But having the choice is good for people who can get along with such decisions, I guess.
Paul
P.S. It is also pretty frustrating that people seem to need Richard Stallman to tell them what to do. When I asked people supposedly interested in porting the Hurd to L4-based systems about such matters, one of the responses indicated that Stallman didn't think that working on operating system fundamentals was worthwhile compared to doing other things.
But if something is worth doing, even if not everyone agrees, why does anyone need some kind of "sign off" from someone they've heard of? Just do what you think is right or interesting or enjoyable or useful, already! I honestly don't know why anyone would follow a mailing list on a topic if they didn't already know it was worthwhile.
On Wed, Dec 5, 2018 at 4:41 PM Paul Boddie paul@boddie.org.uk wrote:
But if something is worth doing, even if not everyone agrees, why does anyone need some kind of "sign off" from someone they've heard of? Just do what you think is right or interesting or enjoyable or useful, already! I honestly don't know why anyone would follow a mailing list on a topic if they didn't already know it was worthwhile.
interesting insights that you raise, paul (all of them), this one caught my attention in particular. occasionally i encounter people who follow some logical conclusion that i, personally, have reached... *without* themselves having reviewed the facts / data and associated logic. this completely freaks me out.
the second part is: i think people know / feel that, without "sign-off", the person acting as a diplomatic gateway (e.g. dr stallman) will not put them in touch with other people and resources if that person doesn't believe the proposal is workable.
given that large projects succeed based on collaboration, the high-profile person, who will have a lot more experience than them, becomes not just a "reviewer" of the proposal, but a channel and potential advocate as well.
put another way: if someone's not strongly convinced of the value of their idea, they're not going to stand up and make it happen when faced with someone who says "no", are they? :)
l.
On 2018-12-05 at 17:40:29 +0100, Paul Boddie wrote:
Of course, Debian supports everything of interest, but then there has to be a process of weeding out non-free packages and content.
Note that this "process" simply involves not adding the 'non-free' repository to /etc/apt/sources.list. It's not silently added by default; the user/admin needs to make an explicit choice to add it.
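For illustration, the difference is one explicit token on the repository line (suite name illustrative):

  deb http://deb.debian.org/debian stretch main            # default: free software only
  deb http://deb.debian.org/debian stretch main non-free   # deliberate opt-in

Nothing ships with the second form.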
On Thursday 6. December 2018 09.58.51 Elena ``of Valhalla'' wrote:
On 2018-12-05 at 17:40:29 +0100, Paul Boddie wrote:
Of course, Debian supports everything of interest, but then there has to be a process of weeding out non-free packages and content.
Note that this "process" simply involves not adding the 'non-free' repository to /etc/apt/sources.list. It's not silently added by default; the user/admin needs to make an explicit choice to add it.
Sorry, I meant in the context of being free enough for FSF endorsement. Otherwise, there would be no need for distributions like PureOS, gNewSense, and so on.
Discussion can be had about the FSF criteria, of course, but since Luke is actually seeking such endorsement, the only thing that might be helpful for him is to indicate to him that various FSF concerns are now addressed in the more mainstream distributions, such as there not being random firmware binaries in kernel packages, and so on. Since I don't track what Debian's policy on such things is, bringing any new developments to his attention could be useful and save him a lot of time and effort.
I tend to be wary about making any statements about Debian policy these days since it tends to get me flamed by random people.
Paul
On Thu, Dec 6, 2018 at 4:58 PM Paul Boddie paul@boddie.org.uk wrote:
Discussion can be had about the FSF criteria, of course, but since Luke is actually seeking such endorsement, the only thing that might be helpful for him is to indicate to him that various FSF concerns are now addressed in the more mainstream distributions, such as there not being random firmware binaries in kernel packages, and so on.
the RYF Criteria are extremely specific: it's not enough to have a non-free section that's "disabled", it must be *not possible* for an average end-user to end up installing non-free software by complete accident, such as by "running a GUI and arbitrarily clicking random buttons".
the absolute worst-case is where an inexperienced end-user, running e.g. synaptic, goes "i have no idea what this does, i'm just gonna click it", happens to enable the "non-free" section, silently and happily triggers an apt-get update, and wow, suddenly there's binary firmware available... all WITHOUT warning the user of the consequences.
the "convenience" scripts that download the MS TrueType fonts.
the "convenience" script that gets the latest adobe flash player.
the broadcom wifi firmware extractor scripts.
my feeling is, here, that if aptitude, apt, synaptic and other apt front-ends added a simple warning dialog ("hello, you are adding the non-free section; this can have severe consequences, as the source is not available for review"), that would quite likely eliminate one of the FSF's major concerns.
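(a sketch of one possible retrofit, using apt's existing pre-update hook point - the hook is real, the file name and wording are hypothetical:

  # /etc/apt/apt.conf.d/99warn-nonfree (hypothetical)
  APT::Update::Pre-Invoke {
    "if grep -qrs non-free /etc/apt/sources.list /etc/apt/sources.list.d/; then echo 'WARNING: non-free is enabled: these packages have no source available for review' >&2; fi";
  };
)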
l.
On 12/06/2018 01:39 PM, Luke Kenneth Casson Leighton wrote:
On Thu, Dec 6, 2018 at 4:58 PM Paul Boddie paul@boddie.org.uk wrote:
Discussion can be had about the FSF criteria, of course, but since Luke is actually seeking such endorsement, the only thing that might be helpful for him is to indicate to him that various FSF concerns are now addressed in the more mainstream distributions, such as there not being random firmware binaries in kernel packages, and so on.
the RYF Criteria are extremely specific: it's not enough to have a non-free section that's "disabled", it must be *not possible* for an average end-user to end up installing non-free software by complete accident, such as by "running a GUI and arbitrarily clicking random buttons".
the absolute worst-case is where an inexperienced end-user, running e.g. synaptic, goes "i have no idea what this does, i'm just gonna click it", happens to enable the "non-free" section, silently and happily triggers an apt-get update, and wow, suddenly there's binary firmware available... all WITHOUT warning the user of the consequences.
the "convenience" scripts that download the MS TrueType fonts.
the "convenience" script that gets the latest adobe flash player.
the broadcom wifi firmware extractor scripts.
my feeling is, here, that if aptitude, apt, synaptic and other apt front-ends added a simple warning dialog ("hello, you are adding the non-free section; this can have severe consequences, as the source is not available for review"), that would quite likely eliminate one of the FSF's major concerns.
Just a note: the only way to add the non-free section in Debian involves typing the word "non-free" into a particular spot in a text file or, if using the GUI, editing the entry for the Debian repo to add "non-free" to the "Components" textbox in the "Edit Source" dialog box. So it's not really possible to accidentally enable the non-free repository. I honestly tend to disagree with the FSF's classification of Debian for this reason.
That being said, it's the FSF's rules, so Parabola is the only real option regardless of what one might think of the FSF's classification. And as for 32-bit MIPS... just outta luck, I guess, at least until an FSF-approved distro starts supporting it.
-- Julie Marchant, http://onpon4.github.io
Encrypt your emails with GnuPG: https://emailselfdefense.fsf.org
On Fri, Dec 7, 2018 at 2:39 AM Julie Marchant onpon4@riseup.net wrote:
And as for 32-bit MIPS... just outta luck, I guess, at least until an FSF-approved distro starts supporting it.
all 32-bit OSes are on the ropes, but not for the reason that most people think. it's down to the linker phase of binutils *running out of memory*, due to a default option to keep the object files and the binary being linked in memory:
https://marc.info/?l=binutils-bugs&m=153030202426968&w=2 https://sourceware.org/bugzilla/show_bug.cgi?id=22831
unfortunately this is "not anyone's responsibility" - it's one of those little-known syndromes with underlying / indirect causes.
please please for goodness sake could people spread awareness about this more widely, try out some very large builds (including comprehensive debug build options) with the option advised in that bug report added, and report back on the bug report whether it was successful or not.
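(for concreteness, the option in question is ld's --no-keep-memory, passed through the compiler driver; a minimal sketch, build-system specifics vary:

  # one-off link:
  gcc -o bigbinary *.o -Wl,--no-keep-memory
  # or for an entire package build:
  export LDFLAGS="-Wl,--no-keep-memory"
)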
*without* that option, given that firefox now needs SEVEN GIGABYTES of resident RAM in order to complete the linker phase (which is obviously impossible on a 32-bit processor), armhf, mips32, and many other 32-bit architectures are just going to get... dropped by distros...
*for no good reason*.
l.
On Friday 7. December 2018 08.49.57 Luke Kenneth Casson Leighton wrote:
On Fri, Dec 7, 2018 at 2:39 AM Julie Marchant onpon4@riseup.net wrote:
And as for 32-bit MIPS... just outta luck, I guess, at least until an FSF-approved distro starts supporting it.
From what people have said about Debian, I can envisage getting PureOS to work on mipsel. It is just that I haven't dedicated any time to really looking into it yet.
all 32-bit OSes are on the ropes, but not for the reason that most people think. it's down to the linker phase of binutils *running out of memory*, due to a default option to keep the object files and the binary being linked in memory:
https://marc.info/?l=binutils-bugs&m=153030202426968&w=2 https://sourceware.org/bugzilla/show_bug.cgi?id=22831
unfortunately this is "not anyone's responsibility" - it's one of those little-known syndromes with underlying / indirect causes.
It is a consequence of people not really valuing the longevity of hardware. A decade or so ago, people still cared about whether GNU/Linux worked on old hardware: it was even a selling point. Now people are probably prioritising the corporate space where you can just request a new laptop with double the memory from your manager and pretend that it was a cheap way of solving the problem.
please please for goodness sake could people spread awareness about this more widely, try out some very large builds (including comprehensive debug build options) with the option advised in that bug report added, and report back on the bug report whether it was successful or not.
It's interesting to search for the suggested linker options...
-Wl,--no-keep-memory
...to see where they also came up:
https://bugzilla.redhat.com/show_bug.cgi?id=117868
I like the way the reporter gets an internal compiler error. These things, including linker assertion errors which the user shouldn't see, don't seem to get adequately diagnosed or remedied in my experience: you just get told that "you're holding it wrong" and WONTFIX. Still, since 2004 there should be some test cases by now. ;-)
I see that you have been working hard to persuade people, though:
https://bugzilla.redhat.com/show_bug.cgi?id=117868
It really sounds like a classic database problem, and I wonder what the dataset size is. Of course data processing is faster if you can shove everything into memory, but the trick is to manage volumes that are larger than memory.
Cross-building would be a workaround, but Debian appears fundamentally opposed to that, even though the alternative is the abandonment of architectures. And you even have the new and shiny arm64 support in jeopardy because the appropriate server hardware never seems to get to market (or stick around).
*without* that option, given that firefox now needs SEVEN GIGABYTES of resident RAM in order to complete the linker phase (which is obviously impossible on a 32-bit processor), armhf, mips32, and many other 32-bit architectures are just going to get... dropped by distros...
*for no good reason*.
Well, it sounds like the usual practice of greasing up a hippo and hoping that the result will be as lean and as fast as a cheetah. The Web gets ever more complicated and yet many of the tasks we need it for remain the same. Still, the usual advocates of disposable computing would have us continually upgrade to keep being able to do even the simplest things, finding a way of selling us what we already have, as always.
Paul
On Fri, Dec 7, 2018 at 2:25 PM Paul Boddie paul@boddie.org.uk wrote:
I like the way the reporter gets an internal compiler error. These things, including linker assertion errors which the user shouldn't see, don't seem to get adequately diagnosed or remedied in my experience: you just get told that "you're holding it wrong" and WONTFIX. Still, since 2004 there should be some test cases by now. ;-)
you know how lobsters will sit in a pot that's boiling because the temperature differential is not big enough? well... it's like that.
unfortunately, over the past decade+, nobody predicted that the linker phase would go *anywhere near* 4GB in size. that would be INSANE!! so of *course* they ripped out all of the archaic techniques that dealt efficiently with linking datasets bigger than what can be held in memory, even with virtual memory.
regarding cross-compiling: if you've seen how openembedded uses bitbake to do cross-compiling (which includes automatically redirecting autoconf test runs through qemu, and even running a qemu-emulated gcc), you start to get a deep appreciation for the dangers of cross-compiling.
the key one is: you have absolutely no idea whether the host generated the autoconf settings correctly or not. even running under qemu is not accepted, because of the risk of qemu giving different answers from native hardware.
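(a hypothetical illustration of the failure mode: configure can't execute its test programs on the build machine, so cross builds rely on hand-seeded cache values - and a wrong guess silently produces a broken binary:

  ./configure --build=x86_64-linux-gnu --host=arm-linux-gnueabihf \
      ac_cv_func_malloc_0_nonnull=yes   # asserted by a human, never actually tested
)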
L.
On Fri, 7 Dec 2018 15:36:33 +0000 Luke Kenneth Casson Leighton lkcl@lkcl.net wrote: <snip>
regarding cross-compiling: if you've seen how openembedded uses bitbake to do cross-compiling (which includes automatically redirecting autoconf test runs through qemu, and even running a qemu-emulated gcc), you start to get a deep appreciation for the dangers of cross-compiling.
the key one is: you have absolutely no idea whether the host generated the autoconf settings correctly or not. even running under qemu is not accepted, because of the risk of qemu giving different answers from native hardware.
L.
Yuk. You saved me a lot of time. I trusted qemu too!
David
On Sat, Dec 8, 2018 at 3:46 AM David Niklas doark@mail.com wrote:
the key one is: you have absolutely no idea whether the host generated the autoconf settings correctly or not. even running under qemu is not accepted, because of the risk of qemu giving different answers from native hardware.
L.
Yuk. You saved me a lot of time. I trusted qemu too!
well... on the other hand... i heard archlinux and the parabola team use the technique all the time, for building officially-released packages.
l.
On 2018-12-07 15:24 +0100, Paul Boddie wrote:
Cross-building would be a workaround, but Debian appears fundamentally opposed to that,
Do you mean 'as buildds'? Debian's support for cross-building has improved hugely over the last 8 years, and you can now cross-build a _lot_ of Debian. Helmut told me 'nearly 2/3rds' a while ago, although I'm not sure if that's 2/3rds of _everything_, or some subset.
There is lots more that could be done (not least educating upstreams that like to do 'uncrossable things'), but we are the opposite of 'fundamentally opposed to it'.
even though the alternative is the abandonment of architectures. And you even have the new and shiny arm64 support in jeopardy because the appropriate server hardware never seems to get to market (or stick around).
In what way is arm64 'in jeopardy'? I don't think it's going anywhere.
Wookey
On Friday 7. December 2018 16.28.24 Wookey wrote:
On 2018-12-07 15:24 +0100, Paul Boddie wrote:
Cross-building would be a workaround, but Debian appears fundamentally opposed to that,
Do you mean 'as buildds'?
I don't know. If that is the way the Debian archive is built then perhaps the answer is "yes".
Debian's support for cross-building has improved hugely over the last 8 years, and you can now cross-build a _lot_ of Debian. Helmut told me 'nearly 2/3rds' a while ago, although I'm not sure if that's 2/3rds of _everything_, or some subset.
It is certainly easier to perform cross-building activities, although I will admit that I am not typically cross-building packages these days. I've probably said before that when the cross-toolchains became available, it helped a great deal with the things I tend to do, so I really appreciate them.
One thing that I do find frustrating, however, is the trail of pages on various sites (wikis, typically) that describe the state of progress at different points in time. It isn't particularly coherent and undermines the impression of the progress that has been made.
There is lots more that could be done (not least educating upstreams that like to do 'uncrossable things'), but we are the opposite of 'fundamentally opposed to it'.
Perhaps I should have clarified that Debian appears fundamentally opposed to using cross-building as the means of building the archive for an architecture. I understand that the aim is to ensure that people can run systems that are able to build their own packages, but it seems we will arrive at a point where the imperfect compromise of natively-built packages complemented by cross-built ones becomes unavoidable.
even though the alternative is the abandonment of architectures. And you even have the new and shiny arm64 support in jeopardy because the appropriate server hardware never seems to get to market (or stick around).
In what way is arm64 'in jeopardy'? I don't think it's going anywhere.
Maybe I got the wrong impression from this message:
https://groups.google.com/d/msg/linux.debian.devel.release/meYaIZR7Sm0/GtHUx...
I guess that the main concerns are that some arm64 products aren't supporting arm32 (of whichever flavour), that there are lots of "development board" products but not so many "data centre" products, making automation and management difficult. I might also add that when looking at ARM server offerings, they seem to be pretty expensive (maybe to differentiate themselves from existing offerings on traditional server architectures), and it probably doesn't help that companies don't follow through on their roadmaps, meaning that people end up waiting forever for something that might have been a usable product.
But maybe there is no shortage of usable arm64 hardware for archive-building purposes, and my perceptions of such shortages for other architectures are also incorrect. If so, I stand corrected.
Paul
On Fri, 7 Dec 2018 08:49:57 +0000 Luke Kenneth Casson Leighton lkcl@lkcl.net wrote:
On Fri, Dec 7, 2018 at 2:39 AM Julie Marchant onpon4@riseup.net wrote:
And as for 32-bit MIPS... just outta luck, I guess, at least until an FSF-approved distro starts supporting it.
all 32-bit OSes are on the ropes, but not for the reason that most people think. it's down to the linker phase of binutils *running out of memory*, due to a default option to keep the object files and the binary being linked in memory:
https://marc.info/?l=binutils-bugs&m=153030202426968&w=2 https://sourceware.org/bugzilla/show_bug.cgi?id=22831
unfortunately this is "not anyone's responsibility" - it's one of those little-known syndromes with underlying / indirect causes.
please please for goodness sake could people spread awareness about this more widely, try out some very large builds (including comprehensive debug build options) with the option advised in that bug report added, and report back on the bug report whether it was successful or not.
*without* that option, given that firefox now needs SEVEN GIGABYTES of resident RAM in order to complete the linker phase (which is obviously impossible on a 32-bit processor), armhf, mips32, and many other 32-bit architectures are just going to get... dropped by distros...
*for no good reason*.
l.
HOLY COW! Luke, there has been no progress on this for ~6 months. I could test 2 systems of my own and comment on the bug report. It would not be soon, probably at least a week, but what's a week if you've waited 6 months? I have an RK3399 board by Firefly and an HP Stream notebook with non-upgradeable, non-replaceable RAM and disk. Perfect candidates for this test. I would prefer to use Gentoo Linux; does it matter?
However, Luke, have you, or anyone else, tried to trigger this with the gold linker? I've no idea how to enable gold in projects, or whether it's a "safe" thing to do. It was beta when I last looked.
FYI: you should see what happens with gcc and graph-tool:

  # bug 453544
  CHECKREQS_DISK_BUILD="6G"
  ...
  # most machines don't have enough ram for parallel builds
  python_foreach_impl run_in_build_dir emake -j1

Yes, that's right, 6GB for 1 gcc process! And I've seen gcc use 7GB!
And that's not all. Have a bunch of files to back up? Dar 2.5.17 (Disk ARchiver, the latest), using bzip2 -9 (NOT xz -9!) compression, single-threaded: RSS part way through was 14.8GB. I have to talk to the dar devs.
RAM is not cheap, what is this FLOSS world coming to?
Sincerely, David
On Sat, Dec 8, 2018 at 3:46 AM David Niklas doark@mail.com wrote:
HOLY COW! Luke, there has been no progress on this for ~6 months. I could test 2 systems of my own and comment on the bug report. It would not be soon, probably at least a week, but what's a week if you've waited 6 months? I have an RK3399 board by Firefly and an HP Stream notebook with non-upgradeable, non-replaceable RAM and disk. Perfect candidates for this test. I would prefer to use Gentoo Linux; does it matter?
not in the slightest. the more the better
However, luke, have you, or anyone else, tried to trigger this with llvm's gold linker?
no.
Yes, that's right, 6GB for 1 gcc process! And I've seen gcc use 7GB!
gcc is fine, as (ok, this is what i was told 15 years ago) there's detection built in to utilise available resident RAM, dynamically.
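(if that detection ever misjudges a small machine, the underlying knobs are still exposed as params - the values here are purely illustrative:

  # cap the garbage-collector heap so cc1 stays within a modest RAM budget
  gcc -c big.c --param ggc-min-expand=10 --param ggc-min-heapsize=32768
)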
And that's not all. Have a bunch of files to back up? Dar 2.5.17 (Disk ARchiver, the latest), using bzip2 -9 (NOT xz -9!) compression, single-threaded: RSS part way through was 14.8GB. I have to talk to the dar devs.
RAM is not cheap, what is this FLOSS world coming to?
the assumption is, you're on an x86 64-bit system, with 32-64GB of RAM and a 3200mbytes/sec NVMe SSD, so why should you care?
l.
On Sat, 8 Dec 2018 05:36:36 +0000 Luke Kenneth Casson Leighton lkcl@lkcl.net wrote:
Yes, that's right, 6GB for 1 gcc process! And I've seen gcc use 7GB!
gcc is fine, as (ok, this is what i was told 15 years ago) there's detection built in to utilise available resident RAM, dynamically.
I've hit OOM with GCC 5 and 7, building Webkit with LTO (on 64-bit).
- Lauri
On Sat, Dec 8, 2018 at 7:56 AM Lauri Kasanen cand@gmx.com wrote:
I've hit OOM with GCC 5 and 7, building Webkit with LTO (on 64-bit).
hmmm that's new.... gcc 4 was fine.
On 12/08/2018 12:36 AM, Luke Kenneth Casson Leighton wrote:
On Sat, Dec 8, 2018 at 3:46 AM David Niklas doark@mail.com wrote:
HOLY COW! Luke, there has been no progress on this for ~6 months. I could test 2 systems of my own and comment on the bug report. It would not be soon, probably at least a week, but what's a week if you've waited 6 months? I have an RK3399 board by Firefly and an HP Stream notebook with non-upgradeable, non-replaceable RAM and disk. Perfect candidates for this test. I would prefer to use Gentoo Linux; does it matter?
not in the slightest. the more the better
Any plans to use rk3399 for a 2nd revision of the eoma68 standard? Or any alternative arm processor? Since risc-v is maybe 3+ years away?
On Sat, Dec 8, 2018 at 7:54 PM zap calmstorm@posteo.de wrote:
Any plans to use rk3399 for a 2nd revision of the eoma68 standard? Or any alternative arm processor? Since risc-v is maybe 3+ years away?
yes. bear in mind it's around USD $5k-10k per design effort. i *may* be able to recover the RK3288 PCB i did. the RK3399 would be nicer.
l.
On 12/09/2018 12:07 AM, Luke Kenneth Casson Leighton wrote:
On Sat, Dec 8, 2018 at 7:54 PM zap calmstorm@posteo.de wrote:
Any plans to use rk3399 for a 2nd revision of the eoma68 standard? Or any alternative arm processor? Since risc-v is maybe 3+ years away?
yes. bear in mind it's around USD $5k-10k per design effort. i *may* be able to recover the RK3288 PCB i did. the RK3399 would be nicer.
My bad for responding to you directly earlier and now here too, but yeah, I think the RK3399 would be better for Meltdown/Spectre protection, etc...
Also, it is faster :)
On Sunday 9. December 2018 05.07.51 Luke Kenneth Casson Leighton wrote:
On Sat, Dec 8, 2018 at 7:54 PM zap calmstorm@posteo.de wrote:
Any plans to use rk3399 for a 2nd revision of the eoma68 standard? Or any alternative arm processor? Since risc-v is maybe 3+ years away?
yes. bear in mind it's around USD $5k-10k per design effort. i *may* be able to recover the RK3288 PCB i did. the RK3399 would be nicer.
Although it's not a 64-bit CPU and is MIPS-based not ARM-based, wasn't the JZ4775 board almost done? And didn't it have 2GB RAM already, before the A20 2GB RAM (and HDMI) rework effort? Of course there is the software distribution issue (if you want FSF endorsement), but there seems to be some demand for similar products.
I follow the Dingoonity scene at a distance and although there is plenty of gadget chasing - people seem to throw themselves at random handhelds with poor screens and obsolete JZ variants - there are some that still wonder about successors to the GCW-Zero handheld or even getting one of the original ones, particularly as it seems that not everyone managed to receive one in that campaign. The enthusiasm is presumably something to do with MIPS-based stuff being used in various consoles of a certain era.
Paul
On Wed, Dec 12, 2018 at 6:39 PM Paul Boddie paul@boddie.org.uk wrote:
On Sunday 9. December 2018 05.07.51 Luke Kenneth Casson Leighton wrote:
On Sat, Dec 8, 2018 at 7:54 PM zap calmstorm@posteo.de wrote:
Any plans to use rk3399 for a 2nd revision of the eoma68 standard? Or any alternative arm processor? Since risc-v is maybe 3+ years away?
yes. bear in mind it's around USD $5k-10k per design effort. i *may* be able to recover the RK3288 PCB i did. the RK3399 would be nicer.
Although it's not a 64-bit CPU and is MIPS-based not ARM-based, wasn't the JZ4775 board almost done?
pretty much - i just couldn't get beyond u-boot, as i had put a 24mhz XTAL in rather than a 48mhz, and i couldn't get the PLLs right to get beyond the SPL loader.
And didn't it have 2GB RAM already, before the A20 2GB RAM (and HDMI) rework effort?
yes.
Of course there is the software distribution issue (if you want FSF endorsement), but there seems to be some demand for similar products.
I follow the Dingoonity scene at a distance and although there is plenty of gadget chasing - people seem to throw themselves at random handhelds with poor screens and obsolete JZ variants - there are some that still wonder about successors to the GCW-Zero handheld or even getting one of the original ones, particularly as it seems that not everyone managed to receive one in that campaign. The enthusiasm is presumably something to do with MIPS-based stuff being used in various consoles of a certain era.
ok - well, i've still got 2 of the cards around somewhere, and i can order some 48mhz XTALs: i have a heat gun so should be ok to put them in.
i have no idea what kinds of volumes to expect.
l.
On Wednesday 12. December 2018 19.19.43 Luke Kenneth Casson Leighton wrote:
On Wed, Dec 12, 2018 at 6:39 PM Paul Boddie paul@boddie.org.uk wrote:
Although it's not a 64-bit CPU and is MIPS-based not ARM-based, wasn't the JZ4775 board almost done?
pretty much - i just couldn't get beyond u-boot, as i had put a 24mhz XTAL in rather than a 48mhz, and i couldn't get the PLLs right to get beyond the SPL loader.
Yes, 48MHz seems to be the EXCLK frequency for this particular generation of JZ SoCs: the JZ4780 uses that frequency for the CI20. As for the PLLs, the clock configuration seems to change from SoC to SoC, but they are probably similar, and documentation, kernel and U-Boot support should be out there for perusal.
[Dingoo, GCW-Zero, MIPS-based consoles and emulation]
ok - well, i've still got 2 of the cards around somewhere, and i can order some 48mhz XTALs: i have a heat gun so should be ok to put them in.
i have no idea what kinds of volumes to expect.
Demand or supply volume? As for demand, it is difficult to say, but I thought some reporting of things from the wider world might help you decide what might be worthwhile doing in future, especially if most of the work is done.
Paul
On Fri, 7 Dec 2018 08:49:57 +0000 Luke Kenneth Casson Leighton lkcl@lkcl.net wrote:
On Fri, Dec 7, 2018 at 2:39 AM Julie Marchant onpon4@riseup.net wrote:
And as for 32-bit MIPS... just outta luck, I guess, at least until an FSF-approved distro starts supporting it.
Until recently, GuixSD supported MIPS (specifically for the Yeeloong laptops, iirc). I believe the support has unfortunately lapsed due to a lack of developer effort, but it could probably be resurrected with some more attention.
all 32-bit OSes are on the ropes, but not for the reason that most people think. it's down to the linker phase of binutils *running out of memory*, due to a default option to keep the object files and the binary being linked in memory:
https://marc.info/?l=binutils-bugs&m=153030202426968&w=2 https://sourceware.org/bugzilla/show_bug.cgi?id=22831
unfortunately this is "not anyone's responsibility" - it's one of those little-known syndromes with underlying / indirect causes.
please please for goodness sake could people spread awareness about this more widely, try out some very large builds (including comprehensive debug build options) with the option advised in that bug report added, and report back on the bug report whether it was successful or not.
I recently added this flag to Guix's qtwebkit build, and it seems to work well so far.
https://git.savannah.gnu.org/cgit/guix.git/commit/?id=ebdb15bc3540b1901f223b...
Unfortunately, a possibly unrelated error is currently causing the build to fail for i686, and dependency failures are preventing the armhf builds from going through:
https://hydra.gnu.org/job/gnu/master/qtwebkit-5.212.0-alpha2.i686-linux https://hydra.gnu.org/job/gnu/master/qtwebkit-5.212.0-alpha2.armhf-linux
While the Guix project has a nice build farm to provide users with pre-built packages, I try, when I can, to make it less painful for folks to build their packages locally.
*without* that option, given that firefox now needs SEVEN GIGABYTES of resident RAM in order to complete the linker phase (which is obviously impossible on a 32-bit processor), armhf, mips32, and many other 32-bit architectures are just going to get... dropped by distros...
*for no good reason*.
I might do some exploration to see if this can fix some of Guix's current build failures for i686 and armhf.
~Eric
On Thu, Dec 06, 2018 at 09:38:43PM -0500, Julie Marchant wrote:
On 12/06/2018 01:39 PM, Luke Kenneth Casson Leighton wrote:
the RYF Criteria are extremely specific: it's not enough to have a non-free section that's "disabled", it must be *not possible* for an average end-user to end up installing non-free software by complete accident, such as by "running a GUI and arbitrarily clicking random buttons".
the absolute worst-case is where an inexperienced end-user, running e.g. synaptic, goes "i have no idea what this does, i'm just gonna click it", happens to enable the "non-free" section, silently and happily triggers an apt-get update, and wow, suddenly there's binary firmware available... all WITHOUT warning the user of the consequences.
[...]
Just a note: the only way to add the non-free section in Debian involves typing the word "non-free" into a particular spot in a text file or, if using the GUI, editing the entry for the Debian repo to add "non-free" to the "Components" textbox in the "Edit Source" dialog box. So it's not really possible to accidentally enable the non-free repository. I honestly tend to disagree with the FSF's classification of Debian for this reason.
I have just checked synaptic on my Debian stable laptop with only the main repos enabled, and could not find a button to enable non-free with just one (or a few) clicks.
But according to the points here: https://www.gnu.org/distros/common-distros.en.html
there are other concerns besides the "number of clicks" and "accidents of Joe Average end-user". So FSF endorsement of Debian is unlikely in the foreseeable future.
Has anyone on this list attended: https://www.fsf.org/events/molly-deblanc-john-sullivan-20180803-hsinchucity-...
The user group of the first (Crowd Supply) computer cards will split into subgroups running the distributions mentioned in the campaign rewards (Debian, Devuan, Parabola, Fedora). Some will try to stay as close as possible to the vanilla state of a fresh install. Some will use a setup working with the linux-sunxi kernel (as previously discussed on this list). I think we can expect a lot of diversity. I am going to be in the Debian group.
That being said, it's the FSF's rules, so Parabola is the only real option regardless of what one might think of the FSF's classification.
I agree.
kind regards Pablo
Just a note: the only way to add the non-free section in Debian involves typing the word "non-free" into a particular spot in a text file or, if using the GUI, editing the entry for the Debian repo to add "non-free" to the "Components" textbox in the "Edit Source" dialog box. So it's not really possible to accidentally enable the non-free repository. I honestly tend to disagree with the FSF's classification of Debian for this reason.
I'm not happy about the Debian<->FSF situation either, but note that if you want your Emacs install to include all the online documentation with which it should normally be accompanied, you have to add the `emacs-common-non-dfsg` package from the `non-free` section.
So, the FSF ends up effectively encouraging Debian users to add the `non-free` section in order to access the docs of several important GNU packages.
There's also the fact that wiki.debian.org includes several places where users are encouraged to install `non-free` packages.
I wish the two associations could take a more pragmatic look at the situation to resolve these disagreements.
Stefan
On Fri, Dec 7, 2018 at 1:11 PM Stefan Monnier monnier@iro.umontreal.ca wrote:
There's also the fact that the wiki.debian.org includes several places where they encourage users to install `non-free` packages.
ohh. yeah. that won't help :) that would be viewed as "endorsement of the installation of non-free software", for sure.
I wish the two associations could take a more pragmatic look at the situation to resolve these disagreements.
well, one simple practical way would be to follow the trick deployed by devuan. what they did is extremely nifty: they created a proxy apt service that, for the most part, is just a "pass-through" to the standard debian mirrors.
one of the issues normally associated with maintaining a debian-augmented distro is that it takes considerably deep pockets to maintain a 160GB+ debian mirror.
a pass-through proxy that pre-vetted the repos, filtering out the entirety of the non-free section, would actively prevent unintentional, mistaken installation of non-free software by end-users blindly following online documentation, "authoritative" wiki pages and so on.
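(what the user-facing side might look like - the proxy host is hypothetical:

  # sources.list entry pointing at a vetting proxy rather than a raw mirror;
  # any request under non-free/ would simply be refused by the proxy
  deb http://libre-proxy.example.org/debian stretch main
)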
ultimately, their nightmare scenario is where *uninformed* end-users en masse end up installing proprietary software *without* being aware of the consequences.
they genuinely don't care about the *intelligent* end-users who make informed decisions and will smash through any and all barriers, limitations and restrictions placed in their path to get at the non-free firmware [or whatever].
l.
Julie Marchant onpon4@riseup.net writes:
That being said, it's the FSF's rules, so Parabola is the only real option regardless of what one might think of the FSF's classification.
The EOMA68-A20 is an armhf board, isn’t it? Since GuixSD is available for armhf, porting it to this board should not be too difficult. GuixSD is also one of the FSF-endorsed GNU+Linux distributions.
[I don’t know if any Guix hackers have received an A20 board for porting (I don’t have one), and if they have I don’t know what the status is.]
-- Ricardo
On Fri, Dec 7, 2018 at 10:42 PM Ricardo Wurmus rekado@elephly.net wrote:
[I don’t know if any Guix hackers have received an A20 board for porting
i donated one to the key developers when i encountered them at fosdem 2017.
l.
Luke Kenneth Casson Leighton lkcl@lkcl.net writes:
On Fri, Dec 7, 2018 at 10:42 PM Ricardo Wurmus rekado@elephly.net wrote:
[I don’t know if any Guix hackers have received an A20 board for porting
i donated one to the key developers when i encountered them at fosdem 2017.
Ah, great. I just chatted with that person and we’re tracking progress on this in the Guix bug tracker at: https://issues.guix.info/issue/33676
-- Ricardo
On Wed, 5 Dec 2018 03:38:59 +0000 Luke Kenneth Casson Leighton lkcl@lkcl.net wrote:
it was quite hilarious to get up and running. https://wiki.parabola.nu/MIPS_Installation - oh. discontinued. that's not going to help *sigh*.
You also have an ARM installation guide[1], however in your case that's probably not the easiest way to deal with it.
There is a way to use LXC to do a cross-pacstrap, however it's a bit tricky.
If you don't have Parabola installed already, install it in a VM on your GNU/Linux distribution. It should be something like this (I didn't test that, though):
1) First install and set up virt-manager.
2) Create a disk image that you can mount externally:
   # cd /var/lib/libvirt/images/
   # qemu-img create -f raw parabola.raw 1G
3) Run the installation with the install media.
4) Boot the installed rootfs and run:
   # pacstrap /mnt
5) Power off the VM and copy the /mnt outside of the VM, then set up a loop device:
   # udisksctl loop-setup -f /var/lib/libvirt/images/parabola.raw
6) Copy the /mnt outside of the rootfs.
7) Set up LXC through virt-manager to be able to use systemd.
8) Install and set up the OpenSSH server inside the LXC system VM.
9) Install the required packages:
   # pacman -S qemu-user-static-binfmt libretools
The trick is then to:
1) Export the host's /proc/sys/fs/binfmt_misc/ to the target's /proc/sys/fs/binfmt_misc/ in virt-manager.
2) Run /usr/lib/systemd/systemd-binfmt by hand (I didn't manage to make the binfmt systemd service start).
Once that is done you can then create a chroot for building packages[2]:
   $ sudo librechroot -A armv7h -n parabola-armv7h make
   # arch-chroot /var/lib/archbuild/parabola-armv7h
Or you could create an installation rootfs:
   # mkdir parabola-armv7h
   # pacstrap -C /usr/share/pacman/defaults/pacman.conf.armv7h parabola-armv7h/
   # arch-chroot /var/lib/archbuild/parabola-armv7h
If you want to use a different kernel (like a deblobbed Allwinner Linux kernel) that is way older, it would be best to:
1) Make a PKGBUILD for it. That can be integrated in Parabola very easily.
2) Build it with libremakepkg, an abstraction on top of makepkg which enables you to build in a chroot, including chroots of different architectures:
   $ cd /path/to/directory/that/has/a/PKGBUILD
   $ sudo libremakepkg -n parabola-armv7h
3) Use OpenRC if systemd doesn't support your old kernel version.
I've verified that with librechroot under a Parabola LXC under a Parabola host, and I got:
   # file [...]/usr/bin/bash
   [...]/usr/bin/bash: ELF 32-bit LSB shared object, ARM, EABI5 version 1 (SYSV), dynamically linked, interpreter /lib/ld-linux-armhf.so.3, for GNU/Linux 3.2.0, BuildID[sha1]=2ed90c858f8b750764b9ef6ebf6e23499640e3c9, stripped
If you need serial console with OpenRC, I've written some documentation for it with another device in mind here: https://libreplanet.org/wiki/Group:Hardware/Freest/e-readers/Aura_H2O_Editio...
I hope that it's still useful, especially if you need to build things. Also note that I'm new to Parabola development, so there might be even easier ways that I don't know of yet.
References:
-----------
[1] https://wiki.parabola.nu/ARM_Installation_Guide
[2] There is no Linux image in it. You can use libremakepkg to build packages in this chroot[3].
[3] https://wiki.parabola.nu/Package_maintainer_guide
Denis.