On Friday 7. December 2018 08.49.57 Luke Kenneth Casson Leighton wrote:
> On Fri, Dec 7, 2018 at 2:39 AM Julie Marchant <onpon4@riseup.net> wrote:
> > And as for 32-bit MIPS... just outta luck, I guess, at least until an FSF-approved distro starts supporting it.
From what people have said about Debian, I can envisage getting PureOS to work
on mipsel. It is just that I haven't dedicated any time to really looking into it yet.
> all 32-bit OSes are on the ropes, but not for the reason that most people think. it's down to the linker phase of binutils *running out of memory*, due to a default option to keep the object files and the binary being linked in memory:
>
> https://marc.info/?l=binutils-bugs&m=153030202426968&w=2
> https://sourceware.org/bugzilla/show_bug.cgi?id=22831
>
> unfortunately this is quotes not anyone's responsibility quotes, it's one of those little-known syndrome / underlying / indirect causes.
It is a consequence of people not really valuing the longevity of hardware. A decade or so ago, people still cared about whether GNU/Linux worked on old hardware: it was even a selling point. Now people are probably prioritising the corporate space where you can just request a new laptop with double the memory from your manager and pretend that it was a cheap way of solving the problem.
> please please for goodness sake could people spread awareness about this more widely, try out some very large builds including comprehensive debug build options, adding the option advised in that bugreport, and report back on the bugreport if it was successful or not.
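Should anyone want to try that on one of the big packages, here is a minimal sketch of what I understand is being asked for, assuming an autotools-style build and GNU ld (the second flag, --reduce-memory-overheads, is a further ld option that trades linking speed for memory; whether it helps will depend on the package):

  export LDFLAGS="-Wl,--no-keep-memory -Wl,--reduce-memory-overheads"
  ./configure
  make

A debug build would presumably also want the usual -g added to CFLAGS, so that the link really does have a large amount of debug information to chew through.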
It's interesting to search for the suggested linker option...
-Wl,--no-keep-memory
...to see where it also came up:
https://bugzilla.redhat.com/show_bug.cgi?id=117868
I like the way the reporter gets an internal compiler error. These things, including linker assertion errors which the user shouldn't see, don't seem to get adequately diagnosed or remedied in my experience: you just get told that "you're holding it wrong" and WONTFIX. Still, since 2004 there should be some test cases by now. ;-)
I see that you have been working hard to persuade people, though:
https://bugzilla.redhat.com/show_bug.cgi?id=117868
It really sounds like a classic database problem, and I wonder what the dataset size is. Of course data processing is faster if you can shove everything into memory, but the trick is to manage volumes that are larger than memory.
Cross-building would be a workaround, but Debian appears fundamentally opposed to that, even though the alternative is the abandonment of architectures. And you even have the new and shiny arm64 support in jeopardy because the appropriate server hardware never seems to get to market (or stick around).
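For anyone who wanted to experiment, here is a rough sketch of the Debian-style cross-build, loosely following the cross-building documentation, assuming the crossbuild-essential tooling and a package whose build dependencies are satisfiable for the target architecture ("somepackage" is just a stand-in, and the profiles may need adjusting):

  sudo dpkg --add-architecture armhf
  sudo apt update
  sudo apt install crossbuild-essential-armhf
  apt source somepackage && cd somepackage-*
  sudo apt-get build-dep -a armhf ./
  dpkg-buildpackage -a armhf -Pcross,nocheck

The point being that the link then runs on a 64-bit build machine with as much address space as it likes, while still producing 32-bit binaries.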
> *without* that option, the fact that firefox now needs SEVEN GIGABYTES of resident RAM in order to complete the linker phase (which is obviously impossible on a 32-bit processor), armhf, mips32, and many other 32-bit architectures are just going to get... dropped by distros...
>
> *for no good reason*.
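To put numbers on that: a 32-bit process has at most 2^32 bytes = 4 GiB of address space, and 32-bit Linux typically reserves a gigabyte or more of that for the kernel, leaving roughly 2-3 GiB for user space. A link step wanting 7 GB resident therefore cannot fit, no matter how much physical RAM or swap the machine has.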
Well, it sounds like the usual practice of greasing up a hippo and hoping that the result will be as lean and as fast as a cheetah. The Web gets ever more complicated and yet many of the tasks we need it for remain the same. Still, the usual advocates of disposable computing would have us continually upgrade to keep being able to do even the simplest things, finding a way of selling us what we already have, as always.
Paul