pleroma.debian.social

Instead of 32-bit going straight into retro-computing (e.g. with Linux & Firefox ditching it), maybe we should keep it alive as some kind of "alternative mainstream", given the insane availability of hardware and the fact that it's both fast enough for most purposes and not that hard to backport anything but the most wasteful "modern" software. That's a "permacomputing" I could get behind, instead of the Cold War computing cosplay.

@mhd Also - do we have a complete set of patent-free hardware specs now for CPUs from ~20 years ago? Given a spare chunk of change (100M?), could we just set up a fab to make chips and sell them at break-even basically indefinitely? Zero-growth "good enough" computing?

@kittylyst @mhd If Intel had its wits about it, I reckon there is still mileage in the i860 and i960 32-bit RISC chip designs that it owns.

There are other now-discontinued 32-bit CPUs that one might be able to get away with re-implementing.

@mhd I have NetBSD on line 42 for you. It sounds like it's urgent.

@kittylyst @mhd

64-bit RISC-V would be good enough

@lproven Yeah, I've been thinking about that a lot more recently, it might be a great "rallying point".

@mhd @lproven Depending on what problems you want to solve there's a lot of existing stuff basically lying around; no need to even go back to 32-bit. The Intel Core 2 is 20 years old, and the common PCs being liquidated by the pallet for a few dollars are mostly newer. The real obstacle to supporting newer web pages is that they are built by and for people with fast hardware; there's nothing intrinsic to modern web standards that is slow. Modern browser perf is actually quite impressive.

@mhd @lproven I think to make slower hardware work it's necessary to concede support for web sites built for big hardware, and focus on compatibility and performance for other applications. Nearly everything besides the worst React web monstrosities can work well with a little bit of love, protected by a cabal of interested users and developers who can ensure that the new code gets tested and tuned for the older hardware.

@mhd @lproven Code built by and for people on newer hardware inadvertently creeps toward bloat. It's easy to miss that a music player uses 100x the CPU and RAM it needs when you have a typical 4-core 16GB RAM laptop, but the user with an old netbook will feel it for sure. Same in other categories. But modern kernels, compilers, compositors, etc. should actually be _better_ on the old hardware than what was available in their original era. In my testing that's true, it's just fiddly.

@mirth @mhd This ought to be true, but in my extensive testing with what could be called mainstream alternative OSes, such as Linux and the BSDs, they're substantially slower on ~20-year-old kit (Vista era) than anything except WinXP or OS/2. :-(

We should have put effort into a dead-easy, no-options, self-configuring one-disk OS to make an old PC into an X terminal, and a server OS that delivers essential (ChromeOS-style) services to a fleet of them.

All the tech was there.

Now, it's too late.

@lproven @mhd What I'm saying is that the performance regressions are often easily fixed, generally tractable with small amounts of engineering effort, and that the underlying infrastructure is substantially better than 20 years ago. That's not to say things haven't regressed; that's my larger point: things do regress because there aren't enough maintainers who treat older hardware as a priority.

@burnoutqueen @kittylyst @mhd Probably significantly cheaper and several times more sustainable too

@lproven @mhd Let's take a small example: Gnome's System Monitor uses a ton of CPU just to draw itself. Probably nobody has noticed because their machines are fast, and a rework to perhaps not smooth-update the graphs at 60Hz would be a big improvement.

Another: Gnome's app launcher layout is semi-broken on small screens, the app titles are unreadable. Again, probably <10 line fix to adjust the layout.

And: As I noted, Mutter has removed older OpenGL support, just due to lack of maintainers.
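On the System Monitor example, the rework mirth describes is roughly this pattern: drive the graph from a coarse GLib timer instead of a per-frame tick callback. A minimal sketch in C/GTK, assuming nothing about System Monitor's actual internals; GraphState, refresh_graph and start_graph_updates are made-up names for illustration, while g_timeout_add and gtk_widget_queue_draw are the standard GLib/GTK calls:

    #include <gtk/gtk.h>

    /* Hypothetical per-graph state; a real app would also keep the
     * ring buffer of samples here. */
    typedef struct {
        GtkWidget *graph_area;   /* the GtkDrawingArea that draws the graph */
    } GraphState;

    /* Called once per second by the GLib main loop. */
    static gboolean refresh_graph(gpointer data)
    {
        GraphState *state = data;
        /* ...sample /proc and append to the history buffer here... */
        gtk_widget_queue_draw(state->graph_area);  /* one redraw per sample */
        return G_SOURCE_CONTINUE;                  /* keep the timer running */
    }

    static void start_graph_updates(GraphState *state)
    {
        /* 1000 ms timer rather than a frame-clock callback:
         * ~1 redraw per second instead of 60. */
        g_timeout_add(1000, refresh_graph, state);
    }

Because the timer ticks on wall-clock time rather than the compositor's frame clock, an idle graph costs essentially nothing, which is the whole point on slow hardware.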

@mirth @mhd GNOME is one of the worst examples going... of everything.

Of currently-maintained stuff, Xfce is the go-to for me. But there have been smaller and lighter options in the past: LXDE until it went dormant, and before that, EDE as an example.

One of my personal favourites was the Rox Desktop:

https://rox.sourceforge.net/desktop/

... but it's inextricably tied up with its own packaging system, 0install.

https://0install.net/

And I believe a fair chunk of Python 2.

@lproven @mhd Looking at the tools, though, there is some "free" help. Compilers are much better (both faster and more correct). More libraries, even common stuff like image decoding, have SIMD optimizations. People building compositors understand latency better now, and there is actually work to address it. The kernel itself is much smarter about things like scheduling for HyperThreading, or preemptible scheduling.

@lproven @mhd I think the net picture is yes, if you install a current distro on old hardware the experience might be quite slow, but no, it's not unfixable if a handful of people care.

The underlying reason I don't think we have good support for the 20 year old hardware is that there is so much 10 year old hardware that is essentially the same price but much more capable.

@lproven @mhd XFCE is nice too; my experience is Gnome has a bigger user and developer base, so it tends to be more complete. The Linux desktop options all have a lot of rough edges so it's a matter of what compromises you want to make, but my experience on Gnome has generally been pretty good over the last 20 or 25 years.

I'm a big proponent of reuse but that ship has already sailed
no one is actually using 32-bit x86 anymore outside of embedded systems that haven't seen a single software change for 30+ years
there's heaps of perfectly good 64-bit hardware still available that can't run Windows but is still supported by other operating systems and actually has a chance of being usable

I'd focus on not leaving "early" 64-bit hardware in the dust instead; Sandy Bridge remains the gold standard to this day and that's x86_64-v2 (not *all* SB hardware is still good, but for example my dad still uses my Ivy Bridge CPU with modern Windows and it's very snappy)
currently pretty much all operating systems target x86_64-v1, but some distros are considering bumping that to v3, which is Haswell
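If you want to check which of those psABI levels a given box actually meets, recent GCC (and newer Clang) accept the level names directly in __builtin_cpu_supports(); a small sketch, assuming a compiler new enough to know the "x86-64-v2".."x86-64-v4" strings:

    #include <stdio.h>

    int main(void)
    {
        __builtin_cpu_init();  /* initialise CPU detection; cheap and safe here */
        /* The builtin requires string literals, hence no loop. */
        printf("x86-64 (v1): %s\n", __builtin_cpu_supports("x86-64")    ? "yes" : "no");
        printf("x86-64-v2  : %s\n", __builtin_cpu_supports("x86-64-v2") ? "yes" : "no");
        printf("x86-64-v3  : %s\n", __builtin_cpu_supports("x86-64-v3") ? "yes" : "no");
        printf("x86-64-v4  : %s\n", __builtin_cpu_supports("x86-64-v4") ? "yes" : "no");
        return 0;
    }

A Sandy Bridge box should report yes for v1 and v2 and no for v3/v4; Haswell and later add v3 (AVX2 etc.).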

RE: https://tilde.zone/@mhd/115153931810193530

@novenary my hot take about "32-bit systems that you actually use" is: install Haiku or NetBSD.

@izzy tbh, yeah

@novenary they're both committed to supporting 32-bit for different reasons
for Haiku, it's compatibility with BeOS software. for NetBSD, supporting weird hardware is their thing. both are safe bets and still useful

@izzy @novenary also NetBSD feels much quicker, and is much less memory intensive than the Linux kernel. NetBSD is happy with as little as 32MB RAM; some would even call it “usable”. Linux really wants 64MB or more nowadays, and it will barely boot with that amount

@domi @izzy still really funny that a Nintendo 64 port was merged when that has 8MB at most

@novenary @izzy since they control the whole platform tree they probably changed how much memory various buffers allocate. I haven’t been able to run x86 Linux 5.x with 16MB, let alone 8…

when I see those hacks nowadays I wish they dedicated the effort to helping NetBSD instead, would be much more fun to see. allegedly you can make x86 NetBSD run on 4MB RAM, or so i’ve been told by the kernel config files from 2021 :)

@mirth @mhd The trouble is that "complete" is an awfully nebulous term. I had this argument several times with colleagues when they challenged my use of Xfce. Why didn't I use a richer desktop? Why didn't I want full function? They were power users with more demanding needs than I, they said, and Xfce wasn't rich enough.

So I showed them that everything that their desktop did, mine did. I had multi-window exposé. I had integrated search. I could pop open my app launcher with 1 key and type 2 letters to find any app. I had full text searching. I had 3D compositing across 3 monitors. It automatically signed into a half dozen accounts for me, with SSH keychain unlocked.

I had to install some of those things just to be able to demonstrate to the doubters that Xfce does that stuff. They didn't believe me unless shown.

I'm not aware of any significant function you can do with KDE or GNOME that you can't do with Xfce, and Xfce does it in half the RAM without needing a half-written experimental replacement display server developed using the CADT model.

@domi @izzy yeah even OpenWrt dropped official support for 32MB devices years ago (stripped-down images will boot but they're getting less and less useful)
it's not strictly a kernel issue though, userspace is also getting fatter over time

@novenary @izzy i’m thinking exclusively about the kernel; as for userspace, I can pick alternatives that take up less memory. booting into a small busybox environment is one way to do so (see the sketch below)

same thing: try booting a 20-year-old userspace on a current kernel. It’ll work (or at least most of it), but it will use more memory (and in some cases, more cycles will be wasted due to various mitigations)
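On the busybox point: the whole userspace for such a memory experiment can be a static shell plus a PID-1 stub like this. A minimal sketch, assuming a busybox /bin/sh exists on the root filesystem; build it static (gcc -static) and boot with init= pointing at the binary:

    #include <stdio.h>
    #include <sys/mount.h>
    #include <unistd.h>

    /* Minimal init for a tiny test userspace: mount the pseudo
     * filesystems, then hand the console over to a busybox shell. */
    int main(void)
    {
        mount("proc",  "/proc", "proc",  0, NULL);  /* so free/top/ps work */
        mount("sysfs", "/sys",  "sysfs", 0, NULL);

        char *argv[] = { "/bin/sh", NULL };
        execv("/bin/sh", argv);

        perror("execv /bin/sh");  /* only reached if the exec failed */
        return 1;
    }

With nothing else resident, whatever "free" reports inside that shell is a reasonable proxy for the kernel's own footprint.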

@lproven @mirth I agree, but it makes me a bit sad to see how easy it is to get a "complete" desktop experience these days. And basically it's the same reason why all the talk about faster compilers, profilers, etc. doesn't work: Everything's done in the browser (which is a foreign object and slow).

Sometimes you might need to install another big application, but surprise, it's a browser, too.

So, yes, there's no worry anymore that third-party applications that fit into the desktop environment are missing or look odd because they use a different UI toolkit. Even file managers matter less and less, so soon an "environment" is what now, a window manager, dock/tray and a settings application?

I can't believe I miss the days when people were excited about COM/CORBA/OpenDoc etc. Now we don't even have XEMBED.

@mhd @lproven @mirth

I get quite frustrated with this. Siloing is pushed by mobile ecosystems with full-screen apps and, especially, ad-supported ones. There’s a huge incentive for apps to become platforms because that drives lock in. For ad-supported things, your revenue is proportional to the time people have the app open, so the incentive is to make people stay in the app as much as possible, which discourages interoperability.

None of these incentives apply in a Free Software ecosystem. A F/OSS project that interoperates with another project and makes it 10% more useful to 1% of users is a win.

But the biggest F/OSS ecosystems are intent on trying to duplicate software models that exist to support economic systems that are irrelevant to their model, where users get neither the majority of the benefits of alternate models nor the benefits they would get from the proprietary ecosystems. And then they wonder why people aren’t adopting them.

@lproven @mhd I don't track the exact features in different desktops, and nothing on Linux has been close enough to Mac for me to switch in the past. When I do use Linux GUIs Gnome has historically had the fewest sharp edges for me. I don't think it matters much what gaps affect me specifically though...