skywal_l · 4 months ago
Removing nommu feels wrong to me. Being able to run Linux on hardware simple enough that anybody sufficiently motivated could write an emulator for it helps us, as individuals, remain in control. The more complex things are, the less freedom we have.

It's not a well-argued thought, just a nagging feeling.

Maybe we need a simple posix os that would run on a simple open dedicated hardware that can be comprehended by a small group of human beings. A system that would allow communication, simple media processing and productivity.

These days it feels like we are at a tipping point for open computing. It feels like being a frog in hot water.

dragontamer · 4 months ago
I don't think software emulation is very important.

Let's look at the lowest-end chip in the discussion. Almost certainly the SAM9x60... it is a $5 ARMv5 chip with an MMU, supporting DDR2/LPDDR/DDR3/LPDDR3/PSRAM: a variety of embedded RAM, 'old desktop RAM', and mobile RAM.

Yes, it's 32-bit, but it runs at 600MHz with gigabits of RAM support. And you can seriously mass-produce a computer under $10 with the chip (so long as you can handle 4-layer PCBs that break out the 0.75mm-pitch BGA). As in, the reference design with DDR2 RAM is a 4-layer design.

There are a few Rockchips and such that are (rather large) TQFP that are arguably easier. But since DDR RAM is BGA I think it's safe to assume BGA level PCB layout as a point of simplicity.

---------

Everything smaller than this category of 32-bit / ARMv5 chips (be it Microchip SAM9x60, or competing Rockchips or AllWinner) is a microcontroller wholly unsuitable for running Linux as we know it.

If you cannot reach 64MB of RAM, Linux is simply unusable. Even for embedded purposes. You really should be using FreeRTOS or something similar at that point.

---------

Linux drawing the line at 64MB hardware built within the last 20 years is... reasonable? Maybe too reasonable. I mean, I love the fact that the SAM9x60 is still usable for modern and new designs, but somewhere you have to draw the line.

ARMv5 is too old to even compile something like Node.js. I'm serious when I say this stuff is old. It's an environment already alien to typical Linux users.

zgs · 4 months ago
Out by a factor of five or more.

A $1 Linux capable ARM: https://www.eevblog.com/forum/microcontrollers/the-$1-linux-...

I'd expect that there are even cheaper processors now, since that was eight years ago.

reactordev · 4 months ago
We need accessible open hardware. Not shoehorning proprietary hardware into generic standards it never actually followed.

Open source is one thing, but open hardware - that’s what we really need. And not just a framework laptop or a system76 machine. I mean a standard 64-bit open source motherboard, peripherals, etc that aren’t locked down with binary blobs.

AnthonyMouse · 4 months ago
> I mean a standard 64-bit open source motherboard, peripherals, etc that aren’t locked down with binary blobs.

The problem here is scale. Having fully-open hardware is neat, but then you end up with something like that Blackbird PowerPC thing which costs thousands of dollars to have the performance of a PC that costs hundreds of dollars. Which means that only purists buy it, which prevents economies of scale and prices out anyone who isn't rich.

Whereas what you actually need is for people to be able to run open code on obtainium hardware. This is why Linux won and proprietary Unix lost in servers.

That might be achievable at the low end with purpose-built open hardware, because then the hardware is simple and cheap and can reach scale because it's a good buy even for people who don't care if it's open or not.

But for the mid-range and high end, what we probably need is a project to pick whichever chip is the most popular and spend the resources to reverse engineer it, so we can run open code on the hardware that is already in everybody's hands. Which makes it easier to do it again, because the second time it's not reverse engineering every component of the device: it's noticing that v4 is just v3 with a minor update, or that the third most popular device shares 80% of its hardware with the most popular one, so adding it is only 20% as much work as the first. Which is how Linux did it on servers and desktops.

bigiain · 4 months ago
Bunnie Huang has been doing a lot of work on this:

Open hardware you can buy now: https://www.crowdsupply.com/sutajio-kosagi/precursor

The open OS that runs on it: https://betrusted.io/xous-book/

A secret/credential manager built on top of the open hardware and open software: https://betrusted.io

His blog section about it: https://www.bunniestudios.com/blog/category/betrusted/precur...

"The principle of evidence-based trust was at work in our decision to implement Precursor’s brain as an SoC on an FPGA, which means you can compile your CPU from design source and verify for yourself that Precursor contains no hidden instructions or other backdoors. Accomplishing the equivalent level of inspection on a piece of hardwired silicon would be…a rather expensive proposition. Precursor’s mainboard was designed for easy inspection as well, and even its LCD and keyboard were chosen specifically because they facilitate verification of proper construction with minimal equipment."

numpad0 · 4 months ago
Lots of SoCs are "open" in the sense that complete documentation, including programming manuals, is available. With a couple of man-centuries of developer time each, you could port Linux to those SoCs, but that doesn't count as being "open". On the other hand, there is a lot of straight-up proprietary hardware that is considered "open", like the Raspberry Pi.

Which means, "open" has nothing to do with openness. What you want is standardization and commoditization.

There are practically no x86 hardware that require model-specific custom images to boot. There are practically no non-x86 hardware that don't require model-specific custom images to boot. ARM made a perceptible amount of effort in that segment with the Arm SystemReady Compliance Program, which absolutely nobody in any serious business cares about, and which would only cover ARM machines even if it worked.

IMO, one of the problems with efforts coming from the software side is the bloated nature of desktop software stacks and the widespread bad experiences with UEFI. They aren't going to upgrade RAM to adopt bloated software that is bigger than the application itself just because that's the new standard.

rwmj · 4 months ago
Until we have affordable photolithography machines (which would be cool!), hardware is never really going to be open.
TheAmazingRace · 4 months ago
We kinda have this with IBM POWER 9. Though that chip launched 8 years ago now, so I'm hoping IBM's next chip can also avoid any proprietary blobs.
tonyhart7 · 4 months ago
I would love to work with hardware. If you can foot my bill then I'd be happy to do that, since open source software is one thing but open source hardware needs considerable investment that you can't ignore from the start.

also this is what happened to Prusa: everyone just takes the design and outsources the manufacturing to somewhere in China, which is fine, but if everybody does that, there is no funding to develop the next iteration of the product (someone has to foot the bill)

and there isn't enough, sadly; we live in reality after all

pjc50 · 4 months ago
> Open source is one thing, but open hardware - that’s what we really need

This needs money. It is always going to have to pay the costs of being niche, lower performance, and cloneable, so someone has to persuade people to pay for that. Hardware is just fundamentally different. And that's before you get into IP licensing corner cases.

eric__cartman · 4 months ago
Those operating systems already exist. You can run NetBSD on pretty much anything (it currently supports machines with a Motorola 68k CPU, for example). Granted, many of those machines still have an MMU IIRC, but everything is still simple enough to be comprehended by a single person with some knowledge of systems programming.
jmclnx · 4 months ago
FWIW, Linux is not the only OS looking into dropping 32bit.

FreeBSD is dumping 32 bit:

https://www.osnews.com/story/138578/freebsd-15-16-to-end-sup...

OpenBSD has this quote:

>...most i386 hardware, only easy and critical security fixes are backported to i386

I tend to think that means 32bit's days, at least on x86, are numbered.

https://www.openbsd.org/i386.html

I think DragonflyBSD never supported 32bit

For 32bit, I guess NetBSD may eventually be the only game in town.

kimixa · 4 months ago
NetBSD doesn't support any devices without an MMU.

I think people here are misunderstanding just how "weird" and hacky trying to run an OS like Linux on those devices really is.

duskwuff · 4 months ago
nommu is a neat concept, but basically nobody uses it, and I don't see that as likely to change. There's no real use case for using it in production environments. RTOSes are much better suited for use on nommu hardware, and parts that can run "real" Linux are getting cheaper all the time.

If you want a hardware architecture you can easily comprehend - and even build your own implementation of! - that's something which RISC-V handles much better than ARM ever did, nommu or otherwise.

speed_spread · 4 months ago
There are plenty of use cases for Linux on microcontrollers that will be impossible if nommu is removed. The only reason we don't see more Linux on MCUs is the lack of RAM. The RP2350 is very close! Running Linux makes development much easier than a plain RTOS.
MisterTea · 4 months ago
> Maybe we need a simple posix os that would run on a simple open dedicated hardware that can be comprehended by a small group of human beings.

Simple and POSIX would be a BSD like NetBSD or OpenBSD.

This is why I gravitated to Plan 9. Overall a better design for a networked world, and it can be understood by a single developer. People can and have maintained their own forks. It's very simple and small, and cross-platform support was baked in from day one. 9P makes everything into an I/O socket organized as a tree of named objects. Thankfully it's not POSIX, which IMO is not worth dragging along for decades. You can port Unix things with libraries. It also abandons the typewriter terminal and instead uses graphics. A fork, 9front, is not abandoning 32 bit any time soon AFAIK. I netboot an older industrial computer that is a 400MHz Geode (32 bit x86) with 128 MB RAM and it runs 9front just fine.

It's not perfect and lacks features, but that stands to reason for any niche OS without a large community. Figure out what is missing for you and work on fixing it - patches welcome.

pajko · 4 months ago
Why do you need a full-blown Linux for that? Many of the provided features are overkill for such embedded systems. Both NuttX and Zephyr provide POSIX(-like) APIs, and NuttX has an API quite similar to the Linux kernel's, so it should be somewhat easier to port missing stuff (I haven't tried to do that; the project I was working on got cancelled).
Denvercoder9 · 4 months ago
If you want a POSIX OS, nommu Linux already isn't it: it doesn't have fork().
em3rgent0rdr · 4 months ago
Just reading about this... it turns out nommu Linux can use vfork(), which, unlike fork(), shares the parent's address space. Another drawback is that vfork's parent process gets suspended until the child exits or calls execve().
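For illustration, a minimal sketch of the resulting spawn pattern on nommu (the /bin/ls target is just an example, not anything from the thread): the child borrows the parent's address space, so it must do nothing except exec or _exit, and the parent only resumes once that has happened.

    #include <unistd.h>
    #include <sys/wait.h>
    #include <cstdio>

    // vfork()/execve() spawn pattern usable on nommu Linux: the child shares
    // the parent's memory and the parent is suspended until the child execs
    // or exits, so the child must not return or modify shared state.
    int spawn_ls(void) {
        pid_t pid = vfork();
        if (pid < 0) {
            perror("vfork");
            return -1;
        }
        if (pid == 0) {
            // Child: go straight to exec; anything else is unsafe here.
            execl("/bin/ls", "ls", "-l", (char *)nullptr);
            _exit(127);  // reached only if exec failed; never call exit()/return
        }
        // Parent: resumes only after the child has exec'd or exited.
        int status = 0;
        waitpid(pid, &status, 0);
        return status;
    }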
trebligdivad · 4 months ago
There are some other open OSs, like Zephyr, NuttX and Contiki - so maybe they're the right thing to use for the nommu case rather than Linux?
RantyDave · 4 months ago
Zephyr is not an OS in the conventional sense; it's more a library you link to so the application can "go".
guerrilla · 4 months ago
xv6 already runs on RISC-V.
JoshTriplett · 4 months ago
I don't think it makes sense to run Linux on most nommu hardware anymore. It'd make more sense to have a tiny unikernel for running a single application, because on nommu, you don't typically have any application isolation.
duskwuff · 4 months ago
> on nommu, you don't have any application isolation

That isn't necessarily the case. You can have memory protection without an MMU - for instance, most ARM Cortex-M parts have an MPU which can be used to restrict a thread's access to memory ranges or to hardware. What it doesn't get you is memory remapping, which is necessary for features like virtual memory.
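For concreteness, a minimal sketch of what that looks like on an ARMv7-M part (Cortex-M3/M4/M7). The register addresses are the architectural ones; the choice of region, the attribute bits, and the idea of fencing unprivileged code off from the peripheral space are illustrative assumptions, and real setups add further regions plus the DSB/ISB barriers and memory-attribute (TEX/C/B/S) settings omitted here.

    #include <cstdint>

    // ARMv7-M MPU registers (system control space).
    static volatile std::uint32_t &MPU_CTRL = *reinterpret_cast<volatile std::uint32_t *>(0xE000ED94);
    static volatile std::uint32_t &MPU_RNR  = *reinterpret_cast<volatile std::uint32_t *>(0xE000ED98);
    static volatile std::uint32_t &MPU_RBAR = *reinterpret_cast<volatile std::uint32_t *>(0xE000ED9C);
    static volatile std::uint32_t &MPU_RASR = *reinterpret_cast<volatile std::uint32_t *>(0xE000EDA0);

    // Block unprivileged (thread-mode) code from the 0x40000000 peripheral
    // space: access checks, but no address translation. Protection without
    // remapping is exactly the MPU/MMU distinction.
    void fence_off_peripherals(void) {
        MPU_RNR  = 0;                 // select region 0
        MPU_RBAR = 0x40000000;        // region base: start of peripheral space
        MPU_RASR = (1u << 28)         // XN: no instruction fetch from this region
                 | (1u << 24)         // AP=001: privileged access only
                 | (28u << 1)         // SIZE: 2^(28+1) = 512 MiB region
                 | 1u;                // enable the region
        MPU_CTRL = (1u << 2) | 1u;    // PRIVDEFENA (privileged background map) + enable MPU
    }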

lproven · 3 months ago
> Maybe we need a simple posix os that would run on a simple open dedicated hardware that can be comprehended by a small group of human beings.

That was part of the plan for Minix 3.

Clean separation in a microkernel, simple enough for teaching students, but robust.

But Intel used it and gave nothing back, and AST retired. :-(

pjmlp · 4 months ago
There are plenty of FOSS POSIX-like OSes for such systems.

Most likely I won't be around this realm when that takes shape, but I predict the GNU/Linux explosion replacing UNIX was only a phase in computing history; eventually, when everyone responsible for its success fades away, other agendas will take over.

It is no accident that the alternatives I mention are all based on copyleft licenses.

chasil · 4 months ago
This is a foreseeable cataclysm for me, as I retire next year; the core of our queueing system is 64-bit clean (K&R) since it compiled on Alpha, but our client software is very much not.

This is a young man's game, and I am very much not.

noobermin · 4 months ago
You're not alone; I feel the same way. I think the future, if Linux really does need to remove nommu, would be a fork. I'm not sure if there's the community for that, though.
Blammmoklo · 4 months ago
Supporting 32bit is not 'simple' and the difference between 32bit hardware and 64bit hardware is not big.

The industry has a lot of experience doing so.

In parallel, the old hardware is still supported, just not by the newest Linux kernel. Which should be fine anyway, because either you are not changing anything on that system anyway, or you have your whole tool stack available to just patch it yourself.

But the benefit would be an easier and smaller Linux kernel, which would probably benefit a lot more people.

Also, if our society is no longer able to produce chips commercially and we lose all the experience people have, we probably have a lot bigger issues as a whole society.

But I don't want to deny that having the simplest possible way of making a small microcontroller yourself (it doesn't have to be fast or super easy, just doable) would be very cool and could already solve a lot of issues if we ever needed to restart society from Wikipedia.

mort96 · 4 months ago
The comment you're responding to isn't talking about 32 vs 64 bit, but MMU vs no MMU.
cout · 4 months ago
ELKS can still run on systems without an MMU (though not microcontrollers, AFAIK).
snvzz · 4 months ago
ELKS runs on 16-bit x86, including the 8086.

Note ELKS is not Linux.

There's also Fuzix.

762236 · 4 months ago
Removing nommu makes the kernel simpler and easier to understand.
ohdeargodno · 4 months ago
Nothing prevents you from maintaining nommu as a fork. The reality of things is, despite your feelings, people have to work on the kernel, daily, and there comes a point where your tinkering needs do not need to be supported in main. You can keep using old versions of the kernel, too.

Linux remains open source, extendable, and someone would most likely maintain these ripped out modules. Just not at the expense of the singular maintainer of the subsystem inside the kernel.

stephen_g · 4 months ago
> there comes a point where your tinkering needs do not need to be supported in main.

Linux's master branch is actually called master. Not that it really matters either way (hopefully most people have realised by now that it was never really 'non-inclusive' to normal people), but it pays to be accurate.

jnwatson · 4 months ago
It is amazing that big endian is almost dead.

It will be relegated to the computing dustbin like non-8-bit bytes and EBCDIC.

Main-core computing is vastly more homogeneous than when I was born almost 50 years ago. I guess that's a natural progression for technology.

goku12 · 4 months ago
> It is amazing that big endian is almost dead.

I wish the same applied to written numbers in LTR scripts. Arithmetic operations would be a lot easier to do that way on paper or even mentally. I also wish that the world would settle on a sane date-time format like the ISO 8601 or RFC 3339 (both of which would reverse if my first wish is also granted).

> It will be relegated to the computing dustbin like non-8-bit bytes and EBCDIC.

I never really understood those non-8-bit bytes, especially the 7-bit byte. If you consider the multiplexer and demux/decoder circuits that are used heavily in CPUs, FPGAs and custom digital circuits, the only number that really makes sense is 8. It's what you get from a 3-bit selector code; the other nearby values are 4 and 16. Why did they go for 7 bits instead of 8? I assume it was a design choice made long before I was even born. Does anybody know the rationale?

idoubtit · 4 months ago
> I also wish that the world would settle on a sane date-time format like the ISO 8601

IIRC, in most countries the native format is D-M-Y (with varying separators), but some Asian countries use Y-M-D. Since those formats are easy to distinguish, that's no problem. That's why Y-M-D is spreading in Europe for official or technical documents.

There's mainly one country which messes things up...

pavon · 4 months ago
There are a lot of computations where 256 is too small a range but 65536 is overkill. When designers of early computers were working out how many digits of precision their calculations needed for their intended purpose, 12 bits commonly ended up being the sweet spot.

When your RAM is vacuum tubes or magnetic core memory, you don't want 25% of it to go unused just to round your word size up to a power of two.

jcranmer · 4 months ago
I don't know that 7-bit bytes were ever used. Computer word sizes have historically been multiples of 6 or 8 bits, and while I can't say as to why particular values were chosen, I would hypothesize that multiples of 6 and 8 work well for representation in octal and hexadecimal respectively. For many of these early machines, sub-word addressability wasn't really a thing, so the question of 'byte' is somewhat academic.

For the representation of text of an alphabetic language, you need to hit 6 bits if your script doesn't have case and 7 bits if it does have case. ASCII ended up encoding English into 7 bits and EBCDIC chose 8 bits (as it's based on a binary-coded decimal scheme which packs a decimal digit into 4 bits). Early machines did choose to use the unused high bit of an ASCII character stored in 8 bits as a parity bit, but most machines have instead opted to extend the character repertoire in a variety of incompatible ways, which eventually led to Unicode.

creshal · 4 months ago
> both of which would reverse if my first wish is also granted

But why? The brilliance of 8601/3339 is that string sorting is also correct datetime sorting.
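It is easy to see why: ISO 8601 orders fields from most to least significant, so lexicographic comparison is also chronological comparison, at least for a uniform fixed-width format in a single time zone. A small sketch with made-up timestamps:

    #include <algorithm>
    #include <iostream>
    #include <string>
    #include <vector>

    int main() {
        std::vector<std::string> stamps = {
            "2024-01-02T09:30:00Z",
            "2023-12-31T23:59:59Z",
            "2024-01-02T08:00:00Z",
        };
        std::sort(stamps.begin(), stamps.end());  // plain string sort
        for (const auto &s : stamps)
            std::cout << s << '\n';               // comes out in chronological order
    }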

formerly_proven · 4 months ago
Computers never really used 7-bit bytes, just as 5-bit bytes were uncommon; but both 6-bit and 8-bit bytes were common in their respective eras.
blahedo · 4 months ago
I believe that 10- and 12-bit bytes were also attested in the early days. As for "why": the tradeoffs are different when you're at the scale that any computer was at in the 70s (and 60s), and while I can't speak to the specific reasons for such a choice, I do know that nobody was worrying about scaling up to billions of memory locations, and also using particular bit combinations to signal "special" values was a lot more common in older systems, so I imagine both were at play.
globular-toast · 4 months ago
In Britain the standard way to write a date has always been, e.g., "12th March 2023" or 12/3/2023 for short. I don't think there's a standard for where to put the time, though; I can imagine it both before and after.

Doing numbers little-endian does make more sense. It's weird that we switch to RTL when doing arithmetic. Amusingly the Wikipedia page for Hindu-Arabic numeral system claims that their RTL scripts switch to LTR for numbers. Nope... the inventors of our numeral system used little-endian and we forgot to reverse it for our LTR scripts...

Edit: I had to pull out Knuth here (vol. 2). So apparently the original Hindu scripts were LTR, like Latin, and Arabic is RTL. According to Knuth the earliest known Hindu manuscripts have the numbers "backwards", meaning most significant digit at the right, but soon switched to most significant at the left. So I read that as starting in little-endian but switching to big-endian.

These were later translated to Arabic (RTL), but the order of writing numbers remained the same, so became little-endian ("backwards").

Later still the numerals were introduced into Latin but, again, the order remained the same, so becoming big-endian again.

1718627440 · 4 months ago
> I also wish that the world would settle on a sane date-time format like the ISO 8601 or RFC 3339 (both of which would reverse if my first wish is also granted).

YYYY-MM-DD to me always feels like a timestamp, while when I want to write a date, I think of a name (for me, DD. MM. YYYY).

vrighter · 4 months ago
7 bits was chosen to reduce transmission costs, not storage costs, because you send 12.5% less data. Also, because computers usually worked on 8-bit bytes, the 8th bit could be used as a parity bit, where extra reliability was needed.
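For illustration, a sketch of that 7-data-bits-plus-even-parity framing (the function names are mine, not from any standard library): the eighth bit is set so that the byte has an even number of 1 bits, letting the receiver detect any single-bit error.

    #include <cstdint>

    // Pack 7 data bits plus an even-parity bit into the top bit of a byte.
    std::uint8_t add_even_parity(std::uint8_t seven_bits) {
        std::uint8_t data = seven_bits & 0x7F;   // keep the 7 data bits
        std::uint8_t parity = 0;
        for (int i = 0; i < 7; ++i)
            parity ^= (data >> i) & 1;           // XOR of the data bits
        return data | std::uint8_t(parity << 7); // parity goes in bit 7
    }

    // A received byte is consistent if the XOR over all 8 bits is zero.
    bool parity_ok(std::uint8_t byte) {
        std::uint8_t acc = 0;
        for (int i = 0; i < 8; ++i)
            acc ^= (byte >> i) & 1;
        return acc == 0;
    }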

Deleted Comment

ndiddy · 4 months ago
Big endian will stay around as long as IBM continues to put in the resources to provide first-class Linux support on s390x. Of course if you don’t expect your software to ever be run on s390x you can just assume little-endian, but that’s already been the case for the vast majority of software developers ever since Apple stopped supporting PowerPC.
metaphor · 4 months ago
> ...that’s already been the case for the vast majority of software developers ever since Apple stopped supporting PowerPC.

For better or worse, PowerPC is still quite entrenched in the industrial embedded space.

Aardwolf · 4 months ago
Now just UTF-16 and non '\n' newline types remaining to go
syncsynchalt · 4 months ago
Of the two, UTF-16 is much less of a problem; it's trivially[1] and losslessly convertible.

[1] Ok I admit, not trivially when it comes to unpaired surrogates, BOMs, endian detection, and probably a dozen other edge and corner cases I don't even know about. But you can offload the work to pretty well-understood and trouble-free library calls.
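For illustration, the non-trivial part is just surrogate-pair arithmetic; a sketch of the decode side with the unpaired-surrogate edge case surfaced as an error (the function name and the choice to return an error rather than U+FFFD are assumptions, not from the thread):

    #include <cstddef>
    #include <optional>
    #include <vector>

    // Decode one code point from UTF-16 code units starting at index i,
    // advancing i on success. Returns nullopt for a lone or mismatched
    // surrogate, or a truncated pair.
    std::optional<char32_t> decode_utf16(const std::vector<char16_t> &u, std::size_t &i) {
        if (i >= u.size()) return std::nullopt;
        char16_t w1 = u[i++];
        if (w1 < 0xD800 || w1 > 0xDFFF) return w1;           // ordinary BMP code unit
        if (w1 > 0xDBFF) return std::nullopt;                 // low surrogate with no lead
        if (i >= u.size()) return std::nullopt;               // truncated pair
        char16_t w2 = u[i];
        if (w2 < 0xDC00 || w2 > 0xDFFF) return std::nullopt;  // high surrogate not followed by low
        ++i;
        return 0x10000 + ((char32_t(w1) - 0xD800) << 10) + (char32_t(w2) - 0xDC00);
    }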

hypeatei · 4 months ago
UTF-16 will be quite the mountain as Windows APIs and web specifications/engines default to it for historical reasons.
jeberle · 4 months ago
UTF-16 arguably is Unicode 2.0+. It's how the code point address space is defined. Code points are either 1 or 2 16-bit code units. Easy. Compare w/ UTF-8 where a code point may be 1, 2, 3, or 4 8-bit code units.

UTF-16 is annoying, but it's far from the biggest design failure in Unicode.

augustk · 4 months ago
> Now just UTF-16 and non '\n' newline types remaining to go

Also ISO 8601 (YYYY-MM-DD) should be the default date format.

dgshsg · 4 months ago
We'll have to deal with it forever in network protocols. Thankfully that's rather walled off from most software.
newpavlov · 4 months ago
As well as in a number of widely deployed cryptographic algorithms (e.g. SHA-2), which use BE for historical reasons.
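In practice, portable SHA-2 implementations just do the byte order explicitly when loading message words, so the host's endianness never matters; a minimal sketch (the helper name is mine):

    #include <cstdint>

    // Load a 32-bit big-endian word, as SHA-256 does when filling its
    // message schedule, independent of the host's native byte order.
    std::uint32_t load_be32(const unsigned char *p) {
        return (std::uint32_t(p[0]) << 24) |
               (std::uint32_t(p[1]) << 16) |
               (std::uint32_t(p[2]) << 8)  |
                std::uint32_t(p[3]);
    }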
delduca · 4 months ago
Good call-out; I have just removed some #ifdefs about endianness from my engine.
mort96 · 4 months ago
I have some places in some software where I assume little endian for simplicity, and I just leave in a static_assert(std::endian::native == std::endian::little) to let future me (or future someone else) know that a particular piece of code must be modified if it is ever to run on a not-little-endian machine.
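For illustration, a sketch of that guard together with the kind of shortcut it protects (the helper below is a made-up example, not the poster's code): the assert costs nothing at runtime and turns a silent porting bug into a compile error.

    #include <bit>       // std::endian (C++20)
    #include <cstdint>
    #include <cstring>

    static_assert(std::endian::native == std::endian::little,
                  "this code assumes a little-endian host; review before porting");

    // Reads a little-endian on-disk field straight into a host integer,
    // skipping the byte swap. Valid only because of the assert above.
    std::uint32_t read_le32_fast(const unsigned char *p) {
        std::uint32_t v;
        std::memcpy(&v, p, sizeof v);
        return v;
    }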
chasil · 4 months ago
"...with the CPU running in big endian mode."

Hey, you! You're supposed to be dead!

https://wiki.netbsd.org/ports/evbarm/

shmerl · 4 months ago
On the userland side, there is good progress on using thunking to run 32-bit Windows programs in Wine on Linux without the need for 32-bit libraries (the only edge case remaining is thunking 32-bit OpenGL, which lacks the extensions needed for acceptable performance). But the same can't be said for a bunch of legacy 32-bit native Linux stuff like games, which commonly have no source to rebuild from.

Maybe someone can develop such thunking for the legacy Linux userland.

eric__cartman · 4 months ago
How many of those legacy applications where the source is not available actually need to run natively on a modern kernel?

The only thing I can think of is games, and the Windows binary most likely works better under Wine anyways.

There are many embedded systems like CNC controllers, advertisement displays, etc... that run those old applications, but I seriously doubt anyone would be willing to update the software in those things.

shmerl · 4 months ago
Yeah, games are, I'd guess, the most common case, or at least one that enough people care about.
snarfy · 4 months ago
I run a game server running a 32bit binary from 2004. I guess I will not be upgrading in the future.
cwzwarich · 4 months ago
It shouldn’t be difficult to write a binary translator to run 32-bit executables on a 64-bit userspace. You will take a small performance hit (on top of the performance hit of using the 32-bit architecture to begin with), but that should be fine for anything old enough to not be recompiled.
Fulgen · 4 months ago
In some ways, Windows already does that too - the 32-bit syscall wrappers switch into a 64-bit code segment (https://aktas.github.io/Heavens-Gate) so the 64-bit ntdll copy can call the 64-bit syscall.
shmerl · 4 months ago
I would guess so, but I haven't seen anyone developing that so far.

Deleted Comment

5- · 4 months ago
most of those games would have windows builds?

that said, i sometimes think about a clean-room reimplementation of e.g. the unity3d runtime -- there are so many games that don't even use native code logic (which still could be supported with binary translation via e.g. unicorn) and are really just mono bytecode but still can't be run on platforms for which their authors didn't think to build them (or which were not supported by the unity runtime at the time of the game's release).

shmerl · 4 months ago
> most of those games would have windows builds?

Yeah, that's a reasonable workaround, as long as it doesn't hit that OpenGL problem above (it now mostly affects DX7-era games, since they don't have a Vulkan translation path). Hopefully it can be fixed.

xeonmc · 4 months ago
Perhaps a new compatibility layer, call it LIME -- LIME Is My Emulator
ninkendo · 4 months ago
LIME Isn’t Merely an Emulator
dontlaugh · 4 months ago
In practice, the path for legacy software on Linux is Wine.
majorchord · 4 months ago
I have heard people say the only stable ABI on Linux is Win32.
hinkley · 4 months ago
Win32S but the other way around.

Win64S?

greatgib · 4 months ago
It's the end of an era. Linux used to be this thing that ran on pretty much anything and let you salvage old computers.

I think that there is a shitload of old desktop and laptop computers from 10 to 15 yrs that are still usable only with a linux distribution and that will not be true anymore.

Now Linux will be in the same lane as OS X and Windows, running after the last shiny new things, and being like: if you want it, buy a new machine that will support it.

arp242 · 4 months ago
You can still run an older kernel. There are the "Super-long-term support" releases that have 10+ year support cycles. Some distros may go even further.

If you install 6.12 today (via e.g. Debian 13) then you'll be good until at least 2035. So removing it now de-facto means it will be removed in >10 years.

And as the article explains, this mostly concerns pretty old systems. Are people running the latest kernel on those? Most of the time probably not. This is really not "running after the last shiny thing". That's just nonsensical extreme black/white thinking.

threatripper · 4 months ago
Won't these super old kernels basically turn into forks after some time, maintained and even extended for special purposes?
account42 · 4 months ago
That's assuming your old machines will never need to interface with new peripherals or new network protocols or new filesystems or anything that could require changes only found in newer kernels. It's not far removed from saying that Windows still supports them as well because you can always use Windows ME for the rest of the millennium.
Dylan16807 · 4 months ago
Desktops and laptops from 10 to 15 years ago are basically all 64 bit. By the time this removal happens, we'll be at 20 years of almost all that hardware being 64 bit. By the time hardware becomes "retro", you don't need the latest kernel version.

Lots of distros already dropped 32 bit kernel support and it didn't cause much fuss.

account42 · 4 months ago
20 years isn't all that much though. We maintain houses for much longer than that, so why should we accept such short lifetimes for computers?
mananaysiempre · 4 months ago
Ten- or fifteen-year-old hardware is still perfectly serviceable now for some modern applications. (The decade-long Intel monopoly drought of 5% generational improvements to CPU performance has a great deal to do with that.) So this is not as strong of an argument as the same sentence would be if it were said ten years ago.
greatgib · 4 months ago
I'm quite sure that even a few years after 2020 there were still Atom- or Celeron-powered laptops that did not support 64 bits.

Maybe it is not that the architecture wasn't capable so much as that it was restricted or limited by Intel and co. for these CPUs.

creshal · 4 months ago
> I think that there is a shitload of old desktop and laptop computers from 10 to 15 yrs that are still usable only with a linux distribution and that will not be true anymore.

For mainstream laptops/desktops, the 32 bit era ended around 2006 (2003, if you were smart and using Athlon 64s instead of rancid Pentium 4).

Netbooks and other really weak devices held out a few years longer, but by 2010, almost everything new on the market, and a good chunk of the second-hand market, was already 64 bits.

markjenkinswpg · 4 months ago
In my experience, the 10-15 year old salvaged computer that still works okay with GNU/Linux is increasingly a 64 bit machine.

Case in point: I'm writing this on an x86_64 laptop that was given to me for free about a year ago, with a CPU released in 2012.

I personally gave away an x86_64 desktop unit years ago that was even older; it might have had DDR1 memory.

Circa 2013 my old company was gifted a x86_64 motherboard with DDR2 memory that ended up serving as our in-office server for many years. We maxed the RAM (8GB) and at some point bought a CPU upgrade on ebay that gave us hardware virtualization extensions.

octoberfranklin · 4 months ago
The Apple Watch has 32-bit memory addressing (and 64-bit integer arithmetic -- it's ILP32). Granted it doesn't run Linux, but it's a very very modern piece of hardware, in production, and very profitable.

Same for WASM -- 32-bit pointers, 64-bit integers.

Both of these platforms have a 32-bit address space -- both for physical addresses and virtual addresses.

Ripping out support for 32-bit pointers seems like a bad idea.

mrpippy · 4 months ago
With watchOS 26, S9/10 watches will be going to normal LP64 ARM64.

RAM limitations were one reason to use arm64_32, but a bigger reason is that the first watches were only ARMv7 (32-bit) so by sticking with 32-bit pointers, Apple was able to statically recompile all the 3rd party (ARMv7) apps from LLVM bitcode to arm64_32.

https://www.macrumors.com/2025/06/16/watchos-26-moves-apple-...

int_19h · 4 months ago
64-bit memories are already in wasm 3.0 draft (and in any case this isn't a platform where you'd need the Linux kernel running).
SAI_Peregrinus · 4 months ago
WASM isn't being used to run the Linux kernel, it's run by an application on top of an OS. That OS can be 64-bit, the WASM VMs don't care.

Deleted Comment

jacquesm · 4 months ago
Funny, I remember 32 bits being 'the future'; now it is the distant past. I think they should keep it all around, and keep it buildable. Though I totally understand the pressure to get rid of it, I think having at least one one-size-fits-all OS is a very useful thing to have. You never know what the future will bring.
justin66 · 4 months ago
There's always NetBSD. I'm pretty sure that supports x86 as far back as the 80486, and 32-bit SPARC as far back as... something I wouldn't want to contemplate.
nektro · 4 months ago
important to remember that this fate isn't going to happen again with 64bit
petcat · 4 months ago
Just because support would be removed from current and new versions doesn't mean the old code and tarballs are just going to disappear. Can dust off an old 32 bit kernel whenever you want
SlowTao · 4 months ago
There's always the option to fork it. Linux Legacy? Linux 32? Linux grey beard!
smitty1e · 4 months ago
Technologies have lifecycles. Film at 11.
Mathnerd314 · 4 months ago
Linux has become the dominant operating system for a wide range of devices, even though other options like FreeRTOS or the BSD family seem more specialized. The widespread adoption of Linux suggests that a single, versatile operating system may be more practical than several niche ones. However, the decision to drop support for certain hardware because it complicates maintenance, as seen here, would seem to contradict the benefit of a unified system. I wouldn't be surprised if it really just results in more Linux forks - Android is already at the point of not quite following mainline.
charcircuit · 4 months ago
>Android is already at the point of not quite following mainline.

It follows the latest LTS, which I think is reasonable, especially since phone vendors want to have support for their devices for several years.

shasheene · 4 months ago
I think this is premature and a big mistake for Linux.

Distros and the kernel steadily dropping older x86 support over the last few years never causes an outcry, but it's an erosion of what made Linux great. Especially for non-English-speaking people in less developed countries.

Open-source maintenance is not an obligation, but it's sad there are not more people pushing to maintain support. Especially for the "universal operating system" Debian, which was previously the gold standard in architecture support.

I maintain a relatively popular live Linux distro based on Ubuntu, and due to user demand I will look into a NetBSD variant to continue support (as suggested in this thread), potentially supporting legacy 586 and 686 too.

Though a Debian 13 "Trixie" variant with a custom-compiled 686 kernel will be much easier than switching to NetBSD, it appears that NetBSD has more commitment to longer-term arch support.

It would be wonderful to develop systems (e.g. emulation) to make it practical to support architectures as close to indefinitely as possible.

It does feel like a big end-of-an-era moment for Linux and distros here, with the project following the kind of decision-making of big tech companies rather than the ideals of computer enthusiasts.

Right now these deprecation decisions will directly make me spend time working at layers of abstraction I wasn't intending to in order to mitigate the upstream deprecations of the kernels and distros. The reason I have used the kernel and distros like Debian has been to offload that work to the specialist maintainers of the open-source community.