The startup chime on SGI machines depended on the model, so an Indy had a different one than an Onyx. My first PC (an 80286) also had iconic sounds when it started up. Never forget.
A micro distro is essentially a recovery OS. All three major desktop OSes ship one, or a key combination to activate one. Android has two recovery partitions, I believe; redundancy is key.
If you like the power of snapshots, yes, CoW filesystems like ZFS can show a list of them during boot. An OS like NixOS wouldn't even need that: it works perfectly fine on ext4, including a boot menu with snapshots (generations), a rollback feature, etc.
BEEP BOP. bidiedi-bop. bip.
Presumably some of the things being worked on in Valkey etc. can be upstreamed back to Redis in some form (not entirely straightforward since it is a hard fork with a different license, but concepts can be borrowed back too).
E.g. Valkey has apparently introduced some performance improvements; Redis can implement something similar if it seems worthwhile. Without the fork, those performance ideas might never have surfaced.
1. All teams will henceforth expose their data and functionality through service interfaces.
2. Teams must communicate with each other through these interfaces.
3. There will be no other form of interprocess communication allowed: no direct linking, no direct reads of another team’s data store, no shared-memory model, no back-doors whatsoever. The only communication allowed is via service interface calls over the network.
4. It doesn’t matter what technology they use. HTTP, Corba, Pubsub, custom protocols — doesn’t matter.
5. All service interfaces, without exception, must be designed from the ground up to be externalizable. That is to say, the team must plan and design to be able to expose the interface to developers in the outside world. No exceptions.
6. Anyone who doesn’t do this will be fired.
7. Thank you; have a nice day!
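For illustration only (my own sketch, not anything from the memo; the "inventory" service, its port, and its data are made up): rules 1 and 3 boil down to something like this, where another team can only reach the data over the network, never by reading the data store directly.

    # Hypothetical "inventory" team service: other teams may only call this
    # over the network (rule 3), never read the underlying data store directly.
    import json
    from http.server import BaseHTTPRequestHandler, HTTPServer

    # Stand-in for the team's private data store; nobody else touches it.
    _INVENTORY = {"sku-123": {"name": "widget", "stock": 42}}

    class InventoryHandler(BaseHTTPRequestHandler):
        def do_GET(self):
            # Externalizable interface (rule 5): plain HTTP + JSON, nothing
            # that assumes the caller sits inside the company.
            sku = self.path.strip("/")
            item = _INVENTORY.get(sku)
            body = json.dumps(item if item else {"error": "not found"}).encode()
            self.send_response(200 if item else 404)
            self.send_header("Content-Type", "application/json")
            self.send_header("Content-Length", str(len(body)))
            self.end_headers()
            self.wfile.write(body)

    if __name__ == "__main__":
        # Other teams call e.g. GET http://inventory.internal:8080/sku-123
        HTTPServer(("0.0.0.0", 8080), InventoryHandler).serve_forever()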
I’ve laid my actual criticisms bare multiple times on this site.
1) The tight coupling of the userland with systemd means that while systemd replaced a plethora of inits (not just sysv), the target is now too large to be replaced even if better ones exist. systemd is the last init Linux will have, and it increases the barrier to porting software to other unix-likes.
2) The non-optional/default features have been buggy and difficult to replace. Journald has no replacement; systemd-networkd is one of the most common causes of failure for my desktop, due in part to being flaky when DNSSEC is not available.
3) The overreliance on dbus turns away from "the unix philosophy" ;) of text as a universal communication medium, everything is a file, etc.
There are more, but these are my main ones. Throwing away the corpus of admin muscle memory is not an issue anymore.
To be blunt, I was using Fedora when systemd was coming out; I know how it works intimately because it was constantly broken. Part of what gives me pause is that I know how utterly undebuggable it is when it fails: it just hits those failure cases less frequently now that the world has been forced into using it. It becomes battle hardened.
Oh, and the obvious criticism against the maintainers, who have been very unapologetic about bugs and undesired behaviour, in a much worse way than Apple's "you're holding it wrong".
Did you ever consider that it’s also free software nerds who are the most likely to hate being told what to do?
This was Fedora 15, in 2011. I would say using this as a baseline qualifies as living in the past?
Not true in the case of the US, which famously adopted a culture of universal literacy earlier than the rest of the world. By the mid-19th century, literacy rates among whites were not much different than they are today. It is one of the bright spots of American history; they took literacy very seriously for complicated historical reasons. Their per-capita book consumption was also the highest in the world by a very large margin in those days, which lends evidence to this.
It may or may not be relevant to your point, but at least in the US the idea that the average person was illiterate is ahistorical. They were the best-read population in the world 150 years ago, and took some pride in that.
But the States do have among the lowest literacy rates in the West: less than 80% were considered literate in 2024, compared to almost 99% in the EU (with a range from 94% to almost 100%).
Going over the article, it mentions three constraints pertinent to obtaining timber, and it seems that their intersection is empty.
The constraints for obtaining timber are:
1. A UK source is preferred (due to national pride?)
2. Price needs to be very cheap given the archeologists' non-profit budget
3. Veteran trees are necessary for a reconstruction, but these specifically are a scarce and precious resource.
It seems likely that they either relax (1) or get funding/sponsorship to relax (2); (3) does not admit relaxation.
My translation from Wikipedia[0]: In 1975 the agency in charge of the forest notified the head of the Navy that the timber was ready for harvest. The Navy declined the offer, since they had transitioned to new materials.
0: https://sv.wikipedia.org/wiki/Ekplanteringarna_p%C3%A5_Visin...
Without those, it's hard to get a sense for whether particular design choices were really critical or not.
> a fly instance is hardwired to one physical server and thus cannot fail over
I'm having trouble understanding how else this is supposed to work. I understand that live migration is a thing, but even in those cases a VM is "hardwired" to some physical server, no?
You can run your workload (in this case a VM) on top of a scheduler, so if one node goes down the workload is just spun up on another available node.
You will have downtime, but it will be limited.
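Roughly what I mean by a scheduler, as a toy sketch (not Fly's, Nomad's, or Kubernetes' actual logic; the node names and health check are made up): a reconciliation loop notices a dead node and re-places its workloads on a healthy one, which is where the limited downtime comes from.

    # Toy reconciliation loop: if a workload's node is unhealthy, re-place it
    # on any healthy node. Real schedulers are far more involved, but this is
    # the core idea behind "the workload is just spun up on another node".
    import time

    nodes = {"node-a": True, "node-b": True}           # node -> healthy?
    placements = {"vm-1": "node-a", "vm-2": "node-b"}  # workload -> node

    def node_is_healthy(node: str) -> bool:
        # Placeholder health check; a real one would probe the machine.
        return nodes.get(node, False)

    def reconcile() -> None:
        for workload, node in list(placements.items()):
            if node_is_healthy(node):
                continue
            # The downtime happens here: the VM is down until the loop notices.
            replacement = next((n for n, ok in nodes.items() if ok), None)
            if replacement is not None:
                print(f"re-placing {workload}: {node} -> {replacement}")
                placements[workload] = replacement  # then boot the VM there

    if __name__ == "__main__":
        nodes["node-a"] = False  # simulate a node failure
        while True:
            reconcile()
            time.sleep(5)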
Organizations like Apple, which service billions of devices, cannot rely on a "push data to the device only when something has updated" type of system, as such a system doesn't operate at their scale. They have to operate a system where individual clients are assumed to have an unreliable connection to the service, and where the client does the legwork of checking for new data stored in a centralized system.
This is what you are seeing in the article. Domains like gdmf.apple.com, which govern device management, are where the declarative device management system checks Apple's various databases to see whether devices need to update their configuration.
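Conceptually it is something like the following client-side check-in loop (a hypothetical sketch: the endpoint, fields, and polling interval are invented for illustration and are not Apple's actual protocol).

    # Hypothetical device-side check-in loop: the client does the legwork of
    # asking "did anything change for me?" instead of the server pushing state
    # to billions of devices. Endpoint and payload shape are invented.
    import json
    import time
    import urllib.request

    CHECKIN_URL = "https://mdm.example.com/declarations"  # not a real Apple endpoint
    last_token = None  # opaque marker for "what I last saw"

    def apply_configuration(declarations):
        # Stand-in for actually applying updated device-management declarations.
        print(f"applying {len(declarations)} updated declarations")

    def check_for_updates():
        global last_token
        req = urllib.request.Request(f"{CHECKIN_URL}?since={last_token or ''}")
        with urllib.request.urlopen(req) as resp:
            payload = json.load(resp)
        if payload.get("token") != last_token:
            last_token = payload["token"]
            apply_configuration(payload.get("declarations", []))

    if __name__ == "__main__":
        while True:
            try:
                check_for_updates()  # tolerate an unreliable connection
            except OSError:
                pass                 # just try again on the next cycle
            time.sleep(3600)         # the client polls; the server never pushes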