I've been running testing/trixie since the end of 2023 or so. (I generally always run testing, but stick with stable for ~6 months after stabilization, in order to avoid lots of package churn in new-testing.)
It's been what I expect from Debian: boring and functional. I've never run into an issue where the system wouldn't boot after an update (I usually update once every 2-4 weeks when on testing), and for the most part everything has worked without the need to fix broken packages or utter magic apt incantations.
Debian has always been very impressive to me. They're certainly not perfect, but what they can do based on volunteers, donations, and sponsors, is amazing.
This is exactly why I use Debian when I install Linux. I want something that will keep chugging along, yet may not have the most cutting edge software. I can take my time with the system, and know that it is solid.
If I need newer software that isn't in their package repository, I understand that I have the ability to compile what I need, or at least make an active decision to modify my system to run what I want. Basically, the possibility of instability is a conscious choice for me, which I do sometimes take.
I just need boring stability to wildly experiment in isolation
Why do you want to switch to Ubuntu?
I'm sorry I had to, I'll show myself out
Exactly what I expect from the latest Debian. Boring and not working. Too many hacks by people who cannot work with upstream and have no idea what they are doing, but who have their own ideas of a "proper" layout and package management.
14 different schemes, multiplied by some of them acting slightly differently in every version. Sure, you can pin a scheme, but that only fixes their internal back and forth, it is only possible via the kernel cmdline, and there is no guarantee for how long the old versions will stay available. Since they have deprecated much more invasive things in the past (e.g., cgroupv1), I'd expect them to also drop older versions here, breaking one's naming again.
And sure, one can pin interfaces to custom names, but why should anybody have to bother with such things?!
I like systemd a lot, but this is one of the things they fumbled big time, and they seemingly still aren't done.
Pinning interfaces by their MAC to a short, usable name would, for example, have been much more stable than doing it by PCI slot, which changes rather often with firmware updates, new hardware, or a newer kernel exposing new features.
This works well for all but virtual functions, but those are sub-devices of their parent interface anyway and can just get named with a suffix added to the parent name.
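(For anyone who does want the MAC-based pinning described above: a minimal systemd.link sketch, where the file name, interface name, and MAC address are all placeholders:)

    # /etc/systemd/network/10-lan0.link -- example only
    [Match]
    MACAddress=aa:bb:cc:dd:ee:ff

    [Link]
    Name=lan0

    # applied by udev when the device is (re)initialized, e.g. after a reboot;
    # picking a name outside the kernel's eth* namespace avoids clashes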
> Since they have deprecated much more invasive things in the past (e.g., cgroupv1), I'd expect them to also drop older versions here, breaking one's naming again
Note that the naming scheme is in control of systemd, not the kernel. Even if it is passed on the kernel commandline.
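(Concretely, "pinning it via the kernel cmdline" looks roughly like this on a GRUB-based Debian system; the scheme version is only an example, see the systemd.net-naming-scheme man page for valid values:)

    # /etc/default/grub
    GRUB_CMDLINE_LINUX="net.naming-scheme=v252"

    # then regenerate the GRUB config and reboot
    update-grub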
This (the old MAC-based persistent naming) worked brilliantly in Debian for more than a decade, had almost zero downside, and just did what it was asked. I went through 3+ dist-upgrades, for the first time in my life, without a NIC name change.
It was deprecated for this nonsense in systemd.
Yes, there were edge cases in the Debian scheme. Yet it did work with VMs (as most VMs kept the same MAC in their config files), and it was easy to maintain if you wanted a 'fresh' start: just rm the pin file in the udev dir. Done.
Again, it worked wonderfully on every VM and every bare-metal system I worked with.
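(For readers who never saw it: the old Debian mechanism was a generated udev rules file with one line per MAC, roughly like the simplified sketch below, and "starting fresh" really was a single rm.)

    # /etc/udev/rules.d/70-persistent-net.rules (simplified; MAC is a placeholder)
    SUBSYSTEM=="net", ACTION=="add", ATTR{address}=="aa:bb:cc:dd:ee:ff", NAME="eth0"

    # wipe the pinning and let the file be regenerated on the next boot
    rm /etc/udev/rules.d/70-persistent-net.rules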
One of the biggest problems with systemd is that it seems to be developed by people who have no real-world, industrial-scale admin experience. It's almost like a bunch of devs got together, couldn't understand why things were "so confusing", and just figured "Oh, it must be a mistake".
Nope.
It's called covering edge cases and ensuring things stay stable for decades, because Linux and the init system are the bottom of the stack. The top of the stack changes like the wind in spring, but the bottom of the stack must be immensely stable and consensus-driven; change there must be, I repeat, stable.
Systemd just doesn't "get" that.
Welcome back, eth0. :)
The "stable" interface naming scheme is a scam. And I have proof. Test upgraded a VM today, from bookworm to trixie. And guess what. Everything worked, except after reboot the network interface was unconfigured? Guess what. The name changed...
The best use of AI I've gotten so far is having it explain to me how to manage a Fedora Server's core infrastructure "the right way". Which files, commands, etc. to permanently or temporarily change network, firewall, DNS, NTP settings.
I use Debian Stable on almost all the systems I use (one is stuck on 10/Buster due to MoinMoin). I installed Trixie in a container last week, using an LXC container downloaded from linuxcontainers.org [1].
Three things I noted on the basic install:
1) Ping didn't work due to changed security settings (iputils-ping) [2]
2) OpenSSH server was installed as systemd socket activated and so ignored /etc/ssh/sshd_config*. Maybe this is something specific to the container downloaded.
3) Systemd-resolved uses LLMNR as a name-lookup alternative to DNS, and pinging a firewalled host failed because the lookup seemed to be LLMNR accessing TCP port 5355. I disabled LLMNR (one way to do that persistently is sketched below).
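(A sketch of the LLMNR part, assuming you want the change to survive package upgrades; the drop-in file name is arbitrary:)

    mkdir -p /etc/systemd/resolved.conf.d
    printf '[Resolve]\nLLMNR=no\n' > /etc/systemd/resolved.conf.d/no-llmnr.conf
    systemctl restart systemd-resolved
    resolvectl status    # check the LLMNR flag in the per-link output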
Generally, Debian version updates have been successful for me for a few years now, but I always have a backup, and always read the release notes.
[1] https://linuxcontainers.org
[2] https://www.debian.org/releases/trixie/release-notes/issues....
> 2) OpenSSH server was installed as systemd socket activated and so ignored /etc/ssh/sshd_config.
sshd still reads /etc/ssh/sshd_config at startup. As far as I know, this is hard-coded in the executable.
What Debian has changed happens before the daemon is launched: the service is socket activated.
So, _if you change the default port of sshd_ in its config, then you have to change the activation:
- either enable the sshd service directly, without socket activation,
- or modify the sshd.socket file (`systemctl edit sshd.socket`), which has port 22 by default (a sketch of this option follows below).
Since Debian already has an environment file (/etc/default/ssh), which is loaded by this service, the port could be set in a variable there and picked up by the socket activation. But then it would conflict with OpenSSH's own files. This is why I've always disliked /etc/default/ as a second level of configuration in Debian.
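(A sketch of the second option. The socket unit is called ssh.socket on Debian, sshd.socket on some other distros; the port number is an example:)

    # systemctl edit ssh.socket
    # this opens a drop-in such as /etc/systemd/system/ssh.socket.d/override.conf:
    [Socket]
    ListenStream=
    ListenStream=2222

    # the empty ListenStream= clears the default port 22 before adding the new one
    systemctl daemon-reload
    systemctl restart ssh.socket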
I suspect that systemd people are looking at this thread in perplexity, and probably doing their thing (that I've seen over the years) of regarding the world of Debian as being amazingly behind the times in places.
The SSH server being a socket unit with systemd doing all of the socket parallelism-limiting and accepting was one of the earliest examples of socket activation ever given in systemd. It was in one of Lennart Poettering's earliest writings on the subject back in 2011.
* https://0pointer.de/blog/projects/inetd.html
And even that wasn't the earliest discussion of this way of running SSH by a long shot, as this was old news even before systemd was invented. One can go back years earlier than even Barrett's, Silverman's, and Byrnes's SSH: The Secure Shell: The Definitive Guide published in 2005, which is one of many places explaining that lots of options in sshd_config get ignored when SSH is started by something else that does all of the socket stuff.
Like inetd.
This has been the case ever since it has been possible to put an SSH server together with inetd in "nowait" mode. Some enterprising computer historian might one day find out when the earliest mention of this was. It might even be in the original 1990s INSTALL file for SSH.
The original SSH server was daemonized by default, but inetd operation was always supported. The INSTALL file said this in 1995:
The server is not started using inetd, because it needs to generate
the RSA key before serving the connection, and this can take about a
minute on slower machines. On a fast machine, and small (breakable)
key size (< 512 bits) it may be feasible to start the server from
inetd on every connection. The server must be given "-i" flag if
started from inetd.
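(For reference, the inetd hookup the INSTALL file is talking about was a single line in /etc/inetd.conf, roughly:)

    # /etc/inetd.conf -- "nowait" forks one sshd -i per incoming connection
    ssh  stream  tcp  nowait  root  /usr/sbin/sshd  sshd -i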
And, of course, SSH was designed to replace rlogin and rsh - both of which ran from inetd as standard. As you say, socket-activation of sshd is simply following very long-standing Unix practice: it's not at all novel, and there's no reason to believe that it's any more risky than any other method of running it.
As you say, network programs being activated by a master network daemon is, of course, part of the history of Unix. It's ironic to see knee-jerk complaints about it.
systemd-resolved is an effing nightmare when combined with network-manager. These two packages consistently manage to stomp all over DNS resolution in their haste to be the one true source of name resolution. I tried enabling systemd-resolved as part of an effort to do DNS over HTTPS and I ended up with zero DNS. I swear that /etc/resolv.conf plus helper scripts is more consistent and easier.
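(One way out of that particular fight, assuming NetworkManager: tell it to stop managing DNS entirely and handle /etc/resolv.conf yourself; the drop-in file name is arbitrary:)

    # /etc/NetworkManager/conf.d/no-dns.conf
    [main]
    dns=none

    # then
    systemctl restart NetworkManager
    resolvectl status    # if resolved is still running, confirm which server each link actually uses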
It’s why I always say, in the typical “systemd bad” threads, that systemd the init system is great; it’s the systemd-* everything-elses that give it a bad name.
I want systemd nowhere fucking near my NTP or DNS config.
It's the only long-term solution to that problem that I endorse. Every attempt at working with the system, whether via systemd.network, resolved.conf or resolvconf, has always eventually bitten me one way or another.
If you have specific issues, please file them over at systemd's GitHub issue tracker.
What is the rationale for changing OpenSSH into a socket activated service? Given that it comes with issues, I assume the benefits outweigh the downsides.
> Given that it comes with issues, I assume the benefits outweigh the downsides.
Any change can introduce regressions or break habits. The move toward socket activation for sshd is part of a larger change in Debian. I don't think the Debian maintainers changed that just for the fun of it. I can think of two benefits:
+ A service can restart without interruption, since the socket will buffer the requests during the restart.
+ Dependencies are simpler and faster (waiting for a service to start and accept requests is costly).
My experience is that these points largely outweigh the downsides (the only one I can think of is that the listening port can now be configured in two places).
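(For readers who haven't seen the mechanism, a minimal sketch of a socket-activated pair; the unit names, port, and binary path are made up:)

    # /etc/systemd/system/demo.socket
    [Socket]
    ListenStream=9000

    [Install]
    WantedBy=sockets.target

    # /etc/systemd/system/demo.service
    [Service]
    ExecStart=/usr/local/bin/demo-server
    # systemd hands the already-listening fd to the daemon (sd_listen_fds),
    # so connections arriving during a service restart simply queue on the socket

    # enable only the socket; the service starts on the first connection
    systemctl enable --now demo.socket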
> Given that it comes with issues, I assume the benefits outweigh the downsides.
I think it doesn't outweigh the downsides. Let's not forget this:
"OpenSSH normally does not load liblzma, but a common third-party patch used by several Linux distributions causes it to load libsystemd, which in turn loads lzma."
The "XZ utils backdoor" nearly backdoored every single distro running systemd.
People (including those who tried to plant this backdoor) are going to say: "systemd has nothing to do with the backdoor" but I respectfully disagree.
systemd is one heck of a Rube Goldberg piece of machinery: the attack surface is gigantic, seeing as systemd's tentacles reach everywhere.
With a tinfoil hat on, one could think the goal of systemd was, precisely, to make sure the most complicated backdoors could be inserted here and there: "Let's have openssh use a lib it doesn't need at all because somehow we'll call libsystemd from openssh".
Genius idea if you ask me.
What could possibly go wrong with systemd "now just opening a port for openssh" uh? Nothing I'm sure.
Now, that said, I'm very happy that we've now got things like the Talos Linux distribution (an ultra-minimal, immutable distro meant to run Kubernetes with as few executables as possible and, of course, no systemd), and containers using Alpine Linux or, even if Debian-based, a minimal system with (supposedly) only one process running (and, once again, no systemd).
Containerization is one way out of systemd.
I can't wait for a good systemd-less hypervisor: then I can kiss Microsoft goodbye (systemd is a Microsoft technology, based on Microsoftism, by a now Microsoft employee).
Thanks but no thanks.
Talos distro, systemd-less containers: I want more of this kind of mindset.
The future looks very nice.
systemd lovers should just throw the towel in and switch to Windows: that's what they actually really want and it's probably no better than they deserve.
> 2) OpenSSH server was installed as systemd socket activated and so ignored /etc/ssh/sshd_config*. Maybe this is something specific to the container downloaded.
Doesn't the .socket unit point to a .service unit? Why would using a socket be connected to which config sshd reads?
Have been using Trixie on my laptop for a year (?) now, and it has been a very positive experience. I had bought a brand new, very recent ThinkPad, not considering that the relevant drivers would not be in Debian Stable yet. Now on Trixie, having a relatively recent version of everything KDE Plasma is a blessing. Things have changed so much, for the better, particularly regarding Wayland. The experience with Trixie is already better than it ever was for me with Ubuntu (good riddance!), and I cannot believe that this is supposed to be an unstable release. I broke stuff once, and that was my own fault (forcing an update when not all necessary packages were staged yet; learned my lesson on that!).
I am not a fan of the new tmpfs-backed /tmp as a default. I'd rather default to cheaper disk space than more limited and expensive memory.
> You can return to /tmp being a regular directory by running systemctl mask tmp.mount as root and rebooting.
>The new filesystem defaults can also be overridden in /etc/fstab, so systems that already define a separate /tmp partition will be unaffected.
Seems like an easy change to revert from the release notes.
As far as the reasoning behind it, it is a performance optimization since most temporary files are small and short lived. That makes them an ideal candidate for being stored in memory and then paged out to disk when they are no longer being actively utilized to free up memory for other purposes.
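(Both escape hatches from the release notes, spelled out as commands; the tmpfs size in the fstab line is only an example:)

    # option 1: make /tmp a plain on-disk directory again
    systemctl mask tmp.mount    # then reboot

    # option 2: keep tmpfs but cap its size via an /etc/fstab entry
    # tmpfs  /tmp  tmpfs  size=2G,nosuid,nodev,mode=1777  0  0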
I have yet to hear of someone wearing out an SSD on a desktop/laptop system (not a server; I'm sure there are heavy applications that can run 24/7 and legitimately get the job done), even considering bugs like the Spotify desktop client writing loads of data uselessly some years ago.
Making such claims on HN attracts edge cases like nobody's business, but let's see.
I'm using OpenSUSE Tumbleweed that has this option enabled by default.
Until about a year ago, whenever I would try to download moderately large files (>4GB) my whole system would grind to a halt and stop responding.
It took me MONTHS to figure out what the problem was.
Turns out that a lot of applications use /tmp for storing files while they're downloading. And a lot of these applications don't clean up on failure; some don't even move the file after success, but extract it and copy the extracted files to the destination, leaving even more stuff in /tmp.
Yeah, this is not a problem if you have 4x more RAM than the size of the files you download. Surely that is the case for most people. Right?
That's a very bad default.
If it's easily reproducible, I guess checking `top` while downloading a large file might have given a clue, since you could have seen that you're running out of memory?
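(If you suspect the same thing, it's quick to check whether /tmp on tmpfs is what's eating the RAM:)

    df -h /tmp                                   # tmpfs size and current usage
    du -sh /tmp/* 2>/dev/null | sort -h | tail   # biggest leftovers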
A misbehaving program can cause out of memory errors already by filling up memory. It wouldn't persist past that program's death but the effect is pretty catastrophic on other programs regardless.
In this new stable release, an update to Dovecot will break your configuration: https://willem.com/blog/2025-06-04_breaking-changes/
Not only that: Dovecot 2.4 will also remove the functionality of dsync, replicator, and director [1]. This is frustrating and a big loss, as these enabled e.g. very simple and reliable two-node (active-active) redundant setups, which will no longer be possible with 2.4.
I've used it for years to achieve HA for personal mail servers and will now have to look for alternatives -- until then I'll stick with Debian Bookworm and its Dovecot 2.3.
[1] https://doc.dovecot.org/2.4.0/installation/upgrade/2.3-to-2....
> I've used it for years to achieve HA for personal mail servers and will now have to look for alternatives
Yeah, Dovecot seems to be going hardline commercial. Basically, the open-source version will be maintained as a single-server solution. Looks like if you want supported HA etc. you'll have to pay the big bucks.
There is a video up on YouTube[1] where their Sales Director is speaking at a conference and he said (rough transcript):
"there will be an open source version, but that open source version will be maintained for single server use only. we are actually taking out anything any actually kinda' involves multiple servers, dsync replication and err some other stuff. so dovecot will be a fully-featured single node server"
Have you looked at Stalwart[2] as an alternative?
[1] https://youtu.be/s-JYrjCKshA?t=912 [2] https://stalw.art/
I usually fix that kind of problem by running the offending software in a Docker container, with the correct version. Sometimes the boundaries of the container create their own problems. Dovecot 2.3 is at https://hub.docker.com/r/dovecot/dovecot/tags?name=2.3
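(A sketch of that approach; the host paths and mount points are placeholders and the tag should be picked from the tags page above, so verify against the image documentation before relying on it:)

    docker run -d --name dovecot \
      -p 143:143 -p 993:993 \
      -v /srv/dovecot/conf:/etc/dovecot \
      -v /srv/dovecot/mail:/srv/mail \
      dovecot/dovecot:2.3.21    # example tag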
Fair warning: the Trixie update does not allow you to roll back. It is in theory possible, but in practice it not only fails every single time, it leaves the system in an inconsistent and broken state. (Code for 'soon to be unbootable'.)
What this means is when you find out stuff breaks, like drivers and application software, and decide the upgrade was a bad idea, you are fucked.
More notably, parts of the upgrade are irreversible - like MySQL/MariaDB. The database binary format is upgraded during the upgrade. So if you discover something else broke and you want to go back, it's going to take some work.
Ask me how I know.
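(Which is why a logical dump before the dist-upgrade is cheap insurance: unlike the upgraded binary files, a dump restores cleanly into the old server version. The output path is an example:)

    # before upgrading
    mysqldump --all-databases --single-transaction --routines --events \
        > /root/pre-trixie-dump.sql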
The page about upgrading [0] does have this warning:
Back up your data
Performing a release upgrade is never without risk. The upgrade may fail, leaving the system in a non-functioning state. USERS SHOULD BACKUP ALL DATA before attempting a release upgrade. DebianStability contains more information on these steps.
Of course anyone can restore from backups. It's a pain and it's time consuming.
My post serves more as a warning to those who may develop buyer's remorse.
Too many bits of 'advice' on Stack Overflow, etc. claiming it's possible as top Google results.
I'm here to say unequivocally: it does not work, will not work, and will leave the system in an irreversibly broken state. Do not attempt.
In all fairness... How would that work? Not even just on Debian; in the general case, I don't see how to avoid that other than full filesystem snapshots or backups of some sort. Even on, say, a NixOS system where rolling back all the software and config (basically, /, /usr, and /etc) to exactly its old config is as easy as rebooting and picking the old generation, databases will still have migrated their on-disk format.
Indeed. Snapshots. And they are a breeze on operating systems where ZFS for everything is available. It's not like the Windows feature of the same name, which I suspect is in part what makes people wary of the idea. That works rather differently. A ZFS snapshot completes in seconds.
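(Assuming a pool named rpool, the before/after of that workflow looks roughly like this; the dataset layout is an example and rolling back the root filesystem is usually done from a rescue or boot environment:)

    # before the upgrade: recursive snapshot of every dataset in the pool
    zfs snapshot -r rpool@pre-trixie

    # if the upgrade goes badly, roll the affected datasets back, e.g.:
    zfs rollback rpool/ROOT/debian@pre-trixie    # dataset name is an example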
What problems did you have that made you want to roll back the update?
I had some containerized application software break and start misbehaving in odd ways which was indicative of a deeper incompatibility issue. Possibly GPU related. No time to debug, had to roll it back.
This was complicated by the fact that the machine also hosted a MySQL database which could not be easily rolled back because the database was versioned up during the upgrade.
(I love Debian) It's going to take a bit for me to get used to having a current version of Python on the system by default.