This whole ideology of "the user should get all their software from their Linux distribution", and its implicit consequence that there's no clear difference between system software (internal tooling) and application software installed by the user (Audacity and friends), should just die already.
I want my OS to just provide a decent interface over which I can install application packages myself, packages that I get from my own sources, just like on Windows. If those packages are statically linked, fine. I know most Linux users disagree, but I don't want the relationship between software vendor and user to be distorted by some distro maintainer, or to be limited to a package manager. I want to be able to store application installers in my filesystem.
I also want my distribution to hide its Python binary from me so I can install my own Python without breaking the OS.
Basically: stop assuming that I want to live under your wing. I just want you to give me a nice desktop environment, a terminal, and a well documented way to install third party software.
I know distro developers don't owe me anything, and it's fine if they do something else, but this is the actual reason why Linux isn't used on the desktop.
First of all, you can do that already. No one stops you. So I guess what you actually want is for someone to fix the problems that come with your wishes, but you don't agree with the way the distributions try to do so. You know what? They really don't owe you anything. Go ahead, build the system you want. But please don't come along whining when it ends up unmaintainable, unstable, or starts acting against you, like, you know, Windows does.
Second, Linux distribution maintainers are usually much better informed about the technical details of installing software in a stable manner than any software vendor. Sure, you can get your software from the vendor, but then you should be willing to accept that it is often broken, inefficient, and insecure. Software vendors have no interest whatsoever in installing their software in a professional and sustainable way on your system; package maintainers do.
> But please don't come along whining when it ends up unmaintainable, unstable, or starts acting against you, like, you know, Windows does.
As a die-hard Linux user, this line of thinking really plays against Linux. Software distribution hasn't caused problems on Windows since XP. Likewise, all Mac users seem pretty happy with .app bundles.
> Second, Linux distribution maintainers are usually much better informed about the technical details of installing software in a stable manner than any software vendor. Sure, you can get your software from the vendor, but then you should be willing to accept that it is often broken, inefficient, and insecure. Software vendors have no interest whatsoever in installing their software in a professional and sustainable way on your system; package maintainers do.
Do you realise how insulting this comes across? As a software dev, my personal experience of "professional Linux distro packagers" is people literally removing random lines of code from my software until it builds; who cares if it crashes as soon as you do more than open it anyway. I'd rather not have my software packaged than have it packaged like that. My subjective experience from the software I use is that stuff like AppImage made straight by the dev is generally much more stable and works much better than whatever chthonic hack a Debian packager decided to apply.
I don't really agree with the comment you're responding to, but your comment is needlessly incendiary and not particularly useful. All software critique has an assumed element of "if the developers wish to make this better for users like me", and so remarks like "[they] don't owe you anything" should be reserved for people who lean into specific developers with demands, not people earnestly offering suggestions to make the third-party software ecosystem better.
You are right on only one thing: 1 version globally is stupid. Global properties in general are stupid. Have a notion of public vs private dependencies and one gets the right amount of coherence, not too much or too little.
Otherwise, hell fucking no. This misconception is causing so many mistakes. Software is an ecosystem; I give 0 shits about individual libraries, programs, whatever, just that the end composition meets my criteria. The app bundle / Flatpak / coarse-grained Docker vision of software is plain wrong, and will basically prevent future gains in productivity.
Nix and friends get it right: no single version dumbness, everything is installed the same way, be it by the admin or by regular user, and with proper notions of dependencies.
Socially, I understand where OP is coming from: the common view that distros are some crusty '90s holdover held hostage by a bunch of neckbeards who don't care about users not like themselves. But distros like NixOS put the user in full control. There's a feeling of de-alienation using a NixOS machine that's really hard to convey to those who haven't yet tried it (and gotten over the initial learning curve).
Yeah, one version really is an antediluvian limitation. The ideal state would be something like this: I can have 100 different versions of a library, stored on disk only once per version, and any application I install can use the specific version it wants as a shared library. Does NixOS provide this capability currently?
I think having a separate package manager for every piece of software I use is just terrible. I also don't want to be forced to use application bundles.
You get isolation (re "hide its Python binary"), multiple package variants, a unified software management interface for all applications, etc -- from functional package managers like Nix or Guix.
To disagree with the least important part of what you said:
> but this is the actual reason why Linux isn't used on the desktop
I still believe the actual biggest issue with Linux on the desktop is graphics card drivers (and other aspects of the graphics stack like handling High DPI). Too many machines fail the basic test of 'can I install Linux, plug my screen in and have it behave sensibly'.
Graphics card drivers are a solved problem. Intel/AMD "just work". Nvidia works okay-ish if you don't mind closed source drivers, and unless it's a card Nvidia has dropped support for. In both cases, switching to Windows is not an improvement: all the bad parts of Nvidia on Linux also apply to Nvidia on Windows.
HiDPI is mostly fine, with the exception of mixed DPI under Xorg. Mixed DPI works in Wayland. So I'll grant you that the combination of Nvidia + mixed DPI doesn't work, because Nvidia's drivers don't support Wayland.
Otherwise I strongly disagree, and I'm genuinely confused as to why this myth persists. Wrangling drivers on Windows is a huge pain. There's no one "update" button I can press to update all my drivers, let alone all my other software. I have to go through the device manager and manually right click and select "update driver", which is frankly nuts. And to update the graphics card I have to periodically go to Nvidia's website and check manually? What year is it again? Why doesn't Windows do this for me?
I think the original reason for shared libraries, and the only true one, is that they’re meant to save hard drive (and maybe memory) space. But the ratio of assets to code is now so big (media files, or data in data-intensive algorithms), with code representing almost nothing, that I don’t think the optimisation is really worth it anymore.
That's absolutely not true for desktop GUI apps because so many of them are written in Electron these days.
Even small desktop accessories carry their own copy of the Electron and Chromium libraries, usually around 150 MB. It would be a tremendous improvement to use a shared browser engine for this, but developers are resistant.
I’m not a lawyer, but I recall seeing a comment that one reason companies currently prefer dynamically linked apps in bundles like Snap and Flatpak over static linking, is that dynamic linking permits more open-source libraries to be used without legal issues.
I think it had to do with what legal precedents had been set for linking GPL/LGPL libraries with non-GPL binaries?
Not for the vast majority of libraries at any rate, but for libraries that most OSs consider part of their core platform (a concept eschewed by Linux Desktop) they are still a good idea.
Yeah, I'm happy to keep track of all my third party software that I install manually and I never forget about what applications I have installed so far and I always update my packages manually when there are any security flaws and I get to know about all the security flaws right away when they are discovered because I'm on all seclists. Even if I forget to update my third party apps/packages, my third party apps/packages remind me of the new updates and I never turn the update-notifications off.
> This whole ideology of "the user should get all their software from their Linux distribution"
This isn't true at all, and I struggle to think where you came up with this idea.
Linux distributions do distribute a hand-picked set of packages. That's essentially what a distribution does: distribute packages. Some are installed by default, others are made available. That's pretty much the full extent of it.
Yet, just because a distribution distributes packages, that doesn't mean you are expected not to use anything else. In fact, all the dominant Linux distributions support custom repositories, allow anyone to put together their own repository, and offer a myriad of tools and services for building your very own packages.
Even Debian and Debian-derived distributions such as Ubuntu, which represent about half of the Linux install base, offer personal/private package archives (PPAs), which some software makers use to distribute their stuff directly to users.
So, exactly where did you get the idea that that so-called ideology even exists?
1. Snap and/or Flatpak allow you to install GUI applications from most places nowadays. The internal tooling (system packages) is kept separate from the user-installed applications, which are effectively sandboxed in this way
2. Linuxbrew allows a Mac OS-like separation between your personal development tools and your OS's internal packages. Notably, this also allows you to install far newer tooling than your distribution would typically provide
3. Drop application binaries in ~/.local/bin if all else fails
If I weren't on a rolling release distribution I'd probably go that route. I hate being restricted by whatever my distro provides, I hate upgrading the entire world when new releases are made, and I hate third party repos and the hell of reconciling everything together that they bring.
It's really the in-between state that's terrible. Either go full *BSD or Mac OS and separate the concepts or full Arch Linux (w/ AUR) and don't. All other ways of distributing software tend to be more server-centric anyway
> Either go full *BSD or Mac OS and separate the concepts or full Arch Linux (w/ AUR) and don't.
Regarding the first alternative, I think it will be interesting to see how Fedora Silverblue [1] turns out. It’s basically going for an immutable base system coupled with Flatpak for apps the user installs.
Haven’t tried it myself, but for average desktop users, I think that sounds like a very good solution in the long run: a stable base system with up-to-date user-facing apps.
[1]: https://docs.fedoraproject.org/en-US/fedora-silverblue/
Now the Go dev has to voluntarily update said binary whenever a security issue or bug is found in his code or any of its dependencies.
And you have to replace that whole binary everywhere it is installed.
Think about how many systems would still be vulnerable to Heartbleed if everything that used libopenssl had to be re-compiled and redistributed as a statically linked binary.
This proposed “statically linked” world might be worse than the current mess. See Docker.
So if you want to download precompiled static binaries and "install" them into your home directory or /usr/local/bin that's fine. You can do that, and it works, although now you're stuck in the same quagmire Windows users are stuck in.
My question though... why? Everything about that seems to me worse in every way.
Specifically regarding python, on my system right now, I have python 2.7, 3.8.5 and 3.9rc1 installed on my system and maintained by the package manager. When 3.9rc2 or 3.9.0 final is released it will update to that. It's configured to run 3.8.5 when I just run "python", although I can manually run other versions by running the command python2 or python3.9, and I could reconfigure the default to be one of the other versions if I wanted. The package manager has versions going back to 3.4. I guess I don't understand the desire to install python manually - what can you do with a manually installed python that you can't do with the one installed by the package manager?
> and a well documented way to install third party software
I have made a few packages for Debian ARM Linux when I needed them. I wouldn’t call the documentation great, but it’s not too bad either. Same with the infrastructure: the OS supports versioning and dependencies, can be configured for custom repositories and to verify signatures…
It’s not terribly hard to make them fully working (install/remove/start/stop systemd services, support upgrade/purge/reinstall); it's a matter of a few shell scripts to write. The command to install a custom package from a local file is this:
    sudo dpkg -i custom-package_0.11.deb
However, this all was for embedded or similar, where I had a very good idea about the target OS version and other environmental factors. Not sure how well the approach would work for a desktop Linux.
Additionally, there's often a package in the AUR for the software you want to install already. Just gotta look at the votes and comments beforehand, and if you're feeling paranoid, the PKGBUILD itself.
So do that? Distros provide packages but they don't make you install them. Download the applications from their website and stick them in /usr/local or your home directory. If the applications don't provide builds, that's their problem. We have AppImages for a single-file solution, but I've used plenty of application directories you can just unpack and use. Julia is one.
The article is pretty interesting and I learned quite a few things, but it looks like the author is knowingly not answering the issue they themselves raise.
In my opinion, the most important thing distros do that is incompatible with how Rust currently works is handling security/bug updates.
The one libjpeg.so for everyone is meant to fix libjpeg flaws for everyone. And it has many security flaws. And it has many users. There is no denying the way this is done by distros is good.
Now, to pick on the author's code, one of its dependencies is a CSS parser, which is prone to flaws. (Maybe not /security/ flaws, but still.) The question is, how is the distro supposed to handle that?
I know Rust has tooling for that, but it seems to me that with the exact-version-match crate build system, every dependency will happily break its API. So let's say the author no longer has time to develop Rust librsvg, and the cssparser crate has a major flaw which is fixed only in a new API-rewrite branch; then what? Distros are supposed to fix that themselves? Sounds like much more work for them.
> There is no denying the way this is done by distros is good.
Let me tell you, the way it is done by distros (CentOS, Debian) is far from good. You will get the fix a long time after the bug is published. And you only get it if your system is recent enough.
I appreciated the author's approach. They did address many of the ancillary concerns while "staying with the question" about whether the dominant Linux distro way of handling libraries is indeed still the best way. Sometimes teaching or blogging on a topic helps a person clarify their own ideas over time.
Yes, every crate using different versions of its dependencies means a lot more work for distros, especially when a crate uses a -sys crate (e.g. libgit2-sys) and libgit2-sys does an API break. Now every crate in the repo that uses libgit2-sys needs its dependencies updated manually, which is a rather time-consuming process (especially if the bindings in libgit2-sys are only built against some random git version).
From a security point of view, you shouldn't be using unmaintained libraries anyway, no? And if librsvg is maintained, then all the distro has to do is package the latest version.
This is a bunch of nonsense. Rust prefers static linking because it is predictable. These supposedly "huge" binaries are laughably small on a modern >1TB hard drive. If you're building a tiny embedded system, by all means optimize your builds system-wide, you have total control! But for a desktop, is this really a concern?
If you add 9MB to each binary on the system by statically linking them, and your system runs 200 programs (including system services) on average, your system now uses about 2GB more memory (to be fair, probably not all the time but it does increase memory pressure needlessly). Shared libraries aren't just about storage space. They also provide page cache sharing (memory for a shared library is only mapped once and the mapping is shared by different programs).
A slight aside but they also provide the ability to apply security updates sanely to all programs using that library on the system (just update the library and restart the programs, as opposed to having to install a rebuilt version of every program that uses the library). Is this a game-changing feature? These days probably not, but it is (again) just needless waste.
And I say this as someone who is currently developing a shared library in Rust that will probably be included in a lot of distributions because I expect quite a fair amount of container runtimes will end up using it. (But I do also work for a Linux distribution.)
9MB is a lot. For scale: I have a Rust application; assets (font + music), GUI via GPU, networking, unzipping, a bit of cryptography, it's all in there. 400 dependencies in total (yes, I don't like it).
By these metrics it's by far the largest rust application I've seen thus far. When fully optimized, it's 11MB in size.
Yes, it's a concern. Firstly, hard drive space isn't the only reason to make binaries small - you have RAM pressure, cache pressure, and bandwidth to save. Secondly and more importantly, waste adds up. If you replaced every binary on the system with a Rust equivalent - which, to listen to some advocates, is the eventual goal - you could end up with a base system that's many times larger.
In a larger sense, something that sets out to be a "systems programming language" needs to be exactly the sort of thing suitable for a tiny embedded system, even if it isn't running on one, because everything else builds on top of it. The attitude that "we have tons of power, why not waste it" just doesn't fly at the very lowest levels. You can write a desktop application in Python, and it's broadly fine - but try writing an OS kernel!
There are patches that let Rust run on ESP32 systems, so I think it's entirely suitable for tiny embedded things. What makes it bloaty is the linked-in standard library, but it's not an unsolvable crisis; you can dynamically link against glibc, and there is a crate for core Rust IIRC. That'll get you reasonably sized Rust applications.
And for writing kernels the same applies; without the stdlib, it gets a lot smaller very fast. I've done it, so I think I can count myself as having some experience there. The biggest part of my kernel is a 128KB scratch-space variable it uses during boot as temporary memory, until it has read the memory mappings from the firmware and bootstrapped memory management at a basic level. The remainder of the kernel (minus initramfs) is then about 1MB, the largest part after the 128KB scratch space using about 96KB.
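To make "without the stdlib" concrete, here's a minimal sketch of a no_std library crate (a hypothetical example, not my kernel's actual code); the final kernel binary additionally needs its own panic handler and entry point:

    // A #![no_std] crate depends only on `core`, not the full standard
    // library, which is what keeps embedded/kernel code small.
    #![no_std]

    /// A trivial checksum: no heap, no OS facilities, no allocator.
    pub fn checksum(data: &[u8]) -> u32 {
        data.iter().fold(0u32, |acc, &b| acc.wrapping_add(b as u32))
    }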
Buy a bigger SSD if you can; they're cheap. The last drive I bought was a 1TB Intel SSD for like $90.
But the pain point is on laptops like MacBooks which have comically small storage space for their price point. I think the base models have a measly 256GB SSD and charge crazy amounts for upgrades.
Wouldn't it be possible to have a C-like language that is somewhat backward compatible with C and has the nice security features of Rust?
I get that Rust is awesome, but I'm not certain you need to make an entire new language just to have the security stuff.
Of course it might be complicated to do, but in the end, aren't there linters or other validators that can give the same security results Rust has, but with C or even C++?
These are valid questions a lot of people new to Rust have, so:
1. Rust is "backward compatible" in the sense that Rust code can use C libraries and C code can use Rust libraries - both ways via CFFI [1]. Security gaurentees only apply to the Rust code.
2. We've tired static and dynamic analysis of C to find security bugs for decades, there has been a plethora of research and commercial tools in the space. None fix the problem like Rust does [2].
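To illustrate point 1, here's a minimal sketch of calling a C function from Rust (libc's strlen, which is already linked in); the unsafe block marks exactly where Rust's guarantees stop:

    use std::ffi::CString;
    use std::os::raw::c_char;

    // Declare the C function we want to call; strlen lives in the C
    // library, which Rust programs link against anyway.
    extern "C" {
        fn strlen(s: *const c_char) -> usize;
    }

    fn main() {
        let s = CString::new("hello").unwrap();
        // Calling across the FFI boundary is `unsafe`: the compiler cannot
        // verify the C side, so Rust's safety guarantees end here.
        let len = unsafe { strlen(s.as_ptr()) };
        assert_eq!(len, 5);
    }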
Almost any language can call C functions and we don't call all languages backwards compatible with C when they can merely interoperate with it.
Objective-C and C++ are the only two languages which offer backwards compatibility. AFAIK it's complete in the case of the former and there are some limitations for the latter.
None fix the problem like Rust does, but it's worthwhile to examine why: typical companies and developers have an aversion to paying for tools and for anything which slows down development. That's why usually those tools and languages which are reasonably user-friendly are more successful. Ironically that's both an advantage and a problem for rust: it's nicer to use than some C tools, but still not user-friendly compared to alternatives like Go or Java and in some cases even C++.
There are Cyclone, Checked C, and Deputy. Such "C-but-weird" languages have an "uncanny valley" problem:
• "C, but safer" on its own is not very enticing. With no other benefits, it's easy not to switch, and instead promise to try harder writing standard C safely, or settle for analysis tools or sandboxes.
• People who use C often have backwards compatibility constraints. Switching to another compiler and a dialect that isn't standard C is a tough sell. You can still find C programmers who think adopting C99 is too radical.
• Programming patterns commonly used in C (rich in pointer juggling, casts, and textual macros) are inherently risky, but if you try to replace them (e.g. with generics, iterators), it stops looking like C anyway.
So "safer C" is unattractive to people who are tied to C implementations or don't want to learn a new language.
But people who do want to learn a new language and use modern tooling, don't want all the unfixable legacy baggage of C.
Rust dodges these problems by not being a "weird C". It's a clean design, with enough features to be attractive on its own, and safety is just a cherry on top.
And there are languages that try to keep close to C and add some minor safety improvements, e.g. my language C3 (subarrays/slices, contracts, runtime checks for all UB in debug builds, and more).
I think maybe you're conflating what Rust's borrow checker does with the notion of "security." They're related in that the borrow checker does some stuff that makes it difficult to create certain bugs that can be security issues, but they're not the same.
But to answer the question, I suspect no, and if you did it would be basically re-engineering the borrow checker and forcing Rust semantics into C and C++.
I don't know if anyone has proven it, but my hunch is that borrow checking C is undecidable.
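As a concrete example of what the borrow checker rules out (the C equivalent, keeping a pointer into a buffer across a realloc, compiles without complaint):

    fn main() {
        let mut v = vec![1, 2, 3];
        let first = &v[0];     // shared borrow into the vector's buffer
        // v.push(4);          // rejected at compile time: push may reallocate
        //                     // the buffer and leave `first` dangling
        println!("{}", first); // the shared borrow is still alive here
        v.push(4);             // fine now: the borrow has ended
    }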
1. You could possibly get closer, but you'd lose a lot. Most of Rust's "nice security features" are wildly incompatible with existing C/C++ code and inherent language features.
2. No. If C/C++ could be made safe* Rust would not exist.
* everyone agrees on this point, including the richest and largest software companies on the planet
This is an intractable problem, because checking for buffer overflows requires buffer bounds and C pointers lack buffer bounds. Rust solves this with "fat pointers", pointers that know their size, but a fat pointer can't be the same size as a thin pointer, hence it would be backward incompatible.
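A small sketch of the difference on the Rust side:

    use std::mem::size_of;

    fn main() {
        // Thin pointer: just an address.
        assert_eq!(size_of::<*const u8>(), size_of::<usize>());
        // Fat pointer to a slice: address plus length, twice the size.
        assert_eq!(size_of::<*const [u8]>(), 2 * size_of::<usize>());

        let buf = [1u8, 2, 3];
        let slice: &[u8] = &buf;
        // Bounds checks use the length carried in the fat pointer.
        assert!(slice.get(10).is_none());
    }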
Not an expert, but if you want a stable Rust-to-non-Rust ABI, you can use the C ABI, as the article mentions. If you want a stable Rust-to-Rust ABI for FFI, there's a crate for that:
https://crates.io/crates/abi_stable
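For the first route (the plain C ABI, not abi_stable), a minimal sketch looks like this; built with crate-type = ["cdylib"] in Cargo.toml, the library is callable from C or anything with a C FFI:

    // Exported with an unmangled name over the C ABI.
    #[no_mangle]
    pub extern "C" fn add(a: i32, b: i32) -> i32 {
        a + b
    }

    // Matching C declaration: int32_t add(int32_t a, int32_t b);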
It seems somewhat unrealistic to expect a really new language to commit, across the board, to the same sort of ABI stability as a decades-old language such as C.
With everything slowly (or sometimes rapidly) moving into containers (Docker, systemd portable services, Flatpak, Snaps), I think the concept of a system library will probably become irrelevant at some point not that far into the future.
The expectation is that with a "system" package, one can update that one package and (basically) everything on the system now uses that new version. Practical for security and important bugfixes.
> While C++ had the problem of "lots of template code in header files", Rust has the problem that monomorphization of generics creates a lot of compiled code. There are tricks to avoid this and they are all the decision of the library/crate author.
Is there any research on having compilers do some of these tricks automatically? A compiler should, at least in principle, be able to tell what aspects of a type parameter are used in a given piece of code. Such a compiler could plausibly produce code that is partially or fully type-erased automatically without losing efficiency. In some cases, I would believe that code size, runtime performance (due to improved cache behavior), and compile times would all improve.
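For reference, one of those hand-written tricks is the "inner function" pattern (the standard library itself uses it in places): the generic wrapper is monomorphized per caller type, but it is only a thin shim, and the real work is compiled once. A hypothetical example:

    use std::path::Path;

    // Hypothetical generic wrapper: monomorphized per caller type, but
    // it only forwards to the non-generic worker below.
    pub fn read_header<P: AsRef<Path>>(path: P) -> std::io::Result<Vec<u8>> {
        // Non-generic inner function: emitted exactly once, whatever P is.
        fn inner(path: &Path) -> std::io::Result<Vec<u8>> {
            let mut bytes = std::fs::read(path)?;
            bytes.truncate(16);
            Ok(bytes)
        }
        inner(path.as_ref())
    }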
In a way I'm happy Rust does not have a stable ABI. Swift does, but the stability is "whatever Apple's Swift compiler emits". There's very little documentation, and what's there is out of date, so the only practical language that can interact with Swift is Swift. To be able to interact from another language, one would have to parse Swift and make the semantics of all types and generics match exactly just to do the simplest things. (For example Array and String, two core types, are chock-full of generics and protocols.)
/bin /etc = System
/usr/local/bin /usr/local/etc = Programs
> this is the actual reason why Linux isn't used on the desktop.
Yeah, no.
Security updates are a plus, but introducing bugs through a shared library is a minus.
I would prefer we focus on robust application sandboxing instead.
At the server/datacenter level that's pretty much how it's handled anyway: you have isolated VMs/containers sitting on deduped storage.
FWIW, this does not seem to be empirically true. The savings are possible in theory, but don't hold up in practice.
https://drewdevault.com/dynlib.html
The largest and most-used libraries, like libc and libpthread, would be dynamically linked by Rust anyway.
Problem is that portable applications are an alien concept to Linux.
Such as?
I'd hate to have the same thing happen for Rust.