The biggest impact suckless had on me was via their Stali Linux FAQ: https://sta.li/faq/ .
They've built an entirely statically linked user space for Linux. Until then I never questioned the default Linux "shared libraries for everything" approach and assumed it was the best way to deliver software.
For every little CLI tool I wrote at work, I used to create distro packages, or a tarball with a shell script that set LD_LIBRARY_PATH to find the correct versions of the XML libraries and so on that I used.
It didn't have to be this way: dealing with distro versioning headaches, or the finicky custom packaging of libraries into that tarball, just to let users run my 150 kB binary.
Since then I've mostly used static linking where I can, and AppImages otherwise. I'm not developing core distro libraries; I'm just developing a tiny "app" my users need to use. I'm glad that with newer languages like Go, static linking is the default.
Don't get me wrong. Dynamic linking definitely has its place. But by default our software deployment doesn't need to be this complicated.
The thing is, dynamic linking doesn't mean using LD_LIBRARY_PATH or building full blown OS packages as the only way to find the correct libraries. There's a first class facility for locating shared libraries, using the -R flag to provide a RUNPATH/RPATH in the binary. The runtime link editor will use that path to locate shared libraries. You can make your binaries relocatable as well, by using $ORIGIN in the RPATH: this gets expanded at runtime to the path of the executable, so, e.g., $ORIGIN/../lib would go up one from bin/ where the executable is and down alongside into the lib directory for your software.
LD_LIBRARY_PATH is a debugging and software engineering tool, and shouldn't ever be part of shipped software.
And the main advantage of doing all that work vs statically linking is? Don’t get me wrong - dynamic linking for dev builds makes a lot of sense to cut down on relink times. But I just don’t see it for distribution, since doing that RPATH work undercuts the main argument for dynamic linking (i.e. that the OS can patch a vulnerability for all installed packages without waiting for each to release).
There's definitely value in the static approach in some cases, but there are some downsides e.g. your utility will need to be recompiled and updated if a security vulnerability is discovered in one of those libraries. You also miss out on free bugfixes without recompiling.
If you require a library, you can specify it as a dependency in your dpkg/pacman/portage/whatever manifest and the system should take care of making it available. You shouldn't need to write custom scripts that trawl around for the library. Another approach could be to give your users a "make install" that sticks the libraries somewhere in /opt and adds it as the lowest-priority LD_LIBRARY_PATH entry as a last resort, maybe?
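With dpkg, for instance, that dependency lives in the package's control file; the package name, versions, and maintainer below are purely illustrative:

```
Package: mytool
Version: 1.0-1
Architecture: amd64
Maintainer: Example Dev <dev@example.com>
Depends: libxml2 (>= 2.9.0), libc6 (>= 2.17)
Description: hypothetical CLI tool
 apt resolves the Depends line and installs libxml2 alongside the tool,
 so no startup script has to go hunting for it.
```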
> e.g. your utility will need to be recompiled and updated if a security vulnerability is discovered in one of those libraries. You also miss out on free bugfixes without recompiling.
This was the biggest pain point in deploying *application software* on Linux, though. Distributions with different release cycles provide different versions of various libraries and expect your program to work with all of those combinations. The big famous libraries like Qt and GTK might follow proper versioning, but for the smaller libraries from distro packages there's no such guarantee. Half of them don't even use semantic versioning.
Imagine distros swapping out the libraries you've actually tested your code with for their own libraries, for "security fixes" or whatever the reason. That causes more problems than it fixes.
The custom startup script was there to find the same XML library I'd used, inside the tarball I packaged the application in. Users could then extract that tarball wherever they needed (including /opt) and run the script to start my application, and it ran as it should. IIRC we even used rpath for this.
It depends a lot on ABI/API stability and actual modularity of ... components. There's not always a guarantee of that.
Shared libraries add a lot of complexity to a system for the assumption that people can actually build modular code well in any language that can create a shared library. Sometimes you have to recompile because, while a #define might still exist, its value may have changed between versions, and chaos can ensue - including new, unexpected bugs - for free!
Fun fact... Many Windows programs generally do some sort of 'hybrid static linking' (this is my own terminology) where programs are distributed with the `.dll` libraries just next to the binary. There is no concept of RPATH on Windows—the loader looks for dynamically-linked libraries in a fixed set of locations which includes the binary's directory.
Windows programs generally do link dynamically to core Windows libraries—which users are never expected to mess with anyway—and the C and C++ runtimes, but even these can be statically linked against with `cl.exe /MT`. Some programs even distribute newer versions of the C/C++ runtimes; that's where the famous Visual C++ Redistributables come from.
I agree, though—static linkage should be the default for end-user programs. I long for a time when Linux gets 'libc' redistributables and I can compile for an old-hat CentOS 6 distribution on a modern, updated, rolling-release Arch Linux without faffing with a Docker container.
Imo this is a much saner solution for a system that supports precompiled applications.
Every time I tried to get a third-party binary app running on Linux, I discovered that the vendor shipped half their dependencies as blobs and relied on the system for the other half - an incredibly brittle arrangement that breaks constantly.
The entry point usually is a script that sets LD_LIBRARY_PATH and then calls into the executable.
> There is no concept of RPATH on Windows—the loader looks for dynamically-linked libraries in a fixed set of locations which includes the binary's directory.
This is not true - you can control the DLL path via manifests. There's also a "known DLLs" list in registry which can globally redirect basically any DLL system-wide.
Compiling for CentOS 6 is a linking problem, and any linker lets you link with whatever you want, it's a matter of running the linker with the right arguments.
> I long for a time when Linux gets 'libc' redistributables and I can compile for an old-hat CentOS 6 distribution on a modern, updated, rolling-release Arch Linux without faffing with a Docker container.
The Linux linking model is so bad. So extremely very bad. Build systems should never rely on whatever garbage happens to be around locally. The glibc devs should be ashamed.
The problem is that static libraries are actually more likely to break across time in practice, since "the system" is more than just "syscalls".
For example, where a filesystem-related "standard" has changed, I have old static binaries that fail to start entirely, whereas the same-dated dynamically linked binaries work once you bother to actually install their dependencies.
I am convinced that every argument in favor of static linking is because they don't know how to package-manager.
Nobody knows how to use the package manager. What happens in practice is that every single program uses the package versions the distro happens to ship with.
If you want a newer version, too bad - your OS doesn't ship that, so better luck in the next release. Or you can set up a private repo and either ship a binary that has the dependencies included (shipping half the userland with your audio player), or package the newer version of the library, which will unwittingly break half your system, if not today, then surely at the next distro upgrade.
It speaks volumes of Linux package management woes, that no vendor ships anything analogous to brew or chocolatey.
The sane thing here is to maintain a clear notion of what the "OS" is versus the "app", and use dynamic linking on that boundary, but not elsewhere. Which is more or less how Windows and macOS do things.
It is also a statically linked Linux distribution, but its core idea is reproducible Nix-style builds (including installing as many different versions/build configurations of any package), with less PL fluff: no fancy functional language, just some ugly Jinja2/shell-style build descriptions, which in practice work amazingly well because the underlying package/dependency model is very solid (https://stal-ix.github.io/IX.html).
It is very opinionated (just see this: https://stal-ix.github.io/STALIX.html), and a bit rough, but I was able to run it in VMs successfully. It would be amazing if it stabilizes one day.
I would be more accepting of the trade-off if it wasn't so brittle in practice.
Nix is much closer to a "good" dynamic linking solution IMO except that it makes it overly difficult to swap things out at runtime. I appreciate that the default is reproducible and guaranteed to correspond to the hash but sometimes I want to override that at runtime for various reasons. (It's possible this has changed since I last played with that tooling. It's been awhile.)
AppImages require a large number of (obsolete) dependencies to run, making their portability practically worthless. Newer immutable distros like Aeon don't ship the necessary packages to run an AppImage.
I know. But Nix doesn't make any of this complexity go away. It just helps you to tame this complexity and give you a reproducible system. Even Gobo Linux for that matter.
At the end of the day, the apps were simple end-user applications. They used a handful of library functions from different libraries. My users cared about just using my apps to do whatever the apps did. I just cared that my app should work on their machines easily, no matter what version of what distro they were using.
1. You can bake the LD_LIBRARY_PATH into your executable with the -rpath mechanism.
2. Both rpath and LD_LIBRARY_PATH expand certain tokens such as $ORIGIN. $ORIGIN expands to the location of the program.
So you can (for instance) set an rpath of $ORIGIN in your executable, and then it will look for its libs in its own directory. Just like the intelligent default on the you-know-what Redmond operating system everyone rags on.
I also like the static approach. There is staticx, which I am loving: it can take a dynamic binary and make it static.
I am wondering if I could convert something like .deb/.rpm/AppImage and Flatpak packages into static binaries which could work on any device.
I have a non-LFS, Stali-like project for myself, 100% statically-linked Linux. Probably the biggest PITA for me is compiling a statically-linked cmake. Even just compiling a dynamically-linked cmake takes an inordinate amount of time.
It's been around ten years that my desktop has barely changed, except a few pixels, thanks to dwm and dmenu. I am exaggerating a bit, but I love the stability that minimalism brings. If only they could make a PDF viewer...
Eh, it vendors an old version of mupdf. Very bad idea, considering that it's a C program/library handling a notoriously complex format often shared on the Internet.
Personally, I just use mupdf (which I sandbox through bubblewrap).
To the currently dead sibling comment by kjrfghslkdjfl (on the off chance they get to see this): mupdf is extremely cross platform. I felt that should have at least been mentioned before your comment reached being dead over that misunderstanding.
Seconding this. It's my default choice for many file formats, not just pdf. However it doesn't support jpegxl so in those cases I use Okular (very much not minimal but quite usable).
That's a bad coding style document. There's no rationale given, except for a bunch of references at the top, which are clearly argument from authority.
I think the no "loop initial declarations" is for consistency with "all declarations at the top". Other coding style guides favor "declarations as close as possible to first use", including guidelines for mission critical systems (if you resort to argument from authority I have some too...) [1].
As much as I like Suckless, this section is just pet peeves that can safely be ignored; unless you submit a patch to a project that aligns with it.
True, and it would indeed be desirable that it were. Here I go out on a limb and assume it's because someone got bitten by attempting to use the loop index outside the loop (common for search operations) while declaring the index both within and outside the loop. A bug gcc and clang can warn about with -Wshadow (which sadly isn't part of -Wall), and which might easily occur when multiple people edit the code over a longer time-span.
Their rationale is very inconsistent: it tells you to use C99, but you must place declarations at the top, and you can't use C++-style // comments (introduced in C99 as well).
At least in my opinion, "for loop initial declaration" is especially useful in macros (although many programs will never need such macros, they are especially useful when you do have them).
An example of such a macro is the following macro (the loop and the variable declaration will both be optimized out by the compiler; I have tested this):
Or the self-reich-ousness of folks like Suckless[0], like telling the OG author of radare (pancake, who is a very competent malware reverse engineer) that he's an idiot [1].
I had a look at your links, and I think the assertion that the people involved in Suckless are National Socialists is insufficiently supported. I don't know these people outside their software, but if you're going to accuse someone of being part of some reviled political group I think you should have something stronger than "they went on a hike carrying torches around the time some extremist group had a march with torches".
Suckless has a beautiful coding philosophy and I wish all software was written with this in mind, but surely a window manager and X-menu aren't really the best showcases? These aren't the types of programs where complexity is the biggest enemy.
I'm not claiming I could write these tools as simple as these, but surely the importance of these paradigms arise when actual complicated software is needed?
The drama around this community is silly. I use these tools because I absolutely love their philosophy on software, and software alone. I couldn't care less what the authors personal beliefs and political leanings are, or who they offended on IRC or social media.
I recently spent a few hours evaluating different terminals. I went back to urxvt, tried Alacritty again, gave Ghostty a try, and spent quite some time configuring Kitty. After all this I found that they all suck in different ways. Most annoying of all is that I can't do anything about it. I'm not going to spend days of my life digging into their source code to make the changes I want, nor spend time pestering the maintainers to make the changes for me.
So I ended back at my st fork I've been using for years, which sucks... less. :) It consists of... 4,765 SLOC, of which I only understand a few hundred, but that's enough for my needs. I haven't touched the code in nearly 5 years, and the binary is that old too. I hope it compiles today, but I'm not too worried if it doesn't. This program has been stable and bug-free AFAICT for that long. I can't say that about any other program I use on a daily basis. Hmm, I suppose the GNU coreutils can be included there as well. But they also share a similar Unixy philosophy.
So, is this philosophy perfect? Far from it. But it certainly comes closer than any other approach at building reliable software. I've found that keeping complexity at bay is the most difficult, yet most crucial thing.[1]
> I couldn't care less what the authors personal beliefs and political leanings are, or who they offended on IRC or social media.
I just don't really want to use or support software by people who, at best, think it's appropriate to joke about an ideology that wants me [0] dead, or at worst, actively subscribe to that ideology. There are some things that I'm not willing to look past.
[0]: non-white, non-straight, left of the political spectrum
Having been on their mailing lists and IRC channel for over four years, I have seen maybe a handful of "edgy" comments that made me go "sigh" or "Ew!" and they are generally from two or so people that are on the fringe of the community. Yes, it is possible that this is some sort of elaborate trick, but they sure give the appearance of mostly a bunch of helpful folks that care deeply about their own code and projects while caring very little to police people and rather just ignore them.
Oh, there are also the edgelords occasionally lured in by Luke Smith's videos (who has never set foot in the community or contributed code while I have been around, and I am not sure if he ever did) who usually get laughed out of IRC after delivering an unhinged chanspeak rant.
I get that, they're probably assholes. But if I limited my usage of software and consumption of art to only those not authored by assholes, I would probably have a less enjoyable and boring existence. Not to mention exhausting.
I think it's possible to separate the art from the artist, and enjoy the art without being concerned about the artist's beliefs, and whether I disagree with them.
Also, you don't necessarily support them by using their software. The software is free to use by anyone, and you never have to interact with the authors in any way. Software is an amorphous entity. Unless they're using it to spread their personal beliefs, it shouldn't matter what that is. By choosing not to use free software, you're only depriving yourself.
But this is your own choice, of course, and I'm not saying it's wrong. Just offering a different perspective.
That would indeed be concerning if true; do you have a reference? Unfortunately, the vast majority of such claims I've found to be misconstrued which makes me skeptical (the boy keeps crying wolf).
Then people are no longer allowed to talk about what (stupid shit) they believe, and jokes can only be made behind people's backs.
Who should be on the committee that decides what we may talk and joke about, and how should the committee inform itself?
The new forbidden topics will be chosen from the set of topics people talk about, which gets smaller, stranger and more political. What people secretly believe will drift much closer to the secret dialog while the public dialog floats away.
That people are saying things is the least of your concern.
Fascinating perspective, though. It is much easier if one is more secure, talks easily or has a more mundane world view. Not something one can choose. Thicker skin, however, is.
Also interesting: if one didn't like the people running the lunchroom at the end of the street, or didn't like the visitors, you used to be able to go to some other place. Today they are all part of the same chain. We've lost a lot of freedom there.
It's honestly distressing how all of these violent ideologies are growing in popularity. Nazism, socialism, and whatever else should be thrown on the pile. If you're a queer black "executive" like myself, there are a lot of people that believe the world would be a better place with you dead.
It's getting to the point that I'm considering keeping myself ignorant of developers' beliefs for my own mental wellness.
Absence of obsession with identity politics is not the same as wanting you dead. I don't want my tax dollars funding your personal lifestyle choices, just as you wouldn't want yours funding mine.
> I'm not going to spend days of my life digging into their source code to make the changes I want
This is an odd thing to bring up though because that's quite literally the only way to make any changes to suckless software, editing source code in C.
The entire philosophy behind it is performative in many ways. There's nothing simple or "unbloated" about having to recompile a piece of software every time you want to make what should really be a runtime user configuration change, and it makes an entire compiler toolchain effectively a dependency for even the most trivial config change.
I tried their window manager out once and the only way to add some functionality through plugins is to apply source code patches, but there's no guarantee that the order doesn't mess things up, so you basically end up manually stitching pieces of code together for functionality that is virtually unrelated. It's actual madness from a complexity standpoint.
> This is an odd thing to bring up though because that's quite literally the only way to make any changes to suckless software, editing source code in C.
You're ignoring the part where the tools are often a fraction of the size and complexity of similar tools. I can go through a 5K SLOC program and understand it relatively quickly, even if I'm unfamiliar with the programming language or APIs. I can't do the same for programs 10 or 100x that size. The code is also well structured and documented IME, so changing it is not that difficult.
In practice, once you configure the program to your liking, you rarely have to recompile it again. Like I said, I'm using a 5 year old st binary that still works exactly how I want it to.
Maintaining a set of patches is usually not a major problem either. The patches are often small, and conflicts are rare, but easily fixable. Again, in my experience, which will likely be different from yours. Our requirements for how we want the software to work will naturally be different.
The madness you describe to me sounds like a feature. It intentionally makes it difficult to add a bunch of functionality to the software, which is also what keeps it simple.
Sometimes, I have had to change software (although not from suckless, since I do not use any of their software) by modifying and recompiling it, to do what I wanted.
> There's nothing simple or "unbloated" about having to recompile a piece of software every time you want to make what should really be a runtime user configuration, and it makes an entire compiler toolchain effectively a dependency for even the most trivial config change.
It is true, but depending on the software, sometimes this is acceptable. (Some of the internet server software that I wrote (such as scorpiond) are configured in this way, in order to take advantage of compiler optimizations.)
For some other programs, some things will have to be configured at compile time (mostly things that probably don't need to be changed after making a package of this program in some package manager), although most things can be configured at run time and do not need to be configured at compile time.
> I tried their window manager out once and the only way to add some functionality through plugins is to apply source code patches, but there's no guarantee that the order doesn't mess things up, so you basically end up manually stitching pieces of code together for functionality that is virtually unrelated. It's actual madness from a complexity standpoint.
This is a valid criticism, and is why I don't do that for my own software. However, it is sometimes useful to make your own modifications to existing programs, but just applying sets of patches that do not necessarily match is the madness that you describe.
Not when you want to write your own patches, it isn't. I think the design of DWM could be improved to make patching easier, but it was a revelation to me when I discovered it: for the first time in my life, I was using open source software that was actually designed to be extended.
> I went back to urxvt, tried Alacritty again, gave Ghostty a try, and spent quite some time configuring Kitty. After all this I found that they all suck in different ways.
After moving to a gigantic monitor and gigantic resolutions, my poor st fork was suffering. zutty was a great replacement for me: https://git.hq.sig7.se/zutty.git
Whenever suckless comes up, I see more people saying "the drama is silly" than I do actual drama. I don't even know what drama people are talking about.
* One of the lead devs' laptops is named after Hitler's hideout in the forest
* Their 2017 conference had a torchwalk that was a staple of Nazi youth camping (and heavily encouraged by the SS as a nationalism thing)
* Several of the core devs are just assholes to people on- and offline.
* Most of the suckless philosophy is "It does barely what it needs to and it was built by us, so it's superior to what anyone else has written". A lot of it shows in dwm, dmenu, etc.
I'm not sure how you missed it, since it comes up in practically every Suckless-related thread[1], including this one. The drama is mostly in social media and IRC circles, though it tends to spill over here as well.
> I couldn't care less what the authors personal beliefs and political leanings are, or who they offended on IRC or social media.
I agree. Such things are not relevant when considering to use their formats and programs and stuff like that.
What is relevant is their software and related stuff like that, and not their political leanings, etc. I do not agree with all of their ideas about computer software, although I agree with some of them.
Like them, I also don't like systemd, so I agree with them about not liking systemd.
I do use farbfeld, although I wrote all of the software for doing so by myself rather than using their software (although it should be interoperable with their software, and any other software that supports farbfeld (such as ImageMagick)). Also, I do not use farbfeld for disk files, but only with pipes. (My farbfeld utilities package also includes the only XPM encoder/decoder that I know of that supports some of the uncommon features, that most XPM encoders/decoders I know of are not compatible with or are not fully capable of.)
I may consider libzahl if I have a use for big integers, although I also might not need it. (I had written some dealing with big integers before; one program I wrote (asn1.c) that deals with big integers only converts between base 100 and base 128 in order to convert OIDs between text and binary format.)
However, I would also want software that can better handle non-Unicode text (so, it is one things I try to write), which many programs don't do properly. This should mean that any code that deals with Unicode (if any) is bypassed when non-Unicode is used. Some programs should not need to support Unicode at all (including some that should not need to care about character encoding at all, or that do not deal with text, etc). (I had considered writing my own terminal emulator for this and other reasons.)
I use foot with a Catppuccin theme. Oh, it's so nice and cozy.
I use pure zsh with some plugins manually installed, and the Luke Smith dotfiles. The history part sometimes takes a while to load, but foot is just fast.
> I recently spent a few hours evaluating different terminals. I went back to urxvt, tried Alacritty again, gave Ghostty a try, and spent quite some time configuring Kitty. After all this I found that they all suck in different ways
Last time I did the same (days not hours tho lol) was somewhat surprised to find myself landing on xterm. After resolving a couple of gotchas (reliable font-resizing is somewhat esoteric; neovim needs `XTERM=''`; check your TERM) I have been very pleased and not looked back.
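For what it's worth, the font-resizing gotcha can be tamed with a few X resources; the font name, size, and keybindings below are just one possible setup, using xterm's built-in larger-vt-font()/smaller-vt-font() actions:

```
XTerm*faceName: DejaVu Sans Mono
XTerm*faceSize: 11
XTerm*vt100.translations: #override \n\
    Ctrl <Key>plus:  larger-vt-font() \n\
    Ctrl <Key>minus: smaller-vt-font()
```

Merge it with `xrdb -merge ~/.Xresources` and restart xterm.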
It also makes for very "efficient" software. The amount of time Sent has saved me, with very minor styling modifications, makes it one of the best pieces of software I've ever used.
> spent a few hours evaluating different terminals. I went back to urxvt, tried Alacritty again, gave Ghostty a try, and spent quite some time configuring Kitty. After all this I found that they all suck in different ways.
If you don't mind, tell more? I use kitty and it seems a big upgrade from whatever I used before...
> Because dwm is customized through editing its source code, it's pointless to make binary packages of it. This keeps its userbase small and elitist. No novices asking stupid questions.
...sucks less than what? :) Simple is good, but simpler does not necessarily mean better.
DWM is obviously not competing with GNOME or KDE and is quite a niche window manager. However, by focusing on being a simple, hackable tool—rather than adding menus, settings, help pages, and so on—it remains reliable and easy to maintain. Each DWM user typically has their own set of carefully selected patches and can (re)compile/(re)install it as a single binary in just two or three minutes.
No one is forced to use it, but the overall experience is quite convincing.
They've built an entirely statically linked user space for Linux . Until then i never questioned the default Linux "shared libraries for everything" approach and assumed that was the best way to deliver software.
Every little cli tool i wrote at work - i used to create distro packages for them or a tarball with a shell script that set LD_LIBRARY_PATH to find the correct version of the xml libraries etc i used.
It didn't have to be this way. Dealing with distro versioning headaches or the finnicky custom packaging of the libraries into that tar ball just to let the users run by 150 kb binary.
Since then I've mostly used static linking where i can. AppImages otherwise. I'm not developing core distro libraries. I'm just developing a tiny "app" my users need to use. I'm glad with newer languages like Go etc... static linking is the default.
Don't get me wrong. Dynamic linking definitely has it's place. But by default our software deployment doesn't need to be this complicated.
LD_LIBRARY_PATH is a debugging and software engineering tool, and shouldn't ever be part of shipped software.
If you require a library, you can specify it as a dependency in your dpkg/pacman/portage/whatever manifest and the system should take care of making it available. You shouldn't need to write custom scripts that trawl around for the library. Another approach could be to give your users a "make install" that sticks the libraries somewhere in /opt and adds it as the lowest priority ld_library_path as a last resort, maybe?
This was the biggest pain point in deploying *application software* on Linux though. Distributions with different release cycles providing different versions of various libraries and expect your program to work with all of those combinations. The Big famous libraries like Qt , gtk might follow proper versioning but the smaller libraries from distro packages - guarantee. Half of them don't even use semantic versioning.
Imagine distros swapping out the libraries you've actually tested out your code with with their libraries for "security fixes" or whatever the reason. That causes more problems than it fixes.
Custom start up script was to find the same xml library I've used in the tar ball i packaged the application in. They could then extract that tar ball wherever they need - including /opt and run the script to start my application and it ran as it should. Iirc we used to even use rpath for this.
It depends a lot on ABI/API stability and actual modularity of ... components. There's not always a guarantee of that.
Shared libraries add a lot of complexity to a system for the assumption that people can actually build modular code well in any language that can create a shared library. Sometimes you have to recompile because, while a #define might still exist, its value may have changed between versions, and chaos can ensue - including new, unexpected bugs - for free!
Windows programs generally do link dynamically to core Windows libraries—which users are never expected to mess with anyway—and the C and C++ runtimes, but even these can be statically linked against with `cl.exe /MT`. Some programs even distribute newer versions of the C/C++ runtimes; that's where the famous Visual C++ Redistributables come from.
I agree, though—static linkage should be the default for end-user programs. I long for a time when Linux gets 'libc' redistributables and I can compile for an old-hat CentOS 6 distribution on a modern, updated, rolling-release Arch Linux without faffing with a Docker container.
Every time I tried to get a third-party binary app running on Linux, I discovered that the vendor shipped half their dependencies as blobs and relied on the system for the other half - an incredibly brittle setup that breaks constantly.
The entry point usually is a script that sets LD_LIBRARY_PATH and then calls into the executable.
This is not true - you can control the DLL path via manifests. There's also a "known DLLs" list in registry which can globally redirect basically any DLL system-wide.
For instance, GNU Make uses some of GNULib.
https://git.savannah.gnu.org/cgit/make.git (gl subdirectory)
The Linux linking model is so bad. So extremely very bad. Build systems should never rely on whatever garbage happens to be around locally. The glibc devs should be ashamed.
For example, in places where a filesystem-related "standard" has changed, I have old static binaries that fail to start entirely, whereas same-dated dynamic binaries just need their dependencies installed.
I am convinced that every argument in favor of static linking is because they don't know how to package-manager.
If you want a newer version, too bad - your OS doesn't ship it, so better luck in the next release. Or you can set up a private repo and either ship a binary with the dependencies included (shipping half the userland with your audio player), or package the newer version of the library, which will unwittingly break half your system - if not today, then surely at the next distro upgrade.
It speaks volumes about Linux package management woes that no vendor ships anything analogous to brew or chocolatey.
Which would be a fair reason. People who like to build things might just not want to also learn how to package stuff.
It is also a statically linked Linux distribution. But its core idea is reproducible Nix-style builds (including installing as many different versions/build configurations of any package as you like), with less PL fluff: no fancy functional language, just some ugly Jinja2/shell-style build descriptions, which in practice work amazingly well because the underlying package/dependency model is very solid - https://stal-ix.github.io/IX.html
It is very opinionated (just see this - https://stal-ix.github.io/STALIX.html) and a bit rough, but I was able to run it in VMs successfully. It would be amazing if it stabilizes one day.
Consider how dynamic linking libc works when a critical security bug is found and fixed. To update your system you update libc.so.
If it were statically linked, you need to update your whole distribution.
Nix is much closer to a "good" dynamic linking solution IMO except that it makes it overly difficult to swap things out at runtime. I appreciate that the default is reproducible and guaranteed to correspond to the hash but sometimes I want to override that at runtime for various reasons. (It's possible this has changed since I last played with that tooling. It's been awhile.)
> AppImages
AppImages require a large number of (obsolete) dependencies to run, making their portability practically worthless. Newer immutable distros like Aeon don't ship the necessary packages to run an AppImage.
At the end of the day, the apps were simple end-user applications. They used a handful of library functions from different libraries. My users cared only about using my apps to do whatever the apps did. I just cared that my app should work on their machines easily, no matter what version of what distro they're using.
1. You can bake the LD_LIBRARY_PATH into your executable with the -rpath mechanism.
2. Both rpath and LD_LIBRARY_PATH expand certain tokens such as $ORIGIN. $ORIGIN expands to the location of the program.
So you can (for instance) set an rpath of $ORIGIN in your executable, and then it will look for its libs in its own directory. Just like the intelligent default on the you-know-what Redmond operating system everyone rags on.
https://pwmt.org/projects/zathura/
[0]: https://sioyek.info
No frills, super fast and small. Been using it on Windows for years.
Personally, I just use mupdf (which I sandbox through bubblewrap).
I like it too though.
> Variadic macros are acceptable, but remember
Maybe my brain is too smooth, but I don't understand how for(int i = 0...) is too clever but variadic macros are not. That makes no sense to me.
I think the no "loop initial declarations" rule is for consistency with "all declarations at the top". Other coding style guides favor "declarations as close as possible to first use", including guidelines for mission-critical systems (if you resort to argument from authority, I have some too...) [1].
As much as I like Suckless, this section is just pet peeves that can safely be ignored; unless you submit a patch to a project that aligns with it.
[1] https://pvs-studio.com/en/docs/warnings/v2551/
True, and it would indeed be desirable that it were. Here I go out on a limb and assume it's because someone got bitten by attempting to use the loop index outside the loop (common for search operations) while declaring the index both within and outside the loop - a bug (gcc and clang can warn about it with -Wshadow, which sadly isn't part of -Wall) that might easily occur when multiple people edit the code over a longer time span.
Why not force c90 altogether then?
An example of such a macro is the following macro (the loop and the variable declaration will both be optimized out by the compiler; I have tested this):
Another macro (which is part of an immediate mode UI implementation) is:

[0] https://tilde.team/~ben/suckmore/
[1] https://dev.suckless.narkive.com/mEex8nff/cannot-run-st#post...
I'm not claiming I could write these tools as simply as they have, but surely the importance of these paradigms arises when actually complicated software is needed?
I recently spent a few hours evaluating different terminals. I went back to urxvt, tried Alacritty again, gave Ghostty a try, and spent quite some time configuring Kitty. After all this I found that they all suck in different ways. Most annoying of all is that I can't do anything about it. I'm not going to spend days of my life digging into their source code to make the changes I want, nor spend time pestering the maintainers to make the changes for me.
So I ended up back at my st fork I've been using for years, which sucks... less. :) It consists of... 4,765 SLOC, of which I only understand a few hundred, but that's enough for my needs. I haven't touched the code in nearly 5 years, and the binary is that old too. I hope it compiles today, but I'm not too worried if it doesn't. This program has been stable and bug-free AFAICT for that long. I can't say that about any other program I use on a daily basis. Hmm, I suppose the GNU coreutils can be included there as well. But they also share a similar Unixy philosophy.
So, is this philosophy perfect? Far from it. But it certainly comes closer than any other approach at building reliable software. I've found that keeping complexity at bay is the most difficult, yet most crucial thing.[1]
[1]: https://grugbrain.dev/#grug-on-complexity
I just don't really want to use or support software by people who, at best, think it's appropriate to joke about an ideology that wants me [0] dead, or at worst, actively subscribe to that ideology. There are some things that I'm not willing to look past.
[0]: non-white, non-straight, left of the political spectrum
Oh, there are also the edgelords occasionally lured in by Luke Smith's videos (who has never set foot in the community or contributed code while I have been around, and I am not sure if he ever did), who usually get laughed out of IRC after delivering an unhinged chanspeak rant.
I think it's possible to separate the art from the artist, and enjoy the art without being concerned about the artist's beliefs, and whether I disagree with them.
Also, you don't necessarily support them by using their software. The software is free to use by anyone, and you never have to interact with the authors in any way. Software is an amorphous entity. Unless they're using it to spread their personal beliefs, it shouldn't matter what that is. By choosing not to use free software, you're only depriving yourself.
But this is your own choice, of course, and I'm not saying it's wrong. Just offering a different perspective.
Who should be on the committee that decides what we may talk and joke about, and how should the committee inform itself?
The new forbidden topics will be chosen from the set of topics people talk about, which gets smaller, stranger, and more political. What people secretly believe will be much closer to the secret dialog, while the public dialog floats away.
That people are saying things is the least of your concern.
Fascinating perspective, though. It is much easier if one is more secure, talks easily, or has a more mundane worldview. Not something one can choose. A thicker skin, however, is.
Also interesting: if one didn't like the people running the lunchroom at the end of the street, or didn't like the visitors, you used to be able to go to some other place. Today they are all part of the same chain. We've lost a lot of freedom there.
It's getting to the point that I'm considering keeping myself ignorant of developers' beliefs for my own mental wellness.
This is an odd thing to bring up though because that's quite literally the only way to make any changes to suckless software, editing source code in C.
The entire philosophy behind it is performative in many ways. There's nothing simple or "unbloated" about having to recompile a piece of software every time you want to make what should really be a runtime user configuration, and it makes an entire compiler toolchain effectively a dependency for even the most trivial config change.
I tried their window manager out once and the only way to add some functionality through plugins is to apply source code patches, but there's no guarantee that the order doesn't mess things up, so you basically end up manually stitching pieces of code together for functionality that is virtually unrelated. It's actual madness from a complexity standpoint.
You're ignoring the part where the tools are often a fraction of the size and complexity of similar tools. I can go through a 5K SLOC program and understand it relatively quickly, even if I'm unfamiliar with the programming language or APIs. I can't do the same for programs 10 or 100x that size. The code is also well structured and documented IME, so changing it is not that difficult.
In practice, once you configure the program to your liking, you rarely have to recompile it again. Like I said, I'm using a 5 year old st binary that still works exactly how I want it to.
Maintaining a set of patches is usually not a major problem either. The patches are often small, and conflicts are rare, but easily fixable. Again, in my experience, which will likely be different from yours. Our requirements for how we want the software to work will naturally be different.
The madness you describe to me sounds like a feature. It intentionally makes it difficult to add a bunch of functionality to the software, which is also what keeps it simple.
> There's nothing simple or "unbloated" about having to recompile a piece of software every time you want to make what should really be a runtime user configuration, and it makes an entire compiler toolchain effectively a dependency for even the most trivial config change.
It is true, but depending on the software, sometimes this is acceptable. (Some of the internet server software that I wrote (such as scorpiond) are configured in this way, in order to take advantage of compiler optimizations.)
For some other programs, some things will have to be configured at compile time (mostly things that probably don't need to be changed after making a package of the program in some package manager), although most things can be configured at run time and do not need to be configured at compile time.
> I tried their window manager out once and the only way to add some functionality through plugins is to apply source code patches, but there's no guarantee that the order doesn't mess things up, so you basically end up manually stitching pieces of code together for functionality that is virtually unrelated. It's actual madness from a complexity standpoint.
This is a valid criticism, and is why I don't do that for my own software. However, it is sometimes useful to make your own modifications to existing programs, but just applying sets of patches that do not necessarily match is the madness that you describe.
After moving to a gigantic monitor and gigantic resolutions, my poor st fork was suffering. zutty was a great replacement for me: https://git.hq.sig7.se/zutty.git
* One of the lead devs' laptops is named after Hitler's hideout in the forest
* Their 2017 conference had a torchwalk that was a staple of Nazi youth camping (and heavily encouraged by the SS as a nationalism thing)
* Several of the core devs are just assholes to people on- and offline.
* Most of the suckless philosophy is "It does barely what it needs to and it was built by us, so it's superior to what anyone else has written". A lot of it shows in dwm, dmenu, etc.
[1]: https://hn.algolia.com/?dateRange=all&page=0&prefix=false&qu...
I agree. Such things are not relevant when considering to use their formats and programs and stuff like that.
What is relevant is their software and related stuff like that, and not their political leanings, etc. I do not agree with all of their ideas about computer software, although I agree with some of them.
Like them, I also don't like systemd, so I agree with them about not liking systemd.
I do use farbfeld, although I wrote all of the software for doing so myself rather than using their software (although it should be interoperable with their software, and any other software that supports farbfeld, such as ImageMagick). Also, I do not use farbfeld for disk files, but only with pipes. (My farbfeld utilities package also includes the only XPM encoder/decoder that I know of that supports some of the uncommon features that most XPM encoders/decoders I know of are not compatible with or are not fully capable of.)
I may consider libzahl if I have a use for big integers, although I also might not need it. (I had written some dealing with big integers before; one program I wrote (asn1.c) that deals with big integers only converts between base 100 and base 128 in order to convert OIDs between text and binary format.)
However, I would also want software that can better handle non-Unicode text (so it is one of the things I try to write), which many programs don't do properly. This should mean that any code that deals with Unicode (if any) is bypassed when non-Unicode is used. Some programs should not need to support Unicode at all (including some that should not need to care about character encoding at all, or that do not deal with text, etc). (I had considered writing my own terminal emulator for this and other reasons.)
I use pure zsh with some plugins manually installed, plus the Luke Smith dotfiles; the history part sometimes takes a while to load, but foot is just fast.
Last time I did the same (days not hours tho lol), I was somewhat surprised to find myself landing on xterm. After resolving a couple of gotchas (reliable font-resizing is somewhat esoteric; neovim needs `XTERM=''`; check your TERM) I have been very pleased and not looked back.
urxvt is OG but xterm sixel support is nice.
If you don't mind, tell more? I use kitty and it seems a big upgrade from whatever I used before...
That’s… certainly a low bar for not sucking
> Because dwm is customized through editing its source code, it's pointless to make binary packages of it. This keeps its userbase small and elitist. No novices asking stupid questions.
...sucks less than what? :) Simple is good, but simpler does not necessarily mean better.
No one is forced to use it, but the overall experience is quite convincing.