Warts are the product of caring about backwards compatibility with initial design mistakes. So I'm not sure they're a reliable heuristic for better choice of technology. E.g. YAML is definitely much wartier than JSON, but I think the latter is a much better choice for long-term use than the former.
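A concrete sketch of the sort of YAML wart in question: the well-known "Norway problem", which comes from YAML 1.1's implicit typing (assumes python3 with PyYAML installed; purely illustrative).

    # YAML 1.1 resolvers (e.g. PyYAML's safe_load) treat unquoted NO/no/yes/on/off as booleans,
    # so a country code silently turns into False.
    python3 -c 'import yaml; print(yaml.safe_load("country: NO"))'
    # {'country': False}

    # JSON has no implicit typing, so the same value round-trips as a string.
    python3 -c 'import json; print(json.loads("{\"country\": \"NO\"}"))'
    # {'country': 'NO'}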
It’s probably possible to minimize warts without negatively impacting other aspects of the software. It just isn’t done very often, probably because it requires either a full feature freeze (letting all resources go toward disposing of the figurative bathwater while keeping the baby), or at minimum a pointedly slow, judicious approach to feature additions that helps designers and devs get more right the first time and ship only fully baked products to users.
No, everything today is about endless frenzied stacking of features as high as possible, totally ignoring how askew the tower is sitting as a result. Just prop it up again for the 50th time and above all, ship faster!
Another possibility is that the designers carefully examined predecessors to learn from their mistakes, and went through a long refinement period before stabilizing the design. This is Common Lisp.
JSON also has a few warts that aren't going away exactly due to backwards compatibility. So I think that's actually a pretty good illustration of the point.
> JSON also has a few warts that aren't going away exactly due to backwards compatibility. So I think that's actually a pretty good illustration of the point.
How? All nontrivial software has warts, you're never choosing between warts and no warts, you're choosing between more warts and less warts. And JSON has few warts compared to the alternatives, especially YAML.
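For concreteness, a couple of the JSON warts usually meant in this kind of comparison, sketched with python3's bundled json module (commonly cited examples, not necessarily the ones either commenter has in mind):

    # Comments and trailing commas are rejected by conforming parsers, and that is now
    # frozen: changing the grammar would break the installed base.
    printf '{"a": 1,}' | python3 -m json.tool
    # fails with something like: Expecting property name enclosed in double quotes

    printf '{"a": 1 /* why? */ }' | python3 -m json.tool
    # fails too: comments are simply not part of the grammar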
> There’s a chart in the presentation that shows how environmental churn and api deprecation leads desktop applications to have an expected lifetime of maybe a decade, and phone apps closer to a couple of years. On the other hand, simple web pages have worked unmodified for over 40 years! That’s a good reason to default to the web as a technology.
Isn't it the opposite? Just take a look at any web app and tell me it wasn't rewritten at some point due to the rapid shift in web frameworks and hype cycles. Heck, even HN was rewritten, afaik.
Meanwhile, I have Matlab, which has remained literally the same for over a decade.
It said simple web pages, not web apps. Web apps that use APIs are basically in the same boat as a phone app. Constantly moving API target and platform requirements ever changing.
Ok but what is serving the simple web page? Do we think that software has been patched or rewritten over the last few decades?
I feel like there is a category error here where people think a website is the HTML that lands in your browser. When actually it is also the technology that received your request and sent the HTML. Even if it’s the same “flat” HTML sitting on the server, a web application is still sending it. And yes, those require maintenance.
Just like simple webpages, simple desktop and mobile apps last a long time too. It is only when working on complex applications that use cutting-edge APIs that churn becomes a problem.
I mean, I guess the death of Flash / Java Applets / ActiveX / etc. counts, but in the JavaScript world it doesn't feel like we've actually had that many breaking changes.
Webapps are rewritten because a developer wanted to use the new shiny, or someone was convinced that everything will be better with the newer frameworks everyone is using. Also, it often goes hand in hand with giving it a more modern look-and-feel.
But the point is not whether webapps are rewritten, but whether they have to be rewritten. I know some old enterprise webapps made with PHP about 10 years ago that are still working fine.
You do have to worry about security issues, and the occasional deprecation of an API, but there is no reason why a web-based service should need to be rewritten just to keep working. Is that true for mobile and desktop apps?
If your webapp is simple, a rewrite is no big deal and often cheaper than updating the old one. As your project gets large that is no longer true. I work with embedded systems: when everything was small (8 bits isn't enough for anything else; a new feature often means removing something else), we often rewrote large parts to get one new feature. It was easy to estimate a new project and we came in on time. As projects get bigger (32 and 64 bits are now available) we can't do that; we can't afford a billion-dollar rewrite project every year.
> but there is no reason why a web-based service should need to be rewritten just to keep working
I mean most webapps of any size are built on underlying libraries, and sometimes those libraries disappear requiring a significant amount of effort to port to a new library.
My current hobby project is for DOS. Runs everywhere, mostly thanks to DOSBox, and the API has not changed since 1994 and will never change. For something to run offline and to avoid being stuck forever maintaining code I think this is what I will stick to for most of my future projects as well.
It's not like any modern OS, or popular libraries/frameworks could not provide an equally stable (subset of an) API for apps, but sadly they don't.
I don't think anyone wants warts. It is a fact that every technology has them, though, and every attempt to get rid of warts seems to introduce new, perhaps as-yet-unknown ones. There is a kind of law of conservation of misery here. I think I have now said what the article is trying to say in a somewhat clearer way, and without a title that is actually false.
The point of the article is that warts are subjective. What one person considers unwanted behavior, another might see as a feature. They are a testament to the software's flexibility and to maintainers caring about backwards compatibility, which, in turn, is a sign that users can rely on it for a long time.
Nobody wants bugs. But I agree with the article's premise.
This makes me think of Alexander's pattern language: according to that framework, one of the key components of beauty is texture. I wonder if our dislike of untextured experiences stems from an unease at not knowing where the thing is hiding its warts.
That could be true. What it makes me think about is the danger of things that magically just work. Things that magically just work may also stop working for some reason, and then you have no idea how to fix them.
I agree with the article's premise, but would add one thing: warts might be a sign that the software is too flexible, and/or is doing too many things. For programming languages and databases this, of course, doesn't apply, since they're designed to be flexible. But simple tools that follow the Unix philosophy rarely have warts. I've never experienced warts with Unix tools such as `cat`, `cut`, `uniq`, etc., which I believe is primarily because they do a single thing, and their interface is naturally limited to that task. Maybe there were some growing pains when these tools were new, but now their interfaces are practically cemented, and they're the most reliable tools in my toolbelt. More advanced tools such as `grep`, `sed`, and particularly `awk`, are a bit different, since their use cases are much broader, they support regex, and `awk` is a programming language.
This is why my first instinct when automating a task is to write a shell script. Yes, shells like Bash are chock-full of warts, but if you familiarize yourself with them and are able to navigate around them, a shell script can be the most reliable long-term solution. Even cross-platform if you stick to POSIX. I wouldn't rely on it for anything sophisticated, of course, but I have shell scripts written years ago that work just as well today, and I reckon will work for many years to come. I can't say that about most programming languages.
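A minimal sketch of the kind of script meant here, sticking to POSIX sh so it runs under dash, bash, busybox and friends (the task and file names are hypothetical):

    #!/bin/sh
    # Archive a directory of logs into dated gzip files -- the sort of small automation
    # that a plain POSIX script can keep doing, unchanged, for years.
    set -eu

    src="${1:?usage: rotate-logs.sh LOGDIR}"
    stamp=$(date +%Y-%m-%d)

    mkdir -p "$src/archive"
    for f in "$src"/*.log; do
        [ -e "$f" ] || continue        # glob matched nothing; skip
        gzip -c "$f" > "$src/archive/$(basename "$f").$stamp.gz"
        : > "$f"                       # truncate the live log file
    done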
> "I've never experienced warts with Unix tools such as `cat`, `cut`, `uniq`, etc."
Literally the first time you use uniq and find duplicates in the output is a wart. You have to realise it's not doing "unique values", it's doing "drop consecutive duplicates", and the reason is that 1970s computers didn't have enough memory to dedupe a whole dataset and could only do it streaming.
cat being "catenate", intended to catenate multiple files together into one pipeline stream, but mostly being used to dump single files or feed them to a pager instead. "cat words.txt | less" is a wart seen all over the place.
cut being needed to separate fields because we couldn't all come together and agree to use the ASCII Field Separator character is arguably warty.
Or the find command, where you can find files by size: you can specify +100G for files of 100 Gibibytes or larger, or M for Mebibytes, K for Kibibytes, and b for bytes. No, just kidding, b is 512-byte blocks; for bytes it's nothing at all, just +100. No, just kidding again, that's also 512-byte blocks. It's c for bytes, because the first version of Unix happened to use 512-byte blocks on its filesystem[1]. And the c stands for characters, so you have to know that it's all pre-Unicode and using 8-bit characters.
[1] https://unix.stackexchange.com/questions/259208/purpose-of-f...
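A quick demo of the uniq behaviour and the find suffixes described above (nothing here beyond what the man pages document):

    # uniq only collapses *adjacent* duplicates; unsorted input still shows repeats.
    printf 'b\na\nb\n' | uniq          # prints: b, a, b
    printf 'b\na\nb\n' | sort | uniq   # prints: a, b
    printf 'b\na\nb\n' | sort -u       # same result in one step

    # find's -size suffixes: c means bytes, b (or no suffix at all) means 512-byte blocks.
    find . -size +100c   # files larger than 100 bytes
    find . -size +100    # files larger than 100 blocks of 512 bytes
    find . -size +100M   # files larger than 100 MiB (k/M/G are GNU and BSD extensions)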
I think you're misunderstanding what a "wart" is. None of those examples are warts. They're the way those programs work, which you can easily understand by reading their documentation. Your point about `cat` is an example of misuse, and `cut` is needed because of the deliberate design decision to make text streams the universal interface, which has far greater benefits than forcing any specific separator.
A wart is not surprising or unintuitive behavior. It is a design quirk—a feature that sticks out from the program's overall design, usually meant to handle edge cases, or to preserve backwards compatibility. For example, some GNU programs like `grep` and `find` support a `POSIXLY_CORRECT` env var, which changes their interface and behavior.
My point is that the complexity of the software and its interface is directly proportional to the likelihood of it developing "warts". `find` is a much more complex program than `cut` or `uniq`, therefore it has developed warts.
Next time, please address the point of the comment you're replying to, instead of forcing some counterargument that happens to be wrong.
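To make the POSIXLY_CORRECT example concrete (a sketch; notes.txt is a hypothetical file, and the difference comes from the argument-permutation behaviour documented in GNU grep's man page):

    # By default GNU grep permutes arguments, so an option after the operands still counts:
    grep foo notes.txt -i             # -i is treated as an option

    # With POSIXLY_CORRECT set, option parsing stops at the first non-option argument,
    # and the same "-i" is now treated as a (probably nonexistent) file to search:
    POSIXLY_CORRECT=1 grep foo notes.txt -i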
The entire premise, that our goal is to create the software equivalent of 100+ year old bridges, is a bit flawed. We aren't building a historical legacy here. Our crummy web-apps are not the great pyramids, and should not be built like them. Nobody likes to admit it, but 99% of what we build today is disposable, and it should be built cheaply and quickly.
I see this attitude, especially with juniors, and often with project managers, that we need perfection and are building things that are meant to last for decades. Almost nothing does though. Many/most business applications are obsolete within 5 years and it costs more to cling to the fiction that what we're doing is important and lasting.
I think when you’re young & doe-eyed, what’s shiny & new is exciting, and your wants are easily confused with needs.
You don’t need the shiniest & newest framework to tell a computer to generate some HTML & CSS with a database and some logic. And without the lived experience of building & shipping, you don’t realise that only 1 in 1,000,000 projects probably ever gets more than 100 sets of eyeballs, so you end up using much more complicated tools than necessary.
But all the news & socials will sing praise to the latest shiny tool’s x.x.9 release, so those wants easily get confused with needs.
I more or less agree, if you are a small company with a small real world userbase, you don't need hyperscaling tech, and thus hyperscaling problems.
I think some companies oversubscribe to reliability technology too. You should assess if you really need 99.9999% uptime before building out a complex cloud infrastructure setup. It's very likely you can get away with one or two VMs.
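For a sense of scale, the downtime those availability targets actually allow per year, computed with awk (plain arithmetic, not a recommendation for any particular target):

    # Allowed downtime per year for a given availability target.
    for a in 99.9 99.99 99.999 99.9999; do
        awk -v a="$a" 'BEGIN { m = (100 - a) / 100 * 365.25 * 24 * 60;
                               printf "%s%%  ->  %.1f minutes/year\n", a, m }'
    done
    # 99.9%     ->  about 526 minutes/year (~8.8 hours)
    # 99.9999%  ->  about 0.5 minutes/year (~32 seconds)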
As I understand, HN runs on a single server with a backup server for failover.
You may not need the shiniest new framework, but you do have to ensure your old framework has not been abandoned or stopped getting security updates.
So much this. Warts = (software longevity) life lessons. (Though it must be noted that warts != bad design.)
Some years ago, I gave a talk on functional-style shell programming which began with this:
Prelude: Why even bother?
- Personally…
  - Text is everywhere, better learn to process it
  - The masters already solved it better
  - Stable tools, well-known warts
  - Personal/team productivity, tool-building autonomy
- To learn about
  - good and successful design
  - bad and successful design
  - computer history
- For the fun of it
ref: Dr. Strangepipes Or: How I Learned To Stop Worrying && Function In Shell (Functional Conf 2019)
org-mode slideware: https://gist.github.com/adityaathalye/93aba2352a5e24d31ecbca...
live demo: https://www.youtube.com/watch?v=kQNATXxWXsA&list=PLG4-zNACPC...
Basically, if there are no obvious warts, then either:
1. The designers magically hit upon exactly the right needed solution the very first time, every time, or
2. The designers regularly throw away warty interfaces in favor of cleaner ones.
#1 is possible, but #2 is far more likely.
But that was more than a decade ago, I guess?
Desktop apps in theory can run too, but it depends on what they link and if OS still provides it.