> Would you buy a car if it eats 100 liters per 100 kilometers? How about 1000 liters?
I think the analogy here is backwards. The better question is "how much would you prioritize a car that used only 0.05 liters per 100km over one that used 0.5? What about one that used only 0.005L?". I'd say that at that point, other factors like comfort, performance, base price, etc. become (relatively) much more important.
If basic computer operations like loading a webpage took minutes rather than seconds, I think there would be more general interest in improving performance. For now though, most users are happy enough with the performance of most software, and other factors like aesthetics, ease-of-use, etc. are the main differentiators (admittedly feature bloat, ads, tracking, etc. are also a problem, but I think they're mostly orthogonal to under-the-hood performance).
These days, I think most users will lose more time and be more frustrated by poor UI design, accidental inputs, etc. than any performance characteristics of the software they use. Hence the complexity/performance overhead of using technologies that allow software to be easily iterated and expanded is justified, to my mind (though we should be mindful of technology that claims to improve our agility but really only adds complexity).
> "how much would you prioritize a car that used only 0.05 liters per 100km over one that used 0.5? What about one that used only 0.005L?". I'd say that at that point, other factors like comfort, performance, base price, etc. become (relatively) much more important.
I'll prioritize the 0.005L per 100km car for sure. That means the car can be driven for all its expected lifetime (500k km) in a single tank of gas, filled up at the time of purchase! That means there is a huge opportunity to further optimize for many things in the system:
- The car no longer needs a filler hole on the side. A lot of pipes can be removed, and the gas tank can be moved to a safer location, closer to where the fuel is used.
- The dashboard doesn't need a dedicated slot for the fuel gauge, so more wiring and mechanical parts can be removed.
- No need for huge exhaust and cooling systems, since the wasted energy is significantly reduced. No fuel pump, less vehicle weight...
Of course, that 0.005L car won't arrive before a good electric car does. However, if it existed, I'd totally prioritize it over the other things you listed. I think people tend to underestimate how small efficiency improvements add up and unlock outsized value for the system as a whole.
This is definitely an interesting take on the car analogy so thanks for posting it! I don't know that I agree 100% (I think I could 'settle' for a car that needed to be fueled once or twice a year if it came with some other noticeable benefits), but it is definitely worth remembering that sometimes an apparently small nudge in performance can enable big improvements. Miniaturization of electronics (including batteries and storage media) and continuing improvements to wireless broadband come to mind as the most obvious of these in the past decades.
I'm struggling to think of recent (or not-so-recent) software improvements that have had a similar impact though. It seems like many of the "big" algorithms and optimization techniques that underpin modern applications have been around for a long time, and there aren't a lot of solutions that are "just about" ready to make the jump from supercomputers to servers, servers to desktops, or desktops to mobile. I guess machine learning is probably a contender in this space, but I imagine that's still an active area of optimization and probably not what the author of the article had in mind. I'd love it if someone could provide an example of recent consumer software that is only possible due to careful software optimization.
> I'll prioritize the 0.005L per 100km car for sure. That means the car can be driven for all its expected lifetime (500k km) in a single tank of gas, filled up at the time of purchase!
It's a nice idea but it wouldn't work. The gasoline would go bad before you could use it all.
Plug-in hybrids already have this problem. Their fuel management systems try to keep the average age of the fuel in the tank under 1 year. The Chevy Volt has a fuel maintenance mode that runs every 6 weeks:
https://www.nytimes.com/2014/05/11/automobiles/owners-who-ar...
https://www.autoblog.com/2011/03/18/chevy-volts-sealed-gas-t...
Instead of having a "lifetime tank", a car that uses 0.005L per 100km would be better off with a tiny tank. And then instead of buying fuel at a fuel station you'd buy it in a bottle at the supermarket along with your orange juice.
You are thinking too small. With a car generating power that cheaply, you could use it to drive a turbine and provide cheap electricity to the entire world. It would fix our energy needs for a very long time and usher in a new age!
The big problem is this: if we relate it back to software, it would mean the software being delivered in 10-15 years rather than in 6 months. Kind of a big downside...
A UI where each interaction takes several seconds is poor UI design. I do lose most of my time and patience to poor UI design, including needless "improvements" every few iterations that break my workflow and have me relearn the UI.
I find the general state of interaction with the software I use on a daily basis to be piss poor, and over the last 20 or so years I have at best seen zero improvement on average, though if I was less charitable I'd say it has only gone downhill. Applications around the turn of the century were generally responsive, as far as I can remember.
> These days, I think most users will lose more time and be more frustrated by poor UI design, accidental inputs, etc. than any performance characteristics of the software they use.
I’m willing to bet that a significant percentage of my accidental inputs are due to UI latency.
Virtually all of my accidental inputs are caused by application slowness or repaints that occur several hundred milliseconds after they should have.
I want all interactions with all of my computing devices to occur in as close to 0ms as possible. 0ms is great; 20ms is good; 200ms is bad; 500ms is absolutely inexcusable unless you're doing significant computation. I find it astonishing how many things will run in the 200-500ms range for utterly trivial operations such as just navigating between UI elements. And no, animation is not an acceptable illusion to hide slowness.
I am with the OP. "Good enough" is a bane of our discipline.
Don’t get me started with all the impressive rotating zooming in Google Maps every time you accidentally brush the screen.
The usage story requires you to switch to turn-by-turn, and there’s no way to have a bird's-eye map follow your location along the route (unless you just choose some zoom level and manually recenter every so often.)
It’s awful, distracting and frankly a waste of time... just to show a bit of animation every time I accidentally fail to register a drag...
I respectfully disagree -- something that is 10 times more efficient uses (theoretically) a tenth of the energy. When the end user suffers a server outage due to load, or when their battery runs out ten times quicker, these things matter. When you have to pay for ten servers to run your product instead of one, that cost gets passed on to the end user.
I was forced to use a monitor at 30 fps for a few days due to a bad display setup. It made me realize how important 60 fps is. Even worse, try using an OS running in a VM for an extended period of time...
There are plenty of things that are 'good enough', but once users get used to something better they will never go back (if they have the choice, at least).
Another problem is that the inefficiency of multiple products tends to compound.
- Opening multiple tabs in a browser will kill your battery, and it's not the fault of a single page, but of all of them. Developers tend to blame the end user for opening too many tabs.
- Running a single Electron app is fast enough on a newer machine, but if you need multiple instances or multiple apps you're fucked.
- Some of my teammates can't use their laptops without the charger because they have to run 20+ docker containers just to have our main website load. The machines are also noisy because the fan is always on.
- Having complex build pipelines that take minutes or hours to run is something that slows down developers, who are expensive. It's not the fault of a single piece of software (except maybe of the chosen programming language), but of multiple inefficient libraries and packages.
> "Even worse, try using an OS running in a VM for an extended period of time..."
I actually do this for development and it works really well.
Ubuntu Linux VM in VMware Fusion on a Macbook Pro with MacOS.
Power consumption was found to be better than running Linux natively. (I'm guessing something about switching between the two GPUs, but who knows.)
GPU acceleration works fine; the Linux desktop animations, window fading and movement animations etc are just as I'd expect.
Performance seems to be fine generally, and I do care about performance.
(But I don't measure graphics performance, perhaps that's not as good as native. And when doing I/O intensive work, that's on servers.)
Being able to do a four-finger swipe on the trackpad to switch between MacOS desktops and Linux desktops (full screen) is really nice. It feels as if the two OSes are running side by side, rather than one inside another.
I've been doing Linux-in-a-VM for about 6 years, and wouldn't switch back to native on my laptop if I had a choice. The side-by-side illusion is too good.
Before that I ran various Linux desktops (or Linux consoles :-) for about 20 years natively on all my development machines and all my personal laptops, so it's not like I don't know what that's like. In general, I notice more graphics driver bugs in the native version...
(The one thing that stands out is that VMware's host-to-guest file sharing is extremely buggy, to the point of corrupting files, even crashing Git. MacOS's own SMB client is also atrocious in numerous ways, to the point of even deleting random files, but it does so less often, so you don't notice until later what's gone. I've had to work hard to find good workarounds to have reliable files! I mention this as a warning to anyone thinking of trying the same setup.)
Yes, but it's not just relative quantities that matter, absolute values matter too, just as the post you replied to was saying.
Optimizing for microseconds when bad UI steals seconds is being penny-wise and pound foolish. Business might not understand tech but they do generally understand how it ends up on the balance sheet.
> Even worse, try using an OS running in a VM for an extended period of time...
I do that for most of my hobbyist Linux dev work. It's fine. It can do 4k and everything. It's surely not optimal but it's better than managing dual boot.
I have to be careful about what I describe, but I don't think people care about speed or performance at all when it comes to tech, and it makes me sad. In fact, there are so many occasions where the optimisation is so good that the end user doesn't believe that anything happened. So you have to deliberately introduce delay because a computer has to feel like it thinks the same way you do.
At my current place of employment we have plenty of average requests hitting 5-10 seconds and longer; we've got N+1 queries against the network rather than the DB. As long as it's within 15 or 30 seconds nobody cares, they probably blame their 4G signal for it (especially in the UK, where our mobile infrastructure is notoriously spotty, and entirely absent even in the middle of London). But since I work on those systems, I'm upset and disappointed that I'm working on APIs that can take tens of seconds to respond.
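For anyone who hasn't run into it, the N+1-over-the-network shape looks roughly like the sketch below (TypeScript; the endpoint names are made up for illustration). Each awaited call pays a full round trip, so latency stacks linearly with the number of items.

```typescript
// A minimal sketch of the N+1-over-the-network shape (endpoint names invented).
// Each awaited request pays a full network round trip, so 50 items at
// ~100 ms per call is already ~5 seconds before any server work happens.
async function loadOrdersSlow(api: string): Promise<unknown[]> {
  const orders: { id: string }[] = await (await fetch(`${api}/orders`)).json();
  const details: unknown[] = [];
  for (const o of orders) {
    // One extra request per order: the classic N+1, just over HTTP instead of SQL.
    details.push(await (await fetch(`${api}/orders/${o.id}`)).json());
  }
  return details;
}

// The fix is the same as for DB-level N+1: ask for everything in one round trip,
// assuming the backend exposes (or can be given) a batch endpoint.
async function loadOrdersFast(api: string): Promise<unknown[]> {
  return (await fetch(`${api}/orders?include=details`)).json();
}
```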
The analogy is also not great because MPG is an established metric for fuel efficiency in cars. The higher the MPG the better.
> In fact, there are so many occasions where the optimisation is so good that the end user doesn't believe that anything happened. So you have to deliberately introduce delay because a computer has to feel like it thinks the same way you do.
I never liked this view. I can't think of a single legitimate use case where hiding your true capabilities, and thus wasting people's time, is the best solution.
> they probably blame their 4G signal for it
Sad thing is, enough companies thinking like this and the incentive to improve on 4G itself evaporates, because "almost nothing can work fast enough to make use of these optimizations anyway".
> In fact, there are so many occasions where the optimisation is so good that the end user doesn't believe that anything happened. So you have to deliberately introduce delay because a computer has to feel like it thinks the same way you do.
I see this argument coming up a lot, but this can be solved by better UX. Making things slow on purpose is just designers/developers being lazy.
Btw users feeling uneasy when something is "too fast" is an indictment of everything else being too damn slow. :D
I wonder how this trend will be affected by the slowing of Moore’s law. There will always be demand for more compute, and until now that’s largely been met with improvements in hardware. When that becomes less true, software optimization may become more valuable.
I use the web versions of most social networking platforms such as Facebook. I am left handed and scroll with my left thumb (left half of the screen). I have accidentally ‘liked’ people’s posts and sent accidental friend requests purely for this reason.
Along with language selection, I'm guessing it might be helpful to offer a hand-preference setting for mobile browsing.
> admittedly feature bloat, ads, tracking, etc. are also a problem, but I think they're mostly orthogonal to under-the-hood performance
I think for webpages it is the opposite: non-orthogonal in most cases.
If you disable your JS/Ad/...-blocker, and go to pages like Reddit, it is definitely slower and the CPU spikes. Even with a blocker, the page still does a thousand things in the first-party scripts (like tracking mouse movements and such) that slow everything down a lot.
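To make that concrete, here is a hedged sketch (TypeScript, with an invented "/analytics" endpoint) of the difference between "do work on every mousemove" and a throttled, batched version a first-party script could use instead; it's an illustration of the mitigation, not what any particular site actually does.

```typescript
// mousemove can fire dozens of times per frame; doing work (or a network call)
// inside every event easily eats the ~16 ms frame budget.
let lastSent = 0;
const buffer: Array<{ x: number; y: number; t: number }> = [];

document.addEventListener(
  "mousemove",
  (e) => {
    buffer.push({ x: e.clientX, y: e.clientY, t: performance.now() });
    // Throttle: flush at most once per second instead of once per event.
    if (performance.now() - lastSent > 1000) {
      lastSent = performance.now();
      // sendBeacon avoids blocking the main thread on the request.
      navigator.sendBeacon("/analytics", JSON.stringify(buffer.splice(0)));
    }
  },
  { passive: true } // tells the browser this handler won't call preventDefault()
);
```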
I don't know, that just feels wrong. If anything, the rise of mobile means there should be more emphasis on speed. All the bloat is because of misguided aesthetics (which all look the same, as if designers move between companies every year, which they do) and fanciness. Can you point to a newish app that is clearly better than its predecessor?
> All the bloat is because of misguided aesthetics (which all look the same, as if designers move between companies every year, which they do) and fanciness
That's not really true. Slack could be just as pretty and a fraction of the weight, if they hadn't used Electron.
I think there are two factors preventing mobile from being a force to drive performance optimizations.
One, phone OSes are being designed for single-tasked use. Outside of alarms and notifications in the background (which tend to be routed through a common service), the user can see just one app at a time, and mobile OSes actively restrict background activity of other apps. So every application can get away with the assumption that it's the sole owner of the phone's resources.
Two, given the above, the most noticeable problem is now power usage. As Moore's law has all but evaporated for single-threaded performance, hardware is now being upgraded for multicore and (important here) power performance. So apps can get away with poor engineering, because every new generation of smartphones has a more power-efficient CPU, so the lifetime on single charge doesn't degrade.
I think objections like this may be put in terms of measurable cost-benefits but they often come down to the feeling of wasted time and effort involved in writing, reading and understanding garbage software.
Moreover, the same cost-equation that produces software that is much less efficient than it could be produces software that might be usable for its purpose (barely) but is much more ugly, confusing, and buggy than it needs to be.
That equation is: add the needed features, sell the software first, get lock-in, milk it 'till it dies, and move on. That equation is locally cost-efficient. Locally, that wins, and that produces the world we see every day.
Maybe, the lack of craftsmanship, the lack of doing one's activity well, is simply inevitable. Or maybe the race to the bottom is going to kill us - see the Boeing 737 Max as perhaps food for thought (not that software as such was to blame there but the quality issue was there).
The analogy is wrong as well because a car engine is used for a single purpose, moving the car itself. Imagine if you had an engine that powered a hundred cars instead, but a lot of those cars were unoptimized so you can only run two cars at a time instead of the theoretical 100.
or... something.
The car analogy does remind me of one I read a while ago, comparing cars and their cost and performance with CPUs.
>And build times? Nobody thinks compiler that works minutes or even hours is a problem. What happened to “programmer’s time is more important”? Almost all compilers, pre- and post-processors add significant, sometimes disastrous time tax to your build without providing proportionally substantial benefits.
FWIW, I did RTFA (top to bottom) before commenting. I chose to reply to some parts of the article and not others, especially the parts I felt were particularly hyperbolic.
Anecdotally, in my career I've never had to compile something myself that took longer than a few minutes (but maybe if you work on the Linux kernel or some other big project, you have; or maybe I've just been lucky to mainly use toolchains that avoid the pitfalls here). I would definitely consider it a problem if my compiler runs regularly took O(10mins), and would probably consider looking for optimizations or alternatives at that point. I've also benefited immensely from a lot of the analysis tools that are built into the toolchains that I use, and I have no doubt that most or all of them have saved me more pain than they've caused me.
I agree it's all slower and sucks. But I don't think it's solely a technical problem.
1/ What didn't seem to get mentioned was speed to market. It's far worse to carefully build the thing no one wants than to build the crappy thing that some people want a lot. As a result, it makes sense for people to leverage Electron--but it has consequences for users down the line.
2/ Because we deal in orders of magnitude with software, it's not actually a good ROI to chase improvements too small to register on a human scale. So what made sense to optimize when computers were 300MHz doesn't make sense at all when computers are 1GHz, given limited time and budget.
3/ Anecdotally (and others can nix or verify), what I hear from ex-Googlers is that no one gets credit for maintaining the existing software or trying to make it faster. The only way you get promoted is if you created a new project. So that's what people end up doing, and you get 4 or 5 versions of the same project that do the same thing, all not very well.
I agree that the suckage is a problem. But I think it's the structure of incentives in the environment in which software is written that also needs to be addressed, not just the technical deficiencies of how we practice writing software, like how to maintain state.
It's interesting Chris Granger submitted this. I can see that the gears have been turning for him on this topic again.
I might strengthen your argument even more and say it's largely a non-technical problem. We have had the tools necessary to build good software for a long time. As others have pointed out, I think a lot of this comes down to incentives and the fact that no one has demonstrated the tradeoff in a compelling way so far.
I find it really interesting that no one in the future of programming/coding community has been able to really articulate or demonstrate what an "ideal" version of software engineering would be like. What would the perfect project look like both socially and technically? What would I gain and what would I give up to have that? Can you demonstrate it beyond the handpicked examples you'll start with? We definitely didn't get there.
It's much harder to create a clear narrative around the social aspects of engineering, but it's not impossible - we weren't talking about agile 20 years ago. The question is can we come up with a complete system that resonates enough with people to actually push behavior change through? Solving that is very different than building the next great language or framework. It requires starting a movement and capturing a belief that the community has in some actionable form.
I've been thinking a lot about all of this since we closed down Eve. I've also been working on a few things. :)
I'll take this opportunity to appreciate C# in VS as a counterexample to the article. Fast as hell (sub-second compile times for a moderately large project on my 2011 vintage 2500k), extremely stable, productive, and aesthetically pleasing. So, thanks.
I've also been lurking on the FoC community, and hadn't seen much of an articulation of the social and incentive structures that produce software. Do you think they'd be receptive to it?
And by "social and incentive structures", I'm assuming you're talking about change on the order of how open source software or agile development changed how we develop software?
While agile did address how to do software in an environment for changing requirements and limited time, we don't currently have anything that addresses an attention to speed of software, building solid foundations, and incentives to maintain software.
What would a complete system encompass that's currently missing in your mind?
I think you would see great change if you looked at the personalities gathered around a given opportunity.
Because the problems themselves are rarely the real issue, even though it's perceived that way.
A certain challenge needs a specific set of personalities to solve it. That's the real puzzle.
Great engineers will never be able to solve things properly unless given the chance by those who control the surroundings.
We ask how we should develop and which method should be used: is it agile or is it lean? But maybe the problem starts earlier, and by focusing on exactly which methods and tools to use, we miss the simplest solutions that even beginners can see.
For example, I am an architect, and I tend not to touch the economics in a project. That is better suited to other people.
While I haven't read much about team-based development, I would like to be pointed to well-regarded literature about it. Maybe it's better called social programming; just another label for what we really do.
The person I miss the most at work is my wife. She is clearly the best counterpart to me and makes me perform 1000x better. I find that very funny, since she does not care about IT at all.
The stuff I write I don't think is that bloated, but like most things these days, the stuff I write pulls in a bunch of dependencies, which in turn pull in their own dependencies. The result: pretty bloated software.
Writing performant, clean, pure software is super appealing as a developer, so why don't I do something about the bloated software I write? I think a big part of it is it's hard to see the direct benefit from the very large amount of effort I'll have to put in.
Sure, I could write the one thing I use from that library myself instead of pulling in the whole library. It might be faster, I might end up with a smaller binary, and it might be more deterministic because I know exactly what it's doing. But it'll take a long time, it might have a lot of bugs, and forget about maintaining it. At the end of the day, do the people that use my software care that I put in the effort to do this? They probably won't even notice.
I think part of it is knowing how to use libraries. It's actually a good thing to make use of well-tested implementations a lot of the time rather than re-inventing the wheel: for instance, it would be crazy to implement your own cryptography functions, or your own networking stack in most cases. Libraries are good when they can encapsulate a very well-defined set of functionality behind a well-defined interface. Even better if that interface is arrived at through a standards process.
To me, where libraries get a bit more questionable is when they exist in the realm of pure abstraction, or when they try to own the flow of control or provide the structure around which your program should hang. For instance, with something like Ruby on Rails, it sometimes feels like you are trying to undo what the framework has assumed you need so that you can get the functionality you want. A good library should be something you build on top of, not something you carve your implementation out of.
Most developers I have known want to work on the great new thing. They don't want to spend a great deal of time on the project either. Forget about them wanting to dedicate time to software maintenance. Not sexy enough.
OK, but why? And what can we do to improve things? Promote maintenance, perhaps, but I think one of the issues is that you can show off something new, while it's much more difficult to show that something bad (a failure, difficulty growing) could have happened, but didn't.
> While I do share the general sentiment, I do feel the need to point out that this exact page, a blog entry consisting mostly of just text, is also half the size of Windows 95 on my computer and includes 6MB of javascript, which is more code than there was in Linux 1.0.
Linux at that point already contained drivers for various network interface controllers, hard drives, tape drives, disk drives, audio devices, user input devices and serial devices, 5 or 6 different filesystems, implementations of TCP, UDP, ICMP, IP, ARP, Ethernet and Unix Domain Sockets, a full software implementation of IEEE 754, a MIDI sequencer/synthesizer, and lots of other things.
>If you want to call people out, start with yourself. The web does not have to be like this, and in fact it is possible in 2018 to even have a website that does not include Google Analytics.
Since this Reddit comment was made, the Twitter iframe responsible for the megabytes of JavaScript has been replaced by a <video> tag. The only JavaScript left on the page is Google Analytics, which is way less than 6MB.
I feel bad now that my comment received so much attention. I didn’t realize that the Reddit comment was made a year ago, and I should have tested the webpage size myself. The author’s argument is still important, after all.
And this really wasn't the author's fault—it's completely logical that if your story contains a tweet, you should attempt to embed it in the way Twitter recommends.
Long ago I watched a documentary about the early Apple days, when management was encouraging their developers to reduce boot times by 10 seconds. The argument was that 10 seconds multiplied by the number of boot sequences would result in saving many human lives worth of time.
The software world needs more of this kind of thinking. Not more arguments like "programmer's time is worth less than CPU time", which often fail to account for all externalities.
I like the "human lifetimes wasted" metric. It's interesting to think that a badly optimized piece of code used by a few million people basically kills a few each day. If every manager, client and programmer thought for a second about whether the 30min they save is worth the human lifespans wasted, I think we'd have better software.
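Rough back-of-the-envelope: one extra second on an operation performed ten million times a day is 10^7 seconds, or roughly 115 person-days, burned every single day; over a year that adds up to more than a century of cumulative human time, weighed against the 30 minutes the developer saved.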
I wish more companies thought like this in general. I often think about the nature of the work I'm doing as a developer and wonder if it's making society better off as a whole. The answer is usually a resounding no.
In my country, SW engineer is one of the best careers in terms of income, and I bet it is similar in most other countries. Why do we deserve that much buzz/fame/respect/income if the work we are doing is NOT making society better?
They could think like this if it became part of their cost structure. There's no reason for them to think like this other than in terms of profit & loss.
That's an important comment and made me think that nobody here has mentioned climate change (where human lives are/will be affected, literally). There is an emerging movement toward low-carbon, low-tech, sustainable web design, but it's still very much fringe. To make it mainstream, we all need to work on coming up with better economic incentives.
If the cost of that boot time was somehow materialized upstream - e.g. if companies that produced OSes had to pay for the compute resources they used, rather than the consumer paying for the compute - then economics would solve the problem.
As it is, software can largely free ride on consumer resources.
This implies that time not spent using their software is time wasted doing nothing. Not that reducing boot times would be a bad thing, but that sounds more like a marketing gimmick. As kids we would wait forever for our Commodore 64 games to load; knowing this, we planned accordingly.
"...would result in saving many human lives worth of time."
Meh, this is manager-speak for "saving human lives", which they definitely were not. They weren't saving anybody. I mean, there's an argument that, in the modern day (2020), time away from the computer is better spent than time on it; so a faster boot time is actually worse than a slower boot time. Faster boot time is less time with the family.
Good managers, like Steve Jobs was, are really good at motivating people using false narratives.
Performance is one thing, but I'm really just struck by how often I run into things that are completely broken or barely working for extended periods of time.
As I write this, I've been trying to get my Amazon seller account reactivated for more than a year, because their reactivation process is just... broken. Clicking any of the buttons, including the ones to contact customer support, just takes you back to the same page. Attempts to even try to tell someone usually put you in touch with a customer service agent halfway across the world who has no clue what you're talking about and doesn't care; even if they did care, they'd have no way to actually forward your message along to the team that might be able to spend the 20 minutes it might take to fix the issue.
The "barely working" thing is even more common. I feel like we've gotten used to everything just being so barely functional that it isn't even a disadvantage for companies anymore. We usually don't have much of an alternative place to take our business.
Khan Academy has some lessons aimed at fairly young kids—counting, spotting gaps in counting, we're talking that simple. I tried to sit with my son on the Khan Academy iPad app a few weeks ago to do some with him, thinking it'd be great. Unfortunately it is (or seemed to be, to such a degree that I'm about 99% sure it is) janky webtech, so glitches and weirdness made it too hard for my son to progress without my constantly stepping in to fix the interface. Things like: no feedback that a button's been pressed? Guess what a kid (or hell, an adult) is gonna do? Hammer the button! Which... then keeps it greyed out once it does register the press, but doesn't ever progress, so you're stuck on the screen and have to go back and start the lesson over. Missed presses galore, leading to confusion and frustration that nothing was working the way he thought it was (and was, in fact, supposed) to work.
I don't mean to shit on Khan Academy exactly because it's not like I'm paying for it, but those lessons may as well not exist for a 4 year old with an interface that poor. It was bad enough that more than half my time intervening wasn't to help him with the content, nor to teach him how to use the interface, but to save him from the interface.
This is utterly typical, too. We just get so used to working around bullshit like this, and we're so good at it and usually intuit why it's happening, that we don't notice that it's constant, especially on the web.
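The fix for the specific stuck-button failure described above is old and boring: acknowledge the press instantly, ignore repeats while the action runs, and never leave the control in a dead state. A minimal sketch (TypeScript, generic names, not Khan Academy's actual code):

```typescript
// Wrap a button around an async action so hammering it can't wedge the UI.
function wireButton(button: HTMLButtonElement, action: () => Promise<void>) {
  let busy = false;
  button.addEventListener("click", async () => {
    if (busy) return;             // swallow repeat presses instead of queuing them
    busy = true;
    button.disabled = true;        // instant visual feedback that the press landed
    button.classList.add("pressed");
    try {
      await action();              // e.g. advance to the next exercise screen
    } finally {
      // Whatever happens, never leave the UI stuck in a greyed-out dead end.
      busy = false;
      button.disabled = false;
      button.classList.remove("pressed");
    }
  });
}
```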
I'd love to see a software-industry-wide quality manifesto. The tenets could include things like:
* Measure whether the service you provide is actually working the way your customers expect.
(Not just "did my server send back an http 200 response", not just "did my load balancer send back an http 200", not just "did my UI record that it handled some data", but actually measure: did this thing do what users expect? How many times, when someone tried to get something done with your product, did it work and they got it done? See the sketch after this list for one way to frame that measurement.)
* Sanity-check your metrics.
(At a regular cadence, go listen for user feedback, watch them use your product, listen to them, and see whether you are actually measuring the things that are obviously causing pain for your users.)
* Start measuring whether the thing works before you launch the product.
(The first time you say "OK, this is silently failing for some people, and it's going to take me a week to bolt on instrumentation to figure out how bad it is", should be the last time.)
* Keep a ranked list of the things that are working the least well for customers the most often.
(Doesn't have to be perfect, but just the process of having product & business & engineering people looking at the same ranked list of quality problems, and helping them reason about how bad each one is for customers, goes a long way.)
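As a rough sketch of the first tenet (TypeScript, with invented names and an invented reporting endpoint), "did the user's task actually complete" can be instrumented as a wrapper around the user-visible action rather than inferred from HTTP status codes:

```typescript
// Measure whether the user-visible task completed, not whether a request
// returned 200. Event names and the reporting endpoint are hypothetical.
async function trackTask<T>(name: string, task: () => Promise<T>): Promise<T> {
  const started = performance.now();
  try {
    const result = await task();          // e.g. "upload photo", "post comment"
    report({ name, ok: true, ms: performance.now() - started });
    return result;
  } catch (err) {
    report({ name, ok: false, ms: performance.now() - started });
    throw err;
  }
}

function report(event: { name: string; ok: boolean; ms: number }) {
  // Fire-and-forget; a real system would batch and sample these.
  navigator.sendBeacon("/metrics/task-outcomes", JSON.stringify(event));
}
```

The point is that the success signal is defined at the level of the thing the user was trying to do, so a 200 response feeding a broken UI still counts as a failure.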
You might be interested in the Software Craftsmanship [0] manifesto. There are many communities and initiatives around the world gathering folks with an interest in producing high-quality software. Based on the few folks I have worked with who are involved in SC, I can definitely recommend the movement, and I'm also exploring options for joining some local meet-ups and/or events.
This is also one of my pet peeves. It's easier than ever to collect this data and analyse it. Unfortunately, most of our clients are doing neither, or they are collecting the logs but carefully ignoring them.
I've lost count of the number of monitoring systems I've opened up just to see a wall of red tapering off to orange after scrolling a couple of screens further down.
At times like this I like to point out that "Red is the bad colour". I generally get a wide-eyed uncomprehending look followed by any one of a litany of excuses:
- I thought it was the other team's responsibility
- It's not in my job description
- I just look after the infrastructure
- I just look after the software
- I'm just a manager, I'm not technical
- I'm just a tech, it's management's responsibility
Unfortunately, as a consultant I can't force anyone to do anything, and I'm fairly certain that the reports I write that are peppered with fun phrases such as "catastrophic risk of data corruption", "criminally negligent", etc... are printed out only so that they can be used as a convenient place to scribble some notes before being thrown in the paper recycling bin.
Remember the "HealthCare.gov" fiasco in 2013? [1] Something like 1% of the interested users managed to get through to the site, which cost $200M to develop. I remember Obama got a bunch of top guys from various large IT firms to come help out, and the guy from Google gave an amazing talk a couple of months later about what he found.
The takeaway message for me was that the Google guy's opinion was that the root cause of the failure was simply that: "Nobody was responsible for the overall outcome". That is, the work was siloed, and every group, contractor, or vendor was responsible only for their own individual "stove-pipe". Individually each component was all "green lights", but in aggregate it was terrible.
I see this a lot with over-engineered "n-tier" applications. A hundred brand new servers that are slow as molasses with just ten UAT users, let alone production load. The excuses are unbelievable, and nobody pays attention to the simple unalterable fact that this is TEN SERVERS PER USER and it's STILL SLOW!
People ignore the latency costs of firewalls, as one example. Nobody knows about VMware's "latency sensitivity tuning" option, which is a turbo button for load balancers and service bus VMs. I've seen many environments where ACPI deep-sleep states are left on, and hence 80% of the CPU cores are off and the other 20% are running at 1 GHz! Then they buy more servers, reducing the average load further and simply end up with even more CPU cores powered off permanently.
It would be hilarious if it wasn't your money they were wasting...
His point is basically that there have been times in history where the people who were the creative force behind our technology die off without transferring that knowledge to someone else, and we're left running on inertia for a while before things really start to regress, and there are signs that we may be going through that kind of moment right now.
I can't verify these claims, but it's an interesting thing to think about.
This is an interesting talk, thank you. What frightens me, is that the same process could be happening in other fields, for example, medicine. I really hope we won't forget how to create antibiotics one day.
I have a feeling however that this is in fact not broken but working exactly as intended. Corporate dark pattern just to gently "discourage" problem customers from contacting them.
I feel like the entire implementation of AWS is designed to sell premium support. There is so much missing documentation, and so many arbitrary details you have to know to make it work in general that you almost have to have a way to ask for help in order to make it work.
this usually happens with ad blockers. they somehow mess up a page, and then you get angry customers saying the page doesn't work for them.
we need a solution to this mess. so far i've seen popups (of all things) letting users know they should disable the ad blocking. but that's not a solution. ideally websites should not break when ad blockers are enabled, but i've seen sites where their core product depends on ad blocking being disabled. strange/chaotic times we live in.
"...how often I run into things that are completely broken..."
That's because the shotgun approach (sic 40 developers on a single problem; I don't care how they dole out the workload) works well for most low-stakes, non-safety-critical software.
So, a reactivation portal for your Amazon seller account is very low stakes. But Boeing treating the 737 MAX the same way would be (and was) a very bad idea.
Because that low-stakes approach is extremely bug-prone.
I think it's also a problem with the culture of a lot of software practices. There's a tendency to navel-gaze around topics like TDD and code review to make sure you're doing Software Development(tm) effectively, without a lot of attention to the actual product or user experience. In other words, code quality over product quality.
Another take: rewrites and rehashes tend to be bad because they are not exciting for programmers. Everything you're about to write is predictable, nothing looks clearly better, and it just feels forced. First versions of anything are exciting, the possibilities are endless, and even if the choices along the path are suboptimal, people are willing to make them work right.
He hints at Electron in the end, but I think the real blame lies with React, which has become standard in the past five years.
Nobody has any fucking idea what’s going on in their react projects. I work with incredibly bright people and not a single one can explain accurately what happens when you press a button. On the way to solving UI consistency it actually made it impossible for anyone to reason about what’s happening on the screen, and bugs like the ones shown simply pop up in random places, due to the complete lack of visibility into the system. No, the debug tooling is not enough. I’m really looking forward to whatever next thing becomes popular and replaces this shit show.
>I’m really looking forward to whatever next thing becomes popular and replaces this shit show.
I'm with you, but motivation to really learn a system tanks when there's something else on the horizon. And what happens when new-thing appears really great for the first 1-2 years, but goes downhill and we're back to asking for its replacement only 5 years after its release? That tells me we're still chasing 'new', but instead of a positive 'new', it's a negative one.
This was also reinforced constantly by people claiming you'll be unemployable if you aren't riding the 'new' wave or doing X amount of things in your spare time.
It's a natural consequence of an industry that moves quickly. If we want a more stable bedrock, we MUST slow down.
I completely agree, here. React has replaced the DOM, and it's pretty fast, pretty efficient when you understand its limitations... but when you start rendering to the canvas or creating SVG animation from within react code, everything is utterly destroyed. Performance is 1/1000 of what the platform provides. I have completely stopped using frameworks in my day-to-day, and moved my company to a simple pattern for updatable, optionally stateful DOM elements. Definitely some headaches, some verbosity, and so forth. But zero tool chain and much better performance, and the performance will improve, month-by-month, forever.
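For the curious, a pattern like that might look something like the following sketch (my guess at the shape, in TypeScript; not the parent commenter's actual code): a plain factory returns the element plus an update function, and state lives in a closure rather than in a virtual DOM.

```typescript
// One possible shape for "updatable, optionally stateful DOM elements":
// no framework, no diffing, just direct DOM updates where needed.
interface Updatable<S> {
  el: HTMLElement;
  update: (state: S) => void;
}

function counter(initial: number): Updatable<number> {
  const el = document.createElement("button");
  const update = (n: number) => {
    // Touch the DOM directly and only where needed; no tree re-render.
    el.textContent = `Count: ${n}`;
  };
  let count = initial;
  el.addEventListener("click", () => update(++count));
  update(count);
  return { el, update };
}

// Usage: document.body.append(counter(0).el);
```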
I think my favourite fact(oid) to point out here would be that the React model is essentially the same thing as the good ol' Windows GUI model. The good ol' 1980s Windows, though perhaps slightly more convenient for developers. See [0].
I think it's good to keep that in mind as a reference point.
What is better? jQuery? It comes with its own can of worms, and React's designers had solid reasons to migrate away from immediate DOM modification. In general, UI is hard. Nice features like compositing, variable-width fonts, reflow, etc. come with underlying mechanisms that are pretty complicated, and once something behaves differently from expectations it might be hard to understand why.
This a thousand times. It's amazing how each new layer of abstraction becomes the smallest unit of understanding you can work with. Browser APIs were the foundation for a while, then DOM manipulation libs like jquery, and now full blown view libraries and frameworks like react and angular.
If someone's starting a new website project (one that has the potential to become quite complex), what would you recommend as the best frontend technology to adopt, then?
Flutter is a very good bet IMO. It uses Dart, which was designed from the ground up to be a solid front-end language instead of building on top of JS. The underlying architecture of Flutter is clearly articulated and error messages are informative. It still seems a bit slow and bloated in some respects, but it is getting better every day, and I think their top-down control of the stack is going to let them trim it all the way down.
This doesn't seem unique to React projects. Can anyone explain what is happening under the hood in their Angular projects? How about Vue? It seems to be a failing of all major UI frameworks, lots of complexity is abstracted away.
Time is money and engineers aren't given time to properly finish developing software before releases.
Add to this the modern way of being able to hotfix or update features and you will set an even lower bar for working software.
The reason an iPod didn't release with a broken music player is that back then forcing users to just update their app/OS was too big an ask. You shipped complete products.
Now a company like Apple even prides itself by releasing phone hardware with missing software features: Deep Fusion released months after the newest iPhone was released.
Software delivery became faster and it is being abused. It is not only being used to ship fixes and complete new features, but it is being used to ship incomplete software that will be fixed later.
As a final sidenote while I'm whining about Apple: as a consultant in the devops field with an emphasis on CI/CD, the relative difficulty of using macOS in a CI/CD pipeline makes me believe that Apple has a terrible time testing its software. This is pure speculation based on my experience. A pure Apple shop has probably solved many of the problems and hiccups we might run into, but that's why I used the term "relative difficulty".
Yet somehow, it seems to me that most software - including all the "innovative" hot companies - are mostly rewriting what came before, just in a different tech stack. So how come nobody wants to rewrite the prior art to be faster than it was before?
Rewrites can be really amazing if you incentivize them that way. It's really important to have a solid reason for doing a rewrite, though. But if there are good reasons, the problem of zero (or < x) downtime migrations is an opportunity to do some solid engineering work.
Anecdotally, a lot of rewrites happen for the wrong reasons, usually NIH or churn. The key to a good rewrite is understanding the current system really well; without that it's very hard to work with it, let alone replace it.
He seems to make a contradictory point... he complains:
> iOS 11 dropped support for 32-bit apps. That means if the developer isn’t around at the time of the iOS 11 release or isn’t willing to go back and update a once-perfectly-fine app, chances are you won’t be seeing their app ever again.
but then he also says:
> To have a healthy ecosystem you need to go back and revisit. You need to occasionally throw stuff away and replace it with better stuff.
So which is it? If you want to replace stuff with something better, that means the old stuff won't work anymore... or, it will work by placing a translation/emulation layer around it, which he describes as:
> We put virtual machines inside Linux, and then we put Docker inside virtual machines, simply because nobody was able to clean up the mess that most programs, languages and their environment produce. We cover shit with blankets just not to deal with it.
And yet at the time of its release, iOS 11 was the most buggy version in recent memory. (This record has since been beaten by iOS 13.)
I don't quite know what's going on inside Apple, but it doesn't feel like they're choosing which features to remove in a particularly thoughtful way.
---
Twenty years ago, Apple's flagship platform was called Mac OS (Mac OS ≠ macOS), and it sucked beyond repair. So Apple shifted to a completely different platform, which they dubbed Mac OS X. A slow and clunky virtualization layer was added for running "classic" Mac OS software, but it was built to be temporary, not a normal means of operation.
For anyone invested in the Mac OS platform at the time, this must have really sucked. But what's important is that Apple made the transition once! They realized that a clean break was essential, and they did it, and we've been on OS X ever since. There's a 16-year-old OS X app called Audio Slicer which I still use regularly in High Sierra. It would break if I updated to Catalina, but, therein lies my problem with today's Apple.
If you really need to make a clean break, fine, go ahead! It will be painful, but we'd best get it over with.
But that shouldn't happen more than once every couple decades, and even less as we get collectively more experienced at writing software.
I think that's not quite the point of the article. The idea is, in my reading, that we've built lazily on castles of sand for so long that sometimes we think it makes sense to throw away things we shouldn't, and other times we obsessively wrap/rewrap/paper over things we should throw away. What falls into each category is obviously debatable, but the author seems to be critiquing the methodology we use to make those decisions--debatable or not, people aren't debating it so much as they're taking the shortest and often laziest path without prioritizing the right things (efficiency, consistency).
Even with our priorities in order, there will still be contentious, hard choices (to deprecate so-and-so or not; to sacrifice a capability for consistency of interface or not), but the author's point is that our priorities are not in order in the first place, so the decisions we make end up being arbitrary at best, and harmful/driven by bad motivations at worst.
The goal is that you throw out things that aren't useful (cost > benefit, or better replacement available and easily usable), not that you have a periodic "throw out everything written before X".
See also: in Good times create weak men [0], the author explains his interpretation as to why. I can't summarize it well. It's centered around a Jonathan Blow talk [1] Preventing the collapse of civilization.
I watched that talk a while ago. It is great, and it did change my opinion on a few things. Whether you agree with the premise or not, you can still learn something. For me, the importance of sharing knowledge within a team to prevent "knowledge rot". "Generations" in a team are much more rapid than the general population/civilisation, so that effect is magnified IMO.
I think the analogy here is backwards. The better question is "how much would you prioritize a car that used only 0.05 liters per 100km over one that used 0.5? What about one that used only 0.005L?". I'd say that at that point, other factors like comfort, performance, base price, etc. become (relatively) much more important.
If basic computer operations like loading a webpage took minutes rather than seconds, I think there would be more general interest in improving performance. For now though, most users are happy-enough with the performance of most software, and other factors like aesthetics, ease-of-use, etc. are the main differentiators (admittedly feature bloat, ads, tracking, etc. are also a problem, but I think they're mostly orthogonal to under-the-hood performance).
These days, I think most users will lose more time and be more frustrated by poor UI design, accidental inputs, etc. than any performance characteristics of the software they use. Hence the complexity/performance overhead of using technologies that allow software to be easily iterated and expanded are justified, to my mind (though we should be mindful of technology that claims to improve our agility but really only adds complexity).
I'll prioritize the 0.005L per 100km car for sure. That means the car can be driven for all its expected lifetime (500k km) in a single tank of gas, filled up at the time of purchase! That means there is a huge opportunity to further optimize for many things in the system:
- The car no longer needs to have a hole on the side for filling up. A lot of pipes can be removed. Gas tank can be moved to a safer/closer location where it is used.
- The dashboard doesn't need a dedicated slot for showing the fuel gauge, more wirings and mechanical parts removed.
- No needs for huge exhaust and cooling systems, since the wasted energy is significantly reduce. No more pump, less vehicle weights...
Of course, that 0.005L car won't come earlier than a good electric car. However, if it's there, I'd totally prioritize it higher than other things you listed. I think people tend to underestimate how small efficiency improvements add up and enable exponential values to the system as a whole.
I'm struggling to think of recent (or not-so-recent) software improvements that have had a similar impact though. It seems like many of the "big" algorithms and optimization techniques that underpin modern applications have been around for a long time, and there aren't a lot of solutions that are "just about" ready to make the jump from supercomputers to servers, servers to desktops, or desktops to mobile. I guess machine learning is a probably contender in this space, but I imagine that's still an active area of optimization and probably not what the author of the article had in mind. I'd love if someone could provide an example of recent consumer software that is only possible due to careful software optimization.
It's a nice idea but it wouldn't work. The gasoline would go bad before you could use it all.
Plug-in hybrids already have this problem. Their fuel management systems try to keep the average age of the fuel in the tank under 1 year. The Chevy Volt has a fuel maintenance mode that runs every 6 weeks:
https://www.nytimes.com/2014/05/11/automobiles/owners-who-ar...
https://www.autoblog.com/2011/03/18/chevy-volts-sealed-gas-t...
Instead of having a "lifetime tank", a car that uses 0.005L per 100km would be better off with a tiny tank. And then instead of buying fuel at a fuel station you'd buy it in a bottle at the supermarket along with your orange juice.
A UI where each interaction takes several seconds is poor UI design. I do lose most of my time and patience to poor UI design, including needless "improvements" every few iterations that break my workflow and have me relearn the UI.
I find the general state of interaction with the software I use on a daily basis to be piss poor, and over the last 20 or so years I have at best seen zero improvement on average, though if I was less charitable I'd say it has only gone downhill. Applications around the turn of the century were generally responsive, as far as I can remember.
I’m willing to bet that a significant percentage of my accidental inputs are due to UI latency.
I want all interactions with all of my computing devices to occur in as close to 0ms as possible. 0ms is great; 20ms is good; 200ms is bad; 500ms is absolutely inexcusable unless you're doing significant computation. I find it astonishing how many things will run in the 200-500ms range for utterly trivial operations such as just navigating between UI elements. And no, animation is not an acceptable illusion to hide slowness.
I am with the OP. "Good enough" is a bane on our discipline.
The usage story requires you to switch to turn-by-turn, and there’s no way to have bird eye map following your location along route (unless you just choose some zoom level and manually recenter every so often.)
It’s awful, distracting and frankly a waste of time... just to show a bit of animation every time I accidentally fail to register a drag...
Damn Ui
I was forced to use a monitor at 30 fps for a few days due to a bad display setup. It made me realize how important 60 fps is. Even worse, try using an OS running in a VM for an extended period of time...
There are plenty of things that are 'good enough', but once users get used to something better they will never go back (if they have the choice, at least).
- Opening multiple tabs in a browser will kill your battery, and it's not the fault of a single page, but of all of them. Developers tend to blame the end user for opening too many tabs.
- Running a single Electron app is fast enough in a newer machine but if you need multiple instances or multiple apps you're fucked.
- Some of my teammates can't use their laptops without the charger because they have to run 20+ docker containers just to have our main website load. The machines are also noisy because the fan is always on.
- Having complex build pipelines that take minutes or hours to run is something that slows dow developers, which are expensive. It's not the fault of a single software (except maybe of the chosen programming language), but of multiple inefficient libraries and packages.
I actually do this for development and it works really well.
Ubuntu Linux VM in VMware Fusion on a Macbook Pro with MacOS.
Power consumption was found to be better than running Linux natively. (I'm guessing something about switching between the two GPUs, but who knows.)
GPU acceleration works fine; the Linux desktop animations, window fading and movement animations etc are just as I'd expect.
Performance seems to be fine generally, and I do care about performance.
(But I don't measure graphics performance, perhaps that's not as good as native. And when doing I/O intensive work, that's on servers.)
Being able to do a four-finger swipe on the trackpad to switch between MacOS desktops and Linux desktops (full screen) is really nice. It feels as if the two OSes are running side by side, rather than one inside another.
I've been doing Linux-in-a-VM for about 6 years, and wouldn't switch back to native on my laptop if I had a choice. The side-by-side illusion is too good.
Before that I ran various Linux desktops (or Linux consoles :-) for about 20 years natively on all my development machines and all my personal laptops, so it's not like don't know what that's like. In general, I notice more graphics driver bugs in the native version...
(The one thing that stands out as buggy is VMware's host-to-guest file sharing is extremely buggy, to the point of corrupting files, even crashing Git. MacOS's own SMB client is also atrocious in numerous ways, to the point of even deleting random files, but does it less often so you don't notice until later what's gone. I've had to work hard to find good workarounds to have reliable files! I mention this as a warning to anyone thinking of trying the same setup.)
Optimizing for microseconds when bad UI steals seconds is being penny-wise and pound foolish. Business might not understand tech but they do generally understand how it ends up on the balance sheet.
I do that for most of my hobbyist Linux dev work. It's fine. It can do 4k and everything. It's surely not optimal but it's better than managing dual boot.
At my current place of employment we have plenty of average requests hitting 5-10 seconds and longer: you've got N+1 queries against the network rather than the DB. As long as it's within 15 or 30 seconds nobody cares; they probably blame their 4G signal for it (especially in the UK, where our mobile infrastructure is notoriously spotty, and entirely absent even in the middle of London). But since I work on those systems, I'm upset and disappointed to be working on APIs that can take tens of seconds to respond.
The analogy is also not great because MPG is an established metric for fuel efficiency in cars. The higher the MPG the better.
I never liked this view. I can't think of a single legitimate use case that couldn't be solved better by some means other than hiding your true capabilities and thus wasting people's time.
> they probably blame their 4G signal for it
Sad thing is, enough companies thinking like this and the incentive to improve on 4G itself evaporates, because "almost nothing can work fast enough to make use of these optimizations anyway".
I see this argument coming up a lot, but this can be solved by better UX. Making things slow on purpose is just designers/developers being lazy.
Btw users feeling uneasy when something is "too fast" is an indictment of everything else being too damn slow. :D
IMO it can be attributed more to bad UI than optimizations.
I use the webpages for most social networking platforms such as Facebook. I am left-handed and scroll with my left thumb (left half of the screen). I have accidentally ‘liked’ people's posts and sent accidental friend requests for this reason alone.
I'm guessing that, along with language selection, it might be helpful to have a hand-preference setting for mobile browsing.
I think for webpages it is the opposite: non-orthogonal in most cases.
If you disable your JS/Ad/...-blocker, and go to pages like Reddit, it is definitely slower and the CPU spikes. Even with a blocker, the page still does a thousand things in the first-party scripts (like tracking mouse movements and such) that slow everything down a lot.
That's not really true. Slack could be just as pretty and a fraction of the weight, if they hadn't used Electron.
One, phone OSes are being designed for single-tasked use. Outside of alarms and notifications in the background (which tend to be routed through a common service), the user can see just one app at a time, and mobile OSes actively restrict background activity of other apps. So every application can get away with the assumption that it's the sole owner of the phone's resources.
Two, given the above, the most noticeable problem is now power usage. As Moore's law has all but evaporated for single-threaded performance, hardware is now being upgraded for multicore and (important here) power performance. So apps can get away with poor engineering, because every new generation of smartphones has a more power-efficient CPU, so the lifetime on single charge doesn't degrade.
Moreover, the same cost equation that produces software much less efficient than it could be also produces software that might be (barely) usable for its purpose but is much more ugly, confusing, and buggy than it needs to be.
That equation is: add the needed features, sell the software first, get lock-in, milk it till it dies, and move on. That equation is locally cost-efficient. Locally, that wins, and that produces the world we see every day.
Maybe the lack of craftsmanship, the lack of doing one's activity well, is simply inevitable. Or maybe the race to the bottom is going to kill us; see the Boeing 737 MAX as food for thought (not that software as such was to blame there, but the quality issue was).
Wait, are you implying they don't? What world do you live in, and how do I join?
It does fill some other requirements that a regular car doesn't.
or... something.
The car analogy does remind me of one I read a while ago, comparing cars and their cost and performance with CPUs.
>And build times? Nobody thinks compiler that works minutes or even hours is a problem. What happened to “programmer’s time is more important”? Almost all compilers, pre- and post-processors add significant, sometimes disastrous time tax to your build without providing proportionally substantial benefits.
Anecdotally, in my career I've never had to compile something myself that took longer than a few minutes (but maybe if you work on the Linux kernel or some other big project, you have; or maybe I've just been lucky to mainly use toolchains that avoid the pitfalls here). I would definitely consider it a problem if my compiler runs regularly took O(10mins), and would probably consider looking for optimizations or alternatives at that point. I've also benefited immensely from a lot of the analysis tools that are built into the toolchains that I use, and I have no doubt that most or all of them have saved me more pain than they've caused me.
1/ What didn't seem to get mentioned was speed to market. It's far worse to build the "right" thing that no one wants than to build the crappy thing that some people want a lot. As a result, it makes sense for people to leverage Electron, but it has consequences for users down the line.
2/ Because we deal with orders of magnitude with software, it's not actually a good ROI to deal with things that are under 1x improvement on a human scale. So what made sense to optimize when computers were 300MHz doesn't make sense at all when computers are 1GHz, given a limited time and budget.
3/ Anecdotally (and others can nix or verify), what I hear from ex-Googlers is that no one gets credit for maintaining the existing software or trying to make it faster. The only way you get promoted is if you created a new project. So that's what people end up doing, and you get 4 or 5 versions of the same project that do the same thing, all not very well.
I agree that the suckage is a problem. But I think it's the structure of incentives in the environment in which software is written that also needs to be addressed, not just the technical deficiencies of how we practice writing software, like how to maintain state.
It's interesting Chris Granger submitted this. I can see that the gears have been turning for him on this topic again.
I find it really interesting that no one in the future of programming/coding community has been able to really articulate or demonstrate what an "ideal" version of software engineering would be like. What would the perfect project look like both socially and technically? What would I gain and what would I give up to have that? Can you demonstrate it beyond the handpicked examples you'll start with? We definitely didn't get there.
It's much harder to create a clear narrative around the social aspects of engineering, but it's not impossible - we weren't talking about agile 20 years ago. The question is can we come up with a complete system that resonates enough with people to actually push behavior change through? Solving that is very different than building the next great language or framework. It requires starting a movement and capturing a belief that the community has in some actionable form.
I've been thinking a lot about all of this since we closed down Eve. I've also been working on a few things. :)
And by "social and inventive structures", I'm assuming you're talking about change on the order of how open source software or agile development changed how we develop software?
While agile did address how to do software in an environment for changing requirements and limited time, we don't currently have anything that addresses an attention to speed of software, building solid foundations, and incentives to maintain software.
What would a complete system encompass that's currently missing in your mind?
Because it's never really the problems themselves; it's just perceived that way.
A certain challenge needs a specific set of personalities to solve it. That's the real puzzle.
Great engineers will never be able to solve things properly unless given the chance by those who control the surroundings.
We ask how we should develop and which method should be used: is it agile or is it lean? But maybe the problem starts earlier: by focusing on exactly which methods and tools to use, we miss the simplest solutions that even beginners can see.
For example, I am an architect; I tend not to touch the economics of a project. That is better suited to other people.
While I haven't read much about team-based development, I would like to be pointed to well-regarded literature on it. Maybe it's better called social programming, just another label for what we really do.
The person I miss the most at work is my wife. She is clearly the best counterpart to me and makes me perform 1000x better. I find that very funny, since she does not care about IT at all.
There are ways to develop working software, but not if it's all locked behind closed OSes and other bullshit.
Writing performant, clean, pure software is super appealing as a developer, so why don't I do something about the bloated software I write? I think a big part of it is it's hard to see the direct benefit from the very large amount of effort I'll have to put in.
Sure, I can write the one thing from that one library that I use myself instead of pulling in the whole library. It might be faster, I might end up with a smaller binary, and it might be more deterministic because I know exactly what it's doing. But it'll take a long time, it might have a lot of bugs, and forget about maintaining it. At the end of the day, do the people who use my software care that I put in the effort to do this? They probably won't even notice.
To me, where libraries get a bit more questionable is when they exist in the realm of pure abstraction, or when they try to own the flow of control or provide the structure around which your program should hang. For instance, with something like Ruby on Rails, it sometimes feels like you are trying to undo what the framework has assumed you need so that you can get the functionality you want. A good library should be something you build on top of, not something you carve your implementation out of.
> While I do share the general sentiment, I do feel the need to point out that this exact page, a blog entry consisting mostly of just text, is also half the size of Windows 95 on my computer and includes 6MB of javascript, which is more code than there was in Linux 1.0. Linux at that point already contained drivers for various network interface controllers, hard drives, tape drives, disk drives, audio devices, user input devices and serial devices, 5 or 6 different filesystems, implementations of TCP, UDP, ICMP, IP, ARP, Ethernet and Unix Domain Sockets, a full software implementation of IEEE754 a MIDI sequencer/synthesizer and lots of other things.
>If you want to call people out, start with yourself. The web does not have to be like this, and in fact it is possible in 2018 to even have a website that does not include Google Analytics.
https://www.reddit.com/r/programming/comments/9go8ul/comment...
> Today’s egregiously bloated site becomes tomorrow’s typical page, and next year’s elegantly slim design.
[1] https://idlewords.com/talks/website_obesity.htm
This is Twitter, not some random framework!
Edit: found a link with the same story: https://www.folklore.org/StoryView.py?story=Saving_Lives.txt
The software world needs more of this kind of thinking. Not more arguments like "programmer's time is worth less than CPU time", which often fail to account for all externalities.
In my country, software engineering is one of the best careers in terms of income, and I bet it is similar in most other countries. Why do we deserve that much buzz/fame/respect/income if the work we are doing is NOT making society better?
These thoughts just haunt me from time to time.
As it is, software can largely free ride on consumer resources.
Meh, this is manager-speak for "saving human lives", which they definitely were not doing. They weren't saving anybody. I mean, there's an argument that, in the modern day of 2020, time away from the computer is better spent than time on one; so a faster boot time is actually worse than a slower one. A faster boot time means less time with the family.
Good managers, like Steve Jobs was, are really good at motivating people using false narratives.
As I write this, I've been trying to get my Amazon seller account reactivated for more than a year, because their reactivation process is just... broken. Clicking any of the buttons, including the ones to contact customer support, just takes you back to the same page. Attempts to tell someone usually put you in touch with a customer service agent halfway across the world who has no clue what you're talking about and doesn't care; even if they did care, they'd have no way to forward your message along to the team that might be able to spend the 20 minutes it might take to fix the issue.
The "barely working" thing is even more common. I feel like we've gotten used to everything just being so barely functional that it isn't even a disadvantage for companies anymore. We usually don't have much of an alternative place to take our business.
I don't mean to shit on Khan Academy exactly because it's not like I'm paying for it, but those lessons may as well not exist for a 4 year old with an interface that poor. It was bad enough that more than half my time intervening wasn't to help him with the content, nor to teach him how to use the interface, but to save him from the interface.
This is utterly typical, too. We just get so used to working around bullshit like this, and we're so good at it and usually intuit why it's happening, that we don't notice that it's constant, especially on the web.
* Measure whether the service you provide is actually working the way your customers expect.
(Not just "did my server send back an http 200 response", not just "did my load balancer send back an http 200", not just "did my UI record that it handled some data", but actually measure: did this thing do what users expect? How many times, when someone tried to get something done with your product, did it work and they got it done?)
* Sanity-check your metrics.
(At a regular cadence, go listen for user feedback, watch them use your product, listen to them, and see whether you are actually measuring the things that are obviously causing pain for your users.)
* Start measuring whether the thing works before you launch the product.
(The first time you say "OK, this is silently failing for some people, and it's going to take me a week to bolt on instrumentation to figure out how bad it is", should be the last time.)
* Keep a ranked list of the things that are working the least well for customers the most often.
(Doesn't have to be perfect, but just the process of having product & business & engineering people looking at the same ranked list of quality problems, and helping them reason about how bad each one is for customers, goes a long way.)
[0] http://manifesto.softwarecraftsmanship.org/
I've lost count of the number of monitoring systems I've opened up just to see a wall of red tapering off to orange after scrolling a couple of screens further down.
At times like this I like to point out that "Red is the bad colour". I generally get a wide-eyed uncomprehending look followed by any one of a litany of excuses:
- I thought it was the other team's responsibility
- It's not in my job description
- I just look after the infrastructure
- I just look after the software
- I'm just a manager, I'm not technical
- I'm just a tech, it's management's responsibility
Unfortunately, as a consultant I can't force anyone to do anything, and I'm fairly certain that the reports I write that are peppered with fun phrases such as "catastrophic risk of data corruption", "criminally negligent", etc... are printed out only so that they can be used as a convenient place to scribble some notes before being thrown in the paper recycling bin.
Remember the "HealthCare.gov" fiasco in 2013? [1] Something like 1% of the interested users managed to get through to the site, which cost $200M to develop. I remember the Obama got a bunch of top guys from various large IT firms to come help out, and the guy from Google had an amazing talk a couple of months later about what he found.
The takeaway message for me was that the Google guy's opinion was that the root cause of the failure was simply that: "Nobody was responsible for the overall outcome". That is, the work was siloed, and every group, contractor, or vendor was responsible only for their own individual "stove-pipe". Individually each component was all "green lights", but in aggregate it was terrible.
I see this a lot with over-engineered "n-tier" applications. A hundred brand new servers that are slow as molasses with just ten UAT users, let alone production load. The excuses are unbelievable, and nobody pays attention to the simple unalterable fact that this is TEN SERVERS PER USER and it's STILL SLOW!
People ignore the latency costs of firewalls, as one example. Nobody knows about VMware's "latency sensitivity tuning" option, which is a turbo button for load balancers and service bus VMs. I've seen many environments where ACPI deep-sleep states are left on, and hence 80% of the CPU cores are off and the other 20% are running at 1 GHz! Then they buy more servers, reducing the average load further and simply end up with even more CPU cores powered off permanently.
It would be hilarious if it weren't your money they were wasting...
[1] https://en.wikipedia.org/wiki/HealthCare.gov#Issues_during_l...
https://www.youtube.com/watch?v=pW-SOdj4Kkk
His point is basically that there have been times in history where the people who were the creative force behind our technology die off without transferring that knowledge to someone else, and we're left running on inertia for a while before things really start to regress, and there are signs that we may be going through that kind of moment right now.
I can't verify these claims, but it's an interesting thing to think about.
We need a solution to this mess. So far I've seen popups (of all things) letting users know they should disable their ad blocker, but that's not a solution. Ideally websites should not break when ad blockers are enabled, but I've seen sites whose core product depends on ad blocking being disabled. Strange/chaotic times we live in.
That's because the shotgun approach (sic 40 developers on a single problem, I don't care how they dole out the workload) works well for most low-stakes, non-safety-critical software.
So a reactivation portal for your Amazon seller account is very low stakes. But Boeing treating the 737 MAX the same way would be (and was) a very bad idea.
Because that low-stakes approach is extremely bug prone.
https://tonsky.me/blog/good-times-weak-men/
Another take: rewrites and rehashes tend to be bad because they are not exciting for programmers. Everything you're about to write is predictable, nothing looks clearly better, and it all feels forced. First versions of anything are exciting; the possibilities are endless, and even if the choices along the path are suboptimal, people are willing to make them work right.
Nobody has any fucking idea what’s going on in their react projects. I work with incredibly bright people and not a single one can explain accurately what happens when you press a button. On the way to solving UI consistency it actually made it impossible for anyone to reason about what’s happening on the screen, and bugs like the ones shown simply pop up in random places, due to the complete lack of visibility into the system. No, the debug tooling is not enough. I’m really looking forward to whatever next thing becomes popular and replaces this shit show.
I'm with you, but motivation to really learn a system tanks when there's something else on the horizon. And what happens when new-thing appears really great for the first 1-2 years, but goes downhill and we're back to asking for its replacement only 5 years after its release? That tells me we're still chasing 'new', but instead of a positive 'new', it's a negative one.
This was also reinforced constantly by people claiming you'll be unemployable if you aren't riding the 'new' wave or doing X amount of things in your spare time.
It's a natural consequence of an industry that moves quickly. If we want a more stable bedrock, we MUST slow down.
I think it's good to keep that in mind as a reference point.
--
https://www.bitquabit.com/post/the-more-things-change/
I wrote a little bit more about my thoughts on the problem here: https://blog.usejournal.com/you-probably-shouldt-be-using-re...
> Nobody
Speak for yourself
Add to this the modern way of being able to hotfix or update features and you will set an even lower bar for working software.
The reason an iPod didn't release with a broken music player is that back then forcing users to just update their app/OS was too big an ask. You shipped complete products.
Now a company like Apple even prides itself on releasing phone hardware with missing software features: Deep Fusion shipped months after the newest iPhone was released.
Software delivery became faster and it is being abused. It is not only being used to ship fixes and complete new features, but it is being used to ship incomplete software that will be fixed later.
As a final side note while I'm whining about Apple: as a consultant in the DevOps field with an emphasis on CI/CD, the relative difficulty of using macOS in a CI/CD pipeline makes me believe that Apple has a terrible time testing its software. This is pure speculation based on my own experience. A pure Apple shop has probably solved many of the problems and hiccups we might run into, but that's why I said "relative difficulty".
Anecdotally, a lot of rewrites happen for the wrong reasons, usually NIH or churn. The key to a good rewrite is understanding the current system really well; without that, it's very hard to work with it, let alone replace it.
> iOS 11 dropped support for 32-bit apps. That means if the developer isn’t around at the time of the iOS 11 release or isn’t willing to go back and update a once-perfectly-fine app, chances are you won’t be seeing their app ever again.
but then he also says:
> To have a healthy ecosystem you need to go back and revisit. You need to occasionally throw stuff away and replace it with better stuff.
So which is it? If you want to replace stuff with something better, that means the old stuff won't work anymore... or, it will work by placing a translation/emulation layer around it, which he describes as:
> We put virtual machines inside Linux, and then we put Docker inside virtual machines, simply because nobody was able to clean up the mess that most programs, languages and their environment produce. We cover shit with blankets just not to deal with it.
Seems like he wants it both ways.
I don't quite know what's going on inside Apple, but it doesn't feel like they're choosing which features to remove in a particularly thoughtful way.
---
Twenty years ago, Apple's flagship platform was called Mac OS (Mac OS ≠ macOS), and it sucked beyond repair. So Apple shifted to a completely different platform, which they dubbed Mac OS X. A slow and clunky virtualization layer was added for running "classic" Mac OS software, but it was built to be temporary, not a normal means of operation.
For anyone invested in the Mac OS platform at the time, this must have really sucked. But what's important is that Apple made the transition once! They realized that a clean break was essential, and they did it, and we've been on OS X ever since. There's a 16-year-old OS X app called Audio Slicer which I still use regularly in High Sierra. It would break if I updated to Catalina, but, therein lies my problem with today's Apple.
If you really need to make a clean break, fine, go ahead! It will be painful, but we'd best get it over with.
But that shouldn't happen more than once every couple decades, and even less as we get collectively more experienced at writing software.
Even with our priorities in order, there will still be contentious, hard choices (to deprecate so-and-so or not; to sacrifice a capability for consistency of interface or not), but the author's point is that our priorities are not in order in the first place, so the decisions we make end up being arbitrary at best, and harmful/driven by bad motivations at worst.
Otherwise it's a tradeoff if you add constraints like cost, effort, time to market, and so on...
[0] https://tonsky.me/blog/good-times-weak-men/
[1] https://www.youtube.com/watch?v=pW-SOdj4Kkk