Apreche · 18 days ago
Because it’s not a consideration on the bottom line.

If someone comes to your company and says they want to give them money to buy an advertisement, nobody in power says “no thanks, that will make our website slow.” If someone in marketing says “put this tracking garbage on our site” nobody says “no can do, too slow.” If the designers, or executives looking at the design, are enamored with something really flashy looking nobody says “no, that will make the website slow.”

The engineers likely do complain it will make the website slow. I have been that engineer. But they are never in a position of power to overrule other parts of the company. This is especially true if it’s not a tech company. Web performance does not show up on the earnings report.

graemep · 18 days ago
> Because it’s not a consideration on the bottom line.

I would say (maybe this is what you mean by "consideration on the bottom line"?) that it has an impact on the bottom line, but this is not obvious and not understood by the people in charge.

mlinhares · 18 days ago
It's always the same reason: the business just doesn't hire people qualified to do the job.

If in 2025 you're not a content farm, your business is to get people to buy stuff from you, and you don't have a team tracking every millisecond change in your p99 latency and page load speed across multiple devices, you're just incompetent.

lesuorac · 18 days ago
It seems fairly trackable though? Like money spent per second of loading time?

You do run into a weird problem where, as the site gets faster at the p99, the median speed can get worse: people who were originally avoiding the site over speed start to use it more often, so you get a worse p99 population than before and the old p99 creeps down toward p50. But you also have more users, so that's nice.
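It is fairly trackable, at least as a back-of-envelope model. A minimal sketch of "money per second of loading time", where every constant is a made-up placeholder rather than a measured figure:

```javascript
// Back-of-envelope model: each extra second of load time costs some
// fraction of conversions. All numbers here are illustrative placeholders;
// measure your own funnel rather than trusting a generic constant.
function revenueLossPerYear({ annualRevenue, extraSeconds, dropPerSecond }) {
  // dropPerSecond: fraction of conversions lost per extra second of load time
  const lostFraction = 1 - Math.pow(1 - dropPerSecond, extraSeconds);
  return annualRevenue * lostFraction;
}

// Example: a $50M/yr site carrying 2s of avoidable load time, at a
// (hypothetical) 7% conversion drop per second:
const loss = revenueLossPerYear({
  annualRevenue: 50_000_000,
  extraSeconds: 2,
  dropPerSecond: 0.07,
});
console.log(Math.round(loss)); // ≈ $6.76M/yr under these made-up numbers
```

Plugging in measured numbers from your own analytics is the whole exercise; generic "X% per second" constants are at best a starting hypothesis.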

tossandthrow · 18 days ago
If that is the case, it should be almost trivial to write up a paper that quantifies the cost of poor performance, and executives would love to read it.
danielvaughn · 18 days ago
Yep.

Another issue is that you often simply aren’t given the time to make it performant. Deadlines are heavily accelerated for web products. You’re barely allotted time to fix bugs, never mind enhancements.

Most web devs don’t want to make slow sites; they’re just not given the opportunity to do otherwise.

mwcz · 18 days ago
They aren't given the opportunity to, that's true. We've also been in an industry-wide performance drought for so many years that many devs don't even realize how fast websites can be.
sgarland · 18 days ago
TFA mentions this [0] delightful series of articles, in which it’s calculated that every KB of JS sent to clients costs Kroger $100K/yr.

[0]: https://dev.to/tigt/making-the-worlds-fastest-website-and-ot...

donatj · 18 days ago
As for the tracking code from on high, holy hell you are right. We got bought by a big company and suddenly we've got giant support panels in the bottom left and JS loading from random domains. We have no way to keep our content security policy up to date with the changing domains, because they're out of our hands.
game_the0ry · 18 days ago
> Because it’s not a consideration on the bottom line.

True if you work at non-technical company, like a bank.

codingdave · 18 days ago
There is a better approach, as an engineer, to get this type of point across. Don't just reject their solution... offer a better one. If they come to you saying they want tracking on a web site, ask what goal they are trying to achieve. Ask them what costs they are paying for the service they want you to implement. And then see if you can design a server-based system that gives them the info they want, and write up a proposal for it that includes the downsides and long-term hidden costs of their solutions. Whatever they are asking you, follow that pattern - treat them like a customer (which they are), determine their needs, determine their budget, and propose solutions that give a full comparison of the options.

Worst case scenario, they say no. But often you'll at least open a dialogue and get involved in the decision making. You might even get your solution implemented. And you are definitely more likely to be consulted on future decisions (as long as you are professional and polite during the discussions).
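One illustrative shape of the "server-based system" proposed above: page-view counts derived from the access logs the server already writes, instead of shipping a third-party tracking script to every visitor. The log format and function names here are hypothetical, not any real analytics product's API:

```javascript
// Tally page views from common-log-style access log lines, server-side.
// Zero bytes of JS are added to the client payload.
function countPageViews(logLines) {
  const views = new Map();
  for (const line of logLines) {
    // e.g. '203.0.113.5 - - [ts] "GET /products HTTP/1.1" 200 512'
    const match = line.match(/"GET (\S+) HTTP/);
    if (!match) continue; // ignore non-GET traffic for this simple count
    const path = match[1];
    views.set(path, (views.get(path) || 0) + 1);
  }
  return views;
}
```

It obviously answers fewer questions than a full client-side tracker, which is exactly the comparison to put in the proposal: what the stakeholder actually needs versus what the vendor script costs.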

Apreche · 17 days ago
Sure. But they want it tomorrow.

When you offer to build a real solution, that sure is a lot of work, time, and expense to give them something they could have instantly. A tough sell. You're also volunteering yourself for a lot of work on top of the responsibilities you already have.

sarchertech · 18 days ago
This is 100% the correct way to do things. The tactic of never saying no but proposing better alternatives is the best way to guide stakeholders into making better technical decisions.

However, it requires a lot more mental energy (and can be riskier) than just doing the exact dumb thing the Jira ticket asks for, or just saying “this is bad” (and then doing the dumb thing anyway because there’s a deadline).

Because of that, most people don’t do it, and even good engineers won’t have the energy to do it all the time.

This is a huge part of why big companies can’t produce high quality, high performance software consistently.

fridder · 18 days ago
It used to matter a bit more, or at least the initial page load speed did.
koakuma-chan · 18 days ago
What a dystopian world we live in.
moomin · 18 days ago
Some anecdata for you: I used to work for a price comparison website. We had pretty good metrics on how long pages took to load and what the drop-off from page to page of the process was. It will shock you not in the least that milliseconds translates into percentages lost pretty quickly. Speed up your sign up process and that is money in the bank.
jerf · 18 days ago
It may seem absurd that the app is costing Kroger that much, but my family's experience backs it up.

In the post-COVID era, my wife has become quite accustomed to digital shopping. We actually live closer to a Meijer, which is basically the Midwest's answer to Walmart, except it's decades older. (You may be able to thank Meijer for Super Walmarts; it's Meijer that proved out the concept of attaching a grocery store to a general superstore for Walmart, and it gave Walmart some difficulty penetrating in to the Midwest so they had to add it to compete.) Of course COVID caused a big app rush and at first everybody's app was pretty crappy, so we just stuck with the closest one.

Over time, Meijer's app slowed down pretty badly, so my wife ended up switching to Kroger. I saw a lot of Kroger bags. One of the biggest problems with the Meijer app was that trying to add a second of any item was a synchronous round trip to a rather busy and slow server, so goodness help you if you wanted, say, 6 bananas. Going from 1 to 6 could literally take 30 seconds on the worst days. And that was just the worst issue; the whole app was generally slow and prone to failure.

But somewhere around two years ago, clearly someone at Meijer got the performance religion and cleaned up their app and website. I still wouldn't call it blazing fast, but I would call it acceptable by modern standards, and it blew away the Kroger app of the time... again, not because it was pushing 120fps with super low latency, but just because it was fairly reasonable to use. Adding five more bananas is now just tapping the button five times, and while I can still kind of see the async requests chasing each other a bit, it pretty much always ends up converging on the correct number in a couple of seconds. So my wife switched back.
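The before/after described here maps to a well-known pattern: update the displayed quantity immediately and coalesce rapid taps into a single request, instead of one synchronous round trip per tap. A minimal sketch, where `sendQuantity` is a hypothetical stand-in for the real cart API call (none of these names are Meijer's):

```javascript
// Debounced, optimistic quantity updates: the UI reflects each tap
// instantly; only the final quantity is sent after taps stop arriving.
function makeQuantityUpdater(sendQuantity, delayMs = 300) {
  let quantity = 0;
  let timer = null;
  return {
    increment() {
      quantity += 1; // render this immediately, no waiting on the server
      clearTimeout(timer); // cancel any pending request
      timer = setTimeout(() => sendQuantity(quantity), delayMs);
    },
    get quantity() {
      return quantity;
    },
  };
}
```

Tapping "+" six times now produces one request carrying quantity 6, rather than six blocking round trips; the async requests "converging on the correct number" is what that looks like from the outside.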

I don't know what Kroger's current performance is, because now that we don't have a problem we haven't been seeking solutions. So they've lost thousands of dollars of business over the years to Meijer from us.

An anecdote, of course, but I suspect a common one.

I put this out there in the hope that it will push more people into caring a bit more about performance. I think there's a fairly large range where "normal people" will use a sluggish app or website before they wander away, and if you do manage to rope them into a marketing survey they won't necessarily say it's because it's slow; you'll get other rationalizations, because it isn't a fully conscious reaction and realization for them. But nevertheless, you'll have a very, very leaky funnel, and just reading those surveys may not tell you why.

techdmn · 18 days ago
My personal read on this is that everyone is still trying to recreate the "sudden success" of FAANG-like companies in their start-up phases. (Never mind how long it actually took them to become big.) Basically upper management incentivizes "big bets" that might turn into a "moon shot". Those bets are new features. You'll never get rich quick just by optimizing latency. You might get rich slowly, but how is that going to pump the stock this quarter / get me promoted?
jpdb · 18 days ago
Web performance is probably already valued about as efficiently as it needs to be.

The numbers mentioned in the article are...quite egregious.

> Oh, Just 2.4 Megabytes. Out of a chonky 4 MB payload. Assuming they could rebuild the site to hit Alex Russell's target of 450 KB, that's conservatively $435,000,000 per year. Not too bad. And this is likely a profound underestimation of the real gain

This is not a "profound underestimation." Not by several orders of magnitude. Kroger is not going to save anywhere even remotely close to $435 million dollars by reducing their JS bundle size.

Kroger had $3.6-$3.8 billion in allocated capex in the year of 2024. There is no shot javascript bundle size is ~9% of their *total* allocated capex.

I work with a number of companies of similar size, and their entire cloud spend isn't $435,000,000 -- and bandwidth (or even networking as a whole) isn't in their top 10 line items.

A leak showed that Walmart spent $580m a year on Azure: https://www.datacenterdynamics.com/en/news/walmart-spent-580...

These numbers are so insanely inflated, I think the author needs to rethink their entire premise.

StopVibeCoding · 18 days ago
It's not just their direct cost; it's also the loss of revenue. The author wasn't arguing that they could save 435 million dollars in server costs.

Instead, they were arguing that in addition to saving maybe a million or two in server costs, they would gain an additional 435 million dollars in revenue because fewer people would leave their website.

nchmy · 18 days ago
Bizarre that this had to be spelled out...
zeroCalories · 18 days ago
I agree that the problem is product development, but the framing is wrong. Engineers generally have a solid intuition for what will perform well, but when UX designers and PMs with only a vague idea of how these technologies work dream up an idea and set a deadline, and the engineers are then evaluated on meeting it, the outcome is obvious.
torginus · 18 days ago
I disagree - it's often engineering's fault. It's been pretty consistent that a decently specced server, running PHP that was written sometime last century, generating static pages, with a Redis cache in front and static content served via nginx, beats the everloving pants off whatever flavor-of-the-month microservice SPA monstrosity modern devs tend to come up with.

The most hilarious thing is when you go to pirate sites (for stuff like comics, manga, or movies), and they're 100x faster and work better than the official paid alternative, even though I'm sure the former runs off some dude's old gaming PC in his bedroom.

extraisland · 18 days ago
A lot of companies seem to architect their web apps to deal with millions of users. When in reality they may have a couple of hundred hitting the site at once.

This explodes the cost of development and it makes current web development miserable IME.

I am forced to deal with everything being totally overengineered when a Flask app with a PostgreSQL backend could probably do the job on a reasonably priced VPS.

extraisland · 18 days ago
> Engineers generally have a solid intuition for what will perform well

I worked for about 15 years as a frontend developer. I've seen very little evidence of this being the case.

I've seen a huge number of developers (backend, frontend, doesn't matter much) do things that are really dumb, e.g. repeatedly looking up values that don't change often, or not trying to minimise round trips.

RajT88 · 18 days ago
Totally agree. I did a lot of work with an eCom site (a big one; you probably see their brand name daily) for about 5 years and latency mattered a lot to them. Any extra latency was deadly, and they freaked out about latency going up by 100ms.

So then you load their site - unbelievable garbage, tons of pop-up ads for promos, video frames all over the place, high res graphics, megabytes of javascript.

The backend just as messy, with dozens and dozens of layers that made the latency budget by the time you reached the database backend super slim. Think: orders dropping when database request latency hit 10ms at the 99th percentile.

Insanity.

cosmic_cheese · 18 days ago
And don’t forget that once the engineers are done, a nice thick layer of analytics junk is slathered on, plus whatever layer that allows marketing/sales to make arbitrary changes at will without code. By the time you’re done with all that even the best engineered web app has become a behemoth.

There are a few cases where the engineering side isn’t helping things though, like how the Spotify desktop app loads a full redundant set of JS dependencies for each pane, since each pane is an independent iframe — which they do so the teams responsible for the panes never have to interact.

ben_w · 18 days ago
It's not just that. I've seen plenty of technical talks where people are showing off (and a few jobs where we were required to use) stuff that's several layers of abstraction more complex than it needs to be.

Right now, I'm converting some C++ game code that was very obviously originally meant for a 68k Mac (with resource forks etc.) into vanilla JS. It's marginally easier to work with than SwiftUI + VIPER, and I'm saying that as someone who has been working on iOS apps since the first retina iPod came out, with only 14 months' experience of C++ and perhaps about the same, maybe a bit less, total experience of JS since getting one of those "learn foo in 24 hours" books from WHSmith with pocket money in the late 90s.

JimDabell · 18 days ago
It’s about to get worse. It doesn’t matter if you spend weeks optimising your web performance if visitors have to wait several seconds to go through a proof-of-work JavaScript widget, Cloudflare Turnstile, or a CAPTCHA to prove they aren’t AI crawlers before they can even see your site.
alerighi · 18 days ago
Because these days, JS frameworks are used even for things where a website with a server-side MVC framework like in the old days (in whatever language: PHP, Java, Python, etc.) would be just fine. Maybe with some stuff like form validation added in the frontend where needed, with jQuery or even plain JS.

Not to say that React is useless — it has its applications — but 95% of websites shouldn't need it, and I shouldn't download 20+ MB of JS files just to load the homepage of a site.

Another thing to consider: most people who work in tech probably have gigabit or better internet connections. Unfortunately, the users of the website don't have this luxury, and often use either mobile (4G if lucky) connections or slow ADSL connections (fiber has yet to reach my house, and I'm on a 13 Mbit ADSL line).

I hate when just to load the homepage of a site it takes more than 30 seconds (I'm looking at you, ClickUp!). It shouldn't be acceptable: just use HTTP for what was created for, serving hyper text, and serve me hypertext. I would rather load in continuous small HTML files (that is fast even with slow connections because the latency is typically in the ms order even with ADSL) that download a full JS application each time I access a page.