wruza · 2 years ago
10MB, 12MB, …

Compare it to people who really care about performance — Pornhub, 1.4 MB

Porn was always the web's actual hi-tech with good engineering, not these joke-level “tech” giants. I can't remember a single time they'd screw up basic UI/UX, content delivery or common sense.

devjab · 2 years ago
I never really understood why SPAs became so popular on the web. It's like we suddenly and collectively became afraid of the page reload on websites, just because it's unwanted behaviour in actual web applications.

I have worked with enterprise applications for two decades, including some that were built before I was born. And I think React has been the absolute best frontend for these systems compared to everything that came before (you're free to insert Angular/Vue/whatever, by the way). These frameworks are designed to replace all the various horrible client/server UIs that preceded them. For a web page that's hardly necessary, unless you're Gmail, Facebook or similar, where you need interactive and live content updates because of how these products work. But for something like Pornhub? Well, PHP serves them just fine, and this is true for most websites really. Just look at HN, and at how many people still vastly prefer the old.reddit.com site to the modern SPA. Hell, many people would probably still prefer an old.Facebook to the newer, much slower version.

figmert · 2 years ago
> It’s like we suddenly and collectively became afraid of the page reload on websites

I used to work at a place where page reloads were constantly brought up as a negative. They couldn't be bothered to fix the slow page loads, so they avoided page changes instead.

I argued several times that we should improve performance instead of worrying about page reloads, but never got through to anyone (in fairness, that was probably mostly because of a senior dev there).

At some point a new feature was being developed, and instead of just adding it to our existing product, it was decided to build it as a separate product and embed it in an iframe.

ralusek · 2 years ago
I love SPAs. I love making them, and I love using them. The thing is, they have to be for applications. When I'm using an application, I am willing to eat a slower initial load time. Everything after that is faster, smoother, more dynamic, more responsive.
diggan · 2 years ago
> But for something like pornhub? Well PHP serves them just fine,

Kind of fun to make this argument for Pornhub when visiting their website with JavaScript disabled just seems to render a blank page :)

> how many people still vastly prefer the old.reddit.com site to their modern SPA

Also a fun argument: the times I've seen analytics on it, old.reddit.com seems to hover around or below 10% of the visitors to subs. But I bet this varies a lot by subreddit.

dudus · 2 years ago
Why did SPAs become popular? Because they "feel" native on mobile. Now you have page transitions and prefetch, which really should kill this use case.
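
To illustrate: a minimal sketch of both, in plain browser JS. `document.startViewTransition` currently needs a Chromium-based browser, and `/next-video.html` is a made-up URL.

    // Hint the browser to fetch a likely next page ahead of time.
    const hint = document.createElement('link');
    hint.rel = 'prefetch';
    hint.href = '/next-video.html';
    document.head.appendChild(hint);

    // Animate an in-page update with the View Transitions API,
    // falling back to a plain update where unsupported.
    function update(render) {
      if (document.startViewTransition) {
        document.startViewTransition(render);
      } else {
        render();
      }
    }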

IMO the bloat he talks about in the post is not representative of 2024. Pretty much all frontend development of the last two years has been moving away from SPAs, toward smaller builds and faster loading times. Fair enough, it's still visible on a lot of sites. But I'd argue it's probably better now than a couple of years ago.

littlecranky67 · 2 years ago
Well, to stay with OP's example porn website: because it is not an SPA, you can't really make a playlist play in full screen. The hard page reload requires you to interact to go fullscreen again on every new video. Not an issue in SPAs (see YouTube).
ffsm8 · 2 years ago
I feel like the term SPA has ceased to have any meaning with the HN crowd.

I mean, I do generally agree with your sentiment that SPAs are way overused, but several of the examples in TFA aren't SPAs, which should already show you how misguided your opinion is.

Depending on the framework, SPAs can start at ~10KB. Really, the SPA is not the thing that's causing the bloat.
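
To make that concrete, here is roughly the entire "runtime" a small SPA needs; a sketch in plain JS, assuming the page has a `<div id="app">`:

    // A complete hash-based router; navigation never reloads the page.
    const routes = {
      '#/': () => '<h1>Home</h1>',
      '#/about': () => '<h1>About</h1>',
    };

    function render() {
      const view = routes[location.hash || '#/'];
      document.getElementById('app').innerHTML =
        view ? view() : '<h1>Not found</h1>';
    }

    window.addEventListener('hashchange', render);
    window.addEventListener('DOMContentLoaded', render);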

npteljes · 2 years ago
>Porn was always actual web hi-tech with good engineering, not these joke-level “tech” giants. Can’t remember a single time they’d screw up basic ui/ux, content delivery or common sense.

Well, I do remember the myriad shady advertising tactics that porn sites use(d): popups, popunders, fake content leading to other similar aggregation sites, opening a partner website instead of the content, poisoning SEO results as much as they can, and so on. Porn is not the tech driver people make it out to be; even the popular urban legend around Betamax vs VHS is untrue, and so is the claim that porn drives internet innovation. There is a handful of players who engineer a high-quality product, but they are hardly representative of the industry as a whole. Many others create link farms, dummy content, clone websites, false advertising, gamed search results, and so on. Porn is in high demand, it's a busy scene, and so many things happen around it. That's about it.

The current snappiness of the top-level sites is, I think, the result of competition. If one site's UX is shitty, the majority of viewers will just leave for the next one, since there is a deluge of free porn on the internet. So the sites actually have to optimize for retention.

These other websites have different incentives, so the optimized state is different too. The user is important, of course, but if a site also has shareholders, content providers, exclusive business deals, or a monopoly, then it doesn't have to optimize for user experience that much.

wruza · 2 years ago
I generally agree and understand; the reasoning is fine. But comments like this make me somewhere between sad and contemptuous towards the field. This neutral explanation supports a baseline that no professional could vocalize anywhere and still save face. I'm talking about YouTube focus & arrow issues here, not rocket science. Container alignment issues [1], scrolling issues [2], cosmic levels of bloat [$subj], you name it. Absolutely trivial things you can't screw up if you're at all hireable. It's not "unoptimized", it's the distilled personal/group incompetence of those who ought to be the best. That I cannot respect.

[1] https://www.youtube.com/watch?v=yabDCV4ccQs -- scroll to comments and/or switch between default/theater mode if not immediately obvious

[2] half of the internet, especially ux blogs

kmlx · 2 years ago
i worked in that field. one of the main reasons adult entertainment is optimised so heavily is that lots of users are from countries with poor internet.

countless hours spent on optimising video delivery, live broadcasts (using flash back in the day, and webrtc today), web page sizes... the works.

kcrwfrd_ · 2 years ago
Looks like us at playboy.com/app have Pornhub beat, with our 1.1 MB (when authenticated—when not, it's 993 kB).
qingcharles · 2 years ago
Do you work on it? Can I send you a bug list? :D
dontupvoteme · 2 years ago
Youtube also ripped off their "interesting parts of the video" bit entirely.
thakoppno · 2 years ago
> Can’t remember a single time they’d screw up basic ui/ux, content delivery or common sense.

There are many, many cases of porn websites breaking the law.

yomly · 2 years ago
Yes - writing PHP in 2024 is a crime that we should hold PH accountable for.
SebastianKra · 2 years ago
Any reason why we're looking at uncompressed data? Some of the listed negative examples easily beat GMaps' 1.5 MB when compressed.

Also, I'll give a pass to dynamic apps like Spotify and Gmail [1] if (and only if) the navigation after loading the page is fast. I would rather have something like Discord, which takes a few seconds to update on startup, than GitLab, which makes me wait up to two seconds for every. single. click.

The current prioritisation of cold starts and static rendering is leading to a worse experience on some sites IMO. As an experiment, go to GitHub and navigate through the file tree. On my machine, this feels significantly snappier than the rest of GitHub. Coincidentally, it's also one of the only parts that is not rendered statically. I click through hundreds of GitHub pages daily. Please, just serve me an unholy amount of JavaScript once, and then cache as much as possible, rather than making me download the entire footer every time I want to view a pipeline.
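
What I'm asking for is just ordinary HTTP caching of content-hashed bundles. A minimal sketch with Express, assuming the bundles in dist/assets carry a hash in their filenames:

    // A new deploy produces a new filename, so the bundle can be
    // cached "forever" and is downloaded exactly once.
    const express = require('express');
    const app = express();

    app.use('/assets', express.static('dist/assets', {
      maxAge: '365d',   // Cache-Control: max-age=31536000
      immutable: true,  // ...plus "immutable": skip revalidation entirely
    }));

    app.listen(3000);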

[1]: These are examples. I haven't used GMail and Spotify

acdha · 2 years ago
Compression helps transfer, but your device still has to parse all of that code. This comes up in discussions about reach because there's an enormous gap between iOS and Android CPU performance, and it gets worse when you look at the cheaper devices much of the public actually uses: new Android devices sold today can perform worse than a 2014 iPhone. If your developers are all using recent iPhones or flagship Android devices, it's easy to miss how much all of that code bloat affects the median user.

https://infrequently.org/2024/01/performance-inequality-gap-...

SebastianKra · 2 years ago
I happen to develop a JS app that also has to be optimised for an Android phone from 2017. I don't think the amount of JS by itself determines performance. You can make 1MB of JS perform just as poorly as 10MB.

In our case, the biggest performance issues were:

- Rendering too many DOM nodes at once; virtual lists help (see the sketch below this list).

- Using reactivity inefficiently.

- Random operations in libraries that were poorly optimised.
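
The virtual list idea, as a deliberately naive sketch (fixed row height assumed):

    // Render only the rows in view, absolutely positioned inside a tall
    // inner element so the scrollbar stays correct.
    const ROW_HEIGHT = 24; // px

    function renderVisible(container, inner, items) {
      const first = Math.floor(container.scrollTop / ROW_HEIGHT);
      const visible = Math.ceil(container.clientHeight / ROW_HEIGHT) + 1;
      inner.style.height = `${items.length * ROW_HEIGHT}px`;
      inner.innerHTML = items
        .slice(first, first + visible)
        .map((item, i) =>
          `<div style="position:absolute;top:${(first + i) * ROW_HEIGHT}px">${item}</div>`)
        .join('');
    }

    // Usage: container is a fixed-height scrollable element, inner its
    // only child with position:relative.
    // container.addEventListener('scroll', () =>
    //   renderVisible(container, inner, items));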

Finding those things was only possible by looking at the profiler. I don't think general statements like "less JS = better" help anyone. It helps to examine the size of web pages, but then you have to put that information into context: how often does this page load new data? Once the data is loaded, can you work without further loading? Are requests batched, or do waterfalls occur? Is this a page users will only visit once, or do they come back regularly? ...

BlueTemplar · 2 years ago
Even decently powerful phones can have issues with some of these.

Substack is particularly infuriating: sometimes it lags so badly that it takes seconds to display scrolled text (and bottom-of-text references stop working). And that's on a 2016 flagship, the Samsung Galaxy S7! I shudder to think of the experience on slower phones...

(And Substack also manages to slow down to a glitchy crawl when there are a lot of (text-only!) comments, even on my gaming desktop PC.)

hn_acker · 2 years ago
> Any reason why we're looking at uncompressed data? Some of the listed negative examples easily beat GMaps 1.5mb when compressed.

Because for a single page load, decompressing and using the scripts takes time, RAM space, disk space (more scratch space used as more RAM gets used), and power (battery drain from continually executing scripts). Caching can prevent the power and time costs of downloading and decompressing, but not the costs of using. My personal rule of thumb is: the bigger the uncompressed Javascript load, the more code the CPU continually executes as I move my mouse, press any key, scroll, etc. I would be willing to give up a bit of time efficiency for a bit of power efficiency. I'm also willing to give up prettiness for staticness, except where CSS can stand in for JS. Or maybe I'm staring at a scapegoat when the actual/bigger problem is sites which download more files (latent bloat and horrendously bad for archival) when I perform actions other than clicking to different pages corresponding to different URLs. (Please don't have Javascript make different "pages" show up with the same URL in the address bar. That's really bad for archival as well.)

Tangent: Another rule of thumb I have: the bigger the uncompressed Javascript load, the less likely the archived version of the site will work properly.

sbergot · 2 years ago
While you are right that there is a cost, the real question is whether this cost is significant. 10 MB is still very small in many contexts. If that is the price to pay for a better dev experience and more products, then I don't see the issue.
flexagoon · 2 years ago
> go to GitHub and navigate through the file tree. On my machine, this feels significantly snappier than the the rest of GitHub. Coincidentally, it's also one of the only parts that is not rendered statically

And it's also the only part of it that doesn't work on slow connections.

I've had a slow internet connection for the past week, and GitHub file tree literally doesn't work if you click on it on the website, because it tries to load it through some scripts and fails.

However, if, instead of clicking on a file, I copy its URL and paste it into the browser URL bar, it loads properly.

SebastianKra · 2 years ago
Wow, you're right. I just reproduced that by throttling the network.

But actually, that first click from the overview is still an HTML page. Once you're in the master-detail view, it works fast even when throttled.

willsmith72 · 2 years ago
gmail is terrible. idk if it's just me, but i have to wait 20 seconds after marking an email as read before closing the tab, otherwise it's not saved as read

spotify has huge issues with network connectivity. even if i download the album, it'll completely freak out as the network changes. a plain offline mode would be better than its attempt at staying online

avgcorrection · 2 years ago
Gmail has this annoying preference you can set: mark email as read if viewed for x seconds. Mine was set to 3 seconds, which I guess is why I would sometimes get a reply on a thread and have to refresh multiple times to get rid of the unread status.

Maybe that’s related?

bmacho · 2 years ago
gmail still has the HTML view: https://mail.google.com/mail/u/0/h?ui=html

They have been saying for a while that they will shut it down in February, so it may only work for another day or two.

jaredcwhite · 2 years ago
GitHub's probably the worst example of "Pjax" or HTMX-style techniques out there at this point…I would definitely not look at that and paint a particular picture of that architecture overall. It's like pointing at a particularly poor example of a SPA and then saying that's why all SPAs suck.
agos · 2 years ago
is there a good example of reasonably big/complex application using pjax/htmx style that sucks less? Because GitHub isn't making a good case for that technology
SebastianKra · 2 years ago
I'm inclined to agree, but the same thing happens on GitLab.
panstromek · 2 years ago
Interesting that you mention the GitHub file tree. I recently encountered periodic freezing of that whole page. I profiled it for a bit and found out that every few seconds it spends something like 5 seconds recomputing relative timestamps on the main thread.
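
For scale, that work only ever needs to be a once-a-minute batch. A sketch, assuming the timestamp elements carry a data-datetime attribute (hypothetical; GitHub's actual markup differs):

    const fmt = new Intl.RelativeTimeFormat('en', { numeric: 'auto' });

    function refreshTimestamps() {
      for (const el of document.querySelectorAll('[data-datetime]')) {
        const then = Date.parse(el.dataset.datetime);
        const minutes = Math.round((then - Date.now()) / 60000);
        el.textContent = fmt.format(minutes, 'minute'); // e.g. "5 minutes ago"
      }
    }

    // Batch once a minute, off the hot path. requestIdleCallback isn't
    // in Safari; setTimeout works as a fallback there.
    setInterval(() => requestIdleCallback(refreshTimestamps), 60000);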
azangru · 2 years ago
> I recently encountered a periodic freezing of that whole page.

Yes; this started happening after they rolled out the new version of their UI built with React several months ago.

pama · 2 years ago
From the article: “To be honest, after typing all these numbers, 10 MB doesn’t even feel that big or special. Seems like shipping 10 MB of code is normal now. If we assume that the average code line is about 65 characters, that would mean we are shipping ~150,000 lines of code. With every website! Sometimes just to show static content! ”
sublinear · 2 years ago
Any piece of software reflects the organization that built it.

The data transferred is going to be almost entirely analytics and miscellaneous third-party scripts, not the JavaScript actually used to make the page work (except for the "elephant" category, which is lazy-loading modules, i.e. React). Much of that is driven by marketing teams who don't know or care about any of this.

All the devs did was paste Google Tag Manager and/or some other script-injection service into the page. In some cases the devs don't even do that, and the page is modified by some proxy out in the production infrastructure.

Maybe the more meaningful concern is that marketing has more control over the result than the people actually doing the real work. In the case of the "elephant" pages, the bloat is the organization itself. Not just a few idiots, but idiots at scale.

DanielHB · 2 years ago
> All devs did was paste Google Tag Manager and/or some other script injection service to the page. In some cases the devs don't even do that and the page is modified by some proxy out in the production infrastructure.

Google Tag Manager is the best tool for destroying your page performance. At a previous job, GTM was in the hands of another, non-tech department; I had to CONSTANTLY monitor the crap being injected into the production pages. I tried very hard to get it removed.
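
The monitoring doesn't have to be fancy, by the way. A sketch of the kind of watchdog I mean, in plain JS:

    // Log every <script> injected at runtime, so you can see exactly
    // what the tag manager is adding to the production page.
    new MutationObserver((mutations) => {
      for (const m of mutations) {
        for (const node of m.addedNodes) {
          if (node.tagName === 'SCRIPT') {
            console.warn('injected script:', node.src || '(inline)');
          }
        }
      }
    }).observe(document.documentElement, { childList: true, subtree: true });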

jve · 2 years ago
I remember associating Google with lean and fast: Google the search engine (vs Yahoo) and Chrome (vs IE/FF, back when Chrome was released)... Chrome itself had hardly any UI, and that was a feature.
djtango · 2 years ago
I recently came back from a road trip in New Zealand; a lot of their countryside has little to no cell coverage. Combine that with roaming (which seems to add an additional layer of slowness), and boy did it suck trying to use a lot of the web.

Also, if any Spotify PMs are here: please review the offline UX. Offline is one of the most critical premium features, but actually trying to use the app offline really sucks in so many ways.

Tistron · 2 years ago
Offline is still miles and miles better than patchy internet. If Spotify thinks you have internet, it calls the server to ask for the contents of every context menu, waits seconds for a response, and then sometimes gives up on showing the menu at all, and sometimes falls back to what would have been instant in offline mode. I really loathe their player.
jnsaff2 · 2 years ago
Not only that; there are many apps with no online aspect to them that ship the Facebook SDK or some other spyware that makes a blocking call on app startup, and the app won't start without it succeeding, unless you are completely offline.

Especially annoying when one is using dns based filtering.

shepherdjerred · 2 years ago
This was one of the major reasons I left Spotify. Apple Music handles this much more gracefully.
jve · 2 years ago
Hmm, this also implies they need more infrastructure on their side, when they could just use cached values stored locally.
meowtimemania · 2 years ago
I get irritated by this too. When it happens I put my phone on airplane mode to force Spotify to show the offline ui.
diggan · 2 years ago
> Also if any spotify PMs are here, please review the Offline UX. Offline is pretty much one of the most critical premium features but actually trying to use the app offline really sucks in so many ways

Also, Spotify (at least on iOS) seems to have fallen into the trap of thinking there is only "online" and "offline", so when you're in between (really high latency, or a really lossy connection), Spotify thinks it's online when it really should be treating it as offline.

But to be fair, this is a really common issue, and Spotify is in no way alone in failing at it. It's hard to come up with the right threshold, I bet.
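
One plausible approach, as a sketch ('/ping' is a hypothetical endpoint): treat "technically online but unusably slow" as offline by probing with a short timeout.

    async function effectivelyOnline(timeoutMs = 2000) {
      if (!navigator.onLine) return false;  // definitely offline
      const ctrl = new AbortController();
      const timer = setTimeout(() => ctrl.abort(), timeoutMs);
      try {
        await fetch('/ping', { signal: ctrl.signal, cache: 'no-store' });
        return true;
      } catch {
        return false;  // slow or lossy: behave as offline
      } finally {
        clearTimeout(timer);
      }
    }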

rgblambda · 2 years ago
I've noticed BBC Sounds has the opposite problem. If you were offline and then get a connection it still thinks you're offline. Refreshing does nothing. You need to restart the app to get online.
dukeyukey · 2 years ago
I live in London which typically gets great signal everywhere. Except in the Underground network, where they're rolling out 5G but it's not there yet.

Please Spotify, why do I need to wait 30 seconds for the app to load anything when I don't have signal? All I want to do is keep listening to a podcast I downloaded.

m_rpn · 2 years ago
i will never understand what all the people on the tube are doing on their phones with no internet. do they have the entirety of youtube buffered XD?
user432678 · 2 years ago
Re: Spotify

So much agreement here. The offline mode is so annoying that I even started building my own offline-first iOS music app.

jjav · 2 years ago
> my own iOS offline first music app

Sadly ironic that Apple used to sell exactly this, in the shape of an iPod!

I hold on to mine, it is perfect in every way that a phone is terrible.

It is tiny and 100% offline, just what I need.

rldjbpin · 2 years ago
i think they just need a more aggressive timeout value to fall back to offline mode. i wonder whether their engineering made it too complicated to weigh these scenarios.
lifthrasiir · 2 years ago
One thing completely ignored by this post, especially for actual web applications, is that it doesn't actually break the JS files down to see why they are so large. For example, Google Translate is not a one-interaction app once you start to look further: it has dictionaries, alternative suggestions, transliterations, pronunciations, a lot of input methods and more. I still agree that 2.5 MB is too much even after accounting for that fact, and some optional features can and should be lazily loaded. But as it currently stands, the post is so lazy that it doesn't help any further discussion.
troupo · 2 years ago
> For example, Google Translate is not an one-interaction app once you start to look further; it somehow has dictionaries, alternative suggestions, transliterations, pronunciations, a lot of input methods and more.

Almost none of those are loaded in the initial bundle, are they? All those come as data from the server.

How much JS do you need for `if data.transliteration show icon with audio embed`?
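
Something like this, give or take (a sketch; all the names are invented):

    if (data.transliteration) {
      icon.hidden = false;
      audio.src = data.transliteration.audioUrl;
    }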

lifthrasiir · 2 years ago
> Almost none of those are loaded in the initial bundle, are they?

In my testing, at least some input methods are indeed included in the initial requests (!). And that's why I'm stressing it is not a "one-interaction" app; it is interactive enough that some (but not all) upfront loading might be justifiable.

BandButcher · 2 years ago
I don't want to hate on the author's post, but the screenshots being slow to load made me chuckle. Understandable, as images can be big and there were a lot of them; I just found it a little ironic.
crooked-v · 2 years ago
These days, slow-loading images usually mean that somebody hasn't bothered to use any of the automatic tooling various frameworks and platforms provide for viewport- and pixel-density-based image sets, and has just stuck in a maximum-size 10+ MB image.
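
The fix is usually a couple of attributes; a sketch with made-up URLs:

    <!-- Let the browser pick an appropriately sized file instead of
         always downloading the full-size original. -->
    <img src="/img/hero-800.jpg"
         srcset="/img/hero-400.jpg 400w,
                 /img/hero-800.jpg 800w,
                 /img/hero-1600.jpg 1600w"
         sizes="(max-width: 600px) 100vw, 800px"
         alt="Hero">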
infensus · 2 years ago
100% agree. Most of these apps could definitely use some optimization, but trivializing it to "wow, a few MBs of JavaScript just to show a text box" makes the comparison completely useless.
jakelazaroff · 2 years ago
I know the implication here is "too much JavaScript" but we also need to talk about how much of this is purely tracking junk.
BandButcher · 2 years ago
I was going to mention this: almost any company's brand site will have tracking and analytics libraries in place, usually to farm marketing and UX feedback.

What's worse is that some of them are fetched externally rather than bundled with the host code, increasing latency and potential security risks.

tadfisher · 2 years ago
> Whats worse is some of them are fetched externally rather than bundled with the host code thus increasing latency and potential security risks

Some vendor SDKs can be built and bundled from npm, but most of them explicitly require you to fetch their minified/obfuscated bundle from their CDN with a script tag. This is so they don't have to support older versions like most other software in the world, and so they can push updates without requiring customers to update their code.
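
The difference, as a sketch (both the CDN URL and the package name are made up):

    // The script-tag model: execute whatever the vendor's CDN serves
    // today, outside your build, your review, and your version pinning.
    const s = document.createElement('script');
    s.src = 'https://cdn.vendor.example/sdk-latest.min.js';
    document.head.appendChild(s);

    // The bundled model: pinned in package.json, auditable, cached and
    // shipped with the rest of your code.
    import { track } from 'acme-analytics';
    track('page_view');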

Try to use vendors that distribute open-source SDKs, if you have to use vendors.

willsmith72 · 2 years ago
i'm pro-privacy, but is it really so bad to get anonymous data about where people clicked and how long they stayed?

it would be almost impossible to measure success without it, whether it's a conversion funnel or tracking usage of a new feature

DanielHB · 2 years ago
In a previous job I had to declare war on Google Tag Manager (a tool that lets marketers inject random crap into your web application without developer input). I burned some bridges and didn't win; performance is still crap.

After those, it is the heavy libs that cause performance problems, like maps and charts; usually some clever lazy loading fixes that (see the sketch at the end of this comment). Some things I personally ran into:

- A QR-code scanning lib and a map lib being loaded at startup, when they were actually just really small features of the application

- ALL internationalisation strings being loaded at startup, as a waterfall request, before any other JS ran. Never managed to get this one fixed...

- Zendesk, which just completely destroys your page performance; it was mandated by upper management, and all I could do was delay loading it

After that comes badly designed code triggering too many DOM elements and/or re-renders and/or waterfall requests.

After that comes app-level code size; some lazy loading also fixes this, but it's usually not necessary until your application is massive.
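
The lazy-loading fix for the QR scanner case, as a sketch ('./qr-scanner.js' stands in for whatever heavy module you have):

    // Load the heavy lib at the moment it's needed, not at startup.
    async function onScanClick() {
      const { scanQrCode } = await import('./qr-scanner.js');
      scanQrCode(document.querySelector('video'));
    }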

veeti · 2 years ago
It's easy to test with an ad blocker in place. For instance, the GitLab landing page went from 13 megabytes to "just" 6 megabytes with tracking scripts blocked. The marketing department will always double the bloat of your software.
ginko · 2 years ago
But surely even pieces of scummy tracking code can't take up megabytes of memory, right?! Just collect user session data and send it to some host.
jakelazaroff · 2 years ago
Not sure whether you looked at the requests in the screenshots, but the tracking script code alone for many of these websites takes up megabytes of memory.
PetitPrince · 2 years ago
This compares how much JavaScript is loaded by popular sites (on a cold load). Some highlights:

- PornHub loads ~10x less JS than YouTube (1.4MB vs 12MB)

- Gmail has an incomprehensibly large footprint (20MB). Fastmail is 10x lighter (2MB). Figma is equivalent (20MB) while being a much more complex app.

- Jira has 58MB (whoa)

jkoudys · 2 years ago
Pornhub needs to be small. Jira will download once then be loaded locally until it gets updated, just like an offline app. Pornhub will be run in incognito mode, where caching won't help.
thrwwycbr · 2 years ago
> Jira will download once

Maybe you should take a look at the Network tab, because Atlassian sure does have a crappy network stack.

panstromek · 2 years ago
JIRA transfers like 20MB of stuff every time you open the board, including things like a 5MB JSON file with a list of all emojis and their descriptions (at least as of the last time I profiled it).
wruza · 2 years ago
> Pornhub will be run in incognito mode

It's not the '80s anymore; nobody cares about your porn. I have bookmarks on the bookmarks bar right next to electronics/grocery stores and HN. And if you're not logged in, how would PH and others know your preferences?

mewpmewp2 · 2 years ago
YouTube feels really snappy to me, but Figma is consistently the worst experience I have ever felt for web apps. Jira is horrible and slow also though.
latency-guy2 · 2 years ago
YouTube does not feel snappy to me anymore. It's still one of the better experiences I have on the internet, but quite a bit worse than years before.

I just tested my connection to YouTube right now: just a tiny bit over 1.2 seconds after not using it for a few days. A fresh load, no cache, no cookies: the entire page loaded in 2.8 seconds. A hot reload varied between 0.8 and 1.4 seconds. All done with at most uBlock as an extension, on desktop Chrome, with purported gigabit speeds from my ISP.

That speed is just OK, but it's a far cry from the 54ms response time I got hitting Google's server for the HTML document that carries all the dynamic content on YouTube.

Figma is very surprising to me. That bullshit somehow is PREFERRED by people; opening links designers send from that dogshit app brings my browser screeching down to speeds I haven't seen in decades, and I don't think I'm exaggerating at all when I say that.

troupo · 2 years ago
On desktop they load 2.5 MB of CSS and 12 MB of JavaScript to show a grid of images. And it still takes them over 5 seconds to show video lengths in the previews.

Youtube hasn't felt snappy in ages

niutech · 2 years ago
Figma uses WASM, so its total size is much bigger: https://www.figma.com/blog/figma-faster/
pkphilip · 2 years ago
No idea why an email client should have 20 MB of JS.
Spivak · 2 years ago
Holy god YouTube is 12MB? How!?
troupo · 2 years ago
They also load 2.5 MB of CSS on desktop :)
okaleniuk · 2 years ago
Meanwhile, all the pages on https://wordsandbuttons.online/ with all the animation and interactivity are still below 64 KB.

This one, for example, https://wordsandbuttons.online/trippy_polynomials_in_arctang... is 51 KB.

And the code is not at all economical: it's 80% copy-paste with small deviations. There is no attempt to save space by being clever either; it's all just good old vanilla JS. No zipping, no minification. The code is perfectly readable when opened with the "View page source" button.

The trick is a zero-dependency policy. No third-party, no internal. All the code you need, you get along with the HTML file. Paradoxically, in the long run, copy-paste is a bloat preventer, not a bloat cause.

kaba0 · 2 years ago
It could add at least some minimal margin. On mobile, I literally can’t see the edges.
lifthrasiir · 2 years ago
You can do the same with dependencies and "modern" JS toolkits. Dependencies themselves are not the cause but a symptom: websites and companies are no longer incentivized to reduce bloat, so redundant dependencies are hardly ever pruned.