A few weeks back, individual file pages stopped being readable with JavaScript off, and the browsing experience got much worse with JS on. Now it's the main repo pages too.
I've not seen any announcement about these changes.
Could someone shed some light on this?
The other factor separating the two users can be network speed: many areas of the world have access to extremely limited bandwidth, very slow bandwidth, or a mixture of both!
Combining both the computational gap and the bandwidth gap means the issue of large JavaScript files blocking access to information on a global website is an accessibility concern. People who block JS are pushing for a more accessible web, either directly or indirectly.
I won't fight tooth and nail for all websites to _not_ have JavaScript, but I wish they would be progressively enhanced by that JS and still (at least in the core sense) usable without it.
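Progressive enhancement, as described above, can be sketched roughly like this (a hypothetical newsletter form; the `/subscribe` endpoint and the fetch-based enhancement are illustrative assumptions, not any particular site's code):

```html
<!-- Works with JS off: a plain form POST the server can handle. -->
<form id="subscribe" action="/subscribe" method="post">
  <input type="email" name="email" required>
  <button type="submit">Subscribe</button>
</form>

<script>
  // Enhancement layer: only runs when JS is available. It intercepts
  // the submit and swaps in an inline confirmation, but the form
  // above keeps working as a normal POST without it.
  document.getElementById('subscribe').addEventListener('submit', async (e) => {
    e.preventDefault();
    const form = e.target;
    await fetch(form.action, { method: 'POST', body: new FormData(form) });
    form.replaceWith(Object.assign(document.createElement('p'),
                                   { textContent: 'Subscribed!' }));
  });
</script>
```

The core interaction lives in the markup; the script is an optional layer on top, which is the "usable without it" property the comment is asking for.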
If you do a View Source on the Reuters.com main landing page, its script file is a single line that's 1,300,000 characters long. And every time you land on a page it tries to dump those 1,300,000 characters of script on you (on top of the multi-MB videos it auto-loads, and the pop-ups).
Most major corporate websites are that way: all generated by tooling, with massive JS downloads and huge 64-bit hash keys on every <div>, tracking clicks on each element and dumping all the computation on the user.
There are a few who just won't engage with the utility of it, but for most people I've talked to, it's just a matter of oversaturation. Case in point with GitHub: why should I need a script to view documents? The web was built for documents. Browsers are made to interpret documents. Putting JavaScript in the middle of that should require a good reason. And, unfortunately, for a lot of applications, and even static websites, devs tend to reach for JavaScript because it's easier than reasoning about how to handle static content differently from the actually dynamic content they're working with.
But, like others have said, GitHub does have some pretty compelling reasons to be an SPA. So there's a little bit of reasonable understanding here. The problem, in this instance, is compound: first, GitHub WAS document-first and has since been made to require an unnecessary technology layer, which is pretty galling; second, Microsoft is not some 5-dev operation prioritizing tasks for their first app. They are one of the most well-established companies in the world, famous for their backward compatibility. If ANYONE should be able to produce a competent update to a product (GitHub) that threads the needle between implementing more functionality and preserving best practices for content delivery, it's Microsoft. And yet they've chosen to roll out an undeniable degradation in service to the production site. It's just kind of baffling.
– it consumes more battery and bandwidth (the latter still being expensive on mobile in some parts of the world);
– it's often used to track you (including tracking everything you do: mouse movements, things you type into forms without even submitting them, etc.), serve you ads, and so on;
– most security issues (XSS, browser-based 0-day exploits that escape the sandbox) rely on JavaScript being turned on;
– it overall slows down many pages with no tangible benefit to the user (this one is obviously not always the case; it depends on the website).
It also eliminates most of the incredibly annoying bits of the web: cookie banners, modal dialogs asking me to subscribe to newsletters, autoplaying videos.
On my phone, 90% of the time I'm looking to read your content, not interact with your SPA. Enabling JS is a two-touch process if I need it.
I think it is fairly stupid on my part, but it does add another layer of effort between me and doomscrolling.
We believe in hypermedia.
• It’s just not necessary. The Web is, at its heart, about resources and actions on them. Those resources are mostly documents. There is just no need to execute a Turing-complete programming language in order to display a list of links to files (which is what GitHub is).
• It hinders use of lightweight browsers. There is no good reason that I should be forced to fire up a VM like Firefox or Chrome when I could use eww, w3m or elinks.
• It is insecure. JavaScript enables exploits. Large browsers such as Firefox and Chrome have a much larger attack surface than small HTML viewers such as eww, w3m or elinks.
• It hinders privacy. Javascript and large browsers enable much more persistent user tracking than lightweight browsers such as eww, w3m or elinks.
For me, the first reason is the strongest: I think that those who push Javascript fundamentally misunderstand the Web. I am grudgingly fine with using it to provide functionality which would be impossible without it, but I also think that experiments such as htmx show a way forward to put more behaviour into the browser itself. Browser apps are certainly neat! But the Web is ultimately about linked documents. It is built on the Hyper Text Transfer Protocol and the Hyper Text Markup Language, not the Network App Protocol and the Network App Programming Paradigm for Inexperienced Engineering Students (although I do sometimes think Javascript proponents just need a nap and to have their nappies changed grin).
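The htmx experiment mentioned above pushes behaviour into HTML attributes rather than hand-written scripts. A minimal sketch of the idea (the `/contacts` endpoint returning an HTML fragment is an assumed example, not a real API):

```html
<!-- htmx: declarative partial-page updates via attributes. -->
<!-- Clicking the button GETs /contacts and swaps the returned -->
<!-- HTML fragment into the #list element; no custom JS needed. -->
<script src="https://unpkg.com/htmx.org@1.9.12"></script>

<button hx-get="/contacts" hx-target="#list" hx-swap="innerHTML">
  Load contacts
</button>
<div id="list"></div>
```

The server stays in the hypermedia business (it returns HTML, not JSON), which is why proponents see it as extending the document model rather than replacing it.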
That being said: there is a certain entitlement that comes along with it. While they're absolutely entitled to decide what runs on their machine, they have this expectation that sites should dedicate engineering/QA/time/etc. to this niche, in essence giving a sub-3% user base a disproportionate amount of attention. IE11 has higher usage, and you likely shouldn't support that either.
Sites should both morally and legally support ADA users. But screen readers and other accessible technologies have had full JavaScript support for going on 20 years now. If you're spending energy/money on this no-JS cause, you're doing it for a small handful of contrarians who won't thank you.
Some comments in this thread are arguing that many less economically developed countries provide poorer connectivity and lesser bandwidth than elsewhere. Are the users in these countries truly "sub-3%" of the global user base? I honestly don't know.
Depends on the site, naturally, but it seems to me that devoting dev resources to serve users in less developed countries is a good thing. Wikipedia, for instance, renders essentially the same with or without Javascript. That helps to account for its vast international uptake, is my guess.
https://en.wikipedia.org/wiki/Semantic_Web
This is a really important point, because the World Wide Web (WWW) was designed for openness, collaboration, and compatibility.
This is also the origin of ideas like XML, which were designed to have schemas, namespaces, and transformability.
A lot of foundational ideas have been lost in the name of productivity or profitability, which undermines innovation and the free movement of data. JavaScript was also once a weapon in the browser wars, used to create incompatibilities between browsers (see Internet Explorer).
So the saga is rather complex. But the conclusion is clear: if you care, insist on openness and compatibility on the web. And study the classics, the original design principles of the web, and their motivations.
I usually just want the text from a webpage; everything else is worthless to me. Or worse than worthless: annoying, even dangerous.
It would be great if I could somehow just get a text from a server.
I can do that via more complicated Emacs web browsers that essentially render a full browser engine in a buffer. But I shouldn't have to for things that are only text in the first place.
I didn't turn off JavaScript on my devices until I took a two-month trip to East Africa and needed to in order to get any sane functionality. Since then I've been a strong proponent of testing on slow devices and slow connections, because that is the norm for far more of the world than we ever think about.
It's usually better to avoid it unless necessary, but many web developers overengineer their pages with JavaScript to the point where it's a problem.
At their core, web pages are documents, not apps.
Assuming a website functions without it, disabling JS is almost always an improvement, in my experience.
I want plain html, not whizbang documents that behave like nothing else
(As an aside I hate the Jira UI for being the same.)
Which is fine ... but something has gone wrong somewhere if we require a full-blown programming language to display text on the internet. It is absurd. It is one thing to have HTML with a bit of extra JS on the side, but being unable to display text without scripting enabled is comedy.
And if nothing else this really makes searching and indexing harder. That isn't good for the average internet denizen.
Tracking.
Pure HTML doesn't allow much tracking.
Our sector convinced the C-suite that we need all kinds of data, and now they want it.
$$$$$$$$$$