spankalee · 7 months ago
People have had some huge misunderstandings of what's actually going on:

- That Mason opening issues means that it's a Google effort. It's not.

- That the "Should we remove..." issue is for community feedback. It's not. Spec issues are a collaboration vehicle for spec maintainers. There isn't enough of the community on GitHub for that to be a good feedback mechanism.

- That Mason or Google hid comments and locked the thread. I heard on good authority that it was actually Apple employees, in their role as spec repo admins.

- That Google brought up the idea. As best I can tell from the meeting minutes, a Mozilla rep did this time, though it's been brought up occasionally for at least 10 years.

- That the spec PR will be merged. At this point the PR is there to show what it would mean to remove XSLT from the spec.

- That a decision has been made. These things are the beginning of the process.

- That XSLT even can be removed. Even though the vendors are tentatively in support, they are fully aware that removal might not be viable in practice. I would guess that they think they can remove it, but they don't know for sure. They know usage numbers aren't always accurate, and they have ways of hedging their bets: flags with different defaults in different channels, enterprise policies, reverse origin trials, etc.

xg15 · 7 months ago
A lot of that is irrelevant though. I don't think the problem is that Google might have unilaterally decided this, the problem is that there are unilateral decisions of this kind at all for a tech that affects billions of people.

(And I'm counting agreement among the handful of browser vendors as a unilateral decision as well. The group is not exactly large.)

The second part was basically the author saying "calm down guys, relax, there is a process" - and then speculating about what that process might be.

If there is an orderly, public process that is being followed here, that includes a time and place for community feedback, shouldn't you be able to read up on it somewhere instead of speculating?

riedel · 7 months ago
I would agree to see the outcry more as a symptom of this meritocratic system. I think we're at a point where new browser/rendering engines like Ladybird or Servo are being developed, and an independent group should make sure that they thrive. Making specs simpler can help here. But I think many things are done for the wrong reasons (e.g. Google wanting to cut costs on parts irrelevant to its income, other vendors being reliant on them).

Last time I used XSLT in the browser was actually transforming PMML to JavaScript-executable ML models about 10 years ago. Before that, I think it was building a lightweight web frontend for our SVN repo. With XML APIs replaced by JSON or binary formats, the relevance is becoming less and less. In the end it's only about legacy stuff, because there are XSLT compilers [0] that could fill the gap (maybe with a small web extension, though that won't work on phones with Apple's and Google's native browsers...)

[0] https://github.com/egh/xjslt

throw7 · 7 months ago
My jotted down notes:

1. all major vendors (google, mozilla, webkit) want to remove xslt

2. chrome does not have resources to support xslt

3. removing/disabling xslt will be a slow methodical process. don't panic.

4. when opening a change proposal, a pull request of code changes is mandatory to show the exact changes; it is not a "countdown to merge" (sic)

5. info that leaks to the public should include context or links to full context

6. removing xslt support in browsers is not good or bad, but "it depends"

Springtime · 7 months ago
Some of the things that stood out to me about the news:

- The thread by the Chromium dev proposed a JavaScript polyfill for the native XSLT calls that was originally 1MB minified and, in just the last few days, has grown to 3MB minified. XSLT was beneficial in the browser because it was native; a 3MB polyfill is a rather big ask as a per-site replacement for anything meant to be snappy on slower connections.

- It seems from various mentions that the catalysts for this surfacing now were the sole maintainer of the XSLT library used in Chromium expressing trouble maintaining it some months back and handing it off to a different sole maintainer, along with a recently disclosed vulnerability in that particular library. Firefox, OTOH, is said to use a different XSLT library.

- The Chromium team routinely awards vulnerability-discovery bounties in the tens to hundreds of thousands of dollars. Just the other week they awarded $250k to an author who discovered a tricky Chromium exploit. I'd be curious whether they've ever funded development of the XSLT library they use, as it seems they'd rather just be rid of it.

- Within days of posting the open question to the working group, and a week prior to the PR for the spec removal, a Chromium ticket by the author set milestones for XSLT removal in Chromium. It seems less a tentative proposal and more leading by example.

dkiebd · 7 months ago
3. It will eventually be removed. Does it matter whether it takes three months or three years? I suppose none of the browser vendors will give developers money to replace the XSLT usage in their codebases with something else.

5. Funny that we are talking about "info that leaks to the public" when we are discussing standards that may be important to billions of people, as if keeping things private were reasonable.

magicalist · 7 months ago
> Funny that we are talking about "info that leaks to the public"

It's a poor choice of words by the GP. This was a public discussion, what would be private about it?

Rather it "leaked" from people with shared context to people without it. The point of the article is that since the discussion is public, there will be people that come across it without context, so it would be a good idea to include context in these kinds of discussions in the future:

> If a removal discussion is going to be held in public, then it should assume the general public will see it and provide enough context for the general public to understand the actual nature of the discussion.

basscomm · 7 months ago
> Does it matter whether it will take three months or three years?

It does!

I run a small hobby site built with XML and XSLT because I'm not a great programmer, but XSLT is something I can actually wrap my head around and use without too much fuss. If support goes away, I need to know how much time I have to rewrite/migrate my site to something else.

goyagoji · 7 months ago
I find it bizarre. Obviously we want to be able to run pages from 2015 far into the future, but certainly for at least a few more years.

As a browser maker, why would you even put this work into coordinated processes, instead of investing in a way to patch out your native code and doing that continuously, at a slow pace, for every aging feature?

meepmorp · 7 months ago
> Since I suppose none of the browser vendors will give developers money to change their xslt usage in codebases for something else.

Let's turn that around: are you willing to pay a browser vendor to keep supporting xslt so you can keep your codebase unchanged?

sugarpimpdorsey · 7 months ago
> Does it matter whether it will take three months or three years?

Do you think it matters to the guy who said he has an entire factory with IoT machinery that uses XSLT?

Should they shut the factory down?

lenkite · 7 months ago
I wish there were a way to fund the XSLT feature in browsers. The Chrome team should just open up a funding-for-features page.
danaris · 7 months ago
How could #2 possibly be true without it being a deliberate choice on Google's part? They have staggering, absurd amounts of money. If they needed more resources allocated to Chrome, they could just do that.
andybak · 7 months ago
This is addressed directly in the linked article. "Google" and "the Chrome team" are not the same entity.
nashashmi · 7 months ago
The people who use these features are busy using them, and they are not part of browser development. So they revolt in a nasty manner, like when FTP was torn down.

It is nice to see workarounds. But those workarounds are no use to HTML purists who do things without JS. They are the real web developers. They have always relied on the browser to improve and get faster, not to start abandoning old technologies.

Chrome OS also became popular on this very point: a browser can act as a universal viewer, so the need for separate programs goes away. There are many lite OSes that likewise use the browser to do everything.

Now, I understand that the web has failed XML and XML has failed the web, in favor of JSON. I also wholeheartedly believe that XML and XSLT can do so much more for the web, and do it natively.

But open systems are not in the interest of the big FAANG and Microsoft ecosystem. They abandoned RSS. They abandon APIs on a regular basis. And this turn of events is causing browser vendors to start developing for big companies rather than open indie developers.

There is much to gain from XML and XSLT, but I want to see a specific development: I want to see XSL import an XML, and the reverse. XSL would be the view, XML the model, and the browser the controller. The MVC paradigm.

oorza · 7 months ago
> HTML purists who do things without JS. They are the real web developers.

I don't think using one set of technologies rather than another can really be said to make one a "real" web developer. Real web developers are developers who put sites on the web; there is no benefit to anyone in claiming one choice is "real" and the others therefore lesser-than.

Put it this way: whatever set of constraints you used to arrive at that decision does not apply to every situation, and when you frame things through the lens where you implicitly disregard that oh-so-obvious truth, it's hard for anyone to interpret your analysis as anything but myopic in the best case and actively self-serving and destructive in the worst case. It's nearly impossible to read through someone speaking this way about a topic and believe their analysis is objective, comprehensive, or without obvious bias, even if it may actually be all of those things.

karlgkk · 7 months ago
> Like when ftp was torn down

FTP needed to be torn down. It’s sad, but true. There was no reason for anyone to be using it past 2010 - and in fact many active reasons against using it.

ocdtrekkie · 7 months ago
I have yet to find a website which pleasantly lets me download a bunch of things in an efficient and organized manner that compares with FTP. The hilarity of buying a Humble Bundle and pressing "download all" and watching your browser spam "save as" windows for a minute and a half...

I kinda wish I could (S)FTP a lot more things than I can today.

nashashmi · 7 months ago
It was a quick way to share files, and FTP as an internet application made a lot of sense. The only counter-argument was that it sent passwords in the clear and did not support encryption, unlike SFTP. But that is like arguing against HTTP in favor of HTTPS.

I don't think they put SFTP in browsers yet.

spankalee · 7 months ago
FTP was not torn down just because some web browsers stopped also being FTP clients. You can still use any FTP client with any FTP server you want.
nashashmi · 7 months ago
But it was torn down from the browser. That’s what we are talking about with respect to XSLT
dang · 7 months ago
Related ongoing thread:

Should the web platform adopt XSLT 3.0? - https://news.ycombinator.com/item?id=44987552

Recent and also related:

XSLT removal will break multiple government and regulatory sites - https://news.ycombinator.com/item?id=44987346 - Aug 2025 (99 comments)

"Remove mentions of XSLT from the html spec" - https://news.ycombinator.com/item?id=44952185 - Aug 2025 (523 comments)

Should we remove XSLT from the web platform? - https://news.ycombinator.com/item?id=44909599 - Aug 2025 (96 comments)

scrollaway · 7 months ago
Fun fact: a very old version of the "WoW Armory", Blizzard Entertainment's World of Warcraft database at https://wowarmory.com/, used XSLT for all of its styling - though, IIRC, only on Firefox, which was the only web browser to properly implement it.

The website's "item" and "character" pages were served entirely in XML.

Here's a web archive: https://web.archive.org/web/20080220170805/https://wowarmory...

(dead_dove.gif)

thro1 · 7 months ago
Gecko currently has much deeper integration of the XSLT engine with the browser internals: The XSLT engine operates on the browser DOM implementation. WebKit and Chromium integrate with libxslt in a way that's inherently bad for performance ( https://github.com/whatwg/html/issues/11578#issuecomment-321... )

Firefox's XSLT alone is faster and better than Google's (and than JS). Likewise, the old Firefox extensions were too powerful for Google to compete with Firefox.

JS is very much needed for ads, tracking, and other strings attached - and XSLT is not for that - but it would make JS mostly obsolete in many cases. (Only "cross-browser functionality for XSLT is incomplete with certain features like <xsl:text disable-output-escaping="yes"> having open issues".)

Google pays Mozilla to cripple Firefox. It's money from ads, to not let the web be free.

paulsutter · 7 months ago
Is there any reason this can't be solved with a proxy server, so that legacy software that uses XSLT can still run on the new browsers that lack it?

XSLT is a really weird feature and it seems sensible to drop it; it would be nice if there were a transparent solution for the old software that uses it.

basscomm · 7 months ago
> Is there any reason this can't be solved with a proxy server, so that legacy software that uses XSLT can still run on the new browsers that lack it?

With XSLT in a browser, I can throw some static XML and XSLT files on any web server and they will Just Work™ and be usable, without me having to do much other than telling the web server to serve index.xml instead of index.html. If I had to learn how to set up and maintain a proxy server so visitors could still view the rendered pages, I probably wouldn't bother.

> XSLT is a really weird feature and it seems sensible to drop it

What's weird about it? It lets me mark up some text using whatever XML makes sense to me, then write a template that the browser uses to transform it into (X)HTML it can display. For someone who builds basic sites, it's an easy way to do templating that doesn't involve setting up a 'real' programming environment.
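To make the setup described above concrete, here's a minimal sketch (file names, element names, and content are all hypothetical): the server serves a plain XML document whose xml-stylesheet processing instruction points at an XSLT template, and the browser applies the template before display.

```xml
<?xml-stylesheet type="text/xsl" href="site.xsl"?>
<!-- index.xml: content in whatever markup makes sense to the author -->
<page>
  <title>Hello</title>
  <body>Some text.</body>
</page>
```

```xml
<!-- site.xsl: the template the browser uses to render index.xml -->
<xsl:stylesheet version="1.0"
                xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
  <xsl:template match="/page">
    <html>
      <head><title><xsl:value-of select="title"/></title></head>
      <body>
        <h1><xsl:value-of select="title"/></h1>
        <p><xsl:value-of select="body"/></p>
      </body>
    </html>
  </xsl:template>
</xsl:stylesheet>
```

The only server configuration needed is serving index.xml as the entry point; the transformation itself happens client-side.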

cosmic_cheese · 7 months ago
I hope some form of include makes it into the HTML standard and popular browser implementations before XSLT is gone. It’s perhaps the single largest gap in HTML’s capabilities and could reduce the need for SSR, JS, and a build step in a lot of circumstances. Until now XSLT has been able to fill that gap for those who need that capability, but if it’s going away…
samdoesnothing · 7 months ago
There is no point in introducing yet another standard to do something that can be done in a few lines of JS:

   customElements.define(
     "html-include",
     class extends HTMLElement {
       connectedCallback() {
         fetch(this.getAttribute("href"))
           .then((response) => response.text())
           .then((text) => (this.innerHTML = text));
       }
     }
   );

The browser has had a powerful scripting language built in as a core primitive for decades, and yet people would rather create more standards to avoid using it for ideological reasons, and then complain that there aren't enough competing browsers.
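For illustration, the element defined above would be dropped into a page like this (the path is hypothetical):

```html
<html-include href="/shared/header.html"></html-include>
```

One caveat: because the fetched fragment is assigned via innerHTML, any script tags inside it won't execute, and relative URLs in the fragment resolve against the including page rather than the fragment's own location.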

cosmic_cheese · 7 months ago
That’s not nearly as nice as being able to drop a…

   <include src="/shared/header.html">
…anywhere you want a header. Its being part of HTML also allows engines to optimize in ways that otherwise wouldn’t be possible and to implement user-toggleable features like lazy loading. It can get better as browsers get better, and it's less likely to break than any custom JS I write.

layer8 · 7 months ago
We should drop HTML and CSS files as well, because you can just use JS to create DOM nodes and set styles on them.
naniwaduni · 7 months ago
It's something that people keep reinventing on both sides of the connection. That XSLT is the better of two terrible ways to do it, and that your argument is that the other one still exists, is an embarrassment.
nashashmi · 7 months ago
The point is to take JS implemented features and make them native to the browser.
sugarpimpdorsey · 7 months ago
> yet people would rather create more standards to avoid using it for ideological reasons

Uh, the standard in question has existed for over two decades.

spankalee · 7 months ago
cosmic_cheese · 7 months ago
Interesting. Not a bad proposal, but am I reading correctly that partials from a different URL (“includes”) are a someday thing rather than part of the initial spec?

Either way it’d allow several types of sites to have zero dependencies and no build step which is pretty cool.

nashashmi · 7 months ago
Remember SHTML, which was supposed to be server-side includes? It never made it to the web browser.