One thing I really appreciated during the peak years of working with XSLT was how much I learned about XPath. Once it clicks, it’s surprisingly intuitive and powerful. I don’t use XSLT much these days, but I still find myself using XPath occasionally. It’s one of those tools—once you understand it, it sticks with you.
Oh what fun it was when Internet Explorer supported XSLT natively ... have your webpage content in an RSS file or similar, add an XSLT stylesheet reference, and the browser would render it as a nice web page. A nice way to have a single-page blog without server-side code, no generator step or anything.
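The whole trick was a single processing instruction at the top of the feed. A minimal sketch (the stylesheet filename is made up):

  <?xml version="1.0" encoding="UTF-8"?>
  <?xml-stylesheet type="text/xsl" href="blog.xsl"?>
  <rss version="2.0">
    <channel>
      <title>My single-page blog</title>
      <!-- items... -->
    </channel>
  </rss>

A browser that supports it fetches blog.xsl, runs the transform client-side, and renders the resulting HTML instead of the raw XML.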
But well, Firefox didn't do it that way, so there was no practical way to use it.
Oh but yes, Firefox (and Chrome) do support XSLT natively! See https://paul.fragara.com/feed.xml as an example (the Atom feed of my website, styled with XSLT).

FTR, there's also XSLTProcessor (https://developer.mozilla.org/en-US/docs/Web/API/XSLTProcess...) available from JavaScript in the browser. I use that on my homepage, to fetch and transform-to-HTML said Atom feed, then embed it:
  // Fetch the Atom feed and the XSLT stylesheet in parallel.
  const atom = new XMLHttpRequest(), xslt = new XMLHttpRequest();
  atom.open("GET", "feed.xml");
  xslt.open("GET", "atom2html.xsl");
  atom.onload = xslt.onload = function () {
    // Run only once both requests have finished.
    if (atom.readyState !== 4 || xslt.readyState !== 4) return;
    const proc = new XSLTProcessor();
    proc.importStylesheet(xslt.responseXML);
    // Transform the feed into a DOM fragment, then graft the
    // rendered feed element into the page.
    const frag = proc.transformToFragment(atom.responseXML, document);
    document.getElementById("feed").appendChild(frag.querySelector("[role='feed']"));
  };
  atom.send();
  xslt.send();
Server-side, I’ve leveraged XSLT (2.0) in the build process of another website, to slightly transform (X)HTML pages before publishing (canonicalize URLs, embed JS & CSS directly in the page, etc.): https://github.com/PaulCapron/pwa2uwp/blob/master/postprod.x...
Firefox does support XSLT. At Standard Ebooks, our ebook OPDS/RSS feeds are styled with XSLT when viewed with a browser. See for example https://standardebooks.org/feeds/opds/new-releases (use view source to see that it's an XML document).
To this day, XSLT is one of basically two ways to deliver "content" and "layout" almost fully separately and have a browser combine that in a presentable way, entirely on the client—the other, of course, being to make a mess in JS.
I used XSLT and Active Server Pages to build my school district's website back around 2000 - each school had their own page and each teacher had their own, too - teachers could modify their pages directly in the browser.
I forget the _exact_ mechanics but it was definitely all done in Internet Explorer with XML as the actual content per-page and transformation was done using XSLT.
Interesting but I definitely hate XSLT as a result. Considering this was my first real programming job, it's surprising I continued.
The bigger problem was the poor experience when anything failed and your users got an inscrutable error page or, if it was a gap in functionality rather than a hard compatibility error, parts of the page didn't work as expected and you might not even notice if it depended on a browser / MS shared library version you didn't use.
Philosophically I like the idea, but that era needed a lot more attention to tool quality and interoperability. I'm not sure anything the XML-based standards committees did after the turn of the century mattered as much as spending that money on improving tools would have: so many tools relied on libxml2 / libxslt and thus never got support for newer versions of XPath, XSLT, etc.
I did this in embedded work some 20+ years ago. I was working on a project where the microcontroller didn't have enough resources to do much CGI or to store much more than a few kilobytes, so a big dashboard with tens of temperature gauges and other readouts was done as a combination of served XML data and a single, small XSLT file; the rest all happened in the browser. Fun times indeed.
XML/XSLT/XPath are great, but the XSLT ecosystem has been effectively "frozen" for over a decade in terms of innovation and tooling. The last major step was XSLT 3.0 (2017), which introduced streaming and higher-order functions. In practice, however, no new engines or radically different approaches have emerged since then.
And there is only one free XSLT 3.0 processor available, Saxon-HE (though it lacks the schema-aware and streaming features).
Even worse, in practice you often only get to use XSLT 1.0 and XPath 1.0, because that's all the common open-source libraries typically used to embed XSLT into software (i.e. libxml/libxslt and Xalan) support. (The existence of EXSLT is best forgotten.) For anything else, especially dedicated standalone XSLT development, there is basically only Saxon. I wish there were better open-source support for XSLT/XPath 2+, but I don't think it's likely to happen.
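For reference, dedicated standalone development with Saxon-HE typically means driving it from the command line, something like this (jar name and file names are illustrative):

  java -cp saxon-he-12.jar net.sf.saxon.Transform -s:input.xml -xsl:transform.xsl -o:output.html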
I'd say damn near 20yrs. I used it quite extensively about 10yrs ago and it was an odd one then, but was very useful for the purpose we had (semantic validation of XML payloads).
XSLT is a bad programming language wrapped around XPath. I'd rather take any existing general purpose programming language, add an XPath library to it, and write anything I'd do in XSLT in a programming language where I don't have to wait until version 3.0 for, well, all this stuff: https://www.w3.org/TR/xslt-30/#whats-new-in-xslt3
And a lot of that "badness" is precisely that XSLT is a very closed, scarcity-minded language where basic library and language features have to be routed through a standards committee (you think it's hard to get a new function into Python's or Go's standard library?), when all you really need is an XPath library and any abundance-mindset language you can pick up, where if you need something like "regular expression" support you can just go get a library. Or literally anything else you may need to process an XML document, which is possibly anything. Which is why a general-purpose language is a good fit.
That "What's New In XSLT 3.0" is lunatic nonsense if you view it through the lens of being a programming language. What programming language gets associative arrays after 18 years? And another 8 years after that you still can't really count on that being available?
Programming languages tend to have either success feed success, or failure feed failure. Once one of those cascades starts it's very difficult to escape. XSLT is pretty firmly in the latter camp, probably kept alive only by the fact it's a standard and that still matters to some people. It's frozen because effectively nobody cares, because it's frozen, because nobody cares.
I definitely recommend putting XPath in your toolbelt if you have to deal with XML at all though.
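That advice is easy to try without even leaving the browser: document.evaluate() exposes XPath 1.0 over any parsed DOM. A minimal sketch (the XML and the query are invented for illustration):

  // Parse some XML, then run an XPath query over it.
  const doc = new DOMParser().parseFromString(
      "<books><book><title>A</title><price>35</price></book></books>",
      "application/xml");
  const hits = doc.evaluate("//book[price > 30]/title", doc, null,
      XPathResult.ORDERED_NODE_SNAPSHOT_TYPE, null);
  for (let i = 0; i < hits.snapshotLength; i++)
      console.log(hits.snapshotItem(i).textContent); // logs "A"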
Years ago, I was maintaining a huge XML-to-XML transformation in XSLT. The input format was the XML-based config file of the system, created by the configuration tool. The output was an XML file with the same information, reorganized so the system could read it in efficiently (changing the order of things, introducing redundancy by replicating similar information for different parts of the system, etc.).
(It was a building information system: fire alarms, access control, lots of business rules stored in XML.)
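The classic skeleton for that kind of XML-to-XML rewrite is the identity transform plus targeted overrides. A sketch of the pattern (the element names are invented; this is not the project's actual code):

  <!-- Copy everything through unchanged... -->
  <xsl:template match="@*|node()">
    <xsl:copy>
      <xsl:apply-templates select="@*|node()"/>
    </xsl:copy>
  </xsl:template>

  <!-- ...except the parts that need reordering or duplicating. -->
  <xsl:template match="zone">
    <zone>
      <xsl:apply-templates select="@*"/>
      <xsl:apply-templates select="rule"/>   <!-- rules first -->
      <xsl:apply-templates select="sensor"/> <!-- then sensors -->
    </zone>
  </xsl:template>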
While the XML was easier to transform in XSLT than in the native C++, and yes, XSLT was probably the right tool, I developed a deep hatred for it at that time. It felt like a functional language with all the important parts removed.
Yes, pattern matching is a good thing, but hey - I can do pattern matching for rules in any decent language. It was just the amount of existing code that prevented me from porting it to another language.
(And I remember a few ugly hacks, where I exposed "programming language" stuff from C# - which we also used - to the XSLT processor)
However, with all the XSLT ugliness: XPath is amazing! I love that.
I was wondering whether I was the only person to think XSLT was a poor tool, although for different reasons. I had to work with XSLT for several years, and it felt like the worst programming language I had ever seen. For me, using code written in XML to process XML felt absurd, and debugging felt next to impossible. I thought it would be nice if someone would create a library in some other programming language (maybe Prolog) that would do the same thing. If it first had to "compile" stuff into XSLT, so be it, but programming in XML was so verbose that I had trouble keeping track of my program's structure.
Abandoning XML is and continues to be the web's biggest mistake.
Client-side templating, custom elements, validation against schemas, native data binding for forms: we could have had it all, and we threw it away, preferring instead to rebuild it over and over again in React until the end of time.
It was actually a hypertext format, as opposed to JSON, so HATEOAS actually made sense. The fact that we went backwards, no longer using a hypertext format for almost all web requests, is one of the dumbest moves in web development. I get the incentives that influenced it, but yuck.
If the web was being built today it would be nothing but Javascript and Canvas, the idea of something like HTML would have you laughed out of the room. Documents? You have PDF for that.
I did a lot of XSLT 20 years ago. I worked for a company making an open-source CMS that did everything in XML. Content was XML, obviously; pipelines (using Apache Cocoon) were defined in XML and used XSLT to transform XML into different XML. We got quite proficient in it. We even used XSLT to generate XSLT. A coworker figured out how to calculate a square root in XSLT (not for production, obviously).
It's fun to work in such a declarative way, although all the XML gets tiring. I learned a ton there, though. XSLT is great for its intended purpose, but maybe the fact that you can also use it for other things is a risk.
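The square-root trick mentioned above was presumably something like Newton's method written as a recursive named template, since XSLT 1.0 has neither loops nor mutable variables. A sketch (assumes n > 0; the iteration cap is arbitrary):

  <xsl:template name="sqrt">
    <xsl:param name="n"/>
    <xsl:param name="x" select="$n div 2"/> <!-- current guess -->
    <xsl:param name="i" select="0"/>        <!-- iteration count -->
    <xsl:choose>
      <xsl:when test="$i &gt;= 20">
        <xsl:value-of select="$x"/>
      </xsl:when>
      <xsl:otherwise>
        <xsl:call-template name="sqrt">
          <xsl:with-param name="n" select="$n"/>
          <xsl:with-param name="x" select="($x + $n div $x) div 2"/>
          <xsl:with-param name="i" select="$i + 1"/>
        </xsl:call-template>
      </xsl:otherwise>
    </xsl:choose>
  </xsl:template>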
It's a problem in both the XML and RDF [1] worlds that the same representation gets used for everything. Part of the success of CSS is that it looks different from HTML; it could have been expressed with angle brackets, but I think people would then have had terrible trouble understanding where the HTML ends and the CSS begins.

[1] Oddly, SPARQL-in-Turtle isn't half bad to my eye, see https://spinrdf.org/sp.html
Cocoon was indeed my first exposure to the wonderful world of caching. I used Cocoon's event caching to build a preemptive cache for a particularly complicated situation.
My impression of XSLT is that there were representatives from every different programming language paradigm on the XSLT standard committee, and each one of them was able to get just enough of what was special about their own paradigm into the standard to showcase it while sabotaging the others and making them all look foolish, but not enough to actually get any work done or lord forbid synergistically dovetail together into a unified whole.
The only way I was ever able to get anything done with XSLT was to use Microsoft's script extensions to drop down into JavaScript and just solve the problem with a few lines of code. And that begs the question of why I'm not just solving this problem with a few lines of JavaScript code instead of inviting XSLT to the party.
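For the record, that escape hatch looked roughly like this with MSXML's msxsl:script extension (the slugify function is a made-up example):

  <xsl:stylesheet version="1.0"
      xmlns:xsl="http://www.w3.org/1999/XSL/Transform"
      xmlns:msxsl="urn:schemas-microsoft-com:xslt"
      xmlns:js="urn:my-scripts">
    <!-- JScript embedded in the stylesheet, callable from XPath. -->
    <msxsl:script language="JScript" implements-prefix="js">
      function slugify(s) { return s.toLowerCase().replace(/\s+/g, "-"); }
    </msxsl:script>
    <xsl:template match="title">
      <a href="#{js:slugify(string(.))}"><xsl:value-of select="."/></a>
    </xsl:template>
  </xsl:stylesheet>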
More on XML, XSLT, SGML, DSSSL, and the DDJ interview "A Triumph of Simplicity: James Clark on Markup Languages and XML": https://news.ycombinator.com/item?id=33728303
I haven't used XSLT since 2007, but I used it as an alternative to ASP.NET for building some dynamic but not too advanced websites, and it had very impressive performance when cached properly (I believe it was over an order of magnitude faster than default ASP.NET). I went sleuthing for the framework but I think it's lost. Also, the Google Code archive is barely functional anymore, but I did find the XSLT cache function I built for it: https://github.com/blixt/old-google-code-svn/blob/main/trunk...
There is a draft of XSLT 4.0 in the works, though: https://qt4cg.org/specifications/xslt-40/Overview.html
Last updated just under a week ago!
XPath is probably the most useful part of XSLT.
Pretty cool to see XSLT mentioned in 2025!