One extremely important XSLT use-case is RSS/Atom feeds. Right now, clicking on a link to a feed brings up a wall of XML (or worse, a download prompt). If the feed has an XSLT stylesheet, it can be presented in a way that a newcomer can understand and use.
I realize that not that many feeds are actually doing this, but that's because feed authors are tech-savvy and know what to do with an RSS/Atom link.
But someone who hasn't seen/used an RSS reader will see a wall of plain-text gibberish (or a prompt to download the wall of gibberish).
XSLT is currently the only way to make feeds into something that can still be viewed.
I think RSS/Atom are key technologies for the open web, and discovery is extremely important. Cancelling XSLT is going in the wrong direction (IMHO).
I've done a bunch of things to try to get people to use XSLT in their feeds: https://www.rss.style/
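For anyone who hasn't wired this up before: attaching a stylesheet to a feed is a single processing instruction at the top of the XML. A minimal sketch (the stylesheet path and feed contents here are placeholders):

```xml
<?xml version="1.0" encoding="UTF-8"?>
<!-- Tells a browser to render this feed through the given XSLT stylesheet. -->
<?xml-stylesheet type="text/xsl" href="/feed-style.xsl"?>
<rss version="2.0">
  <channel>
    <title>Example Feed</title>
    <link>https://example.com/</link>
    <description>A feed that renders as a readable page in the browser.</description>
  </channel>
</rss>
```

Feed readers ignore the processing instruction and parse the XML as usual, so the same file serves both audiences.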
Isn't this kind of an argument for dropping it? Yeah it would be great if it was in use but even the people who are clicking and providing RSS feeds don't seem to care that much.
You are probably right, but it is depressing how techies don't see the big picture & don't want to provide an on-ramp to the RSS/Atom world for newcomers.
Gotta love the reference to the <link> header element. There used to be an icon in the browser URL bar when a site had a feed, but they nuked that too.
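For reference, the discovery mechanism being described is the alternate-link markup in the page head, which is what browsers used to surface as the feed icon (the URL and title here are placeholders):

```html
<head>
  <!-- Feed autodiscovery: readers (and, once upon a time, browsers)
       scan the document head for this link element. -->
  <link rel="alternate" type="application/rss+xml"
        title="Site feed" href="/feed.xml">
</head>
```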
> There used to be an icon in the browser URL bar when a site had a feed, but they nuked that too.
This is actually a feature of Orion[0], and among the reasons why I believe it to be one of the most (power) user-oriented browsers in active development.
It's such a basic thing that there's really no good reason to remove the feature outright (as mainstream browsers have), especially when the cited reason is to "reduce clutter" which has been added back tenfold with garbage like chatbots and shopping assistants.
Man, reaching way back in history here, but this reminds me of why I stopped contributing to Mozilla decades ago. My contribution was the link toolbar, which was supposed to give a UI representation of the canonical link elements like next and prev and whatnot. At the last minute before a major release some jerkhole of a product manager at AOL cut my feature from the release. It's incredible the way such petty bureaucrats have shaped web browsers over the years.
IIRC, all of the proposed workarounds involved updating the sites using XSLT, which may not always be particularly easy, or even something publishers will realize they need to do.
For RSS/Atom feeds presented as links in a site (for convenience to users), developers can always offer a simple preview for the feed output using: https://feedreader.xyz/
> Cancelling XSLT is going in the wrong direction (IMHO).
XSLT isn't going anywhere: hardwiring into the browser an implementation that's known to be insecure and is basically unmaintained is what's going away. The people doing the wailing/rending/gnashing about the removal of libxslt needed to step up to fix and maintain it.
It seems like something an extension ought to be capable of, and if not, fix the extension API so it can. In firefox I think it would be a full-blown plugin, which is a lower-level thing than an extension, but I don't know whether Chromium even has a concept of such a thing.
> XSLT isn't going anywhere: hardwiring into the browser an implementation that's known to be insecure and is basically unmaintained is what's going away.
Not having it available from the browser really reduces the ability to use it in many cases, and lots of the nonbrowser XSLT ecosystem relies on the same insecure, unmaintained implementation. There is at least one major alternative (Saxon), and if browser support was switching backing implementation rather than just ending support, “XSLT isn’t going anywhere” would be a more natural conclusion, but that’s not, for whatever reason, the case.
I don’t see anything that looks remotely like a normative argument about what browsers should or should not do anywhere in my post that you are responding to, did you perhaps mean to respond to some other post?
My point was that the decision to remove XSLT support from browsers rather than replacing the insecure, unmaintained implementation with a secure, maintained implementation is an indicator opposed to the claim "XSLT isn’t going anywhere”. I am not arguing anything at all about what browser vendors should do.
The idea is that if they did so, the people using software running in the browser could continue to use XSLT with just the browser platform because the functionality would still be there with a different backend implementation, but instead that in-browser XSLT functionality is going somewhere, specifically, away.
Right but either way, the vulnerability exists today, and you're saying that whether or not the browser platform supports the functionality that harbors the vulnerabilities, the browser platform should be responsible for resolving those vulnerabilities. That's how I read it.
> and you're saying that whether or not the browser platform supports the functionality that harbors the vulnerabilities, the browser platform should be responsible for resolving those vulnerabilities.
No, I'm not (and I keep saying this explicitly) saying that browsers should or should not do anything, or be responsible for anything. I’m not making a normative argument, at all.
I am stating, descriptively, that browser vendors choosing to remove XSLT functionality rather than repairing it by using an alternative implementation is very directly contrary to the claim made upthread that “XSLT isn’t going anywhere”. It is being removed from the most popular application platform in existence, with developers being required to bring their own implementation for what was previously functionality supported by the platform. I am not saying that this is good or bad or that anyone should or should not do anything differently or making any argument about where responsibility for anything related to this lies.
As others have pointed out, there are other options for styling XML that work well enough in practice. You can also do content negotiation on the server, so that a browser requesting an html document will get the human-readable version, while any feed reader will be sent the XML version. (If you render the html page with XSLT, you can even take advantage of better XSLT implementations where you don't need to work around bugs and cross-platform jank.) Or you can rely on `link` tags, letting users submit your homepage to their feed reader, and having the feed reader figure out where everything is.
There might even be a MIME type for RSS feeds, such that if you open an RSS feed in your browser, it automatically figures out the correct application (i.e. your preferred RSS reader) to open that feed in. But I've not seen that actually implemented anywhere, which is a shame, because that seems like by far the best option for user experience.
Google decided to drop XSLT, because the volunteer-maintained libxslt had no maintainers for some time. So, instead of helping the project, they just decided to remove a feature.
Almost all of them? As I recall, there was a single volunteer developer maintaining the XML/XSLT libraries they were using.
Wasn't it similar with OpenSSL 13+ years ago? Few volunteer maintainers, and money only got thrown at the project after a couple of major vulnerabilities?
I'm sure there's more and that's why the famous xkcd comic is always of relevance.
XSLT as a feature is being removed from web browsers, which is pretty significant. Sure it can still be used in standalone tools and libraries, but having it in web browsers enabled a lot of functionality people have been relying on since the dawn of the web.
> hardwiring into the browser an implementation that's known to be insecure and is basically unmaintained is what's going away
So why not switch to a better maintained and more secure implementation? Firefox uses TransforMiix, which I haven't seen mentioned in any of Google's posts on the topic. I can't comment on whether it's an improvement, but it's certainly an option.
> The people doing the wailing/rending/gnashing about the removal of libxslt needed to step up to fix and maintain it.
Really? How about a trillion dollar corporation steps up to sponsor the lone maintainer who has been doing a thankless job for decades? Or directly takes over maintenance?
They certainly have enough resources to maintain a core web library and fix all the security issues if they wanted to. The fact they're deciding to remove the feature instead is a sign that they simply don't.
And I don't buy the excuse that XSLT is a niche feature. Their HTML bastardization AMP probably has even less users, and they're happily maintaining that abomination.
> It seems like something an extension ought to be capable of
I seriously doubt an extension implemented with the restricted MV3 API could do everything XSLT was used for.
> and if not, fix the extension API so it can.
Who? Try proposing a new extension API to a platform controlled by mega-corporations, and see how that goes.
I've been involved in the RSS world since the beginning and I've never clicked on an RSS link and expected it to be rendered in a "nice" way, nor have I seen one.
> XSLT is currently the only way to make feeds into something that can still be viewed.
You could use content negotiation just fine. I just hit my personal rss.xml file, and the browser sent this as the Accept header:
You can easily ship out an HTML rendering of an RSS file based on this. You can have your server render an XSLT if you must. You can have your server send out some XSLT implemented in JS that will come along at some point.
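As a concrete illustration of that negotiation step, here is a minimal sketch of deciding which representation to serve based on the Accept header. The function name and MIME groupings are my own invention for illustration, not from any real server:

```python
# Minimal content-negotiation sketch: serve HTML to browsers and raw XML
# to feed readers, based on the client's Accept header and its q-values.

def prefers_html(accept_header: str) -> bool:
    """Return True if the client ranks text/html above the XML feed types."""
    best_html, best_xml = 0.0, 0.0
    for part in accept_header.split(","):
        fields = part.strip().split(";")
        mime = fields[0].strip()
        q = 1.0  # per HTTP semantics, a missing q parameter means q=1
        for f in fields[1:]:
            f = f.strip()
            if f.startswith("q="):
                try:
                    q = float(f[2:])
                except ValueError:
                    pass
        if mime in ("text/html", "application/xhtml+xml"):
            best_html = max(best_html, q)
        elif mime in ("application/rss+xml", "application/atom+xml",
                      "application/xml", "text/xml"):
            best_xml = max(best_xml, q)
    return best_html > best_xml

# A browser typically leads with text/html; a feed reader asks for XML.
print(prefers_html("text/html,application/xhtml+xml;q=0.9,*/*;q=0.8"))  # True
print(prefers_html("application/rss+xml, application/xml;q=0.9"))       # False
```

A real implementation would also handle wildcards like `*/*`, but even this rough split is enough to send humans an HTML rendering and machines the feed itself.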
To a first approximation, nobody cares enough to use content negotiation any more than anyone cares about providing XML stylesheets. The tech isn't the problem, the not caring is... and the not caring isn't actually that big a problem either. It's been that way for a long time and we aren't actually all that bothered about it. It's just a "wouldn't it be nice" that comes up on those rare occasions like this when it's the topic of conversation and doesn't cross anyone's mind otherwise.
Another point: it is shocking how many feeds have errors in them. I analyzed the feeds of some of the top contributors on HN, and almost all had something wrong with them.
Even RSS wizards would benefit from looking at a human-readable version instead of raw XML.
> I've been involved in the RSS world since the beginning and I've never clicked on an RSS link and expected it to be rendered in a "nice" way, nor have I seen one.
Maybe it's more for people who have no idea what RSS is and click on the intriguing icon. If they weren't greeted with a load of what seems like nonsense for nerds there could have been broader adoption of RSS.
> I've been involved in the RSS world since the beginning and I've never clicked on an RSS link and expected it to be rendered in a "nice" way, nor have I seen one.
So that excludes you from the "someone who hasn't seen/used an RSS reader" demographic mentioned in the comment you are replying to.
It's encouraging to see browsers actually deprecate APIs, since I think a lot of problems with the Web, and Web security in particular, come from people starting to use new technologies too fast but not stopping use of old ones fast enough.
That said, it's also pretty sad. I remember back in the 2000s writing purely XML websites with stylesheets for display, and XML+XSLT is more powerful, more rigorous, and arguably more performant now in the average case than JSON + React + vast amounts of random collated libraries which has become the Web "standard".
But I guess LLMs aren't great at generating XSLT, so it's unlikely to gain back that market in the near future. It was a good standard (though not without flaws), I hope the people who designed it are still proud of the influence it did have.
Some people seem to think XSLT is used for the step from DOM -> graphics. This is not the first time I have seen a comment implying that, but it is wrong. XSLT is for the step from 'normalized data' -> DOM. And I like that this can be done in a declarative way.
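A tiny sketch of that data -> DOM step: the stylesheet below declaratively turns a hypothetical `<posts>` document into HTML (element names are invented for illustration):

```xml
<?xml version="1.0"?>
<xsl:stylesheet version="1.0"
                xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
  <!-- Match the root of the normalized data and emit an HTML document. -->
  <xsl:template match="/posts">
    <html>
      <body>
        <!-- One heading and summary per <post> element in the source. -->
        <xsl:for-each select="post">
          <h2><xsl:value-of select="title"/></h2>
          <p><xsl:value-of select="summary"/></p>
        </xsl:for-each>
      </body>
    </html>
  </xsl:template>
</xsl:stylesheet>
```

No imperative code anywhere: the transform is a set of pattern/output rules, which is exactly the declarative quality being described.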
What about XML + CSS? CSS works the exact same on XML as it does on HTML. Actually, CSS works better on XML than HTML because namespace prefixes provide more specific selectors.
The reason CSS works on XML the same as HTML is because CSS is not styling tags. It is providing visual data properties to nodes in the DOM.
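To make that concrete, a stylesheet like the one below can style raw XML directly; the element names are from a hypothetical feed-like document, not any particular schema:

```css
/* XML elements have no default rendering, so give them one explicitly. */
channel > title { display: block; font-size: 1.5em; font-weight: bold; }
item            { display: block; margin: 1em 0; }
item > link     { display: block; color: blue; }
```

The XML document would reference it with `<?xml-stylesheet type="text/css" href="feed.css"?>` at the top. The limitation versus XSLT is that CSS can only style the nodes that exist; it can't reorder, filter, or generate new structure.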
> I remember back in the 2000s writing purely XML websites with stylesheets for display
Yup, "been there, done that" - at the time I think we were creating reports in SQL Server 2000, hooked up behind IIS.
It feels this is being deprecated and removed because it's gone out of fashion, rather than because it's actually measurably worse than whatever's-in-fashion-today... (eg React/Node/<whatever>)
Agreed on API deprecation, the surface is so broad at this point that it's nearly impossible to build a browser from scratch. I've been doing webdev since 2009 and I'm still finding new APIs that I've never heard of before.
> I remember back in the 2000s writing purely XML websites with stylesheets for display
Awesome! I made a blog using XML+XSLT, back in high school. It was worth it just to see the flabbergasted look on my friends' faces when I told them to view the source code of the page, and it was just XML with no visible HTML or CSS[0].
The "severe security issue" in libxml2 they mention is actually a non-issue and the code in question isn't even used by Chrome. I'm all for switching to memory-safe languages but badmouthing OSS projects is poor style.
It is also kinda a self-burn. Chromium is an aging code base [1]. It is written in a memory-unsafe language (C++), calls hundreds of outdated & vulnerable libraries [2] and has hundreds of high-severity vulnerabilities [3].
Google is too cheap to fund or maintain the library they built their browser with after its hobbyist maintainers, who kept it going for more than a decade, got burnt out, so they're ripping out the feature instead.
Their whole browser is made up of unsafe languages, and their attempt to sort of make C++ safer has yet to produce a usable proof-of-concept compiler. This is a fat middle finger to all the people whose free work they grabbed to collect billions for their investors.
Sounded like the maintainers of libxml2 have stepped back, so there needs to be a supported replacement, because it is widely used. (Or if you are worried about the reputation of "OSS", you can volunteer!)
The issue in question is just one of the several long-unfixed vulnerabilities we know about, from a library that doesn't have that many hands or eyes on it to begin with.
Maintaining web standards without breaking backwards compatibility is literally what they signed up for when they decided to make a browser. If they didn't want to do that job, they shouldn't have made one.
Chromium is open source and free (both as in beer and speech). The license says they've made no future commitments and no warranties.
Google signed up to give something away for free to people who want to use it. From the very first version, it wasn't perfectly compatible with other web browsers (which mostly did IE quirks things). If you don't want to use it, because it doesn't maintain enough backwards compatibility... Then don't.
The license would be relevant if I'd claimed that removing XSLT was illegal or opened them up to lawsuits, but I didn't. The obligation they took on is social/ethical, not legal. By your logic, chrome could choose to stop supporting literally anything (including HTML) in their "browser" and not have done anything that we can object to.
IIRC, lack of IE compatibility is fundamentally different, because the IE-specific stuff they didn't implement was never part of the open web standards, but rather stuff Microsoft unilaterally chose to add.
The license is the way it is not by choice. We should be clear about that and acknowledge KHTML, and both Safari and Chromium origins. Some parts remain LGPL to this day.
Nobody is badmouthing open source. It's the core truth, open source libraries can become unmaintained for a variety of reasons, including the code base becoming a burden to maintain by anyone new.
And you know what? That's completely fine. Open source doesn't mean something lives forever.
Where's the best collection or entry point to what you've written about Chrome's use of Gnome's XML libraries, the maintenance burden, and the dearth of offers by browser makers to foot the bill?
I like XSLT, and I’ve been using the browser-based APIs in my projects, but I must say that XSLT ecosystem has been in a sad state:
- Browsers have only supported XSLT 1.0, for decades, which is the stone age of templating. XSLT 3.0 is much nicer, but there’s no browser support for it.
- There are only two cross-platform libraries built for it: libxslt and Saxon. Saxon seriously lacks ergonomics to say the least.
One option for Google as a trillion dollar company would be to drive an initiative for “better XSLT” and write a Rust-based replacement for libxslt with maybe XSLT 3.0 support, but killing it is more on-brand I guess.
I also dislike the message “just use this [huge framework everyone uses]”. Browser-based template rendering without loading a framework into the page has been an invaluable boon. It will be missed.
To anyone who says to use JS instead of XSLT: I block JS because it is also used for ads, tracking and bloat in general. I don't block XSLT because I haven't come across malicious use of XSLT before (though to be fair, I haven't come across much use of XSLT at all).
I think being able to do client-side templating without JS is an important feature and I hope that since browser vendors are removing XSLT they will add some kind of client-side templating to replace it.
XSLT is being exploited right now for security vulnerabilities, and there is no solution on the horizon.
The browser technologies that people actually use, like JavaScript, have active attention to security issues, decades of learnings baked into the protocol, and even attention from legislators.
You imagine that XSLT is more secure but it’s not. It’s never been. Even pure XSLT is quite capable of Turing-complete tomfoolery, and from the beginning there were loopholes to introduce unsafe code.
As they say, security is not a product, it’s a process. The process we have for existing browser technologies is better. That process is better because more people use it.
But even if we were to try to consider the technologies in isolation, and imagine a timeline where things were different? I doubt whether XML+XSLT is the superior platform for security. If it had won, we’d just have a different nightmare of intermingled content and processing. Maybe more stuff being done client-side. I expect that browser and OS manufacturers would be warping content to insert their own ads.
The percentage of visitors who block JS is extremely small. Many of those visits are actually bots and scrapers that don’t interpret JS. Of the real users who block JS, most of them will enable JS for any website they actually want to visit if it’s necessary.
What I’m trying to say is that making any product decision for the extremely small (but vocal) minority of users who block JS is not a good product choice. I’m sorry it doesn’t work for your use case, but having the entire browser ecosystem cater to JS-blocking legitimate users wouldn’t make any sense.
I block JS, too. And so does about 1-2% of all Web users. JavaScript should NOT be REQUIRED to view a website. It makes web browsing more insecure and less private, makes page load times slower, and wastes energy.
To put that in context, about 6 percent of US homes have no internet access at all. The “I turn off JS” crowd is at least 3x smaller than the crowd with no access at all.
The JS ship sailed years ago. You can turn it off but a bunch of things simply will not work and no amount of insisting that it would not be required will change that.
I’m not saying change is not possible. I’m saying the change you propose is misguided. I do not believe the entire world should abandon JS to accommodate your unusual preferences nor should everyone be obliged to build two versions of their site, one for the masses and one for those with JS turned off.
Yes, JS is overused. But JS also brings significant real value to the web. JS is what has allowed websites to replace desktop apps in many cases.
> Yes, JS is overused. But JS also brings significant real value to the web. JS is what has allowed websites to replace desktop apps in many cases.
Exactly. JS should be used to make apps. A blog is not an app. Your average blog should have 0 lines of JS. Every time I see a blog or a news article whose content doesn't load because I have JS disabled, I strongly reconsider whether it's worth my time to read or not.
Did I say abandon? No. I said it should not be required. JavaScript should be supplementary to a page, but not necessary to view it. This was its original intent.
> JS is what has allowed websites to replace desktop apps in many cases.
Horribly at that, with poorer accessibility features, worse latency, abused visual style that doesn't match the host operating system, unusable during times of net outages, etc, etc.
> JavaScript should be supplementary to a page, but not necessary to view it.
I’m curious. Do Google Maps, YouTube, etc even work with JS off?
> This was its original intent.
Original intent is borderline irrelevant. What matters is how it is actually used and what value it brings.
> Horribly at that
I disagree. You say you turn JS off for security but JS has made billions of people more secure by creating a sandbox for these random apps to run in. I can load up a random web app and have high confidence that it can’t muck with my computer. I can’t do the same with random desktop apps.
> You say you turn JS off for security but JS has made billions of people more secure by creating a sandbox for these random apps to run in.
is "every website now expects to run arbitrary code on the client's computer" really a more secure state of affairs? after high profile hardware vulnerabilities exploitable even from within sandboxed js?
from how many unique distributors did the average person run random untrusted apps that required sandboxing before and after this became the normal way to deliver a purely informational website and also basically everything started happening online?
People used to download way more questionable stuff and run it. Remember shareware? Remember Sourceforge? (Remember also how Sourceforge decided to basically inject malware that time?)
I used to help friends and family disinfect their PCs from all the malware they’d unintentionally installed.
> I’m curious. Do Google Maps, YouTube, etc even work with JS off?
I use KDE Marble (OpenStreetMap) and Invidious. They work fine.
> Original intent is borderline irrelevant. What matters is how it is actually used and what value it brings.
And that's why webshit is webshit.
> I can’t do the same with random desktop apps.
I can, and besides the point, why should anyone run random desktop apps? (Rhetorical question, they shouldn't.) I don't run code that I don't trust. And I don't trust code that I can't run for any purpose, read, study, edit, or share. I enforce this by running a totally-free (libre) operating system, booted with a totally-free BIOS, and installing and using totally-free software.
> I use KDE Marble (OpenStreetMap) and Invidious. They work fine.
So no. Some major websites don’t actually work for you.
> And that's why webshit is webshit.
I don’t understand this statement. Webshit is webshit because the platform grew beyond basic html docs? At some point this just feels like hating on change. The web grew beyond static html just like Unix grew beyond terminals.
> I don't run code that I don't trust. And I don't trust code that I can't run for any purpose, read, study, edit, or share. I enforce this by running a totally-free (libre) operating system, booted with a totally-free BIOS, and installing and using totally-free software.
If this is the archetype of the person who turns off JS then I would bet the real percentage is way less than 1%.
I don't see how this makes the "JS availability should be the baseline" assumption any more legitimate. We make it possible to function in a society for those 6% of people. Low percentage still works out to a whole lot of people who shouldn't be left out.
Without saying whether I think that's a good or bad thing, as a practical matter, I 100% agree. Approximately no major websites spend any effort whatsoever supporting non-JS browsers today. They probably put that in the class of text only browsers, or people who override all CSS: "sure, visitors can do that, but if they've altered their browser's behavior then what happens afterward is on them."
And frankly, from an economic POV, I can't blame them. Imagine a company who write a React-based website. (And again, I'm not weighing in on the goodness or badness of that.) Depending on how they implemented it, supporting a non-JS version may literally require a second, parallel version of the site. And for what, to cater to 1-2% of users? "Hey boss, can we triple our budget to serve two versions of the site, kept in lockstep and feature identical so that visitors don't scream at us, to pick up an extra 1% or 2% of users, who by definition are very finicky?" Yeah, that's not happening.
I've launched dozens of websites over the years, all of them using SSR (or HTML templates as we called them back in the day). I've personally never written a JavaScript-native website. I'm not saying the above because I built a career on writing JS or something. And despite that, I completely understand why devs might refuse to support non-JS browsers. It's a lot of extra work, it means they can't use the "modern" (React launched in 2013) tools they're used to, and all without any compelling financial benefit.
The point of the poster you're responding to is that sites are built JS-first for 98-99% of users, and it takes extra work to make them compatible with "JavaScript should NOT be REQUIRED to view a website", and no one is going to bother doing that work for 1-2% of users.
Yeah... or...... maybe they should just build websites the proper way the first time around, returning plain HTML, perhaps with some JS extras. Any user-entered input needs to be validated again on the backend anyway, so client-side JS is often a waste.
Of note here is that the segment we're talking about is actually an intersection of two very small cohorts; the first, as you note, are people who don't own a television errr disable Javascript, and the second is sites that actually rely on XSLT, of which there are vanishingly few.
> I don't block XSLT because I haven't come across malicious use of XSLT before (though to be fair, I haven't come across much use of XSLT at all)
Recent XSLT parser exploits were literally the reason this whole push to remove it was started, so this change will specifically be helping people in your shoes.
Makes me kind of sad. I started my career back in the days when XHTML and co were lauded as the next big thing. I worked with SOAP and WSDLs. I loved that one can express nearly everything in XML. And namespaces… Then came JSON, and apart from it being easier for humans to read, I wondered why we switched from this one great exchange format to this half-baked one. But maybe I’m just nostalgic. But every time I deal with JSON parsers for type serialization and the question of how to express HashMaps and sets, how to provide type information, etc., I think back to XML and the way that everything was available on board. Looked ugly as hell though :)
JSON is expressive and simple, whereas XML, the great tech saviour of its time, is over-verbose and hard to read (every key/tag repeated twice, an unnecessary attribute-vs-child-tag decision). It's full of overdesigned academic out-of-touch features, with 1000-page specs (namespaces, DTDs, schemas, semweb stuff, etc.).
Simplicity is pretty much the #1 priority in engineering I think, despite many of the voters in this thread apparently disagreeing.
JSON is a sort of Gresham's law ("bad money drives out the good"), but for tech: lazy and forgiving technologies drive out better but stricter ones.
Bad technology seems to make life easier at the beginning, but that's why we now have sloppy websites that are an unorganized mess of different libraries, several MB in size without reason, and an absolute usability and accessibility nightmare.
XHTML and XML were better, as was the idea of separating syntax from presentation, but they were too intelligent for our own good.
That's upsetting. Being able to do templating without using JavaScript was a really cool party trick.
I've used it in an unfinished website where all data was stored in a single XML file and all markup was stored in a single XSLT file. A CGI one-liner then made path info available to XSLT, and routing (multiple pages) was achieved by doing string tests inside of the XSLT template.
XSLT seems like something that could be implemented with WebAssembly (and/or JavaScript) in an extension (if the extension mechanism is made suitable; I think some changes might be helpful to support this and other things), possibly one that is included by default (and can be overridden by the user, like any other extension should be). If it is implemented in that way, it might avoid some of the security issues. (PDF could also be implemented in a similar way.)
(There are also reasons why it might be useful to allow the user to manually install native code extensions, but native code seems to be not helpful for this use, so to improve security it should not be used for this and most other extensions.)
In my opinion this is not “we agree lets remove it”. This is “we agree to explore the idea”.
Google and Freed are using this as a go-ahead because the Mozilla guy pasted a polyfill. However, it is very clearly NOT an endorsement to remove it, even though bad actors are stating so.
> Our position is that it would be good for the long-term health of the web platform and good for user security to remove XSLT, and we support Chromium's effort to find out if it would be web compatible to remove support. If it turns out that it's not possible to remove support, then we think browsers should make an effort to improve the fundamental security properties of XSLT even at the cost of performance.
Freed et al also explicitly chose to ignore user feedback for their own decision and not even try to improve XSLT security issues at the cost of performance.
Yeah, all these billion-dollar corporations that can’t be bothered see it as the only path forward not because of technological or practical issues, but because none of them can be arsed to give a shit and plan it into their budgets.
They’re MBAs who only know how to destroy and consolidate as trained.
I’m a modern developer and I see it as valuable. Why side with the browser teams and ignoring user feedback?
If “modern developers” actually spent time with it, they’d find it valuable. Modern developers are idiots if their constant cry is “just write it in JS”.
No idea what’s inaccurate about this. A billion dollar company that has no problem pivoting otherwise, can’t fund open technology “because budgets” is simply a lie.
If you are using XSLT to make your RSS or atom feeds readable in a browser should somebody click the link you may find this post by Jake Archibald useful: https://jakearchibald.com/2025/making-xml-human-readable-wit... - it provides a JavaScript-based alternative that I believe should work even after Chrome remove this feature.
Ah, so this is removing libxslt. For a minute I thought XSLT processing was provided by libxml2, and I remembered seeing that the Ladybird browser project just added a dependency on libxml2 in their latest progress update https://ladybird.org/newsletter/2025-10-31/.
I'm curious to see what happens going forward with these aging and under-resourced—yet critical—libraries.
I know it makes me an old and I am biased because one of the systems in my career I am most proud of I designed around XSLT transformations, but this is some real bullshit and a clear case why a private company should not be the de facto arbiter of web standards. Have a legacy system that depends on XSLT in the browser? Sucks to be you, one of our PMs decided the cost-benefit just wasn't there so we scrapped it. Take comfort in the fact our team's velocity bumped up for a few weeks.
And yes I am sour about the fact as an American I have to hope the EU does something about this because I know full-well it's not happening here in The Land of the Free.
I think that's sad. XSLT is, in my point of view, a very misunderstood technology. It gets hated on a lot. I wonder if this hate comes from people who actually used and understood it, though. In any case, more often than not it comes from people who in the same sentence endorse JavaScript (which, by any objective measure, is a far more poorly designed language).
IMO XSLT was just too difficult for most webdevs. And IMO this created a political problem where the 'frontend' folks needed to be smarter than the 'backend' generating the XML in the first place.
XSLT might make sense as part of a processing pipeline. But putting it in front of your website was just an unnecessary and inflexible layer, so that's why everyone stopped doing it (except RSS feeds and the like).
I'm not much of a programmer, but XSLT being declarative means that I can knock out a decent-looking template without having to do a whole lot of programming work.
Au contraire: the more you understand and use XSLT, the more you hate it. People who don't understand it and haven't used it don't have enough information and perspective to truly hate it properly. I and many other people don't hate XSLT out of misunderstanding at all: just the opposite.
XSLT is like programming with both hands tied behind your back, or pedaling a bicycle with only one leg. For any non-trivial task, you quickly hit a wall of complexity or impossibility. Then the only way XSLT is useful is via Microsoft's non-standard XSLT extensions that let you call out to JavaScript, at which point you realize it's so much easier and more powerful to simply do what you want directly in JavaScript that there's absolutely no need for XSLT.
I understand XSLT just fine, but it is not the only templating language I understand, so I have something to compare it with. I hate XSLT and vastly prefer JavaScript because I've known and used both of them and other worse and better alternatives (like Zope Page Templates / TAL / METAL / TALES, TurboGears Kid and Genshi, OpenLaszlo, etc).
>My (completely imaginary) impression of the XSLT committee is that there must have been representatives of several different programming languages (Lisp, Prolog, C++, RPG, Brainfuck, etc) sitting around the conference table facing off with each other, and each managed to get a caricature of their language's cliche cool programming technique hammered into XSLT, but without the other context and support it needed to actually be useful. So nobody was happy!
>Then Microsoft came out with MSXML, with an XSL processor that let you include <script> tags in your XSLT documents to do all kinds of magic stuff by dynamically accessing the DOM and performing arbitrary computation (in VBScript, JavaScript, C#, or any IScriptingEngine compatible language). Once you hit a wall with XSLT you could drop down to JavaScript and actually get some work done. But after you got used to manipulating the DOM in JavaScript with XPath, you begin to wonder what you ever needed XSLT for in the first place, and why you don't just write a nice flexible XML transformation library in JavaScript, and forget about XSLT.
You should really try some of the modern alternatives. Don't let Angular and React's templating systems poison you, give Svelte a try!
Even just plain JavaScript is much better and more powerful and easier to use than XSLT. There are many JavaScript libraries to help you with templates. Is there even any such thing as an XSLT library?
Is there some reason you would prefer to use XSLT than JavaScript? You can much more easily get a job yourself or hire developers who know JavaScript. Can you say the same thing for XSLT, and would anyone in their right mind hire somebody who knows XSLT but refuses to use JavaScript?
XSLT is so clumsy and hard to modularize, only good for messy spaghetti monoliths, no good for components and libraries and modules and frameworks, or any form of abstraction.
And then there's debugging. Does a good XSLT debugger even exist? Can it hold a candle to all the off-the-shelf built-in JavaScript debuggers that every browser includes? How do you even debug and trace through your XSLT?
I think the fundamental disconnect here is that you're assuming that I am a good developer. I'm not, I'm a lousy developer. It's not for lack of trying, programming just doesn't click for me in the way that makes learning it an enjoyable process.
XSLT is a good middle ground that gave me just enough rope to do some fun transformations and put up some pages on the internet without having to set up a dev environment or learn a 'real' programming language
Well said. I wrote an XSLT based application back in the early 2000s, and I always imagined the creators of XSLT as a bunch of slavering demented sadists. I hate XSLT with a passion and would take brainfuck over it any day.
Hearing the words Xalan, Xerces, FOP makes me break out in a cold sweat, 20 years later.
I don't use XSLT and don't object to this, but seeing "security" cited made me realize how reflexively distrustful I've become of them using that justification for a given decision. Is this one actually about security? Who knows!
Didn't this come pretty directly after someone found some security vulns? I think the logic was, this is a huge chunk of code that is really complex which almost nobody uses outside of toy examples (and rss feeds). Sure, we fixed the issue just reported, but who knows what else is lurking here, it doesn't seem worth it.
As a general rule, simplifying and removing code is one of the best things you can do for security. Sure you have to balance that with doing useful things. The most secure computer is an unplugged computer but it wouldn't be a very useful one; security is about tradeoffs. There is a reason though that security is almost always cited - to some degree or another, deleting code is always good for security.
> As a general rule, simplifying and removing code is one of the best things you can do for security.
Sure, but that’s not what they’re doing in the big picture. XSLT is a tiny drop in the bucket compared to all the surface area of the niche, non-standard APIs tacked onto Chromium. It’s classic EEE.
My understanding is that, contrary to popular opinion, it was Firefox, not Chrome, that originally pushed for the removal, so I don't know how relevant that is. It seems like all browser vendors are in agreement on XSLT.
That said, XSLT is a bit of a weird API in how it interacts with everything. Not all APIs are equally risky, and I suspect XSLT is pretty high up there on the risk-vs-reward ratio.
There are security issues in the C implementation they currently use. They could remove this without breaking anything by incorporating the JS XSLT polyfill into the browser. But they won't because money.
> Finding and exploiting 20-year-old bugs in web browsers
> Although XSLT in web browsers has been a known attack surface for some time, there are still plenty of bugs to be found in it, when viewing it through the lens of modern vulnerability discovery techniques. In this presentation, we will talk about how we found multiple vulnerabilities in XSLT implementations across all major web browsers. We will showcase vulnerabilities that remained undiscovered for 20+ years, difficult to fix bug classes with many variants as well as instances of less well-known bug classes that break memory safety in unexpected ways. We will show a working exploit against at least one web browser using these bugs.
Unquestionably the right move. From the various posts on HN about this, it's clear that (A) not many people use it (B) it increases security vulnerability surface area (C) the few people who do claim to use it have nothing to back up the claim
The major downside to removing this seems to be that a lot of people LIKE it. But eh, you're welcome to fork Chromium or Firefox.
Chrome and other browsers could virtually completely mitigate the security issues by shipping, in the browser, the polyfill they're suggesting all sites depending on XSLT deploy. By doing so, their XSLT implementation would become no less secure than their JavaScript implementation (and fat chance they'll remove that). The fact that they've rejected doing so is a pretty clear indication that security is just an excuse, IMO.
I wish more people would see this. They know exactly how to sandbox it, they’re telling you how to, they’re even providing and recommending a browser extension to securely restore the functionality they’re removing!
The security argument can be valid motivation for doing something, but is utterly illegitimate as a reason for removing. They want to remove it because they abandoned it many years ago, and it’s a maintenance burden. Not a security burden, they’ve shown exactly how to fix that as part of preparing to remove it!
> When that solution isn't wanted, the polyfill offers another path.
A solution is only a solution if it solves the problem.
This sort of thing, basically a "reverse X/Y problem", is an intellectually dishonest maneuver, where a thing is dubbed a "solution" after just, like, redefining the problem to not include the parts that make it a problem.
The problem is that there is content that works today that will break after the Chrome team follows through on their announced plans of shirking on their responsibility to not break the Web. That's what the problem is. Any "solution" that involves people having to go around un-breaking things that the web browser broke is not a solution to the problem that the Chrome team's actions call for people to go around un-breaking things that the web browser broke.
> As mentioned previously, the RSS/Atom XML feed can be augmented with one line, <script src="xslt-polyfill.min.js" xmlns="http://www.w3.org/1999/xhtml"></script>, which will maintain the existing behavior of XSLT-based transformation to HTML.
Oh, yeah? It's that easy? So the Chrome team is going to ship a solution where when it encounters un-un-fucked content that depends on XSLT, Chrome will transparently fix it up as if someone had injected this polyfill import into the page, right? Or is this another instance where well-paid engineers on the Chrome team who elected to accept the responsibility of maintaining the stability of the Web have decided that they like the getting-paid part but don't like the maintaining-the-stability-of-the-Web part and are talking out of both sides of their mouths?
> So the Chrome team is going to ship a solution where when it encounters un-un-fucked content that depends on XSLT, Chrome will transparently fix it up as if someone had injected this polyfill import into the page, right?
As with most things on the web, the answer is "They will if it breaks a website that a critical mass of users care about."
I recently had an interesting chat with Liam Quin (who was on W3C's XML team) about XML and CDATA on Facebook, where he revealed some surprising history!
Liam Quin in his award-winning weirdest hat, also Microsoft's Matthew Fuchs' talk on achieving extensibility and reuse for XSLT 2.0 stylesheets, and Stephan Kesper's simple proof that XSLT and XQuery are Turing complete using μ-recursive functions, and presentations about other cool stuff like Relax/NG:
How do we communicate the idea that declarative markup is a good idea? Declarative markup is where you identify what is there, not what it does. This is a title, not, make this big and bold. This is a part number, not, make this blink when you click it - sure, you can do that to part numbers, but don't encode your aircraft manual that way.
But this idea is hard to grasp, for the same reason that WYSIAYG word processors (the A stands for All, What you see is all you get) took over from descriptive formatting in a lot of cases.
For an internal memo, for an insurance letter to a client, how much matters? Well, the insurance company has to be able to search the letters for specific information for 10, 20, 40, 100 years. What word processor did you use 40 years ago? Wordstar? Magic Wand? Ventura?
Liam Quin: hahaha i actually opposed the inclusion of CDATA sections when we were designing XML (by taking bits we wanted from SGML), but they were already in use by the people writing the XML spec! But now you’ve given me a reason to want to keep them. The weird syntax is because SGML supported more keywords, not only CDATA, but they were a security fail.
Don Hopkins: There was a REASON for the <![SYNTAX[ ]]> ?!?!? I thought it was just some kind of tribal artistic expressionism, like lexical performance art!
At TomTom we were using xulrunner for the cross platform content management tool TomTom Home, and XUL abused external entities for internationalizing user interface text. That was icky!
For all those years programming OpenLaszlo in XML with <![CDATA[ JavaScript code sections ]]>, my fingers learned how to type that really fast, yet I never once wondered what the fuck ADATA or BDATA might be, and why not even DDATA or ZDATA? What other kinds of data are there anyway? It sounds kind of like quantum mechanics, where you just have to shrug and not question what the words mean, because it's just all arbitrarily weird.
Liam Quin: haha it’s been 30 years, but, there’s CDATA (character data), replaceable character data (RCDATA) in which `é` entity definitions are recognised but not `<`, IGNORE and INCLUDE, and the bizarre TEMP which wraps part of a document that might need to be removed later. After `<!` you could also have comments, <!-- .... --> for example (all the delimiters in SGML could be changed).
Don Hopkins: What is James Clark up to these days? I loved his work on Relax/NG, and that Dr. Dobb's interview "The Triumph of Simplicity".
Note: James Clark is arguably the single most important engineer in XML history:
- Lead developer of SGMLtools, expat, and Jade/DSSSL
- Co-editor of the XML 1.0 specification
- Designer of XSLT 1.0 and XPath 1.0
- Creator of Relax NG, one of the most elegant schema languages ever devised
He also wrote the reference XSLT implementation XT, used in early browsers and toolchains before libxslt dominated.
James Clark’s epic 2001 Doctor Dobb's Journal "A Triumph of Simplicity: James Clark on Markup Languages and XML" interview captures his minimalist design philosophy and his critique of standards and committee-driven complexity (which later infected XSLT 2.0).
It touches on separation of concerns, simplicity as survival, a standard isn't one implementation, balance of pragmatism and purity, human-scale simplicity, uniform data modeling, pluralism over universality, type systems and safety, committee pathology, and W3C -vs- ISO culture.
He explains why XML is designed the way it is, and reframes the XSLT argument: his own philosophy shows that when a transformation language stops being simple, it loses the very quality that made XML succeed.
> Didn't this effort start with Mozilla and not Google?
Maybe round one of it like ten years ago did? From what I understand, it's a Google employee who opened the "Hey, I want to get rid of this and have no plans to provide a zero-effort-for-users replacement." Github Issue a few months back.
> It was opened by a Chrome engineer after at least two meetings where a Mozilla engineer raised the topic, and where there was apparently vendor support for it.
I don't see any evidence of that claim from the materials I have available to me. [0] is the Github Issue I mentioned. [1] is the WHATNOT meeting notes linked to from that GH Issue... though I have no idea who smaug is.
Blame Apple and Mozilla, too, then. They all agreed to remove it.
They all agreed because XSLT is extremely unpopular and worse than JS in every way. Performance/bloat? Worse. Security? MUCH worse. Language design? Unimaginably worse.
EDIT: I wrote thousands of lines of XSLT circa 2005. I'm grateful that I'll never do that again.
This is only repeated by people who have never used it.
XSLT is still a great way of easily transforming xml-like documents. It's orders of magnitude more concise than transforming using Javascript or other general programming languages. And people are actively re-inventing XSLT for JSON (see `jq`).
I used to use XSLT a lot, though it was a while ago.
You can use Javascript to get the same effect and, indeed, write your transforms in much the same style as XSLT. Javascript has xpath (still). You have a choice of template language but JSX is common and convenient. A function for applying XSLT-style matching rules for an XSLT push style of transform is only a few lines of code.
Do you have a particular example where you think Javascript might be more verbose than XSLT?
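As a rough illustration of that claim, here is a hypothetical mini push-style transformer in plain JavaScript (all names invented for this sketch): rules with `match` predicates stand in for `<xsl:template match="...">`, and recursing into children plays the role of `<xsl:apply-templates/>`.

```javascript
// Rules are tried in order; the first whose match() accepts a node
// renders it. Nodes with no matching rule just recurse into children,
// mirroring XSLT's default template behavior.
function makeTransformer(rules) {
  const applyChildren = (node) => (node.children || []).map(apply).join("");
  const apply = (node) => {
    const rule = rules.find((r) => r.match(node));
    return rule ? rule.render(node, applyChildren) : applyChildren(node);
  };
  return apply;
}

// Example tree standing in for a parsed XML document.
const doc = {
  tag: "article",
  children: [
    { tag: "title", text: "Hello" },
    { tag: "p", text: "World" },
  ],
};

const transform = makeTransformer([
  { match: (n) => n.tag === "title", render: (n) => `<h1>${n.text}</h1>` },
  { match: (n) => n.tag === "p", render: (n) => `<p>${n.text}</p>` },
]);

console.log(transform(doc)); // → <h1>Hello</h1><p>World</p>
```

This is deliberately minimal; a real version would escape text and support XPath-style patterns, but it shows how little code the push style itself requires.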
I actually do have to work with raw XML and XSLTs every once in a while for a java-based CMS and holy hell, it's nasty.
Java in general... Maven, trying to implement extremely simple things in Gradle (e.g. only execute a specific Thing as part of the pipeline when certain conditions are met) is an utter headache to do in the pom.xml because XML is not a programming language!
It is an unfortunate fact about our industry that all build tools suck. Tell me what your favorite build tool is and I can point at hundreds of HN threads ripping it to shreds. Maybe it's NPM? Cue the screams...
I agree though, "XML is not a programming language" and attempts to use it that way have produced poor results. You should have seen the `ant` era! But this is broader than XML - look at pretty much every popular CI system for "YAML is not a programming language".
That doesn't mean that XML isn't useful. Just not as a programming language.
But, that's what XSL is! XSL is a Turing-complete programming language in XML for processing XML documents. Being in XML is a big part of what makes XSL so awful to write.
XSL may be Turing-complete but it's not a programming language and wasn't intended to be one. It's a declarative way to transform XML. When used as such I never found it awful to write... it's certainly much easier than doing the equivalent in general purpose programming languages.
Maybe by analogy: There are type systems that are Turing complete. People sometimes abuse them to humorous effect to write whole programs (famously, C++ templates). That doesn't mean that type systems are bad.
> It is an unfortunate fact about our industry that all build tools suck. Tell me what your favorite build tool is and I can point at hundreds of HN threads ripping it to shreds. Maybe it's NPM? Cue the screams...
npm isn't even a build tool, it's a package manager and at that it's actually gotten quite decent - the fact that the JS ecosystem at large doesn't give a fuck about respecting semantic versioning or keeps reinventing the wheel or that NodeJS / JavaScript itself lacks a decent standard library aren't faults of npm ;)
Maven and Gradle in contrast are one-stop-shops, both build orchestrators and dependency managers. As for ant, oh hell yes I'm aware of that. The most horrid build system I encountered in my decade worth of tenure as "the guy who can figure out pretty much any nuclear submarine project (aka, only surfaces every few years after everyone working on it departed)" involved Gradle, which then orchestrated Maven and Ant, oh and the project was built on a Jenkins that was half DSL, half clicked together in the web UI, and the runner that executed the builds was a manually set up, "organically grown" server. That one was a holy damn mess to understand, unwind, clean up and migrate to Gitlab.
> look at pretty much every popular CI system for "YAML is not a programming language".
Oh yes... I only had the misfortune of having to code for Github Actions once in my life time, it's utter fucking madness compared to GitLab.
Comparing a single-purpose declarative language that is not even really Turing-complete with all the ugly hacks needed to make DOM/JS reasonably secure does not make any sense.
Exactly what you can abuse in XSLT (without non-standard extensions) in order to do anything security relevant? (DoS by infinite recursion or memory exhaustion does not count, you can do the same in JS...)
> Although XSLT in web browsers has been a known attack surface for some time, there are still plenty of bugs to be found in it, when viewing it through the lens of modern vulnerability discovery techniques. In this presentation, we will talk about how we found multiple vulnerabilities in XSLT implementations across all major web browsers. We will showcase vulnerabilities that remained undiscovered for 20+ years, difficult to fix bug classes with many variants as well as instances of less well-known bug classes that break memory safety in unexpected ways. We will show a working exploit against at least one web browser using these bugs.
> XSLT is extremely unpopular and worse than JS in every way
This isn't a quorum of folks torpedoing a proposed standard. This is a decades-old stable spec and an established part of the Web platform, and welching on their end of the deal will break things, contra "Don't break the Web".
They did not agree to remove it. This is a spun lie from the public posts I can see. They agreed to explore removing it but preferred to keep it for good reasons.
Only Google is pushing forward and twisting that message.
> They did not agree to remove it. This is a spun lie from the public posts I can see. They agreed to explore removing it but preferred to keep it for good reasons.
Mozilla:
> Our position is that it would be good for the long-term health of the web platform and good for user security to remove XSLT, and we support Chromium's effort to find out if it would be web compatible to remove support.
> WebKit is cautiously supportive. We'd probably wait for one implementation to fully remove support, though if there's a known list of origins that participate in a reverse origin trial we could perhaps participate sooner.
"The reality is that for all of the work that we've put into HTML, and CSS, and the DOM, it has fundamentally utterly failed to deliver on its promise.
It's even worse than that, actually, because all of the things we've built aren't just not doing what we want, they're holding developers back. People build their applications on frameworks that _abstract out_ all the APIs we build for browsers, and _even with those frameworks_ developers are hamstrung by weird limitations of the web."
Nice find — interesting to see browsers moving to drop XSLT support.
I used XSLT once for a tiny site and it felt like magic—templating without JavaScript was freeing.
But maybe it’s just niche now, and browser vendors see more cost than payoff.
Curious: have any of you used XSLT in production lately?
I lead a team that manages trade settlements for hedge funds; data is exported from our systems as XML and then transformed via XSLT into whatever format the prime brokers require.
All the transforms are maintained by non-developers, business analysts mainly. Because the language is so simple we don't need to give them much training, just get IntelliJ installed on their machine, show them a few samples and let them work away.
Although it's sad to see an interesting feature go, they're not wrong about security. Keeping the attack surface small matters all the more when the implementation is maintained by one guy in Nebraska who doesn't maintain it any more.
No, XSLT isn't required for the open web. Everything you can do with XSLT, you can also do without XSLT. It's interesting technology, but not essential.
Yes, this breaks compatibility with all the 5 websites that use it.
Good, XSLT was crap. I wrote an RSS feed XSLT template. Worst dev experience ever. No one is/was using XSLT. Removing unused code is a win for browsers. Every anti bloat HNer should be cheering
The first few times you use it, XSLT is insane. But once something clicks, you figure out the kinds of things it’s good for.
I am not really a functional programming guy. But XSLT is a really cool application of functional programming for data munging, and I wouldn’t have believed it if I hadn’t used it enough for it to click.
XSLT's matching rules allow a 'push' style of transform that's really neat. But you can actually do that with any programming language such as Javascript.
Right. I didn't use it much on the client side so I am not feeling this particular loss so keenly.
But server side, many years ago I built an entire CMS with pretty arbitrary markup regions that a designer could declare (divs/TDs/spans with custom attributes basically) in XSLT (Sablotron!) with the Perl binding and a customised build of HTML Tidy, wrapped up in an Apache RewriteRule.
So designers could do their thing with dreamweaver or golive, pretty arbitrarily mark up an area that they wanted to be customisable, and my CMS would show edit markers in those locations that popped up a database-backed textarea in a popup.
What started off really simple ended up using Sablotron's URL schemes to allow a main HTML file to be a master template for sub-page templates, merge in some dynamic functionality etc.
And the thing would either work or it wouldn't (if the HTML couldn't be tidied, which was easy enough to catch).
The Perl around the outside changed very rarely; the XSLT stylesheet was fast and evolved quite a lot.
Actually a transformation system can reduce bloat, as people don't have to write their own crappy JavaScript versions of it.
Being XML, the syntax is a bit convoluted, but behind that is a good functional system (functional as in functional programming, not as in merely functioning) which can be used for templating etc.
The XML made it a bit hard to get started and anti-XML-spirit reduced motivation to get into it, but once you know it, it beats most bloaty JavaScript stuff in that realm by a lot.
I'm always puzzled by statements like this. I'm not much of a programmer and I wrote a basic XSLT document to transform rss.xml into HTML in a couple of hours. I didn't find it very hard at all (anecdotes are not data, etc)
Your response is like seeing the cops going to the wrong house to kick in your neighbors door, breaking their ornaments in their entry way, and then saying to yourself, "Good. I hate yellow, and would never have any of that tacky shit in my house."
As your first sentence of your comment indicates, the fact that it's supported and there for people to use doesn't (and hasn't) result in you being forced to use it in your projects.
Yes, but software, and especially browser, complexity has ballooned enormously over the years. And while XSLT probably plays a tiny part in that, it's likely embedded in every Electron app that could do in 1MB what it takes 500 MB to do, makes it incrementally harder to build and maintain a competing browser, etc., etc. It's not zero cost.
I do tend to support backwards compatibility over constant updates and breakage, and needless hoops to jump through as e.g. Apple often puts its developers through. But having grown up and worked in the overexuberant XML-for-everything, semantic-web 1000-page specification, OOP AbstractFactoryTemplateManagerFactory era, I'm glad to put some of that behind us.
Remove crappy JS APIs and other web-tech first before deprecating XSLT - which is a true-blue public standard. For folks who don't enable JS and XML data, XSLT is a life-saver.
If we're talking about removing things for security reasons, the ticking time bomb that is WebUSB seems top of the list: it's dangerous, not actually a standard (it is Chrome-only), and yet a bunch of websites treat it as a good reason to be Chrome-only.
I realize that not that many feeds are actually doing this, but that's because feed authors are tech-savvy and know what to do with an RSS/Atom link.
But someone who hasn't seen/used an RSS reader will see a wall of plain-text gibberish (or a prompt to download the wall of gibberish).
XSLT is currently the only way to make feeds into something that can still be viewed.
I think RSS/Atom are key technologies for the open web, and discovery is extremely important. Cancelling XSLT is going in the wrong direction (IMHO).
I've done a bunch of things to try to get people to use XSLT in their feeds: https://www.rss.style/
You can see it in action on an RSS feed here (served as real XML, not HTML: do view/source): https://www.fileformat.info/news/rss.xml
Not to downplay what you think is important, but I think it's pretty important that governments and public bodies use XSLT.
https://www.congress.gov/117/bills/hr3617/BILLS-117hr3617ih....
https://www.govinfo.gov/content/pkg/BILLS-119hr400ih/xml/BIL...
https://www.weather.gov/xml/current_obs/KABE.xml
https://www.europarl.europa.eu/politicalparties/index_en.xml
https://apps.tga.gov.au/downloads/sequence-description.xml
https://cwfis.cfs.nrcan.gc.ca/downloads/fwi_obs/WeatherStati...
https://converters.eionet.europa.eu/xmlfile/EPRTR_MethodType...
They don't put ads on their sites, so I'm not surprised Google doesn't give a fuck about them...
The page is XML but styled with XSLT.
Isn't this kind of an argument for dropping it? Yeah it would be great if it was in use but even the people who are clicking and providing RSS feeds don't seem to care that much.
This is actually a feature of Orion[0], and among the reasons why I believe it to be one of the most (power) user-oriented browsers in active development.
It's such a basic thing that there's really no good reason to remove the feature outright (as mainstream browsers have), especially when the cited reason is to "reduce clutter" which has been added back tenfold with garbage like chatbots and shopping assistants.
[0]: https://kagi.com/orion/
For RSS/Atom feeds presented as links in a site (for convenience to users), developers can always offer a simple preview for the feed output using: https://feedreader.xyz/
Just URL-encode the feed like so: https://feedreader.xyz/?url=https%3A%2F%2Fwww.theverge.com%2...
...and you get a nice preview that's human readable.
XSLT isn't going anywhere: hardwiring into the browser an implementation that's known to be insecure and is basically unmaintained is what's going away. The people doing the wailing/rending/gnashing about the removal of libxslt needed to step up to fix and maintain it.
It seems like something an extension ought to be capable of, and if not, fix the extension API so it can. In firefox I think it would be a full-blown plugin, which is a lower-level thing than an extension, but I don't know whether Chromium even has a concept of such a thing.
Not having it available from the browser really reduces the ability to use it in many cases, and lots of the nonbrowser XSLT ecosystem relies on the same insecure, unmaintained implementation. There is at least one major alternative (Saxon), and if browser support was switching backing implementation rather than just ending support, “XSLT isn’t going anywhere” would be a more natural conclusion, but that’s not, for whatever reason, the case.
My point was that the decision to remove XSLT support from browsers rather than replacing the insecure, unmaintained implementation with a secure, maintained implementation is an indicator opposed to the claim "XSLT isn’t going anywhere”. I am not arguing anything at all about what browser vendors should do.
No, I'm not (and I keep saying this explicitly) saying that browsers should or should not do anything, or be responsible for anything. I’m not making a normative argument, at all.
I am stating, descriptively, that browser vendors choosing to remove XSLT functionality rather than repairing it by using an alternative implementation is very directly contrary to the claim made upthread that “XSLT isn’t going anywhere”. It is being removed from the most popular application platform in existence, with developers being required to bring their own implementation for what was previously functionality supported by the platform. I am not saying that this is good or bad or that anyone should or should not do anything differently or making any argument about where responsibility for anything related to this lies.
That's about as new-user-hostile as I can imagine.
As others have pointed out, there are other options for styling XML that work well enough in practice. You can also do content negotiation on the server, so that a browser requesting an html document will get the human-readable version, while any feed reader will be sent the XML version. (If you render the html page with XSLT, you can even take advantage of better XSLT implementations where you don't need to work around bugs and cross-platform jank.) Or you can rely on `link` tags, letting users submit your homepage to their feed reader, and having the feed reader figure out where everything is.
There might even be a MIME type for RSS feeds, such that if you open an RSS feed in your browser, it automatically figures out the correct application (i.e. your preferred RSS reader) to open that feed in. But I've not seen that actually implemented anywhere, which is a shame, because that seems like by far the best option for user experience.
Wasn't it similar with openssl 13+ years ago? Few volunteer maintainers, and only after a couple of major vulnerabilities money got thrown at that project?
I'm sure there's more and that's why the famous xkcd comic is always of relevance.
XSLT as a feature is being removed from web browsers, which is pretty significant. Sure it can still be used in standalone tools and libraries, but having it in web browsers enabled a lot of functionality people have been relying on since the dawn of the web.
> hardwiring into the browser an implementation that's known to be insecure and is basically unmaintained is what's going away
So why not switch to a better maintained and more secure implementation? Firefox uses TransforMiix, which I haven't seen mentioned in any of Google's posts on the topic. I can't comment on whether it's an improvement, but it's certainly an option.
> The people doing the wailing/rending/gnashing about the removal of libxslt needed to step up to fix and maintain it.
Really? How about a trillion dollar corporation steps up to sponsor the lone maintainer who has been doing a thankless job for decades? Or directly takes over maintenance?
They certainly have enough resources to maintain a core web library and fix all the security issues if they wanted to. The fact they're deciding to remove the feature instead is a sign that they simply don't.
And I don't buy the excuse that XSLT is a niche feature. Their HTML bastardization AMP probably has even less users, and they're happily maintaining that abomination.
> It seems like something an extension ought to be capable of
I seriously doubt an extension implemented with the restricted MV3 API could do everything XSLT was used for.
> and if not, fix the extension API so it can.
Who? Try proposing a new extension API to a platform controlled by mega-corporations, and see how that goes.
"XSLT is currently the only way to make feeds into something that can still be viewed."
You could use content negotiation just fine. I just hit my personal rss.xml file, and the browser sent this as the Accept header:
(except it has no newline, which I added for HN). You can easily ship out an HTML rendering of an RSS file based on this. You can have your server render an XSLT if you must. You can have your server send out some XSLT implemented in JS that will come along at some point.
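A minimal sketch of that negotiation step, as a pure function (the function name and the exact media-type handling are illustrative, not from any comment above — a real server would plug this into its request handler):

```javascript
// Decide which representation of a feed URL to serve, based on the Accept
// header. Browsers send text/html near the front of their Accept header;
// feed readers typically ask for XML media types or just */*.
function negotiateFeedFormat(acceptHeader) {
  // Parse "type/subtype;q=0.8" entries into { type, q } records.
  const entries = (acceptHeader || "").split(",").map((part) => {
    const [type, ...params] = part.trim().split(";");
    const qParam = params.find((p) => p.trim().startsWith("q="));
    const q = qParam ? parseFloat(qParam.trim().slice(2)) : 1.0;
    return { type: type.trim(), q };
  });
  // If the client explicitly accepts text/html, send the human-readable page;
  // otherwise fall back to the raw XML feed.
  const wantsHtml = entries.some((e) => e.type === "text/html" && e.q > 0);
  return wantsHtml ? "html" : "xml";
}
```

So a browser sending a typical `text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8` header gets HTML, while a feed reader sending `application/rss+xml` gets the XML.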
To a first approximation, nobody cares enough to use content negotiation any more than anyone cares about providing XML stylesheets. The tech isn't the problem, the not caring is... and the not caring isn't actually that big a problem either. It's been that way for a long time and we aren't actually all that bothered about it. It's just a "wouldn't it be nice" that comes up on those rare occasions like this when it's the topic of conversation and doesn't cross anyone's mind otherwise.
I've not been in the RSS world very much. I don't use news readers. And even I have seen a stylized RSS in the wild.
Our individual experiences are of course anecdotal, I'm just surprised at how different they are given your background.
Even RSS wizards would benefit from looking at a human-readable version instead of raw XML.
I ended up writing a feed analyzer that you can try on your feed: https://www.rss.style/feed-analyzer.html
Once upon a time, nice in-browser rendering of RSS/Atom feeds complete with search and sorting was a headliner feature of Safari.
https://www.askdavetaylor.com/how_do_i_subscribe_to_rss_feed...
I think every page with an RSS feed should have a link to the feed in the html body. And it should be friendly to people who are not RSS wizards.
You can still style them with CSS if you want. I don't really see the point. RSS is for machines to read, not humans.
Maybe it's more for people who have no idea what RSS is and click on the intriguing icon. If they weren't greeted with a load of what seems like nonsense for nerds there could have been broader adoption of RSS.
Why? Wouldn't they just see a different view of the same website that had that intriguing icon and go "ok, so what?"
If they don't know what an RSS feed is, seeing a stylized version isn't really going to help them understand, imho.
So that excludes you from the "someone who hasn't seen/used an RSS reader" demographic mentioned in the comment you are replying to.
That said, it's also pretty sad. I remember back in the 2000s writing purely XML websites with stylesheets for display, and XML+XSLT is more powerful, more rigorous, and arguably more performant now in the average case than JSON + React + vast amounts of random collated libraries which has become the Web "standard".
But I guess LLMs aren't great at generating XSLT, so it's unlikely to gain back that market in the near future. It was a good standard (though not without flaws), I hope the people who designed it are still proud of the influence it did have.
The reason CSS works on XML the same as HTML is because CSS is not styling tags. It is providing visual data properties to nodes in the DOM.
Yup, "been there, done that" - at the time I think we were creating reports in SQL Server 2000, hooked up behind IIS.
It feels this is being deprecated and removed because it's gone out of fashion, rather than because it's actually measurably worse than whatever's-in-fashion-today... (eg React/Node/<whatever>)
Awesome! I made a blog using XML+XSLT, back in high school. It was worth it just to see the flabbergasted look on my friends' faces when I told them to view the source code of the page, and it was just XML with no visible HTML or CSS[0].
[0] https://www.w3schools.com/xml/simplexsl.xml - example XML+XSLT page from w3schools
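For reference, the mechanism such pages rely on is a single `xml-stylesheet` processing instruction at the top of the XML document (the href and feed content below are made up for illustration):

```xml
<?xml version="1.0" encoding="UTF-8"?>
<?xml-stylesheet type="text/xsl" href="/style.xsl"?>
<rss version="2.0">
  <channel>
    <title>Example feed</title>
    <link>https://example.com/</link>
  </channel>
</rss>
```

The browser fetches `/style.xsl`, runs the transform, and renders the resulting HTML — the served document itself stays pure XML.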
People in glass houses shouldn't throw stones.
[1] https://github.com/chromium/chromium/commits/main/?after=c5a...
[2] https://github.com/chromium/chromium/blob/main/DEPS
[3] https://www.cvedetails.com/product/15031/Google-Chrome.html?...
Their whole browser is made up of unsafe languages and their attempt to sort of make c++ safer has yet to produce a usable proof of concept compiler. This is a fat middle finger in the face of all the people's free work they grabbed to collect billions for their investors.
It is remarkable the anti-trust case went as it did.
Chromium is open source and free (both as in beer and speech). The license says they've made no future commitments and made no warrants.
Google signed up to give something away for free to people who want to use it. From the very first version, it wasn't perfectly compatible with other web browsers (which mostly did IE quirks things). If you don't want to use it, because it doesn't maintain enough backwards compatibility... Then don't.
IIRC, lack of IE compatibility is fundamentally different, because the IE-specific stuff they didn't implement was never part of the open web standards, but rather stuff Microsoft unilaterally chose to add.
And you know what? That's completely fine. Open source doesn't mean something lives forever.
- Browsers have only supported XSLT 1.0, for decades, which is the stone age of templating. XSLT 3.0 is much nicer, but there’s no browser support for it.
- There are only two cross-platform libraries built for it: libxslt and Saxon. Saxon seriously lacks ergonomics to say the least.
One option for Google as a trillion dollar company would be to drive an initiative for “better XSLT” and write a Rust-based replacement for libxslt with maybe XSLT 3.0 support, but killing it is more on-brand I guess.
I also dislike the message “just use this [huge framework everyone uses]”. Browser-based template rendering without loading a framework into the page has been an invaluable boon. It will be missed.
I think being able to do client-side templating without JS is an important feature and I hope that since browser vendors are removing XSLT they will add some kind of client-side templating to replace it.
The browser technologies that people actually use, like JavaScript, have active attention to security issues, decades of learnings baked into the protocol, and even attention from legislators.
You imagine that XSLT is more secure but it’s not. It’s never been. Even pure XSLT is quite capable of Turing-complete tomfoolery, and from the beginning there were loopholes to introduce unsafe code.
As they say, security is not a product, it’s a process. The process we have for existing browser technologies is better. That process is better because more people use it.
But even if we were to try to consider the technologies in isolation, and imagine a timeline where things were different? I doubt whether XML+XSLT is the superior platform for security. If it had won, we’d just have a different nightmare of intermingled content and processing. Maybe more stuff being done client-side. I expect that browser and OS manufacturers would be warping content to insert their own ads.
The percentage of visitors who block JS is extremely small. Many of those visits are actually bots and scrapers that don’t interpret JS. Of the real users who block JS, most of them will enable JS for any website they actually want to visit if it’s necessary.
What I’m trying to say is that making any product decision for the extremely small (but vocal) minority of users who block JS is not a good product choice. I’m sorry it doesn’t work for your use case, but having the entire browser ecosystem cater to JS-blocking legitimate users wouldn’t make any sense.
To put that in context, about 6 percent of US homes have no internet access at all. The “I turn off JS” crowd is at least 3x smaller than the crowd with no access at all.
The JS ship sailed years ago. You can turn it off but a bunch of things simply will not work and no amount of insisting that it would not be required will change that.
To quote someone who lived before me: don't accept the things you cannot change. Change the things you cannot accept.
And the no-JS ship has not sailed. Government websites require accessibility, and at least in the UK, do not rely on JS.
I’m not saying change is not possible. I’m saying the change you propose is misguided. I do not believe the entire world should abandon JS to accommodate your unusual preferences nor should everyone be obliged to build two versions of their site, one for the masses and one for those with JS turned off.
Yes, JS is overused. But JS also brings significant real value to the web. JS is what has allowed websites to replace desktop apps in many cases.
Exactly. JS should be used to make apps. A blog is not an app. Your average blog should have 0 lines of JS. Every time I see a blog or a news article whose content doesn't load because I have JS disabled, I strongly reconsider whether it's worth my time to read or not.
> JS is what has allowed websites to replace desktop apps in many cases.
Horribly at that, with poorer accessibility features, worse latency, abused visual style that doesn't match the host operating system, unusable during times of net outages, etc, etc.
I’m curious. Do Google Maps, YouTube, etc even work with JS off?
> This was its original intent.
Original intent is borderline irrelevant. What matters is how it is actually used and what value it brings.
> Horribly at that
I disagree. You say you turn JS off for security but JS has made billions of people more secure by creating a sandbox for these random apps to run in. I can load up a random web app and have high confidence that it can’t muck with my computer. I can’t do the same with random desktop apps.
is "every website now expects to run arbitrary code on the client's computer" really a more secure state of affairs? after high profile hardware vulnerabilities exploitable even from within sandboxed js?
from how many unique distributors did the average person run random untrusted apps that required sandboxing before and after this became the normal way to deliver a purely informational website and also basically everything started happening online?
I used to help friends and family disinfect their PCs from all the malware they’d unintentionally installed.
I use KDE Marble (OpenStreetMap) and Invidious. They work fine.
> Original intent is borderline irrelevant. What matters is how it is actually used and what value it brings.
And that's why webshit is webshit.
> I can’t do the same with random desktop apps.
I can, and besides the point, why should anyone run random desktop apps? (Rhetorical question, they shouldn't.) I don't run code that I don't trust. And I don't trust code that I can't run for any purpose, read, study, edit, or share. I enforce this by running a totally-free (libre) operating system, booted with a totally-free BIOS, and installing and using totally-free software.
So no. Some major websites don’t actually work for you.
> And that's why webshit is webshit.
I don’t understand this statement. Webshit is webshit because the platform grew beyond basic html docs? At some point this just feels like hating on change. The web grew beyond static html just like Unix grew beyond terminals.
> I don't run code that I don't trust. And I don't trust code that I can't run for any purpose, read, study, edit, or share. I enforce this by running a totally-free (libre) operating system, booted with a totally-free BIOS, and installing and using totally-free software.
If this is the archetype of the person who turns off JS then I would bet the real percentage is way less than 1%.
And frankly, from an economic POV, I can't blame them. Imagine a company that writes a React-based website. (And again, I'm not weighing in on the goodness or badness of that.) Depending on how they implemented it, supporting a non-JS version may literally require a second, parallel version of the site. And for what, to cater to 1-2% of users? "Hey boss, can we triple our budget to serve two versions of the site, kept in lockstep and feature identical so that visitors don't scream at us, to pick up an extra 1% or 2% of users, who by definition are very finicky?" Yeah, that's not happening.
I've launched dozens of websites over the years, all of them using SSR (or HTML templates as we called them back in the day). I've personally never written a JavaScript-native website. I'm not saying the above because I built a career on writing JS or something. And despite that, I completely understand why devs might refuse to support non-JS browsers. It's a lot of extra work, it means they can't use the "modern" (React launched in 2013) tools they're use to, and all without any compelling financial benefit.
Recent XSLT parser exploits were literally the reason this whole push to remove it was started, so this change will specifically be helping people in your shoes.
Simplicity is pretty much the #1 priority in engineering I think, despite many of the voters in this thread apparently disagreeing.
I'll take this:
Over this (maybe exaggerated but not far off in some circles): The computer will parse it 100x faster too :)

Bad technology seems to make life easier at the beginning, but that's why we now have sloppy websites that are an unorganized mess of different libraries, several MB in size without reason, and an absolute usability and accessibility nightmare.
XHTML and XML were better, as was the idea of separating syntax from presentation, but they were too intelligent for our own good.
I've used it in an unfinished website where all data was stored in a single XML file and all markup was stored in a single XSLT file. A CGI one-liner then made path info available to XSLT, and routing (multiple pages) was achieved by doing string tests inside of the XSLT template.
(There are also reasons why it might be useful to allow the user to manually install native code extensions, but native code seems to be not helpful for this use, so to improve security it should not be used for this and most other extensions.)
> The Firefox[^0] and WebKit[^1] projects have also indicated plans to remove XSLT from their browser engines.
[^0]: https://github.com/mozilla/standards-positions/issues/1287#i...
[^1]: https://github.com/whatwg/html/issues/11523#issuecomment-314...
Google and Freed are using this as a go-ahead because the Mozilla guy pasted a polyfill. However, it is very clearly NOT an endorsement to remove it, even though bad actors are stating so.
> Our position is that it would be good for the long-term health of the web platform and good for user security to remove XSLT, and we support Chromium's effort to find out if it would be web compatible to remove support. If it turns out that it's not possible to remove support, then we think browsers should make an effort to improve the fundamental security properties of XSLT even at the cost of performance.
Freed et al. also explicitly chose to ignore user feedback for their own decision, and did not even try to improve XSLT's security issues at the cost of performance.
They’re MBAs who only know how to destroy and consolidate as trained.
XSLT in non-browser contexts is absolutely valuable.
If “modern developers” actually spent time with it, they’d find it valuable. Modern developers are idiots if their constant cry is “just write it in JS”.
No idea what’s inaccurate about this. A billion dollar company that has no problem pivoting otherwise, can’t fund open technology “because budgets” is simply a lie.
You can't trim the space of "users" to just "people who already adopted the technology" in the context of the cost of browser support.
- Chrome 164 (Aug 17, 2027): Origin Trial and Enterprise Policy stop functioning. XSLT is disabled for all users.**
Not the first time I've seen Google pages use asterisks that lack the corresponding footnotes.
Data and its visualisation should be strictly separate, and not require an additional engine in your environment of choice.
I'm curious to see what happens going forward with these aging and under-resourced—yet critical—libraries.
They went from a clean Scheme-based standard to a human-unreadable "use a GUI tool" syntax.
Syntext Serna was a WYSIWYG XML FO rendering editor: throw in XSL stylesheets and some input XML, and WYSIWYG-edit away.
But XSLT didn't take off, nor did derived products.
And yes I am sour about the fact as an American I have to hope the EU does something about this because I know full-well it's not happening here in The Land of the Free.
XSLT might make sense as part of a processing pipeline. But putting it on front of your website was just an unnecessary and inflexible layer, so that's why everyone stopped doing it. (except rss feeds and etc.)
XSLT is like programming with both hands tied behind your back, or pedaling a bicycle with only one leg. For any non-trivial task, you quickly hit a wall of complexity or impossibility. The only way XSLT stays useful is if you use Microsoft's non-standard XSLT extensions that let you call out to JavaScript, and then you realize it's so much easier and more powerful to simply do what you want directly in JavaScript that there's absolutely no need for XSLT.
I understand XSLT just fine, but it is not the only templating language I understand, so I have something to compare it with. I hate XSLT and vastly prefer JavaScript because I've known and used both of them and other worse and better alternatives (like Zope Page Templates / TAL / METAL / TALES, TurboGears Kid and Genshi, OpenLaszlo, etc).
https://news.ycombinator.com/item?id=44396067
https://news.ycombinator.com/item?id=22264623
https://news.ycombinator.com/item?id=28878913
https://news.ycombinator.com/item?id=16227249
>My (completely imaginary) impression of the XSLT committee is that there must have been representatives of several different programming languages (Lisp, Prolog, C++, RPG, Brainfuck, etc) sitting around the conference table facing off with each other, and each managed to get a caricature of their language's cliche cool programming technique hammered into XSLT, but without the other context and support it needed to actually be useful. So nobody was happy!
>Then Microsoft came out with MSXML, with an XSL processor that let you include <script> tags in your XSLT documents to do all kinds of magic stuff by dynamically accessing the DOM and performing arbitrary computation (in VBScript, JavaScript, C#, or any IScriptingEngine compatible language). Once you hit a wall with XSLT you could drop down to JavaScript and actually get some work done. But after you got used to manipulating the DOM in JavaScript with XPath, you being to wonder what you ever needed XSLT for in the first place, and why you don't just write a nice flexible XML transformation library in JavaScript, and forget about XSLT.
Even just plain JavaScript is much better and more powerful and easier to use than XSLT. There are many JavaScript libraries to help you with templates. Is there even any such thing as an XSLT library?
Is there some reason you would prefer to use XSLT than JavaScript? You can much more easily get a job yourself or hire developers who know JavaScript. Can you say the same thing for XSLT, and would anyone in their right mind hire somebody who knows XSLT but refuses to use JavaScript?
XSLT is so clumsy and hard to modularize, only good for messy spaghetti monoliths, no good for components and libraries and modules and frameworks, or any form of abstraction.
And then there's debugging. Does a good XSLT debugger even exist? Can it hold a candle to all the off-the-shelf built-in JavaScript debuggers that every browser includes? How do you even debug and trace through your XSLT?
XSLT is a good middle ground that gave me just enough rope to do some fun transformations and put up some pages on the internet without having to set up a dev environment or learn a 'real' programming language
Hearing the words Xalan, Xerces, FOP makes me break out in a cold sweat, 20 years later.
As a general rule, simplifying and removing code is one of the best things you can do for security. Sure you have to balance that with doing useful things. The most secure computer is an unplugged computer but it wouldn't be a very useful one; security is about tradeoffs. There is a reason though that security is almost always cited - to some degree or another, deleting code is always good for security.
Sure, but that’s not what they’re doing in the big picture. XSLT is a tiny drop in the bucket compared to all the surface area of the niche, non-standard APIs tacked onto Chromium. It’s classic EEE.
https://developer.chrome.com/docs/web-platform/
that said, xslt is a bit of a weird api in how it interacts with everything. Not all apis are equally risky and i suspect xslt is pretty high up there on the risk vs reward ratio.
> Although XSLT in web browsers has been a known attack surface for some time, there are still plenty of bugs to be found in it, when viewing it through the lens of modern vulnerability discovery techniques. In this presentation, we will talk about how we found multiple vulnerabilities in XSLT implementations across all major web browsers. We will showcase vulnerabilities that remained undiscovered for 20+ years, difficult to fix bug classes with many variants as well as instances of less well-known bug classes that break memory safety in unexpected ways. We will show a working exploit against at least one web browser using these bugs.
— https://www.offensivecon.org/speakers/2025/ivan-fratric.html
— https://www.youtube.com/watch?v=U1kc7fcF5Ao
> libxslt -- unmaintained, with multiple unfixed vulnerabilities
— https://vuxml.freebsd.org/freebsd/b0a3466f-5efc-11f0-ae84-99...
This text from Cloudflare challenge pages is just a flat-out lie.
The major downside to removing this seems to be that a lot of people LIKE it. But eh, you're welcome to fork Chromium or Firefox.
The security argument can be valid motivation for doing something, but is utterly illegitimate as a reason for removing. They want to remove it because they abandoned it many years ago, and it’s a maintenance burden. Not a security burden, they’ve shown exactly how to fix that as part of preparing to remove it!
libxslt the library is a barely-maintained dumpster fire of bad practices.
A solution is only a solution if it solves the problem.
This sort of thing, basically a "reverse X/Y problem", is an intellectually dishonest maneuver, where a thing is dubbed a "solution" after just, like, redefining the problem to not include the parts that make it a problem.
The problem is that there is content that works today that will break after the Chrome team follows through on their announced plans of shirking on their responsibility to not break the Web. That's what the problem is. Any "solution" that involves people having to go around un-breaking things that the web browser broke is not a solution to the problem that the Chrome team's actions call for people to go around un-breaking things that the web browser broke.
> As mentioned previously, the RSS/Atom XML feed can be augmented with one line, <script src="xslt-polyfill.min.js" xmlns="http://www.w3.org/1999/xhtml"></script>, which will maintain the existing behavior of XSLT-based transformation to HTML.
Oh, yeah? It's that easy? So the Chrome team is going to ship a solution where when it encounters un-un-fucked content that depends on XSLT, Chrome will transparently fix it up as if someone had injected this polyfill import into the page, right? Or is this another instance where well-paid engineers on the Chrome team who elected to accept the responsibility of maintaining the stability of the Web have decided that they like the getting-paid part but don't like the maintaining-the-stability-of-the-Web part and are talking out of both sides of their mouths?
As with most things on the web, the answer is "They will if it breaks a website that a critical mass of users care about."
And that's the issue with XSLT: it won't.
Liam Quin in his award-winning weirdest hat, also Microsoft's Matthew Fuchs' talk on achieving extensibility and reuse for XSLT 2.0 stylesheets, and Stephan Kesper's simple proof that XSLT and XQuery are Turing complete using μ-recursive functions, and presentations about other cool stuff like Relax/NG:
https://www.cafeconleche.org/oldnews/news2004August5.html
Liam Quin's post:
https://www.facebook.com/liam.quin/posts/pfbid0X6jE58zjcEK5U...
#XML people!
How do we communicate the idea that declarative markup is a good idea? Declarative markup is where you identify what is there, not what it does. This is a title, not, make this big and bold. This is a part number, not, make this blink when you click it - sure, you can do that to part numbers, but don't encode your aircraft manual that way.
But this idea is hard to grasp, for the same reason that WYSIAYG word processors (the A stands for All, What you see is all you get) took over from descriptive formatting in a lot of cases.
For an internal memo, for an insurance letter to a client, how much matters? Well, the insurance company has to be able to search the letters for specific information for 10, 20, 40, 100 years. What word processor did you use 40 years ago? Wordstar? Magic Wand? Ventura?
#markupMonday #declarativeMarkup
Don Hopkins: I Wanna Be <![CDATA[
https://donhopkins.medium.com/i-wanna-be-cdata-3406e14d4f21
Liam Quin: hahaha i actually opposed the inclusion of CDATA sections when we were designing XML (by taking bits we wanted from SGML), but they were already in use by the people writing the XML spec! But now you’ve given me a reason to want to keep them. The weird syntax is because SGML supported more keywords, not only CDATA, but they were a security fail.
Don Hopkins: There was a REASON for the <![SYNTAX[ ]]> ?!?!? I thought it was just some kind of tribal artistic expressionism, like lexical performance art!
At TomTom we were using xulrunner for the cross platform content management tool TomTom Home, and XUL abused external entities for internationalizing user interface text. That was icky!
For all those years programming OpenLaszlo in XML with <![CDATA[ JavaScript code sections ]]>, my fingers learned how to type that really fast, yet I never once wondered what the fuck ADATA or BDATA might be, and why not even DDATA or ZDATA? What other kinds of data are there anyway? It sounds kind of like quantum mechanics, where you just have to shrug and not question what the words mean, because it's just all arbitrarily weird.
Liam Quin: haha it’s been 30 years, but, there’s CDATA (character data), replaceable character data (RCDATA) in which `é` entity definitions are recognised but not `<`, IGNORE and INCLUDE, and the bizarre TEMP which wraps part of a document that might need to be removed later. After `<!` you could also have comments, <!-- .... --> for example (all the delimiters in SGML could be changed).
Don Hopkins: What is James Clark up to these days? I loved his work on Relax/NG, and that Dr. Dobb's interview "The Triumph of Simplicity".
https://web.archive.org/web/20020224025029/http://www.ddj.co...
Note: James Clark is arguably the single most important engineer in XML history:
- Lead developer of SGMLtools, expat, and Jade/DSSSL
- Co-editor of the XML 1.0 specification
- Designer of XSLT 1.0 and XPath 1.0
- Creator of Relax NG, one of the most elegant schema languages ever devised
He also wrote the reference XSLT implementation XT, used in early browsers and toolchains before libxslt dominated.
James Clark's epic 2001 Dr. Dobb's Journal interview, "A Triumph of Simplicity: James Clark on Markup Languages and XML", captures his minimalist design philosophy and his critique of standards- and committee-driven complexity (which later infected XSLT 2.0).
It touches on separation of concerns, simplicity as survival, a standard isn't one implementation, balance of pragmatism and purity, human-scale simplicity, uniform data modeling, pluralism over universality, type systems and safety, committee pathology, and W3C -vs- ISO culture.
He explains why XML is designed the way it is, and reframes the XSLT argument: his own philosophy shows that when a transformation language stops being simple, it loses the very quality that made XML succeed.
I will not forget the name Mason Freed, destroyer of open collaborative technology.
Maybe round one of it like ten years ago did? From what I understand, it's a Google employee who opened the "Hey, I want to get rid of this and have no plans to provide a zero-effort-for-users replacement." Github Issue a few months back.
— https://news.ycombinator.com/item?id=44953349
[0] <https://github.com/whatwg/html/issues/11523>
[1] <https://github.com/whatwg/html/issues/11146#issuecomment-275...>
https://github.com/smaug----
They all agreed because XSLT is extremely unpopular and worse than JS in every way. Performance/bloat? Worse. Security? MUCH worse. Language design? Unimaginably worse.
EDIT: I wrote thousands of lines of XSLT circa 2005. I'm grateful that I'll never do that again.
XSLT is still a great way of easily transforming XML-like documents. It's orders of magnitude more concise than transforming with JavaScript or other general-purpose programming languages. And people are actively re-inventing XSLT for JSON (see `jq`).
You can use JavaScript to get the same effect and, indeed, write your transforms in much the same style as XSLT. JavaScript still has XPath. You have a choice of template language, but JSX is common and convenient. A function for applying XSLT-style matching rules, giving an XSLT-like push transform, is only a few lines of code.
Do you have a particular example where you think JavaScript might be more verbose than XSLT?
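To make the "only a few lines" claim concrete, here is a minimal sketch of such a push-style transform in plain JavaScript. The node shape, rule names, and sample document are invented for illustration; real code would walk DOM nodes and could use document.evaluate() for XPath-based match patterns.

```javascript
// Nodes are plain objects ({ tag, children }) or strings (text nodes).
// Template rules pair a predicate (the "match" pattern) with a body that
// receives the node and an apply() function, like xsl:apply-templates.
const rules = [
  { match: n => n.tag === "article", body: (n, apply) => `<main>${apply(n.children)}</main>` },
  { match: n => n.tag === "title",   body: (n, apply) => `<h1>${apply(n.children)}</h1>` },
  { match: n => n.tag === "para",    body: (n, apply) => `<p>${apply(n.children)}</p>` },
  // Default rule, like XSLT's built-in template: copy text through.
  { match: n => typeof n === "string", body: n => n },
];

// apply() walks a node list and dispatches each node to its first
// matching rule -- this function is the whole "engine".
function apply(nodes) {
  return nodes.map(node => {
    const rule = rules.find(r => r.match(node));
    return rule ? rule.body(node, apply) : "";
  }).join("");
}

const doc = [{ tag: "article", children: [
  { tag: "title", children: ["Hello"] },
  { tag: "para",  children: ["XSLT-style matching in JS."] },
]}];

console.log(apply(doc));
// → <main><h1>Hello</h1><p>XSLT-style matching in JS.</p></main>
```

Whether this ends up more or less verbose than the equivalent stylesheet is exactly the question above, but the dispatch machinery itself is indeed tiny.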
Java in general... Maven: trying to implement extremely simple things (e.g. only executing a specific task as part of the pipeline when certain conditions are met) is an utter headache in a pom.xml, because XML is not a programming language!
I agree though, "XML is not a programming language" and attempts to use it that way have produced poor results. You should have seen the `ant` era! But this is broader than XML - look at pretty much every popular CI system for "YAML is not a programming language".
That doesn't mean that XML isn't useful. Just not as a programming language.
https://news.ycombinator.com/item?id=26663191
XSLT (or ANT) may be Turing complete, but it's firmly embedded in the Turing Tarpit.
https://en.wikipedia.org/wiki/Turing_tarpit
>"54. Beware of the Turing tar-pit in which everything is possible but nothing of interest is easy." -Alan Perlis
Maybe by analogy: There are type systems that are Turing complete. People sometimes abuse them to humorous effect to write whole programs (famously, C++ templates). That doesn't mean that type systems are bad.
npm isn't even a build tool, it's a package manager and at that it's actually gotten quite decent - the fact that the JS ecosystem at large doesn't give a fuck about respecting semantic versioning or keeps reinventing the wheel or that NodeJS / JavaScript itself lacks a decent standard library aren't faults of npm ;)
Maven and Gradle, in contrast, are one-stop shops: both build orchestrators and dependency managers. As for Ant, oh hell yes I'm aware of that. The most horrid build system I encountered in my decade's worth of tenure as "the guy who can figure out pretty much any nuclear-submarine project (aka, one that only surfaces every few years, after everyone working on it has departed)" involved Gradle, which then orchestrated Maven and Ant; oh, and the project was built on a Jenkins that was half DSL, half clicked together in the web UI, and the runner that executed the builds was a manually set up, "organically grown" server. That one was a holy damn mess to understand, unwind, clean up and migrate to GitLab.
> look at pretty much every popular CI system for "YAML is not a programming language".
Oh yes... I only had the misfortune of having to code for GitHub Actions once in my lifetime; it's utter fucking madness compared to GitLab.
Comparing a single-purpose declarative language that is not even really Turing-complete with all the ugly hacks needed to make DOM/JS reasonably secure does not make any sense.
What exactly can you abuse in XSLT (without non-standard extensions) to do anything security-relevant? (DoS by infinite recursion or memory exhaustion does not count; you can do the same in JS...)
https://www.offensivecon.org/speakers/2025/ivan-fratric.html
> Although XSLT in web browsers has been a known attack surface for some time, there are still plenty of bugs to be found in it, when viewing it through the lens of modern vulnerability discovery techniques. In this presentation, we will talk about how we found multiple vulnerabilities in XSLT implementations across all major web browsers. We will showcase vulnerabilities that remained undiscovered for 20+ years, difficult to fix bug classes with many variants as well as instances of less well-known bug classes that break memory safety in unexpected ways. We will show a working exploit against at least one web browser using these bugs.
https://nvd.nist.gov/vuln/detail/CVE-2025-7425
https://nvd.nist.gov/vuln/detail/CVE-2022-22834
(And, for the record, XSLT is Turing-complete. It has xsl:variable, xsl:if, xsl:for-each, and recursion via xsl:call-template and xsl:apply-templates.)
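The standard demonstration is a recursive named template. Below is a hypothetical, minimal XSLT 1.0 sketch (the element names are from the spec; the factorial example itself is invented here) that computes 5! purely by recursion, since the language has no mutable loops:

```xml
<xsl:stylesheet version="1.0" xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
  <!-- Recursive named template: iteration becomes recursion, because
       variables in XSLT are immutable once bound. -->
  <xsl:template name="factorial">
    <xsl:param name="n"/>
    <xsl:choose>
      <xsl:when test="$n &lt;= 1">1</xsl:when>
      <xsl:otherwise>
        <xsl:variable name="rest">
          <xsl:call-template name="factorial">
            <xsl:with-param name="n" select="$n - 1"/>
          </xsl:call-template>
        </xsl:variable>
        <xsl:value-of select="$n * $rest"/>
      </xsl:otherwise>
    </xsl:choose>
  </xsl:template>

  <!-- Entry point: ignore the input document entirely. -->
  <xsl:template match="/">
    <xsl:call-template name="factorial">
      <xsl:with-param name="n" select="5"/>
    </xsl:call-template>
  </xsl:template>
</xsl:stylesheet>
```

Run against any input document, this should emit 120; replacing the arithmetic with rule application is the usual route to a Turing-completeness argument.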
All those people suck, too.
Were you counting on a different response?
> XSLT is extremely unpopular and worse than JS in every way
This isn't a quorum of folks torpedoing a proposed standard. This is a decades-old stable spec and an established part of the Web platform, and welching on their end of the deal will break things, contra "Don't break the Web".
Only Google is pushing forward and twisting that message.
Mozilla:
> Our position is that it would be good for the long-term health of the web platform and good for user security to remove XSLT, and we support Chromium's effort to find out if it would be web compatible to remove support.
— https://github.com/mozilla/standards-positions/issues/1287#i...
WebKit:
> WebKit is cautiously supportive. We'd probably wait for one implementation to fully remove support, though if there's a known list of origins that participate in a reverse origin trial we could perhaps participate sooner.
— https://github.com/whatwg/html/issues/11523#issuecomment-314...
Describing either of those as “they preferred to keep it” is blatantly untrue.
Google, Mozilla and Apple do not care if it doesn't make them money, unless you want to pay them billions to keep that feature?
> I will not forget the name Mason Freed, destroyer of open collaborative technology.
This is quite petty.
> It's even worse than that, actually, because all of the things we've built aren't just not doing what we want, they're holding developers back. People build their applications on frameworks that _abstract out_ all the APIs we build for browsers, and _even with those frameworks_ developers are hamstrung by weird limitations of the web.
- https://news.ycombinator.com/item?id=34612696#34622514
I find it so weird that browser devs can point to the existence of stuff like React and not feel embarrassed.
Sorry, I don't follow. What's embarrassing about React?
Curious: have any of you used XSLT in production lately?
Because browsers only support XSLT 1.0, the transform to HTML is typically done server-side to take advantage of XSLT 2.0 and 3.0 features.
It's also used by the US government:
1. https://www.govinfo.gov/bulkdata/BILLS
2. https://www.govinfo.gov/bulkdata/FR/resources
All the transforms are maintained by non-developers, mainly business analysts. Because the language is so simple, we don't need to give them much training: just get IntelliJ installed on their machines, show them a few samples, and let them work away.
We couldn't have managed with anything else.
No, XSLT isn't required for the open web. Everything you can do with XSLT, you can also do without XSLT. It's interesting technology, but not essential.
Yes, this breaks compatibility with all five websites that use it.
I am not really a functional programming guy. But XSLT is a really cool application of functional programming for data munging, and I wouldn’t have believed it if I hadn’t used it enough for it to click.
But server side, many years ago I built an entire CMS with pretty arbitrary markup regions that a designer could declare (divs/TDs/spans with custom attributes basically) in XSLT (Sablotron!) with the Perl binding and a customised build of HTML Tidy, wrapped up in an Apache RewriteRule.
So designers could do their thing with dreamweaver or golive, pretty arbitrarily mark up an area that they wanted to be customisable, and my CMS would show edit markers in those locations that popped up a database-backed textarea in a popup.
What started off really simple ended up using Sablotron's URL schemes to allow a main HTML file to be a master template for sub-page templates, merge in some dynamic functionality etc.
And the thing would either work or it wouldn't (if the HTML couldn't be tidied, which was easy enough to catch).
The Perl around the outside changed very rarely; the XSLT stylesheet was fast and evolved quite a lot.
Actually a transformation system can reduce bloat, as people don't have to write their own crappy JavaScript versions of it.
Being XML, the syntax is a bit convoluted, but behind it is a good functional system (functional as in functional programming, not merely functioning) that can be used for templating and the like.
The XML made it a bit hard to get started, and anti-XML sentiment reduced the motivation to get into it, but once you know it, it beats most bloated JavaScript stuff in that realm by a lot.
Ah, when ignorance leads to arrogance: it is massively utilised by many large enterprises and state administrations in some countries.
E.g. if you're American, the Library of Congress uses it to show all legislative text.
Good riddance I guess - it and most of the tech from the "XML era" was needlessly overcomplicated.
As the first sentence of your comment indicates, the fact that it's supported and there for people to use doesn't (and hasn't) resulted in you being forced to use it in your projects.
I do tend to support backwards compatibility over constant updates and breakage, and needless hoops to jump through as e.g. Apple often puts its developers through. But having grown up and worked in the overexuberant XML-for-everything, semantic-web 1000-page specification, OOP AbstractFactoryTemplateManagerFactory era, I'm glad to put some of that behind us.
If that makes me some kind of Gestapo, so be it.
It's a loss, if you ask me, to remove it from client-side, but it's one I worked through years ago.
It's still really useful on the server side for document transformation.