jdpage commented on XSLT removal will break multiple government and regulatory sites   github.com/whatwg/html/is... · Posted by u/colejohnson66
rhdunn · 2 days ago
The suggestion to move the RSS/Atom feed links to a hidden link element is a horrible one for me, and presumably for others who want to copy the feed URL and paste it into their podcast applications. That suggestion adds another layer of indirection that an application has to fetch and inspect.

Part of the reason HTML 5/LS was created was to preserve the behaviour of existing sites and malformed markup, such as omitted html/head/body tags or closing tags. I bet some of those features saw usage comparable to XSLT's on the web.

jdpage · 2 days ago
You're right, it's not a great flow! And while many podcast/feed reader applications support pasting the URL of the page containing the <link /> element, that still leaves the problem of advertising that one can even do that, or that there's a feed available in the first place.
jdpage commented on XSLT removal will break multiple government and regulatory sites   github.com/whatwg/html/is... · Posted by u/colejohnson66
jdpage · 2 days ago
One thing that I think this discussion is highlighting to me is that there's very little support in the web standard (as implemented by browsers) for surfacing resources to users that aren't displayable by the browser.

Consider, for example, RSS/Atom feeds. Certainly there are <link /> tags you can add, but since none of the major browsers do anything with those anymore, we're left dropping clickable links to the feeds where users can see them. If someone doesn't know about RSS/Atom, what's their reward for clicking on those links? A screenful of robot barf.
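For reference, the `<link />` tags in question are the standard feed-autodiscovery markup (URLs here are hypothetical). Feed readers can still consume them even though browsers no longer surface them:

```html
<head>
  <link rel="alternate" type="application/rss+xml"
        title="Example Feed (RSS)" href="https://example.com/feed.xml">
  <link rel="alternate" type="application/atom+xml"
        title="Example Feed (Atom)" href="https://example.com/atom.xml">
</head>
```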

These resources in TFA are another example of that. The government or regulatory bodies in question want to provide structured data. They want people to be able to find the structured data. The only real way of doing that right now is a clickable link.

XSLT provides a stopgap solution, at least for XML-formatted data, because it allows you to provide that clickable, discoverable link, without risking dropping unsuspecting folks straight into the soup. In fact, it's even better than that, because the output of the XSLT can include an explainer that educates people on what they can do with the resource.
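As a sketch of how that stopgap works (filenames hypothetical): the feed links a stylesheet via `<?xml-stylesheet type="text/xsl" href="feed.xsl"?>` after its XML declaration, and the stylesheet renders a readable page, explainer included, instead of raw XML:

```xml
<!-- feed.xsl (hypothetical): renders the raw RSS feed as a readable page -->
<xsl:stylesheet version="1.0"
                xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
  <xsl:template match="/">
    <html>
      <body>
        <h1><xsl:value-of select="rss/channel/title"/></h1>
        <p>This is a web feed. Copy this page's URL into your podcast
           or feed reader application to subscribe.</p>
        <ul>
          <xsl:for-each select="rss/channel/item">
            <li><a href="{link}"><xsl:value-of select="title"/></a></li>
          </xsl:for-each>
        </ul>
      </body>
    </html>
  </xsl:template>
</xsl:stylesheet>
```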

If browsers still respected the <link /> tag for RSS/Atom feeds, people probably wouldn't be pushing back on this as hard. But what's being overlooked in this conversation is that there is a real discoverability need here, and for a long time XSLT has been the best way to patch over it.

jdpage commented on R0ML's Ratio   blog.glyph.im/2025/08/r0m... · Posted by u/zdw
valicord · 15 days ago
No, it's always "per unit that you use". Units that you buy but don't use are wasted, so that would be the wrong way to compare.

Dressing up an obvious idea with fancy-sounding math formulas and pages of text is exactly how you get trapped by someone good at selling stuff.

jdpage · 15 days ago
You're right, but I think you're less likely to buy extra at per-unit pricing (since you can always buy just-in-time), so you're less likely to run into a situation where you overpurchased at per-unit pricing. (Edited to note that.)
jdpage commented on R0ML's Ratio   blog.glyph.im/2025/08/r0m... · Posted by u/zdw
valicord · 15 days ago
So buy in bulk if it's cheaper than per unit, and buy per unit if it's cheaper than in bulk? Truly brilliant advice. What's next? "Get multiple quotes and pick the best one"?
jdpage · 15 days ago
Buy in bulk if it's cheaper per unit that you buy, and per unit if it's cheaper per unit that you use. (EDIT: with the understanding that you're unlikely to buy much more than you use if just-in-time per-unit purchasing is available). No, it's not groundbreaking advice, but it's a very easy trap to fall into, especially if you're talking to someone who's good at selling you stuff.

I'm sure we've all had an experience at least once where we've bought something in a larger container from the grocery store thinking we were getting a deal, only to find that we didn't get through it as fast as we thought we would, it started spoiling, and we got to scrape our savings into the trash can.

And many of us have had an experience where someone higher up in the company has insisted that we use a tool that isn't very good "because we've already paid for it", so it's clearly needed advice, even if it's not groundbreaking. Everyone sometimes needs "obvious" conclusions pointed out to them, I think. It's a quirk of our mental processing as humans.

jdpage commented on Welcome to url.town, population 465   url.town/... · Posted by u/plaguna
xxr · 21 days ago
> For example, TFA looks like a page I'd have browsed in IE5 as a kid, but if you look at the markup, it's using HTML5 tags and Flexbox (which became a W3C CR in 2017), while a period site would have used an HTML table to get the same effect.

Are they going out of their way to recreate an aesthetic that was originally the easiest thing to create given the language specs of the past, or is there something about this look and feel that is so fundamental to the idea of making websites that basically anything that looks like any era or variety of HTML will converge on it?

jdpage · 20 days ago
I think the layout as such (the grid of categories) isn't particularly dated, though a modern site would style them as tiles. The centered text can feel a little dated, but the biggest thing making it feel old is that it uses the default browser styles for a lot of page elements, particularly the font.
jdpage commented on Welcome to url.town, population 465   url.town/... · Posted by u/plaguna
Waraqa · 21 days ago
With the rise of these retro-looking websites, I feel it's possible again to start using a browser from the '90s. Someone should make a static-site social media platform for full compatibility.
jdpage · 21 days ago
Not so much. While a lot of these websites use classic approaches (handcrafted HTML/CSS, server-side includes, etc.) and aesthetics, the actual versions of those technologies used are often rather modern. For example, TFA looks like a page I'd have browsed in IE5 as a kid, but if you look at the markup, it's using HTML5 tags and Flexbox (which became a W3C CR in 2017), while a period site would have used an HTML table to get the same effect. Of course, you wouldn't want to do it that way nowadays, because it wouldn't be responsive or mobile-friendly.

(To be clear, I don't think this detracts from such sites; they adopt new technologies where those provide practical benefits to the reader, and many indieweb proponents frame the movement as a progressive, rather than reactionary, praxis.)
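As a minimal sketch of that contrast (markup simplified): the modern page gets the centered grid with Flexbox, where a period page would have reached for a layout table:

```html
<!-- Modern: HTML5 semantics plus Flexbox; wraps and reflows on small screens -->
<nav style="display: flex; flex-wrap: wrap; justify-content: center; gap: 1em;">
  <section>Category A</section>
  <section>Category B</section>
  <section>Category C</section>
</nav>

<!-- Period technique (c. 1999): a layout table with a fixed structure -->
<table align="center">
  <tr><td>Category A</td><td>Category B</td><td>Category C</td></tr>
</table>
```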

jdpage commented on Major rule about cooking meat turns out to be wrong   seriouseats.com/meat-rest... · Posted by u/voxadam
matt3210 · a month ago
Juices? You mean blood?
jdpage · a month ago
Meat is typically exsanguinated as part of the slaughtering process[1], so no. The juices that come out of e.g. a steak are a mixture of water and proteins such as myoglobin, but can also include fat.

Adam Ragusea has a video[2] on the topic that goes into more depth, though the same information is repeated across a lot of sources of varying quality if you do a quick web search.

[1]: https://en.wikipedia.org/wiki/Meat#Slaughter

[2]: https://youtu.be/gLvQzwjI-IM

jdpage commented on Cognitive Behaviors That Enable Self-Improving Reasoners   arxiv.org/abs/2503.01307... · Posted by u/delifue
owenpalmer · 6 months ago
> four key cognitive behaviors -- verification, backtracking, subgoal setting, and backward chaining -- that both expert human problem solvers and successful language models employ.

As we make AI better, perhaps we'll inadvertently find ways to make HI (human intelligence) better too.

I had a personal experience with this when I was studying for an exam recently. As I read over practice questions, I spoke aloud, replicating the reasoning methods/personality of Deepseek R1. By spending a lot of time reading long verbose R1 outputs, I've essentially fine-tuned my brain for reasoning tasks. I believe this method contributed to my excellent score on that exam.

jdpage · 6 months ago
This is a well-known approach: verbalizing your thought process (either by speaking aloud or in writing) is long established as a good tactic for making sure you're actually thinking something through rather than glossing over it. Ironically, I've seen people bemoan that the use of AI will rob people of exactly that.

I agree that there's potential here, though, and do genuinely hope that we find ways to make human intelligence better as we're going about AI research. Even pessimistically, I think we'll at least surface approaches that people use without thinking about, which is on its own a good thing, because once you know you're doing something, it becomes a lot easier to train yourself to do it better.

jdpage commented on The Miraculous Resurrection of Notre-Dame   gq.com/story/the-miraculo... · Posted by u/divbzero
binary132 · a year ago
but like, couldn't they put some nice gardens or maybe a shiny glass pyramid on top to improve it for modern tastes? I think it could be really cute
jdpage · a year ago
The Ancient Egyptians were also like us in that they were into architectural bling and greenery, so I'm not actually sure they'd be complaining that much. They were into materials like gold, electrum, and polished stone, but I'm sure you could sell them on modern glass.

That said, the Great Pyramid is a historical site, not an active worship site, and modern archeological sensibilities prioritise conservation. A restoration like that might make it hard to answer future questions about the pyramid.

jdpage commented on Hypermedia Systems   hypermedia.systems/... · Posted by u/dsego
DrDroop · a year ago
What are you gonna do if you want to give a user the option to display the same search result in an HTML table and a map drawn on a canvas element, or maybe some info-viz thing like a chart? No htmx fanboy has a nice solution for this. I'm fine with making hypermedia part of the synthesis, but ignoring features that you get for free with a modern hybrid SSR SPA is not helping.
jdpage · a year ago
I'm assuming that you're referring to the fact that if you get the data as JSON from the server side, you can use it to render out your multiple visualisations. If this is not what you're getting at, my apologies.

If you send the HTML table over the wire, you can use it as the datasource for the other visualisations, same as you would with JSON. You can extend it with `data-` attributes if necessary to get some extra machine-readable information in there, but I have not needed to do that yet.

On the application I'm currently working on, we do this and then have a listener on the htmx event to turn the table into a d3.js graph. It works pretty well, and has the advantage that if someone is using our application with JavaScript turned off, they still get the table with all the data.
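A sketch of that pattern (function and element names are hypothetical; the htmx event and d3 rendering are as described above). The pure extraction step can be kept separate from the DOM scraping, which makes it easy to reuse for multiple visualisations:

```javascript
// Pure step: turn scraped header and cell text into chart-ready records,
// coercing numeric-looking strings to numbers.
// e.g. rowsToData(["year", "count"], [["2023", "41"]])
//   -> [{ year: 2023, count: 41 }]
function rowsToData(headers, rows) {
  return rows.map((cells) =>
    Object.fromEntries(
      headers.map((header, i) => {
        const raw = cells[i];
        const num = Number(raw);
        return [header, Number.isNaN(num) ? raw : num];
      })
    )
  );
}

// DOM step (browser only): scrape the server-rendered table after an htmx
// swap and hand the records to a chart renderer (renderChart is hypothetical).
// document.body.addEventListener("htmx:afterSwap", () => {
//   const table = document.querySelector("#results table");
//   const headers = [...table.querySelectorAll("thead th")]
//     .map((th) => th.dataset.key ?? th.textContent.trim());
//   const rows = [...table.querySelectorAll("tbody tr")]
//     .map((tr) => [...tr.cells].map((td) => td.textContent.trim()));
//   renderChart(rowsToData(headers, rows));
// });
```

The upside of this split is that the table remains the single source of truth: the chart is derived from exactly what a no-JavaScript user sees.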
