dmsnell commented on How to safely escape JSON inside HTML SCRIPT elements   sirre.al/2025/08/06/safe-... · Posted by u/dmsnell
dullcrisp · 15 days ago
Huh, it’s still confusing to me why they would have this double-escaping behavior only inside an HTML comment. Why not have it always behave one way or the other? At what point did the parsing behavior inside and outside HTML comments split and why?
dmsnell · 14 days ago
At some point I think I read a more complete justification, but I can’t find it now. There is evidence that it came about as a byproduct of the interaction of the HTML parser and JS parsers in early browsers.

In this link we can see the expectation that the HTML comment surrounds a call to document.write() which inserts a new SCRIPT element. The tags are balanced.

https://stackoverflow.com/questions/236073/why-split-the-scr...
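
A sketch of the legacy pattern that link describes (the src value here is a placeholder): the HTML comment hid the code from script-unaware browsers, and document.write() emits a balanced inner pair of tags.

    <script>
    <!--
      // Writing the closing tag as "<\/script>" keeps the outer HTML
      // parser from ending the SCRIPT element inside this string.
      document.write('<script src="other.js"><\/script>');
    // -->
    </script>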

In this HTML 4.01 spec, it’s noted to use HTML comments to hide the script contents from render, which is where we start to get the notion of using these to hide markup from display.

https://www.w3.org/TR/html401/interact/scripts.html

Some drafts of the HTML standard attempted to escape differently and didn’t have the double escape state.

https://www.w3.org/TR/2016/WD-html52-20161206/semantics-scri...

My guess is that at some point the parsers looked for balanced tags, as evidenced in the note in the last link above, but then practical issues with improperly-generated scripts led to the idea that a single SCRIPT closing tag ends the escaping. Maybe people were attempting to concatenate script contents wrong and getting stacks of opening tags that were never closed. I don’t know, but I suppose it’s recorded somewhere.

Many things in today’s HTML arose because of widespread issues with how people generated the content. The same is true of XML and XHTML, by the way. Early XML mailing lists were full of people parsing XML with naive Perl regular expressions and suggesting that when someone wants to “fix” broken markup, they do it with string-based find-and-replace.

The main difference is that the HTML spec went in the direction of saying, _if we can agree on how to handle these errors, then in the face of some errors we can still display some content_, and we can all do it in the same way. XML is worse in some regards: certain kinds of errors are still ambiguous and up to the parser to determine how to handle, whether they are non-recoverable or recoverable. For the non-recoverable kind, the presence of a single error destroys the entire document, like being refused a withdrawal at the bank because you didn’t cross a 7.

At least with HTML5, it’s agreed upon what to do when errors are present and all parsers can produce the same output document; XML parsers routinely handle malformed content and do so in different ways (though most at least provide or default to a strict mode). It’s better than the early web, but not that much better.

dmsnell commented on Don't “let it crash”, let it heal   zachdaniel.dev/p/elixir-m... · Posted by u/ahamez
gopher_space · 14 days ago
Both of your examples look like infinite crash-loops if your work needs to be correct more than it needs to be available. E.g. there aren't any known good states prior to an unexpected crash, you're just throwing a hail mary because the alternatives are impractical.
dmsnell · 14 days ago
When a process crashes, its supervisor restarts it according to some policy. Policies specify whether to restart the sibling processes in their startup order or to restart only the crashed process.

But a supervisor also sets limits, like “10 restarts in a timespan of 1 second.” Once the limits are reached, the supervisor crashes. Supervisors have supervisors.
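
A minimal sketch in Elixir (module names hypothetical):

    children = [
      MyApp.Repo,       # e.g. the database connection pool
      MyApp.WebServer
    ]

    Supervisor.start_link(children,
      strategy: :one_for_one,  # restart only the crashed child
                               # (:rest_for_one/:one_for_all also restart siblings)
      max_restarts: 10,        # allow at most 10 restarts...
      max_seconds: 1           # ...within 1 second, else this supervisor crashes too
    )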

In this scenario the fault cascades upward through the system, triggering more broad restarts and state-reinitializations until the top-level supervisor crashes and takes the entire system down with it.

An example might be losing a connection to the database. It’s not an expected fault to fail while querying it, so you let it crash. That kills the web request, but then the web server ends up crashing too because too many requests failed, then a task runner fails for similar reasons. The logger is still reporting all of this because it’s a separate process tree, and the top-level app supervisor ends up restarting the entire thing. It shuts everything off, tries to restart the database connection, and if that works everything will continue; if not, the system crashes completely.

Expected faults are not part of “let it crash.” E.g. if a user supplies a bad file path or network resource. The distinction is subjective and based around the expectations of the given app. Failure to read some asset included in the distribution is both unlikely and unrecoverable, so “let it crash” allows the code to be simpler in the happy path without giving up fault handling or burying errors deeper into the app or data.
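
Roughly, in Elixir terms (the paths and process/1 are placeholders):

    # Expected fault: a user-supplied path may not exist, so handle it.
    case File.read(user_path) do
      {:ok, contents}  -> process(contents)
      {:error, reason} -> {:error, reason}  # surface to the user
    end

    # Unlikely and unrecoverable: a bundled asset should always exist,
    # so let it crash; File.read!/1 raises if the file is missing.
    contents = File.read!(Application.app_dir(:my_app, "priv/asset.dat"))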

dmsnell commented on How to safely escape JSON inside HTML SCRIPT elements   sirre.al/2025/08/06/safe-... · Posted by u/dmsnell
dullcrisp · 15 days ago
Wait can someone explain why a script tag inside a comment inside a script tag needs to be closed, while a script tag inside a script tag without a comment does not? They explained why comments inside script tags are a thing, but nothing further than that.
dmsnell · 15 days ago
The other comment explains this, but I think it can also be viewed differently.

It’s helpful to recognize that the inner script tags are not actual script tags. Yes, once entering a script element, the browser switches parsers and wants to skip everything until a closing script tag appears. The STYLE element, TITLE, TEXTAREA, and a few others do this. Once the parser chops up the HTML like this, it sends the contents to the separate inner parser (in this case, the JS engine). SCRIPT is unique due to the legacy behavior^1.

HTML5 specifies these “inner” tags as transitions into escape modes. The entire goal is to allow JavaScript to contain the string “</script>” without it leaking to the outer parser. The early pattern of hiding inside an HTML comment is what determined the escaping mechanism rather than making some special syntax (which today does exist as noted in the post).
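
As an aside, that goal points at the practical mitigation (a sketch, not necessarily the post’s exact approach; embedJSON is a made-up name): JSON’s \uXXXX escapes let the serialized text avoid the characters the HTML parser cares about without changing the decoded value.

    function embedJSON(value) {
      // After this, "<" never appears in the output, so neither
      // can "</script>" or "<!--".
      return JSON.stringify(value)
        .replace(/</g, '\\u003c')
        .replace(/>/g, '\\u003e')
        .replace(/&/g, '\\u0026');
    }
    // embedJSON({ a: '</script>' }) → '{"a":"\u003c/script\u003e"}'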

The opening script tag inside the comment is actually what triggers the escaping mode, and so it’s less an HTML tag and more some kind of pseudo JS syntax. The inner closing tag is therefore the escaped string value and simultaneously closes the escaped mode.

Consider the use of double quotes inside a string. We have to close the outer quote, but if the inner quote is escaped like `\"` then we don’t have to close it — it’s merely data and not syntax.

There is only one level of nesting, and eight opening tags would still be “closed” by the single closing tag.
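
A sketch of how that plays out (in a classic, non-module script, where browsers treat a leading <!-- as a line comment in JS):

    <script>
    <!-- <script>
    var s = "</script>";  // data: exits the double-escaped state, element stays open
    // -->
    console.log(s, "still inside the same SCRIPT element");
    </script>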

^1: (edit) This is one reason HTML and XML (XHTML) are incompatible. The content of SCRIPT and STYLE elements are essentially just bytes. In XML they must be well-formed markup. XML parsers cannot parse HTML.

dmsnell commented on How to safely escape JSON inside HTML SCRIPT elements   sirre.al/2025/08/06/safe-... · Posted by u/dmsnell
dmsnell · 16 days ago
Discussing why parsing HTML SCRIPT elements is so complicated, the history of why it became the way it is, and how to safely and securely embed JSON content inside of a SCRIPT element today.
dmsnell · 15 days ago
This was my first submission, and the above comment was what I added to the text box. It wasn’t clear to me what the purpose was, but it seemed like it would want an excerpt. I only discovered after submitting that it created this comment.

I guess people just generally don’t add those?

Still, to help me out, could someone clarify why this was down-voted? I don’t want to mess up again if I did something wrong, but I don’t understand what that was.

dmsnell commented on Jujutsu for busy devs   maddie.wtf/posts/2025-07-... · Posted by u/Bogdanp
sshine · a month ago
That’s great advice, thanks for sharing --update-refs!

But I don’t see how that removes the usefulness of fixup commits, only that you can do them across stacked branches with ease.

But you’re saying I don’t need the particular hash of the parent, I can just rebase all the way back to main/trunk each time. That’s a good point!

I think I’m still saving time with fixup; it was the second hash lookup I wasn’t happy with.

I like to rebase my fixups immediately when possible so I don’t forget.

dmsnell · a month ago
You understand it correctly: it doesn’t remove any value from fixup; it just means you can fixup to whatever and move the commit visually to where you want it — it doesn’t have to be the main/trunk branch, and you can do it whenever it’s convenient.

That’s where I made the comment about not actually running fixup. Instead of claiming to fix a SHA, I leave myself a note like “fix tests” so I can move it appropriately even if I get distracted for a few days.
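
In practice that’s something like (the note text is just whatever reminds me):

    git commit -m "fix tests"        # plain note instead of --fixup <sha>
    # later, reorder/squash it during an interactive rebase:
    git rebase -i --keep-base trunk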

dmsnell commented on Jujutsu for busy devs   maddie.wtf/posts/2025-07-... · Posted by u/Bogdanp
sshine · a month ago
> How fast I can deliver value is almost never gated by my VCS

Oh, the number of times I do

  git commit --fixup ...searches `git log` and pastes... && \
    git rebase -i --autosquash ...that same commit^...
In busy periods I waste half a minute on this, several times per day!

Another time-consuming thing is context-switching away from a feature branch with staged/unstaged changes. If you're good with worktrees you can largely avoid this, but I instantiate worktrees as subdirectories of my repo's parent, which clutters my directories unless I set aside space for them when cloning the repository, and I haven't got used to doing that.

I deliberately avoid too complicated git workflow aliases because I hate being stuck without them.

I am definitely in the camp of "I'd switch to jj the moment I give myself time to try it."

In the meantime, running git commands does sometimes take more time than it needs to.

dmsnell · a month ago
You can interactively rebase after the fact, even while preserving merges. I usually work these days on stacked branches, and instead of using --fixup I just commit with a dumb message.

    git rebase -i --rebase-merges --keep-base trunk
This lets me reorganize commits, edit commit messages, split work into new branches, etc. When I add --update-refs into the mix, it gives me what I read are the biggest workflow improvements from jj, except it’s just git.
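
With everything combined (a reasonably recent git is needed for --update-refs), the invocation looks like:

    git rebase -i --rebase-merges --keep-base --update-refs trunk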

This article from Andrew Lock on stacked branches in git was a great inspiration for me, and where I learned about --update-refs:

https://andrewlock.net/working-with-stacked-branches-in-git-...

dmsnell commented on Repasting a MacBook   christianselig.com/2025/0... · Posted by u/speckx
alliao · a month ago
Have you tried PTM7950? I'm not even sure what it is; I just saw many swear by it.
dmsnell · a month ago
No, but I think I saw that it’s a thermal pad, one with thermal transfer characteristics comparable to the high-end (non-conductive) pastes.
dmsnell commented on Staying cool without refrigerants: Next-generation Peltier cooling   news.samsung.com/global/i... · Posted by u/simonebrunozzi
scosman · a month ago
What's a "low temperature difference"?

I want to build a wine cooler in my basement (~20-24C ambient), and I want it at ~16C. Is that low enough to be reasonably efficient?

dmsnell · a month ago
TECs are wonderful little devices with operating characteristics unlike those of any comparable device.

They can be designed to move a specific amount of heat or to cool at some delta-T below the hot side (and due to inefficiencies the hot side can climb above ambient temperatures too, raising the “cold side” above ambient!)

I ran through a design exercise with a high-quality TEC, and at 8°C delta-T for a wine cooler you could expect a COP of around 3.5–4 (theoretically). This is pretty good! But staying under the 2.5V max needed to do that, you’re only able to exhaust up to around 40W. For a wine cooler this is not so bad. For a refrigerator it’s a harder challenge because the temperature rises when the door opens, and if someone sticks in a pot of hot soup, it’s important to eject that heat before it raises the temperature inside to levels where food safety becomes a problem. For a CPU it’s basically untenable under load because too much heat enters the cold side, so temperatures will rise.
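
Rough numbers for that case (assuming COP here means heat pumped per electrical watt, and the ~40W is heat pumped from the cold side):

    COP   = Q_cold / P_electrical
    P     ≈ 40 W / 3.5 ≈ 11 W of electrical input
    Q_hot = Q_cold + P ≈ 51 W for the hot-side heat sink to reject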

https://fluffyandflakey.blog/2019/08/29/cooling-a-cpu-with-t...

Things often overlooked:

- Most TECs are cheap and small and come without data sheets, so people tend to become disillusioned after running them too hot.

- You have to keep the hot side cool or else the delta-T doesn’t help you. For a wine cooler this is probably no big deal: you can add a sizable fan and heat sink. For CPU cooling it becomes a tighter problem. You basically can’t win by mounting on the CPU; they are best at mediating two independent water-cooling loops.

- Q ratings are useless without performance graphs. It’s meaningless to talk about a “100W” TEC other than to estimate that it has a higher capacity than a “20W” TEC.

- Ratings and data sheets are hypothetical best cases. Reality constrains the efficiency through a thousand cuts.

When I think about TECs I think more about heat transfer than temperature drops. If you open a well-insulated wine cooler once a week, then once it cools it only needs to maintain its temperature, and that requires very little heat movement. Since nothing inside is generating heat, you basically have zero watts as a first-order approximation.

For the same device mentioned above, it stops working below 1V, and at 8° delta-T that’s a drop in COP to around zero, but it’s also nearly zero waste. If you were to maintain a constant 2.5V, however, it would continue to try to pull 40W to the hot side. This would cause the internal temperature to drop, and your COP would decrease even though the TEC is using constant power. The delta-T would in fact increase until the inefficiencies match the heat transfer and everything stabilizes. In this case that’s around a 20° drop from the hot side, assuming perfect insulation.

Unlike compressors, TECs have this convenient ability to scale up and down and maintain consistent temperatures; they just can’t respond quickly and dump a ton of heat in the same way.

edit: formatting of list
