However, CSS and JS are not error-tolerant in the crash-and-recover sense. A syntax error in a CSS rule causes the rule to be ignored, and an unhandled JavaScript exception is a hard stop. In that sense, the web does not run on tolerance.
- Wanna subtract a string from a number? That's not a type error, that's a `NaN` -- which is just a perfectly-valid IEEE 754 float, after all, and we all float down here.
- Hell -- arithmetic between arbitrary data types? Chances are you get `[object Object]` (either as a string literal or an *actual* object), which you can still operate on.
- Accessing an object field but you typoed the field name? No worries, that's just `undefined`, and you can usually keep operating on `undefined` values.

Frankly, while I haven't had a frontend focus in about 15 years, I struggle to think of any situation where calling a stdlib function or standard language feature would result in an actual exception rather than just an off behaviour that accumulates over time the more of them you stack on each other. I guess referencing an undefined variable is a ReferenceError, but beyond that...
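A quick sketch of the silent-coercion behaviours described above (the variable names are illustrative, not from any real codebase):

```javascript
// Subtracting a string from a number: not a type error, just NaN.
const n = 5 - "hello";
console.log(n);               // NaN -- a perfectly valid IEEE 754 float
console.log(n * 2 + 1);       // NaN -- and it propagates silently

// Arbitrary arithmetic on objects: coercion to "[object Object]".
const s = {} + "";
console.log(s.toUpperCase()); // "[OBJECT OBJECT]" -- still a string you can operate on

// Typoed field access: just undefined, which keeps flowing downstream.
const user = { name: "Ada" };
console.log(user.nmae);       // undefined -- no exception
console.log(user.nmae + "!"); // "undefined!" -- concatenation carries on

// Referencing an undefined *variable* is one of the rare hard stops.
try {
  console.log(missingVariable);
} catch (e) {
  console.log(e.name);        // "ReferenceError"
}
```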
(This comment shouldn't be taken as an endorsement of this school of language design)
While I'm not a cryptographer... I never really understood the appeal of these things outside of one very well-defined threat model: namely, they're excellent if you're specifically trying to prevent someone from physically taking your hard drive, and only your hard drive, and walking out of a data centre, office, or home with it.
It also provides measured boot, and I won't downplay that: boot-time integrity attestation is useful in many situations.
The technology's interesting, but as best I can tell, it's limited by the problem of establishing a useful root-of-trust/root-of-crypt. In general:
- If you have resident code on a machine with a TPM, you can access TPM secrets with very few protections. This is typically the case for FDE keys assuming you've set your machine up for unattended boot-time disk decryption.
- You can protect the sealed data exported from a TPM, typically with a password (bound to the PCR banks of a specific TPM), though on TPM variants that live outside the CPU, the way that password reaches the chip is susceptible to bus sniffing. And now you have the problem of securing the password itself. If you're in an enterprise, maybe you have an HSM available to help with that, in which case your root-of-crypt scheme is much more reasonable.
- The TPM does provide some niceties like a hardware RNG. I can't speak to the quality of the randomness, but as I understand it, it must pass NIST's benchmarks to be compliant with the ISO TPM spec.
What I really don't get is why this is useful for the average consumer. In particular, it doesn't meaningfully strengthen FDE in a world where the TPM and storage may be soldered onto the same board (making the drive impractical to steal as a standalone unit rather than together with the TPM).
I certainly don't understand what meaningful protections it can provide to game anti-cheats (which I bring up since apparently Battlefield 6 requires a TPM regardless of the underlying Windows version). That's just silly.
Ultimately, I might be misunderstanding something about the TPM at a fundamental level. I'm not a layperson when it comes to computer security, but I'm certainly not a specialist in designing or working with TPMs, so maybe there's some glaring a-ha thing I've missed. Still, my takeaway is that it's a fine piece of hardware that does its job well, but its job seems too niche to be useful in many cases; its API isn't very clear (suffering, if anything, from over-documentation and over-specification); and it's less a silver bullet and more a footgun.
That bit was heartbreaking to me too. I knew the economy was bad for new grads but if a double major from MIT in SF is struggling, then the economy is cooked.
Compiler development is (for better or worse) a niche that favours people who've got real-world experience doing this. The traditional ways in have been either through high-quality, high-profile open-source contributions, or through an existing non-compiler-dev job that let you inch closer to compiler development until you could make the jump.
As the author noted, a lot of modern-day compiler work involves late-life maintenance of huge, nigh-enterprise-type code bases with thousands of files, millions of LOC, and no one person who has a full, detailed view of the entire project. This just isn't experience you get right out of school, or even a year or two on.
Honestly, I'd say that as a 2023 grad with no mentors in the compiler dev space, she's incredibly lucky to have gotten this job at all (and to be clear, I hope she makes the most of it, compiler dev can be a lot of fun).
Beyond that, if you’re from the part of the world where Asterix comics were popular (mostly the Francosphere, but also Europe more broadly), it really stands out.
That’s all to say nothing of people who’ve got formal higher education in history or even the classics.
We need more casual light-heartedness in this line of work considering how much casual bullshit there is.
For the first few months, act 3 (in the city) was legitimately hard to play: performance problems, instability, and visual glitches were all pervasive. But later patches did a good job of addressing those points.
Act 3's still the most intensive part of the game by far so on many setups it's still wise to at least crank down the crowd density, but it's come a long way since the launch version of the game.
The resulting models might be a terrible hybrid distillation of GPT5 and Claude with a strong preference for hustle culture & banal parables.
Let it burn.
- A good designer will be able to produce a page whose looks are appropriately engaging, complementary to the content, unique, and easy on the eyes. For every site with abrasive CSS (or none at all) like justfuckingusehtml.com, there's a masterpiece like acko.net, many of which just aren't in the mainstream.
- If everything ends up looking the same, wouldn't that get... boring? I get the desire to avoid obnoxious design choices, but those choices are part of the web, and they should be embraced as part of the decision-making process about whether and how you want to keep reading a site. A bit of friction is, IMO, a good thing when browsing the web: it's the minimum level of keeping the web an interactive medium rather than just a content pipe.
That said, you do you. You're well within your rights to browse the web how you want, up to and including using automation to re-style sites with extreme prejudice.