Readit News
deadbolt commented on Inside Epstein’s network: what 1.4M emails reveal   economist.com/interactive... · Posted by u/doener
asdff · 5 hours ago
Pam Bondi said that if the DOJ went after everyone, the whole system would collapse. That's why.
deadbolt · 4 hours ago
If that's the case, the system deserves to.

deadbolt commented on High-speed train collision in Spain kills at least 39   bbc.com/news/articles/ced... · Posted by u/akyuu
0xfaded · 25 days ago
Disclaimer: I work for Zoox, but here's us crash testing: https://youtu.be/597C9OwV0o4
deadbolt · 25 days ago
I enjoyed watching that - though it wasn't really related to seating direction specifically.

Are you one of the safety engineers? Have you discovered anything that should be included in standard safety tests but isn't?

deadbolt commented on Texas police invested in phone-tracking software and won’t say how it’s used   texasobserver.org/texas-p... · Posted by u/nobody9999
lingrush4 · 25 days ago
[flagged]
deadbolt · 25 days ago
I've been robbed more frequently by the rich than by the poor.

Of course I'm only one person.

deadbolt commented on ICE Is Going on a Surveillance Shopping Spree   eff.org/deeplinks/2026/01... · Posted by u/BeetleB
voganmother42 · a month ago
Will they murder more or fewer innocent people with better surveillance data?
deadbolt · a month ago
Well, they don't care whether you're innocent or not, so I'm wagering 'more'.

deadbolt commented on A power outage in Colorado caused U.S. official time to be 4.8 microseconds off   npr.org/2025/12/21/nx-s1-... · Posted by u/bryan0
_wire_ · 2 months ago
If they don't know what time it is, how can it be 4.8 us off?
deadbolt · 2 months ago
> All of the atomic clocks continued ticking through the power outage last week thanks to their battery backup systems, according to NIST supervisory research physicist Jeff Sherman. What failed was the connection between some of the clocks and NIST's measurement and distribution systems, he said.
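
To make the quoted explanation concrete, here is a toy sketch of why the official output can drift even while the atomic clocks themselves keep ticking: the output timescale is steered toward the clock ensemble via periodic measurements, and once that link drops, it free-runs. The steering interval, gain, frequency error, and failure time below are all invented for illustration; this is not NIST's actual system or its parameters.

```python
# Toy illustration only: NOT NIST's architecture or real numbers.
# The clocks keep ticking; the output timescale is steered toward
# them via periodic measurements. If the measurement link fails,
# the output free-runs and slowly accumulates an offset.

STEP = 60.0                  # seconds between steering updates (assumed)
RATE_ERROR = 1.5e-11         # fractional frequency error while free-running (invented)
LINK_FAILS_AT = 3 * 24 * 60  # pretend the link drops after day 3 (in minutes)

offset = 0.0  # output time minus ensemble time, in seconds
for minute in range(7 * 24 * 60):   # one simulated week, minute by minute
    offset += RATE_ERROR * STEP     # drift accumulated over this interval
    if minute < LINK_FAILS_AT:
        offset *= 0.1               # steering removes most of the offset
    # after the failure no corrections arrive, so the offset keeps growing

print(f"offset after one week: {offset * 1e6:.2f} microseconds")
```

With these made-up numbers, the free-running output drifts about 5 microseconds over four days without steering - the same order of magnitude as the 4.8 microseconds reported in the article.
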
deadbolt commented on History LLMs: Models trained exclusively on pre-1913 texts   github.com/DGoettlich/his... · Posted by u/iamwil
libraryofbabel · 2 months ago
This is the 2023 take on LLMs. It still gets repeated a lot. But it doesn’t really hold up anymore - it’s more complicated than that. Don’t let some factoid about how they are pretrained on autocomplete-like next token prediction fool you into thinking you understand what is going on in that trillion parameter neural network.

Sure, LLMs do not think like humans and they may not have human-level creativity. Sometimes they hallucinate. But they can absolutely solve new problems that aren’t in their training set, e.g. some rather difficult problems on the last Mathematical Olympiad. They don’t just regurgitate remixes of their training data. If you don’t believe this, you really need to spend more time with the latest SotA models like Opus 4.5 or Gemini 3.

Nontrivial emergent behavior is a thing. It will only get more impressive. That doesn’t make LLMs like humans (and we shouldn’t anthropomorphize them) but they are not “autocomplete on steroids” anymore either.

deadbolt · 2 months ago
As someone who still might have a '2023 take on LLMs', even though I use them often at work, where would you recommend I look to learn more about what a '2025 LLM' is and how it operates differently?

u/deadbolt

Karma: 308 · Cake day: March 26, 2021