Readit News
arowthway commented on History LLMs: Models trained exclusively on pre-1913 texts   github.com/DGoettlich/his... · Posted by u/iamwil
mleroy · a day ago
Ontologically, this historical model understands the categories of "Man" and "Woman" just as well as a modern model does. The difference lies entirely in the attributes attached to those categories. The sexism is a faithful map of that era's statistical distribution.

You could RAG-feed this model the facts of WWII, and it would technically "know" about Hitler. But it wouldn't share the modern sentiment or gravity. In its latent space, the vector for "Hitler" has no semantic proximity to "Evil".

arowthway · a day ago
I think much of the semantic proximity to evil can be derived straight from the facts? Imagine telling a pre-1913 person about the Holocaust.
arowthway commented on Security vulnerability found in Rust Linux kernel code   git.kernel.org/pub/scm/li... · Posted by u/lelanthran
arowthway · 2 days ago
I hate this bot-detection anime girl popping up on my monitor while I pretend to be working. Same goes for the funny pictures at the beginning of some GitHub readmes. Sorry for complaining about a tangential annoyance, but I haven't seen this particular sentiment expressed yet.
arowthway commented on Forget the far right. The kids want a 'United States of Europe.'   politico.eu/article/unite... · Posted by u/saubeidl
DivingForGold · 5 days ago
Seems more like the EU has descended into a "paper tiger"; it's now reported the Russians are mapping out all the EU military bases in advance with drones from cargo ships near EU shores. It will be a real sh*t show to see the results when the Russians attack NATO across multiple fronts, and the EU is forced to place rifles in the hands of all these EU kids and send them to the front lines . . . as you have admitted: "decades of dismal economic and social outcomes . . ."
arowthway · 5 days ago
This scenario was a lot scarier before 2022.
arowthway commented on Auto-grading decade-old Hacker News discussions with hindsight   karpathy.bearblog.dev/aut... · Posted by u/__rito__
jasonthorsness · 10 days ago
It's fun to read some of these historic comments! A while back I wrote a replay system to better capture how discussions evolved at the time of these historic threads. Here's Karpathy's list from his graded articles, in the replay visualizer:

Swift is Open Source https://hn.unlurker.com/replay?item=10669891

Launch of Figma, a collaborative interface design tool https://hn.unlurker.com/replay?item=10685407

Introducing OpenAI https://hn.unlurker.com/replay?item=10720176

The first person to hack the iPhone is building a self-driving car https://hn.unlurker.com/replay?item=10744206

SpaceX launch webcast: Orbcomm-2 Mission [video] https://hn.unlurker.com/replay?item=10774865

At Theranos, Many Strategies and Snags https://hn.unlurker.com/replay?item=10799261

arowthway · 9 days ago
Comment dates on the HN frontend are sometimes altered when submissions are merged; do you handle that case properly?
arowthway commented on Horses: AI progress is steady. Human equivalence is sudden   andyljones.com/posts/hors... · Posted by u/pbui
FuckButtons · 11 days ago
Why make the analogy at all if not for the implied slaughter. It is a visceral reminder of our own brutal history. Of what humans do given the right set of circumstances.
arowthway · 11 days ago
How is decreasing the number of horses killed every year brutal?
arowthway commented on Bag of words, have mercy on us   experimental-history.com/... · Posted by u/ntnbr
mapontosevenths · 12 days ago
There is nothing special about that either. LLMs also have self-awareness/introspection, or at least some version of it.

https://www.anthropic.com/research/introspection

It's hard to tell sometimes because we specifically train them to believe they don't.

arowthway · 11 days ago
Thanks for the link, I haven't seen this before and it's interesting.

I don't think the version of self-awareness they demonstrated is synonymous with subjective experience. But the same thing can be said about any human other than me.

Damn, just let me believe all brains are magical or I'll fall into solipsism.

arowthway commented on Bag of words, have mercy on us   experimental-history.com/... · Posted by u/ntnbr
akoboldfrying · 12 days ago
LLMs and human brains are both just mechanisms. Why would one mechanism a priori be capable of "learning abstract thought", but no others?

If it turns out that LLMs don't model human brains well enough to qualify as "learning abstract thought" the way humans do, some future technology will do so. Human brains aren't magic, special or different.

arowthway · 12 days ago
For some unexplainable reason your subjective experience happens to be localized in your brain. Sounds pretty special to me.
arowthway commented on I don't care how well your "AI" works   fokus.cool/2025/11/25/i-d... · Posted by u/todsacerdoti
arowthway · 24 days ago
I like the "what’s left" part of the article. It’s applicable regardless of your preferred flavor of resentment about where things are going.
arowthway commented on Ask HN: How are Markov chains so different from tiny LLMs?    · Posted by u/JPLeRouzic
vidarh · a month ago
Turing machines are deterministic if all their inputs are deterministic, which they do not need to be. Indeed, by default LLMs are not deterministic, because we intentionally inject randomness.
arowthway · a month ago
That doesn't mean we can accurately simulate the brain by swapping out its source of nondeterminism for any other PRNG or TRNG. It might just so happen that to simulate ingenuity you have to simulate the universe first.
arowthway commented on Ask HN: How are Markov chains so different from tiny LLMs?    · Posted by u/JPLeRouzic
lotyrin · a month ago
Yeah, some of the failure modes are the same. This one in particular is fun because even a human, given "the the the" and asked to predict what's next, will probably still answer "the". How a Markov chain starts the "the" train and how an LLM does are pretty different, though.
arowthway · a month ago
I wonder if the "X is not Y - it's Z" LLM shibboleth is just an artifact of "is not" being the third most common bigram starting with "is", just after "is a" and "is the" [0]. It doesn't follow as simply as it does with Markov chains, but maybe this is where the tendency originated, and it was later trained and RLHF'd into a shape that kind of makes sense instead of being eliminated.

[0] https://books.google.com/ngrams/graph?content=is+*
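The bigram frequencies the footnote points at are just successor counts, which is all a first-order Markov chain uses to predict the next word. A minimal sketch of that idea, using a made-up toy corpus (not real ngram data) in which "is a" is deliberately more frequent than "is the", which is more frequent than "is not":

```python
from collections import Counter

# Toy corpus, constructed so that successors of "is" rank a > the > not.
corpus = (
    "this is a test this is a cat this is a dog "
    "it is the point it is the goal it is not over"
).split()

# Count bigrams: how often each word follows each other word.
bigrams = Counter(zip(corpus, corpus[1:]))

def next_words(word, k=3):
    """Most frequent successors of `word`, i.e. a first-order Markov prediction."""
    candidates = {b: c for (a, b), c in bigrams.items() if a == word}
    return sorted(candidates, key=candidates.get, reverse=True)[:k]

print(next_words("is"))  # → ['a', 'the', 'not']
```

On real corpus statistics the ranking would come from billions of tokens rather than three sentences, but the mechanism is the same: "is not" being a high-frequency continuation makes it a cheap next-token choice.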

u/arowthway

Karma: 42 · Cake day: September 3, 2025