The closest parallel I’ve found is Peter Gärdenfors’ work on conceptual spaces, where meaning isn’t symbolic but geometric. Fedorenko’s research on predictive sequencing in the brain fits too. In both cases, the idea is that language follows a trajectory through a shaped mental space, and that’s basically what GPT is doing. It doesn’t know anything, but it generates plausible paths through a statistical terrain built from our own language use.
So when it “hallucinates”, that’s not a bug so much as a consequence of the system having no grounding. It’s doing exactly what it was designed to do: complete the next step in a pattern. Sometimes that’s wildly useful. Sometimes it’s nonsense. The trick is knowing which is which.
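To make the “trajectory through a statistical terrain” idea concrete, here’s a toy sketch in Python. The bigram table and all its numbers are made up by me for illustration, and a real model conditions on far more than the previous word, but the mechanism is the same shape: sample the next step in proportion to learned probabilities, and a fluent-looking path falls out with no grounding behind it.

```python
import random

# Toy "statistical terrain": next-word probabilities, standing in for
# what a language model learns from co-occurrence in a corpus.
# Every word and number here is invented, purely for illustration.
bigram = {
    "the":   {"cat": 0.5, "moon": 0.3, "idea": 0.2},
    "cat":   {"sat": 0.6, "slept": 0.4},
    "moon":  {"rose": 0.7, "slept": 0.3},
    "idea":  {"slept": 1.0},
    "sat":   {"quietly": 1.0},
    "rose":  {"quietly": 1.0},
    "slept": {"quietly": 1.0},
}

def generate(start, steps=3):
    """Trace one plausible path through the terrain: at each step,
    sample the next word in proportion to its probability."""
    path = [start]
    for _ in range(steps):
        options = bigram.get(path[-1])
        if not options:  # dead end: no statistics to continue from
            break
        words, probs = zip(*options.items())
        path.append(random.choices(words, weights=probs)[0])
    return " ".join(path)

random.seed(0)
for _ in range(3):
    print(generate("the"))
# Each output is locally fluent; a path like "the idea slept quietly"
# is "plausible" purely by the numbers, not because anything knows
# whether ideas sleep.
```

Swap the sampling for always taking the highest-probability word and you get the dullest path through the terrain; add more randomness and you get more of what reads as hallucination. Same landscape either way.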
What’s weird is that once you internalise this, you can work with it as a kind of improvisational system. If you stay in the loop, challenge it, steer it, it feels more like a collaborator than a tool.
That’s how I use it anyway. Not as a source of truth, but as a way of moving through ideas faster.
Let's not assume bygone days ever were what we think they were.
No doubt about the last part, but how does merging two giants create "More Choice"? I know corporate double-speak is already out of control, and I know they're writing whatever they can to avoid the regulators who are surely looking into the acquisition, but these executives can't actually believe acquisitions lead to more choice, right?
> If every time you write a blog post it takes you six months, and you're sitting around your apartment on a Sunday afternoon thinking of stuff to do, you're probably not going to think of starting a blog post, because it'll feel too expensive.
Author in 2025:
> This is why it’s so useful to work on an article for a long time. If you’re reporting on something for six months, even if the really concentrated part, the key visit, is only a week or two of that, you have time for notes to accumulate.
What a difference 10 years of experience makes, eh?