Readit News
ICBTheory commented on Formal Proof: LLM Hallucinations Are Structural, Not Statistical (Coq Verified)   philpapers.org/rec/SCHTIC... · Posted by u/ICBTheory
wiz21c · 2 months ago
Since we can't avoid hallucinations, maybe we can live with them?

I mean, I regularly use LLMs, and although they sometimes go a bit mad, most of the time they're really helpful.

ICBTheory · 2 months ago
I'd say that conclusion is a manifestation of pragmatic wisdom.

Anyway: I agree. The paper certainly doesn't argue that AI is useless, but that autonomy in high-stakes domains is mathematically unsafe.

In the text, I distinguish between operating on an 'Island of Order' (where hallucinations are cheap and correctable, like fixing a syntax error in code) versus navigating the 'Fat-Tailed Ocean' (where a single error is irreversible).

Tying this back to your comment: If an AI hallucinates a variable name — no problem, you just fix it. But I would advise skepticism if an AI suggests telling your boss that 'his professional expertise still has significant room for improvement.'

If hallucinations are structural (as the Coq proof in Part II indicates), then 'living with them' means ensuring the system never has the autonomy to execute that second type of decision.
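For readers who want to see the fat-tail intuition in numbers, here is a minimal sketch (my own illustration, not from the paper; the exponential-versus-Pareto comparison and the exponent 0.8 are assumptions). It asks how much of the total damage the single worst error accounts for:

    # Illustration only: share of total "damage" carried by the single worst error,
    # thin-tailed vs. fat-tailed error costs. Distribution choices are assumptions.
    import numpy as np

    rng = np.random.default_rng(0)
    n = 100_000

    thin = rng.exponential(scale=1.0, size=n)   # "Island of Order": small, bounded-ish errors
    fat = 1.0 + rng.pareto(0.8, size=n)         # "Fat-Tailed Ocean": tail exponent alpha < 1

    print("thin-tailed: worst / total =", float(thin.max() / thin.sum()))  # shrinks toward 0
    print("fat-tailed : worst / total =", float(fat.max() / fat.sum()))    # stays a real share

With alpha < 1 the single worst draw keeps claiming a non-vanishing, sometimes overwhelming share of the total no matter how many samples you add; with thin tails it shrinks toward zero, which is the correctable-syntax-error regime.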

ICBTheory commented on Formal Proof: LLM Hallucinations Are Structural, Not Statistical (Coq Verified)   philpapers.org/rec/SCHTIC... · Posted by u/ICBTheory
ICBTheory · 2 months ago
Author here.

This paper is Part III of a trilogy investigating the limits of algorithmic cognition. Given the recent industry signals regarding "scaling plateaus" (e.g., from Sutskever and others), I attempt to formalize why these limits appear structurally unavoidable.

The Thesis: We model modern AI as a Probabilistic Bounded Semantic System (P-BoSS). The paper demonstrates via the "Inference Trilemma" that hallucinations are not transient bugs to be fixed by more data, but mathematical necessities when a bounded system faces fat-tailed domains (α ≤ 1).

The Proof: While this paper focuses on the CS implications, the underlying mathematical theorems (Rice’s Theorem applied to Semantic Frames, Sheaf Theoretic Gluing Failures) are formally verified using Coq.

You can find the formal proofs and the Coq code in the companion paper (Part II) here:

https://philpapers.org/rec/SCHTIC-16

I'm happy to discuss the P-BoSS definition and why probabilistic mitigation fails in divergent entropy regimes.
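To make "divergent entropy regimes" a bit more concrete, here is a small numerical sketch (my own illustration; the truncated power-law p(k) ~ 1/k is an assumption standing in for the α ≤ 1 regime discussed above). As the support widens, the Shannon entropy keeps rising instead of saturating:

    # Illustration only: entropy of a truncated power-law p(k) ~ 1/k on {1..N}
    # keeps growing as N grows -- widening the space never "closes" the distribution.
    import numpy as np

    def powerlaw_entropy(N, s=1.0):
        k = np.arange(1, N + 1, dtype=float)
        p = k ** (-s)
        p /= p.sum()
        return float(-(p * np.log2(p)).sum())

    for N in (10**3, 10**4, 10**5, 10**6, 10**7):
        print(f"N = {N:>10,}   H = {powerlaw_entropy(N):.2f} bits")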

ICBTheory commented on AGI Is Mathematically Impossible (3): Kolmogorov Complexity    · Posted by u/ICBTheory
ICBTheory · 7 months ago
Hey all, apologies for the delayed response. I was on a flight, then had guests, then had to make some rapid decisions involving actual real-world complexity (the kind that is not easily tokenized).

I’ve now had time to read through the thread properly, and I appreciate the range of engagement—even the sharp-edged stuff. Below, I’ve gathered a set of structured responses to the main critique clusters that came up.

ICBTheory · 7 months ago
And finally, 7. On "But humans are finite too, so why not replicable?"

Yes. Humans are finite. But we're not symbol-bound, and we don't wait for the frame to stabilize before we act. We move while the structure is still breaking, speak while meaning is still assembling, and decide before we understand—then change what we were deciding halfway through.

NOT because we’re magic. Simply because we’re not built like your architecture (and if you think everything outside your architecture is magic, well…)

If your system needs everything cleanly defined, fully mapped, and symbolically closed before it can take a step, and mine doesn't, then no, they're not the same kind of thing.

Maybe this isn't about scaling up? … Well, it isn't. It's about the fact that you can't emulate improvisation with a bigger spreadsheet. We don't generalize because we have all the data. We generalize because we tolerate not knowing—and still move.

But hey, sure, keep training. Maybe frame-jumping will spontaneously emerge around parameter 900 billion.

Let me know how that goes.

ICBTheory commented on AGI Is Mathematically Impossible (3): Kolmogorov Complexity    · Posted by u/ICBTheory

ICBTheory · 7 months ago
6. On “This is just a critique of current models—not AGI itself”

No.

This isn't about GPT-4, or Claude, or whatever model's in vogue this quarter. Nor is it about architecture. It's about what no symbolic system can do—ever.

If your system is: a) finite, b) bounded by symbols, c) built on recursive closure

…it breaks down where things get fuzzy: where context drifts, where the problem keeps changing, where you have to act before you even know what the frame is.

That's not a tuning issue; that IS the boundary. (And we're already seeing it.)

In The Illusion of Thinking (Shojaee et al., 2025, Apple), they found that as task complexity rises:
- LLMs try less
- answers get shorter and shallower
- recursive tasks, like the Tower of Hanoi, simply fall apart
- and so on

That's IOpenER in the wild: Information Opens, Entropy Rises. The theory predicts the divergence, and the models are confirming it—one hallucination at a time.
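To see why the Tower of Hanoi is such a stress test for a bounded generator, here is a short sketch (my own framing of the benchmark, not a claim from Shojaee et al.): the minimal solution takes 2^n - 1 moves, so the required output grows exponentially with the number of disks.

    # Illustration only: minimal Tower of Hanoi solutions blow up exponentially.
    def hanoi(n, src="A", dst="C", aux="B"):
        """Return the full minimal move sequence for n disks."""
        if n == 0:
            return []
        return hanoi(n - 1, src, aux, dst) + [(src, dst)] + hanoi(n - 1, aux, dst, src)

    for n in (3, 5, 8, 12):
        moves = hanoi(n)
        assert len(moves) == 2**n - 1
        print(f"{n:>2} disks -> {len(moves):>5} moves")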

ICBTheory commented on AGI Is Mathematically Impossible (3): Kolmogorov Complexity    · Posted by u/ICBTheory

ICBTheory · 7 months ago
5. On “Kolmogorov and Chaitin are misused”

It's a fair concern. Chaitin does get thrown around too easily, usually in discussions that don't need him.

But that’s not what’s happening here.

– Kolmogorov shows that most strings are incompressible.
– Chaitin shows that even if you find the simplest representation, you can't prove it's minimal.
– So any system that "discovers" a concept has no way of knowing it's found something reusable.

That’s the issue. Without confirmation, generalization turns into guesswork. And in high-K environments — open-ended, unstable ones — that guesswork becomes noise. No poetic metaphor about the mystery of meaning here. It’s a formal point about the limits of abstraction recognition under complexity.

So no, it’s not a misuse. It’s just the part of the theory that gets quietly ignored because it doesn’t deliver the outcome people are hoping for.
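For anyone who wants to poke at the incompressibility point, here is a minimal sketch (my own illustration; zlib is only a crude, computable stand-in for Kolmogorov complexity, which is itself uncomputable). The counting bound is the exact part; the compression demo is merely suggestive:

    # Illustration only: (1) the counting argument -- fewer than 2**(n-k) programs are
    # shorter than n-k bits, so at most a 2**(-k) fraction of n-bit strings can be
    # compressed by k or more bits; (2) zlib as a rough proxy for incompressibility.
    import os
    import zlib

    for k in (1, 8, 16, 32):
        print(f"fraction of strings compressible by >= {k:>2} bits  <  2^-{k} = {2.0**-k:.1e}")

    random_bytes = os.urandom(4096)   # a "typical" (random) string: barely compresses
    regular_bytes = b"ab" * 2048      # a highly regular string: collapses
    print("random : 4096 ->", len(zlib.compress(random_bytes, 9)), "bytes")
    print("regular: 4096 ->", len(zlib.compress(regular_bytes, 9)), "bytes")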

ICBTheory commented on AGI Is Mathematically Impossible (3): Kolmogorov Complexity    · Posted by u/ICBTheory

ICBTheory · 7 months ago
4. On “This is just the No Free Lunch Theorem again”

Well … not quite. The No Free Lunch theorem says no optimizer is universally better across all functions. That’s an averaging result.

But this paper is not about average-case optimization at all. It's about specific classes of problems—social ambiguity, paradigm shifts, semantic recursion—where: a) the tail exponent α ≤ 1, so no mean exists, b) the descriptions are incompressible (high Kolmogorov complexity), and c) the symbol space lacks the needed abstraction.

In these spaces, learning collapses not due to lack of training, but due to structural divergence. Entropy grows with depth. More data doesn’t help. It makes it worse.

That is what “IOpenER” means: Information Opens, Entropy Rises.

It is NOT a theorem about COST… rather a claim about the structure of meaning. What exactly is so hard to understand about this?
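A quick numerical sketch of the "more data makes it worse" point (my own illustration; the Pareto draws and the exponents 0.8 and 3.0 are assumptions standing in for the α ≤ 1 and benign regimes): when the tail exponent is at or below 1, the running average never settles, however many samples you add.

    # Illustration only: running averages under fat vs. moderate tails.
    import numpy as np

    rng = np.random.default_rng(42)
    n = 1_000_000

    for alpha in (0.8, 3.0):
        x = 1.0 + rng.pareto(alpha, size=n)                # Pareto with tail exponent alpha
        running_mean = np.cumsum(x) / np.arange(1, n + 1)
        snapshots = [running_mean[10**p - 1] for p in (3, 4, 5, 6)]
        print(f"alpha = {alpha}:", [f"{m:.1f}" for m in snapshots])
    # alpha = 3.0 settles near its true mean of 1.5; alpha = 0.8 keeps drifting upward.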

ICBTheory commented on AGI Is Mathematically Impossible (3): Kolmogorov Complexity    · Posted by u/ICBTheory

ICBTheory · 7 months ago
3. On “He redefines AGI to make his result inevitable”

Sure. I redefined AGI. By using… the definition from OpenAI, DeepMind, Anthropic, IBM, Goertzel, and Hutter.

So unless those are now fringe newsletters, the definition stands:

"A general-purpose system that autonomously solves a wide range of human-level problems, with competence equivalent to or greater than human performance."

If that’s the target, the contradiction is structural: No symbolic system can operate stably in the kinds of semantic drift, ambiguity, or frame collapse that general intelligence actually requires. So if you think I smuggled in a trap, check your own luggage because the industry packed it for me.

ICBTheory commented on AGI Is Mathematically Impossible (3): Kolmogorov Complexity    · Posted by u/ICBTheory

ICBTheory · 7 months ago
2. On “This is just philosophy with no testability”

Yes, the paper is also philosophical. But not in the hand-wavy, incense-burning sense that’s being implied. It makes a formal claim, in the tradition of Gödel, Rice, and Chaitin: Certain classes of problems are structurally undecidable by any algorithmic system.

You don’t need empirical falsification to verify this. You need mathematical framing. Period.

Just as the halting problem isn’t “testable” but still defines what computers can and can’t do, the Infinite Choice Barrier defines what intelligent systems cannot infer within finite symbolic closure.

These are not performance limitations. They are limits of principle.
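Since the halting problem is doing the analogical work here, a minimal sketch of the classical diagonal argument may help (my own illustration, nothing from the paper). The point is precisely that the decider below cannot exist, so the limit is provable rather than testable:

    # Illustration only: the diagonal argument behind the halting problem.
    def halts(program, data):
        """Hypothetical total decider: True iff program(data) eventually halts."""
        raise NotImplementedError("no correct, total implementation can exist")

    def diagonal(program):
        # Do the opposite of whatever the decider predicts about program run on itself.
        if halts(program, program):
            while True:     # decider says "halts" -> loop forever
                pass
        return              # decider says "loops" -> halt immediately

    # diagonal(diagonal) contradicts the decider whichever way it answers,
    # so no such 'halts' can exist -- a limit of principle, not of performance.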

ICBTheory commented on AGI Is Mathematically Impossible (3): Kolmogorov Complexity    · Posted by u/ICBTheory

ICBTheory · 7 months ago
1. On “The brain obeys physics, physics is computable—so AGI must be possible”

This is the classical foundational syllogism of computationalism. In short:

   1. The brain obeys the laws of physics.
   2. The laws of physics are (in principle) computable.
   3. Therefore, the brain is computable.
   4. Therefore, human-level general intelligence is computable, and AGI is inevitable and a question of time, power and compute.

This seems elegant, tidy, logically sound. And: it is patently false — at step 3. This common mistake is not technical, but categorical: simulating a system's physical behavior is not the same as instantiating its cognitive function.

The flaw is in the logic — it's nothing less than a category error. The logic breaks exactly where a category boundary is crossed without checking whether the concept still applies. That is not inference; it is wishful thinking in formalwear. It happens when you confuse simulating a system with being the system. It's in the jump from simulation to instantiation.

Yes, we can simulate water. -> No, the simulation isn’t wet.

Yes, I can "simulate" a fridge. -> But if I put a beer in it myself, and the beer doesn't come out cold after some time, then what we've built is a metaphor with a user interface, not a cognitive peer.

And yes: we can simulate Einstein discovering special relativity. -> But only after he's already done it. We can tokenize the insight, replay the math, even predict the citation graph. But that's not general intelligence, that's a historical reenactment starring a transformer with a good memory.

Einstein didn’t run inference over a well-formed symbol set. He changed the set, reframed the problem from within the ambiguity. And that is not algorithmic recursion, is it? Nope… That’s cognition at the edge of structure.

If your model can only simulate the answer after history has solved it, then congratulations: you’ve built a cognitive historian, not a general intelligence.
