ICBTheory commented on AGI Is Mathematically Impossible (3): Kolmogorov Complexity    · Posted by u/ICBTheory
ICBTheory · a month ago
Hey all, apologies for the delayed response. I was on a flight, then had guests, then had to make some rapid decisions involving actual real-world complexity (the kind that is not easily tokenized).

I’ve now had time to read through the thread properly, and I appreciate the range of engagement—even the sharp-edged stuff. Below, I’ve gathered a set of structured responses to the main critique clusters that came up.

And finally 7. On “But humans are finite too—so why not replicable?”

Yes. Humans are finite. But we’re not symbol-bound, and we don’t wait for the frame to stabilize before we act. We move while the structure is still breaking, speak while meaning is still assembling, and decide before we understand—then change what we were deciding halfway through.

NOT because we’re magic. Simply because we’re not built like your architecture (and if you think everything outside your architecture is magic, well…)

If your system needs everything cleanly defined, fully mapped, and symbolically closed before it can take a step, and mine doesn’t, then no, they’re not the same kind of thing.

Maybe this isn’t about scaling up? … Well, it isn’t. It’s about the fact that you can’t emulate improvisation with a bigger spreadsheet. We don’t generalize because we have all the data. We generalize because we tolerate not knowing—and still move.

But hey, sure, keep training. Maybe frame-jumping will spontaneously emerge around parameter 900 billion.

Let me know how that goes

6. On “This is just a critique of current models—not AGI itself”

No.

This isn’t about GPT-4, or Claude, or whatever model’s in vogue this quarter. Neither is it about architecture. It’s about what no symbolic system can do—ever.

If your system is: a) finite, b) bounded by symbols, and c) built on recursive closure

…it breaks down where things get fuzzy: where context drifts, where the problem keeps changing, where you have to act before you even know what the frame is.

That’s not a tuning issue, that IS the boundary. (And we’re already seeing it.)

In The Illusion of Thinking (Shojaee et al., 2025, Apple), they found that as task complexity rises:
- LLMs try less
- Answers get shorter and shallower
- Recursive tasks—like the Tower of Hanoi—just fall apart
- etc.

That’s IOpenER in the wild: Information Opens, Entropy Rises. The theory predicts the divergence, and the models are confirming it—one hallucination at a time.
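
A concrete aside on the Tower of Hanoi item, as a minimal sketch of my own (not from the Shojaee et al. paper): the optimal solution for n disks takes 2^n - 1 moves, so the trace a system has to sustain grows exponentially with the number of disks, which is exactly the regime the paper probes.

    # Minimal Tower of Hanoi sketch (illustration only): the optimal move
    # count is 2**n - 1, so task length explodes exponentially with n.
    def hanoi(n, src="A", dst="C", aux="B", moves=None):
        if moves is None:
            moves = []
        if n == 0:
            return moves
        hanoi(n - 1, src, aux, dst, moves)   # park n-1 disks on the spare peg
        moves.append((src, dst))             # move the largest disk
        hanoi(n - 1, aux, dst, src, moves)   # restack the n-1 disks on top
        return moves

    for n in (3, 7, 10, 15):
        print(n, "disks:", len(hanoi(n)), "moves")   # 7, 127, 1023, 32767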

5. On “Kolmogorov and Chaitin are misused”

It’s a fair concern. Chaitin does get thrown around too easily — usually in discussions that don’t need him.

But that’s not what’s happening here.

– Kolmogorov shows that most strings are incompressible.
– Chaitin shows that even if you find the simplest representation, you can’t prove it’s minimal.
– So any system that “discovers” a concept has no way of knowing it’s found something reusable.

That’s the issue. Without confirmation, generalization turns into guesswork. And in high-K environments — open-ended, unstable ones — that guesswork becomes noise. No poetic metaphor about the mystery of meaning here. It’s a formal point about the limits of abstraction recognition under complexity.
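
For what it’s worth, the counting argument behind the Kolmogorov claim is elementary; here is a quick sketch of my own, not taken from the paper: there are 2^n binary strings of length n but fewer than 2^(n-k) descriptions shorter than n-k bits, so at most a 2^(-k) fraction of them can be compressed by k or more bits, whatever the encoding.

    # Counting argument for "most strings are incompressible" (own sketch):
    # 2**n strings of length n, but fewer than 2**(n - k) programs shorter
    # than n - k bits, so the share of n-bit strings compressible by at
    # least k bits is below 2**(-k), independently of n.
    def max_share_compressible(k: int) -> float:
        return 2.0 ** (-k)

    for k in (1, 10, 20):
        print(f"compressible by >= {k} bits: < {max_share_compressible(k):.5%}")
    # -> < 50.00000%, < 0.09766%, < 0.00010%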

So no, it’s not a misuse. It’s just the part of the theory that gets quietly ignored because it doesn’t deliver the outcome people are hoping for.

4. On “This is just the No Free Lunch Theorem again”

Well … not quite. The No Free Lunch theorem says no optimizer is universally better across all functions. That’s an averaging result.

But this paper is not at all about average-case optimization. It’s about specific classes of problems—social ambiguity, paradigm shifts, semantic recursion—where: a) the tail exponent α ≤ 1, so no mean exists; b) the Kolmogorov complexity is incompressible; and c) the symbol space lacks the needed abstraction.

In these spaces, learning collapses not due to lack of training, but due to structural divergence. Entropy grows with depth. More data doesn’t help. It makes it worse.
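
A small numerical illustration of the α ≤ 1 point, using a Pareto draw as a stand-in for a heavy-tailed problem distribution (my own sketch, not part of the paper): when the tail exponent is at or below 1 the distribution has no finite mean, so running averages keep drifting instead of converging, and more samples genuinely do not help.

    # Heavy-tail illustration (own sketch): for a Pareto tail with alpha <= 1
    # the mean is infinite, so the running average never settles down.
    import random

    def pareto_sample(alpha, n, seed=0):
        rng = random.Random(seed)
        return [(1.0 - rng.random()) ** (-1.0 / alpha) for _ in range(n)]

    for n in (10**3, 10**4, 10**5, 10**6):
        xs = pareto_sample(alpha=1.0, n=n)
        print(f"n={n:>7}: running mean = {sum(xs) / n:,.1f}")
    # The averages drift upward and jump with each new extreme draw, whereas
    # for a finite-mean distribution they would converge.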

That is what “IOpenER” means: Information Opens, Entropy Rises.

It is NOT a theorem about COST… it’s a claim about the structure of meaning. What exactly is so hard to understand about this?

3. On “He redefines AGI to make his result inevitable”

Sure. I redefined AGI. By using… …the definition from OpenAI, DeepMind, Anthropic, IBM, Goertzel, and Hutter.

So unless those are now fringe newsletters, the definition stands:

- A general-purpose system that autonomously solves a wide range of human-level problems, with competence equivalent to or greater than human performance -

If that’s the target, the contradiction is structural: No symbolic system can operate stably in the kinds of semantic drift, ambiguity, or frame collapse that general intelligence actually requires. So if you think I smuggled in a trap, check your own luggage because the industry packed it for me.

2. On “This is just philosophy with no testability”

Yes, the paper is also philosophical. But not in the hand-wavy, incense-burning sense that’s being implied. It makes a formal claim, in the tradition of Gödel, Rice, and Chaitin: Certain classes of problems are structurally undecidable by any algorithmic system.

You don’t need empirical falsification to verify this. You need mathematical framing. Period.

Just as the halting problem isn’t “testable” but still defines what computers can and can’t do, the Infinite Choice Barrier defines what intelligent systems cannot infer within finite symbolic closure.
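
To spell the analogy out: here is the standard diagonalization sketch behind the halting problem, in code form (a textbook illustration, nothing specific to the paper; halts is the hypothetical total decider the argument rules out).

    # Textbook sketch of Turing's diagonal argument (illustration only).
    def halts(prog, arg):
        """Hypothetical total halting decider; the argument shows none exists."""
        raise NotImplementedError("no such decider can exist")

    def paradox(prog):
        if halts(prog, prog):   # would prog(prog) halt?
            while True:         # ...then loop forever,
                pass
        return                  # ...otherwise halt immediately.

    # If halts were total and correct, halts(paradox, paradox) could be neither
    # True nor False without contradiction. No experiment is needed; the limit
    # follows from the definition of the machine itself.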

These are not performance limitations. They are limits of principle.

1. On “The brain obeys physics, physics is computable—so AGI must be possible”

This is the classical foundational syllogism of computationalism. In short:

   1. The brain obeys the laws of physics.
   2. The laws of physics are (in principle) computable.
   3. Therefore, the brain is computable.
   4. Therefore, human-level general intelligence is computable, and AGI is inevitable, just a question of time, power, and compute.

This seems elegant, tidy, and logically sound. And yet it is patently false, at step 3. The mistake is not technical but categorical: simulating a system’s physical behavior is not the same as instantiating its cognitive function.

The flaw is in the logic — it’s nothing less than a category error. The logic breaks exactly where category boundaries are crossed without checking whether the concept still applies. That is not inference; it’s wishful thinking in formalwear. It happens when you confuse simulating a system with being the system. It’s in the jump from simulation to instantiation.

Yes, we can simulate water. -> No, the simulation isn’t wet.

Yes, I can “simulate” a fridge. -> But if I put a beer in it myself, and the beer doesn’t come out cold after some time, then what we’ve built is a metaphor with a user interface, not a cognitive peer.

And yes: we can simulate Einstein discovering special relativity. -> But only after he’s already done it. We can tokenize the insight, replay the math, even predict the citation graph. But that’s not general intelligence; that’s a historical reenactment starring a transformer with a good memory.

Einstein didn’t run inference over a well-formed symbol set. He changed the set, reframed the problem from within the ambiguity. And that is not algorithmic recursion, is it? Nope… That’s cognition at the edge of structure.

If your model can only simulate the answer after history has solved it, then congratulations: you’ve built a cognitive historian, not a general intelligence.

ICBTheory commented on AGI is Mathematically Impossible 2: When Entropy Returns   philarchive.org/archive/S... · Posted by u/ICBTheory
vidarh · 2 months ago
No, I'm not flipping the logic.

> I’m not assuming humans are beyond Turing-computable and then using that to prove that AGI can’t be. I’m saying: here is a provable formal limit for algorithmic systems ->symbolic containment. That’s theorem-level logic.

Any such "proof" is irrelevant unless you can prove that humans can exceed the Turing computable. If humans can't exceed the Turing computable, then any "proof" that shows limits for algorithmic systems that somehow don't apply to humans must inherently be incorrect.

And so you're sidestepping the issue.

> But now you’re asserting that the uncomputable must be computable because humans did it.

No, you're here demonstrating you failed to understand the argument.

I'm asserting that you cannot use the fact that humans can do something as proof that humans exceed the Turing computable, because if humans do not exceed the Turing computable said "proof" would still give the same result. As such it does not prove anything.

And proving that humans exceed the Turing computable is a necessary precondition for proving AGI impossible.

> I don’t claim humans are “super-Turing.”

Then your claim to prove AGI can't exist is trivially false. For it to be true, you would need to make that claim, and prove it.

That you don't seem to understand this tells me you don't understand the subject.

(See also my edit above; your proof also contains elementary failures to understand Turing machines.)

ICBTheory · 2 months ago
You’re misreading what I’m doing, and I suspect you’re also misdefining what a “proof” in this space needs to be.

I’m not assuming humans exceed the Turing computable. I’m not using human behavior as a proof of AGI’s impossibility. I’m doing something much more modest - and much more rigorous.

Here’s the actual chain:

1. There’s a formal boundary for algorithmic systems. It’s called symbolic containment. A system defined by a finite symbol set Σ and rule set R cannot generate a successor frame (Σ′, R′) where Σ′ introduces novel symbols not contained in Σ. This is not philosophy — this is structural containment, and it is provable.

2. Then I observe: in human intellectual history, we find recurring examples of frame expansion. Not optimization, not interpolation — expansion. New primitives. New rules. Special relativity didn’t emerge from Newton through deduction. It required symbols and structures that couldn’t be formed inside the original frame.

3. That’s not “proof” that humans exceed the Turing computable. That’s empirical evidence that human cognition appears to do something algorithmic systems, as formally defined, cannot do.

4. This leads to a conclusion: if AGI is an algorithmic system (finite symbols, finite rules, formal inference), then it will not be capable of frame jumps. And it is not incapable of that because it lacks compute. The system is structurally bounded by what it is.
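
To make point 1 concrete, a toy illustration of my own, using the standard textbook definition rather than the paper’s formalism: a Turing machine’s tape alphabet Γ is fixed when the machine is defined, so every symbol it can ever write is already a member of Γ.

    # Toy Turing-machine step (own sketch): the tape alphabet is fixed at
    # definition time, so nothing the machine writes can lie outside it.
    from dataclasses import dataclass

    @dataclass
    class TM:
        gamma: frozenset   # tape alphabet, fixed up front
        delta: dict        # (state, symbol) -> (state, symbol_to_write, "L"/"R")
        blank: str = "_"

        def step(self, state, tape, head):
            symbol = tape.get(head, self.blank)
            new_state, write, move = self.delta[(state, symbol)]
            assert write in self.gamma, "a TM cannot emit a symbol outside its alphabet"
            tape[head] = write
            return new_state, tape, head + (1 if move == "R" else -1)

Whatever transition table you supply, the symbol written is drawn from the alphabet fixed in advance; there is no move that enlarges it.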

So your complaint that I “haven’t proven humans exceed Turing” is misplaced. I didn’t claim to. You’re asking me to prove something that I simply don’t need to assert.

I’m saying: algorithmic systems can’t do X (provable), and humans appear to do X (observed). Therefore, if humans are purely algorithmic, something’s missing in our understanding of how those systems operate. And if AGI remains within the current algorithmic paradigm, it will not do X. That’s what I’ve shown.

You can still believe humans are Turing machines; that’s fine by me. But if this belief is to be more than some kind of religious statement, then it is you who would need to explain how a Turing machine bounded to Σ can generate Σ′ with Σ′ \ Σ ≠ ∅. It is you who would need to show how uncomputable concepts emerge from computable substrates without violating containment (and that means: without violating its own logic, since in formal systems, logic and containment end up as the same thing: your symbol set defines your expressive space; step outside it, and you’re no longer reasoning — you’re redefining the space, the universe you’re reasoning in).

Otherwise, the limitation stands — and the claim that “AGI can do anything humans do” remains an ungrounded leap of faith.

Also: if you believe the only valid proof of AGI impossibility must rest on metaphysical redefinition of humanity as “super-Turing,” then you’ve set an artificial constraint that ensures no such proof could ever exist, no matter the logic.

That’s trading epistemic rigor for intellectual insulation.

As for your claim that I misunderstand Turing machines, please feel free to show precisely which part fails. The statement that a TM cannot emit symbols not present in its alphabet is not a misunderstanding — it’s the foundation of how TMs are defined. If you think otherwise, then I would politely suggest you review the formal model again.
