malcontented commented on Hierarchical Reasoning Model (arxiv.org/abs/2506.21734...) · Posted by u/hansmayer
bubblyworld · a month ago
> CoT models can, in principle, solve _any_ complex task.

The authors explicitly discuss the expressive power of transformers and CoT in the introduction. They can only solve problems in a fairly restrictive complexity class (lower than PTIME!) - it's one of the theoretical motivations for the new architecture.

"The fixed depth of standard Transformers places them in computational complexity classes such as AC0 [...]"

This architecture, by contrast, is recurrent, with inference time controlled by the model itself (a small Q-learning-based subnetwork decides when to halt as it "thinks"), so no such limitation applies. Schematically, the halting loop might look something like the sketch below.
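
A toy sketch with my own naming ("core", "q_head") standing in for the paper's actual modules:

    import torch
    import torch.nn as nn

    hidden = 128
    core = nn.GRUCell(hidden, hidden)  # stand-in for one recurrent "thinking" step
    q_head = nn.Linear(hidden, 2)      # small Q-network: values for [halt, continue]

    def adaptive_compute(x, max_steps=16):
        z = torch.zeros(1, hidden)
        for step in range(max_steps):
            z = core(x, z)                 # update the hidden state
            q_halt, q_cont = q_head(z)[0]  # estimated value of halting vs continuing
            if q_halt > q_cont:            # the model picks its own inference depth
                break
        return z, step + 1

    z, n_steps = adaptive_compute(torch.randn(1, hidden))

In the real thing the Q-head is trained against Q-learning targets and everything is batched, but that's the gist of model-controlled compute.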

The main meat of the paper is how to train this architecture efficiently, since that has historically been the sticking point with recurrent nets.
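
If I'm reading it right, the trick is deep-equilibrium-flavoured: unroll the recurrence with gradients off, then backprop through the final step only, so memory stays constant in depth rather than growing linearly as with BPTT. Roughly, continuing the toy sketch above (my simplification, not the paper's code):

    def one_step_grad_loss(x, target, n_steps=16):
        z = torch.zeros(1, hidden)
        with torch.no_grad():          # unroll cheaply, storing no activations
            for _ in range(n_steps - 1):
                z = core(x, z)
        z = core(x, z)                 # gradients flow through this last step only
        return ((z - target) ** 2).mean()

    loss = one_step_grad_loss(torch.randn(1, hidden), torch.zeros(1, hidden))
    loss.backward()                    # constant memory in depth, unlike full BPTT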

malcontented · a month ago
Agreed regarding the computational limitations of CoT LLMs, and this solution certainly has much more flexibility. But is there a reason to believe that this architecture (and training method) is as applicable to developing generally capable models as it is to solving individual puzzles?

Don't get me wrong, this is a cool development, and I would love to see how this architecture behaves on a constraint-based problem that isn't easily tractable via a traditional algorithm.

malcontented commented on Hierarchical Reasoning Model (arxiv.org/abs/2506.21734...) · Posted by u/hansmayer
JBits · a month ago
> CoT models can, in principle, solve _any_ complex task.

What is the justification for this? Is there a mathematical proof? To me, CoT seems like a hack to work around the severe limitations of current LLMs.

malcontented · a month ago
That's a fair argument to make. Perhaps I should have written "are supposed to be able to," or "have become famous for their apparent ability to solve loosely specified, arbitrary problems."

CoT _is,_ in my mind at least, a hack bolted onto LLMs to create some loose approximation of reasoning. When I read the paper I expected to see a better hack, but I could not find anything on how you take this architecture, interesting though it is, and put it to use the way CoT is used. The whole paper seems to pivot wildly between the fully general biomimetic grandeur of its first half and the narrow effectiveness of its second.

malcontented commented on Hierarchical Reasoning Model (arxiv.org/abs/2506.21734...) · Posted by u/hansmayer
malcontented · a month ago
I appreciate the connections with neuroscience, and the paper itself doesn't ring any alarm bells; I don't think I'd reject it if it fell to me to peer-review it.

However, I am extremely skeptical about the applicability of this finding. Based on what they have written, they seem to have created a universal (or at the very least adaptable) constraint-satisfaction solver that learns the rules of the problem from a small number of examples. If true (I have not yet had the leisure to replicate their examples and try them on something else), this is pretty cool, but I do not understand the comparison with CoT models.

CoT models can, in principle, solve _any_ complex task. This needs to be trained on a specific puzzle, which it can then solve: it makes no pretense to universality. It isn't even clear that it is meant to be capable of adapting to any given puzzle. I suspect it is not, based on what I have read in the paper and on the indicative choice of examples they tested it against.

This is kind of like claiming that Stockfish is way smarter than current state-of-the-art LLMs because it can beat the stuffing out of them in chess.

I feel the authors have a good idea here, but that they have marketed it a bit too... generously.
