Good puzzles, even hard ones, should give the solver some idea of how to approach them and should offer a method of attack other than brute force.
In actual practice, what I found was that the principal driver of publishing success has absolutely nothing to do with the quality of the work; it has to do with how much your reviewers think you might some day be in a position to review a paper of theirs. This is the fundamental problem with peer review: when career success is measured by the quantity of papers published, the resulting dynamic is governed by game theory, not scientific merit.
I know this because I've been the beneficiary of this system on multiple occasions, and it makes me sick. This is not the kind of world I want to live in.
It is a very broad term, IME, covering anything besides "one-shot through the network".
I think the point about the search formulation is critical: it is amenable to domains like chess and Go, but not to other domains. If LLMs are coming up with effective search formulations for "open-ended" problems, that would be a big deal. Maybe this is what you're alluding to.
That's like saying that Darwinian evolution is simple. It's not entirely wrong, but it misses the point rather badly. The thing that makes search useful is not the search per se, it's the heuristics that reduce an exponential search space to make it tractable. In the case of evolution (which is a search process) the heuristic is that at every iteration you select the best solution on the search frontier, and you never backtrack. That heuristic produces a certain kind of interesting result (life) but it also has certain drawbacks (it's limited to a single quality metric: reproductive fitness).
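To make that concrete, here is a minimal sketch in Python of the heuristic I'm describing: keep a frontier of variations, always advance the single best candidate, and never backtrack. The fitness and mutation functions are hypothetical stand-ins, not anything from a real system.

```python
import random

def greedy_search(seed, fitness, mutate, generations=200, offspring=50):
    """Evolution-style search: at each iteration, generate variations,
    keep only the best solution on the frontier, and never backtrack."""
    best = seed
    for _ in range(generations):
        frontier = [mutate(best) for _ in range(offspring)]
        best = max(frontier + [best], key=fitness)  # greedy step; worse candidates are discarded forever
    return best

# Toy usage: a single quality metric (count of 1-bits) and random point mutations.
fitness = lambda s: sum(s)
mutate = lambda s: [bit ^ (random.random() < 0.05) for bit in s]
print(greedy_search([0] * 32, fitness, mutate))
```

The single scalar `fitness` is the analog of reproductive fitness: the whole search is steered by one quality metric, which is exactly the drawback mentioned above.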
> Beam search is an example of TTC in this modern era.
That's an interesting analogy. I'll have to ponder that.
But my knee-jerk reaction is that it's not enough to say "put reactivity and deliberation together". The manner in which you put them together matters, and in particular, it turns out that putting them together with a third component that manages both the deliberation and the search is highly effective. I can't say definitively that it's the best way -- AFAIK no one has ever actually done the research necessary to establish that. But empirically it produced good results with very little computing power (by today's standards).
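Here is a minimal sketch of one way to read that three-part arrangement, assuming a simple interpretation on my part; the component names and the executive logic are hypothetical, not the actual architecture.

```python
class Agent:
    """Hypothetical three-part loop: a reactive layer for immediate responses,
    a deliberative layer that searches over plans, and an executive that
    decides when to react and when to deliberate, and supervises both."""

    def __init__(self, reactive, deliberative):
        self.reactive = reactive          # fast, cheap policy
        self.deliberative = deliberative  # slow planner / search
        self.plan = []

    def step(self, observation):
        # Executive logic: replan when there is no usable plan or the world
        # has diverged from the plan's assumptions; otherwise act reactively.
        if not self.plan or self.deliberative.plan_invalidated(observation):
            self.plan = self.deliberative.search(observation)  # may be expensive
        if self.plan:
            return self.plan.pop(0)
        return self.reactive.act(observation)                   # cheap default
```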
My gut tells me that the right way to combine LLMs and search is not to have the search manage the LLM, but to provide search as a resource for the LLM to use, kind of like humans use a pocket calculator to help them do arithmetic.
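A minimal sketch of what "search as a resource" might look like, under my assumptions: the harness, the SEARCH(...) convention, and the `llm` and `run_search` functions below are all hypothetical, not any existing API.

```python
def solve_with_search_tool(problem, llm, run_search, max_turns=8):
    """Hypothetical harness in which search is a tool the model can invoke,
    rather than an outer loop that drives the model. `llm` maps a prompt to
    text; `run_search` is any conventional search routine."""
    transcript = (
        "You may write SEARCH(<query or subproblem>) on its own line to have "
        "a search procedure run for you; its results will be appended.\n"
        f"Problem: {problem}\n"
    )
    for _ in range(max_turns):
        reply = llm(transcript)
        transcript += reply + "\n"
        if reply.strip().startswith("SEARCH("):
            query = reply.strip()[len("SEARCH("):-1]
            transcript += f"SEARCH RESULT: {run_search(query)}\n"  # tool output fed back
        else:
            return reply  # model answered directly, i.e. put the calculator down
    return transcript
```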
> If LLMs are coming up with effective search formulation for "open-ended" problems, that would be a big deal.
AFAICT, at the moment LLMs aren't "coming up" with anything; they are just a more effective compression algorithm for vast quantities of data. That's not nothing. You can view the scientific method itself as a compression algorithm. But to come up with original ideas you need something else, something analogous to the random variation and selection in Darwinian evolution. Yes, I know that there is a random element in LLM algorithms, and again I don't really understand the details, but the way in which the randomness is deployed just feels wrong to me somehow.
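For what it's worth, the random element in standard LLM decoding is usually just temperature sampling over the next-token distribution. A minimal sketch (the `logits` input is a hypothetical array of next-token scores) to make the contrast with variation-plus-selection concrete:

```python
import numpy as np

def sample_next_token(logits, temperature=0.8):
    """Standard temperature sampling: randomness perturbs which token comes
    next, but nothing afterwards selects among whole candidate ideas the way
    variation-plus-selection does in an evolutionary search."""
    scaled = np.array(logits) / temperature
    probs = np.exp(scaled - scaled.max())
    probs /= probs.sum()
    return np.random.choice(len(probs), p=probs)
```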
I wish I had more time to think deeply about these things.
This is not true in contemporary Chinese. There are plenty of Chinese words that consist of multiple characters. There are also Chinese characters that have no meaning outside of a multi-character word (e.g. the 葡 in 葡萄).