Good point about hallucinations - low accuracy, high confidence. I wonder if AI will ever learn to qualify its own confidence. It would be a more useful tool if it could attach a realistic confidence level to its output, much like a human saying, "not sure about this, but..."
The thing is, with enough magical thinking, of course they could do anything. That lets unscrupulous salesmen sell you something that isn't actually possible. They let you do the extrapolation, or they do it for you, promising something that doesn't exist and may never exist.
How many years has Musk been promising "Full Self-Driving", and how many times recently have we seen his cars drive off the road and crash into a tree because they saw a shadow, or drive into a Wile E. Coyote-style fake painted tunnel?
There is some value in weighing what might come in the future, for example when deciding whether to invest in an AI company. But you need to temper a lot of the hype around AI by basing most of your evaluation on what the tools are currently capable of, not on some hypothetical future that is quite far from where they are today.
One of the things that's tricky is that we have had a significant increase in the capability of these tools in the past few years; modern LLMs are far more capable than they were two or three years ago. It's easy to think "well, what if that exponential curve continues? Anything could be possible."
But in most real-life systems, you don't get unlimited exponential growth; you get something closer to a logistic curve. Growth looks exponential at first, but it eventually slows down and approaches a maximum asymptotically.
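To make that concrete, here's a toy sketch; the ceiling K, growth rate r, and inflection point t0 are made-up numbers, not estimates of any real AI metric. The point is just that the two curves are nearly indistinguishable early on, and only later does the logistic one flatten while the exponential keeps compounding.

```python
# Toy comparison of exponential vs. logistic growth (illustrative numbers only).
import math

K = 100.0   # assumed ceiling of the logistic curve
r = 1.0     # assumed growth rate
t0 = 5.0    # assumed inflection point (where logistic growth is fastest)

def logistic(t):
    return K / (1 + math.exp(-r * (t - t0)))

def exponential(t):
    # pinned to the logistic curve's starting value so the early parts overlap
    return logistic(0) * math.exp(r * t)

for t in range(11):
    print(f"t={t:2d}  exponential={exponential(t):12.2f}  logistic={logistic(t):8.2f}")
```

Run it and the two columns track each other for the first few steps, then diverge wildly: the exponential blows past the ceiling while the logistic settles just under K. The catch, of course, is that from inside the early part of the curve you can't tell which one you're on.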
Exactly where we are on that logistic curve is hard to say. If we still have several more years of exponential growth in capability, then sure, maybe anything is possible. But more likely, we've already hit that inflection point, and further growth will come more and more slowly as we approach the limits of this LLM-based approach to AI.