I wouldn't say so at all. Poor eyesight carries on smartly. Baldness. I enjoy both.
But an old story about the controller code for a surface-to-air missile comes to mind.
Someone looking at the memory allocator spotted an obvious resource leak: "This code is going to crash."
The reply was that, while the point was theoretically valid, it was irrelevant: the missile would detonate long before resource exhaustion became an issue.
So it was with prostate cancer back in the day: war, famine, and plague kept lifespans well below the threshold of every man's time bomb.
That, right here, is a world-shaking statement. Bravo.
My prompt,
"I'm considering buying stock in the company with symbol NU. The most important thing to me is answering the question, is the stock likely to rise in the future. Please help create a list of questions that will help me to understand the likely hood of this. Also please help to anwser those questions. Please highlight the global economic environment for the company. Any unique challenges and unique advantages. Finally let me know what others think of it"
Results: I know this stock well, although I'm not a pro. It nailed all of the relevant aspects, and the analysis matches everything I know about the company. It pulled lots of helpful resources, and most importantly the information was timely enough to be relevant. The timeliness is where other LLMs have failed miserably: I've gotten good analysis from other LLM products, but it has always been so far out of date as to be useless.
His blog posts about AGI predate OpenAI.
I wonder who theorized this? Altman isn't known for having models about AGI.
To the actual theorist: claiming in one paragraph that AI capability goes as the log of resources, and in the next paragraph that resource costs drop by 10x per year, is a contradiction; the latter paragraph implies a dependence on algorithmic progress that is nothing like "it's just the compute, silly".
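One rough way to see the tension (the functional forms and symbols here are my illustrative assumptions, not anything from the original posts): suppose capability scales as the log of effective compute, and algorithmic efficiency multiplies effective compute by roughly 10x per year. Then even at a fixed hardware budget $R_0$:

```latex
C(t) = k \log\bigl(R_0 \cdot A(t)\bigr), \qquad A(t) = 10^{t}
\;\Longrightarrow\;
C(t) = k \log R_0 + k\, t \log 10
```

Under these assumed forms, capability grows linearly in time from algorithmic progress alone, with hardware contributing only a constant offset — which is precisely a dependence on algorithms, not "just compute".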
We have seen no noticeable improvements (at usable prices) in the seven months since the original Sonnet 3.5 came out.
Maybe specialized hardware for LLM inference will improve so rapidly that o1 (full) will be quick and cheap enough a year from now, but it seems extremely unlikely. For the end user, the top models hadn't gotten cheaper for more than a year until the release of Deepseek v3 a few weeks ago. Even that is currently very slow at non-Deepseek providers, and who knows how heavily subsidized the pricing and speed at Deepseek itself are, given political interests.
If the only thing you owe your interlocutor is to use your "prodigious intellect" to restate their own argument in the way that sounds the most convincing to you, maybe you are in fact a terrible listener.
https://x.com/ESYudkowsky/status/1075854951996256256