Readit News
stingraycharles · 2 months ago
This is not a really valuable article. The Apple paper was widely considered a "well, duh" paper, GPT5 being underwhelming seems to be mostly a cost-cutting / supply-can't-keep-up issue, and the rest are mainly just expert opinions.

To be clear, I am definitely an AGI skeptic, and I very much believe that our current technique of running neural networks on GPUs is extremely inefficient, but this article doesn't really add a lot to the discussion; it mostly congratulates itself on the insights of a few others.

an0malous · 2 months ago
I don’t think either of your first two statements is accurate. What is your support for those claims?
p1esk · 2 months ago
GPT5 compared to the original GPT4 is a huge improvement. It exceeded all my expectations from 2 years ago. I really doubted most GPT4 limitations would be resolved so quickly.

If they manage a similar quality jump with GPT6, it will probably meet most reasonable definitions of AGI.

dns_snek · 2 months ago
> GPT5 compared to the original GPT4 is a huge improvement. It exceeded all my expectations from 2 years ago.

Cool story. In my experience they're still on the same order of magnitude of usefulness as the original Copilot.

Every few months I read about these fantastic ground-breaking improvements and fire up whatever the trendiest toolchain is (most recently Claude Code and Cursor) and walk away less disappointed than last time, but ultimately still disappointed.

On simple tasks it doesn't save me any time but on more complex tasks it always makes a huge mess unless I mentor it like a really junior coworker. But if I do that I'm not saving any time and I end up with lower quality, poorly organized code that contains more defects than if I did it myself.

stogot · 2 months ago
What if GPT6 and Claude5 are equally duds?
tim333 · 2 months ago
I'm not sure it matters much if they are underwhelming except some investors may be down a little, but such is the nature of investing.

I think AGI is coming but more along the lines of what Karpathy was saying "They're cognitively lacking and it's just not working. It will take about a decade to work through all of those issues."

I don't get the Marcus "GPT5 isn't AGI, so it's game over" type stuff. New technologies take time to develop.

make3 · 2 months ago
gpt 5 is not a dud though
stingraycharles · 2 months ago
I mean, they’re definitely not duds, and I personally foresee incremental improvements, not revolutionary. Which is fine.
mellosouls · 2 months ago
As somebody who used to link to the occasional Marcus essay, this is a really poor "article" by a writer who has really declined to the point of boorishness. The contents here are just a list of talking points already mostly discussed on HN, so nothing new, and his over-familiar soapbox proclamations add nothing to the discourse.

It's not that he's wrong; I probably still have a great deal of sympathy with his position, but his approach is more suited to social media echo chambers than intelligent discussion.

I think it would be useful for him to take an extended break, and perhaps we could also do the same from him here.

hopelite · 2 months ago
I’m not sure an ad hominem assault is any different. You make proclamations without any evidence, as if what you say has any more credibility than the next person's. In fact, this response makes a reasonable person discount you.

Sure, it reads like a biased, coping, possibly even self-interested or paid hit piece, as if what happens can be changed just by being really negative about LLMs. But maybe consider taking your own advice there, kid; you know, an extended break.

mellosouls · 2 months ago
Please give an example of how we might criticise somebody's method of communication and general decline in useful contributions to debate (on the order of Marcus's) without you complaining of ad hominem.
socketcluster · 2 months ago
TBH, I don't think we actually need AGI. It's not a win-win; it's a civilization-altering double-edged sword with unclear consequences.

I'm quite satisfied with current LLM capabilities. Their lack of agency is actually a feature, not a bug.

An AGI would likely end up implementing some kind of global political agenda. IMO, the need to control things and move things in a specific, unified direction is a problem, not a solution.

With full agency, an AI would likely just take over the world and run it in ways which don't benefit anyone.

Agency manifests as thirst for power. Agency is man's inability to sit quietly in a room, by himself. This is a double-edged sword which becomes increasingly harmful once you run out of problems to solve... Then agency demands that new problems be invented.

Agency is not the same as consciousness or awareness. Too much agency can be dangerous.

We can't automate the meaning of life. Technology should facilitate us in pursuing what we individually decide to be meaningful. The individual must be given the freedom to decide their own purpose. If most individuals want to be used to fulfill some greater purpose (i.e., someone else's goals), so be it, but that should not be the compulsory plan for everyone.

dzink · 2 months ago
Why the patting yourself on the back after a few opinions? You have large tech and government players, and then you have regular people.

1. For large players: AGI is a mission worth pursuing at the cost of all existing profit (you won’t pay taxes today, the stock market values you on revenue anyway, and if you succeed you can control all people and means of production).

2. For regular people the current AI capabilities have already led to either life changing skill improvement for those who make things for themselves or life changing likely permanent employment reduction for those who do things for others. If current AI is sufficient to meaningfully reduce the employment market, AGI doesn’t matter much to regular people. Their life is altered and many will be looking for manual work until AI enters that too.

3. The AI vendors are running at tremendous expense right now and the sources of liquidity for billions and trillions are very very few. It is possible a black swan event in the markets causes an abrupt end to liquidity and thus forces AI providers into pricing that excludes many existing lower-end users. That train should not be taken for granted.

4. It is also possible WebGPU and other similar scale-AI-across-devices efforts succeed and you get much more compute unlocked to replace advertising.

Serious question: Who in HN is actually looking forward to AGI existing?

barrell · 2 months ago
I’m not convinced the current capabilities have impacted all that many people. I think the economy is much more responsible for the lack of jobs than “replacement with AI”, and most businesses have not seen returns on AI.

There is a tiny, tiny, tiny fraction of people who I would believe have been seriously impacted by AI.

Most regular people couldn’t care less about it, and the only regular people I know who do care are the ones actively boycotting it.

mapontosevenths · 2 months ago
> Serious question: Who in HN is actually looking forward to AGI existing?

I am.

It's the only serious answer to the question of space exploration. Rockets filled with squishy meat were never going to accomplish anything serious, unless we find a way of beating the speed of light.

Further, humanity's greatest weakness is that we can't plan anything long-term. Our flesh decays too rapidly, and our species is one of perpetual noobs. Fields are becoming too complex to master in a single lifetime. A decent superintelligence can not only survive indefinitely, it can plan accordingly, and it can master fields that are too complex to fit inside a single human skull.

Sometimes I wonder if humanity wasn't just evolution's way of building AIs.

bossyTeacher · 2 months ago
> It's he only serious answer to the question of space exploration.

It is. But the world's wealthiest are not pouring in billions so that humans can develop better space exploration tech. The goal is making more money.

card_zero · 2 months ago
I don't agree with the point about "perpetual noobs". Fields that are too broad to fit in a single mind in a lifetime need to be made deeper, that is, better explained. If a field only gets more expansive and intricate, we're doing it wrong.

Still, 130+ years of wisdom would have to be worth something; I can't say I dislike the prospect.

Noaidi · 2 months ago
There are a lot of hopeful assumptions in that statement. Who’s to say that if AGI is achieved, it would want us to know how to go faster than the speed of light? You’re assuming that your wisdom and your plans would be AGI’s wisdom and plans. It might end up just locking us down here on Earth, sending us back to a more balanced primitive life, and killing off a mass of humans in order to achieve ecological balance so humanity can survive without having to leave the planet. Note that that’s not something I am advocating; I’m just saying it’s a possibility.
ACCount37 · 2 months ago
It's kind of ironic that this generation of LLMs has worse executive functioning than humans do. Turns out the pre-training data doesn't really teach them that.

But AIs improve, as technology tends to. Humans? Well...

more_corn · 2 months ago
Efficient fusion would get us around pretty quickly.

Don’t get me wrong, AI would be way faster. We’re nowhere near it, though.

the_arun · 2 months ago
I was content even without AI. I’m good with whatever we have today, as long as we use it to change life in a positive way.
card_zero · 2 months ago
I'm looking forward to artificial people existing. I don't see how they'd be a money-spinner, unless mind uploading is also developed and so they can be used for life extension. The LLM vendors have no relevance to AGI.
brazukadev · 2 months ago
> Serious question: Who in HN is actually looking forward to AGI existing?

90% of the last 12 batches of YC founders would love to believe they are pursuing AGI with their crappy ChatGPT wrapper, agent framework, observability platform, etc.

prox · 2 months ago
I am not against AGI, just the method and the players we have getting there. Instead of a curiosity to find intelligence, we just have rabid managers and derailed billionaires funding a race to … what? I don’t think even they know, beyond a few hype words in their vocabulary and a buzzword-filled PowerPoint presentation.
pixl97 · 2 months ago
This is just the world we live in now for everything. Remember .com? Web3? Crypto? And now AI. Hell, really going back in the past you see dumb shit like this happening with tulips.

We're lucky to have managed to progress in spite of how greedy we are.

bossyTeacher · 2 months ago
> For regular people the current AI capabilities have already led to either life changing skill improvement for those who make things for themselves or life changing likely permanent employment reduction for those who do things for others

This statement sums up the tech-centric bubble HN lives in. Your average farmer, shop assistant, fisherman, or woodworker isn't likely to see significant life improvements from the transformer tech deployed until now.

dzink · 2 months ago
I know an average farmer who is using ChatGPT all the time to look up illnesses and treatments for his crops, optimal production techniques, taxes, etc. The speed of looking up and getting better information has never been greater. "The average person" is a gross overstatement when everyone has interests that vary and strives to improve. In fact, the whole point of AI is to help provide much faster cycles of improvement to anyone trying to improve at anything, so it is not white-collar specific. Especially when it provides a window for people with other skills to do more white-collar work for themselves, or to add those skills.
dmix · 2 months ago
I didn't know serious technical people were taking the AGI thing seriously. I figured it was just an "aim for the stars" goal where you try to get a bunch of smart people and capital invested into an idea, and everyone would still be happy if we got 25% of the way there.
analognoise · 2 months ago
If our markets weren’t corrupt, everyone in the AI space would be bankrupt by now, and we could all wander down to the OpenAI fire sale and buy nice servers for pennies on the dollar.
dmix · 2 months ago
I'd take this more seriously if I didn't hear the same thing every other time there was a spike in VC investment. The last 5 times were the next dot com booms too.
arnaudsm · 2 months ago
Genuine question: why are hyperscalers like OpenAI and Oracle raising hundreds of billions? Isn't their current infra enough?

Naive napkin math: a GB200 NVL72 costs ~$3M and can serve ~7,000 concurrent users of gpt4o (rumored to be 1400B A200B), and ChatGPT has ~10M concurrent peak users. That's only ~$4B of infra.

Are they trying to brute-force AGI with larger models, knowing that gpt4.5 failed at this, and deepseek & qwen3 proved small MoE can reach frontier performance? Or is my math 2 orders of magnitude off?
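The napkin math above can be sketched as follows; all figures are the commenter's own assumptions (hardware price, users per unit, peak concurrency), not measured values:

```python
# Back-of-envelope estimate using the comment's assumed figures.
unit_cost_usd = 3_000_000           # assumed price of one GB200 NVL72
users_per_unit = 7_000              # assumed concurrent gpt4o users per unit
peak_concurrent_users = 10_000_000  # assumed ChatGPT peak concurrency

units_needed = peak_concurrent_users / users_per_unit  # ~1,429 units
total_cost_usd = units_needed * unit_cost_usd          # ~$4.3B

print(f"~{units_needed:,.0f} units, ~${total_cost_usd / 1e9:.1f}B of infra")
```

Under these assumptions the fleet cost lands around $4.3B, which is the gap the comment is pointing at: roughly two orders of magnitude below the hundreds of billions being raised.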

Noaidi · 2 months ago
They are raising the money because they can. While these businesses may go bankrupt, many people who ran these businesses will make hundreds of millions of dollars.

Either that, or AGI is not the goal; rather, they want to function for, and profit off of, a surveillance state that might be much more valuable in the short term.

ACCount37 · 2 months ago
As a rule: inference is very profitable, frontier R&D is the money pit.

They need the money to keep pushing the envelope and building better AIs. And the better their AIs get, the more infra they'll need to keep up with the inference demand.

GPT-4.5's issue was that it wasn't deployable at scale - unlike the more experimental reasoning models, which delivered better task-specific performance without demanding that much more compute.

Scale is inevitable though - we'll see production AIs reach the scale of GPT-4.5 pretty soon. Newer hardware like GB200 enables that kind of thing.

lazide · 2 months ago
Their valuation projection spreadsheets call for it. If they touch those spreadsheets, a bunch of other things break (including their ability to be super-duper-crazy-rich), so don’t touch them.
anon291 · 2 months ago
We are already at AGI, yet no one seems to have noticed. Given that its limited sense perception makes ChatGPT et al. akin to talking with a partially blind, partially deaf quadriplegic, it has demonstrated and continues to demonstrate above-average intelligence. Not sure what more needs to happen.

Sure, we don't have embodied AI. Maybe it's not reflective enough. Maybe you find it jarring. Literally none of those things matter.

CamperBob2 · 2 months ago
The models that achieve something like AGI will need object permanence as well as sensory input.

It'll happen, because there's no reason why it won't. But no one knows the day or the hour... least of all this Gary Marcus guy, whose record of being wrong about this stuff is more or less unblemished.

anon291 · 2 months ago
There is no reason to believe they will need anything of the sort other than our biased human perception of the thing. AI models could just as easily claim that humans need the ability to function call via MCP to do real cognition (tm).

If you mean sensory input as in qualia, such a thing is impossible to discern in non-humans and is only speculation in all humans but yourself. Requiring this unprovable standard is self-sabotage.

watwut · 2 months ago
I dont understand why AGI would be something we should want.
bossyTeacher · 2 months ago
According to HN, AGI is desirable because humans absolutely won't try to:

- Use it to dominate others

- Make themselves and a few others immortal or/and their descendants smarter

- Increase their financial or/and political power

- Cause irreversible damage to the global core ecosystems in their pursuit of the 3 goals above

tim333 · 2 months ago
I'm kind of keen. Possibilities of semi unlimited stuff, diseases cured, immortality or something along those lines.
ModernMech · 2 months ago
> diseases cured

Why bother? We cured polio and measles, yet they are resurging because people are morons. What's the point of curing diseases when they charge $300 for insulin? You think the billionaires are going to allow their AIs to tell us the solution to that riddle? You think they'll listen to the AI if it correctly surmises that the way to solve that problem is to simply stop charging $300? Most of our problems are social, not technical.