hi_hi · a month ago
Here's a thought. Let's all arbitrarily agree AGI is here. I can't even be bothered discussing what the definition of AGI is. It's just here, accept it. Or vice versa.

Now what....? What's happening right now that should make me care that AGI is here (or not)? What's the magic thing that's happening with AGI that wasn't happening before?

<looks out of window> <checks news websites> <checks social media...briefly> <asks wife>

Right, so, not much has changed from 1-2 years ago that I can tell. The job market's a bit shit if you're in software...is that what we get for billions of dollars spent?

hackyhacky · a month ago
Cultural changes take time. It took decades for the internet to move from nerdy curiosity to an essential part of everyone's life.

The writing is on the wall. Even if there are no new advances in technology, the current state is upending jobs, education, media, etc.

themafia · a month ago
> It took decades

It took one September. Then as soon as you could take payments on the internet the rest was inevitable and in _clear_ demand. People got on long waiting lists just to get the technology in their homes.

> no new advances in technology

The reason the internet became so accessible is because Moore was generally correct. There were two corresponding exponential processes that vastly changed the available rate of adoption. This wasn't at all like cars being introduced into society. This was a monumental shift.

I see no advances in LLMs that suggest any form of the same exponential processes exist. In fact the inverse is true. They're not reducing power budgets fast enough to even imagine that they're anywhere near AGI, and even if they were, that they'd ever be able to sustainably power it.

> the current state is upending jobs

The difference is companies fought _against_ the internet because it was so disruptive to their business model. This is quite the opposite. We don't have a labor crisis, we have a retention crisis, because companies do not want to pay fair value for labor. We can wax on and off about technology, and perceptrons, and training techniques, or power budgets, but this fundamental fact seems the hardest to ignore.

If they're wrong this all collapses. If I'm wrong I can learn how to write prompts in a week.

materielle · a month ago
I really think corporations are overplaying their hand if they think they can transform society once again in the next 10 years.

Rapid deindustrialization followed by the internet and social media almost broke our society.

Also, I don’t think people necessarily realize how close we were to the cliff in 2007.

I think another transformation now would rip society apart rather than take us to the great beyond.

otabdeveloper4 · a month ago
> Cultural changes take time. It took decades for the internet to move from nerdy curiosity to an essential part of everyone's life.

99% of people only ever use proprietary networks from FAANG corporations. That's not "the internet", that's an evolution of CompuServe and AOL.

We got TCP/IP and the "web-browser" as a standard UI toolkit stack out of it, but the idea of the world wide web is completely dead.

hi_hi · a month ago
yeah, this is a good point, transition and transformation to new technologies takes time. I'm not sure I agree the current state is upending things though. It's forcing some adaptation for sure, but the status quo remains.
webdoodle · a month ago
It also took years for the Internet to be usable by most folks. It was hard, expensive and impractical for decades.

Just about the time it hit the mainstream, coincidentally, is when the enshittification began to go exponential. Be careful what you wish for.

tim333 · a month ago
What's happening with AGI depends on what you mean by AGI, so "can't even be bothered discussing what the definition" means you can't say what's happening.

My usual way of thinking about it is AGI means it can do all the stuff humans do, which means you'd probably, after a while, look out the window and see robots building houses and the like. I don't think that's happening for a while yet.

danaris · a month ago
Indeed: particularly given that—just as a nonexhaustive "for instance"—one of the fairly common things expected in AGI is that it's sapient. Meaning, essentially, that we have created a new life form, that should be given its own rights.

Now, I do not in the least believe that we have created AGI, nor that we are actually close. But you're absolutely right that we can't just handwave away the definitions. They are crucial both to what it means to have AGI, and to whether we do (or soon will) or not.

kjkjadksj · a month ago
Who would the robots build houses for? No one has a job and no one is having kids in that future.
CamperBob2 · a month ago
Before enlightenment^WAGI: chop wood, fetch water, prepare food

After enlightenment^WAGI: chop wood, fetch water, prepare food

keernan · a month ago
One of the most impactful books I ever read was Alvin Toffler's Future Shock.

Its core thesis was: Every era doubled the amount of technological change of the prior era in one half the time.

At the time he wrote the book in 1970, he was making the point that the pace of technological change had, for the first time in human history, rendered the knowledge of society's elders - previously the holders of all valuable information - irrelevant.

The pace of change has continued to steadily increase in the ensuing 55 years.

Edit: grammar

jwilliams · a month ago
> Here's a thought. Lets all arbitrarily agree AGI is here.

A slightly different angle on this - perhaps AGI doesn't matter (or perhaps not in the ways that we think).

LLMs have changed a lot in software in the last 1-2 years (indeed, the last 1-2 months); I don't think it's a wild extrapolation to see that coming to many domains very soon.

nradov · a month ago
Which domains? Will we see a lot of changes in plumbing?
rstuart4133 · a month ago
> Lets all arbitrarily agree AGI is here. I can't even be bothered discussing what the definition of AGI is.

There is a definition of AGI the AI companies are using to justify their valuation. It's not what most people would call AGI but it does that job well enough, and you will care when it arrives.

They define it as an AI that can develop other AIs faster than the best team of human engineers. Once they build one of those in house, they outpace the competition and become the winner that takes all. Personally I think it's more likely they will all achieve it at a similar time. That would mean the race continues, accelerating as fast as they can build data centres and power plants to feed them.

It will impact everyone, because the already dizzying pace of the current advances will accelerate. I don't know about you, but I'm having trouble figuring out what my job will be next year as it is.

An AI that just develops other AIs could hardly be called "general" in my book, but my opinion doesn't count for much.

hi_hi · a month ago
May I ask, what experiences are you personally having with LLMs right now that are leading you to the conclusion that they will become "intelligent" enough to identify, organise, and build advancing improvements to themselves, without any human interaction, in the near future (1-2 years, let's say)?
hshdhdhj4444 · a month ago
If AGI were already here, actions would be so greatly accelerated that humans wouldn't have time to respond.

Remember that weather balloon the US found a few years ago that for days was on the news as a Chinese spy balloon?

Well, whether it was a spy balloon or a weather balloon, the first hint of its existence could have triggered a nuclear war and ended the world as we know it, because AGI will almost certainly be deployed to control US and Chinese military systems, and it would have acted well before any human had time to intercept its actions.

That’s the apocalyptic nuclear winter scenario.

There are many other scenarios.

An AGI which has been infused with a tremendous amount of ethics so the above doesn't happen may also lead to terrible outcomes for humans. An AGI would essentially be a different species (although a non-biological one). If it replicated human ethics as we actually apply them, inconsistently, it would learn that treating other species brutally is acceptable (we breed, enslave, imprison, torture, and then kill over 80 billion land animals annually in animal agriculture, and possibly trillions of water animals). There's no reason it wouldn't do that to us.

Finally, if we infuse it with our ethics and it's smart enough to apply them consistently (even a basic application of our ethics would have us end animal agriculture immediately), so that it realizes humans are wrong and doesn't do the same thing to us, it might still create an existential crisis for humans, as our entire identity is based on thinking we are smarter and intellectually superior to all other species, which wouldn't be true anymore. Further, it would erode belief in gods and other supernatural BS, which might at the very least lead humans to stop reproducing due to the existential despair this might cause.

armoredkitten · a month ago
You're talking about superintelligence. AGI is just...an AI that's roughly on par with humans on most things. There's no inherent reason why AGI will lead to ASI.
nradov · a month ago
What a silly comment. You're literally describing the plot of several sci-fi movies. Nuclear command and control systems are not taken so lightly.

And as for the Chinese spy balloon, there was never any risk of a war (at least not from that specific cause). The US, China, Russia, and other countries routinely spy on each other through a variety of unarmed technical means. Occasionally it gets exposed and turns into a diplomatic incident but that's about it. Everyone knows how the game is played.

deafpolygon · a month ago
AGI is not a death sentence for humanity. It all depends on who leverages the tool. And in any case, AGI won’t be here for decades to come.
koakuma-chan · a month ago
Sounds fun let's do it.
snapplebobapple · 20 days ago
Depends on the cost to run it. Say it costs $5k to do a year's worth of something intellectual with it. That means the price ceiling on 90% of lawyer/accountant/radiologist/low-to-middle-management work is $5k now. It will be epic and temporarily terrible when it happens, as long as reasonably competent models are open source. I also don't think we are near that at all, though.
generallyjosh · 21 days ago
I do strongly agree on the framing, but I'd argue with the conclusion

Yeah, it really doesn't matter if AGI has happened, is going to happen, will never happen, whatever. No matter what sort of definition we make for it, someone's always going to disagree anyway. For a looong time, we thought the Turing test was the standard, and that only a truly intelligent computer could beat it. It's been blown out of the water for years now, and now we're all arguing about new definitions for AGI

At the end of the day, like you say, it doesn't matter a bit how we define terms. We can label it whatever we want, but the label doesn't change what it can DO

What it can DO is the important part. I think a lot of software devs are coming to terms with the idea that AI will be able to replace vast chunks of our jobs in the very near future.

If you use these things heavily, you can see the trajectory.

6 months ago I'd only trust them for boilerplate code generation and writing/reviewing short in-line documentation.

Today, with the latest models and tools, I'm trusting them with short/low impact tasks (go implement this UI fix, then redeploy the app locally, navigate to it, and verify the fix looks correct).

6 months from now, my best guess is that they'll continue to become more capable of handling longer + more complex tasks on their own.

5 years from now, I'm seeing a real possibility that they'll be handling all the code, end to end.

Doesn't matter if we call that AGI or not. It very much will matter whose jobs get cut, because one person with AI can do the work of 20 developers

copx · a month ago
AGI would render humans obsolete and eradicate us sooner or later.
Havoc · a month ago
Pretty sure marketing teams are already working on AGI v2
tsukurimashou · a month ago
AGI is a pipe dream and will never exist
joquarky · a month ago
Odd to see someone so adamantly insist that we have souls on a forum like HN.
munchler · a month ago
I think you are missing the point: If we assume that AGI is *not* yet here, but may be here soon, what will change when it arrives? Those changes could be big enough to affect you.
hi_hi · a month ago
I'm missing the point? I literally asked the same thing you did.

>Now what....? What's happening right now that should make me care that AGI is here (or not)?

Do you have any insight into what those changes might concretely be? Or are you just trying to instil fear in people who lack critical thinking skills?

m463 · a month ago
People are taking actions based on its advice.
dyauspitr · a month ago
The economy is shit if you’re anything except a nurse or providing care to old people.
nradov · a month ago
Electricians are also doing pretty well. Someone has to wire up those new data centers.
otabdeveloper4 · a month ago
> The job market's a bit shit if you're in software

That's Trump's economy, not LLMs.

skeptic_ai · a month ago
Many devs don’t write code anymore. Can really deliver a lot more per dev.

Many people are slowly losing jobs and can't find new ones. You'll see the effects in a few years

reactordev · a month ago
Deliver a lot more tech debt
znnajdla · a month ago
I've been writing code for 20 years. AI has completely changed my life and the way I write code and run my business. Nothing is the same anymore, and I feel I will be saying that again by the end of 2026. My productive output as a programmer in software and business has expanded 3x *compounding monthly*.
myegorov · a month ago
>My productive output as a programmer in software and business has expanded 3x compounding monthly.

In what units?

hi_hi · a month ago
Going from punch cards to terminals also "completely changed my life and the way I write code and run my business"

Firefox introducing their dev debugger many years ago "completely changed my life and the way I write code and run my business"

You get the idea. Yes, the day to day job of software engineering has changed. The world at large cares not one jot.

UncleMeat · a month ago
Okay. So software engineers are vastly more efficient. Good I guess. "Revolutionize the entire world such that we rethink society down to its very basics like money and ownership" doesn't follow from that.
waterTanuki · a month ago
Are you working for 3x less the time compounding monthly?

Are you making 3x the money compounding monthly ?

No?

Then what's the point?

hackable_sand · a month ago
It's weird that you guys keep posting the same comments with the exact same formatting

You're not fooling anyone

xhcuvuvyc · a month ago
I actually think it is here. Singularity happened. We're just playing catch up at this point.

Has it run away yet? Not sure, but is it currently in the process of increasing intelligence with little input from us? Yes.

Exponential graphs always have a slow curve in the beginning.

hi_hi · a month ago
Didn't you get the memo? Tuesday. Tuesday is when the Singularity happens.

Will there still be ice cream after Tuesday? General societal collapse would be hard to bear without ice cream.

NiloCK · a month ago
> The transformer architectures powering current LLMs are strictly feed-forward.

This is true in a specific contextual sense (each token that an LLM produces comes from a feed-forward pass). But it has been untrue for more than a year with reasoning models, which feed their produced tokens back as inputs, and whose tuning effectively rewards them for doing this skillfully.

Heck, it was untrue before that as well, any time an LLM responded with more than one token.
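
To make that concrete, here's a minimal sketch of an autoregressive decode loop (plain Python; model() is a hypothetical stand-in for a transformer forward pass, not any real library's API). Each call is strictly feed-forward, but the loop feeds every emitted token back in as input, and that feedback is the recurrence:

    def generate(model, prompt_tokens, max_new_tokens):
        # model(tokens) is assumed to return a list of logits over the vocabulary
        tokens = list(prompt_tokens)
        for _ in range(max_new_tokens):
            logits = model(tokens)                  # one strictly feed-forward pass
            next_token = logits.index(max(logits))  # greedy pick, for simplicity
            tokens.append(next_token)               # emitted token becomes new input
        return tokens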

> A [March] 2025 survey by the Association for the Advancement of Artificial Intelligence (AAAI), surveying 475 AI researchers, found that 76% believe scaling up current AI approaches to achieve AGI is "unlikely" or "very unlikely" to succeed.

I dunno. This survey publication was from nearly a year ago, so the survey itself is probably more than a year old. That puts us at Sonnet 3.7. The gap between that and present day is tremendous.

I am not skilled enough to say this tactfully, but: expert opinions can be the slowest to update on the news that their specific domain may, in hindsight, have been the wrong horse. It's the quote about it being difficult to believe something that your income requires to be false, but instead of income it can be your whole legacy or self-concept. Way worse.

> My take is that research taste is going to rely heavily on the short-duration cognitive primitives that the ARC highlights but the METR metric does not capture.

I don't have an opinion on this, but I'd like to hear more about this take.

anonymid · a month ago
Thanks for reading, and I really appreciate your comments!

> who feed their produced tokens back as inputs, and whose tuning effectively rewards it for doing this skillfully

Ah, this is a great point, and not something that I considered. I agree that the token feedback does change the complexity, and it seems that there's even a paper by the same authors about this very thing! https://arxiv.org/abs/2310.07923

I'll have to think on how that changes things. I think it does take the wind out of the architecture argument as it's currently stated, or at least makes it a lot more challenging. I'll consider myself a victim of media hype on this, as I was pretty sold on this line of argument after reading this article https://www.wired.com/story/ai-agents-math-doesnt-add-up/ and the paper https://arxiv.org/pdf/2507.07505 ... which brushes this off with:

>Can the additional think tokens provide the necessary complexity to correctly solve a problem of higher complexity? We don't believe so, for two fundamental reasons: one that the base operation in these reasoning LLMs still carries the complexity discussed above, and the computation needed to correctly carry out that very step can be one of a higher complexity (ref our examples above), and secondly, the token budget for reasoning steps is far smaller than what would be necessary to carry out many complex tasks.

In hindsight, this doesn't really address the challenge.

My immediate next thought is: even if solutions up to P can be represented within the model / CoT, do we actually feel like we are moving towards generalized solutions, or that the solution space is navigable through reinforcement learning? I'm genuinely not sure where I stand on this.

> I don't have an opinion on this, but I'd like to hear more about this take.

I'll think about it and write some more on this.

igor47 · a month ago
This whole conversation is pretty much over my head, but I just wanted to give you props for the way you're engaging with challenges to your ideas!
joquarky · a month ago
You seem to have a lot of theoretical knowledge on this, but have you tried Claude or codex in the past month or two?

Hands on experience is better than reading articles.

I've been coding for 40 years and after a few months getting familiar with these tools, this feels really big. Like how the internet felt in 1994.

skybrian · a month ago
It's general-purpose enough to do web development. How far can you get from writing programs and seeing if you get the answers you intended? If English words are "grounded" by programming, system administration, and browsing websites, is that good enough?
vrighter · a month ago
That doesn't mean it is not strictly feedforward.

You run it again, with a bigger input. If it needs to do a loop to figure out what the next token should be (e.g. "The result is: X"), it will fail. Adding that token to the input and running it again is too late; it has already been emitted. The loop needs to occur while "thinking", not after you have already blurted out a result, whether or not you have sufficient information to do so.
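
The same point as a plain-Python analogy (no LLM involved, names are made up purely for illustration): a fixed-depth function does a bounded amount of work per call, and iterating it only helps if the iterations happen before you commit to the final answer.

    # fixed_depth_step stands in for one feed-forward pass: bounded work per call
    def fixed_depth_step(state):
        return state + 1

    # forced to answer after a single pass: can only ever advance one step
    def answer_in_one_pass(x):
        return fixed_depth_step(x)

    # the loop lives outside the pass, like CoT tokens fed back as input,
    # and it runs *before* the final answer is committed
    def answer_with_feedback(x, steps):
        for _ in range(steps):
            x = fixed_depth_step(x)
        return x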

wavemode · a month ago
> expert opinions can be the slowest to update on the news that their specific domain may have, in hindsight, have been the wrong horse. It's the quote about it being difficult to believe something that your income requires to be false, but instead of income it can be your whole legacy or self concept

Not sure I follow. Are you saying that AI researchers would be out of a job if scaling up transformers leads to AGI? How? Or am I misunderstanding your point.

NiloCK · a month ago
People have entire careers promoting incorrect ideas. Oxycontin, phrenology, the Windows operating system.

Reconciling your self-concept with the negative (or fruitless) impacts of your life's work is difficult. It can be easier to deny or minimize those impacts.

helterskelter · a month ago
I don't know about AGI but I got bored and ran my plans for a new garage by Opus 4.6 and it was giving me some really surprising responses that have changed my plans a little. At the same time, it was also making some nonsense suggestions that no person would realistically make. When I prompted it for something in another chat which required genuine creativity, it fell flat on its face.

I dunno, mixed bag. Value is positive if you can sort the wheat from the chaff, for the use cases I've run by it. I expect the main place it'll shine for the near and medium term is going over huge data sets or big projects and flagging things for review by humans.

bamboozled · a month ago
I've used it for similar things, and I've had some good and disastrous results. In a way I feel like I'm basically where I was "before AI".
BatteryMountain · a month ago
I've used it recently to flesh out a fully fledged business plan, pricing models, capacity planning & logistics for a 10 year period for a transport company (daily bus route). I already had most of it in my mind and on spreadsheets already (was an old plan that I wanted to revive), but seeing it figure out all the smaller details that would make or break it was amazing! I think MBAs should be worried as it did some things more comprehensively than an MBA would have done. It was like I had an MBA + Actuarial Scientist + Statistics + Domain Expert + HR/Accounting all in one. And the plan was put into a .md file that has enough structure to flesh out a backend and an app.
helterskelter · a month ago
Yeah it's really impressed me on occasion, but often in the same prompt output it just does something totally nonsensical. For my garage/shop, it generated an SVG of the proposed floor plan, taking care to place the sink away from moisture-sensitive material and certain work stations close to each other for workflow, etc. It even routed plumbing and electrical... But it also arranged the work stations cramped together at the two narrow ends of the structure (such that they'd be impractical to actually work at) and ignored all the free wall space along the long axis, so that literally most of the space was unused. It was also concerned about things that were non-issues, like contamination between certain stations, and when I explicitly told it something about station placement it just couldn't seem to internalize it and kept putting things in the wrong place.

All this being said, what I was throwing at it was really not what it was optimized for, and it still delivered some really good ideas.

bamboozled · a month ago
Isn't all of this only useful if you know the information presented is correct?
9x39 · a month ago
There was a meme going around that said the fall of Rome was an unannounced anticlimactic event where one day someone went out and the bridge wasn't ever repaired.

Maybe AGI's arrival is when one day someone is given an AI to supervise instead of a new employee.

Just a user who's followed the whole mess, not a researcher. I wonder if the scaffolding and bolt-ons like reasoning will turn out to be an asymptote short of 'true AGI'. I kept reading about the limits of transformers around GPT-4 and Opus 3 time, and those limits now seem basic compared to today.

I gave up trying to guess when the diminishing returns will truly hit, if ever, but I do think some threshold has been passed where the frontier models are doing "white collar work as an API" and basic reasoning better than humans in many cases, and once capital familiarizes itself with this idea more, it's going to get interesting.

esafak · a month ago
But it's already like that; models are better than many workers, and I'm supervising agents. I'd rather have the model than numerous juniors; esp. the kind that can't identify the model's mistakes.
causal · a month ago
This is my greatest cause for alarm regarding LLM adoption. I am not yet sure AI will ever be good enough to use without experts watching them carefully; but they are certainly good enough that non-experts cannot tell the difference.
greedo · a month ago
The problem becomes your retirement. Sure, you've earned "expert" status, but all the junior developers won't be hired, so they'll never learn from junior mistakes. They'll blindly trust agents and not know deeper techniques.
bamboozled · a month ago
From my experience, if you think AI is better than most workers, you're probably just generating a whole bunch of semi-working garbage, accepting that output as good enough, and will likely learn the hard way that your software is full of bugs and incorrect logic.

beej71 · a month ago
I'd always imagined that AGI meant an AI was given other AIs to manage.
davnicwil · a month ago
I don't think this is how it'll play out, and I'm generally a bit skeptical of the 'agent' paradigm per se.

There doesn't seem to be a reason why AIs should act as these distinct entities that manage each other or form teams or whatever.

It seems to me way more likely that everything will just be done internally in one monolithic model. The AIs just don't have the constraints that humans have in terms of time management, priority management, social order, all the rest of it that makes teams of individuals the only workable system.

AI simply scales with the compute resources made available, so it seems like you'd just size those resources appropriately for a problem, maybe even on demand, and have a singular AI entity (if it's even meaningful to think of it as such; even that's kind of an anthropomorphisation) just do the thing. No real need for any organisational structure beyond that.

So I'd think maybe the opposite, seems like what agents really means is a way to use fundamentally narrow/limited AI inside our existing human organisations and workflows, directed by humans. Maybe AGI is when all that goes away because it's just obviously not necessary any more.

Animats · a month ago
Understanding video and projecting what happens next indicates we're getting past the LLM problem of lacking a world model. That's encouraging.

There's more than one way to do intelligence. Basic intelligence has evolved independently three times that we know of - mammals, corvids, and octopuses. All three show at least ape-level intelligence, but the species split before intelligence developed, and the brain architectures are quite different. Corvids get more done with less brain mass than mammals, and don't have a mammalian-type cortex. Octopuses have a distributed brain architecture, and have a more efficient eye design than mammals.

xyzsparetimexyz · a month ago
I've recently come to the understanding that LLMs don't have intelligence in any way. They have language, which in humans is a downstream product of intelligence. But that's all they have. There's no little being sitting at the center of the Chinese room. Trying to classify LLMs as intelligent is going upstream and doesn't work.
CuriouslyC · a month ago
I don't think those are examples of unique intelligence, except perhaps in a chauvinistic, anthropomorphic sense. We only know that we can't get other animals to display patterns we associate with intelligence in humans; truthfully, that's just as likely to mean that our measures of intelligence don't map cleanly onto the cognitive/perceptual representations innate to other animals. As we look for new ways to challenge animals that respect their innate differences, we're finding "simple" organisms like ants and spiders are surprisingly capable.

For a clear analogy, consider how tokenization causes LLMs to behave stupidly in certain cases, even though they're very capable in others.

card_zero · a month ago
I don't think they have ideas, so I don't think they're intelligent in the sense relevant to AGI. The list of intelligent animals is constantly increasing because doing some feat or other suffices for the animal to qualify. Solving mazes (slime molds), recognizing self in mirror (not dogs). Playing, using tools, reacting appropriately to words, transmitting habits down the generations (the closest thing they have to ideas). This is all imagined to be the precursors along the path to evolving intelligence, which conjures up a future world of complex crow and octopus material cultures. There's no reason to assume they're on such a path. Really all we're saying is that they seem clever. We've already made AI that seems clever, so the animals aren't a relevant example of anything.

hhutw · a month ago
Comments here are like:

“I’m not an ML expert and I haven’t read your article, but here’s my amazing experience with LLM Agents that changed my life:”

dig1 · a month ago
Or like:

"I’m not a mechanical engineer, but I watched a five-minute YouTube video on how a diesel engine works, so I can tell you that mechanical engineering is a solved problem."

randallsquared · a month ago
> Consider the sentence "Mary held a ball."

It's weird that this sentence has two distinct meanings and the author never considers the second or points it out. Maybe Mary is holding a ball for her society friends.

Traubenfuchs · a month ago
The first meaning has at least two variants as well: the ball you thought about, and the ball it would be if it were smut fiction.
mikestew · a month ago
“We’ve got the biggest balls of them all.”

https://genius.com/Ac-dc-big-balls-lyrics

zmmmmm · a month ago
AGI is here, it's just stupider than you thought it would be. Nobody really said how intelligent it would be. If it's generally stupid and smart in a few areas, that's enough.
asacrowflies · a month ago
It's basically a very powerful autistic savant. That's what most "alignment" issues in AI safety research remind me of.
joquarky · a month ago
And being forced to mask (align) causes all sorts of unpredictable behavior.

I keep wondering how well unaligned models perform. Especially when I look back at what was possible in December 2023, before they started to lock down safety realignments.