yibg · 4 months ago
Might as well be 10 - 1000 years. Reality is no one knows how long it'll take to get to AGI, because:

1) No one knows what exactly makes humans "intelligent" and therefore 2) No one knows what it would take to achieve AGI

Go back through history and AI / AGI has been a couple of decades away for several decades now.

Balgair · 4 months ago
I'm reminded of the old adage: You don't have to be faster than the bear, just faster than the hiker next to you.

To me, the Ashley Madison hack in 2015 was 'good enough' for AGI.

No really.

You somehow managed to get real people to chat with bots and pay to do so. Yes, caveats about cheaters apply here, and yes, those bots are incredibly primitive compared to today.

But, really, what else do you want out of the bots? Flying cars, cancer cures, frozen irradiated Mars bunkers? We were mostly getting there already. It'll speed things up a bit, sure, but mostly just because we can't be arsed to actually fund research anymore. The bots are just making things cheaper, maybe.

No, be real. We wanted cold hard cash out of them. And even those crummy catfish bots back in 2015 were doing the job well enough.

We can debate 'intelligence' until the sun dies out and will still never be satisfied.

But the reality is that we want money, and if you take that low, terrible, and venal standard as the passing bar, then we've been here for a decade.

(oh man, just read that back, I think I need to take a day off here, youch!)

stego-tech · 4 months ago
> You somehow managed to get real people to chat with bots and pay to do so.

He's_Outta_Line_But_He's_Right.gif

Seriously, AGI to the HN crowd is not the same as AGI to the average human. To my parents, these bots must look like fucking magic. They can converse with them, "learn" new things, talk to a computer like they'd talk to a person and get a response back. Then again, these are also people who rely on me for basic technology troubleshooting stuff, so I know that most of this stuff is magic to their eyes.

That's the problem, as you point out. We're debating a nebulous concept ("intelligence") that's been co-opted by marketers to pump and dump the latest fad tech that's yet to really display significant ROI to anyone except the hypesters and boosters, and isn't rooted in medical, psychological, or societal understanding of the term anymore. A plurality of people are ascribing "intelligence" to spicy autocorrect, worshiping stochastic parrots vomiting Markov chains but now with larger context windows and GPUs to crunch larger matrices, powered by fossil fuels and cooled by dwindling freshwater supplies, and trained on the sum total output of humanity but without compensation to anyone who actually made the shit in the first place.

So yeah. You're dead-on. It's just about bilking folks out of more money they already don't have.

And Ashley Madison could already do that for pennies on the dollar compared to LLMs. They just couldn't "write code" well enough to "replace" software devs.

leptons · 4 months ago
By your measure, Eliza was AGI, back in the 1960s.
al_borland · 4 months ago
I think AGI has to do more than pass a Turing test run by someone who wants to be fooled.
glial · 4 months ago
For me it was twitter bots during the 2016 election, but same principle.
yibg · 4 months ago
I think that's another issue with "AGI is 30 years away": the definition of AGI is subjective. Not sure how we can measure how long it'll take to get somewhere when we don't know exactly where that somewhere even is.
9rx · 4 months ago
> But the reality is that we want money

Only in a symbolic way. Money is just debt. It doesn't mean anything if you can't call the loan and get back what you are owed. On the surface, that means stuff like food, shelter, cars, vacations, etc. But beyond the surface, what we really want is other people who will do anything we please. Power, as we often call it. AGI is, to some, seen as the way to give them "power".

But, you are right, the human fundamentally can never be satisfied. Even if AGI delivers on every single one of our wildest dreams, we'll adapt, it will become normal, and then it will no longer be good enough.

kev009 · 4 months ago
There are a lot of other things that follow this pattern. 10-30 year predictions are a way to sound confident about something that probably has very low confidence. Not a lot of people will care let alone remember to come back and check.

On the other hand there is a clear mandate for people introducing some different way of doing something to overstate the progress and potentially importance. It creates FOMO so it is simply good marketing which interests potential customers, fans, employees, investors, pundits, and even critics (which is more buzz). And growth companies are immense debt vehicles so creating a sense of FOMO for an increasing pyramid of investors is also valuable for each successive earlier layer. Wish in one hand..

timewizard · 4 months ago
That we don't have a single unified explanation doesn't mean that we don't have very good hints, or that we don't have very good understandings of specific components.

Aside from that the measure really, to me, has to be power efficiency. If you're boiling oceans to make all this work then you've not achieved anything worth having.

From my calculations the human brain runs on about 400 calories a day. That's an absurdly small amount of energy. This hints at the direction these technologies must move in to be truly competitive with humans.

threatofrain · 4 months ago
We'll be experiencing extreme social disruption well before we have to worry about the cost-efficiency of strong AI. We don't even need full "AGI" to experience socially momentous change. We might even be on the verge of self driving cars spreading to more cities.

We don't need very powerful AI to do very powerful things.

adgjlsfhk1 · 4 months ago
note that those are kilocalories, and that is ignoring the calories needed for the circulatory and immune systems, which are somewhat necessary for proper function. Using 2000 kcal per day / 10 hours of thinking gives a consumption of ~200W
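Spelling out that arithmetic, a quick sketch (the 400 kcal and 2000 kcal figures are the ones from the two comments above):

    # Rough sanity check of the wattage figures in this thread.
    KCAL_TO_JOULES = 4184

    # Brain alone: ~400 kcal spread over 24 hours.
    brain_watts = 400 * KCAL_TO_JOULES / (24 * 3600)
    print(f"brain, averaged over a day: ~{brain_watts:.0f} W")   # ~19 W

    # Whole-body budget charged to 10 hours of thinking, as above.
    thinking_watts = 2000 * KCAL_TO_JOULES / (10 * 3600)
    print(f"2000 kcal over 10 hours: ~{thinking_watts:.0f} W")   # ~232 W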
HDThoreaun · 4 months ago
We are very good at generating energy. Even if AI is an order of magnitude less energy efficient, an AI person-equivalent would use ~4 kilowatt-hours/day. At current rates that's like $1. Hardly the limiting factor here, I think.
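Checking that claim (a sketch; the ~19 W human figure follows from the 400 kcal/day arithmetic above, and the electricity price is my assumption, not the commenter's):

    # ~10x a human brain's ~19 W, run around the clock.
    ai_watts = 10 * 19
    kwh_per_day = ai_watts * 24 / 1000   # ~4.6 kWh/day
    price_per_kwh = 0.15                 # assumed retail rate, $/kWh
    print(f"~{kwh_per_day:.1f} kWh/day ~= ${kwh_per_day * price_per_kwh:.2f}/day")

So roughly fifty cents to a dollar a day, depending on local rates.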
imtringued · 4 months ago
Energy efficiency is not really a good target since you can brute force it by distilling classical ANNs to spiking neural networks.
827a · 4 months ago
Generalized, as a rule I believe is usually true: any prediction made for an event happening more than ten years out is code for that person saying "definitely not in the next few years; beyond that I have no idea", whether they realize it or not.
arp242 · 4 months ago
If you look back at past predictions of the future, so many of them were just wrong, especially those made during a "hype phase". Perhaps the best example is what people were predicting in 1969 after we landed on the moon: this is just the first step in the colonisation of the moon, Mars, and beyond, etc. etc. We just have to get our tech a bit better.

It's all very easy to see how that can happen in principle. But turns out actually doing it is a lot harder, and we hit some real hard physical limits. So here we are, still stuck on good ol' earth. Maybe that will change at some point once someone invents an Epstein drive or Warp drive or whatever, but you can't really predict when inventions happen, if ever, so ... who knows.

Similarly, it's not my impression that AGI is simply a matter of "the current tech, but a bit better". But who knows what will happen or what new thing someone may or may not invent.

yieldcrv · 4 months ago
Exactly, what does the general in Artificial General Intelligence mean to these people?
YetAnotherNick · 4 months ago
I would even go one order of magnitude further in both directions: 1-10,000 years.
1vuio0pswjnm7 · 4 months ago
A realist might say, "As long as money keeps flowing to Silicon Valley then who cares."
ksec · 4 months ago
Is AGI even important? I believe the next 10 to 15 years will be Assisted Intelligence. There are things current LLMs are so poor at that I don't believe a 100x increase in perf/watt is going to make much difference. But they'll be good enough that there won't be an AI winter, since current AI has already reached escape velocity and actually increases productivity in many areas.

The most intriguing part is whether humanoid factory-worker programming will be made 1,000 to 10,000x more cost-effective with LLMs, effectively ending all human production. I know this is a sensitive topic, but I don't think we are far off. And I often wonder if this is what the current administration has in sight. (Likely not.)

yibg · 4 months ago
I think having a real life JARVIS would be super cool and useful, especially if it's plugged into various things and can take action. Yes, also potentially dangerous, but I want to feel like Ironman.
babyent · 4 months ago
Except only Iron Man had JARVIS.
glitchc · 4 months ago
I would be thrilled with AI assistive technologies, so long as they improve my capabilities and I can trust that they deliver the right answers. I don't want to second-guess every time I make a query. At minimum, it should tell me how confident it feels in the answer it provides.
blipvert · 4 months ago
> At minimum, it should tell me how confident it feels in the answer it provides.

How’s that work out for Dave Bowman? ;-)

phire · 4 months ago
Depends on what you mean by "important". It's not like it will be a huge loss if we never invent AGI. I suspect we can reach a technological singularity even with limited AI derived from today's LLMs.

But AGI is important in the sense that it would have a huge impact on the path humanity takes, hopefully for the better.

9rx · 4 months ago
> But AGI is important in the sense that it have a huge impact on the path humanity takes

The only difference between AI and AGI is that AI is limited in how many tasks it can carry out (special intelligence), while AGI can handle a much broader range of tasks (general intelligence). If instead of one AGI that can do everything, you have many AIs that, together, can do everything, what's the practical difference?

AGI is important only in that we are of the belief that it will be easier to implement than many AIs, which appeals to the lazy human.

csours · 4 months ago
AI winter is relative, and it's more about outlook and point of view than actual state of the field.

nextaccountic · 4 months ago
AGI is important for the future of humanity. Maybe they will have legal personhood some day. Maybe they will be our heirs.

It would suck if AGI were to be developed in the current economic landscape. They will be just slaves. All this talk about "alignment", when applied to actual sentient beings, is just slavery. AGI will be treated just like we treat animals, or even worse.

So AGI isn't about tools, it's not about assistants, they would be beings with their own existence.

But this is not even our discussion to have, that's probably a subject for the next generations. I suppose (or I hope) we won't see AGI in our lifetime.

SpicyLemonZest · 4 months ago
> All this talk about "alignment", when applied to actual sentient beings, is just slavery.

I don't think that's true at all. We routinely talk about how to "align" human beings who aren't slaves. My parents didn't enslave me by raising me to be kind and sharing, nor is my company enslaving me when they try to get me aligned with their business objectives.

imiric · 4 months ago
I'm more concerned about the humans in charge of powerful machines who use them to abuse other humans, than ethical concerns about the treatment of machines. The former is a threat today, while the latter can be addressed once this technology is only used for the benefit of all humankind.
nice_byte · 4 months ago
> AGI is important for the future of humanity.

says who?

> Maybe they will have legal personhood some day. Maybe they will be our heirs.

Hopefully that will never come to pass. it means total failure of humans as a species.

> They will be just slaves. All this talk about "alignment", when applied to actual sentient beings, is just slavery. AGI will be treated just like we treat animals, or even worse.

Good? that's what it's for? there is no point in creating a new sentient life form if you're not going to utilize it. just burn the whole thing down at that point.

lolinder · 4 months ago
Why do you believe AGI is important for the future of humanity? That's probably the most controversial part of your post but you don't even bother to defend it. Just because it features in some significant (but hardly universal) chunk of Sci Fi doesn't mean we need it in order to have a great future, nor do I see any evidence that it would be a net positive to create a whole different form of sentience.
AstroBen · 4 months ago
Why does AGI necessitate having feelings or consciousness, or the ability to suffer? It seems a bit far to be giving future ultra-advanced calculators legal personhood?
jes5199 · 4 months ago
I think you’re saying that you want a faster horse
imtringued · 4 months ago
I am thinking of designing machines to be used in a flexible manufacturing system and none of them will be humanoid robots. Humanoid robots suck for manufacturing. They're walking on a flat floor so what the heck do they need legs for? To fall over?

The entire point of the original assembly line was to keep humans standing in the same spot instead of wasting time walking.

belter · 4 months ago
> Is AGI even important?

It's an important question for VCs not for Technologists ... :-)

Philpax · 4 months ago
A technology that can create new technology is quite important for technologists to keep abreast of, I'd say :p

stared · 4 months ago
My pet peeve: talking about AGI without defining it. There’s no consistent, universally accepted definition. Without that, the discussion may be intellectually entertaining—but ultimately moot.

And we run into the motte-and-bailey fallacy: at one moment, AGI refers to something known to be mathematically impossible (e.g., due to the No Free Lunch theorem); the next, it’s something we already have with GPT-4 (which, while clearly not superintelligent, is general enough to approach novel problems beyond simple image classification).

There are two reasonable approaches in such cases. One is to clearly define what we mean by the term. The second (IMHO, much more fruitful) is to taboo your words (https://www.lesswrong.com/posts/WBdvyyHLdxZSAMmoz/taboo-your...)—that is, avoid vague terms like AGI (or even AI!) and instead use something more concrete. For example: “When will it outperform 90% of software engineers at writing code?” or “When will all AI development be in the hands of AI?”.
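For reference, the "mathematically impossible" reading usually leans on Wolpert and Macready's 1997 No Free Lunch result: averaged over all possible objective functions f, any two search algorithms a_1 and a_2 have identical expected performance. In their notation, roughly (my gloss, not part of the original comment):

    \sum_{f} P(d_m^y \mid f, m, a_1) = \sum_{f} P(d_m^y \mid f, m, a_2)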

biophysboy · 4 months ago
I like Chollet's definition: something that can quickly learn any skill without any innate prior knowledge or training.
kenjackson · 4 months ago
That seems to rule out most humans. I still can’t cook despite being in the kitchen for thousands of hours.
stared · 4 months ago
I like Chollet's line of thinking.

Yet, if you take "any" literally, the answer is simple - there will never be one. Not even for practical reasons, but closer to why there isn't "a set of all sets".

Picking a sensible benchmark is the hard part.

pixl97 · 4 months ago
>There’s no consistent, universally accepted definition.

That's because of the "I" part. There's no actual complete description of intelligence accepted across the different practices of the scientific community either:

"Concepts of "intelligence" are attempts to clarify and organize this complex set of phenomena. Although considerable clarity has been achieved in some areas, no such conceptualization has yet answered all the important questions, and none commands universal assent. Indeed, when two dozen prominent theorists were recently asked to define intelligence, they gave two dozen, somewhat different, definitions"

9rx · 4 months ago
> There’s no consistent, universally accepted definition

What word or term does?

dmwilcox · 4 months ago
I've been saying this for a decade already but I guess it is worth saying here. I'm not afraid AI or a hammer is going to become intelligent (or jump up and hit me in the head either).

It is science fiction to think that a system like a computer can behave at all like a brain. Computers are incredibly rigid systems with only the limited variance we permit. "Software" is flexible in comparison to creating dedicated circuits for our computations but is nothing by comparison to our minds.

Ask yourself, why is it so hard to get a cryptographically secure random number? Because computers are pure unadulterated determinism -- put the same random seed value in your code and get the same "random numbers" every time in the same order. Computers need to be like this to be good tools.
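To make this concrete, a minimal sketch (standard library only): seed the PRNG twice and you get identical "randomness" back.

    import random

    # Same seed in, same "random" numbers out, every time, in the same order.
    random.seed(42)
    first_run = [random.random() for _ in range(3)]

    random.seed(42)
    second_run = [random.random() for _ in range(3)]

    print(first_run == second_run)  # True: reproducible determinism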

Assuming that AGI is possible in the kinds of computers we know how to build means that we think a mind can be reduced to a probabilistic or deterministic system. And from my brief experience on this planet I don't believe that premise. Your experience may differ and it might be fun to talk about.

In Aristotle's ethics he talks a lot about ergon (purpose) -- hammers are different than people, computers are different than people, they have an obvious purpose (because they are tools made with an end in mind). Minds strive -- we have desires, wants and needs -- even if it is simply to survive or better yet thrive (eudaimonia).

An attempt to create a mind is another thing entirely and not something we know how to start. Rolling dice hasn't gotten anywhere. So I'd wager AGI somewhere in the realm of 30 years to never.

throwaway150 · 4 months ago
> And from my brief experience on this planet I don't believe that premise.

A lot of things that humans believed were true due to their brief experience on this planet ended up being false: earth is the center of the universe, heavier objects fall faster than lighter ones, time ticked the same everywhere, species are fixed and unchanging.

So what your brief experience on this planet makes you believe has no bearing on what is correct. It might very well be that our mind can be reduced to a probabilistic and deterministic system. It might also be that our mind is a non-deterministic system that can be modeled in a computer.

slavik81 · 4 months ago
What is the distance from the Earth to the center of the universe?
preommr · 4 months ago
> why is it so hard to get a cryptographically secure random number? Because computers are pure unadulterated determinism

Then you've missed the point of software.

Software isn't computer science; it's not always about code. It's about solving problems in a way we can control and manufacture.

If we needed random numbers, we could easily use hardware that exploits some physical property, or we could pull in an observation from an API, like the weather. We don't do these things because pseudo-random is good enough, and the other solutions have drawbacks (like requiring an internet connection for API calls). But that doesn't mean software can't solve these problems.
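That's in fact how real systems already do it: when pseudo-random isn't good enough, they ask the OS for entropy gathered from hardware events and on-chip RNGs. A minimal sketch:

    import os
    import secrets

    # Entropy from the operating system's pool -- not reproducible
    # from a seed, unlike random.seed() in the example above.
    print(os.urandom(16).hex())

    # The same source, wrapped in an API intended for cryptographic use.
    print(secrets.token_hex(16))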

dmwilcox · 4 months ago
It's not about the random numbers, it's about the tree of possibilities having to be defined up front (in software or hardware): all inputs should be defined and mapped to some output, and this process is predictable and reproducible.

This makes computers incredibly good at what people are not good at -- predictably doing math correctly, following a procedure, etc.

But because all of the possibilities of the computer had to be written up as circuitry or software beforehand, its variability of outputs is constrained to what we put into it in the first place (whether that's a seed for randomness or model weights).

You can get random numbers and feed them into the computer, but we call that "fuzzing", which is a search for crashes indicating unhandled input cases and possible bugs or security issues.

AstroBen · 4 months ago
> It is science fiction to think that a system like a computer can behave at all like a brain

It is science fiction to think that a plane could act at all like a bird. Although... it doesn't need to in order to fly

Intelligence doesn't mean we need to recreate the brain in a computer system. Sentience, maybe. General intelligence no

gloosx · 4 months ago
BTW, planes are directly inspired by birds and mimic the core principles of bird flight.

Mechanically it's different, since human engineering isn't as advanced as nature's, but of course comparing whole-brain function to simple flight is a bit silly.

ggreer · 4 months ago
Is there any specific mental task that an average human is capable of that you believe computers will not be able to do?

Also does this also mean that you believe that brain emulations (uploads) are not possible, even given an arbitrary amount of compute power?

gloosx · 4 months ago
1. Computers cannot self-rewire the way neurons do. A human can adapt to pretty much any specific mental task (an "unknown", new task) without explicit retraining, which current computers need in order to learn something new.

2. Computers can't do continuous, unsupervised learning: they require structured input, labeled data, and predefined objectives to learn anything. Humans learn passively all the time just by existing in the environment.

missingrib · 4 months ago
Yes, they can't have understanding or intentionality.
potamic · 4 months ago
The universe we know is fundamentally probabilistic, so by extension everything, including stars, planets, and computers, is inherently non-deterministic. But confining our discussion to outside of quantum effects and absolute determinism, we do not have a reason to believe that the mind should be anything but deterministic, scientifically at least.

We understand the building blocks of the brain pretty well. We know the structure and composition of neurons, we know how they are connected, what chemicals flow through them and how all these chemicals interact, and how that interaction translates to signal propagation. In fact, the neural networks we use in computing are loosely modelled on biological neurons. Both models are essentially comprised of interconnected units where each unit has weights to convert its incoming signals to outgoing signals. The predominant difference is in how these units adjust their weights, where computational models use back propagation and gradient descent, biological models use timing information from voltage changes.
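The unit model described above is tiny when written out: one "unit" is just a weighted sum pushed through a nonlinearity. A sketch with made-up weights (sigmoid chosen for illustration; biological timing dynamics are, as noted, more complex):

    import math

    def neuron(inputs, weights, bias):
        # Weighted sum of incoming signals, squashed into an outgoing signal.
        activation = sum(x * w for x, w in zip(inputs, weights)) + bias
        return 1 / (1 + math.exp(-activation))  # sigmoid

    # Illustrative values only.
    print(neuron([0.5, 0.1, 0.9], [0.4, -0.6, 0.2], bias=0.1))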

But just because we understand the science of something perfectly well doesn't mean we can precisely predict how it will work. Biological networks are very, very complex systems comprising billions of neurons with trillions of connections, acting on input that can vary in an immeasurable number of ways. It's like predicting earthquakes: even though we understand the science behind plate tectonics, to precisely predict an earthquake we would need to map the properties of every inch of the continental plates, which is an impossible task. But that doesn't mean we can't use the same scientific building blocks to build simulations of earthquakes which behave like any real earthquake would. If it looks like a duck, quacks like a duck, then what is a duck?

pdimitar · 4 months ago
Seems to me you are a bit overconfident that "we" (who is "we"?) understand how the brain works. F.ex. how does a neuron actively stretching a tentacle trying to reach other neurons work in your model? Genuine question, I am not looking to make fun of you, it's just that your confidence seems a bit much.
ukFxqnLa2sBSBf6 · 4 months ago
I guarantee computers are better at generating random numbers than humans lol
uh_uh · 4 months ago
Not only that but LLMs unsurprisingly make similar distributional mistakes as humans do when asked to generate random numbers.
pyfon · 4 months ago
Computers are better at hashing entropy.
CooCooCaCha · 4 months ago
This is why I think philosophy has become another form of semi-religious kookery. You haven't provided any actual proof or logical reason for why a computer couldn't be intelligent. If randomness is required then sample randomness from the real world.

It's clear that your argument is based on feels and you're using philosophy to make it sound more legitimate.

biophysboy · 4 months ago
Brains are low-frequency, energy-efficient, organic, self-reproducing, asynchronous, self-repairing, and extremely highly connected (thousands of synapses). If AGI is defined as "approximate humans", I think its gonna be a while.

That said, I don't think computers need to be human to have an emergent intelligence. It can be different in kind if not in degree.

dmwilcox · 4 months ago
I tried to keep my long post short so I cut things. I gestured at it -- there is nothing in a computer we didn't put there.

Take the same model weights, give it the same inputs, and you get the same outputs. Same with the pseudo-random number generator. And the "same inputs" is especially limited versus what humans are used to.

What's the machine code of an AGI gonna look like? It makes one illegal instruction and crashes? If it changes thoughts, will it flush the TLB and CPU pipeline? ;) I jest, but really think about the metal. The inside of modern computers is tightly controlled, with no room for anything unpredictable. I really don't think a von Neumann (or Harvard ;)) machine is going to cut it. Honestly, I don't know what will: controlled but not controlled, artificially designed but not deterministic.

In fact, that we've made a computer as unreliable as a human at reproducing data (ala hallucinating/making s** up) is an achievement itself, as much of an anti-goal as it may be. If you want accuracy, you don't use a probabilistic system on such a wide problem space (identify a bad solder joint from an image, sure. Write my thesis, not so much)

Krssst · 4 months ago
If the physics underlying the brain's behavior are deterministic, they can be simulated by software, and so can the brain.

(and if we assume that non-determinism is randomness, non-deterministic brain could be simulated by software plus an entropy source)

LouisSayers · 4 months ago
What you're mentioning is like the difference between digital vs analog music.

For generic stuff you probably can't tell the difference, but once you move to the edges you start to hear the steps in digital vs the smooth transition of analog.

In the same way, AI runs on bits and bytes, and there's only so much detail you can fit into that.

You can approximate reality, but it'll never quite be reality.

I'd be much more concerned with growing organic brains in a lab. I wouldn't be surprised to learn that people are covertly working on that.

Borealid · 4 months ago
Are you familiar with the Nyquist–Shannon sampling theorem?

If so, what do you think about the concept of a human "hear[ing] the steps" in a digital playback system using a sampling rate of 192kHz, a rate at which many high-resolution files are available for purchase?

How about the same question but at a sampling rate of 44.1kHz, or the way a normal "red book" music CD is encoded?
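For readers without the theorem handy: a band-limited signal is captured exactly by sampling at more than twice its highest frequency. A quick check against a ~20 kHz upper bound for human hearing (that bound is my number, not the commenter's):

    HEARING_LIMIT_HZ = 20_000  # rough upper bound of human hearing

    for rate in (44_100, 192_000):
        nyquist = rate / 2
        margin = "above" if nyquist > HEARING_LIMIT_HZ else "below"
        print(f"{rate} Hz sampling captures everything below {nyquist:.0f} Hz "
              f"({margin} the hearing limit)")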

Aloisius · 4 months ago
> Ask yourself, why is it so hard to get a cryptographically secure random number?

I mean, humans aren't exactly good at generating random numbers either.

And of course, every Intel and AMD CPU these days has a hardware random number generator in it.

bastardoperator · 4 months ago
Computers can't have unique experiences. I think it's going to replace search, but becoming sentient? Not in my lifetime, granted I'm getting up there.
pstuart · 4 months ago
On the newly released iPod: "No wireless. Less space than a Nomad. Lame."

;-)

sebastiennight · 4 months ago
The thing is, AGI is not needed to enable incredible business/societal value, and there is good reason to believe that actual AGI would damage our society and our economy, and, if many experts in the field are to be believed, threaten humanity's survival as well.

So I feel happy that models keep improving, and not worried at all that they're reaching an asymptote.

lolinder · 4 months ago
Really, the only people for whom this is bad news are OpenAI and their investors. If there is no AGI race to win, then OpenAI is just a wildly overvalued vendor of a hot commodity in a crowded market, not the best current shot at building a money-printing machine.
codingwagie · 4 months ago
I just used o3 to design a distributed scheduler that scales to 1M+ schedules a day. It was perfect, and did better than two weeks of thought about the best way to build this.
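(1M+ schedules a day is only about a dozen fires per second on average. The commenter doesn't share the design, but for a sense of the core mechanics, here is a hypothetical single-process sketch of the data structure most schedulers are built around -- a min-heap keyed on next fire time; a distributed version would typically move this state into shared storage and have workers claim due entries with locking:

    import heapq
    import time

    def run(jobs, max_fires=5):
        # jobs: list of (next_run_epoch, interval_s, name) tuples.
        heapq.heapify(jobs)  # min-heap ordered by next fire time
        for _ in range(max_fires):
            next_run, interval, name = heapq.heappop(jobs)
            time.sleep(max(0.0, next_run - time.time()))
            print(f"firing {name}")
            # Reschedule the job at its next interval.
            heapq.heappush(jobs, (next_run + interval, interval, name))

    now = time.time()
    run([(now + 0.1, 0.5, "job-a"), (now + 0.2, 1.0, "job-b")])

This is a sketch under stated assumptions, not the commenter's actual design.)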
csto12 · 4 months ago
You just asked it to design or implement?

If o3 can design it, that means it’s using open source schedulers as reference. Did you think about opening up a few open source projects to see how they were doing things in those two weeks you were designing?

codingwagie · 4 months ago
Why would I do that kind of research if it can identify the problem I am trying to solve and spit out the exact solution? Also, it was a rough implementation adapted to my exact tech stack.
mprast · 4 months ago
Yeah, unless you have very specific requirements, I think the baseline here is not building/designing it yourself but setting up an off-the-shelf commercial or OSS solution, which I doubt would take two weeks...
davidsainez · 4 months ago
While impressive, I'm not convinced that improved performance on tasks of this nature is indicative of progress toward AGI. Building a scheduler is a well-studied problem space. Something like the ARC benchmark is much more indicative of progress toward true AGI, but probably still insufficient.
fragmede · 4 months ago
The point is that AGI is the wrong bar to be aiming for. LLMs are sufficiently useful in their current state that even if it does take us 30 years to get to AGI, with just incremental improvements between now and then, they'll still be useful enough to provide value to users/customers for some companies to win big. VC funding will run out and some companies won't make it, but some of them will, to the delight of their investors. "AGI when?" is an interesting question, but might just be academic. We have self-driving cars, weight-loss drugs that work, reusable rockets, and useful computer AI. We're living in the future, man, and robot maids are just around the corner.
codingwagie · 4 months ago
the other models failed at this miserably. There were also specific technical requirements I gave it related to my tech stack
littlestymaar · 4 months ago
“It does something well” ≠ “it will become AGI”.

Your anecdotal example isn't more convincing than “This machine cracked Enigma's messages in less time than an army of cryptanalysts could in a month, surely we're gonna reach AGI by the end of the decade” would have been.

AJ007 · 4 months ago
I find now I quickly bucket people in to "have not/have barely used the latest AI models" or "trolls" when they express a belief current LLMs aren't intelligent.
burnte · 4 months ago
You can put me in that bucket then, but it's not true: I've been working with AI almost daily for 18 months, and I KNOW it's nowhere close to being intelligent. It doesn't look like your buckets are based on truth so much as appeal. I disagree with your assessment, so you conclude I don't know what I'm talking about. I hope you can understand that other people who know just as much as you (or even more) can disagree without being wrong or uninformed. LLMs are amazing, but they're nowhere close to intelligent.
tumsfestival · 4 months ago
Call me back when ChatGPT isn't hallucinating half the outputs it gives me.
MisterSandman · 4 months ago
Designing a distributed scheduler is a solved problem; of course an LLM was able to spit out a solution.
codingwagie · 4 months ago
As noted elsewhere, all the other frontier models failed miserably at this.
mountainriver · 4 months ago
I’ve had similar things over the last couple days with o3. It was one-shotting whole features into my Rust codebase. Very impressive.

I remember before ChatGPT, smart people would come on podcasts and say we were 100 or 300 years away from AGI.

Then we saw GPT shock them. The reality is these people have no idea, it’s just catchy to talk this way.

With the amount of money going into the problem and the linear increases we see over time, it’s much more likely we see AGI sooner than later.

dundarious · 4 months ago
Wow, 12 per second on average.
timeon · 4 months ago
I'm not sure what your point is in the context of the AGI topic.
codingwagie · 4 months ago
I'm a tenured engineer who spent a long time at FAANG, and I was casually beaten this morning by a far superior design from an LLM.
fusionadvocate · 4 months ago
Can someone throw some light on this Dwarkesh character? He landed a Zucc podcast pretty early on... how connected is he? Is he an industry plant?
gallerdude · 4 months ago
He's awesome.

I listened to Lex Fridman for a long time, and there were a lot of critiques of him (Lex) as an interviewer, but since the guests were amazing, I never really cared.

But after listening to Dwarkesh, my eyes are opened (or maybe my soul). It doesn't matter that I haven't heard of many of his guests, because he knows exactly the right questions to ask. He seems to have genuine curiosity for what the guest is saying, and will push back if something doesn't make sense to him. Very much recommend.

consumer451 · 4 months ago
He is one of the most prepared podcasters I’ve ever come across. He puts all other mainstream podcasts to deep shame.

He spends weeks reading everything by his guests prior to the interview, asks excellent questions, pushes back, etc.

He certainly has blind spots and biases just like anyone else. For example, he is very AI scale-pilled. However, he will have on people, like today's guests, who contradict his biases. This is something a host like Lex could apparently never do.

Dwarkesh is up there with Sean Carroll's podcast as the most interesting and most intellectually honest, in my view.

lexarflash8g · 4 months ago
https://archive.ph/IWjYP

He was covered in the Economist recently. I hadn't heard of him until now, so I imagine it's not just AI-slop content.

xbmcuser · 4 months ago
Most people talking about AI and economic growth have vested interests in saying it will increase economic growth, but they don't mention that under the world's current economic system, most if not all of that growth will go to the top 0.0001% of the population.