Readit News
karpathy · 6 months ago
Btw I notice many pretty bad errors in this transcription of the talk. The actual video will be up soon I hope.
dang · 6 months ago
Ah sorry! I'm going to downweight this thread now.

There's so much demand around this, people are just super eager to get the information. I can understand why, because it was my favorite talk as well :)

tomhow · 6 months ago
The video is now up and on the front page of HN:

https://news.ycombinator.com/item?id=44314423

kapildev · 6 months ago
How soon? I am contemplating whether to read this error-ridden transcript or wait for the video.
pudiklubi · 6 months ago
anything you'd want fixed immediately? happy to do so – or even take this down if you wish. it's your talk.
sotix · 6 months ago
Is this because it was recorded with AI tooling rather than a traditional note taker?
pudiklubi · 6 months ago
it was an audio recording, transcribed with speech to text models. there's definitely some errors and words lost. I also tried to emphasize this

pudiklubi · 6 months ago
For context - I was in the audience when Karpathy gave this amazing talk on software 3.0. YC has said the official video will take a few weeks to release, by which time Karpathy himself said the talk will be deprecated.

https://x.com/karpathy/status/1935077692258558443

levocardia · 6 months ago
To complete the loop, we need an AI avatar of Karpathy doing text-to-voice from the transcript. Who says AI can't boost productivity!

msgodel · 6 months ago
I listened to it with an old-fashioned CMU speech synth.
chrisweekly · 6 months ago
Do the talk's predictions about the future of the industry project beyond a few weeks? If so, I'd expect the salient points of the talk to remain valid. Hmm...
swyx · 6 months ago
i synced the slides with the talk transcript here : https://latent.space/s3
pudiklubi · 6 months ago
so you took my transcript and put it behind a newsletter sub? haha. just share them!
theyinwhy · 6 months ago
What poor judgement he must have if his outlook becomes irrelevant in a few weeks' time.

Edit: the emoji at the end of the original sentence has not been quoted. How a smile makes the difference. Original tweet: https://x.com/karpathy/status/1935077692258558443

theturtletalks · 6 months ago
It was in jest, more a comment on how quickly things move in AI
pudiklubi · 6 months ago
the way I read it, it's more about how fast examples and references become irrelevant, not the fundamentals of the talk.
qwertox · 6 months ago
We better stop talking about the future then.
afiodorov · 6 months ago
> So, it was really fascinating that I had the MenuGen demo basically working on my laptop in a few hours, and then it took me a week because I was trying to make it do it

Reminds me of work where I spend more time figuring out how to run repos than actually modifying code. A lot of my work is focused on figuring out the development environment and deployment process - all with very locked down permissions.

I do think LLMs are likely to change the industry considerably, as LLM-guided rewrites are sometimes easier than adding a new feature or fixing a bug - especially if the rewrite is into something more LLM-friendly (e.g., a popular framework). Each rewrite makes the code further Claude-codeable or Cursor-codeable; ready to iterate even faster.

andai · 6 months ago
The last 10% always takes 1000% of the time...
afiodorov · 6 months ago
I am not saying rewrites are always warranted, but I think LLMs change the cost-benefit balance considerably.
Aeolun · 6 months ago
Yup. Claude develops the first 90% without a sweat, and then starts flailing.
bcrosby95 · 6 months ago
I might be wrong, but it seems like some people are misinterpreting what is being said here.

Software 3.0 isn't about using AI to write code. It's about using AI instead of code.

So not Human -> AI -> Create Code -> Compile Code -> Code Runs -> The Magic Happens. Instead, it's Human -> AI -> The Magic Happens.
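A toy sketch of that contrast (pure Python; `call_llm` is a hypothetical stand-in for any chat-completion API, and its stubbed reply here is invented for illustration):

```python
# Software 1.0: a human writes the logic; the code is the program.
def sentiment_1_0(text: str) -> str:
    negative_words = {"bad", "awful", "terrible", "broken"}
    hits = sum(word in negative_words for word in text.lower().split())
    return "negative" if hits else "positive"

# Software 3.0: the English prompt is the program; the model does the work.
# call_llm is a placeholder, not a real client; stubbed out so this runs.
def sentiment_3_0(text: str, call_llm=lambda prompt: "negative") -> str:
    prompt = f"Classify the sentiment of this review as positive or negative:\n{text}"
    return call_llm(prompt).strip().lower()
```

In the 1.0 version the behaviour lives in the word list; in the 3.0 version it lives in the prompt plus whatever model sits behind `call_llm`.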

imiric · 6 months ago
So... Who builds the AI?

This is why I think the AI industry is mostly smoke and mirrors. If these tools are really as revolutionary as they claim they are, then they should be able to build better versions of themselves, and we should be seeing exponential improvements of their capabilities. Yet in the last year or so we've seen marginal improvements based mainly on increasing the scale and quality of the data they're trained on, and the scale of deployments, with some clever engineering work thrown in.

TeMPOraL · 6 months ago
> If these tools are really as revolutionary as they claim they are, then they should be able to build better versions of themselves, and we should be seeing exponential improvements of their capabilities.

Recursive self-improvement is literally the endgame scenario - hard takeoff, singularity, the works. Are you really saying you're dissatisfied with the progress of those tools because they didn't manage to end the world as we know it just yet?

iLoveOncall · 6 months ago
> If these tools are really as revolutionary as they claim they are, then they should be able to build better versions of themselves, and we should be seeing exponential improvements of their capabilities. Yet in the last year or so we've seen marginal improvements based mainly on increasing the scale and quality of the data they're trained on, and the scale of deployments, with some clever engineering work thrown in.

Yes and we've actually been able to witness in public the dubious contributions that Copilot has made on public Microsoft repositories.

trgn · 6 months ago
> who builds the ai

3 to 5 companies instead of the hundreds of thousands who sell software now

bmicraft · 6 months ago
The AI isn't much easier when you consider that the "AI" step is actually: create dataset -> train model -> fine-tune model -> run model to train a much smaller model -> ship much smaller model to end devices.
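The tail of that pipeline (a big model teaching a much smaller one) is knowledge distillation. A toy pure-Python sketch of the core idea, with all logits invented for illustration:

```python
import math

def softmax(logits, T=1.0):
    # Temperature T > 1 softens the distribution.
    scaled = [z / T for z in logits]
    m = max(scaled)
    exps = [math.exp(z - m) for z in scaled]
    total = sum(exps)
    return [e / total for e in exps]

# Teacher (large model) logits for one input; numbers are made up.
teacher_logits = [4.0, 1.0, 0.5]
soft_targets = softmax(teacher_logits, T=4.0)

# Student (small model) logits for the same input; also made up.
student_logits = [2.0, 1.2, 0.8]
student_probs = softmax(student_logits, T=4.0)

# The student is trained to minimize cross-entropy against the teacher's
# soft targets, which carry more signal than the hard label alone.
distill_loss = -sum(t * math.log(s) for t, s in zip(soft_targets, student_probs))
```

The softened targets preserve the teacher's relative preferences among the wrong classes, which is what lets the small model inherit behaviour it couldn't learn from hard labels alone.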
autobodie · 6 months ago
I don't think people are misinterpreting. People just don't find it convincing or intriguing.
zie1ony · 6 months ago
This is a great idea, until you have to build something.
layer8 · 6 months ago
Until you have to reliably automate something, I would say.
fellatio · 6 months ago
Let alone productionize it! And god forbid maintain it. And have support that doesn't crap out.
obiefernandez · 6 months ago
Self plug: I wrote a whole bestselling book on this exact topic

https://leanpub.com/patterns-of-application-development-usin...

adriand · 6 months ago
It’s like a friend of mine who has an AI company said to me: the future isn’t building a CRM with AI. The future is saying to the AI, act like a CRM.
__loam · 6 months ago
And it won't work as well as an actual CRM, because you've scrubbed from the organization all the domain knowledge of that software and how it ought to work.
agarren · 6 months ago
That jibes with what Nadella said in an interview not too long ago. Essentially, SaaS apps disappear entirely as LLMs interface directly with the underlying data store. The unspoken implication being that software as we understand it goes away, as people interface with LLMs directly rather than ~~computers~~ software at all.

I kind of expect that from someone heading a company that appears to have sold the farm in an AI gamble. It’s interesting to see a similar viewpoint here (all biases considered)

Vegenoid · 6 months ago
> people interface with LLMs directly rather than software at all

What does this mean? An LLM is used via a software interface. I don’t understand how “take software out of the loop” makes any sense when we are using reprogrammable computers.

__loam · 6 months ago
This industry is so tiring
mattgreenrocks · 6 months ago
Definitely. And it gets more tiring the more experience you have, because you've seen countless hype cycles come and go with very little change. Each time, the same mantra is chanted: "but this time, it's different!" Except, it usually isn't.

Started learning metal guitar seriously to forget about the industry as a whole. Highly recommended!

alganet · 6 months ago
> imagine changing it and programming the computer's life

> imagine that the inputs for the car are on the bottom, and they're going through the software stack to produce the steering and acceleration

> imagine inspecting them, and it's got an autonomy slider

> imagine works as like this binary array of a different situation, of like what works and doesn't work

--

Software 3.0 is imaginary. All in your head.

I'm kidding, of course. He's hyping because he needs to.

Let's imagine together:

Imagine it can be proven to be safe.

Imagine it being reliable.

Imagine I can pre-train on my own cheap commodity hardware.

Imagine no one using it for war.

serial_dev · 6 months ago
I tried to imagine all that he described and felt literally nothing. If he wants to hype AI, he should find his Steve Jobs.
msgodel · 6 months ago
It was easy for me to see and it's incredible. Maybe I should be launching a startup.
Henchman21 · 6 months ago
If I’m going to be leaning on my imagination this much I am going to imagine a world where the tech industry considers at great length whether or not something should be built.
alganet · 6 months ago
Let me be clear about what I think: I have zero fear of an AI apocalypse. I think the fear is part of the scam.

The danger I see is related to psychological effects caused by humans using LLMs on other humans. And I don't think that's a scenario anyone is giving much attention to, and it's not that bad (it's bad, but not world end bad).

I totally think we should all build it. To be trained from scratch on cheap commodity hardware, so that a lot of people can _really_ learn it and quickly be literate on it. The only true way of democratizing it. If it's not that way, it's a scam.

no_wizard · 6 months ago
On a large contention of this essay (which I’m assuming the talk is based on or is transcribed from, depending on order): I do think that open source models will eventually catch up to closed source ones, or at least be “good enough”, and I also think you can already see how LLMs are augmenting knowledge work.

I don’t think it’s the 4th wave of pioneering a new dawn of civilization but it’s clear LLMs will remain useful when applied correctly.

bix6 · 6 months ago
Why would open source outpace? Isn’t there way more money in the closed source ones and therefore more incentive to work on them?
no_wizard · 6 months ago
I didn’t say outpace, but I do believe the collective nature of open source will allow it to catch up, much like it did with browser tech, at which point you’ll see a shift of resources toward it by major companies. It’s a collective-works thing. I think it’s also attractive to work on in open source, much like Linux or web browsers (hence the comparison), and that will help it along over time.

I stick by my general thesis that OSS will eventually catch up, or that the gap will be so small that only frontier applications will benefit from using the most advanced models

oblio · 6 months ago
They didn't say "outpace", they said "catch up to good enough levels".
umeshunni · 6 months ago
> I do think that open source models will eventually catch up to closed source ones

It felt like that was the direction for a while, but in the last year or so, the gap seems to have widened. I'm curious whether this is my perception or validated by some metric.

msgodel · 6 months ago
Already today I can use aider with qwen3 for free but have to pay per token to use it with any of the commercial models. The flexibility is worth the lower performance.
no_wizard · 6 months ago
This was how early browsers felt too: the open source browser engines were slower at adapting than the ones developed by Netscape and Microsoft, but eventually it all reversed and open source excelled past the closed source software.

Another way to put it: it usually takes a little while for open source projects to catch up, but once they do, they gain traction quite quickly over their closed source counterparts.

arkj · 6 months ago
> Software 2.0 are the weights which program neural networks.

> I think it's a fundamental change, is that neural networks became programmable with large libraries... And in my mind, it's worth giving it the designation of a Software 3.0.

I think it's a bit early to change your mind here. We love your 2.0; let's wait some more time till the dust settles, so we can see clearly and up the revision number.

In fact I'm a bit confused about the number AK has in mind. Anyone else knows how he arrived at software 2.0?

I remember a talk by professor Sussman where he suggest we don't know how to compute, yet[1].

I was thinking he meant this,

Software 0.1 - Machine Code/Assembly Code

Software 1.0 - HLLs with Compilers/Interpreters/Libraries

Software 2.0 - Language comprehension with LLMs

If we are calling weights 2.0 and NN with libraries as 3.0, then shouldn't we account for functional and oo programming in the numbering scheme?

[1] https://www.youtube.com/watch?v=HB5TrK7A4pI

autobodie · 6 months ago
Objectivity is lacking throughout the entire talk, not only in the thesis. But objectivity isn't very good for building hype.
bigyabai · 6 months ago
Reminds me of Vitalik Buterin. I spent a lot of my starry-eyed youth reading his blog, and was hopeful that he was applying the learned-lessons from the early days of Bitcoin. Turned out he was fighting the wrong war though, and today Ethereum gets less lip service than your average shitcoin. The whole industry went up in flames, really.

Nerds are good at the sort of reassuring arithmetic that can make people confident in an idea or investment. But oftentimes that math misses the forest for the trees, and we're left betting the farm on a profoundly bad idea like Theranos or DogTV. Hey, I guess that's why it's called Venture Capital and not Recreation Investing.

Karrot_Kream · 6 months ago
I'm curious why you think that? I thought the talk was pretty grounded. There was a lot of skepticism of using LLMs unbounded to write software and an insistence on using ground truth free from LLM hallucination. The main thesis, to me, seemed like "we need to write software that was designed with human-centric APIs and UI patterns to now use an LLM layer in front and that'll be a lot of opportunity for software engineers to come."

If anything it seemed like the middle ground between AI boosters and doomers.

baxtr · 6 months ago
How can someone so smart become a hype machine? I can’t wrap my head around it. Maybe he had the opportunity to learn from someone he worked closely with?
DaveChurchill · 6 months ago
The death of deterministic computing and unverifiable information is a horror show
pests · 6 months ago
I think how Andrej views 3.0 is hinted at by his later analogy about Tesla. He saw a ton of manually written Software 1.0 C++ replaced by the weights of the NN. What we used to write manually in explicit code is now incorporated into the NN itself, moving the implementation from 1.0 to 3.0.
koakuma-chan · 6 months ago
"revision number" doesn't matter. He is just saying that traditional software's behaviour ("software 1.0") is defined by its code, whereas outputs produced by a model ("software 2.0") are driven by its training data. But to be fair I stopped reading after that, so can't tell you what "software 3.0" is.
ath3nd · 6 months ago
I find it hard to care for the marginal improvements in a glorified autocomplete that guzzles a shit ton of water and electricity (all stuff that could be used for more useful things than generating a picture of a cat with human hands or some lazy rando's essay assignment) and then ends up having to be coddled by a real engineer into a working solution.

Software 2.0? 3.0? Why stop there? Why not software 1911.1337? We went through crypto, NFTs, web3.0, now LLMs are hyped as if they are frigging AGI (spoiler, LLMs are not designed to be AGI, and even if they were, you sure as hell won't be the one to use them to your advantage, so why are you so irrationally happy about it?).

Man this industry is so tiring! What is most tiring is the dog-like enthusiasm of the people who buy it EVERY.DAMN.TIME, as if it's gonna change the life of most of them for the better. Sure, some of these are worse and much more useless than others (NFTs), but at the core of all of it is this cult-like awe we as a society have towards figures like the Karpathys, Musks and Altmans of this world.

How are LLMs gonna help society? How are they gonna help people work, create and connect with one another? They take away the joy of making art, the joy of writing, of learning how to play a musical instrument and sing, and now they are coming for software engineering. Sure, you might be 1%/2% faster, but are you happier, are you smarter (probably not: https://www.mdpi.com/2076-3417/14/10/4115)?

throwawayoldie · 6 months ago
KARPATHY, MUSK, ALTMAN AND COMPANY: "How are we going to 'help society'? I'm sorry, I don't understand the question."