gavinray · 2 months ago
There's an interesting parallel to be drawn here from prior RL research:

  "Some evolutionary algorithms keep only the best performers in the population, on the assumption that progress moves endlessly forward. DGMs, however, keep them all, in case an innovation that initially fails actually holds the key to a later breakthrough when further tweaked. It’s a form of “open-ended exploration,” not closing any paths to progress. (DGMs do prioritize higher scorers when selecting progenitors.)"
Kenneth Stanley[0], the creator of the NEAT[1]/HyperNEAT (Picbreeder) algorithms, wrote an entire book about open-ended exploration: "Why Greatness Cannot Be Planned: The Myth of the Objective".

[0]: https://en.wikipedia.org/wiki/Kenneth_Stanley

[1]: https://en.wikipedia.org/wiki/Neuroevolution_of_augmenting_t...
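The selection scheme quoted above can be sketched in a few lines. This is a toy illustration only: the softmax weighting and the dict-based "agent" records are my assumptions, not the DGM paper's actual selection rule.

```python
import math
import random

def select_parent(archive, temperature=1.0):
    # Softmax-weighted pick over the FULL archive: higher scorers are
    # favored, but no agent is ever discarded, so an early "failure"
    # can still seed a later breakthrough when further tweaked.
    weights = [math.exp(a["score"] / temperature) for a in archive]
    return random.choices(archive, weights=weights, k=1)[0]

def elitist_survivors(population, k):
    # The contrasting strategy: keep only the top-k performers,
    # closing off every other path through the search space.
    return sorted(population, key=lambda a: a["score"], reverse=True)[:k]

archive = [{"id": i, "score": s} for i, s in enumerate([0.1, 0.5, 0.9, 0.3])]
parent = select_parent(archive)        # any archived agent is still reachable
top_two = elitist_survivors(archive, 2)  # elitism keeps only the best 2
```

The point of the contrast: `elitist_survivors` permanently discards stepping stones, while `select_parent` only biases toward them without pruning.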

paulluuk · 2 months ago
It's really a choice: do you want to waste compute or do you want to waste potential?

While prioritizing higher scorers when selecting progenitors will initially mitigate some of the problems, you will eventually end up with hundreds of thousands of agents that have only learned to repeat the letter "a" a million times in a row, which is a huge waste of processing.

tmaly · 2 months ago
I followed his work on NEAT at the time. It was really cool. But I never imagined we would get to where we are today with AI.
spwa4 · 2 months ago
The same could be said of transformers, that only started to perform when scaled up to an absolutely ridiculous degree. I would argue most researchers are of the opinion that any learning system, scaled up enough, would work.

I think the limits of machine learning stem from the fact that all ML "knowledge" is secondhand, the exceptions being talking to humans and, to a much smaller extent, programming. Getting AIs to interact with, say, cars during training is the way forward.

achrono · 2 months ago
I wish an org like IEEE would be way more rigorous than what's revealed with the first paragraph:

>In April, Microsoft’s CEO said that artificial intelligence now wrote close to a third of the company’s code. Last October, Google’s CEO put their number at around a quarter. Other tech companies can’t be far off.

Take a moment to reflect -- a third of the company's code? Generative AI capable of writing reasonable code has arguably been around for less than 5 years. In the 50 years of Microsoft, have the last 5 contributed a third of the total code base? That would mean these 5 years out-produced most of the previous 45, with AI responsible for essentially all of it.

Okay, maybe Microsoft meant to say new/incremental code?

No, because Satya is reported to have said, "I’d say maybe 20%, 30% of the code that is inside of our repos today [...] written by software".

bwfan123 · 2 months ago
If a third of Microsoft's code looks like this Copilot-generated PR [1], the company is going to go down the tubes soon. And I hope this happens, so that these corporate chiefs learn a harsh lesson when they are ejected for forcing stupidity across the org.

[1] https://github.com/dotnet/runtime/pull/115762

rvnx · 2 months ago
https://www.google.com/search?q=msft+stock

They've never done so well

The issue with Copilot is that it runs on GPT-4o and GPT-4o-mini, and neither model is good at programming.

zack6849 · 2 months ago
I'm pretty sure they meant a third of newly written code; obviously they don't mean that a third of all their existing code was written by AI
achrono · 2 months ago
That's a reasonable interpretation, but that is not what Microsoft has said. Satya talks of "30% of the code that is inside of our repos today".

Source: https://www.cnbc.com/2025/04/29/satya-nadella-says-as-much-a...

davidmurdoch · 2 months ago
They clearly mean "new" code. Meaning on any recent day, that amount of code is authored by AI.
achrono · 2 months ago
No, because Satya's claim is about "30% of the code that is inside of our repos today".

Source: https://www.cnbc.com/2025/04/29/satya-nadella-says-as-much-a...

throwawayoldie · 2 months ago
It's not his job to accurately report numbers, or really to do anything that requires technical acumen. His job is more akin to that of a cheerleader, or a carnival barker.
cimi_ · 2 months ago
They probably mean new code, not the entire codebase, but even so I think those numbers are ridiculous given my experience.

Is there any evidence of this (anywhere, not just MS or Google)?

paulluuk · 2 months ago
I'm not sure it's ridiculous if you factor in something like Copilot. Heck, even your IDE's built-in autocomplete (which only finishes the current variable name) can get close to being responsible for 20% of your code; with tools like Copilot I think you can hit that target even more easily.
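A back-of-envelope version of that claim (every number below is a made-up assumption for illustration, not measured data): even plain identifier completion can account for a noticeable share of typed characters.

```python
# All three inputs are illustrative assumptions, not measurements.
identifier_share = 0.40   # fraction of source characters that sit in identifiers
completed_share = 0.50    # fraction of identifiers the IDE finishes for you
chars_completed = 8 / 12  # completed chars per identifier / avg identifier length

fraction = identifier_share * completed_share * chars_completed
print(f"autocomplete 'wrote' ~{fraction:.0%} of the characters")
```

Under these assumptions autocomplete alone lands in the low teens, so a full-line tool like Copilot plausibly clears 20%.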
kordlessagain · 2 months ago
Are we really sitting here dissecting what he's saying as if it means anything at all for the future? 20% or 30% today is 100% tomorrow. That much is certain.
AnimalMuppet · 2 months ago
100%? Certain? I disagree, strongly.
SoftTalker · 2 months ago
I've always interpreted that as "a third of the company’s (new) code" though I guess it would be nice of them to make that clear.
exe34 · 2 months ago
Maybe they've had LLMs for a very long time, given the quality of their code...
seydor · 2 months ago
Newly written code. But the consensus is that these are inflated numbers that don't account for the revisions this code needs. It would be interesting for them to tell us what % of the LLM-generated code gets thrown away.
bee_rider · 2 months ago
I mean… it is objectively the truth to quote the CEO of MS as saying what he said, whether or not he is lying or using a misleading metric. The only questionable things about the quote, imo, are

> Other tech companies can’t be far off.

First, MS and Google are working on coding assistants, so I’d expect them to be quite ahead of the curve in terms of what their CEOs report. Both in terms of what they are actually doing (since they have a bunch of people working there who are interested in AI coding assistants, surely they are using them), and in terms of what the head advertisers for these products, the CEOs, are willing to say (although I should be clear, I’m not necessarily saying he’s lying or being misleading; he’s in charge of a company that is advertising an AI tool, and maybe all his reports are also emphasizing how good the dogfood is).

Second, and relatedly, quoting an AI tool salesman on how much of his company’s code is written by AI… eh, it is a big company, and the CEO of MS is a known figure, but maybe they should be explicitly skeptical toward him. As you note, I wouldn’t be surprised if MS itself were far off from what he said in the quote, let alone other companies…

Although, if he says:

> "I’d say maybe 20%, 30% of the code that is inside of our repos today [...] written by software".

Depending on how you look at it, that doesn’t necessarily preclude, like, classic macros and other classic code generation tools, so actually I have no idea what it even means. If an AI touches a JavaScript minifier, does it get credit for all the JavaScript that gets generated by it? Haha.

mucha · 2 months ago
What Satya says: “I’d say maybe 20%, 30% of the code that is inside of our repos today and some of our projects are probably all written by software,”

First line from the article: In April, Microsoft’s CEO said that artificial intelligence now wrote close to a third of the company’s code.

Software != AI

Source: https://www.cnbc.com/2025/04/29/satya-nadella-says-as-much-a...

CNBC misquotes Satya ("AI" for "software") in the same article that carries his actual quote.

datameta · 2 months ago
I think this is interesting enough for a post in and of itself: https://arxiv.org/abs/2505.22954
wiz21c · 2 months ago
From the article abstract: "All experiments were done with safety precautions (e.g., sandboxing, human oversight)."

Do the authors really believe "safety" is necessary, i.e., that there is a risk that something goes wrong? What kind of risk?

datameta · 2 months ago
From what I understand, alignment and interpretability were rewarded as part of the objective function. I think it is prudent that we bake in these "guardrails" early on.
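A toy sketch of what "rewarding alignment and interpretability in the objective" could look like. The weights, function names, and the idea of folding everything into one scalar are my illustrative assumptions, not the paper's actual formulation.

```python
def overall_score(task_score, alignment_score, interpretability_score,
                  w_task=0.8, w_align=0.1, w_interp=0.1):
    # Weighted scalarization: task performance dominates, but agents
    # are also rewarded for alignment and interpretability, so the
    # "guardrails" are baked into the selection pressure itself.
    return (w_task * task_score
            + w_align * alignment_score
            + w_interp * interpretability_score)

# An agent that games the task but flunks the guardrails is penalized
# relative to a slightly weaker but well-behaved one:
gamer = overall_score(0.9, alignment_score=0.1, interpretability_score=0.1)
honest = overall_score(0.8, alignment_score=0.9, interpretability_score=0.9)
```

With these (assumed) weights the well-behaved agent outscores the gamed one, which is the intended effect of such a term.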
catoc · 2 months ago
Number of lines of code… airplane weight… etc
