neil_s · 5 years ago
I had trouble accessing the relevant video snippet even after going through the conference registration, so here's a summary.

You can view the demo at https://twitter.com/i/broadcasts/1OyKAYWPRrWKb starting around 29:00.

It's Sam Altman demoing a massive OpenAI model that was trained on GitHub OSS repos using a Microsoft supercomputer. It's not IntelliCode, but the host says they're working on compressing the models to a size that would be feasible in IntelliCode. The code model uses English-language comments, or simply function signatures, to generate entire functions. Pretty cool.
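
To give a flavour of the workflow (my own hypothetical reconstruction, not actual output from the demo), the user types a signature and a docstring and the model fills in the body:

  # Hypothetical reconstruction of the demo's workflow, not actual output.
  # The user writes only the signature and docstring; everything below the
  # marker is the kind of body the model generates.
  def average_rating(reviews):
      """Return the mean 'rating' of a list of review dicts, or 0.0 if empty."""
      # --- model-generated body below ---
      if not reviews:
          return 0.0
      return sum(review["rating"] for review in reviews) / len(reviews)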

sama · 5 years ago
Thanks, but it's Sam McCandlish doing the demo (and the project).
KhoomeiK · 5 years ago
I'm confused. Is that not you doing the OpenAI demo around 29:00?

Deleted Comment

BubRoss · 5 years ago
Great, even more lag coming in the next version of Visual Studio.
YeGoblynQueenne · 5 years ago
So that's basically program synthesis from natural language (ish) specifications (i.e. the comments).

I can see this being a useful tool [1]. However, I don't expect any capacity for innovation. At best this is like having an exceptionally smart autocomplete that can look up code snippets on SO for you (provided those snippets are no longer than one line).

That's not to say that it can't write new code that nobody has quite written before in the same way. But for a tool like this to be useful, it must stick as close as possible to what is expected, or it will slow development down rather than help it. Which means it can only do what has already been done before.

For instance, don't expect this to come up with a new sorting algorithm out of the blue, or to write good code for a problem when the majority of code solving that problem on GitHub happens to be pretty bad.

In other words: everyone can relax. This will not take your job. Or mine.

____________

[1] I apologise to the people who know me and who will now be falling off their chairs. OK down there?

gwern · 5 years ago
I think you are underselling the potential of a model which deeply understands programming. Imagine combining such a model with something like AutoML-Zero (https://arxiv.org/abs/2003.03384). It may not be 'creative', but used as tab-completion it's not being rewarded, incentivized, or used in any way that would exercise its ability to create a new sorting algorithm.
raghavgoyal14 · 5 years ago
I agree on the tab-completion part. Something like Gmail's Smart Compose could have potentially huge benefits here.

But I'm not sure about the "deeply understands programming" part. Language modelling and "AI", in their current form, uncover only statistical correlations and barely scratch the surface of what "understanding" is. This has restricted the deployment of the majority of academic research into the real world, and this, I believe, is no different: it will work only in constrained settings.

Edit: typo

YeGoblynQueenne · 5 years ago
BERT is a language model: it's trained to predict tokens in a sequence. It does not have any capacity to "understand" programming, or anything at all. Nor can it produce outputs that are not similar to the examples it's been trained on. Like all neural net models it can interpolate between its examples, but it can't extrapolate to regions of the sample space it's never seen. This is why I say it lacks the ability to innovate.

I'm not sure how you would combine AutoML-Zero with BERT. How do you mean?

yazr · 5 years ago
What do you think is a more productive path leading to "AutoCode"?!

A. Add external definitions or reward formalism to make the code-space easier to search?

OR

B. Keep adding code trees, execution traces, comments, memory dumps and learn from those?

My own instinct is that AlphaZero was a lot more convincing than AlphaStar, so lots of (A) is definitely needed.

DJHenk · 5 years ago
> In other words: everyone can relax. This will not take your job. Or mine.

Of course not. This technology converts writing code into bug hunting in pre-written code. Finding bugs in code that you did not write is way harder than writing the code yourself.

So if anything, this makes programming harder, not easier, and we will need more programmers, not less.

OOPMan · 5 years ago
Oh dear.

And then the model trains itself on the buggy code written and poorly debugged by these extra coders and then so on and so forth.

Codepocalypse.

Kill it with fire!

westurner · 5 years ago
> At best this is like having an exceptionally smart autocomplete function that can look up code snippets on SO for you (provided those code snippets are no longer than one line).

Yeah, all it could do for you is autocomplete around what it thinks the specification might be at that point in time.

> But what if Andy gets another dinosaur, a mean one? -- Toy Story (1995)

joshuak · 5 years ago
I agree completely with your expectation of the abilities of such a system.

However, I think very little programming labor is employed in the construction of new algorithms, or even of novel business logic; even a casual stroll through GitHub reveals a staggering amount of reimplementation.

I think the promise here is the ability to code in a more conceptual way with less fiddling with the finicky details.

Swizec · 5 years ago
> I think the promise here is the ability to code in a more conceptual way with less fiddling with the finicky details.

This is basically how product managers code. Or former engineers turned engineering managers. Or even team leads. Hell, maybe like an architect?

You come up with a rough sketch, design the system, think through a couple edge cases, tell the computer what you need, and the computer figures out the details for you. Similar to being a high level engineer that designs/defines/codes the broad strokes of something and then lets the lower level minions handle details.

We made a similar leap when compilers were invented.

YeGoblynQueenne · 5 years ago
I agree, and that's why I think such a system is not capable of innovation.

In the same way, that's why I think it would be a useful tool: it promises to automate away the kind of coding that most programmers can do with their eyes closed, which is the most boring and repetitive part of the job.

Like, without trying to demean it, it sounds like a great boilerplate generator.

gradys · 5 years ago
I'd put it differently. This is going to take your job, just like an assembly programmer from the 70s might consider Python to have basically taken their job. In software, the job is constantly eating itself and transforming.

It's part of the job to continually incorporate new capabilities and lever yourself up.

BaronSamedi · 5 years ago
I agree. While this is well done, it seems to be copying human programming techniques rather than allowing the AI to create code that it thinks is optimal. I think there is the potential to evolve efficient and secure code that is free from the constraints we impose on it due to the way our minds work. Such code may not be intelligible to us but could very well be much better than what we could write.
random32840 · 5 years ago
An AI like this can hold a hell of a lot more information in its head at once than a human can. Each decision it makes is based on way more context; it can manipulate the problem using much more information, much faster. The problem is that it can't think in abstractions.

If AI gets to the point where it has a reasonable understanding of the shape of the data & the basic spatial manipulations being applied (not far off IMO), I'd expect it to be waaaaaay better at discovering certain types of new algorithms than humans. It can handle thinking about algorithms that have millions of independently moving parts in a way a human can't.

Humans have the edge in deriving algorithms that require a sequence of high-level steps on an abstraction. "Do this, then we get a thing, then we do some stuff to the thing, stretch it, squash it, massage it." AI sucks at that; it doesn't think in the same kind of flexible abstractions.

But imagine if you built an understanding of how the code will be compiled, and how that will interact with the cache, into the AI. That's very difficult for humans because we can't think about all those mechanics at once; we have to focus on one at a time. An AI that really gets it? I could see it writing a better sorting algorithm for a specific, complex datatype than a human could, or at the very least having the competitive edge because it can do it basically instantly.

izabera · 5 years ago
How often does the average programmer come up with a new sorting algorithm?
pharke · 5 years ago
Yeah, I'm thinking it would be more useful to have a really well-indexed library of functions accessible by search.
gameswithgo · 5 years ago
AlphaGo and AlphaStar were certainly creative. This project in its current state may not have that capacity, but it also may not be a huge leap to get there.
tanilama · 5 years ago
I mean it is cool.

But here's the thing: the natural-language description of a function is not always this unambiguous.

When you tell a function to 'compute XYZ', what you actually mean is 'check whether X.a exists; if so, execute branch 1, else branch 2'.
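
As a concrete (made-up) illustration, that short English phrase already hides an explicit branch:

  # Hypothetical illustration only: "compute XYZ" silently encodes a
  # branch on whether the attribute `a` exists.
  def compute_xyz(x):
      if getattr(x, "a", None) is not None:
          return x.a * 2  # branch 1: X.a exists
      return 0            # branch 2: fallback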

If the logic gets really complicated, then describing it accurately in human language isn't necessarily faster than writing the code directly. Otherwise, we wouldn't need to invent programming languages at all; we could just write compilers that interpret and execute human languages.

I'm also curious whether the model itself is conditioned on the type constraints of a class. It is neat that they picked Python in this case. But if it were Java or another statically typed language, would this system condition its generation not only on the natural-language text, but also on the resulting type system? My bet, given my understanding of the language-modeling approach they use, is that they are not doing this, due to the very high complexity and cost of training and domain adaptation.

Overall, this again is an interesting demo. But for code generation from human language to be useful, you really need something like 99% accuracy for it to be remotely practical.

nerdponx · 5 years ago
This might be more useful for a task like "read files off a list, and download them in parallel, with no more than 20 concurrent downloads." That particular task might be a one-liner in some programming languages, but there are a lot of programs like that which need significant bookkeeping and/or boilerplate even though their plain-language description of intended behavior is not complicated.
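
For comparison, here is roughly what that bookkeeping looks like written by hand (a sketch using only the Python standard library; the names are illustrative):

  # Hand-written sketch of the task described above: download files from a
  # list of URLs with at most 20 concurrent downloads.
  from concurrent.futures import ThreadPoolExecutor
  from urllib.request import urlretrieve
  import os

  def download(url):
      filename = os.path.basename(url) or "index.html"
      urlretrieve(url, filename)  # save into the working directory
      return filename

  def download_all(urls, max_workers=20):
      # ThreadPoolExecutor caps concurrency at max_workers
      with ThreadPoolExecutor(max_workers=max_workers) as pool:
          return list(pool.map(download, urls))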

Or implementing a sophisticated protocol that has a formal specification. If you can express the correct behavior in some kind of pithy pseudocode, a tool like this could "compile" that to code in various programming languages. Like a super-powered version of SWIG.

MiroF · 5 years ago
I agree that code generation of complex functions is hard.

But I think the example given of unit testing, i.e. natural-language description of a function's specific behavior -> code, is extremely useful.
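
For instance, a sketch of what that could look like (hypothetical; `apply_discount` is a made-up function under test):

  # Hypothetical sketch of the unit-test use case: the developer supplies
  # only the natural-language comment, and the model emits the test.
  import unittest

  def apply_discount(price, discount):
      """Toy implementation under test (made up for this sketch)."""
      return price * (1 - discount)

  class TestApplyDiscount(unittest.TestCase):
      # Spec: "applying a 20% discount to a $100 price should return $80"
      def test_twenty_percent_discount(self):
          self.assertAlmostEqual(apply_discount(100.0, 0.20), 80.0)

  if __name__ == "__main__":
      unittest.main()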

tanilama · 5 years ago
Unit testing is a good use case.

But that would require conditioning on the type system, meaning the code-gen needs to understand the object's interface, which, while not impossible with current techniques, is hard enough due to computational complexity.

Again, I don't dispute that this tool is interesting. But claiming it is groundbreaking or game-changing is simply not right.

The majority of a programmer's time is not spent typing out code. It is spent looking at the comment/description, thinking about it, editing some code, then rethinking and editing again.

This tool has the potential to save some typing time, but it is not going to change things fundamentally.

IdiocyInAction · 5 years ago
How does this do compared to other models? Is this a totally cutting edge result? On the surface, it seems quite impressive, but sans an environment to try it out with, I cannot be entirely sure. Still, this does make me question whether I chose a safe career, haha.

The thing is, I'd really need to see a live demo to see how good this is. Making mistakes is actually kind of a big issue; as most people know, debugging code is harder than writing it. And a lot of the language models that can write impressive-seeming text also generate masses of garbage. There's no way to know whether this was cherry-picked or not.

The mere fact that it can extract meaning from text like this is already really impressive though.

bglazer · 5 years ago
I've read a fair number of papers on neural program synthesis lately. To me, these seemed to be obviously cherry-picked examples, so you can't really evaluate the whole system based on them.

However, this is fairly impressive for a couple of reasons. First, the system constructs programs from natural-language descriptions, rather than from examples of input-output pairs or a formal specification, which are the most common settings for program synthesis. Second, they're generating full-blown Python, not a smaller, domain-specific language.

Finally, and this is pretty mind-blowing: the seamless, idiomatic use of loops, branches, and function calls. I haven't seen previous program synthesis tools generate such complex code. They're typically limited to simple linear programs of fewer than about 100 lines; complex control flow and function calls are still beyond their reach for the most part.

I'm not an active researcher in neural program synthesis, so my statements may not reflect the current state of the art.

I honestly thought that the most promising route forward for program synthesis would be a model that incorporated knowledge of the syntax and semantics of code: most likely, a model that manipulated, or at least had some view of, the program's AST. This seems to be just throwing a giant Transformer model at GitHub.

Fine tuning a vanilla language model on a giant corpus of code feels like a dead end for the field, long-term. It seems obvious to me that humans are doing something more than just statistical pattern recognition and generation when we write and reason about code.

Then again, it's hard to argue with results. I'm sure lots of pre-neural-network voice recognition researchers were in love with the elegance of their hidden Markov models.

Edit: Also, everyone should go try the FlashFill feature in Microsoft Excel. As far as I know, it's the only example of program synthesis shipped in a consumer-facing production system, and it works shockingly well.
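
For readers unfamiliar with it, the core idea is programming by example. A toy sketch (this is not Microsoft's actual algorithm, just the flavor): enumerate a small DSL of string transforms and keep one consistent with the user's examples.

  # Toy programming-by-example in the spirit of FlashFill. Not the real
  # algorithm; it enumerates a tiny DSL of string transforms and returns
  # the first one consistent with all input/output examples.
  DSL = {
      "upper": str.upper,
      "lower": str.lower,
      "first_word": lambda s: s.split()[0],
      "initials": lambda s: "".join(w[0] for w in s.split()),
  }

  def synthesize(examples):
      for name, fn in DSL.items():
          if all(fn(inp) == out for inp, out in examples):
              return name, fn
      return None

  # Two examples pin down the intended transform here.
  name, fn = synthesize([("Jane Doe", "JD"), ("Ada Lovelace", "AL")])
  print(name, fn("Alan Turing"))  # -> initials AT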

IdiocyInAction · 5 years ago
> Fine tuning a vanilla language model on a giant corpus of code feels like a dead end for the field, long-term. It seems obvious to me that humans are doing something more than just statistical pattern recognition and generation when we write and reason about code.

Yeah, this is the main reason why I would be interested in more examples. But if this thing was trained on all of GitHub, I could imagine it coming up with decent-looking code for a lot of examples; a beefy, smarter Google with some rudimentary contextual understanding, if you will. Still, the presence of any mistakes is a no-go, and I'd be really interested in how it reacts to more realistic, specific requirements.

But yeah, I'd figure a model for code generation would have to have some kind of knowledge of syntax and semantics, rather than doing pure statistical pattern matching, to be of any real use. It would have not only to generate, but also to debug its code (I wonder whether you could do that purely with statistical pattern recognition). I might be wrong, of course, but I would be surprised if that were enough to write complex code.

YeGoblynQueenne · 5 years ago
>> Edit: Also, everyone should go try the FlashFill feature in Microsoft excel. As far as I know, it's the only example of program synthesis shipped in a consumer facing production system, and it works shockingly well.

And it's not a giant language model trained on a gigantic dataset. Rather, if memory serves, it's a bunch of task-specific DSLs and rules, all hand-written from scratch.

MauranKilom · 5 years ago
I am also hedging my hopes of this working in "more realistic" scenarios. It does produce code that looks natural to us, but I expect it to show clear "seams" where its understanding of something isn't deep enough.

But maybe this is just a question of how much compute (and network size/"depth") you invest. On a certain level we're also just some recurrent LSTM :)

bo1024 · 5 years ago
Ha. You hit the nail on the head. There is no rigorous way to measure AI-generated anything (to my knowledge). So every demo is "ooh, look at this" and actual performance is not scientifically evaluated, because we don't know how. This includes images, text, etc.
parksy · 5 years ago
I have thought about this before, but I can see that logical errors are introduced which must be manually tested and reviewed anyway. So what if a more reliable approach could be achieved by training these models on test cases alongside passing code?

This way, developers just write unit tests or functional tests, and the AI generates code and retrains itself until the code passes all tests. This could happen silently in the background as the developer defines the tests.
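
A minimal sketch of that generate-until-green loop (purely illustrative; `generate_candidate` stands in for the hypothetical code model, and the test suite is the only specification):

  # Illustrative generate-and-test loop: keep sampling candidate programs
  # from a stand-in "model" until the developer-written tests pass.
  import math
  import random

  def generate_candidate(rng):
      # Stand-in for the code model: it just guesses a discount factor.
      k = rng.choice([0.5, 0.8, 1.0, 1.2])
      return lambda price: price * k

  def tests_pass(fn):
      # The developer-written test: a 20% discount on 100.0 gives 80.0.
      return math.isclose(fn(100.0), 80.0)

  def search(attempts=100, seed=0):
      rng = random.Random(seed)
      for _ in range(attempts):
          candidate = generate_candidate(rng)
          if tests_pass(candidate):
              return candidate
      return None

  fn = search()
  assert fn is not None and math.isclose(fn(50.0), 40.0)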

A number of natural-language test frameworks exist; Behat, for example, lets you define tests such as:

  Feature: Multiple site support

  Background:
    Given a global administrator named "Greg"
    And a blog named "Greg's anti-tax rants"
    And a customer named "Wilson"
    And a blog named "Expensive Therapy" owned by "Wilson"

  Scenario: Wilson posts to his own blog
    Given I am logged in as Wilson
    When I try to post to "Expensive Therapy"
    Then I should see "Your article was published."

  Scenario: Greg posts to a client's blog
    Given I am logged in as Greg
    When I try to post to "Expensive Therapy"
    Then I should see "Your article was published."

It could still fit the dream of describing to a computer what kind of program you want and having it figure out the plumbing.

Anyway, interesting work. Very interesting. I remember a few colleagues laughing at me no more than five years ago when I suggested that AI would eventually write code. And here it is, in an early version, flawed surely, but only set to improve.

Edit to add: This subject, while insanely interesting to me, is well out of my wheelhouse. I'm guessing there's possibly semantic structure to the above that the type of model being used in the demo can't deal with? Like, this one use case has to co-exist in an entire ecosystem of dependencies and related entities... Could the model cope with that, or is it just calculating the likelihood of the next character like other models I've seen, but with insane accuracy when it comes to code?

BaronSamedi · 5 years ago
Instead of Test Driven Development, Test Only Development? I like that idea. This reminds me of an article I read a while ago on co-evolutionary training in genetic programming: one algorithm evolving to do something, with another evolving to break it.
parksy · 5 years ago
Yeah that's a good way of putting it. Also has a catchy name, "TOD".

Ultimately, we don't care what the code looks like; if it passes all tests, then it "works". You probably don't even need to generate the code in a high-level language if people aren't ever going to read it.

You'd probably need tests designed to ensure the code executes quickly enough, and automatically generated edge-case test data, so you don't end up with a blog where you can only post articles with the titles in the exact test data, heh.

The future seems interesting for us developer types anyway. If a product designer could express their requirements in plain language, developers would only really need to be around for the cases where the models failed and more training data was needed to improve them.

Voloskaya · 5 years ago
I'm a bit confused: is this built by OpenAI or Microsoft? Microsoft released the paper "IntelliCode Compose: Code Generation Using Transformer" [1] four days ago, and there is no attribution to anyone from OpenAI in it.

Are those two entirely separate and yet strikingly similar initiatives?

[1]: https://arxiv.org/abs/2005.08025v1

p1esk · 5 years ago
> IntelliCode Compose is built around a multi-layer generative pretrained transformer model for code (GPT-C), which is a variant of the GPT-2

GPT-2 is built by OpenAI.

Voloskaya · 5 years ago
I am aware of this; I am referring to the video, where Sam Altman (CEO of OpenAI) presents the demo and says "we have built", while Kevin Scott (CTO of MSFT) says it's the first time he has seen it. So this is clearly marketed as OpenAI's work, not merely as something based on their model.
grensley · 5 years ago
Wow, this has the potential to be a total game-changer. You have to be really observant about the bugs, though; I would have totally missed the one with the price discount without executing the code.
netsec_burn · 5 years ago
With the barrier to entry for programming lowered even further, I wonder if we'll see more bugs (like the price-discount one) as a result.
colordrops · 5 years ago
Similar problem to automated driving: as long as it's better than most humans, occasional bugs will be OK. Virtually no software is bug-free.

It's a much more difficult problem than automated driving, though: for software, the space of user intents is orders of magnitude larger. It's the job of the model to determine the intent of the "programmer". Perhaps we could meet the model halfway and come up with a heavily structured natural language to communicate intent.

bufferoverflow · 5 years ago
You still need a programmer to find the bugs. I think it's actually harder to spot and fix a bug than to write a simple method that contains one.
joshuak · 5 years ago
I notice that the bug was in the user's failure to communicate the intent of the scalar. Presumably, with regular use, users would learn to be clearer and/or anticipate the likely fixes to ambiguous labels.

Also, since it would be used to build tests as well, I'd expect such misunderstandings to be pretty obvious. I would be willing to bet you'd see a net reduction in bugs, and a substantial reduction in typo-related bugs.

But if by "lowered barrier of entry" you mean that the population of programmers would be less competent: yes, bugs in the design might increase. However, being able to get to the point of evaluating a design more quickly is a great way to learn better design.

swalsh · 5 years ago
These are just baby steps, but holy shit is that impressive. It kind of feels like working with offshore devs, but it's in real time.
nnq · 5 years ago
...that's mildly insulting
LAMike · 5 years ago
Only if you assume which shore he's referring to.
29athrowaway · 5 years ago
I've worked with developers from all around the globe. While it's true that some cannot even write FizzBuzz, others are extremely brilliant individuals with an excellent work ethic.