Readit News
sama commented on Introducing ChatGPT and Whisper APIs   openai.com/blog/introduci... · Posted by u/minimaxir
minimaxir · 3 years ago
> It is priced at $0.002 per 1k tokens, which is 10x cheaper than our existing GPT-3.5 models.

This is a massive, massive deal. For context, the reason GPT-3 apps took off in the months before ChatGPT went viral is that a) text-davinci-003 was released and was a significant performance increase, and b) the cost was cut from $0.06/1k tokens to $0.02/1k tokens, which made consumer applications feasible without a large upfront cost.

A much better model at 1/10th the cost warps the economics completely, to the point that it may be better than in-house finetuned LLMs.
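As a rough illustration of the price points mentioned above (the dollar figures are from the comment; the example token count is hypothetical):

```python
# Per-1k-token prices cited in the comment, in dollars.
PRICES_PER_1K = {
    "davinci (original)": 0.06,
    "text-davinci-003": 0.02,
    "gpt-3.5-turbo (ChatGPT API)": 0.002,
}

def request_cost(tokens: int, price_per_1k: float) -> float:
    """Dollar cost of a single request of `tokens` tokens."""
    return tokens / 1000 * price_per_1k

# Example: what a single 500-token request costs at each tier.
for name, price in PRICES_PER_1K.items():
    print(f"{name}: ${request_cost(500, price):.4f}")
```

At these prices a consumer app serving a million 500-token requests goes from $30,000 to $1,000, which is the economic shift the comment is pointing at.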

I have no idea how OpenAI can make money on this. This has to be a loss-leader to lock out competitors before they even get off the ground.

sama · 3 years ago
we make a little money on it!
sama commented on GitHub Copilot   copilot.github.com/... · Posted by u/todsacerdoti
sama · 5 years ago
AI FTW!

(dang please don't ban me for a low-quality comment :) i couldn't resist but will not make it a habit!)

sama commented on PG and Jessica   blog.samaltman.com/pg-and... · Posted by u/janvdberg
jdoliner · 5 years ago
I can't help but wonder if this post was written as a response to Kanye's tweets about starting a Y Combinator for musicians [0], which sama offered to help with. I would love to see another post about whether Kanye and Kim could function like PG and Jessica for a music-based YC (which Twitter has already dubbed Ye Combinator).

[0] https://twitter.com/kanyewest/status/1305977929180966913

sama · 5 years ago
That was the specific inspiration, yes.
sama commented on Tempering Expectations for GPT-3 and OpenAI’s API   minimaxir.com/2020/07/gpt... · Posted by u/vortex_ape
Barrin92 · 6 years ago
I'm not really sure I understand the hype anyway. All GPT-3 does is generate text from human input; it's not actually intelligent at all, as the person from the Turing test thread pointed out.

Sure, GPT-3 can respond with factoids, but it doesn't actually understand anything. If I have a chat with the model and ask it "what did we talk about thirty minutes ago?", it's completely clueless. A few weeks ago Computerphile put out a video of GPT-3 doing poetry that was allegedly identified as computer-generated only half of the time, but if you actually read the poems they're just lyrical-sounding word salad, as the model does not at all understand what it's talking about.

Honestly, the only thing I expect from this is a barrage of spam or fake news that uncritical readers can't distinguish from human output.

sama · 6 years ago
"Understand" is up for debate, but it's clearly learning something. The fact that it's possible to learn general structure as well as we can from unlabeled data does seem like a significant development.
sama commented on Tempering Expectations for GPT-3 and OpenAI’s API   minimaxir.com/2020/07/gpt... · Posted by u/vortex_ape
Bx6667 · 6 years ago
I am totally confused by people not being impressed with GPT-3. If you had asked 100 people in the tech industry in 2015 whether these results would be possible in 2020, 95 would have said no, not a chance in hell. Nobody saw this coming. And yet nobody cares because it isn't full-blown AGI. That's not the point. The point is that we are getting unintuitive and unexpected results. And further, the substrate from which AGI could spring may already exist. We are digging deeper and deeper into "algorithm space" and we keep hitting stuff that we thought was impossible; it's going to keep happening, and it's going to lead very quickly to things that are too important and dangerous to dismiss. People who say AGI is a hundred years away also said Go was 50 years away, and they certainly didn't predict anything even close to what we are seeing now, so why is everyone believing them?
sama · 6 years ago
I think people should be impressed, but also recognize the distance from here to AGI. It clearly has some capabilities that are quite surprising, and is also clearly missing something fundamental relative to human understanding.

It is difficult to define AGI, and it is difficult to say what the remaining puzzle pieces are, so it's difficult to predict when it will happen. But I think the responsible thing is to treat near-term AGI as a real possibility and prepare for it (this is the OpenAI charter we wrote two years ago: https://openai.com/charter/).

I do think it's clear that, in the coming years, we are going to have very powerful tools that are not AGI but that still change a lot of things. And that's great--we've been waiting long enough for a new tech platform.

sama commented on Demo of an OpenAI language model applied to code generation [video]   twitter.com/i/broadcasts/... · Posted by u/cjlovett
neil_s · 6 years ago
I had trouble accessing the relevant video snippet even after going through the conference registration, so here's a summary.

You can view the demo at https://twitter.com/i/broadcasts/1OyKAYWPRrWKb starting around 29:00.

It's Sam Altman demoing a massive OpenAI model that was trained on GitHub OSS repos using a Microsoft supercomputer. It's not IntelliCode, but the host says they're working on compressing the models to a size that would be feasible for IntelliCode. The code model uses English-language comments, or simply function signatures, to generate entire functions. Pretty cool.

sama · 6 years ago
Thanks, but it's Sam McCandlish doing the demo (and the project).

u/sama

Karma: 23,059 · Cake day: October 9, 2006