Greg Brockman & Sam Altman: "Last night, Sam got a text from Ilya asking to talk at noon Friday. Sam joined a Google Meet and the whole board, except Greg, was there. Ilya told Sam he was being fired and that the news was going out very soon.
- At 12:19pm, Greg got a text from Ilya asking for a quick call. At 12:23pm, Ilya sent a Google Meet link. Greg was told that he was being removed from the board (but was vital to the company and would retain his role) and that Sam had been fired. Around the same time, OpenAI published a blog post.
- As far as we know, the management team was made aware of this shortly after, other than Mira who found out the night prior."
Altman has always been a bit sly; I wouldn't be surprised if he has made some backroom deals that don't sit well with the board. But there is a fine line between doing the right thing and being unable to do anything at all. Sometimes it is better to have the moderately bad guy who can at least bring progress than a virtuous leader who can't win a fight.
I guess only time will tell. Right now though, OAI is not looking good, and depending on how Microsoft was involved in all of this, someone's seat might get a bit shaky.
I want to hope this means the OpenAI non-profit is looking for more ways to give code away and to democratize and share its wins, rather than increasingly commercialize and sell a product.
I'm pretty afraid, though, that this might instead mean cutting off and restricting access to this technology, under the pretense of safeguarding humanity from its use.
There are court-intrigue discussions aplenty, but I want to know what the intent is here and what this signals as coming next.
I think you're right that this move by Ilya is to refocus on the non-profit mission, so less urgency to commercialize and sell products, and instead more focus on core research.
Based on Ilya's statements, it doesn't seem like safety was the main motivation for the decision, so maybe there's hope that they'll be more open. But the statements don't really point either way, so it's possible they'll stay tight-lipped regardless.
Everyone from Greg to Sam to Ilya keeps hanging on "AGI for the benefit of humanity". According to OpenAI's constitution, AGI is explicitly carved out of all commercial and IP licensing agreements, including the ones with Microsoft. Nuclear.
Well, apparently the board decides what counts as AGI. But three of those board members (everyone now left except Ilya) don't even work at OpenAI and are only privy to what the rest share.
Yesterday, this is what Altman said, "On a personal note, like four times now in the history of OpenAI, the most recent time was just in the last couple of weeks, I’ve gotten to be in the room when we pushed the veil of ignorance back"
He goes on to say, "By next year, the model capabilities will take such a leap forward that no one would have expected".
Now keep in mind: while we can all speculate on just how much better the next iteration will be, the idea that they could be sitting on something noticeably better is not far-fetched at all. OpenAI sat on GPT-4 for 8 months before announcing it to the public.
He says they can push language models much farther, but that more breakthroughs are required. But here's where it gets weird. He immediately starts talking about superintelligence and "discovering new physics" as the bar. He says, "If it can't discover new physics, I don't think it's a super intelligence". Nobody asked you about this, Sam...
In Ilya's view, they built AGI internally, and Altman wanted to release and monetize it early, underselling it to the rest of the board as being far away from AGI. Ilya considers it AGI, or something extremely close to it, and deserving of far more caution, so he set out to convince the board that Altman was underselling the capabilities of their newer models and risking a premature release of AGI. If true, Altman could be seen as having lied about something foundational to their mission (which is to safely and responsibly release AGI into the world), and the board fired him for it.
Characterizing Dev Day as "too far" makes more sense in this scenario. Ultimately, the only thing keeping SOTA LLM agents from being particularly dangerous is their limited competence. If you suddenly bumped that competence up while laying the groundwork for exactly that kind of danger...
Anytime I hear someone even hypothesizing with a tin foil hat that some company has developed AGI today in 2023, I hear it the same way I hear someone speculating about UFOs. It's so absurd that all I can do is shake my head and look around the room hoping to make eye contact with someone sane who is wearing the same expression of raised eyebrows that I am.
I could write an essay on why the existence of AGI in 2023 is a comical prospect, but it just seems to be a given to me... It isn't worth the effort because if someone can't see that for themselves already, no amount of explanation from me is going to change their mind.
But in one sentence, try asking ChatGPT to reverse a string of 20+ digits, like 473936482738338373926. It can't do it. It's a super trivial task, but it can't do it. Because it doesn't really understand anything. It's just an incredibly advanced Markov chain. It isn't a mind. It can't reason. They are no closer to AGI than we were 10 years ago. ChatGPT's text generation capabilities have fooled people into thinking that OpenAI has made substantial progress on creating a reasoning consciousness. But it hasn't. It hasn't at all. And it's so easy to realize that if you interact with the thing.
>But in one sentence, try asking ChatGPT to reverse a string of 20+ digits, like 473936482738338373926.
LLMs don't see numbers or letters, but tokens. Letter- and digit-level manipulation is intuitively a hard task for them. Unsurprisingly, removing that handicap resulted in much better arithmetic abilities. https://arxiv.org/abs/2310.02989
It's pretty telling, in my opinion, that this is the kind of "proof" people bank hard on. For something that so obviously "does not understand", you'd think it would be easy to come up with a task that it fails but that a chunk of humans wouldn't also fail.
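To make the tokenization point concrete, here is a quick way to look at how that digit string actually reaches the model. This is just an illustrative sketch (assuming the tiktoken package is installed), not something from the linked paper:

    import tiktoken

    # cl100k_base is the tokenizer used by the GPT-3.5/GPT-4 chat models.
    enc = tiktoken.get_encoding("cl100k_base")
    digits = "473936482738338373926"

    token_ids = enc.encode(digits)
    pieces = [enc.decode([t]) for t in token_ids]
    print(pieces)
    # The 21 characters arrive as a handful of multi-digit chunks rather than
    # one digit at a time, so "reverse the digits" means rearranging units the
    # model never sees individually.

Seen that way, digit reversal is less a test of understanding and more a test of how the input was chopped up.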
> It isn't worth the effort because if people can't see it for themselves already, no amount of explanation from me is going to change their mind.
That's convenient. I mean, if you were to produce such an essay I think it would actually be extremely high value and a lot of people would enjoy reading it. Hell, a decent HN post would be nice.
But if you're so unable to express the idea, I frankly question whether you really have such a strong grasp on it.
The line for AGI used to be the Turing Test. But we blew past that with current models without even blinking and came up with new requirements because we weren’t ready to call something AGI yet.
> try asking ChatGPT to reverse a string of 20+ digits, like 473936482738338373926. It can't do it
ChatGPT (4) can accomplish this task easily. Regardless, I'm willing to bet there are at least some humans that can't do this task. And basically all intelligent non-human animals can't either.
> And it's so easy to realize that if you interact with the thing.
I don't think ChatGPT is AGI, but I think the vast majority of people who have interacted with ChatGPT think that it brings us closer to AGI than what we had 10 or 20 years ago.
I have the exact same experience and I'm not sure what to make of it. Even repeating the instructions doesn't work. There is clearly a lack of basic reasoning ability, and it can't comprehend tasks that a five-year-old would have zero issues with.
I got it to make a logo for my hypothetical startup yesterday, but it put these weird coloured circles beneath it, so I asked it to remove those. It generated a whole new logo. No matter how many times I tried to get it to give me the original logo with the changes I wanted, it just kept generating new ones, sometimes with spelling mistakes in the fake company name... That was not intelligence.
I don't know about AGI, but seeing some of the new tools that mix vision with GPT-4 is a little humbling. Like the makeitreal.tldraw.com tool, where you can draw a diagram of a game (like Breakout) and it will code up a working game for you.
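For anyone curious what these vision-plus-GPT-4 tools boil down to, here is a rough sketch of the pattern using the OpenAI Python client (v1) and the gpt-4-vision-preview model available at the time. The prompt, file name, and parameters are hypothetical; this is not how tldraw actually implements it:

    import base64
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    # A hand-drawn sketch of the game, exported as a PNG (hypothetical file name).
    with open("breakout_sketch.png", "rb") as f:
        image_b64 = base64.b64encode(f.read()).decode()

    response = client.chat.completions.create(
        model="gpt-4-vision-preview",
        max_tokens=3000,
        messages=[{
            "role": "user",
            "content": [
                {"type": "text",
                 "text": "Turn this hand-drawn game sketch into a single-file "
                         "HTML/JS implementation. Return only the code."},
                {"type": "image_url",
                 "image_url": {"url": f"data:image/png;base64,{image_b64}"}},
            ],
        }],
    )
    print(response.choices[0].message.content)

Tools in this style are, at their core, a prompt like that plus a canvas that renders whatever HTML comes back.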
"Reverse this sequence of numbers: 473936482738338373926"
"The reversed sequence of the numbers 473936482738338373926 is 629373833837284639374."
It manages this by using Python. Even when I told it to write a JS function internally to do it and only share the answer, it overruled that. When I asked it to repeat the sequence back in the same order, but starting from the last digit and working backwards, it failed a little bit. Still, it's impressive that the language model can pivot to the right "model" or "logic", i.e. decide that this is a job for code.
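For reference, once it decides to reach for code the task itself is trivial; a minimal sketch of what the Python route presumably boils down to:

    # Reversing a string is a one-liner in Python, which is exactly why handing
    # the task to code sidesteps the tokenization handicap entirely.
    s = "473936482738338373926"
    print(s[::-1])  # prints 629373833837284639374

The interesting part is the routing decision, not the code.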
> But in one sentence, try asking ChatGPT to reverse a string of 20+ digits, like 473936482738338373926. It can't do it. It's a super trivial task, but it can't do it. Because it doesn't really understand anything. It's just an incredibly advanced Markov chain.
I agree with this statement, but why don't I see it described this way more?
It really feels like transformers are (large) parameterized Markov chains, but I've never seen anyone describe them this way. Is it just not a good approximation, or does it hide too much of the technical workings to be accurate?
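For what it's worth, the analogy can be made literal: autoregressive sampling is a Markov chain whose state is the (bounded) context window, with the transformer supplying the transition probabilities. A toy sketch with a made-up stand-in for the model, just to show the shape of the process, not an actual transformer:

    import random

    VOCAB = ["the", "cat", "sat", "<eos>"]

    def next_token_probs(window):
        # Stand-in for the model: any function that maps the current window to
        # a distribution over the vocabulary keeps the process Markovian.
        rng = random.Random(" ".join(window))  # depends on the state only
        weights = [rng.random() + 0.1 for _ in VOCAB]
        total = sum(weights)
        return [w / total for w in weights]

    def step(tokens, window_size=8):
        # One transition: sample from P(next | last window_size tokens), append.
        window = tokens[-window_size:]
        nxt = random.choices(VOCAB, weights=next_token_probs(window))[0]
        return tokens + [nxt]

    tokens = ["the"]
    while tokens[-1] != "<eos>" and len(tokens) < 20:
        tokens = step(tokens)
    print(" ".join(tokens))

One likely reason the framing isn't used much: the state space (every possible window) is astronomically large, so the transition matrix is never written down anywhere; it only exists implicitly in the weights.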
I am pretty positive on AI and LLMs and all this stuff. But even so, I really doubt OAI could have made something so much better than GPT that it would be worth sacking its CEO in such a manner.
Sam has always been a salesman, that is true, and the board knows it. It would take much more than just a disagreement on the value and urgency of a deal to get rid of him. I think it has to be active sabotage, or some kind of business maneuver that basically kills the company as it exists right now, done without telling the board about it.
It does not mean that it is objectively many times better than GPT. It can be a disagreement of interpretation, with Ilya and co. strongly believing that it is not safe to release while Sam thinks everything is fine. The capabilities themselves are not very relevant.
That's very weird to read, because I noticed similar sentiments about "new physics" on anon AI Twitter around a month and a half ago as well. I thought it was just weird speculation at the time, but now hearing it from a completely different source suddenly gives it some credibility.
All of the above with tinfoil hat on, of course. Huge if true, but still highly improbable.
Hats on to you. The AGI was compellingly persuasive that Sam must be fired immediately.
They didn't fire him for that reason, but because it would be dangerously persuasive to release without safeguards - which is what Sam planned to do without telling the board.
You are reading too much into this, and falling victim to Sam Altman's marketing strategy of FUD, FOMO, and hype - tricks adopted from the crypto underworld.
What happened instead is that Altman and his followers took over a genuinely open initiative for building AI.
In doing so he thought he could simply monetise models built upon content that they had no right to monetise, since their so-called AI doesn't actually learn; it depends on data - the more the better.
Well, it turns out that some sane people at OpenAI decided to end the pyramid scheme of data - funding - data and return to core values.
Or at least that's my hope, as that is the only path forward to building AI, a goal they haven't reached yet.
Rumors that they had a breakthrough that Sam wanted to deploy but Ilya didn’t.
Makes me wonder what sort of capability they actually have in-house but haven't made available to us plebs.
And if they have that in-house, how reasonable is it to assume that the US government (or perhaps even other state governments) also have access to it?
Based on Ilya's comments [1] it sounds like it was more about a disagreement around the original non-profit mission, and making sure that AI benefited all of humanity.
Perhaps Ilya felt Sam was focusing too much on profit and power with his recent world tour and then Dev Day? Regardless, it's certainly rare for one of the core scientists to maintain control over their creation, rather than the other way around. Typically the VC business guy would be pushing the scientist out.
It would be weird for this sort of disagreement to reach a fever pitch that would see the CEO sacked without some much better model in the background, though.
Like, Dev Day was characterized as "too far". How? How is that interfering with the mission to benefit all humanity? It's all very weird.
Given this sounds like an alignment/strategy issue... OpenAI might find themselves in a big old defamation lawsuit. The press release clearly reads: "he did something really bad but we can't tell you exactly what". People have sued for less...
Source for the Greg Brockman account quoted above: https://twitter.com/gdb/status/1725736242137182594
Ilya Sutskever "at the center" of Altman firing? - https://news.ycombinator.com/item?id=38314299 - Nov 2023 (252 comments)
(I mean there are a lot of related ongoing threads but it's all relative)
Source for the Altman quotes above: https://www.youtube.com/live/ZFFvqRemDv8?si=yUnLvk1gHNocxUVu
2 days ago, Sam is asked about what's left for AGI. https://m.youtube.com/watch?v=NjpNG0CJRMM
On the No Priors podcast 2 weeks ago, Ilya says Transformers can obviously get us to AGI. https://twitter.com/burny_tech/status/1725578088392573038
Did you use ChatGPT 3.5 once a year ago or something? The models change often.
Edit: most people would struggle to reverse a 20+ digit number if it was listed off to them. This doesn’t make them unintelligent.
https://chat.openai.com/share/79a7ab39-bd3c-42e8-b4bb-452ded...
> The reversed string of '473936482738338373926' is '629373833837284639374'.
So now do you think it understands something?
"Just an incredibly advanced Markov chain" doesn't mean anything. Humans writing text are also incredibly advanced Markov chains.
"The government isn't listening" - except Snowden dumped documents showing 10+ systems designed for mass intelligence collection?
https://www.noemamag.com/artificial-general-intelligence-is-...
The reason ChatGPT can't reverse numbers is...current tokenization strategies. It's a known shortcoming.
"The reversed sequence of the numbers 473936482738338373926 is 629373833837284639374."
It works like this by using Python. Even when I told it to write a JS function internally to do it and only share the answer, it overruled that. When I asked it to repeat the sequence back in the same order, but starting from the last digit and working backwards, it failed a little bit. It seems like using the language model to pivot to the correct "model" or "logic" to decide to use code is impressive.
New physics is neat, but there are definitions of world-changing capability that wouldn't meet that requirement.
Similar hat: https://twitter.com/8teAPi/status/1725724907722752008 / https://archive.is/bvLVQ
Edit: I was agreeing with the parent commenter, not being sarcastic towards them.
[1] https://nitter.net/GaryMarcus/status/1725707548106580255