nothing to do with dishonesty. That’s just the official reason.
———-
I haven’t heard anyone commenting about this, but consider the two main figures here: this MUST come down to a disagreement between Altman and Sutskever.
Also interesting that Sutskever tweeted this a month and a half ago: https://twitter.com/ilyasut/status/1707752576077176907
The press release about candid talk with the board… It’s probably just a cover-up for some deep-seated philosophical disagreement. They found a reason to fire him that doesn’t necessarily reflect why they are firing him. He and Ilya no longer saw eye to eye, and it reached its fever pitch with GPT-4 Turbo.
Ultimately, it’s been surmised that Sutskever had all the leverage because of his technical ability. Sam being the consummate businessperson, they probably got into some final disagreement and Sutskever reached his tipping point and decided to use said leverage.
I’ve been in tech too long and have seen this play out. Don’t piss off an irreplaceable engineer or they’ll fire you. Not taking any sides here.
PS most engineers, like myself, are replaceable. Ilya is probably not.
This doesn't make any sense. If it was a disagreement, they could have gone the "quiet" route and just made no substantive comment in the press release. But they made accusations that are specific enough to be legally enforceable if they're wrong, and in an official statement no less.
If their case isn't 100% rock solid, they just handed Sam a lawsuit that he's virtually guaranteed to win.
I agree. None of this adds up. The only thing that makes any sense, given OpenAI has any sense and self-interest at all, is that the reason they let Altman go may have been even bigger than what they were saying, and that there was some lack of candor in his communications with the board. Otherwise, you don't make an announcement like that 30 minutes before markets close on a Friday.
Even if their case is 100% solid, they wouldn't have said it publicly. Unless they hated Sam for doing something, so it's not just the direction of the company or something like that. It's something bigger.
> "More scoopage: sources tell me chief scientist Ilya Sutskever was at the center of this. Increasing tensions with Sam Altman and Greg Brockman over role and influence and he got the board on his side."
> "The developer day and how the store was introduced was in inflection moment of Altman pushing too far, too fast. My bet: He’ll have a new company up by Monday."
I think you've got it completely backward. A board doesn't do that unless they absolutely have to.
Think back in history. For example, consider the absolutely massive issues at Uber that had to go public before the board did anything. There is no way this is over some disagreement; there has to be serious financial, ethical, or social wrongdoing for the board to rush it like this and put a company worth tens of billions of dollars at risk.
The board, like any, is a small group of people, and in this case one divided into two sides defined by conflicting ideological perspectives. I imagine these board members have much broader and longer-term perspectives and considerations factoring into their decision-making than those of the vast majority of other companies/boards. Generalizing doesn’t seem particularly helpful.
If it was really just about seeing eye to eye, why would the press release say anything about Sam not being "consistently candid in his communications"? That seems pretty unnecessary if it were fundamentally a philosophical disagreement. Why not instead say something about differences in forward-looking vision?
Which they can do in a super polite "wish him all the best" way or an "it was necessary to remove Sam's vision to save the world from unfriendly AI" way, as they see fit. Unlike an accusation of lying, this isn't something you can be sued for, and provided you're clear about what your boardroom battle-winning vision is, it probably spooks stakeholders less than an insinuation that Sam might have been covering up something really bad with no further context.
Ah, the old myth about the irreplaceable engineer and the dumb suit. Ask Wozniak about that. I don't think he believes Apple would be where it is without Steve Jobs.
The first Steve was totally irreplaceable; the second Steve was arguably the difficult one to replace. Without the first firing, the second Steve would never have existed. But then, once Apple got the current ball rolling, he was replaced just fine by Tim Cook.
If he is, at this point, so irreplaceable that he has enough leverage to strong-arm the board into firing the CEO over a disagreement, then that would for sure be the biggest problem OpenAI has.
Or there was a disagreement about whether the dishonesty was over the line? Dishonesty happens all the time and people have different perspectives on what constitutes being dishonest and on whether a specific action was dishonest or not. The existence of a disagreement does not mean that it has nothing to do with dishonesty.
I think that if there were a lack of truth to him being less-than-candid with the board, they would have left that part out. You don’t basically say that an employee (particularly a c-suiter with lots of money for lawyers) lied unless you think that you could reasonably defend that statement in court. Otherwise, it’s defamation.
I’m not saying there is lack of truth. I’m saying that’s not the real reason. It could be there’s a scandal to be found, but my guess is the hostility from OpenAI is just preemptive.
There’s really no nice way to tell someone to fuck off from the biggest thing. Ever.
Doesn't justify the hostile language and the urgent, last-minute timing (partners were notified just minutes before the press release). They didn't even wait 30 minutes for the market to close, causing MSFT to drop billions in that time.
A mere direction disagreement would have been handled with "Sam is retiring after 3 months to spend more time with his family. We thank him for all his work", and surely would have been decided months in advance of being announced.
Only feels last minute to those outside. I've seen some of these go down in smaller companies and it's a lot like bankruptcy - slowly, then all at once.
Yeah, this is more abrupt and more direct than any CEO firing I've ever seen. For comparison, when Travis Kalanick was ousted from Uber in 2017, he "resigned" and then was able to stay on the board until 2019. When Equifax had their data breach, it took 4 days for the CEO to resign and then the board retroactively changed it to "fired for cause". With the Volkswagen emissions scandal, it took 20 days for the CEO to resign (again, not fired) despite the threat of criminal proceedings.
You don't fire your CEO and call him a liar if you have any choice about it. That just invites a lawsuit, bad blood, and a poor reputation in the very small circles of corporate executives and board members.
That makes me think that Sam did something on OpenAI's behalf that could be construed as criminal, and the board had to fire him immediately and disavow all knowledge ("not completely candid") so that they don't bear any legal liability. It also fits with the new CEO being the person previously in charge of safety, governance, ethics, etc.
That Greg Brockman, Eric Schmidt, et al are defending Altman makes me think that this is in a legal grey area, something new, and it was on behalf of training better models. Something that an ends-justifies-the-means technologist could look at and think "Of course, why aren't we doing that?" while a layperson would be like "I can't believe you did that." It's probably not something mundane like copyright infringement or webscraping or even GDPR/CalOppa violations though - those are civil penalties, and wouldn't make the board panic as strongly as they did.
Great point. It was rude to drop a bombshell during trading hours. That said, the chunk of value Microsoft dropped today may be made back tomorrow, but maybe not: if OpenAI is going to slow down and concentrate on safe/aligned AI then that is not quite as good for Microsoft.
That may be the case, but I have a feeling that it will end up being presented as alignment and ethics versus all-in on AGI, consequences be damned. I'm sure OpenAI has gotten a lot of external pressure to focus more on alignment and ethics, and this coup is signalling that OpenAI will yield to that pressure.
What is confusing here is why Greg would have agreed to the language in the press release (that he would be staying in the company and reporting to the CEO) only to resign an hour later. Surely the press release would not have contained that information without his agreement that he would be staying.
> Why would Greg have agreed to the language in the press release
We have no evidence he agreed or didn't agree to the wording. A quorum of the board met, probably without the chairman, and voted the CEO of the company and the chairman of the board out. The chairman also happens to have a job as the President of the company. The President role reports to the CEO, not the board. Typically, a BOD would not fire a President, the CEO would. The board's statement said the President would continue reporting to the CEO (now a different person) - clarifying that the dismissal as board chairman was separate from his role as a company employee.
Based on the careful wording of the board's statement as well as Greg's tweet, I suspect he wasn't present at the vote nor would he be eligible to vote regarding his own position as chairman. Following this, the remaining board members convened with their newly appointed CEO and drafted a public statement from the company and board.
He didn't. Greg was informed after Sam (I'm assuming the various bits being flung about by Swisher are true; she gets a free pass on things like this), so I think the sequence was: a subset of the board meets, forms quorum, votes to terminate Sam and remove Greg as chair (without telling him). Then they write the PR, and around the same time, let Sam and then Greg know. If OpenAI were a government, this would be called a coup.
Ha! Most people don’t know how slipshod these things go. Succession had it right when people were always fighting over the PR release, trying to change each other’s statements.
> When Altman logged into the meeting, Brockman wrote, the entire OpenAI board was present—except for Brockman. Sutskever informed Altman he was being fired.
> Brockman said that soon after, he had a call with the board, where he was informed that he would be removed from his board position and that Altman had been fired. Then, OpenAI published a blog post sharing the news of Altman’s ouster.
The theory that Altman did something in bad faith would mean it might not be a disagreement, but rather something that forced Sutskever to vote against Sam.
Feels like Gryffindor beheaded Slytherin right before Voldemort could make them his own. Hogwarts will be in turmoil, but that price was unavoidable given the existential threat?
Sam and Ilya have recently made public statements about AGI that appear to highlight a fundamental disagreement between them.
Sam claims LLMs aren't sufficient for AGI (rightfully so).
Ilya claims the transformer architecture, with some modification for efficiency, is actually sufficient for AGI.
Obviously transformers are the core component of LLMs today, and the devil is in the details (a future model may resemble the transformers of today, while also being dynamic in terms of training data/experience), but the jury is still out.
In either case, publicly disagreeing on the future direction of OpenAI may be indicative of deeper problems internally.
That is just a disagreement over technical direction, of which any well-functioning company should always have a healthy amount within its leadership. The press release literally said he was fired because he was lying. I haven't seen anything like that from the board of a big company for a very long time.
Additionally, OpenAI can just put resources towards both approaches in order to settle this dispute. The whole point of research is that you don't know the conclusions ahead of time.
The question has enormous implications for OpenAI because of the specifics of their nonprofit charter. If Altman left out facts to keep the board from deciding they were at the AGI phase of OpenAI, or even to prevent them from doing a fair evaluation, then he absolutely materially misled them and prevented them from doing their jobs.
If it turns out that the ouster was over a difference of opinion re: focusing on open research vs commercial success, then I don't think their current Rube Goldberg corporate structure of a non profit with a for profit subsidiary will survive. They will split up into two separate companies. Once that happens, Microsoft will find someone to sell them a 1.1% ownership stake and then immediately commence a hostile takeover.
No one knows. But I sure would trust the scientist leading the endeavor more than a businessperson who has an interest in saying the opposite to avoid immediate regulations.
>Ilya claims the transformer architecture, with some modification for efficiency, is actually sufficient for AGI.
I thought this guy was supposed to know what he's talking about? There was a paper that shows LLMs cannot generalise [0]. Anybody who's used ChatGPT can see there are imperfections.
[0] https://arxiv.org/abs/2309.12288
Humans don't work this way either. You don't need the LLM to do the logic; you just need the LLM to prepare the information so it can be fed into a logic engine. Just like humans do when they shut down their System 1 brain and go into System 2 slow mode.
I'm in the definitely ready for AGI camp. But it's not going to be a single model that's going to do the AGI magic trick, it's going to be an engineered system consisting of multiple communicating models hooked up using traditional engineering techniques.
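To make that concrete, here is a minimal sketch of the kind of system being described: an LLM used only to turn prose into structured facts, which are then handed to a tiny logic engine for the System-2-style deduction. Everything here is illustrative; the `extract_facts` stub stands in for a real model call.

```python
# Illustrative sketch only: a stubbed "LLM" maps text to facts, and a tiny
# forward-chaining logic engine does the deduction on top of them.

def extract_facts(text: str) -> set[tuple[str, str, str]]:
    """Stand-in for an LLM call that turns prose into (subject, relation, object) triples."""
    # A real system would prompt a model here; the output is hard-coded for the demo.
    return {("socrates", "is_a", "man")}

# One rule: if X is_a man, then X is_a mortal.
RULES = [(("?x", "is_a", "man"), ("?x", "is_a", "mortal"))]

def forward_chain(facts: set[tuple[str, str, str]]) -> set[tuple[str, str, str]]:
    """Apply rules until no new facts are derived (the 'logic engine' component)."""
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for (c_subj, c_rel, c_obj), (_, h_rel, h_obj) in RULES:
            for (s, r, o) in list(derived):
                if r == c_rel and o == c_obj:       # the condition matched this fact
                    new_fact = (s, h_rel, h_obj)    # bind ?x to the fact's subject
                    if new_fact not in derived:
                        derived.add(new_fact)
                        changed = True
    return derived

if __name__ == "__main__":
    facts = extract_facts("Socrates is a man.")
    print(forward_chain(facts))  # includes ('socrates', 'is_a', 'mortal')
```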
> We provide evidence for the Reversal Curse by finetuning GPT-3 and Llama-1 on fictitious statements such as "Uriah Hawthorne is the composer of 'Abyssal Melodies'" and showing that they fail to correctly answer "Who composed 'Abyssal Melodies?'". The Reversal Curse is robust across model sizes and model families and is not alleviated by data augmentation.
This just proves that the LLMs available to them, with the training and augmentation methods they employed, aren't able to generalize. It doesn't prove that future LLMs, or novel training and augmentation techniques, will be unable to generalize.
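For what it's worth, the paper's setup is easy to state in code. A hedged sketch of the two probes (the example statement is the one quoted above; the finetuning and model call are left as a stub, since the exact recipe is in the paper, not here):

```python
# Shape of the Reversal Curse experiment (arXiv:2309.12288), sketched for clarity.
# The finetuning and model call are stubbed; only the probe structure is shown.

FINETUNE_STATEMENT = "Uriah Hawthorne is the composer of 'Abyssal Melodies'."

FORWARD_PROBE = "Who is Uriah Hawthorne?"           # same direction as the training statement
REVERSE_PROBE = "Who composed 'Abyssal Melodies'?"  # reversed direction

def query_finetuned_model(prompt: str) -> str:
    """Stand-in for querying a model finetuned on FINETUNE_STATEMENT."""
    raise NotImplementedError("hook up a real finetuned model here")

# The paper's finding: models finetuned on the forward statement tend to answer
# FORWARD_PROBE correctly but fail REVERSE_PROBE, across model sizes and families.
```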
The LLMs of today are just multidimensional mirrors that contain humanity's knowledge. They don't advance that knowledge, they just regurgitate it, remix it, and expose patterns. We train them. They are very convincing, and show that the Turing test may be flawed.
Given that AGI means reaching "any intellectual task that human beings can perform", we need a system that can go beyond lexical reasoning and actually contribute (on its own) to advancing our total knowledge. Anything less isn't AGI.
Ilya may be right that a super-scaled transformer model (with additional mechanics beyond today's LLMs) will achieve AGI, or he may be wrong.
Therefore something more than an LLM is needed to reach AGI; what that is, we don't yet know!
Did Ilya give a reason why transformers are theoretically sufficient? I've watched him talk in a CS seminar and he's certainly interesting to listen to.
From the interviews with him that I have seen, Sutskever thinks that language modeling is a sufficient pretraining task because there is a great deal of reasoning involved in next-token prediction. The example he used was that, suppose you fed a murder mystery novel to a language model and then prompted it with the phrase "The person who committed the murder was: ". The model would unquestionably need to reason in order to come to the right conclusion, but at the same time it is just predicting the next token.
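Purely to illustrate that thought experiment (this is not something Sutskever published; the novel text, model name, and settings below are placeholders), the probe would look something like this with the OpenAI Python SDK (v1.x):

```python
# Illustration of the "murder mystery" next-token-prediction argument.
# Assumes the openai>=1.0 Python SDK and an OPENAI_API_KEY in the environment;
# `novel_text` is a placeholder for the full text of a mystery novel.
from openai import OpenAI

client = OpenAI()

novel_text = "...the full text of a murder mystery novel goes here..."
prompt = novel_text + "\n\nThe person who committed the murder was:"

resp = client.chat.completions.create(
    model="gpt-4",            # placeholder model name
    messages=[{"role": "user", "content": prompt}],
    max_tokens=10,
)

# Whatever name comes back is, mechanically, just the most likely next tokens,
# but producing a correct answer requires having "followed" the plot.
print(resp.choices[0].message.content)
```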
Can a super smart business-y person educate this engineer on how this even happens?
So, if there's 6 board members and they're looking to "take down" 2... that means those 2 can't really participate, right? Or at the very least, they have to "recuse" themselves on votes regarding them?
Do the 4 members have to organize and communicate "in secret"? Is there any reason 3 members can't hold a vote to oust 1, making it a 3/5 to reach majority, and then from there, just start voting _everyone_ out? Probably stupid questions but I'm curious enough to ask, lol.
The details depend on what's specified in the non-profit's Bylaws and Articles of Incorporation. As a 501(c)3 there are certain requirements and restrictions but other things are left up to what the founding board mandated in the documents which created and govern the corporation.
Typically, these documents contain provisions for how voting, succession, recusal, eligibility, etc are to be handled. Based on my experience on both for-profit and non-profit boards, the outside members of the board probably retained outside legal counsel to advise them. Board members have specific duties they are obligated to fulfill along with serious legal liability if they don't do so adequately and in good faith.
I had the same questions, and have now learnt that non-profit governance is like this, and that is why it is a bad idea for something like OpenAI. In a for-profit, the shareholders can just replace the board.
Asking ChatGPT (until someone else answers), it says that removing a board member usually takes a supermajority, which makes much more sense... but that still seems to imply they need at least 4/6.
Why would Greg have said "after learning today's news" if he took part in the vote? If he decided to quit immediately after the vote then why would the board issue a statement saying he was going to stay on? I don't think he took part, the others probably convened a meeting and cast a unanimous vote, issued the statement and then contacted Greg and Sam. The whole thing seems rushed so that's probably how it would have played out.
I mean, I still wonder though if they really only need 3 ppl fully on board to effectively take the entire company. Vote #1, oust Sam, 3/5 vote YES. Sam is out, now the vote is "Demote Greg", 3/4 vote YES, Greg is demoted and quits. Now, there could be one "dissenter" and it would be easy to vote them out too. Surely there's some protection against that?
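Purely as arithmetic (the real answer lives in OpenAI's bylaws, which we don't have), here's a toy sketch of that cascade scenario under the assumption that the targeted member is recused and a simple majority of the remaining members carries each vote:

```python
# Toy arithmetic for the "cascading ouster" worry above. Assumptions (not OpenAI's
# actual rules): the targeted member is recused, and a simple majority of the
# remaining members carries the motion. Real bylaws add quorum, notice, and
# sometimes supermajority requirements.

def vote_carries(board_size: int, yes_votes: int) -> bool:
    voters = board_size - 1          # targeted member doesn't vote on their own removal
    return yes_votes > voters / 2    # simple majority of those voting

board_size = 6
bloc = 3                             # three members acting together

removed = 0
while board_size > bloc and vote_carries(board_size, bloc):
    board_size -= 1                  # each successful vote shrinks the board by one
    removed += 1

print(f"Under these assumptions, a bloc of {bloc} could remove {removed} members, "
      f"leaving a board of {board_size}.")
# -> Under these assumptions, a bloc of 3 could remove 3 members, leaving a board of 3.
```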
This suggests that Greg Brockman wasn't in the board meeting that made the decision, and only "learned the news" that he was off the board the same way the rest of us did.
You've put "learned the news" in quotes, but what Greg Brockman wrote was "based on today's news".
That could simply mean that he disagreed with the outcome and is expressing that disagreement by quitting.
EDIT: Derp. I was reading the note he wrote to OpenAI staff. The tweet itself says "After learning today's news" -- still ambiguous as to when and where he learned the news.
It's all very ambiguous, but if he had been there for the board meeting where he was removed, I imagine he would have quit then and it would have been in the official announcement. It comes across like he didn't quit until after the announcement had already been made.
> He was chairman of the board, no? surely he was in the meeting?
Since he was removed as Chairman at the same time as Altman was as CEO, presumably he was excluded from that part of the meeting (which may have been the whole meeting) for the same reason as Altman would have been.
Just guessing here, but I think the board can form a quorum without the chair and vote, and as long as they have a majority, I think they can proceed with a press release based on their vote.
"Greg Brockman, co-founder and president of OpenAI, works 60 to 100 hours per week, and spends around 80% of the time coding. Former colleagues have described him as the hardest-working person at OpenAI."
What’s the likelihood this is over a Microsoft acquisition? Purely speculative here, but Sam might have been a roadblock.
Edit: Maybe this is a reasonable explanation: https://news.ycombinator.com/item?id=38312868. The only other thing not considered is that Microsoft really enjoys having its brand on things.
https://twitter.com/karaswisher/status/1725682088639119857
If their case isn't 100% rock solid, they just handed Sam a lawsuit that he's virtually guaranteed to win.
So better be the first to set the narrative.
> "More scoopage: sources tell me chief scientist Ilya Sutskever was at the center of this. Increasing tensions with Sam Altman and Greg Brockman over role and influence and he got the board on his side."
> "The developer day and how the store was introduced was in inflection moment of Altman pushing too far, too fast. My bet: He’ll have a new company up by Monday."
[source: https://twitter.com/karaswisher/status/1725702501435941294]
Sounds like you exactly predicted it.
I don’t like this whole development one bit, actually. He lost his brakes and I’m sure he doesn’t see it this way at all.
> My bet: He’ll have a new company up by Monday.
Which is good!
Think back in history. For example, consider the absolutely massive issues at Uber that had to go public before the board did anything. There is no way this is over some disagreement; there has to be serious financial, ethical, or social wrongdoing for the board to rush it like this and put a company worth tens of billions of dollars at risk.
All this to say that the board is probably unlike the boards of the vast majority of tech companies.
If he is, at this point, so irreplaceable that he has enough leverage to strong-arm the board into firing the CEO over a disagreement, then that would for sure be the biggest problem OpenAI has.
OpenAI: we need clarity on your new direction.
It's not like you can just move to another AI company if you don't like their terms.
There’s really no nice way to tell someone to fuck off from the biggest thing. Ever.
A mere direction disagreement would have been handled with "Sam is retiring after 3 months to spend more time with his family. We thank him for all his work", and surely would have been decided months in advance of being announced.
Only feels last minute to those outside. I've seen some of these go down in smaller companies and it's a lot like bankruptcy - slowly, then all at once.
You don't fire your CEO and call him a liar if you have any choice about it. That just invites a lawsuit, bad blood, and a poor reputation in the very small circles of corporate executives and board members.
That makes me think that Sam did something on OpenAI's behalf that could be construed as criminal, and the board had to fire him immediately and disavow all knowledge ("not completely candid") so that they don't bear any legal liability. It also fits with the new CEO being the person previously in charge of safety, governance, ethics, etc.
That Greg Brockman, Eric Schmidt, et al are defending Altman makes me think that this is in a legal grey area, something new, and it was on behalf of training better models. Something that an ends-justifies-the-means technologist could look at and think "Of course, why aren't we doing that?" while a layperson would be like "I can't believe you did that." It's probably not something mundane like copyright infringement or webscraping or even GDPR/CalOppa violations though - those are civil penalties, and wouldn't make the board panic as strongly as they did.
Ha! Tell me you don't know about markets without telling me! Stock can drop after hours too.
Who knows, maybe they settled a difference of opinion and Altman went ahead with his plans anyway.
We have no evidence he agreed or didn't agree to the wording. A quorum of the board met, probably without the chairman, and voted the CEO of the company and the chairman of the board out. The chairman also happens to have a job as the President of the company. The President role reports to the CEO, not the board. Typically, a BOD would not fire a President, the CEO would. The board's statement said the President would continue reporting to the CEO (now a different person) - clarifying that the dismissal as board chairman was separate from his role as a company employee.
Based on the careful wording of the board's statement as well as Greg's tweet, I suspect he wasn't present at the vote nor would he be eligible to vote regarding his own position as chairman. Following this, the remaining board members convened with their newly appointed CEO and drafted a public statement from the company and board.
> When Altman logged into the meeting, Brockman wrote, the entire OpenAI board was present—except for Brockman. Sutskever informed Altman he was being fired.
> Brockman said that soon after, he had a call with the board, where he was informed that he would be removed from his board position and that Altman had been fired. Then, OpenAI published a blog post sharing the news of Altman’s ouster.
Dang! He left @elonmusk on read. Now that's some ego at play.
And this time around he would have the sympathy of the crowd.
Regardless, this is very detrimental to the OpenAI brand. Ilya might be the genius behind ChatGPT, but he couldn't have done it by himself.
The war between OpenAI and Sam AI is just the beginning
Edit: Ok seems to be a joke account. I guess I’m getting old.
Update from Greg
Sam claims LLMs aren't sufficient for AGI (rightfully so).
Ilya claims the transformer architecture, with some modification for efficiency, is actually sufficient for AGI.
Obviously transformers are the core component of LLMs today, and the devil is in the details (a future model may resemble the transformers of today, while also being dynamic in terms of training data/experience), but the jury is still out.
In either case, publicly disagreeing on the future direction of OpenAI may be indicative of deeper problems internally.
No one knows. But I sure would trust the scientist leading the endeavor more than a businessperson who has an interest in saying the opposite to avoid immediate regulations.
I thought this guy was supposed to know what he's talking about? There was a paper that shows LLMs cannot generalise [0]. Anybody who's used ChatGPT can see there are imperfections.
[0] https://arxiv.org/abs/2309.12288
I'm in the definitely ready for AGI camp. But it's not going to be a single model that's going to do the AGI magic trick, it's going to be an engineered system consisting of multiple communicating models hooked up using traditional engineering techniques.
This just proves that the LLMs available to them, with the training and augmentation methods they employed, aren't able to generalize. It doesn't prove that future LLMs, or novel training and augmentation techniques, will be unable to generalize.
Think about the RLHF component that trains LLMs. It's the training itself that generalises - not the final model that becomes a static component.
How the hell can people be so confident about this? You're describing two smart people reasonably disagreeing about a complicated topic.
Given that AGI means reaching "any intellectual task that human beings can perform", we need a system that can go beyond lexical reasoning and actually contribute (on its own) to advancing our total knowledge. Anything less isn't AGI.
Ilya may be right that a super-scaled transformer model (with additional mechanics beyond today's LLMs) will achieve AGI, or he may be wrong.
Therefore something more than an LLM is needed to reach AGI; what that is, we don't yet know!
So, if there's 6 board members and they're looking to "take down" 2... that means those 2 can't really participate, right? Or at the very least, they have to "recuse" themselves on votes regarding them?
Do the 4 members have to organize and communicate "in secret"? Is there any reason 3 members can't hold a vote to oust 1, making it a 3/5 to reach majority, and then from there, just start voting _everyone_ out? Probably stupid questions but I'm curious enough to ask, lol.
Typically, these documents contain provisions for how voting, succession, recusal, eligibility, etc are to be handled. Based on my experience on both for-profit and non-profit boards, the outside members of the board probably retained outside legal counsel to advise them. Board members have specific duties they are obligated to fulfill along with serious legal liability if they don't do so adequately and in good faith.
I mean, I still wonder though if they really only need 3 ppl fully on board to effectively take the entire company. Vote #1, oust Sam, 3/5 vote YES. Sam is out, now the vote is "Demote Greg", 3/4 vote YES, Greg is demoted and quits. Now, there could be one "dissenter" and it would be easy to vote them out too. Surely there's some protection against that?
There is nothing business-y about this. As a non-profit OpenAI can do whatever they want.
OpenAI isn't a single person, so decisions like firing the CEO have to be made somehow. I'm wondering about how that framework actually works.
This feels like real-life Succession playing out. Every board member is trying to figure out how to optimize their position.
That could simply mean that he disagreed with the outcome and is expressing that disagreement by quitting.
EDIT: Derp. I was reading the note he wrote to OpenAI staff. The tweet itself says "After learning today's news" -- still ambiguous as to when and where he learned the news.
Since he was removed as Chairman at the same time as Altman was as CEO, presumably he was excluded from that part of the meeting (which may have been the whole meeting) for the same reason as Altman would have been.
Probably a similar situation.
https://time.com/collection/time100-ai/6309033/greg-brockman...
Edit: Maybe this is a reasonable explanation: https://news.ycombinator.com/item?id=38312868. The only other thing not considered is that Microsoft really enjoys having its brand on things.