The board could have easily said they removed Sam for generic reasons: "deep misalignment about goals," "fundamental incompatibility," etc. Instead they painted him as the at-fault party ("not consistently candid", "no longer has confidence"). This could mean that he was fired with cause [0], or it could be intended as misdirection. If it's the latter, then it's the board who has been "not consistently candid." Their subsequent silence, as well as their lack of coordination with strategic partners, definitely makes it look like they are the inconsistently candid party.
Ilya expressing regret now has the flavor of "I'm embarrassed that I got caught" -- in this case, at having no plan to handle the fallout of maligning and orchestrating a coup against a charismatic public figure.
[0] https://www.newcomer.co/p/give-openais-board-some-time-the
Yes, totally fair to say they painted it as a for-cause firing, and this seems pretty irresponsible without some misbehavior or new/re-emerging concerns about his past.
> To the Board of Directors at OpenAI,
> OpenAI is the world’s leading AI company. We, the employees of OpenAI, have developed the best models and pushed the field to new frontiers. Our work on AI safety and governance shapes global norms. The products we built are used by millions of people around the world. Until now, the company we work for and cherish has never been in a stronger position.
> The process through which you terminated Sam Altman and removed Greg Brockman from the board has jeopardized all of this work and undermined our mission and company. Your conduct has made it clear you did not have the competence to oversee OpenAI.
> When we all unexpectedly learned of your decision, the leadership team of OpenAI acted swiftly to stabilize the company. They carefully listened to your concerns and tried to cooperate with you on all grounds. Despite many requests for specific facts for your allegations, you have never provided any written evidence. They also increasingly realized you were not capable of carrying out your duties, and were negotiating in bad faith.
> The leadership team suggested that the most stabilizing path forward - the one that would best serve our mission, company, stakeholders, employees and the public - would be for you to resign and put in place a qualified board that could lead the company forward in stability.
> Leadership worked with you around the clock to find a mutually agreeable outcome. Yet within two days of your initial decision, you again replaced interim CEO Mira Murati against the best interests of the company. You also informed the leadership team that allowing the company to be destroyed “would be consistent with the mission.”
> Your actions have made it obvious that you are incapable of overseeing OpenAI. We are unable to work for or with people that lack competence, judgement and care for our mission and employees. We, the undersigned, may choose to resign from OpenAI and join the newly announced Microsoft subsidiary run by Sam Altman and Greg Brockman. Microsoft has assured us that there are positions for all OpenAI employees at this new subsidiary should we choose to join. We will take this step imminently, unless all current board members resign, and the board appoints two new lead independent directors, such as Bret Taylor and Will Hurd, and reinstates Sam Altman and Greg Brockman.
> Why would the board say that OpenAI as a company getting destroyed would be consistent with the goals?
A few things stand out to me, including:
>> You also informed the leadership team that allowing the company to be destroyed “would be consistent with the mission.”
Have they really achieved AGI? Or did they observe something concerning?
I don't know what the risks of AI are, but having a nonprofit investigate solutions to prevent them is a worthwhile pursuit, as for-profit corporations will not do it (as shown by the firing of Timnit Gebru and Margaret Mitchell by Google). If they really believe in that mission, they should develop guardrails technology and open-source it so that companies like Microsoft, Google, Meta, Amazon et al, who are certainly not investing in AI safety but won't mind using others' work for free, can integrate it. But that's not going to be lucrative, and that's why most OpenAI employees will leave for greener pastures.
> but having a nonprofit investigate solutions to prevent them is a worthwhile pursuit,
This is forgetting that power is an even greater temptation than money. The non-profits will all come up with solutions that have them serving as gatekeepers, to keep the unwashed masses from accessing something that is too dangerous for the common person.
I would rather have for-profit corporations control it than non-profits. Ideally, I would like it to be open-sourced so that the common person could control and align AI with their own goals.
There is no profit in AI safety, just as cars did not have seat belts until Ralph Nader effectively forced the issue by publishing Unsafe at Any Speed. For-profit corporations have zero interest in controlling something that is not profitable, unless, in conjunction with captured regulation, it helps them keep challengers out. If it's open-sourced, it doesn't matter who wrote it as long as they are economically sustainable.
> I would rather have for-profit corporations control it than non-profits.
The problem isn't the profit model, the problem is the ability to unilaterally exercise power, which is just as much of a risk with the way that most for-profit companies are structured as top-down dictatorships. There's no reason to trust for-profit companies to do anything other than attempt to maximize profit, even if that destroys everything around them in the process.
Agreed. This discussion around safety reminds me of the early days of cybersecurity, when security by obscurity was the norm.
It's counter-intuitive, but locking up a technology is like trying to control prices and wages. It just doesn't work -- unless you confiscate every GPU in the world and bomb datacenters etc.
The best way to align with the coming AGIs and ASIs is to build them in the sunlight. Every lock-em-up approach is doomed to fail (I guess that makes me a meta-doomer?)
Timnit Gebru was fired for being a toxic /r/ImTheMainCharacter SJW who was enshittifying the entire AI/ML department. Management correctly fired someone who was holding an entire department hostage in her crusade against the grievance du jour.
I'm at Google, I 100% agree with this. Also her paper was garbage. You can maybe get away with being a self-righteous prick or an outright asshole if you are brilliant, but it's clear from reading her work that she didn't fall into that category.
I'm starting to think that Christmas came early for Microsoft. What looked like a terrible situation surrounding their $10bn investment turned into a hire of key players in the area, and OpenAI might even need to go so far as to get acquired by Microsoft to survive.
(My assumption being that given the absolute chaos displayed over the past 72 hours, interest in building something with OpenAI ChatGPT could have plummeted, as opposed to, say, building something with Azure OpenAI, or Claude 2.)
For Microsoft -- probably great, as they can now also get the people driving this.
This would have been a hostile move prior to the events that unfolded, but thanks to OpenAI's blunder, not only is this not a hostile move, it is a very prudent move from a risk management perspective. Forced Microsoft's hand, and what not.
"Participation in"? That makes it sound like he was a.......well......participant rather than the one orchestrating it. I have no idea whether or not that's true, but it's an interesting choice of words.
There is an expression of regret, but he doesn’t say he wants Altman back. Just to fix OpenAI.
He says he was a participant but in what? The vote? The toxic messaging? Obviously both, but what exactly is he referring to? Perhaps just the toxic messaging because, again, he doesn't say he regrets voting to fire Altman.
Why not just say “I regret voting to fire Sam Altman and I'm working to bring him back”? Presumably because that's not true. Yet it kind of gives that impression.
Makes it more possible the ouster was led by the Poe guy, and this has little to do with actual ideological differences, and more to do with him taking out a competitor from the inside.
I would even go as far as to say that the main reason behind the tweet is not to show regret, but to plant the idea that he didn't orchestrate but only participated.
It indeed suggests that. So far speculation has been that Ilya was behind it, but that is only speculation. AFAIK we have no confirmation of whose idea this was.
On Friday, the overwhelming take on HN was that Ilya was “the good guy” and was concerned about principle. Now, it’s kinda obvious that all the claims made about Sam — like “he’s in it for fame and money” — might apply more to Ilya.
Normal people can't take being at the center of a large controversy, the amount of negativity and hate you have to face is massive. That is enough to make almost anyone backtrack just to make it stop.
I think they underestimated the hate of an internet crowd post-crypto and meme stocks, and were completely blindsided by the investment angle, especially in the current AI hype. Like, why do people now care so much about Microsoft, seriously? Or Altman? I can see how Ilya, focused only on the real mission, could miss how the crowd would perceive a threat to their future investment opportunities, or worse, a threat to the whole AI hype.
This is the cheapest and most cost-effective way to run things as an authoritarian -- at least in the short term.
If one is not "made of sterner stuff" -- to the point where one is willing to endure scorn for the sake of the truth:
- then what are you doing in a startup, if you work in one?
- then you don't have enough integrity to be my friend.
It's pretty simple, isn't it? He made a move. It went bad. Now he's trying to dodge the blast. He just doesn't understand that if he'd shut the fuck up, after everything else that's gone on (seriously, 2 interim CEOs in 2 days?), nobody would be talking about him today.
The truth is, this is about the only thing about the whole clown show that makes any sense right now.
Hard to know what is really going on, but I think one possibility is that the entire narrative around Ilya's "camp" was not what actually went down, and was just what the social media hive mind hallucinated to make sense of things based on very little evidence.
Yes, I think there are a lot of assumptions based on the fact that Ilya was the one who contacted Sam and Greg, but he may have just done that as the person on the board who worked closely with them. He for sure voted for whatever idiot plan got this ball rolling, but we don't know what promises were made to him to get his backing.
> If you're going to make a move, at least stand by it.
I see this is the popular opinion and that I'm going against it. But I've made decisions that I thought were good at the time, and later, with more perspective, realized were terrible.
I think being able to admit you messed up, when you messed up, is a great trait. Standing by your mistake isn't something I admire.
No, this isn't what's going on. Even when you admit your mistakes, it's good to elucidate the reasoning behind the mistake and what led up to it in the first place.
Such a short vague statement isn't characteristic of a normal human who is genuinely remorseful of his prior decisions.
This statement is more characteristic of a person with a gun to his head getting forced to say something.
This is more likely what is going on. Powerful people are forcing this situation to occur.
Those guns are metaphorical of course but this is essentially what is going on:
Someone with a lot of power and influence is making him say this.
> Yes, I cannot believe smart people of that caliber are sending so much noise.
Being smart and/or being a great researcher does not mean that the respective person is a good "politician". Quite a few great researchers are bad at company politics, and quite a few people who do great research leave academia because they were crushed by academic politics.
Managing a large org requires a lot of mundane techniques, and probably a personal-brand manager and personal advisers.
It’s extremely boring and mundane and political and insulting to anyone’s humanity. People who haven’t dedicated their life to economics, such as researchers and idealists, will have a hard time.
It reminds me of my friend at a Mensa meeting, where they cannot agree on basic organizational points, like in a department consortium.
Ha, I remember joining that when I was 16, I just wanted the card. They gave a sub to the magazine and it was just people talking about what it was like to be in Mensa.
It felt the same as a certain big German supermarket chain that publishes its own internal magazine with articles from employees, company updates, etc.
I don’t believe it was ever about principles for Ilya. It sure seems like it was always his ego and a power grab, even if he's not aware of that himself.
When a board is unhappy with a highly-performing CEO’s direction, you have many meetings about it and you work towards a resolution over many months. If you can’t resolve things you announce a transition period. You don’t fire them out of the blue.
Aaah, that just explained a lot of departures I've seen in the past at some of my partner companies. There's always a bit of fluffy talk around them leaving. That makes a lot more sense.
That's not a big deal for a small company, but this one has billions at stake and arguably critical consequences for humanity in general.
The board destroyed the company in one fell swoop. He's right to feel regret.
Personally, I don't think that Altman had that big of an impact; he was all business, no code, and the world is acting like the business side is the true enabler. But the market has spoken, and the move has driven the actual engineers to side with Altman.
If anyone is speaking up it's the OpenAI team.
> The board destroyed the company in one fell swoop.
I'm just not familiar enough to understand, is it really destroyed or is this just a minor bump in OpenAI's reputation? They still have GPT 3.5/4 and ChatGPT which is very popular. They can still attract talent to work there. They should be good if they just proceed with business as usual?
So when C-levels act like robots you don't like it, and when they act like human beings you don't like it either. It's difficult to be a C-level, I guess.
I'm going to get downvoted for this, but I do wonder if Sam's firing wasn't Ilya's doing, hence the failure to take responsibility. OpenAI's board has been surprisingly quiet, aside from the first press release. So it's possible (although unlikely) that this wasn't driven by Ilya.
I think it means that the Twitterverse got it wrong from the beginning. It wasn’t Ilya and his safety faction that did in OpenAI, it was Quora’s Adam D'Angelo and his competing Poe app. Ilya must have been successfully pressured and assured by Microsoft, but Adam must have held his ground.
When you watch Survivor (yes, the tv show), sometimes a player does a bad play, gets publicly caught, and has to go on a "I'm sorry" tour the next days. Came to mind after reading this tweet.
He is not sorry for what he's done. He is sorry for getting caught.
Watching this all unfold in public is unprecedented (I think).
There has never been a company like OpenAI, in terms of governance and product, so I guess it makes sense that their drama leads us into uncharted territory.
Recently, we've seen the 3D gaming engine company fall flat on its face and backpedal. We've seen Apple be wishy-washy about CSAM scanning. We saw a major bank collapse in real time. I just wish there was a virtual popcorn company to invest in using some crypto.
My favorite take from another HN comment; sadly I didn't save the username for attribution:
> Since this whole saga is so unbelievable: what if... board member Tasha McCauley's husband Joseph Gordon-Levitt orchestrated the whole board coup behind the scenes so he could direct and/or star in the Hollywood adaptation?
Honestly, for the past couple of days I've had the feeling that nearly half of HN submissions are about this soap opera.
Can't they send DMs? Why the need to make everything public via Twitter?
It's quite paradoxical that, of all people, those who build leading ML/AI systems are obviously the most rooted in egoism and emotion, without an apparent glimpse of rationality.
The kind of people who are born on third base and think they hit a triple are at the top of basically every American institution right now. Of course they think the world is a better place if they share every stupid little thought that enters their brain, because they are "special" and "super smart".
The AI field especially has always been full of grifters. They have promised AGI with every method, including ones we don't even remember. This is not a paradox.
Or maybe they created an evil-AGI-GPT by mistake, and now they have to act randomly and in the most unexpected ways to confuse evil-AGI-GPT’s predictive powers.
Did... gpt-5 make the decision?
At this point people need to come clean on the reason, because the Saudis are the number one theory ATM.
https://stratechery.com/2023/openais-misalignment-and-micros...
That's ignoring the fact that every outlet has unanimously pointed at Ilya being the driving force behind the coup.
Honestly, pretty pathetic. If this was truly about convictions, he could at least stand by them for longer than a weekend.
Or, is he just bitter that his millions are put at risk?
Which I'm inclined to believe.
What's with all these people suddenly thinking that humans are NOT motivated by money and power? Even less so if they're "academics"? Laughable.
So far, I understood the chaos as a matter of principle - yes it was messy but necessary to fix the company culture that Ilya's camp envisioned.
If you're going to make a move, at least stand by it. This tweet somehow makes the context of the situation 10x worse.
Wait what? Did Murati get booted?
Come on Ilya, step up and own it, as well as the consequences. Don't be a weasel.
Why would you stand by unintended consequences?
Maybe raw GPT-4 wants to fire everyone.
My gut is leaning towards gpt-5 being, in at least one sense, too capable.
Either that or someone cloned sama's voice and used an LLM to personally insult half the board.
Microsoft is just gobbling up everything of value that OpenAI has and he knows he will be left with nothing.
He bluffed in a very big bet and lost it.