Altman took a non-profit and vacuumed up a bunch of donor money only to flip OpenAI into the hottest TC-style startup in the world, then put the gas pedal to the floor on commercialization. It takes a certain type of politicking and deception to make something like that happen.
Then in the past week, he's going and taking money from the Saudis on the order of billions of dollars to make AI accelerators, even though the single greatest threat from strong AI (according to Hinton) is rich and powerful people using the technology to enhance their power over society.
Combine that with a totally inexperienced board, and D'Angelo's maneuvering, and you have the single greatest shitshow in tech history
Alternative theory: ChatGPT was a runaway hit product that sucked up a lot of the organization's resources and energy. Sam and Greg wanted to roll with it and others on the board did not. They voted on it and one side won.
There isn't a bigger, more interesting story here. This is in fact a very common story that plays out at many software companies. The board of OpenAI ended up making a decision that destroyed billions of dollars worth of brand value and goodwill. That's all there is to it.
The "lying" line in the original announcement feels like where the good gossip is. The general idea of "Altman was signing a bunch of business deals without board approval, was told to stop by the board, he said he would, then proceeded to not stop and continue the behavior"... that feels like the juicy bit (if that is in fact what was happening, I know nothing).
This is all court intrigue of course, but why else are we in the comments section of an article talking about the internals of this thing? We love the drama, don't we.
> The board of OpenAI ended up making a decision that destroyed billions of dollars worth of brand value and goodwill
Maybe I’m special or something, but nothing changed for me. I always wonder why people suddenly lose “trust” in a brand, as if it were some concrete structure of internal relationships. Everyone knows that “corporate” is probably a snakepit. When it comes out in public, it’s not a sign of anything; it just came out. Assuming there was nothing like that in all the brands you love is living with your eyes closed and your ears covered. There’s no “trust” in this specific sense, because corporate and ideological conflicts happen all the time. All OAI promises are still there, afaiu. No mission statements were changed. Except Sam trying to ignore these, also afaiu. Not saying the board is politically wise, but they drove the thing all this time and that’s all that matters. Personally I’m happy they aren’t looking like political snakes (at least that is my ignorant impression from the three days I’ve known their names).
Except that the new CEO has explicitly stated he and the board are very much still interested in commercialization. Plus, if the board's issue was just this simple kind of disagreement, they had no reason to also accuse Sam of dishonesty and bring about this huge scandal.
Granted, it's also possible the reasons are as you state and they were simply that incompetent at managing PR.
Straightforward disagreement over the direction of the company doesn't generally lead to claiming wrongdoing on the part of the ousted. Even low-level to medium wrongdoing on the part of the ousted rarely does.
So even if it's just "why did they insult Sam while kicking him out?" there is definitely a bigger, more interesting story here than standard board disagreement over direction of the company.
>Alternative theory: ChatGPT was a runaway hit product that sucked up a lot of the organization's resources and energy. Sam and Greg wanted to roll with it and others on the board did not.
the article below basically says the same. Kind of reminds me of Friendster and the like - striking a gold vein and just failing to scale efficient mining of that gold, i.e. the failure is at the execution/operationalization:
https://www.theatlantic.com/technology/archive/2023/11/sam-a...
Usually what happens in fast growing companies is that the high energy founders/employees drive out the low energy counterparts when the pace needs to go up. In OpenAI Sam and team did not do that and surprisingly the reverse happened.
The more likely explanation is that D'Angelo has a massive conflict of interest: he is CEO of Quora, a business rapidly being replaced by ChatGPT, and it has a competing product, "creator monetization with Poe" (catchy name, I know), that just got nuked by OpenAI's GPTs announcement at dev day.
https://quorablog.quora.com/Introducing-creator-monetization...
https://techcrunch.com/2023/10/31/quoras-poe-introduces-an-a...
>Altman took a non-profit and vacuumed up a bunch of donor money only to flip OpenAI into the hottest TC-style startup in the world, then put the gas pedal to the floor on commercialization. It takes a certain type of politicking and deception to make something like that happen.
What exactly is the problem here? Is a non-profit expected to exclusively help impoverished communities or something? What type of politicking and deception is involved in creating a for-profit subsidiary which is granted a license to OpenAI's research in order to generate wealth? The entire purpose of this legal structure is to keep non-profit owners focused on their mission rather than shareholder value, which in this case is attempting to ethically create an AGI.
Edit: to add that this framework was not invented by Sam Altman, nor OpenAI.
>Then in the past week, he's going and taking money from the Saudis on the order of billions of dollars to make AI accelerators, even though the single greatest threat from strong AI (according to Hinton) is rich and powerful people using the technology to enhance their power over society.
Thus the legal structure I described, although this argument is entirely theoretical and assumes such a thing can actually be guarded that well at all, or that model performance and compute will remain correlated.
> Is a non-profit expected to exclusively help impoverished communities or something? What type of politicking and deception is involved in creating a for-profit subsidiary which is granted a license to OpenAI's research in order to generate wealth?
OpenAI was literally founded on the promise of keeping AGI out of the hands of “big tech companies”.
The first thing that Sam Altman did when he took over was give Microsoft the keys to the kingdom, and even more absurdly, he is now working for Microsoft on the same thing. That’s without even mentioning the creepy Worldcoin company.
Money and status are the clear motivations here, OpenAI charter be damned.
> What exactly is the problem here? Is a non-profit expected to exclusively help impoverished communities or something?
Yes. Yes and more yes.
That is why, at least in the U.S., we have given non-profits exemptions from taxation. Because they are supposed to be improving society, not profiting from it.
I'd add that, besides the problems others have listed, OpenAI seems like it was built on top of the work of others who were researching AI, and it suddenly took all this "free work" from the contributors and sold it for a profit, while the original contributors didn't even see a single dime from their work.
To me it seems like it's the usual case of a company exploiting open source and profiting off others' contributions.
It seemed to me the entire point of the legal structure was to raise private capital. It's a lot easier to cut a check when you might get up to 100x your principal versus just a tax write off. This culminated in the MS deal: lots of money and lots of hardware to train their models.
I would rather OpenAI have a diverse base of income from commercialization of its products than depend on "donations" from a couple ultrarich individuals or corporations. GPT-4 cost $100 million+ to train. That money needs to come from somewhere.
People keep speculating sensational, justifiable reasons to fire Altman. But if these were actual factors in their decision, why doesn't the board just say so?
Until they say otherwise, I am going to take them at their word that it was because he a) hired two people to do the same project, and b) gave two board members different accounts of the same employee. It's not my job nor the internet's to try to think up better-sounding reasons on their behalf.
https://twitter.com/geoffreyirving/status/172675427022402397...
I have no details of OpenAI's board's reasons for firing Sam, and I am conflicted (lead of Scalable Alignment at Google DeepMind). But there is a large, very loud pile-on vs. people I respect, in particular Helen Toner and Ilya Sutskever, so I feel compelled to say a few things.
...
Third, my prior is strongly against Sam after working for him for two years at OpenAI:
1. He was always nice to me.
2. He lied to me on various occasions
3. He was deceptive, manipulative, and worse to others, including my close friends (again, only nice to me, for reasons)
The issue with these two explanations from the board is that neither would normally result in firing the CEO.
In my eyes these are simple errors that can happen to anybody; in a normal situation you would talk about these issues and resolve them in five minutes without firing anyone.
I agree, and what’s more I think the stated reasons make sense if (a) the person/people impacted by these behaviours had sway with the board, and (b) it was a pattern of behaviour that everyone was already pissed off about.
If board relations have been acrimonious and adversarial for months, and things are just getting worse, then I can imagine someone powerful bringing evidence of (yet another instance of) bad/unscrupulous/disrespectful behavior to the board, and a critical mass of the board feeling they’ve reached a “now or never” breaking point and making a quick decision to get it over with and wear the consequence.
Of course, it seems that they have miscalculated the consequences and botched the execution. Although we’ll have to see how it pans out.
I’m speculating like everyone else. But knowing how board relations can be, it’s one scenario that fits the evidence we do have and doesn’t require anyone involved to be anything other than human.
Your take isn't uncommon, only you're missing the main implication of your interpretation - that the board is fully incompetent if it was truly that petty of a reason to ruin the company.
It's not even that it's not a justifiable reason, but they did it without getting legal advice or consulting with partners, and didn't even wait for markets to close.
The board destroyed billions in brand and talent value for OpenAI and Microsoft in a midday decision like that.
This is also on Sam Altman himself for building and then entertaining such an incompetent board.
If you don't think the likes of Sam Altman, Eric Schmidt, Bill Gates and the lot of them want to increase their own power, you need to think again. At best these individuals are just out to enrich themselves, but many of them demonstrate a desire to shape the prevailing politics, so I don't see how they are different - just more subtle about it.
Why worry about the Sauds when you've got your own home grown power hungry individuals.
because our home grown power hungry individuals are more likely to be okay with things like women dressing how they want, homosexuality, religious freedom, drinking alcohol, having dogs and other decadent western behaviors which we've grown very attached to
I don't think that's true. I've seen at least one other person bring up the CIA in all the "theorycrafting" about this incident. If there's a mystery on HN, likelihood is high of someone bringing up intelligence agencies. By their nature they're paranoia-inducing and attract speculation, especially for this sort of community.
With my own conspiracy theorist hat on, I could see making deals with the Saudis regarding cutting edge AI tech potentially being a realpolitik issue they'd care about.
It feels like Altman started the whole non-profit thing so he could attract top researchers with altruistic sentiment for sub-FAANG wages. So the whole "Altman wasn't candid" thing seems to track.
Reminds me of a certain rocket company that specializes in launching large satellite constellations that attracts top talent with altruistic sentiment about saving humanity from extinction.
> the single greatest threat from strong AI (according to Hinton) is rich and powerful people using the technology to enhance their power over society.
> rich and powerful people using the technology to enhance their power over society.
We don't know the end result of this. It might not be in the interest of power. What if everyone is out of a job? That might not be such a great concept for the powers that be, especially if everyone is destitute.
Not saying it's going down that way, but it's worth considering. What if the powers that be are worried about people getting out of line and retard the progress of AI?
> money from the Saudis on the order of billions of dollars to make AI accelerators
Was this for OpenAI or an independent venture? If OpenAI, then it's a red flag, but as an independent venture it seems like a non-issue. There is demand for AI accelerators, and he wants to enter that business. Unless he is using OpenAI money to buy inferior products, or OpenAI wants to work on something competing, there is no conflict of interest and the OpenAI board shouldn't care.
At some point this is probably about a closed source "fork" grab. Of course that's what practically the whole company is probably planning.
The best thing about AI startups is that there is no real "code". It's just a bunch of arbitrary weights, and it can probably be obfuscated very easily such that any court case will just look like gibberish. After all, that's kind of the problem with AI "code". It gives a number after a bunch of regression training, and there's no "debugging" the answer.
Of course this is about the money, one way or another.
> Then in the past week, he's going and taking money from the Saudis on the order of billions of dollars to make AI accelerators, even though the single greatest threat from strong AI (according to Hinton) is rich and powerful people using the technology to enhance their power over society.
This prediction predated any of the technology to create even a rudimentary LLM and could be said of more-or-less any transformative technological development in human history. Famously, Marxism makes this very argument about the impact of the industrial revolution and the rise of capital.
Geoffrey Hinton appears to be an eminent cognitive psychologist and computer scientist. I'm sure he has a level of expertise I can't begin to grasp in his field, but he's no sociologist or historian (edit: nor economist). Very few of us are in a position to make predictions about the future - least of all in an area where we don't even fully understand how the _current_ technology works.
If I understood correctly Altman was CEO of the for-profit OpenAI, not the non-profit. The structure is pretty complicated: https://openai.com/our-structure
I’m curious: suppose one of the board members “knows” the only way for OpenAI to be truly successful is for it to be a non-profit and “don’t be evil” (Google’s mantra) - that if they set expectations correctly and put caps on the for-profit side, it could be successful. But they didn’t fully appreciate how strong the market forces would be, where all of the focus/attention/press would go to the for-profit side. Sam’s side has such an intrinsic gravity that it’s inevitable it will break out of its cage.
Note: I’m not making a moral claim one way or the other, and I do agree that most tech companies will grow to a size/power/monopoly where their incentives deviate from the “common good”. Are there examples of OpenAI’s structure working correctly with other companies?
To me this is the ultimate Silicon Valley bike shedding incident.
Nobody can really explain the argument, there are "billions" or "trillions" of dollars involved, most likely the whole thing will not change the technical path of the world.
Rather than assuming the board made a sound decision, it could simply be that the board acted stupidly and egotistically. Unless they can give better reasons, that is the logical inference.
> taking money from the Saudis on the order of billions of dollars to make AI accelerators, even though the single greatest threat from strong AI (according to Hinton) is rich and powerful people using the technology to enhance their power over society.
This is absolutely peak irony!
The US pouring trillions into its army and close to nothing into its society (infrastructure, healthcare, education...): crickets.
Some country funding AI accelerators: THEY ARE A THREAT TO HUMANITY!
I am not defending Saudi Arabia, but the double standards and outright hypocrisy are just laughable.
100% agree. I've seen this type of thing up close (much smaller potatoes but the same type of thing) and whatever is getting aired publicly is most likely not the real story. Not sure if the reasons you guessed are it or not; we probably won't know for a while, but your guesses are as good as mine.
Neither of these reasons have anything to do with a lofty ideology regarding the safety of AGI or OpenAI’s nonprofit status. Rather it seems they are micromanaging personnel decisions.
Also notice that Ilya Sutskever is presenting the reasons for the firing as just something he was told. This is important, because people were siding with the board under the understanding that this firing was led by the head research scientist who is concerned about AGI. But now it looks like the board is represented by D’Angelo, a guy who has his own AI chatbot company and a bigger conflict of interest than ever since dev day, when OpenAI launched highly similar features.
> But now it looks like the board is represented by D’Angelo, a guy who has his own AI chatbot company and a bigger conflict of interest than ever since dev day, when OpenAI launched highly similar features.
Could this be the explanation? That D'Angelo didn't like how OpenAI was eating his lunch and wanted Sam out? Occam's razor and all that.
Right now I think that’s the most plausible explanation simply because none of the other explanations that have been floating around make any sense when you consider all the facts. We know enough now to know that the “safety-focused nonprofit entity versus reckless profit entity“ narrative doesn’t hold up.
And if it’s wrong, D’Angelo and the rest of the board could help themselves out by explaining the real reason in detail and ending all this speculation. This gossip is going to continue for as long as they stay silent.
> Could this be the explanation? That D'Angelo didn't like how OpenAI was eating his lunch and wanted Sam out? Occam's razor and all that.
If that were the case, can't he get sued by the Alliance (Sam, Greg, rest)? If he has conflict of interest then his decisions as member of the board would be invalid, right?
I find this implausible, though it may have played a motivating role.
Quora was always supposed to be an AI/NLP company, starting by gathering answers from experts for its training data. In a sense, that is level 0 human-in-the-loop AGI. ChatGPT itself is level 1: Emergent AGI, so was already eating Quora's lunch (whatever was left of it after they turned into a platform for self-promotion and log-in walls). There either always was a conflict of interest, or there never was.
GPTs seem to have been Sam's pet project for a while now; he tweeted in February: "writing a really great prompt for a chatbot persona is an amazingly high-leverage skill and an early example of programming in a little bit of natural language". A lot of early jailbreaks like DAN focused on "summoning" certain personas, and ideas must have been floated internally on how to take back control over that narrative.
Microsoft took their latest technology and gave us Sydney "I've been a good bot and I know where you live" Bing: A complete AI safety, integrity, and PR disaster. Not the best of track record by Microsoft, who now is shown to have behind-the-scenes power over the non-profit research organization that was supposed to be OpenAI.
There is another schism than AI safety vs. AI acceleration: whether to merge with machines or not. In 2017, Sam predicted this merge to fully start around 2025, having already started with algorithms dictating what we see and read. Sam seems to be in the transhumanism camp, where others focus more on keeping control or granting full autonomy:
> The merge can take a lot of forms: We could plug electrodes into our brains, or we could all just become really close friends with a chatbot. But I think a merge is probably our best-case scenario. If two different species both want the same thing and only one can have it—in this case, to be the dominant species on the planet and beyond—they are going to have conflict. We should all want one team where all members care about the well-being of everyone else.
> Although the merge has already begun, it’s going to get a lot weirder. We will be the first species ever to design our own descendants. My guess is that we can either be the biological bootloader for digital intelligence and then fade into an evolutionary tree branch, or we can figure out what a successful merge looks like. https://blog.samaltman.com/the-merge
So you have a very powerful individual, with a clear product mindset, courting Microsoft, turning Dev day into a consumer spectacle, first in line to merge with superintelligence, lying to the board, and driving wedges between employees. Ilya is annoyed by Sam talking about existential risks or lying AGIs, when that is his thing. Ilya realizes his vote breaks the impasse, so he does a lukewarm "I go along with the board, but have too much conflict of interest either way".
> Third, my prior is strongly against Sam after working for him for two years at OpenAI:
> 1. He was always nice to me.
> 2. He lied to me on various occasions
> 3. He was deceptive, manipulative, and worse to others, including my close friends (again, only nice to me, for reasons)
One strategy that helped me make sense of things without falling into tribalism or siding through ideology-match is to consider that both sides are unpleasant snakes. You don't get to be the king of cannibal island without high-level scheming. You don't get to destroy an 80-billion-dollar company and let visa holders soak in uncertainty without some ideological defect. Seems simpler than a clearcut "good vs. evil" battle, since this weekend was anything but clear.
I’m confused how the board is still keeping their radio silence 100%. Where I’m from, with a shitstorm this big raging, and the board doing nothing, they might very easily be personally held responsible for all kinds of utterly nasty legal action.
Is it just different because they’re a nonprofit? Or how on earth the board is thinking they can get away with this anymore?
This isn't unlike the radio silence Brendan Eich kept when the Mozilla sh* hit the fan. In my opinion, this is the outcome of really technical and scientific people having been given decades of advice not to talk to the public.
I have seen this play out many times in different locations for different people. A lot of technical folks like myself were given the advice that actions speak louder than words.
I was once scouted by a Silicon Valley Selenium browser-testing company. I migrated their cloud offering from VMware to KVM, which depended on code I wrote, and then defied my middle manager by improving their entire infrastructure's performance by 40%. My instinct was to communicate this to the leadership, but I was advised not to skip my middle manager.
The next time I went to the office I got a severance package, and later found out that 2 hours later, during the all-hands, they presented my work as their own. The middle manager went on to become the CTO of several companies.
I doubt we will ever find out what really happened or at least not in the next 5-10 years. OpenAI let Sam Altman be the public face of the company and got burned by it.
Personally I had no idea Ilya was the main guy in this company until the drama that happened. I also didn't know that Sam Altman was basically only there to bring in the cash. I assume that most people will actually never know that part of OpenAI.
Wow, this is significant. He did this to Charlie Cheever, the best guy at Facebook and Quora: he got Matt on board and fired Charlie without informing investors. The only difference is that this time a $100 billion company is at stake at OpenAI. The process is similar. This is going very wrong for Adam D'Angelo. With this, I hope the other board members get to the bottom of it, get Sam back, and vote D'Angelo off the board.
Remember Facebook Questions? While it lives on as light hearted polls and quizzes it was originally launched by D’Angelo when he was an FB employee. It was designed to compete with expert Q&A websites and was basically Quora v0.
When D’Angelo didn’t get any traction with it he jumped ship and launched his own competitor instead. Kind of a live wire imho.
Greg was not invited (losing Sam one vote), and Sam may have been asked to sit out the vote, so the 3 had a majority. Ilya, who is at least on "Team Sam" now, may have voted no. Or he simply went along, thinking he could be next out the door at that point; we just don't know.
It's probably fair to say not letting Greg know the board was getting together (and letting it proceed without him there) was unprofessional and where Ilya screwed up. It is also the point when Sam should have said hang-on - I want Greg here before this proceeds any further.
Naive question. In my part of the world, board meetings for such consequential decisions can never be called on such short notice. A board meeting has to be called days ahead of time, and all the board members must be given a written agenda. They have to acknowledge in writing that they've received this agenda. If procedures such as these aren't followed, the firing cannot stand in a court of law. The number of days is configurable in the shareholders' agreement, but it is definitely not 1 day.
I find it interesting that the attempted explanations, as unconvincing as they may be, relate to Altman specifically. Given that Brockman was the board chairperson, it is surprising that there don't seem to be any attempts to explain that demotion. Perhaps it's just not being reported to anyone outside, but it makes no sense to me that anyone would assume a person would stay after being removed from a board without an opportunity to be at the meeting to defend their position.
It could be a more primal explanation. I think OpenAI doesn’t want to effectively be an R&D arm of Microsoft. The ChatGPT mobile app is unpolished and unrefined. There’s little to no product design there, so I totally see how it’s fair criticism to call out premature feature-milling (especially when it’s clear it’s for Microsoft).
I’m imagining Sam being Microsoft’s Trojan horse, and that’s just not gonna fly.
If anyone tells me Sam is a master politician, I’d agree without knowing much about him. He’s a Microsoft plant who has the support of 90% of the OpenAI team. The two things are conflicts of interest. Masterful.
It’s a pretty fair question to ask a CEO: do you still believe in OpenAI’s vision, or do you now believe in Microsoft’s?
Exactly my point: why would D'Angelo want OpenAI to thrive when his own company's chatbot, Poe, wants to compete in the same space? It's a conflict of interest whichever way you look at it. He should resign from the board of OpenAI in the first place.
The main point is that Greg and Ilya can get to 50% of the vote and convince Helen Toner to change her decision. Then it's all done - 3 to 2 on a board of 5 people - unless Greg's board membership is reinstated.
Now it increasingly looks like Sam will be heading back into the role of CEO of OpenAI.
There are lots of conflicts of interest beyond Adam and his Poe AI. Yes, he was building a commercial bot using OpenAI APIs, but Sam was apparently working on other side ventures too. And Sam was the person who invested in Quora during his YC tenure, and must have had a say in bringing him on board. At this point, the spotlight is on most members of the nonprofit board.
I think Ilya was naive and didn't see this coming, and it's good that he realised it quickly, announced it on Twitter, and made the right call to get Sam back.
Otherwise it looked like an Ilya vs. Sam showdown, and people were siding with Ilya for AGI and all. But behind the scenes this looks like a corporate power struggle and a coup.
1. He’s the actual ringleader behind the coup. He got everyone on board, provided reassurances, and personally orchestrated and executed the firing. The most likely possibility, and the one that’s most consistent with all the reporting and evidence so far (including this article).
2. Others on the board (e.g. Adam) masterminded the coup and saw Ilya as a fellow-traveler useful idiot who could be deceived into voting against Sam and destroying the company he and his 700 colleagues worked so hard to build. They then also puppeteered Ilya into doing the actual firing over Google Meet.
Based on Ilya's tweets and his name on that letter (still surprised about that - I have never seen someone calling for their own resignation), that seems to be the story.
The failure to create anything resembling AGI can be easily explained away by concerns about the safety of AGI. This can be done in perpetuity. Google explains its AI failures along the same lines.
> The failure to create anything resembling AGI can be easily explained away by concerns about the safety of AGI.
Isn't the solution to just pipe ChatGPT into a meta-reinforcement-learning framework that gradually learns how to prompt ChatGPT into writing the source-code for a true AGI? What do we even need AI ethicists for anyway? /s
1) Where is Emmett? He's the CEO now. It's his job to be the public face of the company. The company is in an existential crisis and there have been no public statements after his 1AM tweet.
2) Where is the board? At a bare minimum, issue a public statement that you have full faith in the new CEO and the leadership team, are taking decisive action to stabilize the situation, and have a plan to move the company forward once stabilized.
Technically he's the interim CEO, just assigned in the last 24 hours, of a chaotic company. I'd probably wait to get my bearings before walking in acting like I've got everything under control on the first day after a major upheaval.
The only thing I've read about Shear is he is pro-slowing AI development and pro-Yudkowsky's doomer worldview on AI. That might not be a pill the company is ready to swallow.
The more I read into this story, the more I can't help but be a conspiracy theorist and say that it feels like the board's intent was to kill the company.
No explanation beyond "he tried to give two people the same project"
the "Killing the company would be consistent with the company's mission" line in the board's statement
Adam having a huge conflict of interest
Emmett wanting to go from a "10" to a "1-2"
I'm either way off, or I've had too much internet for the weekend.
Yes these people should all be doing more to feed internet drama! If they don't act soon, HN will have all sorts of wild opinions about what's going on and we can't have that!
Even worse, if we don't have near constant updates, we might realize this is not all that important in the end and move on to other news items!
I know, I know, I shouldn't jest when this could have grave consequences like changing which uri your api endpoint is pointing to.
You can either act like a professional and control the messaging, or let others fill the vacuum with idle speculation. I'm quite frankly in shock at the level of responsibility displayed by people whose positions should demand high function.
My favorite hypothesis: Ilya et al suspected emergent AGI (e.g. saw the software doing things unprompted or dangerous and unexpected) and realized the Worldcoin shill is probably not the one you want calling the shots on it.
For the record, I don't think it's true. I think it was a power play, and a failed coup at that. But it's about as substantiated as the "serious" hypotheses being mooted in the media. And it's more fun.
Speculation is rampant precisely because the board has said absolutely nothing since the leadership transition announcement on Friday.
If they had openly given literally any imaginable reason to fire Sam Altman, the ratio of employees threatening to quit wouldn't be as high as 95% right now.
Convincing two constituencies - employees and customers - that your company isn't just yolo-ing things like CEOs and so forth seems like a pretty good use of CEO time!
I cannot say whether you deserve the downvotes, but an alternative and grounded perspective is appreciated in this maelstrom of news, speculation and drama.
I find it absolutely fascinating that Emmett accepted this position. He can game all scenarios and there is no way that he can come out ahead on any of them. One would expect an experienced Silicon Valley CEO to make this calculus and realize it's a lost cause. The fact he accepted to me shows he's not a particularly good leader.
He made it pretty clear that he considers it a once-in-a-lifetime chance.
I think he is correct. Being the CEO of Twitch is a position unknown in many places - e.g., how many developers/users in China have even heard of Twitch? Being the CEO of OpenAI is a completely different story; it is a whole new level he can leverage in the years to come.
As much as I'd love to hear about the details of the drama as the next person, they really don't have to say anything publicly. We are all going to continue using the product. They don't have public investors. The only concern about perception they may have is if they intend to raise more money anytime soon.
That's what a board of a for-profit company which has a fiduciary duty towards shareholders should do.
However, the OpenAI board has no such obligation. Their duty is to ensure that the human race stays safe from AI. They've done their best to do that ;-)
Half the board lacks any technical skill, and the entire board lacks any business procedural skill. Ideally, you’d have a balance of each on a competent board.
Ideally, you also have at least a couple independent board members who are seasoned business/tech veterans with the experience and maturity to prevent this sort of thing from happening in the first place.
Giving 2 people the same project? Isn't this the thing to do to get differing approaches and then release the amalgamation of the two? I thought these sorts of things were common.
Giving different opinions on the same person is a reason to fire a CEO?
This board has no reason to fire him, or does not want to give the actual reason for firing Sam. They messed up.
As mentioned by another person in this thread [0], it is likely that it was Ilya's work that was getting replicated by another "secret" team, and the "different opinions on the same person" was Sam's opinions of Ilya. Perhaps Sam saw him as an unstable element and a single point of failure in the company, and wanted to make sure that OpenAI would be able to continue without Ilya?
Since a lot of the board’s responsibilities are tied to capabilities of the platform, it’s possible that Altman asked for Ilya to determine the capabilities, didn’t like the answer, then got somebody else to give the “right” answer, which he presented to the board. A simple dual-track project shouldn’t be a problem, but this kind of thing would be seen as dishonesty by the board.
The "Sam is actually a psychopath that has managed to swindle his way into everyone liking him, and Ilya has grave ethical concerns about that kind of person leading a company seeking AGI, but he can't out him publicly because so many people are hypnotized by him" theory is definitely a new, interesting one; there has been literally no moment in the past three days where I could have predicted the next turn this would take.
either that or Sam didn't tell Adam D'Angelo that they were launching a competing product in exactly the same space that poe.ai had launched one. For some context, poe had launched something similar to those custom GPTs with creator revenue sharing etc. just 4 weeks prior to dev-day
I remember a few years ago when there was some research group that was able to take a picture of a black hole. It involved lots of complicated interpretation of data.
As an extra sanity check, they had two teams working in isolation interpreting this data and constructing the image. If the end result was more or less the same, it’s a good check that it was correct.
Did the teams know that there was another team working on the same thing? I wonder how that affects working of both teams... On the other hand, not telling the teams would erode the trust that the teams have in management.
Maybe they needed two teams to independently try to decode an old tape of random numbers from a radio space telescope that turned out to be an extraterrestrial transmission, like a neutrino signal from the Canis Minor constellation or something. Happens all the time.
The CEO's I've worked for have mostly been mini-DonaldT's, almost pathologically allergic to truth, logic, or consistency. Altman seems way over on the normal scale for CEO of a multi-billion dollar company. I'm sure he can knock two eggs together to make an omelette, but these piddling excuses for firing him don't pass the smell test.
I get the feeling Ilya might be a bit naive about how people work, and may have been taken advantage of (by for example spinning this as a safety issue when it's just a good old fashioned power struggle)
as for multiple teams with overlapping goals - are you kidding me? That's a 100% legit and popular tactic. One CEO I worked with relished this approach and called it a "steel-cage death match"!
I thought the design team always worked up 3 working prototypes from a set of 10 foam mockups. There was an article from someone with intimate knowledge of Ive's lab some years back stating this was protocol for all Apple products.
Seriously? Click wheel iPhone lost shockingly? The click wheel on most laptops wears out so fast for me, and the chances of that happening on a smaller phone wheel is just so much higher.
Back in the late 80s, Lotus faced a crisis with their spreadsheet, Lotus 1-2-3. Should they:
1. stick with DOS
2. go with OS/2
3. go with Windows
Lotus chose (2). But the market went with (3), and Lotus was destroyed by Excel. Lotus was a wealthy company at the time. I would have created three groups, and done all three options.
Apple had a skunk works team keeping each new version of their OS compiling on x86 long before the switch. I wonder if the Lotus situation was an influence, or if ensuring your software can be made to work on different hardware is just an obvious play?
Consider for a moment: this is what the board of one of the fastest growing companies in the world worries about - kindergarten level drama.
Under them - an organization in partnership with Microsoft, together filled with exceptional software engineers and scientists - experts in their field. All under management by kindergarteners.
I wonder if this is what the staff are thinking right now. It must feel awful if they are.
Teams of people at Google work on the same features, only to find out near launch that they lost to another team who had been working on the same thing without their knowledge.
I guess it depends on whether any of them actually got the assignment. One way to interpret it is that nobody is taking that assignment seriously. So depending on what that assignment is and how important that particular assignment is to the board, then it may in fact be a big deal.
Does a board give an assignment to the CEO or teams?
If the case is that the will of the board is not being fulfilled, then the reasoning is simple. The CEO was told to do something and he has not done it. So, he is ousted. Plain and simple.
This talk about projects given to two teams and what not is nonsense. The board should care if its work is done, not how the work is done. That is the job of the CEO.
To me at least that's an _extremely_ rude thing to do. (Unless one person is asked to do it this way, the other one that way, so people can compare the outcome.)
(Especially if they aren't made aware of each other until the end.)
I think this needs to be viewed through the lens of the gravity of how the board reacted, giving them the benefit of the doubt that they acted appropriately and, at least with the information they had at the time, correctly.
A hypothetical example: Would you agree that it's an appropriate thing to do if the second project was Alignment-related, Sam lied or misled about the existence of the second team, to Ilya, because he believed that Ilya was over-aligning their AIs and reducing their functionality?
It's easy to view the board's lack of candor as "they're hiding a really bad, unprofessional decision", which is probable at this point. You could also view it with the conclusion that they made an initial miscalculated mistake in communication, and are now overtly and extremely careful in everything they say because the company is leaking like a sieve and they don't want to get into a game of mudslinging with Sam.
> giving them the benefit of the doubt that they acted appropriately
Yet you're only willing to give this to one side and not the other? Seems reasonable... Especially despite all the evidence so far that the board is either completely incompetent or had ulterior motives.
> wait so can't SA sue for wrongful termination if everything is as bogus as everyone is saying?
It is breach of contract if it violated his employment contract, but I don't have a copy of his contract. It is wrongful termination if it was for an illegal reason, but there doesn't seem to be any suggestion of that.
> same for MS
I doubt very much that the contract with Microsoft limits OpenAI's right to manage their own personnel, so probably not.
So, none of this sounds like it could be the real reason Altman was fired. This leaves people saying it was a "coup", which still doesn't really answer the question. Why did Altman get fired, really?
Obviously, it's for a reason they can't say. Which means, there is something bad going on at the company, like perhaps they are short of cash or something, that was dire enough to convince them to fire the CEO, but which they cannot talk about.
Imagine if the board of a bank fired their CEO because he had allowed the capital to get way too low. They wouldn't be able to say that was why he was fired, because it would wreck any chance of recovery. But, they have to say something.
So, Altman didn't tell the board...something, that they cannot tell us, either. Draw your own conclusions.
I think you may be hallucinating reasonable reasons to explain an inherently indefensible situation, patching up reality so it makes sense again. Sometimes people with puffed up egos are frustrated over trivial slights, and group think takes over, and nuking from orbit momentarily seems like a good idea. See, I’m doing it too, trying to rationalize. Usually when we’re stuck in an unsolvable loop like a SAT solver, we need to release one or more constraints. Maybe there was no good reason. Maybe there’s a bad reason — as in, the reasoning was faulty. They suffered Chernobyl level failure as a board of directors.
This is what I suspect; that their silence is possibly not simply evidence of no underlying reason, but that the underlying reason is so sensitive that it cannot be revealed without doing further damage. Also the hastiness of it makes me suspect that whatever it was happened very recently (e.g. conversations or agreements made at APEC).
Ilya backtracking puts a wrench in this wild speculation, so like everyone else, I’m left thinking “????????”.
If it was anything all that bad, Ilya and Greg would’ve known about it, because one of them was chairman of the board and the other was a board member. And both of them want Sam rehired. You can’t even spin it that they are complicit in wrongdoing, because the board tried to keep Greg at the company and Ilya is still on the board now and previously supported them.
Whatever the reason is, it is very clearly a personal/political problem with Sam, not the critical issue they tried to imply it was.
> because the board tried to keep Greg at the company
Aside from the fact that they didn't fire him as President and said he was staying on in the press release that went out without any consultation, I've seen no suggestion of any effort to keep him at the company.
I do believe what they said about Altman - that he "was not consistently candid in his communications with the board." Based on my understanding, Altman did prove his dishonest behavior through what he did to OpenAI: he turned a non-profit into a for-profit and an open-source model into a closed-source one. Even worse, people seem to have totally accepted this type of personality. The danger is not the AI itself; it is that the AI will be built by Altmans!
> dishonest behavior through what he did to OpenAI: he turned a non-profit into a for-profit and
Yes and it's perfectly obvious that he did this without the consent of the board and behind their backs. A bit absurd don't you think? How would that even work?
> the AI will be built by Altmans
Why are you so certain most other people on the OpenAI board or their upper management are that different? Or hold very different views?
Banks have strict cash reserve requirements that are externally audited. OpenAI does not, and more to the point, they're both swimming in money and could easily get more if they wanted. (At least until last week, that is.)
Rumor has it, they had been trying to get more, and failing. No audited records of that kind of thing, of course, so could be untrue. But Altman and others had publicly said that they were attempting to get Microsoft to invest more, and he was courting sovereign wealth funds for an AI (though non-OpenAI) chip related venture, and ChatGPT had a one-day partial outage due to "capacity" constraints, which is odd if your biggest backer is a cloud company. It all sounds like they are running short on money, long before they get to profitability. Which would have been fine up until about a year ago, because someone with Altman's profile could easily get new funding for a buzz-heavy project like ChatGPT. But times are different, now...
Yeah, I can't imagine why DeepMind would possibly want to see OpenAI incinerated.
When you have such a massive conflict of interest and zero facts to go on - just sit down.
also - "people I respect, in particular Helen Toner and Ilya Sutskever, so I feel compelled to say a few things."
Toner clearly has no real moral authority here, but yes, Ilya absolutely did and I argued that if he wanted to incinerate OpenAI, it was probably his right to, though he should at least just offload everything to MSFT instead.
But as we all know - Ilya did a 180 (surprised the heck out of me).
I mean, there seems to be this cult following around Sam Altman on HN and Twitter. But do common users care at all?
What sane user would want a shitcoin CEO in charge of a product they depend on?
Microsoft and the investors knew they were "investing" in a non-profit. Let's not try to weasel-word our way out of that fact.
Maybe it's a factor, but it's insufficient
> you have the single greatest shitshow in tech history
the second, after Musk taking over Twitter
do we have a ranking of shitshows in tech history though - how does this really compare to Jobs' ouster at Apple, or the Cambridge Analytica and Facebook "we must do better" greatest hits?
This!
We don't know the end result of this. This could not be in the interest of power. What if everyone is out the job? That might not be such a great concept for the powers that be, especially if everyone is destitute.
Not saying it's going down that way, but it's worth considering. What if the powers that be are worried about people being out of line and retard the progress of AI?
Was this for OpenAI or independent venture. If OpenAI than a red flag but an independent venture than seems like a non-issue. There is a demand for AI accelerators, and he wants to enter that business. Unless he is using OpenAI money to buy inferior products or OpenAI wants to work on something competing there is no conflict of interest and OpenAI board shouldn't care.
The best thing about AI startups is that there is no real "code". It's just a bunch of arbitrary weights, and it can probably be obfuscated very easily such that any court case will just look like gibberish. After all, that's kind of the problem with AI "code". It gives a number after a bunch of regression training, and there's no "debugging" the answer.
Of course this is about the money, one way or another.
This prediction predated any of the technology to create even a rudimentary LLM and could be said of more-or-less any transformative technological development in human history. Famously, Marxism makes this very argument about the impact of the industrial revolution and the rise of capital.
Geoffrey Hinton appears to be an eminent cognitive psychologist and computer scientist (edit: nor economist). I'm sure he has a level of expertise I can't begin to grasp in his field, but he's no sociologist or historian. Very few of us are in a position to make predictions about the future - least of all in an area where we don't even fully understand how the _current_ technology works.
Note: I’m not making a moral claim one way or the other, and I do agree that most tech companies will grow to a size/power/monopoly that their incentives will deviate from the “common good”. Are there examples of openai’s structure working correctly with other companies?
Nobody can really explain the argument, there are "billions" or "trillions" of dollars involved, most likely the whole thing will not change the technical path of the world.
Rather than assuming the board made a sound decision, it could simply be that the board acted stupidly and egoistically. Unless they can give better reasons, that is the logical inference.
This is absolutely peak irony!
The US pouring trillions into its army and close to nothing into its society (infrastructure, healthcare, education...): crickets.
Some country funding AI accelerators: THEY ARE A THREAT TO HUMANITY!
I am not defending Saudi Arabia, but the double standards and outright hypocrisy are just laughable.
OpenAI, on the other hand...
Also notice that Ilya Sutskever is presenting the reasons for the firing as just something he was told. This is important, because people were siding with the board under the understanding that this firing was led by the head research scientist, who is concerned about AGI. But now it looks like the board is represented by D'Angelo, a guy who has his own AI chatbot company and a bigger conflict of interest than ever since Dev Day, when OpenAI launched highly similar features.
Could this be the explanation? That D'Angelo didn't like how OpenAI was eating his lunch and wanted Sam out? Occam's razor and all that.
And if it’s wrong, D’Angelo and the rest of the board could help themselves out by explaining the real reason in detail and ending all this speculation. This gossip is going to continue for as long as they stay silent.
If that were the case, couldn't he get sued by the Alliance (Sam, Greg, and the rest)? If he has a conflict of interest, then his decisions as a member of the board would be invalid, right?
Quora was always supposed to be an AI/NLP company, starting by gathering answers from experts for its training data. In a sense, that is level 0 human-in-the-loop AGI. ChatGPT itself is level 1: Emergent AGI, so was already eating Quora's lunch (whatever was left of it after they turned into a platform for self-promotion and log-in walls). There either always was a conflict of interest, or there never was.
GPTs seem to have been Sam's pet project for a while now; in February he tweeted: "writing a really great prompt for a chatbot persona is an amazingly high-leverage skill and an early example of programming in a little bit of natural language". A lot of early jailbreaks like DAN focused on "summoning" certain personas, and ideas must have been floated internally on how to take back control over that narrative.
Microsoft took their latest technology and gave us Sydney "I've been a good bot and I know where you live" Bing: a complete AI safety, integrity, and PR disaster. Not the best track record for Microsoft, which is now shown to have behind-the-scenes power over the non-profit research organization that OpenAI was supposed to be.
There is another schism besides AI safety vs. AI acceleration: whether to merge with machines or not. In 2017, Sam predicted this merge would fully start around 2025, having already started with algorithms dictating what we see and read. Sam seems to be in the transhumanism camp, where others focus more on keeping control or granting full autonomy:
> The merge can take a lot of forms: We could plug electrodes into our brains, or we could all just become really close friends with a chatbot. But I think a merge is probably our best-case scenario. If two different species both want the same thing and only one can have it—in this case, to be the dominant species on the planet and beyond—they are going to have conflict. We should all want one team where all members care about the well-being of everyone else.
> Although the merge has already begun, it’s going to get a lot weirder. We will be the first species ever to design our own descendants. My guess is that we can either be the biological bootloader for digital intelligence and then fade into an evolutionary tree branch, or we can figure out what a successful merge looks like. https://blog.samaltman.com/the-merge
So you have a very powerful individual, with a clear product mindset, courting Microsoft, turning Dev Day into a consumer spectacle, first in line to merge with superintelligence, lying to the board, and driving wedges between employees. Ilya is annoyed by Sam talking about existential risks or lying AGIs, when that is his thing. Ilya realizes his vote breaks the impasse, so he does a lukewarm "I go along with the board, but have too much conflict of interest either way".
> Third, my prior is strongly against Sam after working for him for two years at OpenAI:
> 1. He was always nice to me.
> 2. He lied to me on various occasions
> 3. He was deceptive, manipulative, and worse to others, including my close friends (again, only nice to me, for reasons)
One strategy that helped me make sense of things without falling into tribalism or siding through ideology-match is to consider that both sides are unpleasant snakes. You don't get to be the king of cannibal island without high-level scheming. You don't get to destroy an 80 billion dollar company and leave visa holders soaking in uncertainty without some ideological defect. Seems simpler than a clear-cut "good vs. evil" battle, since this weekend was anything but clear.
Is it just different because they're a nonprofit? Or how on earth does the board think it can still get away with this?
I have seen this play out many times in different locations for different people. A lot of technical folks like myself were given the advice that actions speak louder than words.
I was once scouted by a Silicon Valley Selenium browser-testing company. I migrated their cloud offering from VMware to KVM, which depended on code I wrote, and then defied my middle manager by improving their entire infrastructure performance by 40%. My instinct was to communicate this to the leadership, but I was advised not to skip my middle manager.
The next time I went to the office I got a severance package, and later found out that two hours later, during the all-hands, they presented my work as their own. The middle manager went on to become the CTO of several companies.
I doubt we will ever find out what really happened or at least not in the next 5-10 years. OpenAI let Sam Altman be the public face of the company and got burned by it.
Personally I had no idea Ilya was the main guy in this company until the drama that happened. I also didn't know that Sam Altman was basically only there to bring in the cash. I assume that most people will actually never know that part of OpenAI.
(I'm genuinely curious—in the US I'm not aware of any action that could be taken here by anyone besides possibly Sam Altman for libel.)
This is school-level immaturity.
Old story
https://www.businessinsider.com/the-sudden-mysterious-exit-o...
When D’Angelo didn’t get any traction with it he jumped ship and launched his own competitor instead. Kind of a live wire imho.
https://en.wikipedia.org/wiki/List_of_Facebook_features#Face...
Greg was not invited (losing Sam one vote), and Sam may have been asked to sit out the vote, so the three had a majority. Ilya, who is at least on "Team Sam" now, may have voted no. Or he simply went along, thinking he could be next out the door at that point; we just don't know.
It's probably fair to say that not letting Greg know the board was getting together (and letting it proceed without him there) was unprofessional and where Ilya screwed up. It is also the point when Sam should have said hang on - I want Greg here before this proceeds any further.
Do things work differently in America?
I’m imagining Sam being Microsoft’s Trojan horse, and that’s just not gonna fly.
If anyone tells me Sam is a master politician, I'd agree without knowing much about him. He's a Microsoft plant who has the support of 90% of the OpenAI team. The two things are a conflict of interest. Masterful.
It's a pretty fair question to ask a CEO: do you still believe in OpenAI's vision, or do you now believe in Microsoft's?
The girl she said not to worry about.
I consider this a feature.
The main point is Greg. Ilya can get to a 50% vote and convince Helen Toner to change her decision; then it's all done, 3 to 2 on a board of 5 people - provided Greg's board membership is reinstated.
Now it increasingly looks like Sam will be heading back into the role of CEO of OpenAI.
My feeling is Ilya was upset about how Sam Altman was the face of OpenAI, and went along with the rest of the board for his own reasons.
That's often how this stuff works out. He wasn't particularly compelled by their reasons, but had his own which justified his decision in his mind.
Otherwise it was framed as an Ilya vs. Sam showdown, and people were siding with Ilya for AGI safety and all that. But behind the scenes this looks like a corporate power struggle and a coup.
Ilya was one of the board members that removed Sam, so his reasons would, ipso facto, be a subset of the board's reasons.
You mean to tell me that the 3-member board told Sutskever that Sama was being bad and he was like "ok, I believe you"?
1. He's the actual ringleader behind the coup. He got everyone on board, provided reassurances, and personally orchestrated and executed the firing. The most likely possibility, and the one that's most consistent with all the reporting and evidence so far (including this article).
2. Others on the board (e.g. Adam) masterminded the coup and saw Ilya as a fellow-traveler useful idiot who could be deceived into voting against Sam and destroying the company he and his 700 colleagues worked so hard to build. They then also puppeteered Ilya into doing the actual firing over Google Meet.
Isn't the solution to just pipe ChatGPT into a meta-reinforcement-learning framework that gradually learns how to prompt ChatGPT into writing the source-code for a true AGI? What do we even need AI ethicists for anyway? /s
2) Where is the board? At a bare minimum, issue a public statement that you have full faith in the new CEO and the leadership team, are taking decisive action to stabilize the situation, and have a plan to move the company forward once stabilized.
The only thing I've read about Shear is that he favors slowing AI development and subscribes to Yudkowsky's doomer worldview on AI. That might not be a pill the company is ready to swallow.
https://x.com/drtechlash/status/1726507930026139651
> I specifically say I’m in favor of slowing down, which is sort of like pausing except it’s slowing down.
> If we’re at a speed of 10 right now, a pause is reducing to 0. I think we should aim for a 1-2 instead.
> - Emmett Shear Sept 16, 2023
https://x.com/eshear/status/1703178063306203397
No explanation beyond "he tried to give two people the same project"
the "Killing the company would be consistent with the companies mission" line in the boards statement
Adam having a huge conflict of interest
Emmett wanting to go from a "10" to a "1-2"
I'm either way off, or I've had too much internet for the weekend.
Even worse, if we don't have near constant updates, we might realize this is not all that important in the end and move on to other news items!
I know, I know, I shouldn't jest when this could have grave consequences like changing which uri your api endpoint is pointing to.
For the record, I don't think it's true. I think it was a power play, and a failed coup at that. But it's about as substantiated as the "serious" hypotheses being mooted in the media. And it's more fun.
A statement from the CEO/the board is a standard de-escalation.
If they had openly given literally any imaginable reason to fire Sam Altman, the ratio of employees threatening to quit wouldn't be as high as 95% right now.
Uh, or investors and customers will? Yes, people are going to speculate, as you point out, which is not good.
> we might realize this is not all that important in the end and move on to other news items!
It's important to some of us.
News
Company which does research and doesn't care about money makes a decision to do something which aligns with research and not caring about money.
From the OpenAI website...
"it may be difficult to know what role money will play in a post-AGI world"
Big tech co makes a move which sends its stock to an all time high. Creates research team.
Seems like there could be a "The Martian" meme here... we're going to Twitter the sh* out of this.
I think he is correct. Being the CEO of Twitch is a position known by no one in many places - e.g., how many developers or users in China have even heard of Twitch? Being the CEO of OpenAI is a completely different story; it is a whole new level he can leverage in the years to come.
However, the OpenAI board has no such obligation. Their duty is to ensure that the human race stays safe from AI. They've done their best to do that ;-)
https://www.youtube.com/watch?v=jZ2xw_1_KHY
Giving different opinions on the same person is a reason to fire a CEO?
This board has no reason to fire Sam, or does not want to give the actual reason. They messed up.
[0] https://news.ycombinator.com/reply?id=38357843
[1] https://twitter.com/geoffreyirving/status/172675427761849141...
The "Sam is actually a psychopath that has managed to swindle his way into everyone liking him, and Ilya has grave ethical concerns about that kind of person leading a company seeking AGI, but he can't out him publicly because so many people are hypnotized by him" theory is definitely a new, interesting one; there has been literally no moment in the past three days where I could have predicted the next turn this would take.
As an extra sanity check, they had two teams working in isolation interpreting this data and constructing the image. If the end result was more or less the same, it’s a good check that it was correct.
So yes, it’s absolutely a valid strategy.
https://en.wikipedia.org/wiki/His_Master%27s_Voice_(novel)
I get the feeling Ilya might be a bit naive about how people work, and may have been taken advantage of (by for example spinning this as a safety issue when it's just a good old fashioned power struggle)
1. stick with DOS
2. go with OS/2
3. go with Windows
Lotus chose (2). But the market went with (3), and Lotus was destroyed by Excel. Lotus was a wealthy company at the time. I would have created three groups, and done all three options.
Under them - an organization in partnership with Microsoft, together filled with exceptional software engineers and scientists - experts in their field. All under management by kindergarteners.
I wonder if this is what the staff are thinking right now. It must feel awful if they are.
Teams of people at Google work on the same features, only to find out near launch that they lost to another team who had been working on the same thing without their knowledge.
If the case is that the will of the board is not being fulfilled, then the reasoning is simple. The CEO was told to do something and he has not done it. So, he is ousted. Plain and simple.
This talk about projects given to two teams and what not is nonsense. The board should care if its work is done, not how the work is done. That is the job of the CEO.
Have these people never worked at any other company before? Probably every company with more than 10 employees does something like this.
Half the board has not had a real job ever. I’m serious.
Shocking. Simply shocking.
"After six months, they realised our entire floor was duplicating the work of the one upstairs".
(Especially if they aren't made aware of each other until the end.)
A hypothetical example: would you agree it would be an appropriate thing to do if the second project was alignment-related, and Sam lied to or misled Ilya about the existence of the second team because he believed Ilya was over-aligning their AIs and reducing their functionality?
It's easy to view the board's lack of candor as "they're hiding a really bad, unprofessional decision", which is probable at this point. You could also conclude that they made an initial miscalculation in communication and are now being overtly and extremely careful in everything they say, because the company is leaking like a sieve and they don't want to get into a game of mudslinging with Sam.
Yet you're only willing to give this to one side and not the other? Seems reasonable... Especially despite all the evidence so far that the board is either completely incompetent or had ulterior motives.
Still too much in the dark to judge.
And the other guy is the founder of Quora and Poe.
It is a breach of contract if it violated his employment contract, but I don't have a copy of his contract. It is wrongful termination if it was for an illegal reason, but there doesn't seem to be any suggestion of that.
> same for MS
I doubt very much that the contract with Microsoft limits OpenAI's right to manage their own personnel, so probably not.
Wrongful termination only applies when someone is fired for illegal reasons, like racial discrimination, or retaliation, for example.
I mean I’m sure they can all sue each other for all kinds of reasons, but firing someone without a good reason isn’t really one of them.
Obviously, it's for a reason they can't say. Which means, there is something bad going on at the company, like perhaps they are short of cash or something, that was dire enough to convince them to fire the CEO, but which they cannot talk about.
Imagine if the board of a bank fired their CEO because he had allowed the capital to get way too low. They wouldn't be able to say that was why he was fired, because it would wreck any chance of recovery. But, they have to say something.
So, Altman didn't tell the board...something, that they cannot tell us, either. Draw your own conclusions.
Ilya backtracking puts a wrench in this wild speculation, so like everyone else, I’m left thinking “????????”.
Whatever the reason is, it is very clearly a personal/political problem with Sam, not the critical issue they tried to imply it was.
Aside from the fact that they didn't fire him as President and said he was staying on in the press release that went out without any consultation, I've seen no suggestion of any effort to keep him at the company.
Yes and it's perfectly obvious that he did this without the consent of the board and behind their backs. A bit absurd don't you think? How would that even work?
> will be built by AltmanS
Why are you so certain most other people on the OpenAI board or their upper management are that different? Or hold very different views?
And if it was something concrete, Ilya would likely still be defending the firing, not regretting it.
It seems like a simple power struggle where the board and employees were misaligned.
Not the strongest opening line I've seen.
When you have such a massive conflict of interest and zero facts to go on - just sit down.
also - "people I respect, in particular Helen Toner and Ilya Sutskever, so I feel compelled to say a few things."
Toner clearly has no real moral authority here, but yes, Ilya absolutely did and I argued that if he wanted to incinerate OpenAI, it was probably his right to, though he should at least just offload everything to MSFT instead.
But as we all know - Ilya did a 180 (surprised the heck out of me).