LarsDu88 · 2 years ago
There has to be a bigger story to this.

Altman took a non-profit and vacuumed up a bunch of donor money only to flip OpenAI into the hottest TC-style startup in the world. Then he put the gas pedal down on commercialization. It takes a certain type of politicking and deception to make something like that happen.

Then in the past week, he's going and taking money from the Saudis on the order of billions of dollars to make AI accelerators, even though the single greatest threat from strong AI (according to Hinton) is rich and powerful people using the technology to enhance their power over society.

Combine that with a totally inexperienced board, and D'Angelo's maneuvering, and you have the single greatest shitshow in tech history

hooande · 2 years ago
Alternative theory: ChatGPT was a runaway hit product that sucked up a lot of the organization's resources and energy. Sam and Greg wanted to roll with it and others on the board did not. They voted on it and one side won.

There isn't a bigger, more interesting story here. This is in fact a very common story that plays out at many software companies. The board of OpenAI ended up making a decision that destroyed billions of dollars worth of brand value and goodwill. That's all there is to it.

rtpg · 2 years ago
The "lying" line in the original announcement feels like where the good gossip is. The general idea of "Altman was signing a bunch of business deals without board approval, was told to stop by the board, he said he would, then proceeded to not stop and continue the behavior"... that feels like the juicy bit (if that is in fact what was happening, I know nothing).

This is all court intrigue of course, but why else are we in the comments section of an article talking about the internals of this thing? We love the drama, don't we?

rightbyte · 2 years ago
> a decision that destroyed billions of dollars worth of brand value and goodwill

I mean, there seems to be this cult following around Sam Altman on HN and Twitter. But does the common user care at all?

What sane user would want a shitcoin CEO in charge of a product they depend on?

wruza · 2 years ago
> The board of OpenAI ended up making a decision that destroyed billions of dollars worth of brand value and goodwill

Maybe I’m special or something, but nothing changed for me. I always wonder why people suddenly lose “trust” in a brand, as if it were some concrete thing built of internal relationships. Everyone knows that “corporate” is probably a snakepit. When it comes out to the public, it’s not a sign of anything; it just came out. Assuming there was nothing like that in all the brands you love is living with your eyes closed and ears cupped. There’s no “trust” in this specific sense, because corporate and ideological conflicts happen all the time. All OAI promises are still there, afaiu. No mission statements were changed. Except Sam trying to ignore these, also afaiu. Not saying the board is politically wise, but they drove the thing all this time and that’s all that matters. Personally I’m happy they aren’t looking like political snakes (at least that is my ignorant impression from the three days I’ve known their names).

tsimionescu · 2 years ago
Except that the new CEO has explicitly stated he and the board are very much still interested in commercialization. Plus, if the board only had this simple kind of disagreement, they had no reason to also accuse Sam of dishonesty and bring about this huge scandal.

Granted, it's also possible the reasons are as you state and they were simply that incompetent at managing PR.

austhrow743 · 2 years ago
Straightforward disagreement over the direction of the company doesn't generally lead to claiming wrongdoing on the part of the ousted. Even low-level to medium wrongdoing on the part of the ousted rarely does.

So even if it's just "why did they insult Sam while kicking him out?" there is definitely a bigger, more interesting story here than standard board disagreement over direction of the company.

127 · 2 years ago
>goodwill

Microsoft and the investors knew they were "investing" in a non-profit. Let's not try to weasel-word our way out of that fact.

trhway · 2 years ago
>Alternative theory: ChatGPT was a runaway hit product that sucked up a lot of the organization's resources and energy. Sam and Greg wanted to roll with it and others on the board did not.

The article below basically says the same. It kind of reminds me of Friendster and the like - striking a gold vein and just failing to scale efficient mining of that gold, i.e. the failure is in the execution/operationalization:

https://www.theatlantic.com/technology/archive/2023/11/sam-a...

sumitkumar · 2 years ago
Usually what happens in fast growing companies is that the high energy founders/employees drive out the low energy counterparts when the pace needs to go up. In OpenAI Sam and team did not do that and surprisingly the reverse happened.
aerhardt · 2 years ago
Surely the API products are the runaway products, unless you are conflating the two. I think their economics are much more promising.
LastTrain · 2 years ago
Yep. I think you've explained the origins of most decisions, bad and good - they are reactionary.
throwaway4aday · 2 years ago
The more likely explanation is that D'Angelo has a massive conflict of interest with him being CEO of Quora, a business rapidly being replaced by ChatGPT and which has a competing product "creator monetization with Poe" (catchy name, I know) that just got nuked by OpenAI's GPTs announcement at dev day.

https://quorablog.quora.com/Introducing-creator-monetization...

https://techcrunch.com/2023/10/31/quoras-poe-introduces-an-a...

curiousllama · 2 years ago
A (potential, unstated) motivation for one board member doesn't explain the full moves of the board, though.

Maybe it's a factor, but it's insufficient

LMYahooTFY · 2 years ago
>Altman took a non-profit and vacuumed up a bunch of donor money only to flip OpenAI into the hottest TC-style startup in the world. Then he put the gas pedal down on commercialization. It takes a certain type of politicking and deception to make something like that happen.

What exactly is the problem here? Is a non-profit expected to exclusively help impoverished communities or something? What type of politicking and deception is involved in creating a for-profit subsidiary which is granted a license to OpenAI's research in order to generate wealth? The entire purpose of this legal structure is to keep non-profit owners focused on their mission rather than shareholder value, which in this case is attempting to ethically create an AGI.

Edit: to add that this framework was not invented by Sam Altman, nor OpenAI.

>Then in the past week, he's going and taking money from the Saudis on the order of billions of dollars to make AI accelerators, even though the single greatest threat from strong AI (according to Hinton) is rich and powerful people using the technology to enhance their power over society.

Thus the legal structure I described, although this argument is entirely theoretical and assumes such a thing can actually be guarded that well at all, or that model performance and compute will remain correlated.

nmfisher · 2 years ago
> Is a non-profit expected to exclusively help impoverished communities or something? What type of politicking and deception is involved in creating a for-profit subsidiary which is granted a license to OpenAI's research in order to generate wealth?

OpenAI was literally founded on the promise of keeping AGI out of the hands of “big tech companies”.

The first thing that Sam Altman did when he took over was give Microsoft the keys to the kingdom, and even more absurdly, he is now working for Microsoft on the same thing. That’s without even mentioning the creepy Worldcoin company.

Money and status are the clear motivations here, OpenAI charter be damned.

jelling · 2 years ago
> What exactly is the problem here? Is a non-profit expected to exclusively help impoverished communities or something?

Yes. Yes and more yes.

That is why, at least in the U.S., we have given non-profits exemptions from taxation. Because they are supposed to be improving society, not profiting from it.

xinayder · 2 years ago
The way I see it, besides the problems others have listed, OpenAI seems like it was built on top of the work of others who were researching AI, and it suddenly took all this "free work" from the contributors and sold it for a profit, where the original contributors didn't even see a single dime from their work.

To me it seems like it's the usual case of a company exploiting open source and profiting off others' contributions.

ascv · 2 years ago
It seemed to me the entire point of the legal structure was to raise private capital. It's a lot easier to cut a check when you might get up to 100x your principal versus just a tax write off. This culminated in the MS deal: lots of money and lots of hardware to train their models.
Sunhold · 2 years ago
I would rather OpenAI have a diverse base of income from commercialization of its products than depend on "donations" from a couple ultrarich individuals or corporations. GPT-4 cost $100 million+ to train. That money needs to come from somewhere.
jimmySixDOF · 2 years ago
Then there is the inference cost, said to be as high as $0.30 per question asked, based on compute infrastructure costs.
kmlevitt · 2 years ago
People keep speculating sensational, justifiable reasons to fire Altman. But if these were actual factors in their decision, why doesn't the board just say so?

Until they say otherwise, I am going to take them at their word that it was because he a) assigned two people the same project, and b) gave two board members different accounts of the same employee. It's not my job nor the internet's to think up better-sounding reasons on their behalf.

codeulike · 2 years ago
For what it's worth, here's a thread from someone who used to work with Sam who says they found him deceptive and manipulative

https://twitter.com/geoffreyirving/status/172675427022402397...

I have no details of OpenAI's Board’s reasons for firing Sam, and I am conflicted (lead of Scalable Alignment at Google DeepMind). But there is a large, very loud pile on vs. people I respect, in particular Helen Toner and Ilya Sutskever, so I feel compelled to say a few things.

...

Third, my prior is strongly against Sam after working for him for two years at OpenAI:

1. He was always nice to me.

2. He lied to me on various occasions

3. He was deceptive, manipulative, and worse to others, including my close friends (again, only nice to me, for reasons)

Wronnay · 2 years ago
The issue with these two explanations from the board is that this is normally nothing that would result in firing the CEO.

In my eyes these two explanations are simple errors which can happen to anybody, and in a normal situation you would talk about these issues and resolve them in 5 minutes without firing anybody.

zztop44 · 2 years ago
I agree, and what’s more I think the stated reasons make sense if (a) the person/people impacted by these behaviours had sway with the board, and (b) it was a pattern of behaviour that everyone was already pissed off about.

If board relations have been acrimonious and adversarial for months, and things are just getting worse, then I can imagine someone powerful bringing evidence of (yet another instance of) bad/unscrupulous/disrespectful behavior to the board, and a critical mass of the board feeling they’ve reached a “now or never” breaking point and making a quick decision to get it over with and wear the consequence.

Of course, it seems that they have miscalculated the consequences and botched the execution. Although we’ll have to see how it pans out.

I’m speculating like everyone else. But knowing how board relations can be, it’s one scenario that fits the evidence we do have and doesn’t require anyone involved to be anything other than human.

wordpad25 · 2 years ago
>People keep speculating

Your take isn't uncommon, only you are missing the main point of your interpretation - that the board is fully incompetent if it was truly that petty of a reason to ruin the company.

It's not even that it's not a justifiable reason; they did it without getting legal advice or consulting with partners, and didn't even wait for markets to close.

The board destroyed billions in brand and talent value for OpenAI and Microsoft in a mid-day decision like that.

This is also on Sam Altman himself for building and then entertaining such an incompetent board.

Guthur · 2 years ago
If you don't think the likes of Sam Altman, Eric Schmidt, Bill Gates and the lot of them want to increase their own power, you need to think again. At best these individuals are just out to enrich themselves, but many of them demonstrate a desire to affect the prevailing politics, so I don't see how they are different, just more subtle about it.

Why worry about the Sauds when you've got your own home grown power hungry individuals.

achenet · 2 years ago
because our home grown power hungry individuals are more likely to be okay with things like women dressing how they want, homosexuality, religious freedom, drinking alcohol, having dogs and other decadent western behaviors which we've grown very attached to
PeterStuer · 2 years ago
What is interesting is the total absence of 3 letter agency mentions from all of the talk and speculation about this.
smolder · 2 years ago
I don't think that's true. I've seen at least one other person bring up the CIA in all the "theorycrafting" about this incident. If there's a mystery on HN, likelihood is high of someone bringing up intelligence agencies. By their nature they're paranoia-inducing and attract speculation, especially for this sort of community. With my own conspiracy theorist hat on, I could see making deals with the Saudis regarding cutting edge AI tech potentially being a realpolitik issue they'd care about.
mcmcmc · 2 years ago
This feels like a lot of very one-sided PR moves from the side with significantly more money to spend on that kind of thing.
VectorLock · 2 years ago
It feels like Altman started the whole non-profit thing so he could attract top researchers with altruistic sentiment for sub-FAANG wages. So the whole "Altman wasn't candid" thing seems to track.
mcpackieh · 2 years ago
Reminds me of a certain rocket company that specializes in launching large satellite constellations that attracts top talent with altruistic sentiment about saving humanity from extinction.
saagarjha · 2 years ago
Ok, but the wages were excellent (assuming that the equity panned out, which it seemed very likely it would until last week).
dariosalvi78 · 2 years ago
> you have the single greatest shitshow in tech history

the second greatest, after Musk taking over Twitter

achenet · 2 years ago
We live in interesting times ^_^
bryanrasmussen · 2 years ago
>Combine that with a totally inexperienced board, and D'Angelo's maneuvering, and you have the single greatest shitshow in tech history

Do we have a ranking of shitshows in tech history, though? How does this really compare to Jobs' ouster at Apple?

Or to Cambridge Analytica and Facebook's "we must do better" greatest hits?

LZ_Khan · 2 years ago
Taking money from the Saudis alone should raise a big red flag.
roschdal · 2 years ago
> the single greatest threat from strong AI (according to Hinton) is rich and powerful people using the technology to enhance their power over society.

This!

P_I_Staker · 2 years ago
> rich and powerful people using the technology to enhance their power over society.

We don't know the end result of this. It might not be in the interest of the powerful. What if everyone is out of a job? That might not be such a great concept for the powers that be, especially if everyone is destitute.

Not saying it's going down that way, but it's worth considering. What if the powers that be are worried about people getting out of line and retard the progress of AI?

blackoil · 2 years ago
> money from the Saudis on the order of billions of dollars to make AI accelerators

Was this for OpenAI or an independent venture? If OpenAI, then it's a red flag, but if an independent venture, it seems like a non-issue. There is demand for AI accelerators, and he wants to enter that business. Unless he is using OpenAI money to buy inferior products, or OpenAI wants to work on something competing, there is no conflict of interest and the OpenAI board shouldn't care.

AtlasBarfed · 2 years ago
At some point this is probably about a closed source "fork" grab. Of course that's what practically the whole company is probably planning.

The best thing about AI startups is that there is no real "code". It's just a bunch of arbitrary weights, and it can probably be obfuscated very easily such that any court case will just look like gibberish. After all, that's kind of the problem with AI "code". It gives a number after a bunch of regression training, and there's no "debugging" the answer.

Of course this is about the money, one way or another.

cdogl · 2 years ago
> Then in the past week, he's going and taking money from the Saudis on the order of billions of dollars to make AI accelerators, even though the single greatest threat from strong AI (according to Hinton) is rich and powerful people using the technology to enhance their power over society.

This prediction predated any of the technology to create even a rudimentary LLM and could be said of more-or-less any transformative technological development in human history. Famously, Marxism makes this very argument about the impact of the industrial revolution and the rise of capital.

Geoffrey Hinton appears to be an eminent cognitive psychologist and computer scientist. I'm sure he has a level of expertise I can't begin to grasp in his field, but he's no sociologist or historian (edit: nor economist). Very few of us are in a position to make predictions about the future - least of all in an area where we don't even fully understand how the _current_ technology works.

adrianN · 2 years ago
Was Marx wrong?
dgellow · 2 years ago
If I understood correctly Altman was CEO of the for-profit OpenAI, not the non-profit. The structure is pretty complicated: https://openai.com/our-structure
tomhallett · 2 years ago
I’m curious: suppose one of the board members “knows” the only way for OpenAI to be truly successful is for it to be a non-profit and “don’t be evil” (Google’s mantra) - that if they set expectations correctly and put caps on the for-profit side, it could be successful. But they didn’t fully appreciate how strong the market forces would be, where all of the focus/attention/press would go to the for-profit side. Sam’s side has such an intrinsic gravity that it’s inevitable it will break out of its cage.

Note: I’m not making a moral claim one way or the other, and I do agree that most tech companies will grow to a size/power/monopoly where their incentives deviate from the “common good”. Are there examples of OpenAI’s structure working correctly at other companies?

detourdog · 2 years ago
To me this is the ultimate Silicon Valley bike shedding incident.

Nobody can really explain the argument, there are "billions" or "trillions" of dollars involved, most likely the whole thing will not change the technical path of the world.

blackoil · 2 years ago
> There has to be a bigger story to this.

Rather than assuming the board made a sound decision, it could simply be that the board acted stupid and egoistic. Unless they can give better reasons, that is the logical inference.

xinayder · 2 years ago
So they actually kicked him out because he transformed a non-profit into a money printing machine?
whelp_24 · 2 years ago
You say that like it's a bad thing for them to do? You wouldn't donate to the Coca-Cola company.
osrec · 2 years ago
What does TC style mean?
sevagh · 2 years ago
Total Compensation
smiley1437 · 2 years ago
TechCrunch
k12sosse · 2 years ago
MBS? Seriously? How badly do you need the money... good luck not getting hacked to pieces when your AI insults his holiness.
curiousgal · 2 years ago
> taking money from the Saudis on the order of billions of dollars to make AI accelerators, even though the single greatest threat from strong AI (according to Hinton) is rich and powerful people using the technology to enhance their power over society.

This is absolutely peak irony!

The US pouring trillions into its army and close to nothing into its society (infrastructure, healthcare, education...): crickets.

Some country funding AI accelerators: THEY ARE A THREAT TO HUMANITY!

I am not defending Saudi Arabia, but the double standards and outright hypocrisy are just laughable.

0xDEADFED5 · 2 years ago
it's okay to give an example of something bad without being required to list all the other things in the universe that are also bad.
xinayder · 2 years ago
The difference is that the US Army wasn't created with the intent to "keep guns from the hands of criminals" and we all know it's a bad actor.

OpenAI, on the other hand...

zw123456 · 2 years ago
100% agree. I've seen this type of thing up close (much smaller potatoes, but the same type of thing), and whatever is getting aired publicly is most likely not the real story. Not sure if the reasons you guessed are it or not; we probably won't know for a while, but your guesses are as good as mine.
kmlevitt · 2 years ago
Neither of these reasons has anything to do with a lofty ideology regarding the safety of AGI or OpenAI’s nonprofit status. Rather, it seems they were micromanaging personnel decisions.

Also notice that Ilya Sutskever is presenting the reasons for the firing as just something he was told. This is important, because people were siding with the board under the understanding that this firing was led by the head research scientist who is concerned about AGI. But now it looks like the board is represented by D’Angelo, a guy who has his own AI chatbot company and a bigger conflict of interest than ever since dev day, when OpenAI launched highly similar features.

1024core · 2 years ago
> But now it looks like the board is represented by D’Angelo, a guy who has his own AI chatbot company and a bigger conflict of interest than ever since dev day, when OpenAI launched highly similar features.

Could this be the explanation? That D'Angelo didn't like how OpenAI was eating his lunch and wanted Sam out? Occam's razor and all that.

kmlevitt · 2 years ago
Right now I think that’s the most plausible explanation simply because none of the other explanations that have been floating around make any sense when you consider all the facts. We know enough now to know that the “safety-focused nonprofit entity versus reckless profit entity“ narrative doesn’t hold up.

And if it’s wrong, D’Angelo and the rest of the board could help themselves out by explaining the real reason in detail and ending all this speculation. This gossip is going to continue for as long as they stay silent.

insanitybit · 2 years ago
It seems extremely short sighted for the rest of the board to go along with that.
behnamoh · 2 years ago
> Could this be the explanation? That D'Angelo didn't like how OpenAI was eating his lunch and wanted Sam out? Occam's razor and all that.

If that were the case, can't he get sued by the Alliance (Sam, Greg, the rest)? If he has a conflict of interest, then his decisions as a member of the board would be invalid, right?

Zolde · 2 years ago
I find this implausible, though it may have played a motivating role.

Quora was always supposed to be an AI/NLP company, starting by gathering answers from experts for its training data. In a sense, that is level 0 human-in-the-loop AGI. ChatGPT itself is level 1: Emergent AGI, so was already eating Quora's lunch (whatever was left of it after they turned into a platform for self-promotion and log-in walls). There either always was a conflict of interest, or there never was.

GPTs seemed to have been Sam's pet project for a while now, Tweeting in February: "writing a really great prompt for a chatbot persona is an amazingly high-leverage skill and an early example of programming in a little bit of natural language". A lot of early jailbreaks like DAN focused on "summoning" certain personas, and ideas must have been floated internally on how to take back control over that narrative.

Microsoft took their latest technology and gave us Sydney "I've been a good bot and I know where you live" Bing: A complete AI safety, integrity, and PR disaster. Not the best of track record by Microsoft, who now is shown to have behind-the-scenes power over the non-profit research organization that was supposed to be OpenAI.

There is another schism than AI safety vs. AI acceleration: whether to merge with machines or not. In 2017, Sam predicted this merge to fully start around 2025, having already started with algorithms dictating what we see and read. Sam seems to be in the transhumanism camp, where others focus more on keeping control or granting full autonomy:

> The merge can take a lot of forms: We could plug electrodes into our brains, or we could all just become really close friends with a chatbot. But I think a merge is probably our best-case scenario. If two different species both want the same thing and only one can have it—in this case, to be the dominant species on the planet and beyond—they are going to have conflict. We should all want one team where all members care about the well-being of everyone else.

> Although the merge has already begun, it’s going to get a lot weirder. We will be the first species ever to design our own descendants. My guess is that we can either be the biological bootloader for digital intelligence and then fade into an evolutionary tree branch, or we can figure out what a successful merge looks like. https://blog.samaltman.com/the-merge

So you have a very powerful individual, with a clear product mindset, courting Microsoft, turning Dev day into a consumer spectacle, first in line to merge with superintelligence, lying to the board, and driving wedges between employees. Ilya is annoyed by Sam talking about existential risks or lying AGIs, when that is his thing. Ilya realizes his vote breaks the impasse, so does a lukewarm "I go along with the board, but have too much conflict of interest either way".

> Third, my prior is strongly against Sam after working for him for two years at OpenAI:

> 1. He was always nice to me.

> 2. He lied to me on various occasions

> 3. He was deceptive, manipulative, and worse to others, including my close friends (again, only nice to me, for reasons)

One strategy that helped me make sense of things without falling into tribalism or siding through ideology-match is to consider that both sides are unpleasant snakes. You don't get to be the king of cannibal island without high-level scheming. You don't get to destroy an 80-billion-dollar company and let visa-holders soak in uncertainty without some ideological defect. That seems simpler than a clearcut "good vs. evil" battle, since this weekend was anything but clear.

seanhunter · 2 years ago
What’s interesting to me is that someone looked at Quora and thought “I want the guy behind that on my board”.
shandor · 2 years ago
I’m confused how the board is still keeping 100% radio silence. Where I’m from, with a shitstorm this big raging and the board doing nothing, they might very easily be held personally responsible, facing all kinds of utterly nasty legal action.

Is it just different because they’re a nonprofit? Or how on earth does the board think they can get away with this anymore?

rjzzleep · 2 years ago
This isn't unlike the radio silence Brendan Eich kept when the Mozilla sh* hit the fan. This is, in my opinion, the outcome of really technical and scientific people being given decades of advice not to talk to the public.

I have seen this play out many times in different locations for different people. A lot of technical folks like myself were given the advice that actions speak louder than words.

I was once scouted by a Silicon Valley Selenium browser-testing company. I migrated their cloud offering from VMware to KVM, which depended on code I wrote, and then defied my middle manager by improving their entire infrastructure's performance by 40%. My instinct was to communicate this to the leadership, but I was advised not to skip my middle manager.

The next time I went to the office I got a severance package, and later found out that 2 hours later, during the all-hands, they presented my work as their own. The middle manager went on to become the CTO of several companies.

I doubt we will ever find out what really happened or at least not in the next 5-10 years. OpenAI let Sam Altman be the public face of the company and got burned by it.

Personally I had no idea Ilya was the main guy in this company until the drama that happened. I also didn't know that Sam Altman was basically only there to bring in the cash. I assume that most people will actually never know that part of OpenAI.

lolinder · 2 years ago
What specific legal action could be pursued against them where you're from? Who would have a cause for action?

(I'm genuinely curious—in the US I'm not aware of any action that could be taken here by anyone besides possibly Sam Altman for libel.)

chucke1992 · 2 years ago
It is fascinating considering that D'Angelo has a history with coups (at Quora he did the same, didn't he?)
aravindgp · 2 years ago
Wow, this is significant: he did this to Charlie Cheever, the best guy at Facebook and Quora. He got Matt on board and fired Charlie without informing investors. The only difference is that this time a $100 billion company is at stake at OpenAI. The process is similar. This is going very wrong for Adam D'Angelo. With this, I hope the other board members get to the bottom of it, get Sam back, and vote D'Angelo off the board.

This is school-level immaturity.

Old story

https://www.businessinsider.com/the-sudden-mysterious-exit-o...

gorgoiler · 2 years ago
Remember Facebook Questions? While it lives on as lighthearted polls and quizzes, it was originally launched by D’Angelo when he was an FB employee. It was designed to compete with expert Q&A websites and was basically Quora v0.

When D’Angelo didn’t get any traction with it he jumped ship and launched his own competitor instead. Kind of a live wire imho.

https://en.wikipedia.org/wiki/List_of_Facebook_features#Face...

dwd · 2 years ago
Do we even have an idea of how the vote went?

Greg was not invited (losing Sam one vote), and Sam may have been asked to sit out the vote, so the remaining 3 had a majority. Ilya, who is at least on "Team Sam" now, may have voted no. Or he simply went along, thinking he could be next out the door at that point; we just don't know.

It's probably fair to say that not letting Greg know the board was getting together (and letting it proceed without him there) was unprofessional, and is where Ilya screwed up. It is also the point when Sam should have said hang on - I want Greg here before this proceeds any further.

havercosine · 2 years ago
Naive question. In my part of the world, board meetings for such consequential decisions can never be called on such short notice. A board meeting has to be called days ahead of time, and all the board members must be given a written agenda. They have to acknowledge in writing that they've received this agenda. If procedures such as these aren't followed, the firing cannot stand in a court of law. The number of days is configurable in the shareholders' agreement, but it is definitely not 1 day.

Do things work differently in America?

moberley · 2 years ago
I find it interesting that the attempted explanations, as unconvincing as they may be, relate to Altman specifically. Given that Brockman was the board chairperson, it is surprising that there don't seem to be any attempts to explain his demotion. Perhaps it's just not being reported to anyone outside, but it makes no sense to me that anyone would assume a person would stay after being removed from a board without an opportunity to be at the meeting to defend their position.
fastball · 2 years ago
I don't understand how you only need 4 people for quorum on a 6-person board.
lfclub · 2 years ago
It could be a more primal explanation. I think OpenAI doesn’t want to effectively be an R&D arm of Microsoft. The ChatGPT mobile app is unpolished and unrefined. There’s little to no product design there, so I totally see how it’s fair criticism to call out premature feature milling (especially when it’s clear it’s for Microsoft).

I’m imagining Sam being Microsoft’s Trojan horse, and that’s just not gonna fly.

If anyone tells me Sam is a master politician, I’d agree without knowing much about him. He’s a Microsoft plant that has the support of 90% of the OpenAI team. The two things are conflicts of interest. Masterful.

It’s a pretty fair question to ask a CEO: do you still believe in OpenAI’s vision, or do you now believe in Microsoft’s vision?

The girl she said not to worry about.

diordiderot · 2 years ago
> There’s little to no product design there

I consider this a feature.

aravindgp · 2 years ago
Exactly my point: why would D'Angelo want OpenAI to thrive when his own company's chatbot, Poe, wants to compete in the same space? It's a conflict of interest whichever way you look at it. He should resign from the board of OpenAI in the first place.

The main point is that Greg and Ilya can get to a 50% vote and convince Helen Toner to change her decision. It's all done then; it's 3 to 2 in a board of 5 people. Unless Greg's board membership is reinstated.

Now it increasingly looks like Sam will be heading back into the role of CEO of OpenAI.

anupamchugh · 2 years ago
There are lots of conflicts of interest beyond Adam and his Poe AI. Yes, he was building a commercial bot using OpenAI APIs, but Sam was apparently working on other side ventures too. And Sam was the person who invested in Quora during his YC tenure, and must have had a say in bringing him onboard. At this point, the spotlight is on most members of the nonprofit board.
015a · 2 years ago
So? Sam gave Worldcoin early access to OpenAI's proprietary technology. Should Sam step down (oh wait)?

LMYahooTFY · 2 years ago
Well, the appointment of a CEO who believes AGI is a threat to the universe is potentially one point in favor of AI safety philosophical differences.
AndyNemmity · 2 years ago
Wouldn't it make sense that Ilya Sutskever presented the reasons the board had for firing Sam Altman, which were not his own reasons?

My feeling is Ilya was upset about how Sam Altman was the face of OpenAI, and went along with the rest of the board for his own reasons.

That's often how this stuff works out. He wasn't particularly compelled by their reasons, but had his own which justified his decision in his mind.

aravindgp · 2 years ago
I think Ilya was naive and didn't see this coming, and it's good that he realised it quickly, announced it on Twitter, and made the right call to get Sam back.

Otherwise it was like an Ilya vs Sam showdown, and people were siding with Ilya for AGI and all. But behind the scenes this looks like a corporate power struggle and coup.

dragonwriter · 2 years ago
> Wouldn't it make sense that Ilya Sutskever presented the reasons the board had for firing Sam Altman, which were not his reasons.

Ilya was one of the board members that removed Sam, so his reasons would, ipso facto, be a subset of the board's reasons.

karmasimida · 2 years ago
He let emotion get the better of him, for sure.
arthur_sav · 2 years ago
> Also notice that Ilya Sutskever is presenting the reasons for the firing as just something he was told.

You mean to tell me that the 3-member board told Sutskever that Sama was being bad, and he was like "ok, I believe you"?

laurels-marts · 2 years ago
Two possibilities when it comes to Ilya:

1. He’s the actual ringleader behind the coup. He got everyone on board, provided reassurances, and personally orchestrated and executed the firing. The most likely possibility, and the one that’s most consistent with all the reporting and evidence so far (including this article).

2. Others on the board (e.g. Adam) masterminded the coup and saw Ilya as a fellow-traveler useful idiot who could be deceived into voting against Sam and destroying the company he and his 700 colleagues worked so hard to build. They then also puppeteered Ilya into doing the actual firing over Google Meet.

jacquesm · 2 years ago
Based on Ilya's tweets and his name on that letter (still surprised about that, I have never seen someone calling for their own resignation), that seems to be the story.
resource0x · 2 years ago
The failure to create anything resembling AGI can be easily explained away by concerns about the safety of AGI. This can be done in perpetuity. Google explains its AI failures along the same lines.
DaiPlusPlus · 2 years ago
> The failure to create anything resembling AGI can be easily explained away by concerns about the safety of AGI.

Isn't the solution to just pipe ChatGPT into a meta-reinforcement-learning framework that gradually learns how to prompt ChatGPT into writing the source-code for a true AGI? What do we even need AI ethicists for anyway? /s

JyB · 2 years ago
That's the only thing that makes sense with Ilya & Murati signing that letter.
anoy8888 · 2 years ago
This is the most likely scenario. Adam wants to destroy OpenAI so that his poop AI has a chance to survive

DebtDeflation · 2 years ago
1) Where is Emmett? He's the CEO now. It's his job to be the public face of the company. The company is in an existential crisis and there have been no public statements after his 1AM tweet.

2) Where is the board? At a bare minimum, issue a public statement that you have full faith in the new CEO and the leadership team, are taking decisive action to stabilize the situation, and have a plan to move the company forward once stabilized.

dmix · 2 years ago
Technically he's the interim CEO of a chaotic company, assigned only in the last 24 hours. I'd probably wait to get my bearings before walking in acting like I've got everything under control on the first day after a major upheaval.

The only thing I've read about Shear is that he is for slowing AI development and shares Yudkowsky's doomer worldview on AI. That might not be a pill the company is ready to swallow.

https://x.com/drtechlash/status/1726507930026139651

> I specifically say I’m in favor of slowing down, which is sort of like pausing except it’s slowing down.

> If we’re at a speed of 10 right now, a pause is reducing to 0. I think we should aim for a 1-2 instead.

> - Emmett Shear Sept 16, 2023

https://x.com/eshear/status/1703178063306203397

motoxpro · 2 years ago
The more I read into this story, the more I can't help but be a conspiracy theorist and say that it feels like the board's intent was to kill the company.

No explanation beyond "he tried to give two people the same project"

the "Killing the company would be consistent with the companies mission" line in the boards statement

Adam having a huge conflict of interest

Emmet wanting to go from a "10" to a "1-2"

I'm either way off, or I've had too much internet for the weekend.

concordDance · 2 years ago
Everyone involved here is a doomer by the strict definition ("misaligned AGI could kill us all and alignment is hard").
creer · 2 years ago
Another "thing" is, he has been named by a board which... [etc]. Being a bit cautious would be a minimum.
PheonixPharts · 2 years ago
Yes these people should all be doing more to feed internet drama! If they don't act soon, HN will have all sorts of wild opinions about what's going on and we can't have that!

Even worse, if we don't have near constant updates, we might realize this is not all that important in the end and move on to other news items!

I know, I know, I shouldn't jest when this could have grave consequences, like changing which URI your API endpoint is pointing to.

sackfield · 2 years ago
You can either act like a professional and control the messaging, or let others fill the vacuum with idle speculation. I'm quite frankly in shock at the level of responsibility displayed by people whose positions should demand high function.
JumpCrisscross · 2 years ago
My favorite hypothesis: Ilya et al suspected emergent AGI (e.g. saw the software doing things unprompted or dangerous and unexpected) and realized the Worldcoin shill is probably not the one you want calling the shots on it.

For the record, I don't think it's true. I think it was a power play, and a failed coup at that. But it's about as substantiated as the "serious" hypotheses being mooted in the media. And it's more fun.

minimaxir · 2 years ago
No serious company wants drama. Hopefully OpenAI is still a serious company.

A statement from the CEO/the board is a standard de-escalation.

ssnistfajen · 2 years ago
The speculation is rampant precisely because the board has said absolutely nothing since the leadership transition announcement on Friday.

If they had openly given literally any imaginable reason to fire Sam Altman, the ratio of employees threatening to quit wouldn't be as high as 95% right now.

insanitybit · 2 years ago
> HN will have all sorts of wild opinions about what's going on and we can't have that!

Uh, or investors and customers will? Yes, people are going to speculate, as you point out, which is not good.

> we might realize this is not all that important in the end and move on to other news items!

It's important to some of us.

gexla · 2 years ago
Thank you! I get the sense that none of this matters and it's all a massive distraction.

News

Company which does research and doesn't care about money makes a decision to do something which aligns with research and not caring about money.

From the OpenAI website...

"it may be difficult to know what role money will play in a post-AGI world"

Big tech co makes a move which sends its stock to an all time high. Creates research team.

Seems like there could be a "The Martian" meme here... we're going to Twitter the sh* out of this.

x0x0 · 2 years ago
Convincing two constituencies, employees and customers, that your company isn't just yolo-ing things like CEOs and so forth seems like a pretty good use of CEO time!
concordDance · 2 years ago
OpenAI becoming a Microsoft department is awful from an X risk point of view.
Andrex · 2 years ago
I cannot say whether you deserve the downvotes, but an alternative and grounded perspective is appreciated in this maelstrom of news, speculation and drama.
quickthrower2 · 2 years ago
They have customers and people deciding if they want to be customers.
kyleyeats · 2 years ago
This sarcastic post is the best understanding of public relations I've seen in an HN post.
upupupandaway · 2 years ago
I find it absolutely fascinating that Emmett accepted this position. He can game out all the scenarios, and there is no way he comes out ahead in any of them. One would expect an experienced Silicon Valley CEO to make this calculus and realize it's a lost cause. The fact that he accepted shows me he's not a particularly good leader.
tw1984 · 2 years ago
He made it pretty clear that he considers it a once-in-a-lifetime chance.

I think he is correct. Being the CEO of Twitch is a position known by no one in many places - e.g., how many developers/users in China have even heard of Twitch? Being the CEO of OpenAI is a completely different story; it is a whole new level he can leverage in the years to come.

bmitc · 2 years ago
That seems kind of silly to say. He's not a good leader because he's taking on a challenge?
c_s_guy · 2 years ago
If Emmett runs this the same way he ran Twitch, I'm not expecting much action from him.
starshadowx2 · 2 years ago
People kept asking where he was during his years as Twitch CEO; it's not unlike him to be MIA now either.
agitator · 2 years ago
As much as I'd love to hear the details of the drama as much as the next person, they really don't have to say anything publicly. We are all going to continue using the product. They don't have public investors. The only concern about perception they may have is if they intend to raise more money anytime soon.
eastern · 2 years ago
That's what a board of a for-profit company which has a fiduciary duty towards shareholders should do.

However, the OpenAI board has no such obligation. Their duty is to ensure that the human race stays safe from AI. They've done their best to do that ;-)

pushedx · 2 years ago
He has said more than he said during his entire 5 years at Twitch
arduanika · 2 years ago
Here he is! Blathering about AI doom 4 months ago, spitting Yudkowsky talking points:

https://www.youtube.com/watch?v=jZ2xw_1_KHY

highwayman47 · 2 years ago
Half the board lacks any technical skill, and the entire board lacks any business procedural skill. Ideally, you’d have a balance of each on a competent board.
eshack94 · 2 years ago
Ideally, you also have at least a couple independent board members who are seasoned business/tech veterans with the experience and maturity to prevent this sort of thing from happening in the first place.
markdown · 2 years ago
Why should he care about updating internet randoms? It's none of our business. The people who need to know what's going on know what's going on.
spullara · 2 years ago
He is trying to determine if they have already made an Alien God.
kumarvvr · 2 years ago
Giving 2 people the same project? Isn't this the thing to do to get differing approaches and then release the amalgamation of the two? I thought these sorts of things were common.

Giving different opinions on the same person is a reason to fire a CEO?

This board has no reason to fire Sam, or does not want to give the actual reason. They messed up.

hal009 · 2 years ago
As mentioned by another person in this thread [0], it is likely that it was Ilya's work that was getting replicated by another "secret" team, and the "different opinions on the same person" was Sam's opinions of Ilya. Perhaps Sam saw him as an unstable element and a single point of failure in the company, and wanted to make sure that OpenAI would be able to continue without Ilya?

[0] https://news.ycombinator.com/reply?id=38357843

JimDabell · 2 years ago
Since a lot of the board’s responsibilities are tied to capabilities of the platform, it’s possible that Altman asked for Ilya to determine the capabilities, didn’t like the answer, then got somebody else to give the “right” answer, which he presented to the board. A simple dual-track project shouldn’t be a problem, but this kind of thing would be seen as dishonesty by the board.
kmlevitt · 2 years ago
Firing Sam as a way of sticking up for Ilya would make more sense if Ilya wasn’t currently in support of Sam getting his job back.
015a · 2 years ago
This is an interesting theory when combined with this tweet from Google DeepMind's team lead of Scalable Alignment [1].

[1] https://twitter.com/geoffreyirving/status/172675427761849141...

The "Sam is actually a psychopath that has managed to swindle his way into everyone liking him, and Ilya has grave ethical concerns about that kind of person leading a company seeking AGI, but he can't out him publicly because so many people are hypnotized by him" theory is definitely a new, interesting one; there has been literally no moment in the past three days where I could have predicted the next turn this would take.

BillyTheKing · 2 years ago
Either that, or Sam didn't tell Adam D'Angelo that they were launching a competing product in exactly the same space where poe.ai had launched one. For some context, Poe had launched something similar to those custom GPTs, with creator revenue sharing etc., just 4 weeks prior to dev day.
stingraycharles · 2 years ago
I remember a few years ago when there was some research group that was able to take a picture of a black hole. It involved lots of complicated interpretation of data.

As an extra sanity check, they had two teams working in isolation interpreting this data and constructing the image. If the end result was more or less the same, it’s a good check that it was correct.

So yes, it’s absolutely a valid strategy.

msravi · 2 years ago
Did the teams know that there was another team working on the same thing? I wonder how that affects the work of both teams... On the other hand, not telling the teams would erode the trust the teams have in management.
campbel · 2 years ago
Yep! I've done eng "bake-offs" as well, where a few folks / teams work on a problem in isolation then we compare and contrast after. Good fun!
DonHopkins · 2 years ago
Maybe they needed two teams to independently try to decode an old tape of random numbers from a radio space telescope that turned out to be an extraterrestrial transmission, like a neutrino signal from the Canis Minor constellation or something. Happens all the time.

https://en.wikipedia.org/wiki/His_Master%27s_Voice_(novel)

danbmil · 2 years ago
The CEOs I've worked for have mostly been mini-DonaldTs, almost pathologically allergic to truth, logic, or consistency. Altman seems way over on the normal end of the scale for the CEO of a multi-billion dollar company. I'm sure he can knock two eggs together to make an omelette, but these piddling excuses for firing him don't pass the smell test.

I get the feeling Ilya might be a bit naive about how people work, and may have been taken advantage of (by, for example, spinning this as a safety issue when it's just a good old-fashioned power struggle).

danbmil · 2 years ago
As for multiple teams with overlapping goals - are you kidding me? That's a 100% legit and popular tactic. One CEO I worked with relished this approach and called it a "steel-cage death match"!
aravindgp · 2 years ago
You were right, Ilya was naive; he regrets his decision on Twitter. And he was taken advantage of by the power-hungry people behind it.
valine · 2 years ago
Steve Jobs famously had two iPhone teams working on concepts in parallel. It was click wheel vs multi-touch. Shockingly the click wheel iPhone lost.
brandall10 · 2 years ago
I thought the design team always worked up 3 working prototypes from a set of 10 foam mockups. There was an article some years back from someone with intimate knowledge of Ive's lab stating this was protocol for all Apple products.
mikepurvis · 2 years ago
Another element of that was the team that tried to adapt iPodOS for iPhone vs Forstall's team that adapted OSX.
throwawayapples · 2 years ago
and the Apple (II etc) vs Mac teams warring with each other.
fakedang · 2 years ago
Seriously? Click wheel iPhone lost shockingly? The click wheel on most laptops wears out so fast for me, and the chances of that happening on a smaller phone wheel is just so much higher.
WalterBright · 2 years ago
Back in the late 80s, Lotus faced a crisis with their spreadsheet, Lotus 1-2-3. Should they:

1. stick with DOS

2. go with OS/2

3. go with Windows

Lotus chose (2). But the market went with (3), and Lotus was destroyed by Excel. Lotus was a wealthy company at the time. I would have created three groups, and done all three options.

quickthrower2 · 2 years ago
Which would have been a tradeoff too: more time to market, fewer people on each project, slowed down by cross-platform code.
mvkel · 2 years ago
IBM was bankrolling all the development. They only had one choice.
dylan604 · 2 years ago
Apple had a skunkworks team keeping each new version of their OS compiling on x86 long before the switch. I wonder if the Lotus situation was an influence, or if ensuring your software can be made to work on different hardware is just an obvious play?
clnq · 2 years ago
Consider for a moment: this is what the board of one of the fastest growing companies in the world worries about - kindergarten level drama.

Under them - an organization in partnership with Microsoft, together filled with exceptional software engineers and scientists - experts in their field. All under management by kindergarteners.

I wonder if this is what the staff are thinking right now. It must feel awful if they are.

discordance · 2 years ago
Happens all the time.

Teams of people at Google work on the same features, only to find out near launch that they lost to another team who had been working on the same thing without their knowledge.

zebnyc · 2 years ago
How does that work? Do they have the same PM, the same requirements? Is it just different tech/architectures adopted by different teams? Fascinating.

fluidcruft · 2 years ago
I guess it depends on whether any of them actually got the assignment. One way to interpret it is that nobody is taking that assignment seriously. So depending on what that assignment is and how important that particular assignment is to the board, then it may in fact be a big deal.
kumarvvr · 2 years ago
Does a board give an assignment to the CEO or teams?

If the case is that the will of the board is not being fulfilled, then the reasoning is simple. The CEO was told to do something and he has not done it. So, he is ousted. Plain and simple.

This talk about projects given to two teams and what not is nonsense. The board should care if its work is done, not how the work is done. That is the job of the CEO.

hooloovoo_zoo · 2 years ago
Giving two groups of researchers the same problem is guaranteeing one team will scoop the other. Hard to divvy up credit after the fact.
ldjkfkdsjnv · 2 years ago
Also, when a project is vital to a company, you cannot just give it to one team. You need to de-risk.
m3kw9 · 2 years ago
How did they get 4 board members to fire him because he tried to A/B test a project?
quickthrower2 · 2 years ago
Was that verbatim the reason, or an angry person's characterisation?
samspenc · 2 years ago
> One explanation was that Altman was said to have given two people at OpenAI the same project.

Have these people never worked at any other company before? Probably every company with more than 10 employees does something like this.

whywhywhywhy · 2 years ago
>Have these people never worked at any other company before?

Half the board has not had a real job ever. I’m serious.

adastra22 · 2 years ago
And the one which does have a real job is a direct competitor with OpenAI.
GreedClarifies · 2 years ago
It is unbelievable TBH.

Shocking. Simply shocking.

squigz · 2 years ago
Could you please elaborate on what a 'real job' is in this context?
ben_w · 2 years ago
My dad interviewed someone who was applying for a job. Standard question, why did you leave the last place?

"After six months, they realised our entire floor was duplicating the work of the one upstairs".

qiqitori · 2 years ago
To me at least that's an _extremely_ rude thing to do. (Unless one person is asked to do it this way, the other one that way, so people can compare the outcome.)

(Especially if they aren't made aware of each other until the end.)

015a · 2 years ago
I think this needs to be viewed through the lens of the gravity of how the board reacted, giving them the benefit of the doubt that they acted appropriately and, at least with the information they had at the time, correctly.

A hypothetical example: Would you agree that it's an appropriate thing to do if the second project was Alignment-related, Sam lied or misled about the existence of the second team, to Ilya, because he believed that Ilya was over-aligning their AIs and reducing their functionality?

It's easy to view the board's lack of candor as "they're hiding a really bad, unprofessional decision", which is probable at this point. You could also view it with the conclusion that they made an initial miscalculation in communication, and are now overtly and extremely careful in everything they say because the company is leaking like a sieve and they don't want to get into a game of mudslinging with Sam.

qwytw · 2 years ago
> giving them the benefit of the doubt that they acted appropriately

Yet you're only willing to give this to one side and not the other? Seems reasonable... Especially despite all the evidence so far that the board is either completely incompetent or had ulterior motives.

croes · 2 years ago
Maybe it was not an ordinary project, or not ordinary people.

Still too much in the dark to judge.

bmitc · 2 years ago
In over 10 years of experience, I have never known this to happen.
spoonjim · 2 years ago
Actually, they haven’t. One is some policy analyst and the other is an actor’s wife.
lupire · 2 years ago
Tasha McCauley is an electrical engineer and founder of two tech companies, besides having a cute husband.

And the other guy is the founder of Quora and Poe.

gongagong · 2 years ago
wait so can't SA sue for wrongful termination if everything is as bogus as everyone is saying? same for MS
dragonwriter · 2 years ago
> wait so can't SA sue for wrongful termination if everything is as bogus as everyone is saying?

It is breach of contract if it violated his employment contract, but I don't have a copy of his contract. It is wrongful termination if it was for an illegal reason, but there doesn't seem to be any suggestion of that.

> same for MS

I doubt very much that the contract with Microsoft limits OpenAI's right to manage their own personnel, so probably not.

pacificmint · 2 years ago
Employment in California is ‘at will’, which means they can fire him without a reason.

Wrongful termination only applies when someone is fired for illegal reasons, like racial discrimination, or retaliation, for example.

I mean I’m sure they can all sue each other for all kinds of reasons, but firing someone without a good reason isn’t really one of them.

rossdavidh · 2 years ago
So, none of this sounds like it could be the real reason Altman was fired. This leaves people saying it was a "coup", which still doesn't really answer the question. Why did Altman get fired, really?

Obviously, it's for a reason they can't say. Which means, there is something bad going on at the company, like perhaps they are short of cash or something, that was dire enough to convince them to fire the CEO, but which they cannot talk about.

Imagine if the board of a bank fired their CEO because he had allowed the capital to get way too low. They wouldn't be able to say that was why he was fired, because it would wreck any chance of recovery. But, they have to say something.

So, Altman didn't tell the board...something, that they cannot tell us, either. Draw your own conclusions.

skygazer · 2 years ago
I think you may be hallucinating reasonable reasons to explain an inherently indefensible situation, patching up reality so it makes sense again. Sometimes people with puffed up egos are frustrated over trivial slights, and group think takes over, and nuking from orbit momentarily seems like a good idea. See, I’m doing it too, trying to rationalize. Usually when we’re stuck in an unsolvable loop like a SAT solver, we need to release one or more constraints. Maybe there was no good reason. Maybe there’s a bad reason — as in, the reasoning was faulty. They suffered Chernobyl level failure as a board of directors.
namocat · 2 years ago
This is what I suspect; that their silence is possibly not simply evidence of no underlying reason, but that the underlying reason is so sensitive that it cannot be revealed without doing further damage. Also the hastiness of it makes me suspect that whatever it was happened very recently (e.g. conversations or agreements made at APEC).

Ilya backtracking puts a wrench in this wild speculation, so like everyone else, I’m left thinking “????????”.

kmlevitt · 2 years ago
If it was anything all that bad, Ilya and Greg would’ve known about it, because one of them was chairman of the board and the other was a board member. And both of them want Sam rehired. You can’t even spin it that they are complicit in wrongdoing, because the board tried to keep Greg at the company and Ilya is still on the board now and previously supported them.

Whatever the reason is, it is very clearly a personal/political problem with Sam, not the critical issue they tried to imply it was.

dragonwriter · 2 years ago
> because the board tried to keep Greg at the company

Aside from the fact that they didn't fire him as President and said he was staying on in the press release that went out without any consultation, I've seen no suggestion of any effort to keep him at the company.

aiman3 · 2 years ago
I do believe what they said about Altman, that he "was not consistently candid in his communications with the board." Based on my understanding, Altman proved his dishonest behavior by what he did to OpenAI: turning the non-profit into a for-profit and the open-source model into a closed-source one. Even worse, people seem to have totally accepted this type of personality. The danger is not the AI itself; it is that the AI will be built by Altmans!
qwytw · 2 years ago
> dishonest behavior by what he did to OpenAI: turning the non-profit into a for-profit and

Yes and it's perfectly obvious that he did this without the consent of the board and behind their backs. A bit absurd don't you think? How would that even work?

> will be built by Altmans

Why are you so certain most other people on the OpenAI board or their upper management are that different? Or hold very different views?

vorticalbox · 2 years ago
OpenAI, Inc. is a non-profit, but its subsidiary OpenAI Global, LLC is for-profit.
awb · 2 years ago
The only thing akin to that would be an AI safety concern and the new CEO specifically said that wasn’t the issue.

And if it was something concrete, Ilya would likely still be defending the firing, not regretting it.

It seems like a simple power struggle where the board and employees were misaligned.

resolutebat · 2 years ago
Banks have strict cash reserve requirements that are externally audited. OpenAI does not, and more to the point, they're both swimming in money and could easily get more if they wanted. (At least until last week, that is.)
rossdavidh · 2 years ago
Rumor has it, they had been trying to get more, and failing. No audited records of that kind of thing, of course, so could be untrue. But Altman and others had publicly said that they were attempting to get Microsoft to invest more, and he was courting sovereign wealth funds for an AI (though non-OpenAI) chip related venture, and ChatGPT had a one-day partial outage due to "capacity" constraints, which is odd if your biggest backer is a cloud company. It all sounds like they are running short on money, long before they get to profitability. Which would have been fine up until about a year ago, because someone with Altman's profile could easily get new funding for a buzz-heavy project like ChatGPT. But times are different, now...
leoc · 2 years ago
Not specifically related to this latest twist, sorry, but DeepMind’s Geoffrey Irving trusts the board over Altman: https://x.com/geoffreyirving/status/1726754270224023971
jacquesm · 2 years ago
"I have no details of OpenAI's Board’s reasons for firing Sam"

Not the strongest opening line I've seen.

leoc · 2 years ago
I do have to point out that this is also true of nearly everyone else who’s expressed a strong opinion on the topic, and it didn’t stop any of them
blazespin · 2 years ago
Yeah, I can't imagine why DeepMind would possibly want to see OpenAI incinerated.

When you have such a massive conflict of interest and zero facts to go on - just sit down.

also - "people I respect, in particular Helen Toner and Ilya Sutskever, so I feel compelled to say a few things."

Toner clearly has no real moral authority here, but yes, Ilya absolutely did and I argued that if he wanted to incinerate OpenAI, it was probably his right to, though he should at least just offload everything to MSFT instead.

But as we all know - Ilya did a 180 (surprised the heck out of me).

Deleted Comment