Through all of this, no one has cogently explained why Altman leaving is such a big deal. Why would workers immediately quit their jobs when he has no other company, and does he even know who these workers are? Are these people that desperate to make a buck (or to chase the prospect of big bucks)? It seems like half of the people working at the non-profit were not actually concerned about the mission but rather just waiting out their turn for big bucks and fame.
What does Altman bring to the table besides raising money from foreign governments and states, apparently? I just do not understand all of this. Like, how does him leaving and getting replaced by another CEO the next week really change anything at the ground level other than distractions from the mission being gone?
And the outpouring of support for someone who was clearly not operating how he marketed himself publicly is strange and disturbing indeed.
I think he's not as known in the outside world but it's really difficult to understate the amount of social capital sama has in the inner circles of Silicon Valley. It sounds like he did a good job instilling loyalty as a CEO as well, but the SV thing means that the more connected someone at the company is to the SV ecosystem, the more likely they like him/want to be on his good side.
This is kind of like the leadership of the executive branch switching parties. You're not going to say "why would the staff immediately quit?" Especially since this is corporate America, and sama can have another "country" next week.
>I think he's not as known in the outside world but it's really difficult to understate the amount of social capital sama has in the inner circles of Silicon Valley.
This definitely sounds like someone the average person - including the average tech worker, exceptionally income-engorged as they may be - would want heading the "Manhattan Project but potentially for inconceivably sophisticated social/economic/mind control et al." project. /s
Based on Andrej Karpathy's comment on Twitter today, the board never explained any of this to the staff. So siding with Altman seems like a far better option since his return would mean a much higher likelihood of continuing business as usual.
If Ilya & co. want the staff to side with them, they have to give a reason first. It doesn't necessarily have to be convincing, but not giving a reason at all will never be convincing.
As much as @sama is not exactly "great" (Worldcoin is... ahem), the firing reeks of political strife, and anyone who has spent enough days at any office knows that what happens over the next year at OpenAI will be nothing but grandstanding by those "revolutionists" to stamp out any dissenting voice, and fertile ground for the opportunists to use the chaos to make things worse. Most employees' prime objective will be navigating the political shitstorm rather than doing their jobs. The chance OpenAI stays as it was before ChatGPT is little to none.
Better run for the lifeboat before the ship hits the iceberg.
Do you understand why he was fired? The company had a charter, one the board is to help uphold. Altman and his crew were leading the company, and seemingly its employees, away from that charter. He was not open about how he was doing that. The board fired him.
This is like a bunch of people joining a basketball team where the coach starts turning it into a soccer team, and then the GM fires the coach for doing this and everyone calls the GM crazy and stupid. If you want to play soccer, go play soccer!
If you want to make a ton of money in a startup moving fast, how about don't set up a non-profit company spouting a bunch of humanitarian shit? It's even worse, because Altman very clearly did all this intentionally by playing the "I care about humanity" card just long enough, while riding on the coattails of researchers, so he could start up side processes to use his new AI profile to make the big bucks. But now people want to make him a martyr simply because the board called his bluff. It's bewildering.
A CEO typically builds up a network of his people within the org and if he falls hard they are next on the chopping block. Same deal as with dictators.
"Dozens" sounds like about right amount for a large org.
Seems like the board wants to slow down progress, which pretty much means sitting there waiting for alignment instead of putting out the work you came for. Sam will let them keep working toward progress, I guess, plus offer a mountain of cash/equity for them.
The new CEO (Emmett, not Mira, who was CEO for two days I guess) has publicly stated on multiple occasions "we need to slow down from a 10 to a 1-2". Ilya is also in favor of dramatically "slowing down". That's who's left in this company, running it.
In the field of AI, right now, "slowing down" is like deciding to stop the car and walk the track by foot in the middle of a Formula 1 race. It's like going backwards.
Unless things change from the current status quo, OpenAI will be irrelevant in less than 2 years. And of course many will quit such a company and go work somewhere where the CEO wants to innovate, not slow down.
not to mention how incredibly arrogant it is to think that if you stop, all progress stops. you're in a race and you refuse to acknowledge that anybody else is even around
Well, many of the top researchers in the world seem keen for a slowdown, so I'm not sure you're right. You can't force people to work on things at a pace they're uncomfortable with.
A poorly planned, poorly executed firing of a CEO with such a high profile, and one so important to investors that the CEO of Microsoft is surprised, angry, and negotiating his return… is the kind of absolute chaos that I would like to avoid. I would definitely consider quitting in that circumstance.
I would think to myself, what if management ever had a small disagreement with me?
I quit a line cook job once in a very similar circumstance scaled down to a small restaurant. The inexperienced owners were making chaotic decisions and fired the chef and I quit the same day, not out of any kind of particular loyalty or anger, I just declined the chaos of the situation. Quitting before the chaos hurt me or my reputation by getting mixed up in it… to move on to other things.
More senior employees can easily know ~1000x more about the company than new employees. These employees are like lower branches on a tree, their knowledge crucially supporting many others. Key departures can sever entire branches.
yes, but although we can all be replaced in a company, some people are much harder to replace than others. so, i wouldn't say that the number is high, but maybe (and i only speculate) some of them are key people.
Was Sutskever really that instrumental to OpenAI's success if it was at all possible for him to be surprised at the direction the company was taking? It doesn't seem that he was that involved in the day-to-day operations.
>The reason I was a founding donor to OpenAI in 2015 was not because I was interested in AI, but because I believed in Sam. So I hope the board can get its act together and bring Sam and Greg back.
I guess other people joined for similar reasons.
As regards the 'strange and disturbing' support, personally I thought OpenAI was doing cool stuff and it was a shame to break it because of internal politics.
This is classic startup PR nonsense. They just fear change for obvious reasons. It doesn’t mean that they will leave if OpenAI can work without Altman.
I don't get it either. Who gives two shits about an SV bigwig whose playbook appears to have been: promote OpenAI, then immediately try to pull up the ladder and lock it with regulatory action.
Professionals tend to value their work in the very real sense of assigning a value to it. So I doubt it was desperation so much as having a sense of self-worth and a belief that the structure of OpenAI was largely a matter of word games the lawyers came up with.
As for Altman... I don't understand what's insignificant about raising money and resources from outside groups. Even if he wasn't working directly on the product itself, that role is still valuable in that it means he knows the amount of resources a project like this will require, while also commanding some familiarity with how to allocate them effectively. And on top of that, he seems to understand how to monetize the existing product a lot better than Ilya, who mostly came out of this looking like a giant hazard for anyone who isn't wearing rose-tinted sci-fi goggles.
TBH, my primary concern is this will be the catalyst for another market crash by destroying the public trust in AI, which is currently benefiting from investor FOMO.
Bear in mind that the cause of an equity market crash and its trigger are two different things.
The 2000 tech crash was caused by over-enthusiastic market speculation in dot-com companies with poor management, yes, but the trigger was simply the DOJ finally making Bill throw a chair (they'd had enough of being humiliated by him for decades as they struggled with old mainframe tech and limited staffing).
If the dot-com crash trigger had not arrived for another 12-18 months, I’m sure the whole mess could have been swept under the rug by traders during the Black Swan event and the recovery of the healthy companies would have been 5-6 months, not 5-6 years (or 20 years in MSFT’s case).
OpenAI seems to be the product of two types of people:
- The elite ML/AI researchers and engineers.
- The elite SV/tech venture capitalists.
These types come with their own followings - and I'm not saying that these two never intersect, but on one side you get a lot of brilliant researchers that truly are in it for the mission. They want to work there, because that's where ground zero is - both from the theoretical and applied point of view.
It's the ML/AI equivalent of working at CERN - you could pay the researchers nothing, or everything, and many wouldn't care - as long as they get to work on the things they are passionate about, AND they get to work with some of the most talented and innovative colleagues in the world. For these, it is likely more important to have top ML/AI heads in the organization, than a commercially-oriented CEO like Sam.
On the other side, you have the folks that are mostly chasing prestige and money. They see OpenAI as some sort of springboard into the elite world of top ML, where they'll spend a couple of years building cred, before launching startups, becoming VP/MD/etc. at big companies, etc. - all while making good money.
For the latter group, losing commercial momentum could indeed affect their will to work there. Do you sit tight in the boat, or do you go all-in on the next big player - if OpenAI crumbles the next year?
With that said, leadership conflicts and uncertainty are never good - whatever camp you're in.
> Why would workers immediately quit their job when he has no other company
It is Sam Altman. He will have one in a week.
> It seems like half of the people working at the non-profit were not actually concerned about the mission but rather just waiting out their turn for big bucks and fame.
I would imagine most employees at any organization are not really there because of corporate values, but their own interests.
> What does Altman bring to the table besides raising money from foreign governments and states, apparently?
And one of the world's largest tech corporations. If you are interested in the money side, that isn't something to take lightly.
So I would bet it is just following the money, or at least the expected money.
The new board also wants to slow development. That isn't very exciting either.
Altman was fired because people who want to slow the progress of AI orchestrated his firing.
Whether or not he works at the company is symbolic and indicative of who is in charge: the people who want to slow AI progress, or the people who want to speed it up.
The board fired Altman for shipping too fast compared to their safety-ist doom preferences. The new interim CEO has said that he wants to slow AI development down 80-90%. Why on earth would you stay, if you joined to build + ship technology?
Of course, some employees may agree with the doom/safety board ideology, and will no doubt stay. But I highly doubt everyone will, especially the researchers who were working on new, powerful models — many of them view this as their life's work. Sam offers them the ability to continue.
If you think this is about "the big bucks" or "fame," I think you don't understand the people on the other side of this argument at all.
Not enough people understand what OpenAI was actually built on.
OpenAI would not exist if FAANG had been capable of getting out of its own way and shipping things. The moment OpenAI starts acting like the companies these people left, it's a no-brainer that they'll start looking for the door.
I'm sure Ilya has 10 lifetimes more knowledge than me locked away in his mind on topics I don't even know exist... but the last 72 hours are the most brain dead actions I've ever seen out of the leadership of a company.
This isn't even cutting off your nose to spite your face: this is like slashing your own tires to avoid going in the wrong direction.
The only possible justification would have been some jailable offense from Sam Altman, and ironically their initial release almost seemed to want to hint that before they were forced to explicitly state that wasn't the case. At the point where you're forced to admit you surprise fired your CEO for relatively benign reasons how much must have gone completely sideways to land you in that position?
This is exactly why you would want people on the board who understand the technology. Unless they have some other technology that we don't know about, which maybe brought all this on, a GPT is not a clear path to AGI. That is a technical point that seems to be beyond most people without real experience in the field. It is certainly beyond the understanding of some dude who lucked into a great training set and became an expert, much the same way The Knack became industry leaders.
> Through all of this, no one has cogently explained why Altman leaving is such a big deal.
Pure f***g Greed. He is basically a front-man for a bunch of VCs/Angels/Influential Business Folks/Shady Investors/etc. who were betting on making big bucks through him.
Unfortunately, Ilya and his philosophical/ethical/moral stance has gotten in their way and hence they have let loose their dogs in the media to play up Sam Altman's "indispensability" to OpenAI.
It is likely that wherever Altman goes next, @gdb would follow, and _he_ is deeply loved by many at OAI (but so is Altman).
CEOs should be judged by their vision for the company, their ability to execute on that vision, bringing in funding, and building the best executive team for that job. That is what Altman brings to the table.
You make it seem that wanting to make money is a zero-sum game, which is a narrow view to take - you can be heavily emotionally and intellectually invested in what you do for a living and want to be financially independent at the same time. You also appear to find it “disturbing” that people support someone who is doing a good job - there has always been a difference between marketing and operations, and it is rather weird you find that disturbing - and appreciate stability, or love working for a team that gets shit done.
To address your initial strawman, why would workers quit when the boss leaves? Besides all the normal reasons listed above, they also might not like the remaining folks, or they may have lost faith in those folks, given the epic clusterfuck they turned this whole thing into. All other issues aside, if I would see my leadership team fuck up this badly, on so many levels, i’d be getting right out of dodge.
These are all common sense, adult considerations for anyone that has an IQ and age above room temperature and that has held down a job that has to pay the bills, and combining that with your general tone of voice, I’m going to take a wild leap here and posit that you may not be asking these questions in good faith.
TheInformation: Dozens of Staffers Quit OpenAI After Sutskever Says Altman Won’t Return
>Dozens of OpenAI staffers internally announced they were quitting the company Sunday night, said a person with knowledge of the situation, after board director and chief scientist Ilya Sutskever told employees that fired CEO Sam Altman would not return.

https://www.theinformation.com/articles/dozens-of-staffers-q...
Isn't this expected? Nearly everyone who joined post ChatGPT was primarily financially motivated. What is more interesting is how many of the core research team stays.
This is actually pretty surprising to me, since a financially motivated person would normally wait until a better deal, and just collect their paycheck in the meantime.
There's also no guarantee that Altman will really start a new company, or be able to collect funding to hire everyone quickly. I wonder if these people are just very loyal to Sam.
This. Very accurate. At the end of the day this is a battle between academics and capitalists and what they stand for. We generally know how this typically goes…
Tip for builders: you can use the GPT APIs on Microsoft Azure. Managed reliably, nobody's quitting, no drama. Same APIs, just with better controls, global availability, and a very stable, reliable, and trustworthy provider.
(disclosure: I work at Azure, but this is just my own observation).
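For anyone who wants to try it, here is a rough sketch (in Python, using nothing but requests) of what calling a chat deployment through Azure OpenAI looks like. The resource name, deployment name, and api-version below are placeholders, not anything official - substitute whatever your own Azure subscription actually has:

    # Hypothetical values: replace with your own Azure OpenAI resource/deployment.
    import os
    import requests

    resource = "my-openai-resource"      # Azure OpenAI resource name (assumption)
    deployment = "gpt-35-turbo"          # model deployment name (assumption)
    url = (f"https://{resource}.openai.azure.com/openai/deployments/"
           f"{deployment}/chat/completions?api-version=2023-05-15")

    resp = requests.post(
        url,
        headers={"api-key": os.environ["AZURE_OPENAI_KEY"],
                 "Content-Type": "application/json"},
        json={"messages": [{"role": "user", "content": "Say hello from Azure."}]},
        timeout=30,
    )
    # The response shape mirrors the OpenAI chat completions API.
    print(resp.json()["choices"][0]["message"]["content"])

The main practical differences from api.openai.com are the per-resource endpoint, the deployment name in the path, and the api-key header, so existing client code tends to port over with minimal changes.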
You want me to trust M$ in all this? Embrace, extend, extinguish.
Fellow nerds, you really need to go into work on Monday and have a hard chat with your C levels and legal (Because IANAL). The question is: Who owns the output of LLM/AI/ML tooling?
I will give you a hint, it's not you.
Do you need to copyright what a CS agent says? No, you want them on script as much as possible. An LLM parroting your training data is a good thing (assuming a human wrote it). Do you want an LLM writing code, or copy for your product, or a song for your next corporate sing-along (where did you go, old IBM)? No you don't, because it's likely going straight to the public domain. Depending on what you're doing with the tool and how you're using it, it might not matter that this is the case (if it's an internal thing), but M$, or OpenAI, or whoever your vendor is, having a copy that they are free to use might be very bad...
I don't understand what point you're trying to make. Yes Microsoft uses OpenAI APIs. What is the point you're trying to make beyond that? It's still OpenAI software.
This will be an interesting test to see how fast you can bootstrap GPT-4 level performance with unlimited funds and talent that already has deep knowledge of the internals. With the initial adoption of ChatGPT alongside Copilot, OpenAI's data moat of crawled data & RLHF is pretty vast. And that's not leaving the walled garden of OpenAI. You can simulate a lot of this using other off-the-shelf LLMs (see Alpaca) but nothing is a substitute for real world observed usage.
In a related note, has this meaningfully broken through to the mainstream yet? If a ChatGPT competitor comes out tomorrow that is just as good - but under a different brand - how many people will switch because it's Altman backed? I'll be curious to find out.
Anthropic was formed for nearly the same reason Sam was fired: to slow things down. OpenAI takes MS funding and Anthropic is formed. OpenAI's pace goes a little above Ilya's comfort level and Sam is fired. MS picks up Sam and will try to outpace OpenAI while OpenAI puts the brakes on itself.
> They have billions in funding and have not yet got particularly close to GPT-4.
Wrong. Claude 2 beats GPT-4 in some benchmarks (e.g. HumanEval Python coding; math; analytical writing). It's close enough. It doesn't matter who holds the crown this week; Anthropic definitely has the ingredients to make a GPT-4-class model.
This is like comparing similar cars from BMW and Toyota, finding a few specific parameters where BMW has a higher score, and saying "You see? Toyota engineering is nowhere close".
This actually shows Sam Altman's true contribution: the free version of ChatGPT is undeniably worse than Bing Chat, and yet ChatGPT is a bigger brand.
(And it might be a deliberate choice to save money for Claude 3 instead of making Claude 2 absolutely SotA.)
AI noob here, but is it going to be challenging because of something intrinsic to GPT-4, or because of collecting an equivalent amount of data to train a comparable model? Because I see Facebook releasing their models down to the weights.
I would switch, but not because of Altman backing or not. I would switch if their strategy were to be to progress at pace. I’m not big on AI safety as it is parroted these days, I just want more AI, faster.
I’m genuinely surprised that they stuck to their guns. The PR push behind Altman’s return was convincing enough that I had my doubts.
Altman will be more than fine; he'll get a bucket of money and the chance to prove he is the golden boy he's been sold to the world as. He will get to recruit a team that believes in his vision of accelerating AI for commercial use. This will lead to a more diverse market.
I hope for the best for those who remain at OpenAI. I hope for the best for Altman and Brockman.
Yes, I felt the same. In every piece, there was very little news but a lot of fluff to lead the public with opinions. Probably VCs saw their money burning and wanted Sam back at the helm to protect their asset.
I'm also pretty suspicious of people in forums like these who say nothing can compare to GPT4 and they're miles ahead of everyone else etc. How much of that is venture capital speaking?
It's not quite where it is (or was) with Tesla, where it was hopeless to know what was sincere and what was just people talking up their investment/talking down their short, but it's getting there.
I read something a while ago: when trying to interpret the truth of what is happening, the value of public statements is only that it's an indication of what that source would like the public to believe. And when looked at that way, that signal does have value. Not as truth, but as motive.
So that helped cut through all the cruft with this. There was a lot of effort behind putting across the perception that the board was going to resign and that Altman was going to come back.
Looked at through that lens, it makes more sense: the existing board had little incentive to quit and rehire Sam/Greg. The only incentive was if mass resignations threatened their priorities of working on safety and alignment, and I get the sense that most of these resignations are more on the product engineering side.
So I don't really think this is a twist that no one saw coming.
Other than 1) Microsoft and 2) anyone building a product with the OpenAI api 3) OpenAI employees…
…is OpenAI crashing and burning a big deal?
This seems rather overhyped… everyone has an opinion, everyone cares because OpenAI has a high profile.
…but really, alternatives to chatGPT exist now, and most people will be, really… not affected by this in any meaningful degree.
Isn’t breaking the strangle hold on AI what everyone wanted with open source models last week?
Feels a lot like Twitter; people said it would crash and burn, but really, it’s just a bit rubbish now, and a bunch of other competitors have turned up.
…and competitive pressure is good right?
I predict: what happens will look a lot like what happened with Twitter.
I'll probably be downvoted to hell, but I think what is happening is healthy for the ecosystem.
Pine forests are known to regenerate through fire. Fires scatter the seeds around, the unsustainable areas are reset, new forests are seeded, life goes on.
This is what we're seeing, too. A very dense forest has burned, seeds are scattered, new, smaller forests will start growing.
Things will slow down a bit, but accelerate again in a more healthy manner. We'll see competition, and different approaches to training and sharing models.
Totally agree. It seems like OpenAI is ahead of the curve, but even some free open source projects have become really good. I am no expert, so take this with a grain of salt. It seems OpenAI has a lead, but only of a few months or so and others are racing behind. I guess it really sucks if you built something that relies on the OpenAI api, but even then one could replace the api layer.
I mean, OpenAI aren't just going to close up shop. I would very much doubt they're just going to turn off their APIs. I would just keep building and if you have to swap LLMs at some point then do so.
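To make the "swap LLMs later" idea concrete, here is a minimal sketch of keeping one thin seam between your app and whichever provider you use. Everything here is illustrative - the function names are made up, and the provider calls are left as stubs to be wired to whatever SDK you actually depend on:

    from typing import Callable, Dict

    def complete_openai(prompt: str) -> str:
        # Call OpenAI (or Azure OpenAI) here; left as a stub in this sketch.
        raise NotImplementedError

    def complete_other_llm(prompt: str) -> str:
        # Call Anthropic, a local model, or any other backend here.
        raise NotImplementedError

    # The rest of the codebase only ever calls complete(); switching vendors
    # becomes a config change rather than a rewrite.
    BACKENDS: Dict[str, Callable[[str], str]] = {
        "openai": complete_openai,
        "other": complete_other_llm,
    }

    def complete(prompt: str, backend: str = "openai") -> str:
        return BACKENDS[backend](prompt)

It won't save you from differences in output quality or prompt behavior between models, but it does keep a provider outage or shutdown from being an architectural event.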
In the time of Windows, around, let's say, the mid-1990s, people thought Windows was irreplaceable.
Now it turns out Linux is the workhorse everywhere for running workloads or consuming content. Almost every programming language (other than Microsoft's own SDKs) gets developed on Linux and has first-class support for Linux, and Windows is always an afterthought.
It has gone to the extent that, to lure developers, Microsoft has to embed Linux in a virtual machine on Windows, called WSL.
Local inference is going to get cheaper and more affordable, that's for sure.
New models will also emerge.
So OpenAI doesn't seem to have IP that can withstand all that, IMHO.
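As a concrete illustration of how low the barrier to local inference already is, here is a minimal sketch using the Hugging Face transformers pipeline. The model name is just one example of an openly downloadable instruct model, not a recommendation, and device_map="auto" assumes the accelerate package is installed:

    from transformers import pipeline

    # Downloads the weights on first run; pick a model that fits your hardware.
    generator = pipeline(
        "text-generation",
        model="mistralai/Mistral-7B-Instruct-v0.1",   # illustrative open-weights model
        device_map="auto",                            # use a GPU if one is available
    )

    out = generator("Explain what a context window is.", max_new_tokens=120)
    print(out[0]["generated_text"])

Quality is obviously not GPT-4, but the point stands: the marginal cost of running a usable model yourself keeps falling.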
Linux isn't the workhorse in any business that isn't tech based. The dev bubble here is pretty strong. I've done IT for a couple of MSPs now, so I've seen 100s of different tech stacks. No one uses Linux for anything. ESXi for the hypervisors, various versions of Windows Server, and M365 for everything else. Graphics/marketing uses Macs sometimes, but other than that, it's all Windows/MS. Seeing a Linux VM is exceedingly rare, and it usually runs some bespoke software that no one knows how to service or support. Yes, Linux is much more viable these days, but it's not even close to being mainstream.
In the "grand scheme of things", no, it's probably not a big deal. I think in the short term, I think it has the potential to set back the space a few months, as a lot of the ecosystem is still oriented around OpenAI (as they are the best at productivizing). I think that even extends to many community/open source models, which are commonly trained against GPT-4.
If they are able to retain enough people to properly release a GPT-5 with significant performance increases in a few months, I would assume that the effect is less pronounced.
I just don’t have anything too remarkable to add right now. I like and respect Sam and I think so does the majority of OpenAI. The board had a chance to explain their drastic actions and they did not take it, so there is nothing to go on except exactly what it looks like.

https://twitter.com/karpathy/status/1726289070345855126
I for one thought Karpathy would side with the core researchers and not the corpos. To me, this whole ordeal is a clash between Sam's profit motives and the non-profit and safety motives of OpenAI's original charter. I mean, didn't HN hate it when OpenAI changed their open nature and became completely closed and profit-oriented? This could be the healing of the cancer that OpenAI brought to this field by making it closed as a whole.
One is Sutskever, who believes AI is very dangerous and must be slowed down and closed source (edit: clarified so that it doesn't sound like closed down). He believes this is in line with OpenAI's original charter.
Another is the HN open source crowd who believes AI should be developed quickly and be open to everyone. They believe this is in line with OpenAI's original charter.
Then there is Altman, who agrees that AI should be developed rapidly, but wants it to stay closed so he can directly profit by selling it. He probably believes this is in line with OpenAI's original charter, or at least the most realistic way to achieve it, effective altruism "earn to give" style.
Karpathy may be more amenable to the second perspective, which he may think Altman is closer to achieving.
Karpathy is a very agreeable guy and a fantastic educator, and he's very respected by everyone including leader-owners like Altman and Musk, but he doesn't seem like he has very strong opinions one way or another about the hot button issues.
Karpathy is a hybrid. He’s smart, but he clearly enjoys both the money and the attention. This is the guy who defended Elon’s heavily exaggerated self driving claims when the impact was actual human lives.
> This could be the healing of the cancer that OpenAI brought to this field to make it closed as a whole.
I don’t know. The damage might be permanent. Everyone is probably going to be way more careful with what information they release and how they release it. Altman corrupted the entire community with his aggressive corporate push. The happy-go-lucky “look what we created” attitude of the community is probably gone for good. Now every suit is going to be asking “can we make a massive amount of money with this” or “can I spin up a hype train with this”.
I still cannot process what’s happened to one of the most prominent and hyped companies of the past year in just one weekend.
If it’s true that Altman won’t return to OpenAI (or alternatively: that the current board won’t step down) then where does that leave OpenAI? Microsoft can’t be happy, as evidenced by reporting that Nadella was acting as mediator to bring him back. Does OpenAI survive this?
Will be super interesting when all the details come out regarding the board’s decision making. I’m especially curious how the (former) CEO of Twitch gets nominated as interim CEO.
Finally, if Altman goes his own way, it’s clear the fervent support he’s getting will lead to massive funding. Combined with the reporting that he’s trying to create his own AI chips with Middle East funding, Altman has big ambitions for being fully self reliant to own the stack completely.
No idea what the future holds for any of the players here. Reality truly is stranger than fiction.
OpenAI has hundreds more employees, all of whom are incredibly smart. While they will definitely lose the leadership and talent of those two, it’s not as if a nuclear bomb dropped on their HQ and wiped out all their engineers!
So questioning whether they will survive seems very silly and incredibly premature to me
Pretty much every researcher I know at OpenAI who is on Twitter retweeted Sam Altman's heart tweet with their own heart or some other supportive message.
I'm sure that's a sign that they are all team Sam - this includes a ton of researchers you see on most papers that came out of OpenAI. That's a good chunk of their research team and that'd be a very big loss. Also there are tons of engineers (and I know a few of them) who joined OpenAI recently with pure financial incentives. They'll jump to Sam's new company cause of course that's where they'd make real money.
This coupled with investors like Microsoft backing off definitely makes it fair to question the survival of OpenAI in the form we see today.
And this is exactly what makes me question Adam D'Angelo's motives as a board member. Maybe he wanted OpenAI to slow down or stop existing, to keep his Poe by Quora (and their custom assistants) relevant. GPT Agents pretty much did what Poe was doing overnight, and you can have as many of them as you want with your existing $20 ChatGPT Plus subscription. But who knows, I'm just speculating here like everyone else.
But this is a disaster that can't be sugarcoated. Working in an AI company with a doomer as head is ridiculous. It will be like working in a tobacco company advocating for lung cancer awareness.
I don't think the new CEO can do anything to win back trust in a record-short amount of time. The Sam loyalists will leave. The questions remain: how is the new CEO going to hire new people, will he be able to do so fast enough, and will the ones who remain accept a company that is drastically different?
The perception right now is that the board doesn't care about investors, this will kill this company that is burning money at an insane rate. Employees will run for the exits unless they are convinced that there is a future exit.
If the funding dries up for OpenAI, those engineers have no incentive to keep working there. No point wasting your career on an organization that's destined to die.
I am guessing they are super reliant on Microsoft to keep running ChatGPT... If Microsoft decides to get out and finds a way they would be in deep trouble.
What I think is funny is how the whole "we're just doing this to make sure AI is safe" meme breaks down, if you have OpenAI, Anthropic, and Altman AI all competing, which seems likely now.
Do you really need all 3? Is each one going to claim that they're the only ones who can develop AGI safely?
Since Sam left, now OpenAI is unsafe? But I thought they were the safe ones, and he was being reckless.
Or is Sam just going to abandon the pretense, competing Google- and Microsoft-style? e.g. doing placement deals, attracting eyeballs, and crushing the competition.
Can someone explain to me what they mean by "safe" AGI? I've looked in many places and everyone is extremely vague. Certainly no one is suggesting these systems can become "alive", so what exactly are we trying to remain safe from? Job loss?
Yeah Emmett Shear seems like an odd choice if they’re worried about retention because 1) Twitch was never known to be a particularly great place to work and 2) he stepped down for some reason and not because Twitch was in an amazing place or anything at the time
The big question in my mind is the reported threat from MSFT to withhold cloud credits (i.e. the actual currency of their $10B investment). Is this true? And are they going to follow through?
I don't buy for a second that enough employees will walk to sink the company (though it could be very disruptive). But for OpenAI, losing a big chunk of their compute could mean they are unable to support their userbase, and that could permanently damage their market position.
was it even reported? i heard a bunch of stuff that seemed to be hypothetical guessing, like "satya must be furious", which then seemed to morph into "it was reported satya is furious"
i've seen similar with the cloud credits thing, people just pontificating whether it's even a viable strategy.
> No idea what the future holds for any of the players here. Reality truly is stranger than fiction.
Is it though? "No outcome where [OpenAI] is one of the big five technology companies. My hope is that we can do a lot more good for the world than just become another corporation that gets that big." -Adam D'Angelo
I guess he would prefer it if the existing incumbents got even larger, or if his competitor to ChatGPT (Poe) could capture a significant fraction of the market.
> I still cannot process what’s happened to one of the most prominent and hyped companies of the past year in just one weekend.
That's kinda what happened. The latest gist I read was that the non-profit, idealistic(?) board clashed with the for-profit, hypergrowth CEO over the direction to take the company. When you read the board's bios, they weren't ready for this job (few are; these rocket-ship stories are rare), the rocket ship got ahead of their non-profit goals, and they found themselves in over their heads, then failed to game out how this would go over (poor communication with MS, not expecting Altman to get so much support).
From here, the remaining board needs to either surface some very damning evidence (the memo ain't it) or step down and let MS and Sequoia find a new board (even if they're not officially entitled to do that). Someone needs to be saying mea culpa.
Well, despite what Musk did, X (Twitter?) has still been limping along for quite a while now. While more abrupt and surprising, this doesn't seem nearly as bad as that.
This is far worse. OpenAI simply cannot survive without Microsoft and with only a skeleton staff. It's not like a static codebase where you can keep the service up and running indefinitely barring bugs. Why would anyone building with the OpenAI APIs, their customers, have any faith in the company if they openly don't care about business? Working on AI is highly capital intensive, on the scale of many tens of billions of dollars. Where are they going to get that funding? How will they pay their staff? There is no way Microsoft is going to HODL after this embarrassment.
Don't fully believe this, but the only rational explanation I can see is that Ilya knows they have AGI.
- Nuke employee morale: massive attrition, not getting upside (tender offer),
- Nuke the talent magnet: who's going to want to work there now?
- Nuke Microsoft relationship: all those GPUs gone,
- Nuke future fundraising: who's going to fund this shit show?
People really need to stop with this AGI bullshit. They make a glorified Markov chain and suddenly they should have AGI? Self-driving cars are barely able to stay on the road after all this time, but sure, someone's hiding conscious machines in their basement.
burnout and sleep deprivation can lead to some pretty bad choices; that's why you want to surround yourself with people who will stand up to you when your ideas and plans suffer from too much tunnel vision. sounds like the other 3 board members were yes-men/women; the house of cards was there for a while, it seems.
No, OpenAI will not survive as a company with more than one shareholder. At the end of the day, MSFT has a fiduciary duty to its own shareholders. MSFT has set certain expectations for its own financial performance based on its agreements with OpenAI and MSFT shares traded based on those expectations. Now OpenAI has sustained a hemorrhage of its leadership that negotiated those agreements, including a public admission by OpenAI of deception in their boardroom and private talk of a potential competitor involving employees. The only question is if OpenAI will capitulate or the lawyers and supply chain will be leveraged to compel their cooperation with protecting the MSFT shareholders. MSFT has deep enough pockets to retain all of the workers. One way or another, the IP and their ops are now the property of the bank, in this case MSFT shareholders. Let’s hope nobody goes to jail by resisting what is a standard cleanup operation at this point.
“Sorry, we are reporting a write down of $10 billion due to potential misrepresentations of commercial intent that occurred in our OpenAI portfolio.”
Things you will never hear Satya Nadella say. Way more likely he will coordinate to unify as much of their workers as he can to continue on as a subsidiary, with the rest left to go work something out with other players crazy/desperate enough to trust them.
Seems like a logical choice. Microsoft’s next big play is generative AI, and they’ve put a lot of money into that.
They need to show they’re taking steps to stabilize things now that their hype factory has come unraveled.
I don’t think they particularly need these people, because they likely already have in-house talent that is competitive. But having these people on board now will allow them to paint a much more stable picture to their shareholders.
I bet "new advanced AI research team" at Microsoft is going to be underwhelming for many, but really, it should be eye-opening. This is what startups, especially VC-backed capital-intensive AI startups, usually are.
“Difficult to understate” would mean he has little to no social capital.
It's "such a big deal" because he has been leading the company, and apparently some people really like how and they really don't like how it ended.
Why would it require any other explanation? Are you asking what leaders do and why an employee would care about what they do...?
"Dozens" sounds like about right amount for a large org.
Still, what do they actually want? It seems a bit overly dramatic for such an organisation.
The world is filled with Sam Altmans, but surely not enough Ilya Sutskevers.
>The reason I was a founding donor to OpenAI in 2015 was not because I was interested in AI, but because I believed in Sam. So I hope the board can get its act together and bring Sam and Greg back.
This guy is a villain.
> It is Sam Altman. He will have one in a week.
His previous companies were Loopt and Worldcoin. Won't his next venture require finding someone else to piggyback off of?
> If you are interested in the money side, that isn't something to take lightly.
I am interested in how taking billions from foreign companies and states could lead to national security and conflict of interest problems.
> The new board also wants to slow development.
It's not a new board as far as I know.
Welcome to Cargo Cult AI.
I'm starting to believe these workers are mostly financially motivated and that's why they follow him.
They didn't "bring" a hyper capitalist. Sam Co-founded this entire thing lol. He was there from the beginning.
For example, the GPT4 128K-token model is unavailable, and the GPT-4V model is also unavailable.
> If a ChatGPT competitor comes out tomorrow that is just as good - but under a different brand - how many people will switch because it's Altman backed?
I think that illustrates it will be a big uphill battle for any new entrant, no matter how well funded or resourced.
Still, given the exodus and resources now available I’d imagine pretty fast
Bloomberg, The Verge, and The Information all went to bat for Altman in a big way on this.
"Investors were hoping that Altman would return to a company “which has been his life's work”"
As opposed to Sutskever, who they found on the street somehow, yeah?
If OpenAI ceases to be Sam's vision, someone will replace it.
It is a good thing for the ecosystem, I guess; we will have more diverse products to choose from.
But making AI safer? Not likely. The tech will spread, and Ilya will probably not end up with a safer AGI, because he will not control it.
Should they decide to sink to the level of VC scheming briefly, it will be like child's play for them.
Ultimately, most people will not be affected.
The people who care will leave.
New competitors will turn up.
Life goes on…
> Isn’t breaking the strangle hold on AI what everyone wanted with open source models last week?
By other things getting better, not by stalling the leader of the pack.
Jumping to a different platform is a huge sacrifice for power users - those who create content and value.
None of this is a factor here. ChatGPT is just a tool, like an online image resizer.
I just don’t have anything too remarkable to add right now. I like and respect Sam and I think so does the majority of OpenAI. The board had a chance to explain their drastic actions and they did not take it, so there is nothing to go on except exactly what it looks like.
https://twitter.com/karpathy/status/1726289070345855126
One is Sutskever, who believes AI is very dangerous and must be slowed down and closed source (edit: clarified so that it doesn't sound like closed down). He believes this is in line with OpenAI's original charter.
Another is the HN open source crowd who believes AI should be developed quickly and be open to everyone. They believe this is in line with OpenAI's original charter.
Then there is Altman, who agrees that AI should be developed rapidly, but wants it to stay closed so he can directly profit by selling it. He probably believes this is in line with OpenAI's original charter, or at least the most realistic way to achieve it, effective altruism "earn to give" style.
Karpathy may be more amenable to the second perspective, which he may think Altman is closer to achieving.
I don’t know. The damage might be permanent. Everyone is probably going to be way more careful with what information they release and how they release it. Altman corrupted the entire community with his aggressive corporate push. The happy-go-lucky “look what we created” attitude of the community is probably gone for good. Now every suit is going to be asking “can we make a massive amount of money with this” or “can I spin up a hype train with this”.
If it’s true that Altman won’t return to OpenAI (or alternatively: that the current board won’t step down) then where does that leave OpenAI? Microsoft can’t be happy, as evidenced by reporting that Nadella was acting as mediator to bring him back. Does OpenAI survive this?
Will be super interesting when all the details come out regarding the board’s decision making. I’m especially curious how the (former) CEO of Twitch gets nominated as interim CEO.
Finally, if Altman goes his own way, it’s clear the fervent support he’s getting will lead to massive funding. Combined with the reporting that he’s trying to create his own AI chips with Middle East funding, Altman has big ambitions for being fully self reliant to own the stack completely.
No idea what the future holds for any of the players here. Reality truly is stranger than fiction.
So questioning whether they will survive seems very silly and incredibly premature to me.
I'm sure that's a sign that they are all team Sam - this includes a ton of researchers you see on most papers that came out of OpenAI. That's a good chunk of their research team and that'd be a very big loss. Also there are tons of engineers (and I know a few of them) who joined OpenAI recently with pure financial incentives. They'll jump to Sam's new company cause of course that's where they'd make real money.
This coupled with investors like Microsoft backing off definitely makes it fair to question the survival of OpenAI in the form we see today.
And this is exactly what makes me question Adam D'Angelo's motives as a board member. Maybe he wanted OpenAI to slow down or stop existing, to keep his Poe by Quora (and their custom assistants) relevant. GPT Agents pretty much did what Poe was doing overnight, and you can have as many of them as you want with your existing $20 ChatGPT Plus subscription. But who knows, I'm just speculating here like everyone else.
But this is a disaster that can't be sugarcoated. Working in an AI company with a doomer as head is ridiculous. It will be like working in a tobacco company advocating for lung cancer awareness.
I don't think the new CEO can do anything to get back trust in a record short amount of time. The Sam loyalists will leave. The questions remain: how is the new CEO going to hire new people, will he be able to do so fast enough, and will the ones who remain accept a company that is drastically different?
https://twitter.com/karpathy/status/1726478716166123851
You are aware that more than just 2 people departed?
Oh yes it is.
Do you really need all 3? Is each one going to claim that they're the only ones who can develop AGI safely?
Since Sam left, now OpenAI is unsafe? But I thought they were the safe ones, and he was being reckless.
Or is Sam just going to abandon the pretense, competing Google- and Microsoft-style? e.g. doing placement deals, attracting eyeballs, and crushing the competition.
Surely that's what you need for safety?
Sam doomed himself. Laundry Buddy is the new Clippy.
What we need at this point is a neutral 3rd party who can examine their safety claims in detail and give a relatively objective report to the public.
I don't buy for a second that enough employees will walk to sink the company (though it could be very be disruptive). But for OpenAI, losing a big chunk of their compute could mean they are unable to support their userbase and that could permanently damage their market position.
I've seen similar with the cloud credits thing, people just pontificating about whether it's even a viable strategy.
Which does not say whether Microsoft was open to the idea or ultimately chose to pursue that path.
Is it though? "No outcome where [OpenAI] is one of the big five technology companies. My hope is that we can do a lot more good for the world than just become another corporation that gets that big." -Adam D'Angelo
That's kinda what happened. The latest gist I read was that the non-profit, idealistic(?) board clashed with the for-profit, hypergrowth CEO over the direction to take the company. When you read the board's bios, they weren't ready for this job (few are; these rocket ship stories are rare), the rocket ship got ahead of their non-profit goals, and they found themselves in over their heads, then failed to game out how this would go over (poor communication with MS, not expecting Altman to get so much support).
From here, the remaining board needs to either surface some very damning evidence (the memo ain't it) or step down and let MS and Sequoia find a new board (even if they're not officially entitled to do that). Someone needs to be saying mea culpa.
Things you will never hear Satya Nadella say. Way more likely he will coordinate to unify as much of their workers as he can to continue on as a subsidiary, with the rest left to go work something out with other players crazy/desperate enough to trust them.
Sam and Greg, and the OpenAI staffers who left, are now joining Microsoft.
https://twitter.com/satyanadella/status/1726509045803336122
They need to show they’re taking steps to stabilize things now that their hype factory has come unraveled.
I don’t think they particularly need these people, because they likely already have in-house talent that is competitive. But having these people on board now will allow them to paint a much more stable picture to their shareholders.