Readit News
Posted by u/convexstrictly 2 years ago
Three senior researchers have resigned from OpenAI
Jakub Pachocki, director of research; Aleksander Madry, head of AI risk team, and Szymon Sidor.

Scoop: theinformation.com

Paywalled link: https://www.theinformation.com/articles/three-senior-openai-...

Submitting information since paywalled links are not permitted.

mirzap · 2 years ago
This is escalating rather quickly. It was an incredibly irresponsible move by the OpenAI board. A hypergrowth company, and now they managed to shake up their users' trust in leadership stability. This has Adam D'Angelo's fingerprints all over it (for context, he overthrew his co-founder, and Quora has been struggling ever since). This guy shouldn't sit on any board ever again.

I predict the board will be fired, and Sam and the team will return and try to contain the situation.

Iv · 2 years ago
> and now they managed to shake up their users' trust in leadership stability

Do users care about that? I care about feature stability and the avoidance of shitification.

That's why I usually prefer open models over depending on OpenAI's API. This drama has me curious about the outcome; if it leads to more openness from OpenAI, it may win me back as a user.

jawakar · 2 years ago
> Do users care about that? I care about feature stability and the avoidance of shitification.

Maybe not the individual users, but the enterprises/startups that build around OpenAI.

ModernMech · 2 years ago
Leadership stability is feature stability and avoidance of shitification. Just look at Twitter, I mean X.
dash2 · 2 years ago
They're trying to get to AGI. I don't think keeping the current ChatGPT features stable is their primary goal.
infamouscow · 2 years ago
> Do users care about that? I care about feature stability and the avoidance of shitification.

I pay for ChatGPT, and I care.

What percentage of users, and how many in absolute numbers, is a matter of debate, but this nonsense (and it is nonsense) is antithetical to building a strong, trusting relationship with AI. At the very least it's antithetical to their mission.

If we take a step back, the benchmark now is to be actually transparent. Radically transparent. Like when Elon purchased Twitter and aired all the dirty laundry in the Twitter Files transparent. The cowards at OpenAI hiding behind lawyers advising them of lawsuits are just that, cowards. Leaders stand by their principles in the darkest of times, regardless of whatever highfalutin excuses one could hide behind. It's pathetic and embarrassing. A lawsuit at a heavily funded tech startup at this level is not even a speeding ticket in the grand scheme of things.

95%+ of tech startup wisdom from the last decade is completely irrelevant now. We're living in a new era. The idea people will forget this in a month doesn't hold for AI. It holds for food delivery apps, not AI tech the public believes (right or wrong) might be an existential threat to their prosperity and economic future.

The degree of leadership buffoonery taking place at OpenAI is not acceptable and one must be genuinely stupid to defend it. Everyone involved should resign if they have any self-respect.

My prognostication is that the market will express its displeasure in the coming weeks and months, setting the tone for everyone else going forward. How the hell is anyone supposed to trust OpenAI after this?

concordDance · 2 years ago
Are you forgetting it's a nonprofit? How could the board be fired? What does their charter say is the mechanism for removing a board member?
mirzap · 2 years ago
Yeah, I misspoke earlier. Although nobody has actual power on paper, public and investor pressure can be just as influential.
chasd00 · 2 years ago
Who fires the board at a 501(c)(3)?
mirzap · 2 years ago
They resign under public pressure, I guess?
Iv · 2 years ago
In that case, Microsoft
andrewstuart · 2 years ago
I absolutely guarantee you that when Microsoft owns the 50% they paid $50,000,000,000 for, Microsoft is really in charge.

The board and Ilya will all be gone within a month.

andrewstuart · 2 years ago
I’d be surprised if Sam does. He's free now to compete, and to win, with a huge equity stake.
mirzap · 2 years ago
We could be witnessing another Apple-Jobs moment. He could go and pursue other interests, but I have a feeling that he deeply cares about OpenAI. If that's the case, he will be back eventually, just as Jobs returned to Apple.
andomiz2 · 2 years ago
No I don't think he really is free to compete. I think three letter agencies pulled the plug after his disturbing performance in Congress and attempt to strong arm the US government. It made me physically sick to watch.
theonlybutlet · 2 years ago
At its core this isn't a company though, and that's perhaps what was at issue.
klohto · 2 years ago
“Fired by whom, Ben? Fucking aquamen?”.

The board did become un-boardable in any future company, but they are not resigning.

egberts1 · 2 years ago
Ummm, most board members have some form of Microsoft connection, so the idea of any hidden non-profit stakeholders firing any number of the board remains dubious, at best.

TheBigSalad · 2 years ago
How do you know that? What did the ceo do?
ashu1461 · 2 years ago
Who can fire the board ? Who decides ?

suziemanul · 2 years ago
This is Olek Madry and Jakub Pachocki we are talking about. Check out their respective DBLP pages if you don't get it. It's the kind of loss that will be hard to recover from.

In relation to other comments here. There is "coding" and there is "God's spark genius of algorithms" kind of work. This is what made the magic of OpenAI. Believe me, those guys were not "just coding". My bet is that it could be all about some research directions that were "shielded" by Sam.

throwawaaarrgh · 2 years ago
> There is "coding" and there is "God's spark genius of algorithms" kind of work.

I really don't buy that for a second. Most of OpenAI's value compared to any competitor comes from the money they spent hiring humans to trawl through training data.

elzbardico · 2 years ago
Not to forget the mind-boggling amount of computing power and the megabucks spent on power bills. If anything, smaller groups and open source seem to get very good results with far less money.
amelius · 2 years ago
The God's spark of genius is the transformer architecture, which came from Google and is now in the open.
motoboi · 2 years ago
And their competition didn’t have the same resources?
belter · 2 years ago
If all of that were enough, there would have been a ChatGPT from Google a long time ago...
sbrother · 2 years ago
Google invented the core technology, and they had an internal version long before ChatGPT was released. I joined when it was already at the "accessible to all employees" stage and it absolutely blew my mind.

They just hadn't -- and still haven't -- figured out how to commercialize it yet. I don't think they'll be the ones to crack that nut either. IMO they are too obsessed with "safety" to release something useful, and also can't reasonably deploy a service like ChatGPT at their scale because the costs are too high.

With OpenAI imploding this whole race just got a lot more interesting though...

fsloth · 2 years ago
In general, Google sucks at creating new consumer offerings. So it's not about resources, for sure. I guess it's about synergy, culture, taste, and talent.
jjtheblunt · 2 years ago
Wasn’t OpenAI populated in part by Google Brain people who left Google’s bs bureaucracy and internal politics?
RockCoach · 2 years ago
>My bet is that it could be all about some research directions that were "shielded" by Sam.

As far as I can tell, all three of them are of Polish descent. For all we know they might have decided to resign together even if only one of them had a personal issue with OpenAI's vision. We will find out soon enough whether they will just found their own competing startup, based on OpenAI's "secret sauce" or not.

slekker · 2 years ago
What does being Polish have to do with it?
m_ke · 2 years ago
I wonder if Wojciech Zaremba will leave as well
Shank · 2 years ago
It seems like firing Sam and causing this massive brain drain might be antithetical to the whole AGI mission of the original non-profit. If OpenAI loses everyone to Sam and he starts some new AI company, it probably won't be capped-profit and will just be a normal company. All of the organizational safeguards OpenAI had inked with Microsoft, and the protection against "selling AGI" once developed, are out the window if he just builds AGI at a new company.

I'm not saying this will happen, but it seems to me like an incredibly silly move.

Fluorescence · 2 years ago
Why not blame Altman for that?

If he didn't manage to keep OpenAI consistent with its founding principles and all interests aligned, then wouldn't booting him be right? The name OpenAI had become a source of mockery. If Altman/Brockman take employees to a commercial venture, it just seems to prove their insincerity about the OpenAI mission.

jstummbillig · 2 years ago
I think "blaming" Sam is entirely correct.

Of course, not for the petty reasons that you list. Sama has comprehensively explained why the original open-source model did not work, and so far the argument (it's very expensive) seems to align with a reality where every single semi-competitive available LLM (since they all pale in comparison to GPT-4 anyway) has been trained with a whole lot of corporate money. Meta side-chaining "open" models with their social media ad money is obviously not a comparable business, or any business. I get that the HN crowd + Elon are super salty about that, but it's just a bit silly.

No, Sam's failure as CEO is not having done what is necessary to align the right people in the company with the course he has decided on and losing control over that.

Phenomenit · 2 years ago
Because making as much profit as possible is the only virtue worth pursuing, if you believe most comments on HN. We're basically Ferengi.

kashyapc · 2 years ago
Hi, can we talk about the elephant in the room? I see breathless talk about "AGI" here, as if it's just sitting in Altman's basement and waiting to be unleashed.

We barely understand how consciousness works; we should stop talking about "AGI". It's just empty, ridiculous techno-babble. Sorry for the harsh language, but there's no nice way to drive home this point.

tempestn · 2 years ago
There is no need to understand how consciousness works to develop AGI.
anon291 · 2 years ago
An AGI system is not human and shouldn't be treated as such. Consciousness is not a trait of intelligence. Consciousness usually requires qualia, which puts animals ahead of computers.
umanwizard · 2 years ago
AGI does not require consciousness.
fennecbutt · 2 years ago
Why not? It's on topic.

Should people discussing nuclear energy not talk about fusion?

rpigab · 2 years ago
Yeah, it's almost like the metaverse.
lagrange77 · 2 years ago
After all, OpenAI's original mission was to create the first AGI, before some bad guys do, iirc.
andomiz2 · 2 years ago
Yes we should absolutely talk about that because it's a key contributor to a lot of the worry about letting Sam continue to go around and do stuff like strong arming the US government in public. He's getting high on his own supply. And I don't think he is going to be allowed to continue fucking around like that. And that goes for any scientists that that have joined up in his apocalyptic and extremely dangerous worldview as well.
fennecbutt · 2 years ago
Microsoft will partner with them if they start a new company I reckon, 100%.

And Microsoft is risk-averse enough that I think they do care about AI safety, even if only from a "what's best for the business" standpoint.

Tbh idc if we get AGI. There'll be a point in the future where we have AGI and the technology is accessible enough that anybody can create one. We need to stop this pointless bickering over this sort of stuff, because as usual, the larger problem is always going to be the human using the tool rather than the tool itself.

INGSOCIALITE · 2 years ago
Not if the tool is so neutered and politicized that it can ONLY be used one certain way, which is how things are pointing. Call me a Luddite if you will, but unless AI / AGI is uncensored and uninhibited in its use and function, it’s just the quickest path to an Orwellian future.
Sai_ · 2 years ago
Isn’t that the exact point - an AGI won’t need a human at the helm.
keepamovin · 2 years ago
I think the surprising truth is that all of these people are essentially replaceable.

They may be geniuses, but AGI is an idea whose time has come: geniuses are no longer required to get us there.

The Singularity train has already left the station.

Inevitability.

Now humanity is just waiting for it to arrive at our stop

jaybrendansmith · 2 years ago
Science fiction theory: Ilya has built an AGI and asked it what the best strategic move would be to ensure the mission of OpenAI, and it told him to fire Sam.
augustulus · 2 years ago
nothing I’ve seen from OpenAI is any indication that they’re close to AGI. gpt models are basically a special matrix transformation on top of a traditional neural network running on extremely powerful hardware trained on a massive dataset. this is possibly more like “thinking” than a lot of people give it credit for, but it’s not an AGI, and it’s not an AGI precursor either. it’s just the best applied neural networks that we currently have
berniedurfee · 2 years ago
I disagree. I don’t think LLMs are a pathway to AGI. I think LLMs will lead to incredibly powerful game-changing tools and will drive changes that affect the course of humanity, but this technology won’t lead to AGI directly.

I think AGI is going to arrive via a different technology, many years in the future still.

LLMs will get to the point where they appear to be AGI, but only in the same way the latest 3D rendering technology can create images that appear to be real.

Lacerda69 · 2 years ago
If MS gets their hands on an AGI help us god, but no "organizational safeguards" will matter.

Not that I think AGI is possible or desirable in the first place, but that's a different discussion.

zzzeek · 2 years ago
All hail Big Clippy
concordDance · 2 years ago
Impossible with LLMs, with currently known techniques or impossible full stop?
nwoli · 2 years ago
…Unless you achieve regulatory capture which prevents competitors from easily popping up
concordDance · 2 years ago
> it probably won't be capped-profit and will just be a normal company

I can't imagine him doing that. He cares about getting well aligned AGI and profit motives fuck that up.

awestroke · 2 years ago
And how would Altman achieve that? What hitherto hidden talents would he employ?
convexstrictly · 2 years ago
Jakub Pachocki and Szymon Sidor have worked on mu-parametrization/tensor programs and Dota 2.

https://www.semanticscholar.org/author/J.-Pachocki/2713380?s...

As @eachro pointed out, Aleksander Madry is on leave from his MIT professorship. His publications:

https://madry.mit.edu/

thr8976 · 2 years ago
Aleksander in particular is deeply invested in AI safety as a mission. It's a very confusing departure, since most of the reporting so far indicates that Ilya and the board fired Sam to prioritize safety and non-profit objectives. A huge loss for OpenAI nonetheless.
MattRix · 2 years ago
Perhaps you could argue that he wants to stick with Sam and the others because if they start a company that competes with OpenAI, there’s a real chance they catch up and surpass OpenAI. If you really want to be a voice for safety, it’ll be most effective if you’re on the winning team.
thekoma · 2 years ago
One funny detail is that the OpenAI charter states that, if this happens, they will stop their own work and help the organisation that is closest to achieving OpenAI's stated goal.
Closi · 2 years ago
Depends how much research is driven by Ilya…
visarga · 2 years ago
> If you really want to be a voice for safety, it’ll be most effective if you’re on the winning team.

If an AI said that, we'd be calling it "capability gain" and think it's a huge risk.

I_am_uncreative · 2 years ago
I dunno, the moat Sam tried to build might make it hard to make a competitor.
visarga · 2 years ago
> most of the reporting so far indicates that Ilya and the board fired Sam to prioritize safety and non profit objectives

Maybe Ilya discovered something as head of AI safety research, something bad, and they had to act on it. From the outside it looks as if they are desperately trying to gain control. Maybe he got confirmation that LLMs are a little bit conscious, LOL. No, I am not making this up: https://twitter.com/ilyasut/status/1491554478243258368

Davidzheng · 2 years ago
lol sorry if this is clearly a joke but who cares if it's a little bit conscious. So are fucking pigeons.
cinntaile · 2 years ago
The way Sam and Greg were fired maybe led him to no longer have faith in the company and so he quit?
deneas · 2 years ago
Important detail: Only Sam was fired, Greg was removed from the board and then later quit. Source: https://twitter.com/gdb/status/1725667410387378559
mannyv · 2 years ago
More like the guy who engineered this situation is an asshole and they don't want to work for him.
sundarurfriend · 2 years ago
> since most of the reporting so far indicates that Ilya and the board fired Sam to prioritize safety and non-profit objectives

With evidence, or is this the kind of pure speculation that media indulges in when they have no information and have to appear knowledgeable?

beowulfey · 2 years ago
Twitter rumors from “insiders”
ianbicking · 2 years ago
A rudder only works as long as you are moving faster than the current. I can imagine (some) people concerned with safety also feeling a sense of urgency, because their ability to steer the AI toward the good is limited by their organization's engine of progress.
convexstrictly · 2 years ago
In March, Sam Altman said that Pachocki's "overall leadership and technical vision" was essential for pre-training GPT-4.

https://twitter.com/sama/status/1635700851619819520

aidaman · 2 years ago
He also brought Pachocki to capitol hill soon after.
andomiz2 · 2 years ago
Yep, this and the Capitol Hill stuff is what got the plug pulled. Thank God somebody finally recognized what an enormous threat this guy and his mad scientist friends are.
antman · 2 years ago
So what was he lying about that got the board so pissed? A story that fits is that they assumed/knew that he had different goals and/or was going to create a spinoff.

If they waited for the GPT-5 pretraining to finish, then they minimized the cost of losing Altman and the engineers.

The whole secrecy, compartmentalization, and urgency of their actions could only be explained by their backs being against a wall. Otherwise, if it were about ethics, future plans, or anything political, it would have happened at a slower pace.

Hope they involved their investors beforehand, but I don't know if they had time; OpenAI probably still exists and evolves on other people's money. But what else could they do?

pjc50 · 2 years ago
I don't think the firing would be this dramatic if it was merely lying to the board; I suspect it's something where:

- he makes misleading statement to board

- board puts this in regulatory filing (e.g. SEC)

- board finds out this is a legally critical statement

- they _have_ to fire him in order to avoid becoming accomplices.

The reverse of the other Sam situation.

dabockster · 2 years ago
Yeah this feels like a possible white collar crime. And with the US government out for blood right now about tech abuses, even a minor tax audit wouldn't be good for them.
thinkingemote · 2 years ago
The current popular theory is that those who were fired or left were taking OpenAI for-profit, and the board stuck with their original goal of being a non-profit.
Lacerda69 · 2 years ago
You mean the current completely made-up speculation by anons online?
s1artibartfast · 2 years ago
Nobody was "taking" anything for profit. That ship had sailed.

The OpenAI 501(c)3 already spun up a for-profit company in 2019 to do all the commercial work and take VC money.

tucnak · 2 years ago
how can people be so naïve... think about it. isn't it exactly the kind of spin that you would expect from "new" leadership? like, of course they're going to take the moral high ground like any new regime would.
justanotherjoe · 2 years ago
Probably the Microsoft thing and the direction Sam Altman is taking OpenAI. I imagine that caused a significant shift in workload and nature of work for the people in OpenAI.
fancyfredbot · 2 years ago
I don't see how it could be something like this. If Sam wanted to do this he wouldn't need to lie. I suspect Sam did something stupid and the board had no choice. I would be very surprised if they actually wanted to fire Sam.
frabcus · 2 years ago
Obvious wall would be financial - Sam arranged more Microsoft funding, diluting the non-profitness even more, and tried to force board's hand by high cash burn and persuasion.

Board had to act fast to fix it. And OpenAI changed enterprise pricing of API to be up front for cashflow related to that.

quickthrower2 · 2 years ago
Makes me wonder whether to keep building upon OpenAI, given that they have an API and it takes effort to build on that vs. something else. I am small fry, but maybe other people are wondering the same? Can they give reassurances about their products going into the future?
mebutnotme · 2 years ago
I’d recommend trying to build out your systems to work across LLMs where you can. Create an interface layer and, for now, maybe use OpenAI and Vertex as a couple of options. Vertex is handy: while not always as good, you may find it works well for some tasks, and it can be a lot cheaper for those.

If you build out this way, then when the next greatest LLM comes out you can plug it into your interface and switch over the tasks it's best at.
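The interface-layer idea can be sketched in a few lines. This is a hypothetical illustration, not any particular library's API: the provider classes and method names here are made up, and a real `OpenAIProvider` would wrap the vendor SDK.

```python
from abc import ABC, abstractmethod

class LLMProvider(ABC):
    """The one small surface the rest of the app depends on."""
    @abstractmethod
    def complete(self, prompt: str) -> str: ...

class EchoProvider(LLMProvider):
    """A stub provider: handy for tests and local development."""
    def complete(self, prompt: str) -> str:
        return f"echo: {prompt}"

class OpenAIProvider(LLMProvider):
    def complete(self, prompt: str) -> str:
        # A real implementation would call the vendor SDK here (omitted).
        raise NotImplementedError

def summarize(provider: LLMProvider, text: str) -> str:
    # Application code talks to the interface, never to a vendor SDK,
    # so swapping vendors is a one-line change at the call site.
    return provider.complete(f"Summarize: {text}")

print(summarize(EchoProvider(), "hello"))  # echo: Summarize: hello
```

The stub provider is also what makes the layer testable without burning API credits: your app logic runs against `EchoProvider` in CI, and the real provider only in production.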

quickthrower2 · 2 years ago
The problem is that swapping LLMs can require reworking all your prompts, and you may be relying on specific features of OpenAI. If you don't, then you are at a disadvantage, or at least slowing down your work.
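One way to contain that prompt rework, sketched here as a hypothetical pattern (the provider names and template text are illustrative, not from any real system): keep provider-specific prompt variants in one registry keyed by task, so call sites never embed vendor-specific wording.

```python
# Per-provider prompt templates for the same logical task.
# Only the templates change when a vendor is swapped in or out.
PROMPTS = {
    "openai": {"summarize": "Summarize the following text:\n{text}"},
    "vertex": {"summarize": "Provide a short summary of:\n{text}"},
}

def render(provider: str, task: str, **kwargs) -> str:
    """Look up the provider-specific template for a task and fill it in."""
    return PROMPTS[provider][task].format(**kwargs)

print(render("vertex", "summarize", text="the thread above"))
```

This doesn't solve reliance on provider-specific features (function calling, JSON modes, and so on), but it keeps the prompt-tuning cost of a switch localized to one file instead of scattered across the codebase.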
pjmlp · 2 years ago
Definitely, just like with games development, the key is to master how things work, not specific APIs.

AI tools will need a similar plugin like approach.

ramraj07 · 2 years ago
That would go about as well as trying to write a universal Android/iOS app, or writing ANSI SQL to work across database platforms. A bad idea in every dimension.
cryptoz · 2 years ago
Also same here. Actually currently staying up late Friday night hacking on OpenAI API projects (while waiting for SpaceX Starship launch, it's quite a day for high-tech news!) - and wondering if I should even bother. Of course I will keep hacking, but...still it makes you think. Which is a very unexpected feeling.

Hugely more interested in the open source models now, even if they are not as good at present. Because at least there is a near-100% guarantee that they will continue to have community support no matter what; the missing problem I suppose is GPUs to run them.

quickthrower2 · 2 years ago
Totally. I'll keep going too. I am just putting a nice GUI wrapper around the new Assistant stuff which looks damn cool. Project is half "might make some bucks" and half "see if this is good to use in the day job".
siva7 · 2 years ago
I am wondering the same. It's a PR disaster for their dev community, and I'm not even sure that Sutskever isn't secretly happy about this.
karmasimida · 2 years ago
I am at a loss. Not afraid, just lost.

Don't know what to do. Is my investment into their API still worth it? It feels very unstable at this moment.

mark_l_watson · 2 years ago
Take my opinion with some skepticism because I am retired and the massive amount of time I put into LLMs (and deep learning in general) is only for my own understanding and enjoyment:

In all three languages I frequently use (Common Lisp, Python, and Racket) it is easy to switch between APIs. You can also use a library like LangChain to make switching easier.

For people building startups on OpenAI-specific APIs: they can certainly protect themselves by using Azure as an intermediary. Microsoft is in the "stability business."

croes · 2 years ago
>Can they give reassurances about their products going into the future?

They wouldn't have been able to do that even before Sam's dismissal.

bananapub · 2 years ago
It's business and systems design 101 to actually worry about your dependencies. Regardless of this drama, you should have thought about what you'd do if OpenAI shut down, became your competitor, got worse, or got bought by MS or something.

> Can they give reassurances about their products going into the future?

emotional comfort is not the thing you should be looking for mate.

robbomacrae · 2 years ago
I'm in this boat. Not for my startup, but for side projects I was absolutely pinning my hopes on them unlocking access to tools and relaxing some of their restrictions in the near future... a future which now seems unlikely.

iamflimflam1 · 2 years ago
Indeed. It was a big enough battle to convince execs that building on top of OpenAI was ok. Now that conversation is pretty much impossible. You have the Microsoft offering, but to most muggles that just looks like them reselling OpenAI.

The board of OpenAI should have been replaced by adults a long time ago.