Jakub Pachocki, director of research; Aleksander Madry, head of the AI risk team; and Szymon Sidor.
Scoop: theinformation.com
Paywalled link: https://www.theinformation.com/articles/three-senior-openai-...
Submitting the information here since paywalled links are not permitted.
I predict the board will be fired, and Sam and the team will return and try to contain the situation.
Do users care about that? I care about feature stability and avoiding enshittification.
That's why I usually prefer open models over depending on OpenAI's API. This drama has me curious about the outcome; if it leads to more openness from OpenAI, it may win me back as a user.
Maybe not the individual users, but the enterprises/startups that build around OpenAI.
I pay for ChatGPT, and I care.
What percentage of users care, and how many in absolute numbers, is a matter of debate, but this nonsense (and it is nonsense) is antithetical to building a strong, trusting relationship with AI. At the very least it's antithetical to their mission.
If we take a step back, the benchmark now is to be actually transparent. Radically transparent. Like when Elon purchased Twitter and aired all the dirty laundry in the Twitter Files. The cowards at OpenAI hiding behind lawyers advising them of lawsuits are just that: cowards. Leaders stand by their principles in the darkest of times, regardless of whatever highfalutin excuses one could hide behind. It's pathetic and embarrassing. A lawsuit at a heavily funded tech startup at this level is not even a speeding ticket in the grand scheme of things.
95%+ of tech startup wisdom from the last decade is completely irrelevant now. We're living in a new era. The idea that people will forget this in a month doesn't hold for AI. It holds for food delivery apps, not for AI tech the public believes (rightly or wrongly) might be an existential threat to their prosperity and economic future.
The degree of leadership buffoonery taking place at OpenAI is not acceptable and one must be genuinely stupid to defend it. Everyone involved should resign if they have any self-respect.
My prognostication is that the market will express its displeasure in the coming weeks and months, setting the tone for everyone else going forward. How the hell is anyone supposed to trust OpenAI after this?
The board and Ilya will all be gone within a month.
The board members have made themselves un-appointable to any future company's board, but they are not resigning.
In relation to other comments here: there is "coding" and there is "God's-spark genius of algorithms" work. That is what made the magic of OpenAI. Believe me, those guys were not "just coding". My bet is that this could be all about some research directions that were "shielded" by Sam.
I really don't buy that for a second. Most of OpenAI's value compared to any competitor comes from the money they spent hiring humans to trawl through training data.
They just hadn't -- and still haven't -- figured out how to commercialize it. I don't think they'll be the ones to crack that nut, either. IMO they are too obsessed with "safety" to release something useful, and they also can't reasonably deploy a service like ChatGPT at their scale because the costs are too high.
With OpenAI imploding, this whole race just got a lot more interesting though...
As far as I can tell, all three of them are of Polish descent. For all we know they might have decided to resign together even if only one of them had a personal issue with OpenAI's vision. We will find out soon enough whether they will just found their own competing startup, based on OpenAI's "secret sauce" or not.
I'm not saying this will happen, but it seems to me like an incredibly silly move.
If he didn't manage to keep OpenAI consistent with its founding principles and all interests aligned, then wouldn't booting him be right? The name OpenAI had become a source of mockery. If Altman/Brockman take employees for a commercial venture, it just seems to prove their insincerity about the OpenAI mission.
Of course, not for the petty reasons that you list. Sama has comprehensively explained why the original open-source model did not work, and so far the argument (it's very expensive) seems to align with a reality where every single semi-competitive available LLM (since they all pale in comparison to GPT-4 anyway) has been trained with a whole lot of corporate money. Meta side-chaining "open" models with their social media ad money is obviously not a comparable business, or any business. I get that the HN crowd + Elon are super salty about that, but it's just a bit silly.
No, Sam's failure as CEO was not doing what was necessary to align the right people in the company with the course he had decided on, and losing control over that.
We barely understand how consciousness works; we should stop talking about "AGI". It is just empty, ridiculous techno-babble. Sorry for the harsh language; there's no nice way to drive this point home.
Should people discussing nuclear energy not talk about fusion?
And Microsoft is risk-averse enough that I think they do care about AI safety, even if only from a "what's best for the business" standpoint.
Tbh, I don't care if we get AGI. There'll be a point in the future where we have AGI and the technology is accessible enough that anybody can create one. We need to stop this pointless bickering over this sort of stuff because, as usual, the larger problem is always going to be the human using the tool rather than the tool itself.
They may be geniuses, but AGI is an idea whose time has come: geniuses are no longer required to get us there.
The Singularity train has already left the station.
Inevitability.
Now humanity is just waiting for it to arrive at our stop.
I think AGI is going to arrive via a different technology, many years in the future still.
LLMs will get to the point where they appear to be AGI, but only in the same way the latest 3D rendering technology can create images that appear to be real.
Not that I think AGI is possible or desirable in the first place, but that's a different discussion.
I can't imagine him doing that. He cares about getting well-aligned AGI, and profit motives fuck that up.
https://www.semanticscholar.org/author/J.-Pachocki/2713380?s...
As @eachro pointed out, Aleksander Madry is on leave from his MIT professorship. His publications:
https://madry.mit.edu/
If an AI said that, we'd be calling it "capability gain" and thinking it was a huge risk.
Maybe Ilya discovered something as head of AI safety research, something bad, and they had to act on it. From the outside it looks as if they are desperately trying to gain control. Maybe he got confirmation that LLMs are a little bit conscious, LOL. No, I am not making this up: https://twitter.com/ilyasut/status/1491554478243258368
With evidence, or is this the kind of pure speculation the media indulge in when they have no information and have to appear knowledgeable?
https://twitter.com/sama/status/1635700851619819520
If they waited for the GPT-5 pretraining to finish, then they minimized the cost of losing Altman and the engineers.
The whole secrecy, compartmentalization, and urgency of their actions can only be explained by their backs being against the wall. Otherwise, if it were about ethics, future plans, or anything political, it would have happened at a slower pace.
I hope they involved their investors beforehand, but I don't know if they had time; OpenAI probably still exists and evolves on other people's money. But what else could they do?
- He makes a misleading statement to the board.
- The board puts this in a regulatory filing (e.g., with the SEC).
- The board finds out this is a legally critical statement.
- They _have_ to fire him in order to avoid becoming accomplices.
The reverse of the other Sam situation.
The OpenAI 501(c)(3) already spun up a for-profit company in 2019 to do all the commercial work and take VC money.
The board had to act fast to fix it. And OpenAI changed its enterprise API pricing to be paid up front, for cash flow related to that.
If you build it out this way, then when the next great LLM comes out you can plug it into your interface and switch over the tasks it's best at.
AI tools will need a similar plugin-like approach.
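Here's a minimal sketch of that plugin-like approach in Python. Everything here (TaskRouter, EchoBackend, the model names) is illustrative, not from any real library; the point is just that each task is routed to a named backend, so moving one task to a new model is a one-line change:

    from typing import Protocol

    class CompletionBackend(Protocol):
        def complete(self, prompt: str) -> str: ...

    class EchoBackend:
        """Stand-in for a real provider client (OpenAI, a local model, etc.)."""
        def __init__(self, name: str) -> None:
            self.name = name

        def complete(self, prompt: str) -> str:
            return f"[{self.name}] response to: {prompt}"

    class TaskRouter:
        """Maps task names to backends, so one task can move to a
        new model without touching the rest of the application."""
        def __init__(self) -> None:
            self._routes: dict[str, CompletionBackend] = {}

        def assign(self, task: str, backend: CompletionBackend) -> None:
            self._routes[task] = backend

        def run(self, task: str, prompt: str) -> str:
            return self._routes[task].complete(prompt)

    router = TaskRouter()
    router.assign("summarize", EchoBackend("gpt-4"))
    router.assign("classify", EchoBackend("local-llama"))
    # When a better model appears for summarization, only this line changes:
    router.assign("summarize", EchoBackend("next-great-llm"))
    print(router.run("summarize", "the quarterly report"))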
Hugely more interested in the open-source models now, even if they are not as good at present, because at least there is a near-100% guarantee that they will continue to have community support no matter what. The remaining problem, I suppose, is GPUs to run them.
Don't know what to do. Is my investment in their API still worth it? It feels very unstable at the moment.
In all three languages I frequently use (Common Lisp, Python, and Racket) it is easy to switch between APIs. You can also use a library like LangChain to make switching easier.
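For example, in Python with LangChain (a sketch assuming the chat-model interface as of late 2023; exact imports vary by version), the application code depends only on the common interface, so the provider is swapped where the model object is constructed:

    from langchain.chat_models import ChatOpenAI  # or ChatAnthropic, etc.
    from langchain.schema import HumanMessage

    def summarize(llm, text: str) -> str:
        # Depends only on the shared chat-model interface, not the provider.
        return llm([HumanMessage(content=f"Summarize: {text}")]).content

    llm = ChatOpenAI(model_name="gpt-4")  # swap in another provider here
    print(summarize(llm, "OpenAI's board fired its CEO over the weekend."))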
People building startups on OpenAI-specific APIs can certainly protect themselves by using Azure as an intermediary. Microsoft is in the "stability business."
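As a sketch, assuming the openai 1.x Python client and placeholder Azure resource names, pointing the same chat-completion code at an Azure-hosted deployment looks like this:

    from openai import AzureOpenAI

    client = AzureOpenAI(
        azure_endpoint="https://YOUR-RESOURCE.openai.azure.com",  # placeholder
        api_key="YOUR-AZURE-KEY",                                 # placeholder
        api_version="2023-07-01-preview",
    )

    resp = client.chat.completions.create(
        # Azure routes by your deployment name rather than a raw model ID.
        model="YOUR-GPT4-DEPLOYMENT",
        messages=[{"role": "user", "content": "Hello from Azure"}],
    )
    print(resp.choices[0].message.content)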
They wouldn't have been able to do that even before Sam's dismissal.
> Can they give reassurances about their products going into the future?
Emotional comfort is not the thing you should be looking for, mate.
The board of OpenAI should have been replaced by adults a long time ago.