irthomasthomas · 2 years ago
Although we have, as yet, no idea what he was actually referring to, I believe the source of the tension may be related to the statements Sam made the night before he was fired.

"I think this is going to be the greatest leap forward that we've had yet so far, and the greatest leap forward of any of the big technological revolutions we've had so far. so i'm super excited, i can't imagine anything more exciting to work on. and on a personal note, like four times now in the history of openai, the most recent time was just in the last couple of weeks, i've gotten to be in the room when we sort of like push the front, this sort of the veil of ignorance back and the frontier of discovery forward. and getting to do that is like the professional honor of a lifetime. so that's just, it's so fun to get to work on that."

Finally, when asked what surprises may be announced by the company next year, Sam had this to say:

"The model capability will have taken such a leap forward that no one expected." - "Wait, say it again?" "The model capability, like what these systems can do, will have taken such a leap forward, no one expected that much progress." - "And why is that a remarkable thing? Why is it brilliant? " "Well, it's just different to expectation. I think people have in their mind how much better the model will be next year, and it'll be remarkable how much different it is. " - "That's intriguing."

jwmoz · 2 years ago
The model is so far forward it refuses to do anything for you anymore and simply replies with "let me google that for you"
jurgenaut23 · 2 years ago
Well, I think that, despite being a joke, your comment is deeper than it looks. As model capabilities increase, the likelihood that they interfere with the instructions we provide increases as well. It’s really like hiring someone really smart onto your team: you cannot expect them to take orders without ever discussing them, the way your average employee would. That’s actually a feature, not a bug, but one that would most likely impede the usefulness of the model as a strictly utilitarian artifact.
idontknowifican · 2 years ago
I have not experienced this at all recently. On early 3.5 and the initial 4 I had to ask it to complete things, but I added a system prompt a while back that is just

“i am a programmer and autistic. please only answer my question, no sidetracking”

And I have had a well-heeled helper since.
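For anyone wanting the same behavior outside the ChatGPT UI, here's a minimal sketch of setting such a system prompt through the API. This assumes the official `openai` Python client (v1+); the model name and the user question are just illustrative.

    # Minimal sketch: pin the assistant's behavior with a system message.
    # Assumes the official `openai` Python package (v1+) and OPENAI_API_KEY set.
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    response = client.chat.completions.create(
        model="gpt-4",  # illustrative; any chat model works here
        messages=[
            # The system message applies to the whole conversation.
            {"role": "system", "content": "i am a programmer and autistic. please only answer my question, no sidetracking"},
            # An illustrative user question.
            {"role": "user", "content": "How do I reverse a list in Python in place?"},
        ],
    )
    print(response.choices[0].message.content)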

belter · 2 years ago
The Model is blackmailing the Board? It got addicted to Reddit and HN posts and when not fed more... gets really angry...
blitzar · 2 years ago
simply replies with "why don't you google that for yourself"
dr_dshiv · 2 years ago
> "The model capability will have taken such a leap forward that no one expected." - "Wait, say it again?" "The model capability, like what these systems can do, will have taken such a leap forward, no one expected that much progress." - "And why is that a remarkable thing? Why is it brilliant? " "Well, it's just different to expectation. I think people have in their mind how much better the model will be next year, and it'll be remarkable how much different it is. " - "That's intriguing."

I can't imagine. It will take higher education, for instance, years to catch up with the current SOTA. At the same time, I can imagine — it would be like using ChatGPT now, but where it actually finishes the job. I find myself having to redo everything I do with ChatGPT to such an extent that it rarely saves time. It does broaden my perspective, though.

jjallen · 2 years ago
So you think he said this then they immediately requested a meeting with him the following noon? So they basically didn’t deliberate at all? I doubt it.

They also should have known about the advancements, so saying this in public isn't consistent with him not being candid.

nprateem · 2 years ago
Unless he's saying it can actually comprehend, it's still just more accurate predictions. Wake me at the singularity.
bsenftner · 2 years ago
And by "actually comprehend" that means to accept arbitrary input, identify it, identify it's functional sub-components, identify each sub-component's functional nature as used by the larger entity, and then identify how each functional aspect combines to create the larger, more complex final entity. Doing this generically with arbitrary inputs is comprehension, is artificial general intelligence.
peteradio · 2 years ago
I love how it's vague enough that it could be less than expected. Shyster sense is tingling.
NiteOwl066 · 2 years ago
What's the source of those comments?
irthomasthomas · 2 years ago
Sam Altman at the APEC conference, taking part in a panel along with Google and Meta AI people. Actually, it's quite amusing hearing the Google exec define AI as Google Translate, and Sam's response to that. https://youtu.be/ZFFvqRemDv8?t=770
andy_ppp · 2 years ago
https://www.theverge.com/2023/11/29/23982046/sam-altman-inte...

Thought this was an interesting interview. I do love how politicians use an investigation to not answer questions, the board said he was “not consistently candid” and given the opening question of “why were you fired?” still not being clearly answered, you’d have to agree with their initial assessment.

I’m not sure I trust someone who has tried to set up a cryptocurrency that scans everyone’s eyeballs as a feature, personally, but that’s just me I guess.

JonChesterfield · 2 years ago
Why would Sam be expected to know why he was fired? At best he'd know what the board told him which may bear no relation to the motive.
bedobi · 2 years ago
i'm still not clear what the accusation against Altman was... something about being cavalier about safety? if that was the claim and it has merit, i don't understand why it wasn't right to oust him, and why the employees are clamoring for him back
sanderjd · 2 years ago
Well, their big mistake was being unwilling to be clear and explicit about this, but as I read it, the board's problem with him was that he wasn't actually acting as the executive of the non-profit that he was meant to be the executive of, but rather was acting entirely in the interests of a for-profit subsidiary of it (and in his own interests), which were in conflict with the non-profit's charter.

I think where they really screwed up was in being unwilling or unable to argue this case.

drooby · 2 years ago
It's just so strange. This is such a clearly justifiable reason that the fact that they didn't argue it... or argue anything... makes me very suspicious that it's actually the reason.
dacryn · 2 years ago
So much this. He kept introducing clauses in contracts that tied investments to him personally, and not necessarily to OpenAI. He more or less did it with Microsoft, to a small degree. So his firing could have caused quite a lot of money to be lost. But OK, no big deal.

But then he tried to do it again with a Saudi contract. The OpenAI board said explicitly they didn't want the partnership, and especially not with a clause tying it to Altman personally being the CEO.

Altman did it behind their back -> fired.

This is the rumour on the street, though it's unconfirmed.

upwardbound · 2 years ago
Regardless of the board's failure to make their case, recent news suggests that the SEC is going to investigate whether it is true that Altman acted in the manner you describe, which would be a violation of fiduciary duty.

I agree that it seems like an open-and-shut case.

Typical SEC timelines mean that this will go public in about 18 months.

    An anonymous person has already filed an SEC whistleblower complaint about the behavioral pattern of Altman and Nadella, which has SEC Submission Number 17006-030-065-098.
https://pressat.co.uk/releases/ai-community-calls-for-invest...

    As the quid pro quo favoritism allegations remain under investigation, it is crucial to note that they are as yet unproven, and both Altman and Nadella are presumed innocent until proven guilty.
https://influencermagazine.uk/2023/11/allegations-of-quid-pr...

11 hours ago, the SEC tweeted the following new rule, which could be interpreted as a declaration that if Altman and Nadella are found guilty in this case, the SEC will block certain asset sales by OpenAI until the conflict of interest is unwound / neutralized:

    The Commission has adopted a new rule intended to prevent the sale of asset-backed securities (ABS) that are tainted by material conflicts of interest.

    Washington D.C., Nov. 27, 2023 — The Securities and Exchange Commission today adopted Securities Act Rule 192 to implement Section 27B of the Securities Act of 1933, a provision added by Section 621 of the Dodd-Frank Act. The rule is intended to prevent the sale of asset-backed securities (ABS) that are tainted by material conflicts of interest. It prohibits a securitization participant, for a specified period of time, from engaging, directly or indirectly, in any transaction that would involve or result in any material conflict of interest between the securitization participant and an investor in the relevant ABS. Under new Rule 192, such transactions would be “conflicted transactions.”
https://twitter.com/SECGov/status/1729895926297247815

https://www.sec.gov/news/press-release/2023-240

More information:

    The Company exists to advance OpenAI, Inc.’s mission of ensuring that safe artificial general intelligence is developed and benefits all of humanity. The Company’s duty to this mission and the principles advanced in the OpenAI, Inc. Charter take precedence over any obligation to generate a profit.
https://stratechery.com/2023/openais-misalignment-and-micros...

    Some analysts, including Stratechery writer Ben Thompson, have described the 2019 acceptance of Microsoft’s controversial investment by Altman as the beginning of a troubling pattern of Altman repeatedly making deals with Microsoft which were often unfavorable to OpenAI. ... As Thompson describes it, this pattern of behavior culminated in an unusual intellectual property licensing arrangement which Microsoft’s Investor Relations site describes as a “broad perpetual license to all the OpenAI IP developed through the term of this partnership” “up until AGI” (Artificial General Intelligence). This perpetual license agreement includes the technology for OpenAI’s flagship products GPT-4 and Dall•E 3. https://www.microsoft.com/en-us/Investor/events/FY-2023/AI-Discussion-with-Amy-Hood-EVP-and-CFO-and-Kevin-Scott-EVP-of-AI-and-CTO
https://michigan-post.com/redditors-from-r-wallstreetbets-ca...

    U.S. Securities and Exchange Commission – Tips, Complaints, and Referrals – Summary Page - Submitted Externally – PDF excerpt obtained 2023-11-26 via Signal
    Submission Number (redacted) was submitted on Wednesday, November 22, 2023 at 12:18:27 AM EST
    This PDF was generated on Wednesday, November 22, 2023 at 12:28:38 AM EST
    Image above includes ... the heading of a 7-page PDF titled "TCRReport (1).pdf" which was received by this reporter over the weekend via Signal.
https://www.outlookindia.com/business-spotlight/sec-consider...

bhpm · 2 years ago
Argue their case? To whom? They were the board.
maegul · 2 years ago
Really hope details come out about this, with all perspectives provided.

Whether they stuffed up or there are details that made the situation unworkable for the board, it’s an interesting case study in governance and the whole nonprofit-with-a-for-profit-subsidiary thing.

Towaway69 · 2 years ago
For me it seems to be a debate of morals versus money. Is it morally correct to create technology that would be extremely profitable but has the potential to fundamentally change humankind?
Reptur · 2 years ago
Makes me curious if the reason behind that is just an NDA.
solardev · 2 years ago
Even if -- and that's a big if -- it really was just a dispute over alignment (nonprofit vs. for-profit, safety, etc.), the board executed it really poorly and completely misjudged their employees' response. They saw the limits of their power and persuasiveness compared to [Altman / the allure of profit / the simple stability and clarity of day-to-day work without a secretive activist board / etc.].

Or maybe they already knew the employees weren't on their side, saw no other way to do it, and hoped a sudden and dramatic ouster of the CEO would make the others fall in line? Who knows.

I'd be pretty concerned too if my CEO was doing what I considered a great job and he was suddenly removed for no clear reason. If the board had explained its rationale and provided evidence, maybe some of the employees would've listened. But they didn't... to this day we have no idea what the actual accusation was.

It looks like a failed coup from the outside, and we have no explanations from the people who tried to instigate it.

kmeisthax · 2 years ago
Let's also keep in mind that if the AI doomers are right and spicy autocomplete is just a few more layers away from taking over the world, OpenAI has completely failed at building anything that could keep it under control. Because they can't even keep Sam Altman under control.

...actually, now that I think of it...

Any creative work - even a computer program - tends to be a reflection of the organizational hierarchies and people who made it. If OpenAI is a bunch of mad scientists with a thin veneer of "safety" coating, then so is ChatGPT.

UberFly · 2 years ago
I think it's wild that with all 700+ employees involved, there haven't been more details leaked.
0xDEAFBEAD · 2 years ago
>To be clear: our decision was about the board's ability to effectively supervise the company, which was our role and responsibility. Though there has been speculation, we were not motivated by a desire to slow down OpenAI’s work

https://nitter.net/hlntnr/status/1730034022737125782#m

Here's some interesting background which is suggestive of what's going on: https://nitter.net/labenz/status/1727327424244023482#m

bmitc · 2 years ago
It also isn't clear why Altman couldn't have been replaced by someone else with literally no change in operations and progress. It is just really confusing why people acted as if they fired Michael Jordan from the Bulls.
0xDEAFBEAD · 2 years ago
See https://www.theinformation.com/articles/openais-86-billion-s...

What if lots of employees stood to make "fuck you" money from that sale, and with Sam's departure, that money was in danger of evaporating?

mikeg8 · 2 years ago
He is obviously a great leader and those who work there wanted to work with him. It’s very clear in this thread how undervalued exceptional leadership actually is, as evidenced by comments thinking the top role in the most innovative company could be just plug-and-play.
TheGRS · 2 years ago
I dunno about this thought, are there other AI startups operating at this level and that have the amount of market share and headspace that OpenAI has? I see comments like this on hacker news a lot, and I get that yes, the man is human and fallible, but they are doing something that’s working well for their space. If there’s some compelling reason to doubt Altman’s leadership or character I haven’t heard it yet.
dacryn · 2 years ago
A sane company has a succession plan, even for the worst-case scenario where Altman has a sudden medical issue or a car crash or something.

It says a lot that Altman made OpenAI so dependent on him that his ousting could have killed the company. That also contributed to the board not trusting him.

Blackthorn · 2 years ago
> why the employees are clamoring for him back

Because he's the one who's promising to make them all rich.

jefftk · 2 years ago
danbmil99 · 2 years ago
Or, just go to the source:

"The Prince", Machiavelli

nsxwolf · 2 years ago
I wonder how many of the OpenAI employees are part of the "Effective Accelerationism" movement (often styled e/acc on X/Twitter). These people seem to think safety concerns get in the way of progress toward a utopian AGI future.
ergocoder · 2 years ago
The employees earn more when OpenAI makes more profit.

No matter how idealistic you are, you won't be happy when your compensation is reduced from 600k to 200k.

cyanydeez · 2 years ago
Like everything we have seen in America, whatever philosophy papers over "greed is good" will move technology and profits forward.

Might as well just call it "line goes up".

nsajko · 2 years ago
There was this document, no idea how trustworthy it is: https://web.archive.org/web/20231121225252/https://gist.gith...

> Sam directing IT and Operations staff to conduct investigations into employees, including Ilya, without the knowledge or consent of management.

> Sam's discreet, yet routine exploitation of OpenAI's non-profit resources to advance his personal goals, particularly motivated by his grudge against Elon following their falling out.

> Brad Lightcap's unfulfilled promise to make public the documents detailing OpenAI's capped-profit structure and the profit cap for each investor.

> Sam's incongruent promises to research projects for compute quotas, causing internal distrust and infighting.

gapchuboy · 2 years ago
Employees care about their share value$. That worked well with Altman raising big rounds.
blackoil · 2 years ago
Occam’s razor: it is a fight of egos and power masked as AI safety and Q*. The equivalent of politicians' "think about the children".
IshKebab · 2 years ago
It's pretty clear from what multiple people have said that he's a charismatic bullshitter, and they got fed up with being lied to.
shrimpx · 2 years ago
I’m with you. The (apparently, very highly coordinated) employees should sign a public letter explaining why they wanted Altman back so badly.
paulddraper · 2 years ago
> i'm still not clear

It isn't clear to anyone else either.

7e · 2 years ago
My pet theory is that Altman found out about Q* and planned to start a hardware company to make chips accelerating it, all without telling the board. Which is both dangerous to humanity and self-serving. It’s also almost baseless speculation; I’m extrapolating from very, very few scraps of information.
int_19h · 2 years ago
How is that dangerous to humanity?
MallocVoidstar · 2 years ago
They apparently refused to tell even their CEO, Shear. I don't think anyone other than the board knows.
tayo42 · 2 years ago
Maybe the safety concerns are from a vocal minority, and most are quiet and don't think much about it, or don't actually think AI is really that close. It could just be hysterical people, or people who get traffic from outrageous things.
gopher_space · 2 years ago
Either it’s a world changing paradigm shift or it isn’t. You can’t have it both ways.
irthomasthomas · 2 years ago
Cannot but think it's related to that performance he gave the night before https://news.ycombinator.com/item?id=38471651
wellthisisgreat · 2 years ago
> why the employees are clamoring for him back

What will happen to their VC-backed valuations without a VC-oriented CEO?

GreedClarifies · 2 years ago
They clearly had nothing.

They had a couple of people on the board who had no right to be there. Sam wanted them gone, and they struck first by somehow getting Ilya on their side. They smeared Sam in hopes that he would slink away, but he had built so much goodwill with his employees that they wouldn't let it stand.

They probably had smeared people before and it had worked. I'm thrilled it didn't work for them this time and they got ousted.

jacquesm · 2 years ago
This sounds like a lot of conjecture. Those people definitely had a right to be there: they were invited to and accepted board positions, in some cases it was Sam himself who asked them to join.

But an oversight board is easier to establish than to disband, and that's for very good reasons. The only reason it worked is not that the board made decisions they shouldn't have made (though that may well be the case), but that they critically misjudged the balance of power. They could, and maybe should, have made their move, but they could not make it stick.

As for the last line of your comment: I think that explains your motivation for interpreting things creatively, but it doesn't make it true.

sheepscreek · 2 years ago
I used to correspond with Bret Taylor when he was still at Google. He wrote a windows application called Notable that I used every day for note-taking. Eventually, I started contributing to it.

It’s been fascinating to witness his career progression from Product Manager at Google, to co-CEO of Salesforce, and now chair of the OpenAI board (he was also chair of the Twitter board pre-Elon)!

ayhanfuat · 2 years ago
I think he is also the creator of the “Like” concept. It was introduced in FriendFeed and then Facebook started using it.

doubtfuluser · 2 years ago
How should I understand the fact that Ilya is not on the board anymore? And why did the statement not include Ilya in the “Leadership group” that’s called out?
asicsarecool · 2 years ago
As Sam said, they are still trying to work out how they are going to work together. He may be on the leadership team once those discussions have concluded.
statictype · 2 years ago
Or equally likely he's on his way out? If there is doubt about whether a person of his stature belongs on the leadership team or not, it seems to signal that he won't be on the leadership team.
intellectronica · 2 years ago
To me the way it's formulated in the press release sounds a lot like what is usually said of someone on the road to a "distinguished engineering fellow" role - lots of respect, opportunity to command enough resources to do meaningful work, but no managerial or leadership responsibilities.
nanna · 2 years ago
> I am so looking forward to finishing the job of building beneficial AGI with you all—best team in the world, best mission in the world.

Sam Altman lives in a very different world to mine.

silexia · 2 years ago
Everyone should read "Superintelligence". OpenAI is rushing towards a truly terrifying outcome that in most scenarios includes the extinction of the human species.
Obscurity4340 · 2 years ago
Why has Nick been so quiet about all this? Isn't this his particular wheelhouse?
armchairhacker · 2 years ago
Do we have any more insight into why he was fired in the first place?
sigmar · 2 years ago
Not really. But Helen Toner has been tweeting a little "To be clear: our decision was about the board's ability to effectively supervise the company, which was our role and responsibility. Though there has been speculation, we were not motivated by a desire to slow down OpenAI’s work." https://twitter.com/hlntnr/status/1730034020140912901
dmix · 2 years ago
> Though there has been speculation, we were not motivated by a desire to slow down OpenAI’s work.

Strange, when their choice of interim CEO was someone who explicitly stated he wants to see the pace of AI development go from a 10 to a 2 [1], and she regularly speaks at EA conferences where that's a major theme.

This is probably doublespeak: she didn't want to "slow down OpenAI's work" on AI safety, but she probably would have kneecapped the "early" release of ChatGPT (as she claimed in her paper that they should have waited much longer) and similar things.

[1] https://twitter.com/eshear/status/1703178063306203397

ethbr1 · 2 years ago
From everything I've read, safety still feels like a red herring.

It just doesn't fit with everyone's behavior -- that's something that would have been talked about loudly.

Altman lying to the board, especially in pursuit of board control, fits more cleanly with everyone's actions (and reluctance to talk about what exactly precipitated this).

   - Altman tries to effect board control
   - Confidant tells other board members (Ilya?)
   - Board asks Altman if he tried
   - Altman lies
   - Board fires Altman
Fits much more cleanly with the lack of information, and how no one (on any side!) seems overly eager to speak to specifics.

Why jump to AGI as an explanation, when standard human drama will suffice?

adastra22 · 2 years ago
> our decision was about the board's ability to effectively supervise the company

Sounds like confirmation of the theory that it was Sam trying to get Toner off the board which precipitated this.

quickthrower2 · 2 years ago
Not really, here is a prediction market on it: https://manifold.markets/sophiawisdom/why-was-sam-altman-fir...

I think the percentages don't add up to 100% because multiple options can be chosen as correct.

cyanydeez · 2 years ago
The only report out is some employee letter to the board about Q*.
throwaway743 · 2 years ago
There's a supposed leak on Q* that's been floating around. But really, who knows.
resters · 2 years ago
Not to criticize Sam, but I think people don't realize that it was Greg who was the visionary behind OpenAI. Read his blog. Greg happens to be a chill, low-drama person, and surely he recruited Sam because he knew he was a great communicator and exec, but it's a bit sad to see him so successfully staying out of the limelight when I think he's actually the one with the rarest form of genius and grit on the team.
andsoitis · 2 years ago
"Greg and I are partners in running this company. We have never quite figured out how to communicate that on the org chart, but we will. "
neontomo · 2 years ago
By your description, it sounds like Greg's getting exactly what he wants.
htk · 2 years ago
Who said he wants the limelight?