pasltd · 2 years ago
dang · 2 years ago
I moved your comment from https://news.ycombinator.com/item?id=38372541 to here, where it seems like it can help more readers. I hope that's ok!
jessenaser · 2 years ago
Thank you.
gwern · 2 years ago
None of the comments thus far seem to clearly explain why this matters. Let me summarize the implications:

Sam Altman expelling Toner with the pretext of an inoffensive page (https://cset.georgetown.edu/wp-content/uploads/CSET-Decoding...) in a paper no one read* would have given him a temporary majority with which to appoint a replacement director, and then further replacement directors. These directors would, naturally, agree with Sam Altman, and he would have a full, perpetual board majority - the board, which is the only oversight on the OA CEO. Obviously, as an extremely experienced VC and CEO, he knew all this and how many votes he (thought he) had on the board, and the board members knew this as well - which is why they had been unable to agree on replacement board members all this time.

So when he 'reprimanded' her for her 'dangerous' misconduct and started talking seriously about how 'inappropriate' it was for a 'board member' to write anything which was not cheerleading, and started leading discussions about "whether Ms Toner should be removed"...

* I actually read CSET papers, and I still hadn't bothered to read this one, nor would I have found anything remarkable about that page, which Altman says was so bad that she needed to be expelled immediately from the board.

gizmo · 2 years ago
Okay, let's stipulate that Sam was maneuvering to get full board control. Then the independent directors were probably worried that -- sooner or later -- Sam would succeed. With Sam fully in charge the non-profit goals would be completely secondary to the commercial goals. This was unacceptable to the independent directors and Ilya and so they ousted Sam before he could oust them?

That's a clear motive. Sam and the independent directors were each angling to get rid of the other. The independent directors got to a majority before Sam did. This at least explains why they fired Sam in such a haphazard way. They had to strike immediately before one of the board members got cold feet.

skygazer · 2 years ago
Besides explaining the haphazardness, that would also nicely explain why they didn't want to elaborate publicly on why they "had" to let him go -- "it was either him or us" wouldn't have been popular given his seeming popularity.
JacobThreeThree · 2 years ago
>This at least explains why they fired Sam in such a haphazard way.

The timing of it makes sense, but the haphazard way it was done is only explained by inexperience.

Deleted Comment

hn_throwaway_99 · 2 years ago
I mean, here is a relevant passage from the paper, linked in another comment: https://news.ycombinator.com/item?id=38373684

If I were the CEO of OpenAI, I'd be pretty pissed if a member of my own board was shitting on the organization she was a member of while puffing up a competitor. But the tone of that paper makes me think that the schism must go back much earlier (other reporting said things really started to split a year ago when ChatGPT was first released), and it sounds to me like Toner was needling because she was pissed with the direction OpenAI was headed.

I'm thinking of a good previous comment I read when the whole Timnit Gebru situation at Google blew up and the Ethical AI team at Google was disbanded. The basic argument was on some of the inherent incompatibilities between an "academic ombudsman" mindset, and a "corporate growth" mindset. I'm not saying which one was "right" in this situation given OpenAI's frankenstein org structure, but just that this kind of conflict was probably inevitable.

BryantD · 2 years ago
Just spot checking: did anyone comment on this paper when it was published? Did any media outlet say “hey, a member of the OpenAI board is criticizing OpenAI and showing a conflict of interest?” Did any of the people who cover AI (Zvi, say) discuss this as a problem?

These are serious questions, not gotchas. I don’t know the answers, and I think having those answers would make it easier to evaluate whether or not the paper was a significant conflict of interest. The opinions we have formed now are shaped by our biases about current events.

It didn’t make HN.

lossolo · 2 years ago
> If I were the CEO of OpenAI, I'd be pretty pissed if a member of my own board was shitting on the organization she was a member of while puffing up a competitor.

Considering what's in the charter, it seems like she didn't do anything wrong?

> We will attempt to directly build safe and beneficial AGI, but will also consider our mission fulfilled if our work aids others to achieve this outcome.

>We commit to use any influence we obtain over AGI’s deployment to ensure it is used for the benefit of all, and to avoid enabling uses of AI or AGI that harm humanity or unduly concentrate power.

more here: https://news.ycombinator.com/item?id=38372769

losvedir · 2 years ago
This is really interesting. It makes perfect sense that they sat at 6 board members for 9 months not because Sam and the others didn't see the implications, but because they saw them all too well and were gridlocked.

But then it gets interesting inferring things from there. Obviously sama and gdb were on one side (call it team Speed), and Helen Toner on the other (team Safety). I think McCauley is with Toner (some connection I read about which I don't remember now: maybe RAND or something?).

But what about D'Angelo and Ilya? For the gridlock, one would have to be on each side. Naively I'd expect tech CEO to be Speed and Ilya Safety, but what would have precipitated the switch Friday? If D'Angelo wanted to implode the company due to conflict of interest, wouldn't he just have sided with Team Safety earlier?

But maybe Team Speed vs Team Safety isn't the same as Team Fire Sam vs Team Don't. I could see that one as Helen, Tasha, and Adam vs Sam, GDB, and Ilya. And, that also makes sense to me in that Ilya seems the most likely to flip for reasons, which also lines up with his regret and contrition. But then that raises the question of what made him flip? A scary exchange with prototype GPT5, which made him weigh his Safety side more highly than his loyalty to Sam?

sanxiyn · 2 years ago
Maybe Sam wanted to redirect Ilya's GPUs to ChatGPT after the DevDay surge. 20% of OpenAI's GPUs are allocated to Ilya's team.
murakamiiq84 · 2 years ago
Random fanfiction: it's also possible that it wasn't actually a 3-3 split but more like a 2-2 split, with 2 people -- likely Adam and Ilya, though I guess Adam and Tasha is also possible -- trying to play nice and not obviously "take sides." And then eventually Sam thought he had won Adam and Ilya's loyalty re: firing Helen but slipped up (maybe Adam was salty about Poe and Ilya was uncomfortable with him "being less than candid" about something Ilya cared about. Or maybe they were more principled than Sam thought).

And then to Adam and Ilya, normally something like "you should've warned me about GPTs bro" or "hey remember that compute you promised me? Can I prettyplease have it back?" is the kind of stuff they'd be willing to talk out with their good friend Sam. But Sam overplayed his hand: they realized that if Sam was willing to force out Helen under such flimsy pretexts then maybe they're next, GoT style[1]. So they had a change of heart, warned Tasha and Helen, and Helen persuaded them to countercoup.

[1] Reid Hoffman was allegedly forced out before, so there's precedent. And of course Musk too. https://www.semafor.com/article/11/19/2023/reid-hoffman-was-...

lacker · 2 years ago
It's not just OpenAI. Every AI organization is discovering that they have internal groups which are pulling in a different direction than the rest of the organization, and trying to get rid of those groups.

* Google got rid of its "Ethical AI" group

* Facebook just got rid of its "Responsible AI" team

* OpenAI wanted to get rid of the "Effective Altruists" on the board

I guess if I was afraid of AI taking over the world then I would be rooting for OpenAI to be destroyed here. Personally I hope that they bring Sam back and I hope that GPT-5 is even more useful than GPT-4.

didibus · 2 years ago
I feel the people advocating safety, while they are probably right from a moral and ethical point of view, are just doomed to fail.

It's like with the nuclear bomb: it's not as if, had Einstein withheld his contributions, we wouldn't have nuclear bombs today. It's always only a matter of time before someone else figures it out, and before someone with bad intentions does.

I think any approach to AI safety has to assume there are already bad actors with super-powerful AI around, and ask what we can do in defense of that.

dmix · 2 years ago
It's interesting that the paper is selling Anthropic's approach to 'safety' as the correct approach when they just launched a new version of Claude and the HN thread is littered with people saying it's unusable because half the prompts they type get flagged as ethical violations.

It's pretty clear that some legitimate concerns about a hypothetical future AGI, which we've barely scraped the surface of, turn into "what can we do today", and it's largely virtue-signalling-type behaviour: crippling a non-AGI, very, very alpha version of LLMs just to show you care about hypothetical future risks.

Even the correlation between commercialization and AI safety is pretty tenuous. Unless I missed some good argument about how having a GPT store makes AGI destroying the world easier.

It can probably best be summarized as: Helen Toner simply wants OpenAI to die for humanity's sake. Everything else is just minor detail.

> Over the weekend, Altman’s old executive team pushed the board to reinstate him—telling directors that their actions could trigger the company’s collapse.

> “That would actually be consistent with the mission,” replied board member Helen Toner, a director at a Washington policy research organization who joined the board two years ago.

https://www.wsj.com/tech/ai/altman-firing-openai-520a3a8c

LASR · 2 years ago
What’s surprising to me is that top-level executives think that self-destructing the current leader in LLMs is the way to ensure safety.

Aren’t you simply making space for smaller, more aggressive, and less safety-minded competitors to grab a seat on the money train to do whatever they want to do?

Pandora’s box is already open. You have to guard it. You have to use your power and influence to force other competitors to do the same with their own boxes.

Self-destructing is the worst way to ensure AI safety.

Isn’t this just basic logic? Even ChatGPT might have been able to point out how stupid this is.

My only explanation is that something deeper happened that we’re not aware of. An us-or-them board fight might explain it. Great. Altman is out. Now what? Nobody predicted this would happen?

remarkEon · 2 years ago
Has Toner (or someone with like-minded views) filled in the blanks between "GPT-4" and "Terminator Judgement Day" in a believable way? I've read what Yudkowsky writes but it all sounds so fantastical that it's, at least to me, more like an episode of The Twilight Zone or The X-Files than something resembling reality. Is Toner really in the "nuke the datacenters" camp? If so, was her placement on the board not a mistake from the beginning?
ah765 · 2 years ago
>how having a GPT store makes AGI destroying the world easier

The argument in general is that the more commercial interest there is in AI, the more money gets invested and the faster the companies will try to move to capture that market. This increases the risk for AGI by speeding up development due to competition, and safety is seen as "decel".

Helen was considering the possibility of Altman-dominated OpenAI that continued to rapidly grow the overall market for AI, and made a statement that perhaps destroying OpenAI would be better for the mission (safe development of AGI).

ah765 · 2 years ago
This sounds convincing, especially considering this story where Sam Altman was involved in a "long con" to seize board control of Reddit (https://www.reddit.com/r/AskReddit/comments/3cs78i/comment/c...).

I think Sam may have been angling to get control of the board for a long time, perhaps years, manipulating the board departures and signing deals with Microsoft. The board finally realized this when it was 3 v 3 (or perhaps 2 v 3 with 1 neutral). But Sam was still working on more funding deals and getting employee loyalty, and the board knew it was only a matter of time until he could force their hand.

ah765 · 2 years ago
Also of note, this comment by Sam's verified reddit account describing the con as "child's play": (https://www.reddit.com/r/AskReddit/comments/3cs78i/comment/c...)
ugh123 · 2 years ago
On the page linked, she says that OpenAI's system card wasn't a suitable replacement for a "commitment to safety". I feel that's a fair critique even for someone on a non-profit board, unless she's really advocating for systemic change in the way the company operates and does any commercial business.
marquisdepolis · 2 years ago
In this instance, "Sam is trying to take over the Board on a flimsy basis" is a reasonable reason to remove him. Starting to lead discussions about whether she should be removed is also very, very far from actively working to remove her.

This is amateur hour and considering what happened she probably should have been removed.

jprete · 2 years ago
Leading discussions on whether someone should be removed is literally actively working to remove them.
abi · 2 years ago
I mean it's not cool for board members to publicly criticize the company.
zucker42 · 2 years ago
I think you're taking the intuition from a for-profit company and wrongly applying it to a non-profit company. When a board member criticizes a for-profit company, that's bad because the goal of the company is make a profit, and bad PR is bad for profit. A board member criticizing a non-profit doesn't have the same direct connection to a result opposite of the goals of the company. And if you actually read the page, it's an extremely mild criticism of OpenAI's decisions.

This situation is simultaneously "reckless board makes ill-considered mistake to suddenly fire CEO with insufficient justification" and "non-profit CEO slowly changes the culture of a non-profit to turn it into profit-seeking enterprise that he has a large amount of control over".

cycomanic · 2 years ago
What do you mean? Schmidt was on Apple's board for 3 years while he was CEO of Google. Do you think Google did not criticise Apple during that whole time (remember that the CEO is ultimately responsible for the company's communications)?

Even more so in the case of OpenAI: the board member is on the board of a non-profit, and those are typically much more independent and very often more critical of the decisions made by other board members. Just search for board members criticising medical/charity/government boards they are sitting on; there are plenty.

That's not even considering whether the article was in fact critical.

kragen · 2 years ago
it's not cool for companies to try to shut down academic freedom of inquiry in scholarly publishing in order to improve their public image

her independence from openai was the nominal reason she was asked to supervise it in the first place, wasn't it

15457345234 · 2 years ago
It's a nonprofit board & it's absolutely the role of advisory board members to continue their work in the sector they're specialized in without bias.
murakamiiq84 · 2 years ago
Is this even true for for-profit companies? Like if a professor is on the board of a for-profit (which I think is pretty common for deeptech? Maybe companies in general too? https://onlinelibrary.wiley.com/doi/abs/10.1111/fima.12069#:....), is he/she banned from making a technical point about how a competitor's product or best practices is occasionally superior to the company's product?
svnt · 2 years ago
This is the inherent conflict in the company.

Once it turned out you needed 7B parameters or more to get LLMs worth interacting with, it went from a research project to a winner-take-all compute grab. OpenAI, with an apparent technical lead and financial access, was well-positioned to win it.

It is/was naive of the board to think this could be won on a donation basis.

powera · 2 years ago
This is insane. "They had to fire Sam, because he was trying to take over the board".

First: most boards are accountable to something other than themselves. For the exact reason that it pre-empts that type of nonsense.

Second: the anti-Sam Altman argument seems to be "let's shut the company down, because that will stop AGI from being invented". Which is blatant nonsense; nothing they do will stop anyone else. (with the minimal exception that the drama they have incepted might make this holiday week a complete loss for productivity).

Third: in general, "publishing scholarly articles claiming the company is bad" is a good reason to remove someone from the board of a company. Some vague (and the fact that nobody will own up to anything publicly proves it is vague) ideological battle isn't a good enough rationale for the exception to a rule that suggests that her leaving the board soon would be a good idea.

comp_throw7 · 2 years ago
> "They had to fire Sam, because he was trying to take over the board".

I mean, yes? The board is explicitly there to replace the CEO if necessary. If the CEO stuffs the board full of their allies, it can no longer do that.

> First: most boards are accountable to something other than themselves. For the exact reason that it pre-empts that type of nonsense.

Boards of for-profits are accountable to shareholders because corporations with shareholders exist for the benefit of (among others) shareholders. Non-profit corporations exist to further their mission, and are accountable to the IRS in this regard.

> Second: the anti-Sam Altman argument seems to be "let's shut the company down, because that will stop AGI from being invented". Which is blatant nonsense; nothing they do will stop anyone else. (with the minimal exception that the drama they have incepted might make this holiday week a complete loss for productivity).

No, the argument is that Sam Altman trying to bump off a board member on an incredibly flimsy pretext would be an obvious attempt at seizing power.

> Third: in general, "publishing scholarly articles claiming the company is bad" is a good reason to remove someone from the board of a company. Some vague (and the fact that nobody will own up to anything publicly proves it is vague) ideological battle isn't a good enough rationale for the exception to a rule that suggests that her leaving the board soon would be a good idea.

This might be true w.r.t. for-profit boards (though not obviously so in every case), but seems nonsensical with non-profits. (Also, the article did not reductively claim "the company is bad".)

6gvONxR4sf7o · 2 years ago
> Second: the anti-Sam Altman argument seems to be "let's shut the company down...

Isn't that the pro-Altman argument? The pro-Altman side is saying "let's shut the company down if we don't get our way." The anti-Altman side is saying "let's get rid of Sam Altman and keep going."

ralfd · 2 years ago
Do you think Sam is more aligned to OpenAI non-profit charter than Helen?
CSMastermind · 2 years ago
The only specific accusation made in the article is that Sam criticized Helen Toner for writing a paper: https://cset.georgetown.edu/publication/decoding-intentions/

That says Anthropic has a better approach to AI safety than OpenAI.

Sam apparently said she should have come to him directly if she had concerns about the company's approach and pointed out that as a board member her words have weight at a time when he was trying to navigate a tricky relationship with the FTC. She apparently told him to kick rocks and he started to look for ways to get her off the board.

All of that ... seems completely reasonable?

Like I've heard a lot of vague accusations thrown at Sam over the last few days and yet based on this account I think he reacted the exact same way any CEO would.

I'm much more interested in how Helen managed to get on this board at all.

0xDEAFBEAD · 2 years ago
>We will attempt to directly build safe and beneficial AGI, but will also consider our mission fulfilled if our work aids others to achieve this outcome.

>...

>We commit to use any influence we obtain over AGI’s deployment to ensure it is used for the benefit of all, and to avoid enabling uses of AI or AGI that harm humanity or unduly concentrate power.

>Our primary fiduciary duty is to humanity.

>...

>We are concerned about late-stage AGI development becoming a competitive race without time for adequate safety precautions. Therefore, if a value-aligned, safety-conscious project comes close to building AGI before we do, we commit to stop competing with and start assisting this project. We will work out specifics in case-by-case agreements, but a typical triggering condition might be “a better-than-even chance of success in the next two years.”

https://openai.com/charter

Seems to me like Helen is doing a better job of upholding the charter than Sam is.

greyface- · 2 years ago
This charter is doomed to be interpreted in radically different ways by people with differing AI-eschatological beliefs. It's no wonder it's led to so much conflict.
senectus1 · 2 years ago
>We are concerned about late-stage AGI development becoming a competitive race without time for adequate safety precautions. Therefore, if a value-aligned, safety-conscious project comes close to building AGI before we do, we commit to stop competing with and start assisting this project. We will work out specifics in case-by-case agreements, but a typical triggering condition might be “a better-than-even chance of success in the next two years.

How could this possibly be accomplished while trying to sell the product itself? Investors pouring billions into it are in it for profit... they're not going to let you just stop, or help a competitor for free.

ipaddr · 2 years ago
Promoting Anthropic and putting down OpenAI doesn't make her better at her job. Her job isn't self-promotion.
JCM9 · 2 years ago
The challenge of all this is that while everything going on looks totally bonkers from any normal sense of business, it’s hard to argue that the board isn’t following their charter. IMHO the mistake was setting up the structure the way it is and expecting that to go well. Even MSFT, though obviously annoyed, has shareholders too, and one reasonable question here is what the heck Microsoft’s leadership was doing putting billions of capital at risk with an entity that has such a wacky structure and is ultimately governed by this non-profit board with a bizarre doomsdayish charter. Seriously, if you haven’t read it, read it.

This whole thing has been wildly mishandled, but there’s an angle here where the nonprofit is doing exactly what they always said they would do, and the ones that potentially look like fools are Microsoft and the other investors that put their shareholder capital into this equation thinking it would go well.

jacquesm · 2 years ago
When Microsoft came on board the charter effectively went out the window. It's like Idefix thinking he's leading Obelix around on a leash.
traject_ · 2 years ago
A lot of that billions of capital is simply Azure compute credits though.
letmevoteplease · 2 years ago
According to the paper, Anthropic's superior safety approach was deliberately delaying release in order to avoid “advanc[ing] the rate of AI capabilities progress." OpenAI is criticized for kicking off a race for AI progress by releasing ChatGPT to the public.

[1] https://cset.georgetown.edu/wp-content/uploads/CSET-Decoding...

danenania · 2 years ago
It's important to remember that part of OpenAI's mission, apart from developing safe AGI, is to "avoid undue concentration of power".

This is crucial, because safety is a convenient pretext for people whose real motivations are elitist. It's not that they want to slow down AI development; it's that they want to keep it restricted to a tight inner circle.

We have at least as much to fear from elites who think only megacorps and governments are responsible enough to be given access as we do from AI itself. The failures of our elite class have been demonstrated again and again in recent decades--from the Iraq War to Covid. These people cannot be trusted as stewards, and we can't afford to see such potentially transformative technology employed to further ossify and insulate the existing power structure.

OpenAI had the right idea. Keep the source closed if you must, and invest heavily in safety, but make the benefits available to all. The last thing we need is an AI priesthood that will inevitably turn corrupt.

himaraya · 2 years ago
The paper reads a lot more nuanced than that. It compares the "system card" released with GPT-4 to the delay of Claude and the merits of each approach vis a vis safety.
kgeist · 2 years ago
>According to the paper, Anthropic's superior safety approach was deliberately delaying release in order to avoid “advanc[ing] the rate of AI capabilities progress."

Which can end up with China taking the lead. I don't understand why they think it's safer.

sgift · 2 years ago
Having read the word soup of contradicting weasel words that make up Claude's "constitution", 'superior safety approach' has so many asterisks it could be a star chart. The only thing the garbage Anthropic has produced is superior at is making some people feel good about themselves.

(https://www.anthropic.com/index/claudes-constitution)

nsagent · 2 years ago
There are even more gems in the paper, like this one:

> Suppose a leader pledges during a campaign to provide humanitarian aid to a stricken nation or the CEO of a company commits publicly to register its algorithms or guarantee its customers' data privacy. In both cases, the leader has issued a public statement before an audience who can hold them accountable if they fail to live up to their commitments. The political leader may be punished at the polls or subjected to a congressional investigation; the CEO may face disciplinary actions from the board of directors or reputational costs to the company's brand that can result in lost market share.

I wonder if she had Sam Altman in mind while writing this.

NoboruWataya · 2 years ago
The CEO is generally accountable to the board. A CEO trying to silence criticism and oust critical board members may be typical behaviour in the world of megalomaniacal tech startup CEOs, but it is not generally considered good corporate governance. (And usually the megalomaniacal tech startup CEOs have equity to back it up.)
maxlamb · 2 years ago
He said he wished she had communicated her concerns to him beforehand. How can disagreements be dealt with if they are never communicated directly? So the CEO has to first learn of a disagreement with a fellow board member through a NY Times article?
6gvONxR4sf7o · 2 years ago
> Sam apparently said she should have come to him directly if she had concerns about the company's approach

That seems dishonest given the last three years or so of conflict about these concerns that he’s been the center of. Of course he’s aware of those concerns. More likely, that statement was just him maneuvering to be the good guy when he tried to fire her, but it backfired on him.

jacquesm · 2 years ago
It's interesting, but it may well be that they both have a point: Helen for telling him to get lost, and Sam for attempting to remove her before she could damage the company.

But she could have made that point more forcefully by not comparing Anthropic to OpenAI; after all, who better than her to steer OpenAI in the right direction? I noted in a comment elsewhere that all of these board members appear to have had at least one conflict of interest, and some many more. Helen probably believes that her loyalty is not to OpenAI but to something higher than that, based on her remark that destroying the company would serve to fulfil its mission (which is a very strange point of view to begin with). But that doesn't automatically mean that she's able to place it in context: within OpenAI, within the USA, the Western world, and the world as a whole.

It's like saying the atomic bomb would have never been invented if the people at Los Alamos didn't do it. They did it in three years after it became known that it could be done in principle. Others tried and failed but without the same resources. I suspect that if the USA had not done it that eventually France, the UK and Russia would have gotten there as well and later on China. Israel would not have had the bomb without the USA (willing or unwilling) and India and Pakistan would have achieved it but much later as well. So we'd end up with the same situation that we have today modulo some timing differences and with another last chapter on WWII. Better? Maybe. But it is also possible that the Russians would have launched a first strike on the USA if they were unopposed. It almost happened as it was!

The open question then is: does she really believe that no other entity has the resources to match OpenAI and does she believe that if such an entity does exist that it too will self destruct rather than to go through with the development?

And does she believe that this will hold true for all time? That they and their colleagues are so unique that they hold the key to something that can otherwise not be replicated.

pclmulqdq · 2 years ago
> The open question then is: does she really believe that no other entity has the resources to match OpenAI and does she believe that if such an entity does exist that it too will self destruct rather than to go through with the development?

People at "top" companies fall into this fallacy very readily. FAANG engineers (especially at Google and Facebook) think this way about all sorts of things.

The reality is that for any software project, your competition is rarely more than 1 year behind you if what you're doing is obviously useful. OpenAI made ChatGPT, and that revealed that this sort of thing was obviously useful, kicking off the arms race. Now they are bleeding money running a model that nobody could run profitably in order to keep their market position.

I have tried to explain this to xooglers several times, and it often goes in one ear and out the other until they get complacent and the competition swipes them about a year later.

jltsiren · 2 years ago
I think the real issue is that OpenAI was doomed to fail from the beginning. AI is commercially too valuable to be developed by an organization with a mission like theirs. Eventually they had to make a choice: either become a for-profit without any pretensions about the good of humanity, or stay true to the mission and abandon ambitions of being at the cutting edge of AI development.

A non-profit could not have beaten the superpowers in developing the atomic bomb, and a non-profit cannot beat commercial interests in developing AI.

abraae · 2 years ago
> And does she believe that this will hold true for all time? That they and their colleagues are so unique that they hold the key to something that can otherwise not be replicated.

It's impossible to understand this position. We can be sure that in some countries right now there are vigorous attempts to build autonomous AI-enabled killing machines, and those people care nothing for whatever safety guardrails some US startup is putting in place.

I'm a believer in a Skynet scenario, though much smarter people than me are not, so I'm hopefully wrong. But whatever; hand-waving attempts to align, soften, or safeguard this technology are pointless and will only slow down the good actors. The genie is out of the bottle.

hutzlibu · 2 years ago
"But it is also possible that the Russians would have launched a first strike on the USA if they were unopposed. It almost happened as it was!"

When did a first strike by the Soviet Union almost happen? I rather think it was the other way around: a first strike was evaluated, to hit them before they got the bomb.

0xDEAFBEAD · 2 years ago
>I noted in a comment elsewhere that all of these board members appear to have had at least one and some many more conflicts of interest.

From the perspective of avoiding an AI race, conflict of interest could very well be a good thing. You're operating under a standard capitalist model, where we want the market to pick winners, may the most profitable corporation win.

kumarvvr · 2 years ago
So, it could also be that she approached him on the subject multiple times; after all, she is a member of a board whose job is to make AI safety a priority.

Given that his plans for rapid expansion and commercialization were in direct contrast to the company's aims, I guess she wrote the paper to highlight the issue.

It seems that, as in the case of Disney, the board has less power and control than the CEO. Highly likely if you have larger-than-life people like Sam at the helm.

I would not trust the board, but I would also not trust Sam. When billions of dollars are at stake, it's important to be critical of all the parties involved.

glitchc · 2 years ago
>... yet based on this account I think he reacted the exact same way any CEO would.

Say what? The CEO serves at the behest of the board, not the other way around. For Sam to tell a board member that they should bring their concerns to him suggests that Sam thinks he's higher than the board. No wonder she told him to go fly a kite.

mupuff1234 · 2 years ago
> I think he reacted the exact same way any CEO would

Perhaps if you think of it as another YC startup, but not so much if you view OpenAI as a non-profit first and foremost.

foobarqux · 2 years ago
Who is being completely reasonable? The board member has a mandate and appears to be making a good-faith effort to carry it out, and the CEO tries to overthrow her. Whether that is standard behavior for CEOs is irrelevant.
ipaddr · 2 years ago
Her mandate is not to promote Anthropic on the back of OpenAI. Very unprofessional.
bigtones · 2 years ago
Helen Toner, through her association with Open Philanthropy, donated $30 million to OpenAI early on. That's how she got on the board.

https://loeber.substack.com/p/a-timeline-of-the-openai-board

CSMastermind · 2 years ago
That's super insightful, thank you for sharing this.
1024core · 2 years ago
> I'm much more interested in how Helen managed to get on this board at all.

My gut says that she is the central figure in how this all went down. She and D'Angelo are the central figures, if my gut is right.

It looks like Helen Toner was OK with destroying the company to make a point.

FTA:

> Ms. Toner disagreed. The board’s mission is to ensure that the company creates artificial intelligence that “benefits all of humanity,” and if the company was destroyed, she said, that could be consistent with its mission

sudosysgen · 2 years ago
That seems reasonable? The charter of the company could reasonably be furthered even if that means the end of the organization. If at some point the existence of the organization becomes antithetical to the charter, the board members have a responsibility to destroy it.
1024core · 2 years ago
Replying to my own comment since I can't edit it anymore, but:

It looks like Helen Toner is off the board.

comp_throw7 · 2 years ago
> Sam apparently said she should have come to him directly if she had concerns about the company's approach and pointed out that as a board member her words have weight at a time when he was trying to navigate a tricky relationship with the FTC. She apparently told him to kick rocks and he started to look for ways to get her off the board.

Huh, this sounds pretty crazy to me. Like, it's assuming that a board member should act deceptively in order to help the for-profit arm of OpenAI avoid government scrutiny, and that trying to remove them from the board if they don't want to do that is reasonable. But in fact the entire purpose of the board is to advance the mission of the parent non-profit, which doesn't sound obviously compatible with "avoid giving the FTC (maybe legitimate) ammunition against the for-profit subsidiary, even if that means you should hide your beliefs".

jacquesm · 2 years ago
No, it means that you go outside only after you've exhausted all avenues inside. It's similar to a whistleblower situation, only most whistleblowers don't have their fingers on the self-destruct button. So to press that button before exhausting all other options seems a bit hasty. There is no 'undo' on that button.
keepamovin · 2 years ago
So he criticized her and threatened her board position, and then she orchestrated a coup to oust him? Masterful. Moves and countermoves. You have to applaud her strategic acumen, and execution capability, perhaps surprising given her extensive background in policy/academia. Tho maybe it's as Thiel says (about academia: "The battles are so fierce because the stakes are so small") and that's where she developed her Machiavellian skills?

Of course, it could also be that whatever interest groups she represents could not bear to lose a seat.

Whether it was initiated by her or her backers (or other board forces), I can't see any of the board stepping down if these are the kinds of palace intrigues that have been occurring. They are all clearly so desperate for power that they will cling to their positions on this rocketship for dear life. Even if it means blowing up the rocketship so they can keep their seat.

Microsoft can't spend good will erasing the entire board and replacing it, even though it's nearly a major shareholder, because it values the optics around its relationship to AI too much right now.

A strong, effective leader in the first place would have prevented this kind of situation. I think the board should be reset and replaced with more level headed, less ideological, more experienced veterans...tho picking a good board is no easy task.

anonymouskimmer · 2 years ago
> Microsoft can't spend good will erasing the entire board and replacing it,

because they don't have the power to, as they do not have any stake in the governing non-profit.

hindsightbias · 2 years ago
Not that I think there are many examples of technical people making great board members, but we've entered an era where, if I don't get my way on the inside, I'll just tweet about it and damn any wider consequences.

Management and stockholders beware.

wnoise · 2 years ago
The non-profit OpenAI has no stockholders.
kmlevitt · 2 years ago
While this would be perfectly reasonable if OpenAI were a for-profit, it’s ostensibly a non-profit. The entire reason they wanted her on the board in the first place was for her expert academic opinion on AI safety. If they see that as a liability, why did they pretend to care about those concerns in the first place?

That said, if she objects to OpenAI’s practices, the common-sense thing to do is to resign from the board in protest, not take actions that lead to the whole operation being burned to the ground.

Rastonbury · 2 years ago
This is not just any other company, though. It's a non-profit with a charter to make AI that benefits all of humanity.

Helen believed she was doing her job according to the non-profit charter; obviously this hurts the for-profit side of things, but that is not her mandate. That is the reason OpenAI is structured the way it is, with the intention of preventing capitalist forces from swaying them away from the non-profit charter (with the independent directors, no equity stakes, etc.). In hindsight it didn't work, but that was the intention.

The board has all my respect for standing up to the capitalists: Altman, the VCs, Microsoft. Big feathers to ruffle - even though the execution was misjudged, it turns out most of its employees are pretty capitalistic too.

reducesuffering · 2 years ago
> The board has all my respect for standing up to the capitalists: Altman, the VCs, Microsoft. Big feathers to ruffle - even though the execution was misjudged, it turns out most of its employees are pretty capitalistic too.

Exactly. This is a battle between altruistic principles and some of the most heavyweight greedy money in the world. The board messed up the execution, but so did OpenAI leadership when they offered million dollar pay packages to people in a non-profit that is supposed to be guided by selfless principles.

GreedClarifies · 2 years ago
"I'm much more interested in how Helen managed to get on this board at all."

Indeed. This is far more interesting. How the hell did Helen and Tasha get on, and stay on, the board?

bigtones · 2 years ago
Helen Toner, through her association with Open Philanthropy, donated $30 million to OpenAI early on. That's how she got on the board.

https://loeber.substack.com/p/a-timeline-of-the-openai-board

Dead Comment

Deleted Comment

eigenvalue · 2 years ago
I strongly suspect this whole thing is caused by an overinflated ego and a desire to feel like she is the main character and the chief “resistance” saving the world. The EA philosophy is truly poisonous. It leads people to betray those close to them in honor of abstract ideals that they are most likely wrong about anyway. Such people should be avoided like the plague if you’re building a team of any kind.
winenbug · 2 years ago
https://openai.com/our-structure

This whole thing was so, SO poorly executed, but the independent people on the board were gathered specifically to prioritize humanity & AI safety over OpenAI. It sounds like Sam forgot just that when he criticized Helen for her research (given how many people were posting ways to "get around" ChatGPT's guardrails, she probably had some firm grounds to stand on).

Yes, Sam made LLMs mainstream and is the face of AI, but if the board believes that that course of action could destroy humanity it's literally the board's mission to stop it — whether that means destroying OpenAI or not.

What this really shows us is that this "for-profit wrapper around a non-profit" shenanigans was doomed to fail in the first place. I don't think either side is purely in the wrong here, but they're two sides of an incredibly badly thought-out charter.

lll-o-lll · 2 years ago
> It sounds like Sam forgot just that when he criticized Helen for her research (given how many people were posting ways to "get around" ChatGPT's guardrails, she probably had some firm grounds to stand on).

Sam didn’t forget anything. He is a brilliant Machiavellian operator. Just look at the Reddit reverse takeover as an example; Machiavelli would be in awe.

> What this really shows us is that this "for-profit wrapper around a non-profit" shenanigans was doomed to fail in the first place.

No. It shows this structure is doomed to fail if you have a genius schemer as a CEO, playing the long game to gain unrestricted control.

ac2u · 2 years ago
> Just look at the Reddit reverse takeover as an example; Machiavelli would be in awe.

What were the details on that? (Sorry it’s not an easy story to find on Google given how much the keywords overlap with OpenAI topics)

Metacelsus · 2 years ago
> Just look at the Reddit reverse takeover as an example

I'm not familiar with this, what happened? Googling "Sam Altman reddit reverse takeover" is just flooded with OpenAI results.

bmitc · 2 years ago
I think it points out how Altman set up this non-profit OpenAI as a sort of humanitarian gift, because he pretty clearly marketed himself as having no financial stake in the company, only to use that as leverage for his own benefit.

This whole thing is a gigantic mess, but I think it still leaves Altman in the center and as the cause of it all. He used OpenAI to gather talent and boost his "I'm for humanity" profile while dangling the money carrot in front of his employees and doing everything he could to get back in the money making game using this new profile.

In other words, it seems like he set up the non-profit OpenAI as a sort of Trojan horse to launch himself to the top of the AI players.

jjulius · 2 years ago
>In other words, it seems like he set up the non-profit OpenAI as a sort of Trojan horse to launch himself to the top of the AI players.

Given that Altman apparently idolized Steve Jobs as a kid, this idea really doesn't feel that far-fetched.

TerrifiedMouse · 2 years ago
> What this really shows us is that this "for-profit wrapper around a non-profit" shenanigans was doomed to fail in the first place.

I disagree. The for-profit arm was always meant to be subservient to the non-profit arm - the latter practically owns the former.

A proper CEO would just try to make money without running afoul of the non-profit’s goals.

Yes, that would mean earning less, or even nothing at all. But it was clearly stated to investors that profit isn’t a priority.

0xDEAFBEAD · 2 years ago
>they're two sides of an incredibly badly thought-out charter.

It's easy to say this with the benefit of hindsight, but I haven't seen anyone in this discussion even suggest an alternative model that they claim would've been superior.

winenbug · 2 years ago
Agreed. I'm not saying I have a better alternative, just that this is something we all should now realize, given I'm sure we were all wondering for a long time what the whole governance structure of OpenAI really meant (capped for-profit with non-profit mission, etc.).
kragen · 2 years ago
nonprofit companies with for-profit portfolio companies are hardly unusual and certainly not doomed to fail. i've worked for two such companies in my high-tech career myself; one is now called altarum, though i worked for the for-profit subsidiary that got sold to veridian
clnq · 2 years ago
A lot of people in tech say that executives are excessively diplomatic and do not speak their truth. But this is what happens when they speak it too much, too ardently, too often. This is why diplomacy and tact are so important in these roles.

Things do not go well if everyone keeps poking each other with sticks and cannot let their own frame of reference go for the sake of the bigger picture.

Ultimately, I don’t think Altman believes ethics and safety are unimportant. And I don’t think Toner fails to realize that OpenAI is only in a place to dictate what AI will be due to its commercial principles. And they probably both agree that there is a conflict there. But what tactful leadership would have done is find a solution behind closed doors. Yet from their communication, it doesn’t even look like they defined the problem statement — everyone offers a different idea of the problem that they had to face together. It looks more like immature people shouting past each other for a year (not saying it was that, but it looks that way).

Moral of the story: tact, grace, and diplomacy are important. So is speaking one’s truth, but there is a tactful time, place, and manner. And also, no matter how brilliant someone is, if they can’t develop these traits, they end up rocking the boat a lot.

jacquesm · 2 years ago
Spot on.
lwneal · 2 years ago
The relevant passage from the paper co-written by board member Helen Toner:

"OpenAI has also drawn criticism for many other safety and ethics issues related to the launches of ChatGPT and GPT-4, including regarding copyright issues, labor conditions for data annotators, and the susceptibility of their products to "jailbreaks" that allow users to bypass safety controls...

A different approach to signaling in the private sector comes from Anthropic, one of OpenAI's primary competitors. Anthropic's desire to be perceived as a company that values safety shines through across its communications, beginning from its tagline: "an AI safety and research company." A careful look at the company's decision-making reveals that this commitment goes beyond words."

[1] https://cset.georgetown.edu/publication/decoding-intentions/

murakamiiq84 · 2 years ago
I think this is heavily editorialized. If you look at the 3 pages in question that the quotes are pulled from (28-30 in doc, 29-31 in pdf), they appear to be given as examples in pretty boring academic discussions explicating the theories of costly signaling in the context of AI. It also has lines like:

"The system card provides evidence of several kinds of costs that OpenAI was willing to bear in order to release GPT-4 safely. These include the time and financial cost..."

"Returning to our framework of costly signals, OpenAI’s decision to create and publish the GPT4 system card could be considered an example of tying hands as well as reducible costs. By publishing such a thorough, frank assessment of its model’s shortcomings, OpenAI has to some extent tied its own hands—creating an expectation that the company will produce and publish similar risk assessments for major new releases in the future. OpenAI also paid a price ..."

"While the system card itself has been well received among researchers interested in understanding GPT-4’s risk profile, it appears to have been less successful as a broader signal of OpenAI’s commitment to safety"

And the conclusion:

"Yet where OpenAI’s attempt at signaling may have been drowned out by other, even more conspicuous actions taken by the company, Anthropic’s signal may have simply failed to cut through the noise. By burying the explanation of Claude’s delayed release in the middle of a long, detailed document posted to the company’s website, Anthropic appears to have ensured that this signal of its intentions around AI safety has gone largely unnoticed. Taken together, these two case studies therefore provide further evidence that signaling around AI may be even more complex than signaling in previous eras."

hn_throwaway_99 · 2 years ago
> I think this is heavily editorialized.

"Editorialized"?? It's a direct quote from the paper, and additional context doesn't alter its perceived meaning.

murakamiiq84 · 2 years ago
Note that the quote about Anthropic is about Anthropic's desire to be perceived as a company that values safety, not a direct claim that Anthropic actually is safe, or even that it desires to value safety.
hn_throwaway_99 · 2 years ago
You must have interpreted the final sentence "A careful look at the company's decision-making reveals that this commitment goes beyond words" very differently than I did, or else you're splitting hairs in making your distinction.
ryukoposting · 2 years ago
This reads more like ad copy than a research paper. I'd have been pissed too if I were Altman.
PepperdineG · 2 years ago
>Ms. Toner disagreed. The board’s mission is to ensure that the company creates artificial intelligence that “benefits all of humanity,” and if the company was destroyed, she said, that mission would be fulfilled.

Now we know where that came from

0xDEAFBEAD · 2 years ago
It's pretty heavily implied by OpenAI's charter: https://openai.com/charter

It's weird to me that Helen is getting so much crap for upholding the charter. No one objected to the charter at the time it was published. The charter was always available for employees, investors, and customers to read. Did everyone expect it to be ignored when push came to shove?

There's a lot of pressure on Helen right now from people who have a financial stake in this situation, but it's right there in the charter that OpenAI's primary fiduciary duty is to humanity. If employees/investors/customers weren't OK with that, they should not have worked with OpenAI.

tapoxi · 2 years ago
Investors wanted to have a commercial enterprise while pretending it was a nonprofit acting for the good of humanity. This helps market something as scary and potentially destructive as AI. "It can't be that bad, they're a nonprofit with altruistic goals!" Then the investors get mad when the board they intended to be figureheads actually try to uphold some principles.

Best to rip the band-aid off and stop pretending.

chubot · 2 years ago
Also, Sam himself repeatedly used the charter as marketing, and as a recruiting tool for AI researchers who could have gone anywhere they wanted (e.g. Ilya).

He was basically making the argument that AGI is better under OpenAI than Google.

Now they're implicitly making the argument that it's better under Microsoft, which is difficult for me to believe.

Fomite · 2 years ago
Turns out for a lot of people it's easy to be on board with a high minded charter until it might cost something.
barnabee · 2 years ago
Not surprising to me at all, in approximate order of real, practical importance (and power, if they all band together):

employees
founder/CEO
customers
investors
board
stuff written on pieces of paper

Yes, there are certainly exceptions (a very powerful founder, highly replaceable and disorganised employees, investor or board member who wields unusual power/leverage, etc.) but it does not surprise me at all that the charter should get ignored/twisted/modified when basically everyone but the board wills it.

The only surprise is that anyone thought this approach and structure would be helpful in somehow securing AI safety.

sackfield · 2 years ago
This charter doesn't have a sole interpretation, and shame on Helen for strong-arming her view and ruining the lives of so many people.

If there is something completely clear, it's that OpenAI cannot uphold its charter without labour. She has ruined that, and thus failed in upholding the charter. There were many different paths to take; she took the worst one.

justanotherjoe · 2 years ago
Oh please... Can't you see how meaningless the phrase 'goodness of humanity' is? As if something like that could be so readily known!
jacquesm · 2 years ago
That's a bit naive, to put it mildly. It presumes that nobody else would be able to replicate the effort and that the parties that are able to replicate it would also destroy theirs after proving that it could be done. Fat chance.
ytoawwhra92 · 2 years ago
The actions of other organisations are not in the scope of the board's mission. The actions of the company the board controls are in that scope.

"The board’s mission is to ensure that the company creates artificial intelligence that “benefits all of humanity,”"

We can't control others' actions, but we can control our own. If we feel that our actions are diverging from our own sense of what we ought to be doing, we can change our actions, regardless of how others are behaving.

barrkel · 2 years ago
Indeed. The only way to retain control of the leading ship in the race is to keep it together as you steer it. If the ship disintegrates, then you're no longer in control of the leading ship, and someone else will win the race.
renewiltord · 2 years ago
Interesting. This is the person who "holds comparable influence to a USAF Colonel," according to "a prominent researcher in AI safety" who "discovered prompt injection". https://news.ycombinator.com/item?id=38330566

Well, I suppose this tells us something about the AI safety community and whether it makes sense to integrate safety into one's workflow. It seems that the best AI safetyists will scuttle your company from the inside at some moment not yet known. This does sort of imply that it is risky for an AI company to have a safetyist on board.

That does seem to be accurate. For instance, Google had the most formidable safety team, and they've got the worst AI. Meta ignored theirs and they've given us very good open models.

comp_throw7 · 2 years ago
"AI safety" should be disentangled into "AI notkilleveryoneism" and "AI ethics", which are substantially non-overlapping categories. I've looked at who works at Preamble, and there aren't any names there that I recognize from the side of things that's concerned with x-risk. Take their takes with a grain of salt.
Seanambers · 2 years ago
AI safety is just woke 2.0.
nonethewiser · 2 years ago
Non-profit or not, steering the company towards non-existence isn't in the interest of the company.
winenbug · 2 years ago
But that's the problem: the board's mission was doomed from the get-go. Their mission isn't to act "in the interest of the company" but "in the interest of humanity", i.e. if they believe OpenAI at its current pace would destroy humanity, then their mission is literally to destroy OpenAI itself.
dontknowmuch · 2 years ago
These true believers serve a higher calling. Only they can prevent an AIpocalypse.
brigadier132 · 2 years ago
Amazing how these higher minded people always forget about the little people on the ground. All these employees losing their livelihoods for the greater good. Not surprised an ethicist thinks like this.
tokai · 2 years ago
Highly educated AI specialists are the little people now? They can all find employment in an instant.
Fomite · 2 years ago
I mean, if you're worried about what these higher minded people are worried about, the number of employees at OpenAI is dwarfed by the number of other, more vulnerable employees threatened by this in the economy as a whole.

That's one of the issues with both this and effective altruism as a concept - it's a series of just-so stories with a veneer of math.

Riverheart · 2 years ago
“All these employees losing their livelihoods for the greater good.”

The same employees building technology that will ultimately put many more employees out of jobs? Ironic, because people say that jobs lost to AI will be for the greater good. I think we’re okay with sacrificing for greater goods as long as we aren’t the ones getting sacrificed.

doktrin · 2 years ago
> Microsoft has given every OpenAI employee a job offer.

> All these employees losing their livelihoods for the greater good

You penned both of these statements today. Clearly you understand that OpenAI employees are a highly compensated and in-demand resource whose “livelihoods” are in no jeopardy whatsoever, so the theatrics here are really bizarre.

nomel · 2 years ago
Given a choice between a paycheck and doing something questionable, there's a looong history of what people will choose.

I’m not saying that’s the case here, but that can’t be used as a shield.

Merrill · 2 years ago
Ethicists seem mainly concerned about thwarting technology to ensure that no harm occurs, rather than guiding the development of technology to deliver the most benefits possible.

Dead Comment

foobarian · 2 years ago
That doesn’t even follow when taken literally. If the company is destroyed presumably they can’t create artificial intelligence, so there is nothing there to benefit all humanity in the first place.
comp_throw7 · 2 years ago
It follows just fine, I think, given that the possibility space is not limited to "create beneficial AGI" and "don't create AGI". It also includes "create unaligned AGI", which is obviously much worse than "don't create AGI"; the board would be remiss in its duties if it didn't try to prevent that from happening.
liuliu · 2 years ago
The company is not destroyed. The board is not shutting down the company; they fired the CEO. The other ~700 people chose to quit. Not sure why it is "life-ruined" other than probably some tender offers being withdrawn (and even this bit is unclear: whether Thrive Capital will do that).
Davidzheng · 2 years ago
The mission of benefiting humanity can also mean not harming