lacker · 2 years ago
The most embarrassing thing about this isn't that the board decided to fire the CEO. That happens all the time. Many startups replace their CEO; it isn't always clear whether it's the right decision, but that's the responsibility of the board.

The embarrassing part is that the board decided to fire the CEO, announced their decision, refused to say why, attempted to put in place a new CEO but had to immediately demote the new CEO (Mira) after she rejected their plan, upset and alienated their core partners, along with almost all of their employees, and then publicly backtracked to undo the firing that led to all of this.

Once you screw this up to such an incredible degree, how can anyone really trust that you were doing the rest of your job well?

caesil · 2 years ago
Interesting how an intellectual movement claiming to know better than anyone else how development of an unpredictable technology might pan out over the course of years failed to predict how one decision would pan out over the course of a single weekend.

Perhaps there's some level of overconfidence at play from systems thinkers who overintellectualize their ability to conceptualize and extrapolate forward an impossibly complex system.

jacquesm · 2 years ago
> Perhaps there's some level of overconfidence at play from systems thinkers who overintellectualize their ability to conceptualize and extrapolate forward an impossibly complex system.

You may well be on to something. I'd trust a cabal of science fiction writers more than I would trust these self-appointed governors of our collective future. They lack imagination, for starters.

empath-nirvana · 2 years ago
To put it another way: how are they going to control a superintelligent AI when they can't even control Sam Altman?
0xDEAFBEAD · 2 years ago
>Perhaps there's some level of overconfidence at play from systems thinkers who overintellectualize their ability to conceptualize and extrapolate forward an impossibly complex system.

Actually this is more or less the point that Eliezer Yudkowsky makes in this essay about the need for caution in AI development:

https://www.econlib.org/archives/2016/03/so_far_unfriend.htm... (see especially points A/B/C/D)

I doubt overconfidence is a problem specific to effective altruism. In any case, any good machine learning engineer knows that a dataset with only a single data point is essentially worthless -- even if we grant the premise that the board took the wrong action given the information they had available to them at the time.
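
To make the single-data-point claim concrete, here's a minimal simulation sketch (all numbers hypothetical, not from any source): an estimate built from one observation can only ever come out as 0% or 100%, no matter what the underlying rate is, while even a modest sample starts to say something.

    import random

    random.seed(0)
    TRUE_RATE = 0.5  # hypothetical "true" failure rate; unknowable in reality

    def estimate(n):
        # Estimate the rate from n simulated observations.
        failures = sum(random.random() < TRUE_RATE for _ in range(n))
        return failures / n

    # With n=1, every estimate is 0.0 or 1.0 -- maximally overconfident.
    print([estimate(1) for _ in range(5)])
    # With n=100, estimates cluster near the true rate.
    print([round(estimate(100), 2) for _ in range(5)])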

guiambros · 2 years ago
It's ironic how they'd have been more successful if they had followed recommendations from their own LLM [1] (or Bard [2]).

Of course the recommendations are not that novel; that's CEO succession planning 101. But I guess none of those four had done any large-scale succession planning, and they were clearly out of their depth.

[1] https://chat.openai.com/share/e77bc868-fe27-4346-9c13-0908e8...

[2] https://g.co/bard/share/549edfddc624

NumberWangMan · 2 years ago
> how development of an unpredictable technology might pan out

> how one decision would pan out

I'm not sure what point you are making here. Are you trying to say "see, the AI not-kill-everyone-ists couldn't predict the future even in the short term, therefore we shouldn't put much credence into the idea that the specific examples of AI doom they have given will happen"?

Or are you trying to imply that the idea of AI doom as a whole is bunk, because we can't predict the future... therefore everything will be fine...?

spacecadet · 2 years ago
Always has been!
hn_throwaway_99 · 2 years ago
> Once you screw this up to such an incredible degree, how can anyone really trust that you were doing the rest of your job well?

As someone who pretty harshly described the board's actions, I agree with the first part of what you wrote, but I don't agree with this.

People can be good at critical aspects of their job while lacking the maturity to understand large group dynamics. I thought this quote from the article said a lot:

> The board members weren’t prepared for the fallout from their decision. The members, including Toner, were taken aback by staffers’ apparent willingness to abandon the company without Altman at the helm and the extent to which the management team sided with the ousted CEO, according to people familiar with the matter.

It's pretty hard for me to think, based on the board's initial press release, that there wouldn't be a huge fallout from this decision. The release also made me think that Altman had done something truly egregious/malevolent, and that wasn't the case.

What I saw this as was a group of people who had real concerns about Altman's leadership, and some of those concerns ended up sounding pretty valid, but who didn't have the experience/maturity to understand how to go about the next step. OpenAI's "Frankenstein" org structure, with a board beholden to the nonprofit's charter but also responsible for billions in investor capital, also contributed to this cluster-f.

WillPostForFood · 2 years ago
>What I saw this as was a group of people who had real concerns about Altman's leadership, and some of those concerns ended up sounding pretty valid,

What did you hear that sounded valid? Toner had her chance to make her case here, and it was simply that Altman tried to get her fired, so she retaliated. It wasn't about safety or leadership. It was petty politics over board control.

jdminhbg · 2 years ago
> People can be good at critical aspects of their job while lacking the maturity to understand large group dynamics.

I agree with this as a general principle, but in this specific case, the 'understanding group dynamics' part is the 'critical aspect of the job' part. By the same token, you don't just promote the best engineer to CEO if he's bad at managing, because the managing is the critical part of the job.

kmlevitt · 2 years ago
>What I saw this as was a group of people who had real concerns about Altman's leadership, and some of those concerns ended up sounding pretty valid, but who didn't have the experience/maturity to understand how to go about the next step.

That and that alone is enough to question their competence, though. The impression I get is that Toner has been in the academic/nonprofit world for so long that she doesn't understand how the real world works. In those places, the sad truth is nothing you really do is of much consequence to anybody in the grand scheme of things, other than you collecting prestige and a paycheck.

Then she tried making a consequential decision in an organization valued at around $90 billion, and lo and behold, people started caring about her actions in a way she has never experienced before.

jacquesm · 2 years ago
The scariest two words: 'unconsciously incompetent'. That's where the biggest accidents come from. At least you can recognize malice and do something about it.
1vuio0pswjnm7 · 2 years ago
"Once you screw this up to such an incredible degree, how can anyone really trust that you were doing the rest of your job well."

How can anyone take any "AI" company seriously, other than as a get-rich-quick scheme? Doubtful a successful investor focused on management would bother with a company like OpenAI.

https://fortune.com/2023/05/06/ai-warren-buffett-charlie-mun...

The probability of OpenAI employing wackos, like the guy at Google who thought autocomplete was sentient, is very high.

gaganyaan · 2 years ago
The dismissive autocomplete meme needs to die.
1vuio0pswjnm7 · 2 years ago
"AI" is an inferior form of autocomplete. Autocomplete normally the user to choose the most appropriate completion, or choose none of they all suck, while "AI" does not even allow the user to select from multiple choice, it forces one on the user, no matter how dumb. It's a classic tactic used by so-called "tech" companies: remove user choice.
NearAP · 2 years ago
> but had to immediately demote the new CEO (Mira) after she rejected their plan

According to reports, Mira was informed of the decision to fire Sam the night before Sam was fired. There's no indication she was against it and she also didn't inform Sam. So, I don't get this "she rejected their plan".

If the board informed her they were going to fire Sam, it seems logical that they also informed her they were going to give her the interim CEO position. I have no knowledge of this (it just kind of makes sense to me).

jacquesm · 2 years ago
That may have revolved around the fact that there was no valid reason that they could put up to defend the firing. After all, if they had then she may well have stayed on board, the same goes for CEO #3.
paulddraper · 2 years ago
> So, I don't get this "she rejected their plan"

Mira immediately started trying to bring Sam back. [1]

If that's not rejecting the board's plans, I don't know what is.

[1] https://economictimes.indiatimes.com/tech/technology/openais...

7e · 2 years ago
Embarrassing? It’s a complex system and it’s impossible to predict the future when so many parties are involved. Altman started the chaos, but somehow avoids the blame.
woopsn · 2 years ago
OpenAI's oversight structure is plainly stupid by any measure. Their CEO and core "partners" liquidated the board and reformed it within a few days of his firing. The buck stops with Sam. The only thing Helen Toner screwed up was being an associate professor with net worth < $500,000,000.
0xDEAFBEAD · 2 years ago
I've made this point before, but the ratio of "people criticizing OpenAI's legal structure" to "people proposing a better legal structure" in these discussions is remarkably high.

Not one critic I've seen has stated what the best legal structure for OpenAI should actually have been, in a world where OpenAI really is stewarding a technology that could be an existential threat to humans. I really wish they would do so, because then we'd have a shot at improving OpenAI's governance. But so far it's been crickets.

BTW, I think one of the big lessons of this episode is that legal structure matters much less than you'd think. On paper, OpenAI is supposed to follow its charter https://openai.com/charter and the board had the power to fire Sam. In practice, Sam was kinda sorta violating the charter, but he's a charismatic leader who was about to bring in a big payday for the employees (https://www.cnbc.com/2023/11/30/openai-tender-offer-on-track...), and that turned out to matter more than the org's legal structure.

0xDEAFBEAD · 2 years ago
>The embarrassing part is that the board decided to fire the CEO, announced their decision, refused to say why, attempted to put in place a new CEO but had to immediately demote the new CEO (Mira) after she rejected their plan, upset and alienated their core partners, along with almost all of their employees, and then publicly backtracked to undo the firing that led to all of this.

I'm reminded of these great PG tweets:

>When people criticize an action on the grounds of the "optics," they're almost always full of shit. All they're really saying is "What you did looks bad." But if they phrased it that way, they'd have to answer the question "Was it actually bad, or not?"

>If someone did something bad, you don't need to talk about "optics." And if they did something that seems bad but that you know isn't, why are you criticizing it at all? You should instead be explaining why it's not as bad as it seems.

https://nitter.net/paulg/status/1728015636771504300#m

The situation is similar if you criticize an action on the grounds that "it upset people". If the action was actually bad, you should be able to explain why directly.

Here are some quotes from the OpenAI charter:

>We will attempt to directly build safe and beneficial AGI, but will also consider our mission fulfilled if our work aids others to achieve this outcome.

>...

>Our primary fiduciary duty is to humanity.

>...

>We are concerned about late-stage AGI development becoming a competitive race without time for adequate safety precautions. Therefore, if a value-aligned, safety-conscious project comes close to building AGI before we do, we commit to stop competing with and start assisting this project.

https://openai.com/charter

And here's a quote from the OP:

>In October, Toner, who is director of strategy at a think tank in Washington, D.C., co-wrote a paper on AI safety. The paper said OpenAI’s launch of ChatGPT sparked a “sense of urgency inside major tech companies” that led them to fast-track AI products to keep up. It also said Anthropic, an OpenAI competitor, avoided “stoking the flames of AI hype” by waiting to release its chatbot.

>After publication, Altman confronted Toner, saying she had harmed OpenAI by criticizing the company so publicly. Then he went behind her back, people familiar with the situation said.

Who is doing a better job of upholding the charter in this situation: Altman, or Toner?

The "people are upset" criticism essentially states that people became upset when Toner attempted to actually uphold the charter. To me, that says more about those people than it says about Toner.

jacquesm · 2 years ago
I read it pretty much the same way. I do have some ideas on how you should structure the board of something like OpenAI, but it is more along the lines of a broadly represented structure than five industry experts and a customer or two.

For this to work you need all layers of society to be represented and since this is intended to be global you'd be looking at the UN. Of course that's 100% anathema to anybody involved with OpenAI or even any cross section of HN because 'bureaucrats don't get technology and the UN can't agree on anything'. But if it really is what it seems to be that to me seems to be the only venue where this can be addressed.

Of course the race is on and the first party to cross the threshold will get the gold and never mind the consequences so I expect any attempt at oversight to be murdered the second they have it working.

paulddraper · 2 years ago
> When people criticize an action on the grounds of the "optics," they're almost always full of shit.

You're characterizing this like it was just a bunch of Twitter armchair quarterbacks.

Which it was... plus 90%+ of the boots on the ground, who voted no confidence in their leaders.

Only the most obstinate can cling to the belief that they are a good leader/communicator with that statistic.

EDIT: There is a feeling that companies are full of unremarkable, out-of-touch, over-compensated leaders who ignore the opinions of their subordinates.

And yet when that situation actually materializes, or mostly materializes, you read opinions like this.

patcon · 2 years ago
I read a bit of your history. I highly respect your takes. Thanks

Leary · 2 years ago
Key passage:

"Altman approached other board members, trying to convince each to fire Toner. Later, some board members swapped notes on their individual discussions with Altman. The group concluded that in one discussion with a board member, Altman left a misleading perception that another member thought Toner should leave, the people said. By this point, several of OpenAI’s then-directors already had concerns about Altman’s honesty, people familiar with their thinking said."

TeeWEE · 2 years ago
Wow, there is no way around it. Sam Altman is basically getting his way by misleading people. The board did the right thing. Honesty is crucial as a leader. I think for Sam Altman his goal is more important than ethics and he will do whatever to get his way.
m3kw9 · 2 years ago
Obviously it wasn't the right thing, as he's back. You don't fire someone so important for that and extrapolate it into a matter of human extinction. He has flaws, but he seems to have the right balance between acceleration and safety, even though I'm not a fan of him gunning for regulatory capture.
jfengel · 2 years ago
Honesty is great, in moderation, but it's not the be-all and end-all of leadership.

Leadership is fundamentally about manipulating people. If the people already agreed, they wouldn't need leadership. Your job as leader is to get them all working in more or less the same direction, even though they don't really want to.

Honesty is a great way to do that, when it works. Honesty has the virtue that you don't have to keep track of your lies, and you don't get caught in an inconsistency.

But it doesn't always work, and then you have to consider alternatives. Sometimes people are dumb. Sometimes people are wrong. Sometimes people are hostile. Sometimes people are right, but in a way that can't actually be implemented, and you need to go with a sub-optimal path despite that.

At core, you can't lead if you don't believe that you've listened to all of the concerns and reached the solution that is best for the group as a whole, according to whatever definition of "best" and "whole" you were hired for. Sometimes that's going to force you to use other means than just telling people the truth and hoping they reach the same conclusion.

Leadership is fundamentally a people problem, not a technical one. And sometimes that's going to mean being dishonest. A good leader does it as little as possible, but simply dismissing it as a tactic is a recipe for failure because it hides a fundamental truth about why and how people behave in groups.

627467 · 2 years ago
To be honest, the passage describes high school drama. And the ensuing drama triggered by this just proves the point.

This and the FTX saga just show EA as a nerdy movement in arrested development.

yinser · 2 years ago
I think the OpenAI shenanigans are far from over and the next act is likely to play out. A couple of my personal opinions are:

- As PG pointed out, Sam would find his way to the top of an island of cannibals. If you're generous to Toner's side in this article, she likely _was_ alarmed by how Sam tried to get other board members to turn against her. When she walked into the arena with Altman to try and remove him, however, he had much more experience and orchestrated an incredible counter-coup.

- There must be an army of attorneys representing the board members and investors, because there are huge potential legal pitfalls to destroying value or failing in fiduciary duties.

- We don't know a lot because anything put out in public has to be spotless legally. If you're a board member and you know the truth, the only upside to leaking more info is clout, but the downsides are being sued and tied up in court losing a lot of time and money.

- People seem dissatisfied with what was revealed in this article, but Toner has likely been advised by lots of counsel on exactly what can be said, and the list of those things is probably minuscule. I'd say she's doing the best she can to give the public more info, and I'm happy she's taken the effort to do so.

tivert · 2 years ago
> When she walked into the arena with Altman to try and remove him, however, he had much more experience and orchestrated an incredible counter-coup.

Nit: Altman may have orchestrated a coup, but the board certainly did not. The board is the legitimate authority, and in no way is a board firing its subordinate a "coup." Rather, it's executing its legitimate power.

Coups triggered when a subordinate feels threatened are a pretty common pattern. IIRC, the recent coup in Niger was triggered when the leader of the presidential guard thought the president was about to fire him; he took power rather than lose his job.

jacquesm · 2 years ago
Correct, the board acted rashly and without forethought but well within their authority. Their big failure was that they didn't make it stick.
CSMastermind · 2 years ago
> The board is the legitimate authority, and in no way is a board firing its subordinate a "coup."

Sam was on the board. They didn't fire a subordinate; they kicked a peer out of the group.

dekhn · 2 years ago
The board met but didn't include the board chair, and voted without him being present (they could form a quorum). I am not so sure it's exercising your legitimate power if you don't include the board chair on purpose.
creer · 2 years ago
> lawyers [...] huge potential legal pitfalls to destroying value, or failing fiduciary duties.

> advised by lots of counsel on exactly what can be said, and the list of those things is probably miniscule.

Hmmm, in the US yes and no. First, US lawyers point out risk in every direction. Because nothing is risk-less and the law is unclear or unsettled (and anyone can sue for just about anything). Once they have duly warned to their satisfaction however, their job becomes to come up with a theory on how to do exactly what it is you want to do. The two are different tasks, and are not incompatible.

That's basically the definition of a good business lawyer.

theGnuMe · 2 years ago
Yeah, I think the risk of a negative outcome from being sued was and is low.

Companies implode all the time. Sure, you might get a lawsuit, but the company would pay it, or the directors' insurance would.

HN is always scared about lawsuits. If Trump taught us anything, it's that there's a lot of leeway in that area.

jmacd · 2 years ago
Regarding the narrative about Sama's behaviour during this fiasco, I believe the perception of him acting strategically is mistaken. I think he was just highly loved by the employees and that alone is what facilitated his return.
sanderjd · 2 years ago
This strikes me as a very odd take. All the things that seemed hasty and non-strategic to you look to me like deft PR moves that each had exactly their intended effect.
angarg12 · 2 years ago
> Helen Toner was a relatively unknown 31-year-old academic from Australia—until she became one of the four board members who fired Sam Altman

> Toner graduated from the University of Melbourne, Australia, in 2014 with a degree in chemical engineering [...] In 2019, she spent nine months in Beijing studying its AI ecosystem. When she returned, Toner helped establish a research organization at Georgetown University, called the Center for Security and Emerging Technology, [...] She succeeded her former manager from Open Philanthropy, Holden Karnofsky, on the OpenAI board in 2021 after he stepped down.

Honest question: how do people find themselves on the board of one of the hottest startups at the tender age of 31? Are they geniuses, or is it all about connections?

mrlatinos · 2 years ago
Her father: http://www.toner-assoc.com.au/Consultants.htm

Her brothers: https://www.vesparum.com/origins

So the short answer is wealth and connections, or daddy.

rurban · 2 years ago
Or intelligence. Looking at her career and papers, it sounds like she's in the intelligence business, not a nepo hire. AI safety is military intelligence only.
kj4211cash · 2 years ago
Right place, right time. She was one of very few people "working" on AI Safety. I made another comment about this elsewhere on this post, but to me this debacle is a function of the fact that there is no there there when it comes to AI Safety "research".
0xDEAFBEAD · 2 years ago
I encourage anyone who thinks there is "no there there" to, for example, glance over this preprint and some of its references: https://arxiv.org/pdf/2310.17688.pdf
Angostura · 2 years ago
It wasn't very hot in 2021
627467 · 2 years ago
Her profile - as you state - checks lots of boxes: a concerned, highly credentialed young woman interested in an emerging field that is part of the China-US battleground. She would look great on any corporate brochure.
reducesuffering · 2 years ago
These people took chances on their belief about the future, that AGI was coming and would be monumentally impactful, back in 2015, when nearly everyone on HN mocked and derided it. Even in 2023, the majority do.
isubkhankulov · 2 years ago
It’s interesting that she was threatened with being held responsible for violating a fiduciary duty for the for-profit entity while she was sitting on the non-profit board.

Going fwd, I wonder why they cannot convert the structure to a traditional c-corp. Supposedly: “tax issues”.

Whatever problems OpenAI dealt with last week, the current non-profit structure will continue to cause future problems IMO.

dragonwriter · 2 years ago
> Going fwd, I wonder why they cannot convert the structure to a traditional c-corp. Supposedly: “tax issues”.

Because, IIRC, when a charity nonprofit like OpenAI, Inc.-- a 501(c)(3), other nonprofits are different -- converts to a for-profit (C-corp or otherwise doesn't matter), all the assets acquired while it was a charity must either remain dedicated to and used exclusively for the charitable purpose, be returned, or be donated to charity (or sold, and then the proceeds treated the same way.) For the charity OpenAI, Inc., this would include its interest in OpenAI, LLC (the wholly owned subsidiary through which it exercises control of OpenAI, LP and OpenAI Global, LLC -- the latter being the for profit entity that actually sells product, etc.), and OpenAI, LP (the holding company which in turn is majority owner of OpenAI Global, LLC).

gnulinux · 2 years ago
Just curious: can they legally sell all their assets to themselves (i.e., the OpenAI nonprofit sells them to the for-profit OpenAI Inc. for market price, real $$$) and then use that $$$ for charitable purposes? Would this satisfy the IRS by letting them retain the intellectual property rights?
Havoc · 2 years ago
> fiduciary duty for the for-profit entity

I read it as both. If OpenAI collapses operationally then the non profit fails at its mission too. That's the board she is on, and that's the relevant fiduciary duty.

(She may also be on the for profit board idk)

0xDEAFBEAD · 2 years ago
>If OpenAI collapses operationally then the non profit fails at its mission too.

Not necessarily true.

"We will attempt to directly build safe and beneficial AGI, but will also consider our mission fulfilled if our work aids others to achieve this outcome [...] if a value-aligned, safety-conscious project comes close to building AGI before we do, we commit to stop competing with and start assisting this project."

https://openai.com/charter

stcredzero · 2 years ago
In the current legal framework, isn't the reality fiduciary über alles?
webmaven · 2 years ago
Wait, does the for-profit even have a board?
JKCalhoun · 2 years ago
> It’s interesting that she was threatened with being held responsible for violating a fiduciary duty

Yeah, that smells like a bullshit threat. Anyone would do well to ignore it.

jacquesm · 2 years ago
Tax issues and regulatory issues both.
mrkramer · 2 years ago
Microsoft should acquire them and cut the non-profit crap.
happytiger · 2 years ago
Isn’t that basically what is happening with a few extra steps?
WillPostForFood · 2 years ago
> It’s interesting that she was threatened with being held responsible for violating a fiduciary duty for the for-profit entity while she was sitting on the non-profit board.

Fiduciary doesn't mean a financial or monetary duty; it means holding something in trust. A violation of fiduciary duty could be financial, social, or reputational, or, in this case, literally destroying the operation of the company you were entrusted with.

blueyes · 2 years ago
Any board member who believes it is her duty to destroy the organization she pledged to serve should simply step down.

The folks who argue that it is a CEO's duty to serve the board do not understand governance or power.

Open AI would have wallowed in irrelevance had Altman not raised the billions from Microsoft to fund its research. Because he was able to do that, and built the relationship with Nadella, he had and has power. Toner's behavior seems naive in that context. But naiveté among academics does not surprise me.

jacquesm · 2 years ago
My reading: They weren't picked for their ethics, they were picked for being malleable, to serve as a fig leaf and to be able to mislead regulators. To Sam's surprise some of them took their job seriously, so he tried to get rid of them because he saw the confrontation as inevitable. That backfired, he got terminated, and then that backfired, so the board members got the choice to jump on command or vacate the premises.

You can be fairly certain that the board will be stocked with 'yes-men' (and women) beholden to Altman. He will not make that same mistake twice.

kjkjadksj · 2 years ago
He might have been the one to build the relationship but this hardly gives him “power.” He did a sales job basically. Sales people get replaced all the time.
ralfd · 2 years ago
Reminds me of The Iron Law of Bureaucracy:

"Pournelle's Iron Law of Bureaucracy states that in any bureaucratic organization there will be two kinds of people: those who work to further the actual goals of the organization, and those who work for the organization itself. Examples in education would be teachers who work and sacrifice to teach children, vs. union representative who work to protect any teacher including the most incompetent. The Iron Law states that in all cases, the second type of person will always gain control of the organization, and will always write the rules under which the organization functions."

> organization she pledged to serve

First, there is no pledge; second, I think she served the OpenAI Charter.

Kinrany · 2 years ago
Your argument is "might makes right".
blueyes · 2 years ago
No, my argument is: if you do not know how to navigate power, you have no business on a board that purports to govern a fast-growing, prominent and critical organization. This board does not exist to serve itself, its abstractions or its ungrounded anxieties. It exists to practically guide a real organization employing many hundreds of people and serving many millions. There is no room for naiveté.
kippinitreal · 2 years ago
I mean, it's totally fine for a non-profit to "wallow in irrelevance"? I think the fundamental issue here is that a tax-exempt non-profit organization like the one these board members are leading _shouldn't_ necessarily be chasing fame/fortune. By definition they've put mission above profits (with tax writeoffs as a benefit). The naivety seems to be that the company is trying to be altruistically mission-driven while also acting like a typical ambitious/profit-seeking startup. Startups are great! It's just weird to wrap one in a non-profit, which brings different incentives.
kj4211cash · 2 years ago
It seems to me as if Toner was just in the right place at the right time to get a seat on the board of OpenAI. She has an undergrad degree in chemical engineering and a master's in "security studies." Her work on AI Safety opened this door for her but seems, at least to me, to be ... superficial. I'm no expert, but I have worked in tech for a while and at a public policy think tank. So I guess my question is: is there a real scientific field of AI Safety? Are there any real experts? Are there any real insights? I dislike the idea of trusting giant tech companies with breakthrough technologies with minimal oversight/regulation. But it just seems like there is no real science regarding AI x public policy. Like the policy experts have no clue what they are talking about. And after this debacle they probably won't be lucky enough to find themselves on the boards of organizations like OpenAI.
0xDEAFBEAD · 2 years ago
>So I guess my question is: is there a real scientific field of AI Safety? Are there any real experts? Are there any real insights?

I'd say so, and in fact it's older than you might think.

Here's a lit review from 2018 for example: https://arxiv.org/pdf/1805.01109.pdf

This post is more recent, but understanding it might require some context: https://www.alignmentforum.org/posts/zaaGsFBeDTpCsYHef/shall...

A group from Cambridge has been running an online course you can take if you're interested in getting up to speed: https://aisafetyfundamentals.com/ Application deadline is early February. I took the course a couple years ago; happy to answer any questions.

sytelus · 2 years ago
There are real experts on AI safety. The way to distinguish between them and the others is simply to ask: where is the code for your research?