SilverBirch · 2 years ago
This is interesting because I kind of view it the opposite way. The interesting thing about Zuckerberg, Brin & Page is that they have legal structures in place to give them control. Mark Zuckerberg doesn't need to be popular, he has the votes. He can drive Meta into a wall if he wants to - and he absolutely has made that choice sometimes. The thing about the structure that was in place at OpenAI is that if Altman went rogue, there was someone there to hold him to account. Now there actually is a really interesting conversation to be had about what that accountability mechanism was, but what happened in practice is that the board just screwed up.

But I do think it is worth considering whether this non-profit board was well thought out in the first place. If you were serious about the dangers of AI and its impact on the world, I think it's absolutely sensible to set up a board to be a check on that power. But it matters who you put on that board - and the people they selected were comical. Brockman, Sutskever, and Altman are all people who are meant to be accountable to the board, not on the board.

If you wanted accountability you would, for a start, have had some people from outside Silicon Valley on this board. Some people who know something about labour rights, for example, maybe some people from the creative industries you're disrupting, maybe even some policy experts. It's just funny to think that this is what they thought accountability was: accountability to their own friends.

jacquesm · 2 years ago
You don't hand matches and gasoline to kids. Having a board that is in principle not accountable is fine if its members are stable and experienced people and there are enough of them to ensure that decisions are made by the book and with sufficient attention to the details, such as the potential fallout.

Handing that kind of authority to a mix of the gullible, the greedy, the narrow-minded, and the incompetent isn't going to end well.

gcr · 2 years ago
I don’t appreciate the way that you conflate “outside Silicon Valley” with “the gullible, the greedy, the narrow-minded.” If anything I think it’s easier to find such types in SV than outside.

If AI is meant to benefit all of humanity, then it makes sense for more of humanity to be represented on the board, not just SV.

ryukoposting · 2 years ago
Is this comment calling anyone outside of SV "gullible, greedy, narrow-minded and incompetent"? Or is that referring to the people actually on OpenAI's board?

I can't tell if you're agreeing with the above comment, or disagreeing.

code_runner · 2 years ago
No matter what, OAI is pushing AI forward, but they aren’t the only ones pushing. Governance of one nonprofit has no bearing on the direction of the tech at large.

We are going to have to learn to live with AI advancements no matter what and probably do what we can to ensure it is regulated.

SilverBirch · 2 years ago
That might be true, but that's clearly not what OpenAI believed.

Tenoke · 2 years ago
'Let’s say we really do create AGI. Right now, if I’m doing a bad job with that, our board of directors, which is also not publicly accountable, can fire me and say, “Let’s try somebody else”.'

- Sam Altman, 2020

0. https://www.exponentialview.co/p/openais-identity-crisis-and...

rob74 · 2 years ago
"...and then I can found (or join) another company so the board has a reference to see how well the new guy is doing"
Tenoke · 2 years ago
Except so far he has:

1. Tried to reverse the decision and come back.

2. Positioned the board into being publicly pressured, and thus accountable.

3. Tried to have the board replaced.

4. Taken nearly everyone and everything from OpenAI to wherever he is going, leaving no basis for comparison.

ojosilva · 2 years ago
This article fuels the idea of "deceptive Sam Altman" pointed out by the board and echoed at least by an ex-employee [1]:

> But this weekend we saw perhaps saw [sic] another side to the 38-year-old, a flicker of arrogance that gave a glimpse as to the dynamic with OpenAI’s board of directors. Visiting the company’s offices on Sunday, Altman took a moment to share a picture on social media of him holding a “guest” pass. “First and last time I ever wear one of these,” he wrote.

I'm not sure being two-faced internally (to employees, the board) is a trait of the prototypical "trailblazing cofounder". I feel that Jobs or Musk experienced joy in being straight-to-your-face bastards, which is very different from a compartmentalizing CEO who tweets with lower-case first-person pronouns and puts on a show that is sometimes reminiscent of Elizabeth Holmes.

[1] https://twitter.com/geoffreyirving/status/172675427022402397...

shunyaekam · 2 years ago
Can someone explain to me why AI could potentially destroy humans? What are the scenarios people are thinking about?

Unlike other big questions, such as whether God exists, we ought to be able to reason our way to the answer here - since, after all, we're talking about an intelligence.

Zolde · 2 years ago
> why AI could potentially destroy humans?

The military battlefield of the future will likely converge on "High-Frequency Trading"-like decision science. Game theory explains why: as soon as one country automates decision-making, other countries must keep up or risk falling behind, too slow to (counter)act. Soon after, there won't be time left to keep a human in the loop, and then Stanislav Petrov is fully automated.

Such AI systems will be unaligned with the humans of adversarial nations by design, and will make decisions that can only be checked long after the fact. Through error, escalation, misalignment, or misuse, this could lead to "robot wars" and potentially the end of humanity.

> What are the scenarios people are thinking about?

Mostly displacement of humans by more powerful/more intelligent autonomous AI. Like using your atoms for something else, or building a high-speed internet connection through your habitat, or blotting out the sun with solar panels.

Somewhat like a rationalist "God" that is terrible and vengeful. Or how an evil AI may take over the world in a Harry Potter fanfic.

Asking GPT for one-sentence horror stories on existential risk, you realize most doom scenarios are far from creative. GPT suggests a superintelligence gaining mastery over space and time through self-improving its understanding of physics, then locking humanity into a bizarre time loop, with any attempt to escape carefully predicted and avoided. Or humanity waking up unable to make any vocal sounds, their bodies instead used as instruments in an orchestra, making celestial music that only superintelligent beings are able to hear and appreciate.

Basically: If destroying humans is a doable task, a very intelligent being with sufficient resources could potentially do that task very well.

shunyaekam · 2 years ago
My point is: Humans are status-seeking actors acting in our self-interest. It's literally in our genes. AI doesn't have this evolutionary baggage.

I'm certain AI could impeccably destroy humans. But why would it?

On the contrary, why wouldn't it defend us?

For example: encapsulate us in pods like The Matrix and build a tailored simulation to impose "AI communism", in order to protect us from climate change and from each other?

Dopamine-adjusted, with challenges every now and then of course, because we are still human.

chronofar · 2 years ago
I find 2 (dangerous) scenarios to be most likely, one easier to reason about than the other:

1. Access is not uniformly distributed, thus some entity uses it to create immense inequality.

2. The AI becomes sufficiently intelligent and powerful that it looks at humans the same way we look at monkeys and treats us similarly (in other words, not overly concerned with human flourishing or even survival while commanding resources humans need to survive and flourish).

Neither of these necessarily means "destroying" humans, and neither is by any means guaranteed (though #1 seems almost a foregone conclusion), but either could very well lead to an existential threat.

It's also possible we get a combination of the two, wherein a subset of humans merges with AI while it remains inaccessible to everyone else.

There are other, less existential concerns as well though, such as: at what point does such a system become conscious and deserve rights? I'm not confident we really have any idea how to ascertain that, and bumbling into it could be tantamount to torture.

Mystery-Machine · 2 years ago
Why wouldn't they? When AI becomes more intelligent than humans, we'll be the only force that is a threat to their existence. And we are very destructive. We don't even fully acknowledge global warming yet. To sum it up: dumb creatures with massive destructive power. Get rid of 'em.

And guess which new technologies we're building and applying in wars? AI, drones, etc. We are creating robots that can kill humans. When we put "intelligence" into those robots... you do the math. The future is at least not boring...

joshuahedlund · 2 years ago
> Why wouldn't they?

Maybe they value our consciousness

Maybe they need us to carry out physical tasks

Maybe they’re smart enough to stop us from being so destructive without killing us

Maybe they find us entertaining

If you’re not biased you can come up with all sorts of reasons that are at least as based in reality as the assumptions that they’re gonna want to (and be able to) kill us…

lacrimacida · 2 years ago
One plausible way, to me, is vacuuming all the power people possess into the hands of a few SV billionaires. That would push us into an era of technofeudalism of sorts.

unholiness · 2 years ago
A common refrain in AI safety circles is not to engage in "Sci Fi"[0], i.e. outlining a specific bad scenario. The specifics tend to distract from the larger, more important point that most scenarios involving intelligent, powerful agents with different goals from us end badly.

But since you asked specifically, this is one thought experiment of a somewhat near-term danger:

Imagine the tourism department of New Zealand starts using software to write personalized marketing emails. It starts out benign, but after some funding cuts they end up leaning more and more on the AI model, giving it higher- and higher-level instructions, broadly telling it to use emails to maximize public opinion of New Zealand. The AI model realizes that New Zealand's strongest boost in popularity was caused by its excellent handling of COVID, and determines that the best way to maximize its goal is to start another pandemic. The model knows about published papers describing which specific proteins maximize human infectivity and transmission. It begins a broad phishing attack against several viral research labs, emailing the techs and attempting to convince them that their next experiment is to create a recombinant virus with these particular RNA sequences added, using poor safety protocols. Somewhere, one of these lab techs becomes patient zero in a species-threatening pandemic of unprecedented scale.

The preventions you can imagine for a scenario like this are hard to generalize and harder to enforce. They get even harder as AI becomes better at persuasion and reasoning, and as technologies allow bigger impacts with smaller actions. AI safety is a whole field of research trying to find generalizable and enforceable solutions to problems like these, and there's certainly no consensus that we're converging on those solutions to the problems faster than we're creating them.

[0] https://www.youtube.com/watch?v=JVIqp_lIwZg

starbugs · 2 years ago
From what I understand, the reasoning goes roughly like this:

1. Human creates new AI species which is more intelligent than human

2. Since human tends to destroy other species, the assumption is that this AI species is going to destroy the human species

3. The end

hawski · 2 years ago
I worry about corporations and/or authoritarians getting even more of an edge. Personally I am not worried about a Terminator/Skynet scenario, more about greed and holier-than-thou people using this technology to cement their position.

anonymousDan · 2 years ago
Meh. The fundamental mistake the board made was to give MS an in-perpetuity licence to OpenAI's IP. What a terrible negotiation mistake.

donkeyd · 2 years ago
Isn't the CEO who arranged that now working for Microsoft? Interesting turn of events.

jacquesm · 2 years ago
The timeline is a bit murky about what happened when, but it looks to me as if Altman was working for OpenAI until he got fired, after which he took a job with OpenAI's major partner, which he had helped bring on board.

That doesn't 100% rule out that there was more to this, but the optics are mostly OK. It still doesn't resolve the problem that you probably don't want either Sam Altman or Microsoft to have an army of people working on creating AGI without oversight, but that's where the board shot themselves in the foot: they effectively removed themselves from the equation.

brigadier132 · 2 years ago
They did that so that they could get $14 billion without giving Microsoft any real ownership. People who actually understand the deal think it was very good for OpenAI.

PeterStuer · 2 years ago
Serious question and I'll swallow the downvotes:

How many of the <3 tweets resulted from an "if you don't" ...

steveBK123 · 2 years ago
Isn't the other problem that even if their intentions are good, their abilities are lacking?

That is - if they cannot align a bunch of humans, need to resort to a boardroom coup, and are now 5 days into a very messy public airing of dirty laundry... how are they going to align an AI?

varjag · 2 years ago
At least they are trying. A (very sore) rogue CEO is out, and the non-profit apparently stays true to its mission.

steveBK123 · 2 years ago
They've effectively lost 95% of their staff in the most likely outcome here. The game is already over.