Pet_Ant · a year ago
In case people haven't noticed, this is the second resignation in as many days.

https://news.ycombinator.com/item?id=40361128

hn_throwaway_99 · a year ago
They resigned together on the same day - people are just announcing this like it's some type of "drip drip" of people leaving to build suspense.

While Jan's (very pithy) tweet was later in the evening, I was reading other posts yesterday at the time of Ilya's announcement saying that Jan was also leaving.

dawoodee · a year ago
A few more people involved in the alignment efforts have left recently: https://x.com/ShakeelHashim/status/1790685752134656371
pfist · a year ago
I have noticed, and I am concerned that they were the leaders of the Superalignment team.
zer00eyz · a year ago
Sam Altman superaligned them right out the door...
dontupvoteme · a year ago
Turns out we already have alignment, it's called capitalism.
treme · a year ago
Reads like beginnings of a good dystopian movie script
transcriptase · a year ago
On the other hand, they clearly weren't concerned enough about the issue to continue working on it.
the_mitsuhiko · a year ago
And entirely predictable from the first one: https://openai.com/index/introducing-superalignment/
btown · a year ago
Makes me wonder if that 20% compute commitment to superalignment research was walked back (or redesigned so as to be distant from the original mission). Or, perhaps the two deemed that even more commitment was necessary, and were dissatisfied with Altman's response.

Either way, if it's enough to cause them both to think it's better to research outside of the opportunities and access to data that OpenAI provides, I don't see a scenario where this doesn't indicate a significant shift in OpenAI's commitment to superalignment research and safety. One hopes that, at the very least, Microsoft's interest in brand integrity incentivizes some modicum of continued commitment to safety research.

uLogMicheal · a year ago
Imagine trying to keep something so far above us in intelligence, caged. Scary stuff...
kamikaz1k · a year ago
who is this and why is it important? [1]

super-alignment co-lead with Ilya (who resigned yesterday)

what is super alignment? [2]

> We need scientific and technical breakthroughs to steer and control AI systems much smarter than us. Our goal is to solve the core technical challenges of superintelligence alignment by 2027.

[1] https://jan.leike.name/ [2] https://openai.com/superalignment/

jvanderbot · a year ago
My honest-to-god guess is that it just seemed like a needless cost center in a growing business, so there was pressure against them doing the work they wanted to do.

I'm guessing, but OpenAI probably wants to start monetizing, and doesn't feel like they are going to hit a superintelligence, not really. That may have been the goal originally.

hollerith · a year ago
>it just seemed like a needless cost center in a growing business

To some of us, that sounds like, "Fire all the climate scientists because they are a needless cost center distracting us from the noble goal of burning as much fossil fuel as possible."

mjr00 · a year ago
Yeah, OpenAI is all-in on the LLM golden goose and is much more focused on how to monetize it via embedding advertisements, continuing to provide "safety" via topic restrictions, etc., than going further down the AGI route.

There's zero chance LLMs lead to AGI or superintelligence, so if that's all OpenAI is going to focus on for the next ~5 years, a group related to superintelligence alignment is unnecessary.

llamaimperative · a year ago
So you mean the things the charter foresaw and was intended to make impossible are in fact happening? Who could've thunk it (other than the creators of the charter and nearly anyone else with a loose grasp on how capitalism and technology interact).
hollerith · a year ago
If AGI is no longer Sam Altman's goal, why was he recently trying to raise 7 trillion dollars for hardware to accelerate progress in AI?
threetonesun · a year ago
I assume a lot of companies want in on the AI-to-purchase pipeline. "Hey [AI] what kind of car is this?" with a response that helps you buy it at the very high end, or something as simple as "hey [AI] I need more bread, it's [brand and type]" and who it gets purchased from and how it shows up is the... bread and butter of the AI company.

Super intelligent AI seems contrary to the goals of consumerist Capitalism, but maybe I'm just not smart enough to see the play there.

icapybara · a year ago
This is the simplest explanation.
for_i_in_range · a year ago
I agree. Not everything has to be a conspiracy. Microsoft looked at a $10m+/year cost center, and deemed it unnecessary (which it arguably was), and snipped it.
legohead · a year ago
What is the "intelligence" behind a word predictor?
DrSiemer · a year ago
Fake it till you make it
TiredOfLife · a year ago
> We need scientific and technical breakthroughs to steer and control AI systems much smarter than us. Our goal is to solve the core technical challenges of superintelligence alignment by 2027

Can somebody translate this to human?

brandall10 · a year ago
That by 2027 they will figure out how to control Skynet so it doesn't kill us all when it awakens.
colibri727 · a year ago
What are these core technical challenges?
ganzuul · a year ago
I bet superalignment is indistinguishable from religion (the spiritual, not manipulative kind), so proponents get frequency-pulled into the well-established cult leader pipeline. It's a quagmire to navigate so we can't have both open and enlightening discussions about what is going on.
marricks · a year ago
It's also about making sure AI is aligned with "our" intent, where "our" means a board made up of large corporations.

If AI did run away and do its own thing (seems super unlikely), it's probably a crapshoot as to whether what it does is worse than the environmental apocalypse we live in, where the rich continue to get richer and the poor poorer.

ben_w · a year ago
It can only be "super unlikely" for an AI to "run away and do its own thing" when we actually know how to align it.

Which we don't.

So we're not aligning it with corporate boards yet, though not for lack of trying.

(While LLMs are not directly agents, they are easy enough to turn into agents, as the sketch after this comment illustrates, and there are plenty of people willing to do that and disregard any concerns about the wisdom of doing so).

So yes, the crapshoot is exactly what everyone in AI alignment is trying to prevent.

(There's also, confusingly, "AI safety", which includes alignment but also covers things like misuse, social responsibility, and so on)
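
For concreteness, "turning an LLM into an agent" usually just means wrapping the model in a loop that feeds its own output back in as actions and observations. A minimal sketch of such a loop, where call_llm is a hypothetical stand-in for whatever completion API you use and the only "tool" is a toy calculator:

    import re

    def call_llm(prompt: str) -> str:
        """Hypothetical placeholder for any chat/completions endpoint."""
        raise NotImplementedError("plug in your preferred LLM API here")

    # Toy tool registry; the restricted eval() is only suitable for a demo.
    TOOLS = {"calc": lambda expr: str(eval(expr, {"__builtins__": {}}))}

    def run_agent(task: str, max_steps: int = 5) -> str:
        transcript = (
            f"Task: {task}\n"
            "Reply with 'ACTION: <tool> <input>' or 'FINAL: <answer>'."
        )
        for _ in range(max_steps):
            reply = call_llm(transcript)
            if reply.startswith("FINAL:"):
                return reply[len("FINAL:"):].strip()
            match = re.match(r"ACTION:\s*(\w+)\s+(.*)", reply)
            if match and match.group(1) in TOOLS:
                observation = TOOLS[match.group(1)](match.group(2))
            else:
                observation = "unrecognised action"
            transcript += f"\n{reply}\nOBSERVATION: {observation}"
        return "gave up"

Everything that makes such a loop useful or risky lives in which tools you hand it and how much you let it act unsupervised.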

uLogMicheal · a year ago
I thought the whole point of making a transparent organization to lead the charge on AI was so that we could prevent this sort of ego and the other risks that come with it.
alephnerd · a year ago
Nonprofits are not really that transparent, and do bend to the will of donors, who themselves try to limit transparency.

That's why private foundations are more popular than public charities even though both are 501(c)(3) organizations: they don't need to provide transparency into their operations.

ganzuul · a year ago
Say I have intelligence x and a superintelligence has 10x. Then I get stuck at local minima that the 10x is able to get out of. To me, the local minima looked "good", so if I see the 10x get out of my "good", then most likely I'm looking at something that appears to me to be "evil", even if that is just my limited perspective.

It's one hell of a problem.
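
A toy numerical sketch of the asymmetry described above (the 1-D landscape and all numbers are invented purely for illustration): a weak searcher taking small greedy steps settles into a shallow valley, while a searcher that can explore more widely finds the deeper one. From inside the shallow valley, any move out of it looks like getting worse.

    import random

    def f(x):
        # Shallow local minimum near x = 1, deeper global minimum near x = 4.
        return (x - 1) ** 2 * (x - 4) ** 2 - x

    def greedy(x, step=0.01, iters=5000):
        """Weak searcher: only accepts a small move that looks better."""
        for _ in range(iters):
            for cand in (x - step, x + step):
                if f(cand) < f(x):
                    x = cand
        return x

    def wide_search(trials=50):
        """Stronger searcher: restarts greedy search from many random points."""
        return min((greedy(random.uniform(-2, 7)) for _ in range(trials)), key=f)

    random.seed(0)
    weak, strong = greedy(0.0), wide_search()
    print(f"weak searcher:   x = {weak:.2f}, f = {f(weak):.2f}")    # stuck near x ~ 1
    print(f"strong searcher: x = {strong:.2f}, f = {f(strong):.2f}")  # finds x ~ 4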

adverbly · a year ago
I think you're on to something, but to me it has more to do with being part of the set of issues that intersect political policy and ethics. I see it as facing the same "discourse challenges" as:

abortion

animal cruelty laws/veganism/vegetarianism

affirmative action

climate change (denial)

These are legitimate issues, but it is also totally possible to just "ignore" them and pretend like they don't exist.

ganzuul · a year ago
This time we have a genie in a lamp which will not be ignored. This should mean that a previously unknown variable is now set to "true" so discussion is more focused on reality.

However the paranoid part of me says that these crises and wars are just for the sake of letting people continue to ignore the truly difficult questions.

dontupvoteme · a year ago
>Frequency-pulled

You mean like injection locking with oscillators? Or is this a new term in the tweetosphere?

ganzuul · a year ago
Injection locking. This: https://www.youtube.com/watch?v=e-c6S6SdkPo

I mean it hides nuance in conversation.
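
For anyone else unfamiliar with the term: in the oscillator setting, locking and "pulling" are usually described with the Adler phase equation, dphi/dt = delta_omega - K*sin(phi), where delta_omega is the detuning between the two oscillators and K the injection strength. A rough numerical sketch (textbook model, arbitrary toy numbers, not tied to the linked video):

    import math

    def adler_phase(delta_omega, K, dt=1e-3, steps=20000):
        """Euler-integrate dphi/dt = delta_omega - K*sin(phi)."""
        phi = 0.0
        for _ in range(steps):
            phi += (delta_omega - K * math.sin(phi)) * dt
        return phi

    K = 1.0  # injection strength (arbitrary units)

    # Detuning inside the locking range: phi settles to a constant,
    # i.e. the oscillator ends up running at the injected frequency.
    print(f"locked: phi -> {adler_phase(0.5, K):.3f} rad")

    # Detuning outside the locking range: phi keeps slipping, but more slowly
    # than the bare detuning would give -- the frequency is "pulled".
    print(f"pulled: phi after 20 time units = {adler_phase(1.5, K):.1f} rad")

The metaphor in the parent comment is that a strong enough external signal drags the weaker oscillator onto its own frequency.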

Jimmc414 · a year ago
Jan and Ilya were the leads of the superalignment team set up in July of 2023.

https://openai.com/index/introducing-superalignment/

"Our goal is to solve the core technical challenges of superintelligence alignment in four years.

While this is an incredibly ambitious goal and we’re not guaranteed to succeed, we are optimistic that a focused, concerted effort can solve this problem: There are many ideas that have shown promise in preliminary experiments, we have increasingly useful metrics for progress, and we can use today’s models to study many of these problems empirically.

Ilya Sutskever (cofounder and Chief Scientist of OpenAI) has made this his core research focus, and will be co-leading the team with Jan Leike (Head of Alignment). Joining the team are researchers and engineers from our previous alignment team, as well as researchers from other teams across the company."

nicklecompte · a year ago
It is easy to point to loopy theories around superalignment, p(doom), etc. But you don't have to be hopped up on sci-fi to oppose something like GPT-4o. Low-latency response time is fine. The faking of emotions and overt references to Her (along with the suspiciously-timed relaxation of pornographic generations) are not fine. I suspect Altman/Brockman/Murati intended for this thing to be dangerous for mentally unwell users, using the exact same logic as tobacco companies.
vasco · a year ago
> The faking of emotions and overt references to Her (along with the suspiciously-timed relaxation of pornographic generations) are not fine.

Are you not aware of how many billions are getting spent on fake girlfriends on OnlyFans, with millions of people chatting away with low-paid labor across an ocean pretending to be an American girl? This is just reducing costs; the consumers already want the product.

I'm not sure I get the outrage / puritanism. Adults being able to chat with a fake girlfriend if they want to seems super bland. There's far wilder stuff you can do online, potentially also exploiting the real person on the other side of the screen if they are trafficked or whatever. I don't have any mental issues (well, who knows? ha) and would genuinely try it the same way you try a new porn category every once in a while.

screye · a year ago
The 2nd most visited GenAI website is Character.AI.

The net sum of all LLM B2C use cases (effectively ChatGPT) is competing with AI girlfriends for rank 1.

It isn't just huge. It is the most profitable use case for gen AI.

"A goal of a system is what it does"

Retr0id · a year ago
There's a lot of harmful stuff already in the world, but most of us would rather not add to the pile on an industrial scale.
bnralt · a year ago
One could also say that therapists prey on lonely people who pay them to talk and to seem genuinely interested in them, when the therapist wouldn't bother having a connection with these people once they stop paying. Which I suppose is true from a certain point of view. But from another point of view, sometimes people feel like they don't have close friends or family to talk to and need something, even if it's not genuine love or friendship.
kettro · a year ago
This is implying that therapy is nothing more than someone to talk to; if that’s your experience with therapy, then you should get another therapist.
hn_throwaway_99 · a year ago
> One could also say that therapists prey on lonely people who pay them to talk and to seem genuinely interested in them, when the therapist wouldn't bother having a connection with these people once they stop paying.

As another commenter said, if that's your experience with a therapist, you have a shitty therapist and should switch.

Most importantly, a good therapist will very clearly outline their role, discuss with you what you hope to achieve, etc. I've been in therapy many years, and I know exactly what I'm paying for. Sure, some weeks I really do just need someone to talk to. But never have I, or my therapist, been unclear that I am paying for a service, and one that I value much more than just having "someone to talk to".

Using the terminology "prey on lonely people" is ridiculous (again, for any good therapist). If they were actually preying on me, then their goal would be to keep me lonely so I become dependent on them (and I'm not saying that never happens, but when it does it's called malpractice). A good therapist's entire goal is to make people self-sufficient in their lives.

smt88 · a year ago
Therapists are educated and trained to help alleviate mental-health issues, and their licenses can be revoked for malpractice. Their livelihood partially depends on ethics and honest effort.

None of those safeguards are in place for AI companies.

detourdog · a year ago
Ideally a therapist is an uninvolved, neutral party in one's life. They act as a sounding board to measure one's internal reactions to the outside world.

The key is a neutral point of view. Friends and family come with bias. The biases can be compounded by mentally ill friends and family.

Therapists must meet with other therapists about their patient interactions. The second therapist acts as a neutral third party to keep the first therapist from losing their neutrality.

That is the ideal and the real world may differ.

I'm struggling with someone that looks to be having some real mental issues. The person believes I'm the issue and I need to maintain a therapist to make sure I'm treating this person fairly.

I need a neutral third party I can gossip with who is bound to keep it to themselves.

koe123 · a year ago
One could then argue that all transactional relationships are predatory, right? A restaurant serves you only for pay.

You could argue cynically that all relationships are to some extent transactional. People “invest” in friendships after all. It’s just a bit more abstract.

Maybe the flaw in the logic is the assumption of some sort of "genuine" binary: things are either genuine or they aren't. When we accept such a binary, lots of things can be labeled predatory.

tech_ken · a year ago
> One could also say that therapists prey on lonely people who pay them to talk to them

It is indisputable that one could say this

fullshark · a year ago
I'll just point to the theory that they didn't want to work for a megacorp creating tools for other megacorps (or worse) and actually believed in OpenAI's (initial) mission to further humanity. The tools are going to be used by deep-pocketed entities for their own purposes; the compute resources necessary require that to be the case for the foreseeable future.
shmatt · a year ago
Realistically it's all just probabilistic word generation. People "feel" like an LLM understands them, but it doesn't; it's just guessing the next token (a toy sketch of that loop follows this comment). You could say all our brains are doing is guessing the next token, but that's a little too deep for this morning.

All these companies are doing now is taking an existing inference engine and making it 3% faster, 3% more accurate, etc. per quarter, fighting over the $20/month users.

One can imagine product is now taking the wheel from engineering and building ideas on how to monetize the existing engine. That's essentially what GPT-4o is, and who knows what else is in the 1-, 2-, 3-year roadmaps for any of these $20 companies.

To reach true AGI we need to get past guessing, and that doesn't seem close at all. Even if one of these companies gets better at making you "feel" like it's understanding and not guessing, if it isn't actually happening, it's not a breakthrough.

Now, with product leading the way, it's really interesting to see where these engineers head.
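
For readers who want to see what "guessing the next token" means concretely, here is a toy greedy decoding loop. It assumes the Hugging Face transformers library and the small open GPT-2 weights; OpenAI's production models are not public, so this only shows the shape of the computation, not their system:

    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained("gpt2")
    model = AutoModelForCausalLM.from_pretrained("gpt2")
    model.eval()

    prompt = "The superalignment team resigned because"
    input_ids = tokenizer(prompt, return_tensors="pt").input_ids

    # Generate 20 tokens, one at a time: at each step the model emits a
    # distribution over the vocabulary and we append the most likely token.
    with torch.no_grad():
        for _ in range(20):
            logits = model(input_ids).logits     # (1, seq_len, vocab_size)
            next_id = logits[0, -1].argmax()     # greedy choice
            input_ids = torch.cat([input_ids, next_id.view(1, 1)], dim=1)

    print(tokenizer.decode(input_ids[0]))

Whether that loop amounts to "understanding" is exactly what the replies below argue about.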

DiogenesKynikos · a year ago
> People "feel" like an LLM understands them, but it doesn't; it's just guessing the next token. You could say all our brains are doing is guessing the next token, but that's a little too deep for this morning

"Just" guessing the next token requires understanding. The fact that LLMs are able to respond so intelligently to such a wide range of novel prompts means that they have a very effective internal representation of the outside world. That's what we colloquially call "understanding."

qq66 · a year ago
Doesn't have to be smart to be dangerous. The asteroid that killed the dinosaurs was just a big rock.
bambax · a year ago
Oh well... It seems at least one of those two things has to be true: either AGI is so far away that "alignment" (whatever it means) is unnecessary; or, as you suggest, Altman et al. have decided it's a hindrance to commercial success.

I tend to believe the former, but it's possible those two things are true at the same time.

Liquix · a year ago
or C) the first AGI was/is being/will be carried away by men with earpieces to a heavily fortified underground compound. no government - let alone the US government - is going to twiddle its thumbs while tech that will change human history is released to the unwitting public. at best they'll want to prepare for and control the narrative surrounding the event, at worst AGI will be weaponized against humans before the majority are aware it exists.

if OAI is motivated by money, uncle sam can name any figure to buy them out. if OAI is motivated by power, it becomes "a matter of national security" and they do what the gov tells them. more likely the two parties' interests are aligned and the public will hear about it when It's Time™. not saying C) is what's happening - A) seems likely too - but it's a real possibility

nicklecompte · a year ago
Specifically I am supposing the superalignment people were generally more concerned about AI safety and ethics than Altman/etc. I don't think this has anything to do with superalignment itself.
llm_trw · a year ago
>dangerous for mentally unwell users

It's not our job to make the world safe for fundamentally unsafe people.

lantry · a year ago
This is literally everyone's job. It's the whole point of society. Everyone is "fundamentally unsafe", and we all rely on each other.
itishappy · a year ago
I'm guessing your work isn't sanitation either. Do you throw your trash straight on the ground?

Some things are everyone's responsibility if we want to live in a pleasant society.

nicklecompte · a year ago
"fundamentally unsafe people" is probably the grossest thing I've read on here in years.
kettro · a year ago
I would argue that it is society’s job to care for its most vulnerable.
poulpy123 · a year ago
Actually yes, it's our job

limpbizkitfan · a year ago
Okay this is a weird philosophy to have lol
yinser · a year ago
What a wild accusation for someone light years away from the board room.
nicklecompte · a year ago
I wasn't making an accusation about why Leike/Sutskever left, though I definitely understand why you read my comment that way.

The actual accusation I am making is that someone at OpenAI knew the risks of GPT-4o and Sam Altman didn't care. I am confident this is true even without spies in the boardroom. My guess is that Leike or Sutskever also knew the risks and actually did care, but that is idle speculation.

lghh · a year ago
> along with the suspiciously-timed relaxation of pornographic generations

Has ChatGPT's censoring (a loaded term, but idk what else to use) been relaxed with GPT-4o? I have not tested it because I wouldn't have expected them to do it. Does this also extend to other types of censorship or filtering they do? If not, it feels very intentional in the way you're alluding to.

ziml77 · a year ago
I don't see anything that says they've changed their policies yet. Just that they're looking into it. I also tested 4o and it still gives me a content policy warning for NSFW requests.
qarl · a year ago
> The faking of emotions

HEH. In previous versions, when it told jokes, were those fake jokes?

awkwardpotato · a year ago
Those are fundamentally different things. You can tell a joke without understanding context, but you can't express emotions if you don't have any. It's a computational model; it cannot feel emotion.
ToucanLoucan · a year ago
The use of LLMs as pseudo-friends or girlfriends, as a market solution for loneliness, is so incredibly sad and dystopian. Genuinely one of the most unsettling goddamn things I've seen gain traction since I've been in this industry.

And so many otherwise perfectly normal products are now employing addiction mechanics to drive engagement, but somehow this one is just even further over the line for me in a way I can't articulate. I'm so sick of startups taking advantage of people. So, so fucking gross.

swatcoder · a year ago
It's a technological salve that gives individuals a minor and imperfect remedy for a profound failure in modern society. It's of a kind with pharmaceutical treatments for depression or anxiety or obesity -- best seen as a temporary "bridge" towards wellness (achieved, perhaps, through other interventions) -- but altogether just trying to help troubled individuals navigate a society that failed to enable their deeper wellness in the first place.

Intralexical · a year ago
Idk man, I'm too busy being terrified of the use of LLMs as propaganda agents, micro-targeting adtech vectors, mass gaslighters and cultural homogenizers.

I mean, these things are literally designed to statelessly yet convincingly talk about events they can't see, experiences they can't understand, emotions they can't feel… If a human acted like that, we'd call them a psychopath.

We already know that our social structures tend to be quite vulnerable to dark triad type personalities. And yet, while human psychopaths are limited by genetics to a small percentage of the population, there's no limit on the number of spambot instances you can instruct to attack your political rivals, Alexa 2.0 updates that could be pushed to sound 5% sadder when talking about a competitor's products, LLM moderators that can be deployed to subtly correct "organic" interactions that leave a known profitable state space… And that's just the obvious next steps from where we're already at today. I'm sure the real use cases for automated lying machines will be more horrifying than most of us could imagine today, just as nobody could have predicted in 2010 that Twitter and Facebook would enable ISIS, Trump, nonconsensual mass human experimentation, the Rohingya genocide…

Which is to say, selling LLM "friends" or "girlfriends" as a way to addictively exploit people's loneliness seems like one of the least harmful things that could come out of the current "AI" push. Sad, yes, but compared to where I think this is headed, that seems like dodging a bullet.

> I'm so sick of startups taking advantage of people. So, so fucking gross.

Silicon Valley was a mistake. An entire industry controlled largely by humans that decided they like predictable programmable machines more than they like free and equal persons. What was the expected outcome?

poulpy123 · a year ago
I saw the faking of emotions; it's already visible in previous LLMs, and I find that extremely annoying indeed.
vunderba · a year ago
Not fine... to you.

What's your stance on other activities which can lead to harmful actions from people with predilections towards addiction, such as:

1. Loot boxes / Freemium games

2. Legalized gambling

3. Pornography

etc. etc.

I don't really have a horse in the race, neither for/against, but I prefer consistency in belief systems.

nicklecompte · a year ago
I am criticizing Sam Altman for making an unethical business decision. I didn't say "Sam Altman should go to jail because GPT-4o is creepy" or "I want to take away your AI girlfriend." So I am not sure what "belief system" (ugh) you think I need to demonstrate the consistency of. Almost seems like this question is an ad hominem distraction....

All three of the categories of businesses you mentioned can be run ethically in theory. In practice that is rare: they are often run in a way that shamelessly preys on vulnerable people, and these tactics should be more closely investigated by regulators - in fact they are regulated, and AI chatbots should be as well. Sam Altman is certainly much much more ethical than most pornography executives (e.g. OnlyFans is complicit in widespread sex trafficking), but I don't think he's any better than freemium game developers.

This question seems like a bad-faith rhetorical trap, sort of like the false libertarian dilemmas elsewhere in the thread. I believe the real issue is that people want a culture where lucrative business opportunities aren't subject to ethical considerations, even by outside observers.

sebzim4500 · a year ago
>I suspect Altman/Brockman/Murati intended for this thing to be dangerous for mentally unwell users

Isn't it much more likely that they are just trying to make a product that people want to use?

Even Tobacco companies don't go out of their way to give people cancer.

smt88 · a year ago
> just trying to make a product that people want to use?

Sure, but you can do that ethically or unethically.

If you make a product that's harmful, disincentivizes healthy behavior (like getting therapy), or becomes addictive, then you've crossed into something unethical.

> Even Tobacco companies don't go out of their way to give people cancer.

This is like saying pool companies don't go out of their way to get people wet.

While it isn't their primary goal, the use of tobacco causes cancer, so their behaviors (promoting addiction among children, burying medical research, publishing false research, lobbying against regulation) are all in service of ultimately giving cancer to more people.

Cancer and cigarettes are inseparable, the same way casinos and gambling addiction are inseparable.

talldayo · a year ago
But tobacco companies are still complicit in distributing addictive carcinogens to people even if only in trace amounts. The same could be said about predatory business models/products.
GaggiX · a year ago
There has been no relaxation of pornographic generations on OpenAI products.
qingcharles · a year ago
They announced an intention to allow porn generations.
abeppu · a year ago
I think the comparison to tobacco companies is misleading because tobacco is not good for anyone, poses a risk of harm to everyone who uses it regularly, and causes very bad outcomes for some of those users. I.e. there's not a large population who can use tobacco without putting themselves at risk.

But hypothetically if a lot of people would benefit from a GPT with more fake emotions, that might reasonably counterbalance concerns about harm for a mentally unwell minority. If we build a highway, we know that eventually it will lead to deaths from car crashes -- but if the highway is actually adding value by letting people travel, those benefits might reasonably be expected to outweigh that harm. And the people getting into their cars and onto the highway agree, that the benefits outweigh the costs, right up until they crash.

None of this is to say that I think OpenAI's choices here were benevolent rather than a business choice. But I think even if they were trying to do the ethically best thing for the world overall, it would still be plausible to move forward.

I for one found the fake emotions in their voice demos to be really annoying tho.

alxjrvs · a year ago
Playing devils advocate for a moment - have you ever had a cigarette? It does plenty of good for the user. In fact, I think we do make this risk calculation that you describe in the exact same way - there are plenty of substances that are so toxic to humanity that we make them illegal to own or consume or produce, and the presence of these in your body can sometimes even risk employment, let alone death.

We know the risks from cigarettes, but it offers tangible benefits to its users, so they continue to use the product. So too cars and emotionally manipulative AI's, I imagine.

(None of this negates your overall point, but I do think the initial tobacco comparison is very apt.)

simonw · a year ago
I don't understand what you're referring to with that tobacco reference.
ChicagoBoy11 · a year ago
Not the parent comment, but I think he means something like "we know folks will be addicted to this pseudo-person and that is a good thing cause it makes our product valuable", akin to reports that tobacco companies knew the harms and addictive nature of their products and kept steadfast nonetheless. (But I'm speculating as to the parent's actual intent)
nicklecompte · a year ago
https://nida.nih.gov/publications/research-reports/tobacco-n...

> A larger proportion of people diagnosed with mental disorders report cigarette smoking compared with people without mental disorders. Among US adults in 2019, the percentage who reported past-month cigarette smoking was 1.8 times higher for those with any past-year mental illness than those without (28.2% vs. 15.8%). Smoking rates are particularly high among people with serious mental illness (those who demonstrate greater functional impairment). While estimates vary, as many as 70-85% of people with schizophrenia and as many as 50-70% of people with bipolar disorder smoke.

I am accusing OpenAI (and Philip Morris) of knowingly profiting off mental illness by providing unhealthy solutions to loneliness, stress, etc.

atlasunshrugged · a year ago
I read it as the economics of tobacco (and alcohol and a few other 'vice' industries) that there will invariably be superusers who get addicted and produce the most economic value for companies even while consuming an actively harmful product
renewedrebecca · a year ago
Purposely making an addiction machine, most likely.

light_triad · a year ago
Both super-alignment leads resigned. One way to interpret this is that it's an interesting research topic that has wide repercussions the closer they get to AGI, but it's more of an extra cost that doesn't directly help with monetisation and productisation. The real killer applications are in business verticals with domain-specific expertise and customised (and proprietary) datasets, not so much the 'pure' academic research.

OpenAI dedicated 20% of compute to the effort which sounds kind of like Google's 20% side project time :)

w10-1 · a year ago
It looks like "Super-alignment" was about automating monitoring, and their job was to find AI researchers who wanted not to build new things but to find problems.

But there's really zero glory or profit in doing QA, much as users complain about quality issues.

So perhaps they succeeded and built a team, or perhaps they found they couldn't recruit a decent team, or perhaps they failed to produce anything useful (since the goal of not offending anyone or embarrassing the company or doing harm is relatively difficult to concretize).

Regardless, I suspect there's no good answer yet to how to manage quality, in part because there's no good answer on how these things work. That problem will likely remain long after people have forgotten about Ilya, Jan, or even Sam.

Though it might be solved by rethinking the idea that "attention is all you need".

willsmith72 · a year ago
Why can't openai just be a competent drama-free org?

Is it impossible to build really cool tech without the upheaval?

CSMastermind · a year ago
Probably because if you collect a bunch of highly motivated and extremely smart people who are good at overcoming obstacles and eliminating problems, they eventually start to see each other as obstacles or problems.
llm_trw · a year ago
Having been on founding teams that imploded like OpenAI and ones that succeeded, I've found that waving a big wad of cash before every meeting does wonders to motivate people.

>This is what we all want, and we want it as fast as possible. Let's get through this together and enjoy our tropical islands alone.

You can trust people's greed, you can't trust their morals. Anyone who says otherwise hasn't seen a large enough wad of cash yet.

Apocryphon · a year ago
This sort of thing happens even in FOSS projects all of the time, you can’t have people without drama. And perhaps you won’t have AI without drama, either.
llm_trw · a year ago
FOSS projects are drama. I've found it much worse in the non-profit space than the for-profit space. Which OpenAI showed us, ironically.
makk · a year ago
> Why can't openai just be a competent drama-free org?

Because it is run by humans, for the time being.

jlarocco · a year ago
That's funny.

If OpenAI's tech is so great, why aren't they using it to direct the company yet?

summerlight · a year ago
You have limited resources for both compute and engineering. And everyone has different, conflicting priorities. The group needs to decide where to put its resources, so there are politics.
motoboi · a year ago
You can't remove politics (human beings interacting) from human interactions.
ergocoder · a year ago
It started as a non-profit, which is the root of all the issues.

It needs to find a way to get past that non-profit structure. Otherwise, I don't think the company will survive longer term.

The incentives are not super-aligned (pun intended). Employees want to build AI and make a lot of money at the same time, meanwhile the board (who wouldn't earn any money anyway) is willing to destroy money to further AI.

If OpenAI were a normal for-profit company, Sam and the co-founders would have been billionaires already. Now Sam seems to have several other stakes and side projects...

jimberlage · a year ago
Less powerful inventions have caused more drama than this, historically speaking.
jejeyyy77 · a year ago
because these are 2 people who are at the top of the field and can work on literally anything they want.

JimDabell · a year ago
Where are you seeing the drama? Two people have left a startup, that’s all. It happens all the time.
gaudystead · a year ago
If you think those are the only points of drama, you may not have seen news about OpenAI from a couple months ago where Sam got ousted from the board, got in bed with Microsoft, then eventually came back to OpenAI, just to name one of the bigger dramas off the top of my head. It's definitely more than just the two data points from yesterday that you mentioned.

OutOfHere · a year ago
Good riddance, considering the employees' focus on moderation. After all, most users don't like any moderation for their own personal activities, only sometimes for other people's activities.