troupe · 2 years ago
If OpenAI became a non-profit with this in its charter:

“resulting technology will benefit the public and the corporation will seek to open source technology for the public benefit when applicable. The corporation is not organized for the private gain of any person”

I don't think it is going to be hard to show that they are doing something very different from what they said they were going to do.

stubish · 2 years ago
So much of the discussion here is about being a non-profit, but per your quote I think the key is open source. Here we have people investing in an open source company, and the company never opened its source. Rather than open source technology everyone could profit from, they kept everything closed and sold exclusive access. I think it is going to be hard for OpenAI to defend that behavior, and there is a huge amount of damages to be claimed for all the money investors had to spend catching up.
tracerbulletx · 2 years ago
It says "will seek to open source technology for the public benefit when applicable" they have open sourced a number of things, Whisper most notably. Nothing about that is a promise to open source everything and they just need to say it wasn't applicable for ChatGPT or DallE because of safety.
richardw · 2 years ago
I might be too generous, but my interpretation is that the ground changed so fast that they needed to shift to continue the mission given the new reality. After ChatGPT, every for-profit and its dog is going hard. Talent can join the only Mother Teresa in the middle, or compete with them while they stupidly open source everything the second they discover anything. You can’t compete with the biggest labs in the world, who have infinite GPUs, using selfless open sourcers running training on their home PCs. And you need to be in the game to have any influence over the eventual direction. I’d still bet the goal is the same, but how it’s done has changed by necessity.
HarHarVeryFunny · 2 years ago
> huge amount of damages to be claimed for all the money investors had to spend catching up

Huh? There's no secret to building these LLM-based "AI"s - they all use the same "transformer" architecture that was published by Google. You can find step-by-step YouTube tutorials on how to build one yourself if you want to.

All that OpenAI did was build a series of progressively larger transformers, trained on progressively larger training sets, and document how the capabilities expanded as you scaled them up. Anyone paying attention could have done the same at any stage if they wanted to.

The expense of recreating what OpenAI have built isn't in some architecture they've kept secret. The expense is in obtaining the training data and training the model.
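
For illustration, here is a minimal sketch of the scaled dot-product self-attention at the heart of that published transformer architecture (a toy single-head version in PyTorch; the names and shapes are mine, purely illustrative, not anything from OpenAI's code):

```python
# Toy causal self-attention, the core operation of the transformer from
# "Attention Is All You Need" (Vaswani et al., 2017). Illustrative only.
import torch
import torch.nn.functional as F

def causal_self_attention(x, w_q, w_k, w_v):
    # x: (seq_len, d_model); w_q/w_k/w_v: (d_model, d_head) projections
    q, k, v = x @ w_q, x @ w_k, x @ w_v
    scores = (q @ k.T) / (k.shape[-1] ** 0.5)         # scaled dot product
    mask = torch.triu(torch.ones_like(scores), 1).bool()
    scores = scores.masked_fill(mask, float("-inf"))  # attend left only
    return F.softmax(scores, dim=-1) @ v              # weighted sum of values

# A GPT-style model is stacks of this (multi-headed) plus feed-forward
# layers and embeddings; the rest was scale, data, and compute.
```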

orena · 2 years ago
They did publish the source code for Whisper...
gamblor956 · 2 years ago
From the Articles of Incorporation:

"The specific purpose of this corporation is to provide funding for research, development and distribution of technology related to artificial intelligence. The resulting technology will benefit the public and the corporation will seek to open source technology for the public benefit when applicable."

Based on this, it would be extremely hard to show that they are doing something very different from what they said they were going to do, namely, fund the research and development of AI technology. They state that the technology developed will benefit the public, not that it will belong to the public, except "when applicable."

It's not illegal for a non-profit to have a for-profit subsidiary earning income; many non-profits earn a substantial portion of their annual revenue from for-profit activities. The for-profit subsidiary/activity is subject to income tax. That income then goes to the non-profit parent and can be used to fund the non-profit mission...which it appears they are doing. It would only be a private benefit issue if the directors or employees of the non-profit were to receive an "excess benefit" from the non-profit (generally meaning salary, benefits, or other remuneration in excess of what is appropriate based on the market).

ethbr1 · 2 years ago
The "when applicable" is ambiguous, but IANAL.

Does it become applicable to open source when "The resulting technology will benefit the public"?

That seems the clearest read.

If so, OpenAI would have to argue that open sourcing their models wouldn't benefit the public, which... seems difficult.

They'd essentially have to argue that the public paying OpenAI to use an OpenAI-controlled model is more beneficial.

Or does "when applicable" mean something else? And if so, what? There aren't any other sentences around that indicate what else. And it's hard to argue that their models technically can't be open sourced, given other open source models.

shp0ngle · 2 years ago
They claim that this is about the end result, but in the meantime, they can license the not-yet-done AI to Microsoft.
_heimdall · 2 years ago
If that's the interpretation, it's completely open ended, and OpenAI has full rights to move the goalposts for as long as they wish by redefining "done".

Technologies are never "done" unless and until they are abandoned. Would it be reasonable for OpenAI to only open source once the product is "done" because it is obsolete or failed to meet performance metrics?

And is that open sourcing of the training algorithm, the inference engine, or the resulting model?

remram · 2 years ago
Arguably a lot of it is "done". They sell subscriptions to third parties...
jasonfarnon · 2 years ago
Their argument is that the profit from the license assists in reaching the end result. E.g. giving them compute power.

1vuio0pswjnm7 · 2 years ago
What about damages? How difficult are they to show?

In case anyone is confused, I am referring to paragraphs 126, 132 and 135. Not 127.

"126. As a direct and proximate result of Defendants breaches, Plaintiff has suffered damages in an amount that is presently unknown, but that substantially exceeds this Courts jurisdictional minimum of $35,000, and, if necessary, will be proven at trial.

127. Plaintiff also seeks and is entitled to specific performance of Defendants' contractual obligations.

132. Injustice can only be avoided through the enforcement of Defendants' repeated promises. If specific enforcement is not awarded, then Defendants must at minimum make restitution in an amount equal to Plaintiff's contributions that have been misappropriated and by the amount that the intended third-party beneficiaries of the Founding Agreement have been damaged [how??], which is an amount presently unknown, and if necessary, will be proven at trial, but that substantially exceeds this Court's jurisdictional minimum of $35,000.

135. As a direct and proximate result of Defendants' breaches of fiduciary duty, Plaintiff and the express intended third-party beneficiaries of the Founding Agreement have suffered damages in an amount that is presently unknown, but substantially exceeds this Court's jurisdictional minimum of $35,000, and if necessary, will be proven at trial."

The end result of this suit, if it is not dismissed, may be nothing more than OpenAI settling with the plaintiffs for an amount equal to the plaintiffs' investments.

According to this complaint, we are supposed to be third-party beneficiaries to the founding agreement. But who actually believes we would be compensated in any settlement? Based on these claims, the plaintiffs clearly want their money back. Of course they are willing to claim "the public" as TPBs to get their refund. Meanwhile, in real life, their concern for "the public" is dubious.

Perhaps the outcome of the SEC investigation into Altman's misrepresentations to investors, if any, may be helpful to these plaintiffs.

bwilliams18 · 2 years ago
Elon's asking for specific performance (i.e., that they uphold the agreement), not damages.
paulddraper · 2 years ago
Specific performance is a behavior, e.g. go open source, fire Altman, break it off with MS, etc.
HarHarVeryFunny · 2 years ago
> The end result of this suit, if it is not dismissed, may be nothing more than OpenAI settling with the plaintiffs for an amount equal to the plaintiffs' investments.

Musk's money kept the lights on during a time when OpenAI didn't do much more than get a computer to play Dota. If he wants the proceeds of what his money bought, then they should write him a check for $0, or ship him a garbage can full of wrappers from the tacos the developers ate during that time period.

neximo64 · 2 years ago
It's easy to show this, since the corporation itself is doing this.

The separate entity is the one going for revenue.

Jensson · 2 years ago
You can't just give away all the assets of a non-profit to a for-profit and say that the old non-profit values no longer matter.

99_00 · 2 years ago
Is the charter legally binding?

Is it unchangeable?

A single quote doesn't tell us much.

troupe · 2 years ago
Going to the IRS and saying, "This is how we plan to benefit humanity and because of that, we shouldn't have to pay income tax." and then coming back later and saying, "We decided to do the opposite of what we said." is likely to create some problems.
mminer237 · 2 years ago
Yes, the charter is legally binding as OpenAI's primary fiduciary obligation. It's akin to a normal corporation's duty to shareholders.

Such mission statements are generally modifiable as long as the new purpose is still charitable. It depends on the bylaws though.

az226 · 2 years ago
The $10M “equity” awards being vested fly in the face of the "no private gain" clause.
cpill · 2 years ago
Yeah, the lawyers will hang the whole case on those two words: "when applicable"
ant6n · 2 years ago
Rather „not organized for the private gain of any person“
btown · 2 years ago
Yep - the very existence of a widespread concern that open sourcing would be counter to AI safety, and thus not "for the public benefit," would likely make it very hard to find OpenAI in violation of that commitment. (Not a lawyer, not legal advice.)
throwitaway222 · 2 years ago
Unfortunately you can also easily show that they ARE doing these things too.

Open source. Check - they have open source software available.

Private gain of any person. Check. (Not hard to see it's a non-profit; people making private money from a non-profit is obviously excluded.) Now, to me personally, all non-profits are for-profit enterprises. The "mission" in nearly all cases isn't for the "people it serves". I've seen so many "help the elders" and "help the migrants" organizations, but the reality is that money always flows up, not to the people in need.

_heimdall · 2 years ago
I don't expect a case against OpenAI to be given the leeway to bring into question the entire notion of a nonprofit. There are long-standing laws (and case law) for nonprofit entities; it won't all get thrown out here.
FooBarBizBazz · 2 years ago
OpenAI being a nonprofit is like Anthony Levandowski’s "Way of the Future" being a 501(c)(3) religious nonprofit. All of which is lifted from Stranger in a Strange Land and L. Ron Hubbard's Scientology.

(It wouldn't be the first time someone made a nerd-cult: Aum Shinrikyo was full of physics grad students and had special mind-reading hats. Though that was unironically a cult. Whereas the others were started explicitly as grifts.)

It's like they have no shame.

Geezus_42 · 2 years ago
Say hello to "effective altruism".
Aloisius · 2 years ago
Let's say, for the sake of argument, that they violated their original charter. It still wouldn't give Musk standing to bring the suit.

The charter is not a contract with Musk. He has no more standing than you or I.

JumpCrisscross · 2 years ago
> charter is not a contract with Musk

He was defrauded. If OpenAI fails, there is a good chance Altman et al get prosecuted.

Matticus_Rex · 2 years ago
If Musk's tens of millions in donations were made in reliance on the charter and on statements made by sama, Brockman, etc., there's probably a standing argument there. Musk is very different from you or me -- he's a co-founder of the company and was very involved in its early work. I wouldn't guess that standing would be the issue they'd have trouble with (though I haven't read the complaint).
CRConrad · 2 years ago
> The charter is not a contract with Musk.

How sure are you of that? Seems to me it could at least equally validly be claimed that that is precisely what it is.

> He has no more standing than you or I.

Did you finance OpenAI when it started? I didn't.

pizzafeelsright · 2 years ago
He has........ attention.
seanhunter · 2 years ago
Companies pivot all the time. You have to do more than show they are doing something different from what they originally said they would do if you want to win an investor lawsuit.
belter · 2 years ago
Taking into account that the reported reason Elon Musk departed from the project is that he wanted OpenAI to merge with Tesla, with himself taking complete control of the project, this lawsuit smells of hypocrisy.

"The secret history of Elon Musk, Sam Altman, and OpenAI" - https://www.semafor.com/article/03/24/2023/the-secret-histor...

But that was to be expected from the guy who forced his employees to come to work during Covid, and then claimed the danger of Covid infection to avoid showing up at a Twitter acquisition deposition...

"Tesla gave workers permission to stay home rather than risk getting covid-19. Then it sent termination notices." - https://www.washingtonpost.com/technology/2020/06/25/tesla-p...

"Musk declined to attend in-person Twitter deposition, citing COVID exposure risk" - https://thehill.com/regulation/court-battles/3675282-musk-de...

ramblerman · 2 years ago
You made exactly zero arguments based on the lawsuit itself, and just went after Musk's character.

The law doesn’t work that way. It’s not as simple as people I like should win and people I don’t should lose.

The fact that you provided references and links implies you actually believe you are making a coherent case.

loceng · 2 years ago
Can you cite the specific line and source claiming the "reported reason Elon Musk departed from the project"? Feels taken out of context from what I remember reading before.

Not sure I'd trust the Washington Post to present the story accurately - that is, whether the termination notices were actually relevant to the premise presented.

Did he attend the Twitter deposition via video? Seems like a hit piece.

qwertox · 2 years ago
Whatever his reason may be (like resentment over jumping off the ship too soon and missing out, or standing up for humanity), I like what I read, in the sense that it contains all the stuff that needs to be spoken about publicly, and the court seems to be the optimal place for this.

It feels like Microsoft is misusing the partnership only to block other companies from having access to the IP. They said they don't need the partnership, that they have got all they need, so there would be no need for the partnership.

If this is the way Microsoft misuses partnerships, I don't feel good about Mistral's new partnership, even if it means unlimited computing resources for them while they still have the freedom to open source their models.

Not seeing Mistral Large as an open source model now has a bitter taste to it.

I also wonder if this lawsuit was the reason for him checking out Windows 11.

boringg · 2 years ago
I don't think he has any resentment about jumping off "too soon" as you say. He specifically abandoned ship because he didn't align with the organization anymore. I suspect this has been a long time coming given his public commentary on AI.

His goal with the OpenAI investments was to keep a close watch on the development of AI. Whether you believe the public comments or not is an entirely different matter, though I do feel like there is sincerity in Elon's AI comments.

AndrewKemendo · 2 years ago
I’d offer that Musk hasn’t shown any consistent principle motivating his behavior other than gathering power, in the face of his stated motivations.

So while he may genuinely believe what he is saying, the inherent philosophical conflicts in his consistently narcissistic actions have poisoned any other possible position to such an extent that he has lost all moral credibility.

Revealed preferences never lie

vineyardmike · 2 years ago
> Not seeing Mistral Large as an open source model now has a bitter taste to it.

A company needs a product to sell. If they give away everything, they have nothing to sell. This was surely always the plan.

(1) They can give away the model but sell an API - but they can’t serve a model as cheaply as Goog/Msft/Amzn, who have better unit economics on their cloud and better pricing on GPUs (plus custom inference chips).

(2) They can sell the model, in which case they can’t give it away for free. Unlike open source code, there probably isn’t a market for support and similar “upsells” yet.

treesciencebot · 2 years ago
> (1) They can give away the model but sell an API - but they can’t serve a model as cheaply as Goog/Msft/Amzn, who have better unit economics on their cloud and better pricing on GPUs (plus custom inference chips).

Which has a simple solution: release the model weights under a license which doesn't let anyone commercially host them without your permission (something AGPL-ish). That is what Stability.ai does.

rrdharan · 2 years ago
> Unlike open source code, there probably isn’t a market for support and similar “upsells” yet.

_Like_ most open source code, there isn’t a market for support and upsells.

bamboozled · 2 years ago
See the Linux Foundation; they don’t seem to have this problem.

asciii · 2 years ago
Elon kept funding the non-profit even though they had begun talks/investor discussions about the for-profit. I think an extra $3 million?

Either way, I'm guessing he did not think the for-profit side would turn into a money printer.

loceng · 2 years ago
He knew the discussion was in regard to the for-profit? Companies like Microsoft are probably constantly being courted to fund non-profit orgs.

xcv123 · 2 years ago
> Whatever his reason may be

The reason is that he was ruthlessly scammed by the sociopath CEO Sam Altman.

"Mr. Musk founded and funded OpenAI, Inc. with Mr. Altman and Mr. Brockman in exchange for and relying on the Founding Agreement to ensure that AGI would benefit humanity, not for-profit corporations. As events turned out in 2023, his contributions to OpenAI, Inc. have been twisted to benefit the Defendants and the biggest company in the world. This was a stark betrayal of the Founding Agreement, turning that Agreement on its head and perverting OpenAI, Inc.’s mission. Imagine donating to a non-profit whose asserted mission is to protect the Amazon rainforest, but then the non-profit creates a for-profit Amazonian logging company that uses the fruits of the donations to clear the rainforest. That is the story of OpenAI, Inc."

"Plaintiff reasonably relied on Defendants’ false promises to his detriment, ultimately providing tens of millions of dollars of funding to OpenAI, Inc., as well as his time and other resources, on the condition that OpenAI would remain a non-profit irrevocably dedicated to creating safe, open-source AGI for public benefit, only to then have OpenAI abandon its “irrevocable” non- profit mission, stop providing basic information to the public, and instead exclusively dedicate and license its AGI algorithms to the largest for-profit company in the world, precisely the opposite of the promises Defendants made to Plaintiff."

buzzin__ · 2 years ago
It's weird that you would say Altman is a sociopath without also mentioning that Musk is one as well. Musk is also a narcissist and you can't be one without also being a sociopath.

Are you perhaps a member of the Musk cult of personality?

Just trying to create informational balance.

neom · 2 years ago
While researching OpenAI use of unique corporate governance and structures, I found these interesting resources:

OpenAI’s Hybrid Governance: Overcoming AI Corporate Challenges. - https://aminiconant.com/openais-hybrid-governance-overcoming...

Nonprofit Law Prof Blog | The OpenAI Corporate Structure - https://lawprofessors.typepad.com/nonprofit/2024/01/the-open...

AI is Testing the Limits of Corporate Governance (research paper)- https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4693045

OpenAI and the Value of Governance - https://www.glasslewis.com/openai-and-the-value-of-governanc...

light_triad · 2 years ago
Great resources - thanks for sharing!

You can also access their 990 form: https://projects.propublica.org/nonprofits/organizations/810...

The critical issue for OpenAI is that, structurally, the cost of collecting data and training models is huge, and it makes the previous wave of software + physical business models (e.g. Uber, Airbnb) look cheap to operate in comparison. That makes OAI more reliant on cloud providers for compute. Also, their moat & network effect depend on a more indirect supply of user-generated content. Perhaps there's an advantage to using IP to train on as a non-profit, as some of the articles above argue.

gregwebs · 2 years ago
This suit claims breach of the "Founding Agreement". However, there is no actual Founding Agreement; there are email communications claimed to be part of a "Founding Agreement". IANAL, but I would suspect that these emails don't matter for much now that there are Articles of Incorporation. Those articles are mentioned, but the "Founding Agreement" implied by the emails is mentioned more. The suit also seems alarmist in stating that GPT-4 is AGI.

It seems like Elon could win a suit to the extent that he could get all of his donations back, based on the emails soliciting donations for a purpose that was then changed.

But Elon's goal in this suit is clearly to bring back the "Open" in "OpenAI" - share more information about GPT-4 and newer models and eliminate the Microsoft exclusive licensing. Whether this would happen based on a suit like this seems like it would come down to an interpretation of the Articles of Incorporation.

codexb · 2 years ago
Page 37 of the lawsuit has the certificate of incorporation. It says precisely what Musk claims it says. That’s the founding document he’s referencing.
jcranmer · 2 years ago
> It says precisely what Musk claims it says.

Almost. Musk's copy of the text uses an ellipsis that elides some language rather detrimental to his claims:

> In furtherance of its purposes, the corporation shall engage in any lawful act or activity for which nonprofit corporations may be organized under the General Corporation Law of Delaware.

waterheater · 2 years ago
It likely depends on what constitutes a valid contract in this jurisdiction. For example, some states recognize a "handshake agreement" as a legally-binding contract, and you can be taken to court for violating that agreement. I'm certain people have been found liable because they replied to an email one way but acted in the opposite manner.

The Articles of Incorporation are going to be the key legal document. Still, the Founding Agreement is important to demonstrate the original intentions and motivations of the parties. That builds the foundation for the case that something definitively caused Altman to steer the company in a different direction. I don't believe it's unfair to say Altman is steering; it seems like the Altman firing was a strategy to draw out the anti-Microsoft board members, who, once identified, were easily removed once Altman was reinstated. If Altman wasn't steering, then there's no reason he would have been rehired after he was fired.

dragonwriter · 2 years ago
> For example, some states recognize a "handshake agreement" as a legally-binding contract

Subject to limits on specific kinds of contracts that must be reduced to writing, all US jurisdictions (not just some states) recognize oral contracts provided that the basic requirements of a contract (offer, acceptance, consideration, etc.) are present.

neom · 2 years ago
These dudes[1] had a lawyer on today who talked about it; it's actually pretty interesting. This is what they are arguing with: https://content.next.westlaw.com/practical-law/document/I77e...

[1] https://www.youtube.com/watch?v=0hWZJg_nda4

HarHarVeryFunny · 2 years ago
The whole suit reads like amateur hour - sounds like some teenager whining about how he's been wronged. It reminds me of Hans Niemann's chess suit.
1vuio0pswjnm7 · 2 years ago
"In March 2023, OpenAI released its most powerful language model yet, GPT-4. GPT-4 is not just capable of reasoning. It is better at reasoning than average humans. It scored in the 90th percentile on the Uniform Bar Exam for lawyers. It scored in the 99th percentile on the GRE Verbal Assessment. It even scored a 77% on the Advanced Sommelier examination."

One could argue a common characteristic of the above exams is that they each test memory, and, as such, one could argue that GPT-4's above-average performance is not necessarily evidence of "reasoning". That is, GPT-4 has no "understanding" but it has formidable reading speed and retention (memory).

While preparation for the above exams depends heavily on memorisation, other exams may focus more on reasoning and understanding.

Surely GPT-4 would fail some exams. But when it comes to GPT-4's exam performance, only the positive results are reported.

https://freeman.vc/notes/reasoning-vs-memorization-in-llms

bastawhiz · 2 years ago
> Surely GPT-4 would fail some exams. But when it comes to GPT-4's exam performance, only the positive results are reported.

The default is failing the exams. I'd be no less impressed if they came right out and said "This is a short list of the only exams it passes" simply because (IMO) it's remarkable that a machine could pass any of those exams in the first place. Just a couple years ago, it would have been outlandish for a machine to even have a double digit score (at best!).

If we've already found ourselves in a position where passing grades on some exams that qualify people for their careers is unremarkable, I'll honestly be a bit disappointed. 99th percentile on the GRE Verbal would make an NLP researcher from 2010 have a damn aneurysm; if we're now saying that's "not reasoning" then we're surely moving the goalposts for what that means.

gigglesupstairs · 2 years ago
> One could argue a common characteristic of the above exams is that they each test memory, and, as such, one could argue that GPT-4's above-average performance is not necessarily evidence of "reasoning". That is, GPT-4 has no "understanding" but it has formidable reading speed and retention (memory).

I don’t think they will make this argument, since it would heavily undercut their (expected) argument that they’re not open-sourcing the model because of safety concerns.

knotthebest · 2 years ago
I’m not so sure of this. The fear generally isn’t that the model itself is dangerous, more so that how it is used by someone might be.
1vuio0pswjnm7 · 2 years ago
None of these exams are the basis for professional certification: passing them does not _on its own_ qualify anyone for any particular profession.

The Advanced Sommelier exam is part of a process that involves taking other exams and courses. The GRE Verbal is usually taken in combination with other GRE parts and used to apply for entry into a program where years of further study may be required. The UBE normally follows years of study in an approved program. (Back in the day, some people used to take the MBE, which is now a part of the UBE, while they were still in school because the exam was so easy: it was a standardised, multiple-choice test.)

The plaintiffs must make the argument that GPT-4 is "AGI" because the alleged agreement to form OpenAI was focused on "AGI" specifically, not simply development and improvement of LLMs. If OpenAI has not yet achieved AGI, then arguably the alleged agreement does not restrict whatever it is doing now. It only applies to "AGI".

romwell · 2 years ago
>Surely GPT-4 would fail some exams

Some? It does hilariously badly on basic math.

With confidence, though.

ethbr1 · 2 years ago
> hilariously badly on basic math. With confidence, though

How does it do on the GMAT? Sounds like a good candidate for an MBA program.

MacsHeadroom · 2 years ago
GPT-4 with code interpreter is better at math than elite Math undergrads.
shawabawa3 · 2 years ago
Have you tried GPT recently on maths? Since they trained it to write code for maths questions, it's got a lot better.
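
The pattern, roughly: instead of grinding out arithmetic token by token, the model emits a small program and a sandbox runs it. A hypothetical sketch of that tool loop in Python (ask_model is a stand-in for whatever LLM API you use, not OpenAI's actual implementation):

```python
# Hypothetical sketch of the "write code for math" pattern: the model
# generates Python, a subprocess executes it, and the printed output
# becomes the answer. ask_model() is a stand-in, not a real API.
import subprocess
import sys

def ask_model(prompt: str) -> str:
    raise NotImplementedError("replace with a call to your LLM API")

def solve_math(question: str) -> str:
    code = ask_model(f"Write Python that prints the answer to: {question}")
    # A subprocess gives minimal isolation; real code interpreters
    # run in far stricter sandboxes.
    result = subprocess.run([sys.executable, "-c", code],
                            capture_output=True, text=True, timeout=10)
    return result.stdout.strip()
```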
reso · 2 years ago
It's clear that OpenAI has become something that it wasn't intended to be at its founding. Maybe that change happened for good reasons, but the fact that there was a change is not in doubt.
TaylorAlexander · 2 years ago
Intention is an interesting word. I wonder how many of the founders quietly hoped it would make them a lot of money. Though to be fair, I do believe that hope would have been tied to the expectation that they meet their stated goals of developing some form of AGI.
l33tman · 2 years ago
It seems a bit weird to quietly hope that the money you put into an organization with the explicit goal of being a non-profit would give you direct monetary returns, though. Maybe they hoped for returns in other ways, like getting some back-channel AGI love when it finally became conscious? :)
paulddraper · 2 years ago
Well of course they'll make a lot of money. For some definition of "a lot."
amou234 · 2 years ago
Elon Musk: actual billionaire

Sam Altman: fake billionaire (most equity is tied to OpenAI)

this should be a one-sided battle

andruby · 2 years ago
Most of Elon’s wealth is also tied up in equity.
e_i_pi_2 · 2 years ago
This type of thing makes me wish the only option was public defenders, so you aren't able to just pay more and have better chances in court. That said, I still don't think Musk has a good chance here; he's lost cases against people with far fewer resources by being confidently wrong. At some point, paying more for lawyers doesn't help you.
speedylight · 2 years ago
I thought Sam has no equity in OpenAI?
HarHarVeryFunny · 2 years ago
At least half of Altman's net worth is invested in Helion Energy (a fusion energy startup).
akerl_ · 2 years ago
Generally speaking, changing what your company does is just “pivoting”. It’s not clear to me why Elon would having standing for this suit, or why a company changing their direction would be actionable.

This would be like suing Google for removing “Don’t be evil” from their mission statement.

mdasen · 2 years ago
I think non-profits change the argument here a bit. With a for-profit company, what your company is doing is trying to make money. If you change that, investors have a right to sue. With a non-profit, what the company is doing is some public service mission. Why does Musk have standing? Potentially because he donated millions to OpenAI to further their non-profit mission.

I'm not saying that Musk has a good case. I haven't read the complaint.

Still, with a non-profit, you're donating to a certain cause. If I create "Save the Climate" as a non-profit and then pivot to creating educational videos on the necessity of fossil fuels, I think it'd be reasonable to sue since we aren't performing our mission. There's certainly some latitude that management and the board should enjoy in pivoting the mission, but it isn't completely free to do whatever it wants.

Even with a for-profit company, if management or the board pivot in a way that investors think would be disastrous for the company, there could be reason to sue. Google removing "don't be evil" is a meaningless change - it changes nothing. Google deciding that it was going to shut down all of its technology properties in favor of becoming a package delivery company would be a massive change and investors could sue that it wasn't the right direction for the company and that Google was ignoring their duty to shareholders.

Companies can change direction, but they also have duties. For-profit companies are entrusted with your investment toward a goal of earning money. Non-profit companies are entrusted with your donations toward a goal of some public good. If they're breaching their duty, a lawsuit is reasonable. I'm not saying OpenAI is breaching their duty, just that they aren't free to do anything they want.

lukan · 2 years ago
There is a great difference between a for-profit company "pivoting" and a nonprofit changing the direction of its mission, because a nonprofit accepts donations, and those are bound to the original mission. Usually its profits are too. Google was never a nonprofit, so adding and later removing "don't be evil" was basically just PR (even though I do believe that originally it was supposed to mean something, just not in a legally binding way).
_fizz_buzz_ · 2 years ago
If they had started selling jelly beans, I would agree with you. But they changed from a non-profit to a for-profit model, and from an open source to a closed source model. If they had pivoted their product, that would be one thing, but they completely shifted their mission.
advael · 2 years ago
I find myself in the weird position of still thinking Musk is upset about this for pettier reasons than he alleges, but still being super glad he's bringing this suit. OpenAI has clearly sold out in a big way to one of the most dangerous and irresponsible companies on the planet, and someone with pockets this deep needed to bring this suit for there to be any chance of accountability, given the scale of the organization.
oglop · 2 years ago
“OpenAI has clearly sold out in a big way to one of the most dangerous and irresponsible companies on the planet”

Find a couch and lay down before the vapors get too strong.

advael · 2 years ago
Listen, I know that having an opinion and using superlatives when describing something makes me intrinsically uncool for breaking the all-encompassing kayfabe of total glibness required to be one of the Very Smart People on the Internet, but I think it's a warranted distinction for a company that has consistently been behind the lion's share of both legal and technological innovations that have pushed our world toward dystopia and catastrophe in the last 30+ years.

They have repeatedly been shown to engage in anti-competitive and customer-hostile behavior, often inventing tactics adopted by other tech monopolies after Microsoft proved you can get away with them. Their lawyers both drafted the policies of the DMCA and put considerable pressure on a staggering number of nations to adopt similar laws. TPMs are their innovation as well. Their explicit ethos and business model is about maximizing the degree to which intellectual property law stifles innovation from competitors, and their founder has extended this model into connections made doing ostensibly charitable work - notably acting to prevent at least one major vaccine from being open-sourced and made publicly available during a global pandemic, a decision which not only likely killed millions of people directly, but also likely enabled the current state of affairs, where the virus can constantly mutate in a large swath of the world's population that can't produce vaccines quickly because they are legally barred from doing so.

But even a commitment to a strong concept of IP isn't an obstacle when new fuckery can be done. In the new wave of generative AI, Microsoft continues to innovate. Even without counting anything done by OpenAI, they probably win "shadiest data grab to train an AI" for their acquisition of GitHub and subsequent indiscriminate use of private repos to train models that will then regurgitate snippets of code (again, this from a company that is very litigious about the IP rights of its own code). They also used plenty of code open-sourced under licenses that explicitly prohibit commercial usage, or that require code built from it to be open-sourced in turn, to train models that are themselves sold as a commercial product without making their source (let alone weights or datasets) available - models that will also regurgitate code from those repos without replicating the licenses, essentially laundering any arbitrary violation of those licenses. (After all, Copilot might have suggested that snippet of code with the developer using it never knowing it was from a GPL-licensed codebase.) So to summarize: after building an entire business on code as IP, and spending a ton on everything from press to litigation to lobbying to strengthen the inviolability of that IP, they then created the world's most effective tool for ignoring IP law for proprietary corporate code and open-source code alike, in order to sell that capability as a service.

I fully stand by calling Microsoft one of the most dangerous and irresponsible companies currently active on this planet. Perhaps you've got a better argument against this claim than an oblique comparison to sexist depictions of housewives in old movies. Feel free to share it if you like.

atoav · 2 years ago
Anybody can be a useful idiot, why not Elon Musk?

Although I share your evaluation that he is likely in it for petty reasons.

advael · 2 years ago
Never hurts to be useful, right?

HarHarVeryFunny · 2 years ago
Any competent lawyer is going to get Musk on the stand reiterating his opinions about the danger of AI. If the tech really is dangerous then being more closed arguably is in the public's best interest, and this is certainly the reason OpenAI have previously given.

Not saying I agree that being closed source is in the public good, although one could certainly argue that accelerating the efforts of bad actors to catch up would not be a positive.

nicce · 2 years ago
> If the tech really is dangerous then being more closed arguably is in the public's best interest, and this is certainly the reason OpenAI have previously given.

Not really. It slows things down, like security through obscurity. It needs to be open so that we know the real risks and have the best information to combat them. Otherwise, someone who does the same thing in a closed manner has a better chance of gaining an advantage when misusing it.

patcon · 2 years ago
When I try to port your logic over to nuclear capability, it doesn't hold very well.

Nuclear capacity is constrained, and those constraining it attempt to do so for reasons of public good (energy, warfare, peace). You could argue about effectiveness, but our failure to self-annihilate so far seems a positive testament to the strategy.

Transparency does not serve us when mitigating certain forms of danger. I'm trying to remain humble about this, but it's not clear to me what balance of benefit and danger current AI presents. (Not even considering the possibility of AGI, which is beyond the scope of my comment.)

FeepingCreature · 2 years ago
This only holds if defense outscales attack. It seems very likely that attack outscales defense to me with LLMs.
robbrown451 · 2 years ago
I don't see how opening it makes it safer. It's very different from security, where some "white hat" can find a vulnerability, which can then be fixed so instances don't get hacked. Sure, a bad person could run the software without fixing the bug, but that isn't going to harm anyone but themselves.

That isn't the case here. If some well-meaning person discovers a way to create a pandemic-causing superbug, they can't just "fix" the AI to make that impossible. Not if it is open source. Very different thing.

tw04 · 2 years ago
Just like nuclear weapons?

The whole “security through obscurity doesn’t work” is absolute nonsense. It absolutely works and there are countless real world examples. What doesn’t work is relying on that as your ONLY security.

geor9e · 2 years ago
You don't even need to call him to the stand; it's not some gotcha - he writes it all over the complaint itself. "AGI poses a grave threat to humanity — perhaps the greatest existential threat we face today." I highly doubt a court is going to opine on open vs. closed being safer, though. The founding agreement is pretty clear that the intention was to make it open for the purpose of safety. Courts rule on whether a contract was breached, not on whether breaching it was a philosophically good thing.
andy_ppp · 2 years ago
You’re forgetting that any good lawyer would do whatever some random on Hacker News made up to support their belief that the lawsuit is about AI safety.
starbugs · 2 years ago
> If the tech really is dangerous then being more closed arguably is in the public's best interest

If that were true, then they shouldn't have started off like that to begin with. You can't have it both ways. Either you are pursuing your goal to be open (as the name implies), or the way you set yourself up was ill-suited all along.

HarHarVeryFunny · 2 years ago
Their position evolved. Many people at the time disagreed that having open source AGI - putting it in the hands of many people - was the best way to mitigate the potential danger. Note that this original stance of OpenAI came before they started playing with transformers and had anything that was beginning to look like AI/AGI. Around the time of GPT-3 was when they said "this might be dangerous, we're going to hold it back".

There's nothing wrong with changing your opinion based on fresh information.

brookst · 2 years ago
…unless you believe that the world can change and people’s opinions and decisions should change based on changing contexts and evolving understandings.

When I was young I proudly insisted that all I ever wanted to eat was pizza. I am very glad that 1) I was allowed to evolve out of that desire, and 2) I am not constantly harangued as a hypocrite when I enjoy a nice salad.

awb · 2 years ago
The document says they will open source “when applicable”. If open sourcing wouldn’t benefit the public, then they aren’t obligated to do it.

That gives a lot of leeway for honest or dishonest intent.

troyvit · 2 years ago
> If the tech really is dangerous then being more closed arguably is in the public's best interest, and this is certainly the reason OpenAI have previously given.

Then what should we do about all the open models that are closing in on OpenAI's capabilities?

bamboozled · 2 years ago
I have absolutely zero reason to believe "AI" is better off in the hands of Altman, Brockman, etc., and most of the public would feel similarly.
paulddraper · 2 years ago
Who would most of the public trust with AI?
Nevermark · 2 years ago
Other groups are going to discover the same problems. Some will act responsibly. Some will try to, but the profit motive will undermine their best intentions.

This is exactly the problem having an open non-profit leader was designed to solve.

Six-month moratoriums to vet and mitigate dangers, with outside experts included, would probably be a good idea.

But people need to know what they are up against. What can AI do? How do we adapt?

We don't need more secretive data gathering, psychology hacking, manipulative corporations, billionaires (or trillionaires), harnessing unknown compounding AI capabilities to endlessly mine society for 40% year on year gains. Social networks, largely engaged in winning zero/negative sum games, are already causing great harm.

That would compound all the dangers many times over.

psychoslave · 2 years ago
>If the tech really is dangerous then being more closed arguably is in the public's best interest, and this is certainly the reason OpenAI have previously given.

Tell me about any technology you think isn't dangerous, and I'll give you fifty ways to kill someone with it.

Plastic bags, for example, are not only potentially dangerous, they make a significant contribution to the current mass extinction of biodiversity.

lukan · 2 years ago
"Plastic bag for example, are not only potentially dangerous, they make a significant contribution to the current mass extinction of biodiversity."

That is news to me - how exactly do they significantly contribute?

ryukoposting · 2 years ago
> If the tech really is dangerous then being more closed arguably is in the public's best interest, and this is certainly the reason OpenAI have previously given.

I contend that a threat must be understood before it can be neutralized. Getting that understanding will take either a herculean feat of reverse-engineering or an act of benevolence on OpenAI's part. Or a lawsuit, I guess.

seydor · 2 years ago
Then the foundational document of OpenAI is self-contradictory.
HarHarVeryFunny · 2 years ago
Perhaps, but who knew? Nobody at that time knew how to build AGI, or what it might therefore look like. I'm sure people would have laughed at you if you had said "predict the next word" was the path to AGI. The transformer paper that kicked off the LLM revolution would not be written for another couple of years. DeepMind was still focusing on games, with AlphaGo also still a couple of years away.

OpenAI's founding charter was basically "we'll protect you from an all-powerful Google, and give you the world's most valuable technology for free".

andy_ppp · 2 years ago
Are you a lawyer, or do you have some sort of credentials to be able to make that statement? I'm not sure Elon Musk being a hypocrite about AI safety would be relevant to the disputed terms of a contract.
HarHarVeryFunny · 2 years ago
I don't think it's about him being a hypocrite - just about him undermining his own argument. It's a tough sell saying AI is unsafe but that it's still in the public's best interest to open source it (and hence that OpenAI is reneging on its charter).