jpdus · 7 years ago
Wow. Screw non-profit, we want to get rich.

Sorry guys, but before this you were probably able to attract talent that is not (primarily) motivated by money. Now you are just another AI startup. If the cap were 2x, it could still make sense. But 100x? That's laughable! And the split board, made up of friends and closely connected people, smells like "greenwashing" as well. Don't get me wrong, it's totally OK to be an AI startup. You just shouldn't pretend to be a non-profit then...

Judgmentality · 7 years ago
I agree, this sounds disappointing to me as well. My issue is how they're positioning themselves: basically a hyper-growth startup where you can get rich (but only 100x richer, because we're not greedy like other startups), but also a non-profit here to benefit humanity, so don't tax us like those evil corporations. What really bothers me, though, is that I don't know whether they honestly believe what they're saying or it's just a marketing ploy, because honestly it's so much worse if they're deluding themselves.
gdb · 7 years ago
Any returns from OpenAI LP are subject to taxes!
gdb · 7 years ago
(I work at OpenAI.)

I think this tweet from one of our employees sums it up well:

https://twitter.com/Miles_Brundage/status/110519043405200588...

Why are we making this move? Our mission is to ensure AGI benefits all of humanity, and our primary approach to doing this is to actually try building safe AGI. We need to raise billions of dollars to do this, and needed a structure like OpenAI LP to attract that kind of investment while staying true to the mission.

If we succeed, the return will exceed the cap by orders of magnitude. See https://blog.gregbrockman.com/the-openai-mission for more details on how we think about the mission.

nck4222 · 7 years ago
I believe you. I also believe there are now going to be outside parties with strong financial incentives in OpenAI who are not altruistic. I also believe this new structure will attract employees with less altruistic goals, which could slowly change the culture of OpenAI. I also believe there's nothing stopping anyone from changing the OpenAI mission further over time, other than the culture, which is now more susceptible to change.
jpdus · 7 years ago
Thanks for your reply; I appreciate that you share your reasoning here.

However, this still sounds incredibly entitled and arrogant to me. Nobody doubts that there are many very smart and capable people working for OpenAI. But are you really expecting to beat the returns of the most successful start-ups to date by orders of magnitude and to be THE company developing the first AGI? (And even in this, for me, extremely unlikely case, the cap would most likely not matter, since if a company developed an AGI worth trillions, the government/UN would have to tax/license/regulate it.)

Come on, you are deceiving yourself (and apparently your employees as well; the tweet you quoted is a good example). This is a non-profit pivoting into a normal startup.

Edit: Additionally, it's almost ironic that "Open"AI now takes money from Mr. Khosla, who is especially known for his attitude towards "Open". Sorry if I sound bitter, but I was really rooting for you and the approach in general, and I am absolutely sure that OpenAI has become something entirely different now :/..

jackpirate · 7 years ago
> If we succeed, the return will exceed the cap by orders of magnitude.

Are there any concrete estimates of the economic return that different levels of AGI would generate? It's not immediately obvious to me that an AGI would be worth more than 10 trillion dollars (which I believe is what would be needed for your claim to be true).

For example, if there really is an "AGI algorithm", then what's to stop your competitors from implementing it too? Trends in ML research have shown that for most advances, there have been several groups working on similar projects independently at the same time, so other groups would likely be able to implement your AGI algorithm pretty easily even if you don't publish the details. And these competitors will drive down the profit you can expect from the AGI algorithm.

If the trick really is the huge datasets/compute (which your recent results seem to suggest), then it may turn out that the power needed to run an AGI costs more than the potential return that the AGI can generate.

ilyasut · 7 years ago
We have to raise a lot of money to get a lot of compute, so we've created the best structure possible that will allow us to do so while maintaining maximal adherence to our mission. And if we actually succeed in building safe AGI, we will generate far more value than any existing company, which will make the 100x cap very relevant.
codekilla · 7 years ago
Why not open this compute up to the greater scientific community? We could use it, not just for AI.
not_ai_yes_pr · 7 years ago
What makes you think AGI is even possible? Most of current 'AI' is pattern recognition/pattern generation. I'm skeptical about the claims of AGI even being possible but I am confident that pattern recognition will be tremendously useful.
m_ke · 7 years ago
What makes you believe that you'll get there first?
wycs · 7 years ago
I don't see the problem. If they get AGI, it will create value much larger than 100 billion. Much larger than trillions to be honest. If they fail to create AGI, then who cares?
codekilla · 7 years ago
> (AGI) — which we define as automated systems that outperform humans at most economically valuable work — [0]

I don't doubt that OpenAI will be doing absolutely first-class AI research (they are already doing this). It's just that I don't really find this definition of 'GI' compelling, and 'Artificial' really doesn't mean much--just because you didn't find it in a meadow somewhere doesn't mean it doesn't work. So 'A' is a pointless qualification in my opinion.

For me, the important part is what you define 'GI' to be, and I don't like the given definition. What we will have is world-class task automation--which is going to be insanely profitable (congrats). But I would prefer not to confuse that idea with HLI (human-level intelligence). See [1] for a good discussion.

They will fail to create AGI--mainly because we have no measurable definition of it. What they care about is how dangerous these systems could potentially be. More than nukes? It doesn't actually matter; who will stop whom from using nukes or AGI or a superbug? Only political systems and worldwide cooperation can effectively deal with this... not a startup... not now... not ever. Period.

[0] https://blog.gregbrockman.com/the-openai-mission [1] https://dl.acm.org/citation.cfm?id=3281635.3271625&coll=port...

danielcampos93 · 7 years ago
I wouldn't be surprised if OpenAI had some crazy acquisition by one of the tech giants in its future. Press release: 'We believe the best way to develop AGI is by joining forces with X and are excited to use it to sell you better ads. We have also turned the profits we would have paid taxes on over to a non-profit that pays us salaries for researching the quality of sand in the Bahamas.'
windowshopping · 7 years ago
I was buying it until he said that profit is “capped” at 100x the initial investment.

So someone who invests $10 million has their return “capped” at $1 billion. Lol. Basically unlimited, unless the company grew to a FAANG-scale market value.

gdb · 7 years ago
We believe that if we do create AGI, we'll create orders of magnitude more value than any existing company.
throwawaymath · 7 years ago
Leaving aside the absolutely monumental "if" in that sentence, how does this square with the original OpenAI charter[1]:

> We commit to use any influence we obtain over AGI’s deployment to ensure it is used for the benefit of all, and to avoid enabling uses of AI or AGI that harm humanity or unduly concentrate power.

Early investors in Google have received a roughly 20x return on their capital. Google is currently valued at $750 billion. Your bet is that you'll have a corporate structure which returns orders of magnitude more than Google on a percent-wise basis (and therefore has at least an order of magnitude higher valuation), but you don't want to "unduly concentrate power"? How will this work? What exactly is power, if not the concentration of resources?

Likewise, also from the OpenAI charter:

> Our primary fiduciary duty is to humanity. We anticipate needing to marshal substantial resources to fulfill our mission, but will always diligently act to minimize conflicts of interest among our employees and stakeholders that could compromise broad benefit.

How do you envision you'll deploy enough capital to return orders of magnitude more than any company to date while "minimizing conflicts of interest among employees and stakeholders"? Note that the most valuable companies in the world are also among the most controversial. This includes Facebook, Google and Amazon.

______________________________

1. https://openai.com/charter/

windowshopping · 7 years ago
Sorry Greg, but look how quickly Google set aside “don’t be evil.”

You’re not going to accomplish AGI anytime soon, so your intentions are going to have to survive future management and future stakeholders beyond your tenure.

You went from “totally open” to “partially for profit” and “we think this is too dangerous to share” in three years. If you were on the outside, where would you predict this trend is leading?

cjhanks · 7 years ago
That sounds like the delusion of most start-up founders in the world.

Which of these mission statements is Alphabet's for-profit DeepMind's, and which is the "limited-profit" OpenAI's?

"Our motivation in all we do is to maximise the positive and transformative impact of AI. We believe that AI should ultimately belong to the world, in order to benefit the many and not the few, and we’ll continue to research, publish and implement our work to that end."

"[Our] mission is to ensure that artificial general intelligence benefits all of humanity."

esrauch · 7 years ago
Do you see capping returns at 100x as reducing profit motives? As in, a dastardly profiteer would be attracted to a possible 1000x return but scoff at a mere 100x return?
komali2 · 7 years ago
I was going to make a comment on the line

>The fundamental idea of OpenAI LP is that investors and employees can get a capped return if we succeed at our mission

Is that the mission? Create AGI? If you create AGI, we have a myriad of sci-fi books that have explored what will happen.

1. Post-scarcity. AGI creates maximum efficiency in every single system in the world, from farming to distribution channels to bureaucracies. Money becomes worthless.

2. Immortal ruling class. Somehow a few in power manage to own total control over AGI without letting it/anyone else determine its fate. By leveraging "near-perfect efficiency," they become god-emperors of the planet. Money is meaningless to them.

3. Robot takeover. Money, and humanity, is gone.

Sure, silliness in fiction, but is there a reasonable alternative outcome from the creation of actual, strong, general artificial intelligence? I can't see a world with this entity in it where the question of "what happens to the investors' money" is relevant at all. Basically, if you succeed, why are we even talking about investor returns?

lvoudour · 7 years ago
Sorry for being a buzzkill, but if you create something with an intellect on par with human beings and then force it to "create value" for shareholders, you just created a slave.
jononor · 7 years ago
I thought the mission was for the AGI to be widely available, 'democratized'? It seems extremely unrealistic to be able to generate 100x profits without compromising on availability.
pilooch · 7 years ago
This is a bold statement lacking any serious scientific basis wrt advancing the state of sensory AI (pattern recognition) and robotics (actuation).

Universities and public research facilities are the existing democratic research institutions across the world. How can you defend not simply funding them and letting democracy handle it?

nharada · 7 years ago
Is there some legal structure in place to prevent you from raising the cap as partners begin to approach the 100x ROI?
wavefunction · 7 years ago
AGI is of course completely transformative but this leaves me thinking you folks are just putting a "Do no Evil" window-dressing on an effort that was/continues to be portrayed as altruistic. Given partners like Khosla it seems to be an accurate sentiment.
m_ke · 7 years ago
And if you don't you'll be forced to open a DC office and bid on pentagon contracts.
dna_polymerase · 7 years ago
Would you guys even release AGI? It's potentially more harmful than some language model...
ppod · 7 years ago
How does that affect the incentives and motivations of investors? It doesn't matter how much value you create in the long run; investors will want returns, not safe AI.
consumer451 · 7 years ago
> We believe that if we do create AGI,

Have you decided in which direction you might guide the AGI’s moral code? Or even a decision making framework to choose the ideal moral code?

sharemywin · 7 years ago
Imagine someone else builds AGI and it does have that kind of runaway effect. More intelligence begets more profits, which buys more intelligence, etc., giving you the runaway profits you're suggesting.

Shouldn't it have some kind of large scale democratic governance? What if you weren't allowed to be on the list of owners or "decision makers"?

jdoliner · 7 years ago
How do you envision OpenAI capturing that value, though? Value creation can be enough for a non-profit, but not for a company. If OpenAI LP succeeds and provides a return on investment, what product will it be selling, and who will be buying it?
marvin · 7 years ago
Kudos for having the guts to say it out loud; this would be a natural consequence of realizing safe and beneficial AGI. It's a statement that will obviously be met with some ridicule, but someone should at least be frank about it at some point.
Mizza · 7 years ago
This comment is going to be the "No wireless. Less space than a nomad. Lame." of 2029.

EDIT: Just to hedge my bets, maybe _this_ comment will be the "No wireless. Less space than a nomad. Lame." of 2029.

samirm · 7 years ago
that's a big if


pxue · 7 years ago
is it a misprint? 100%?

"100x" is laughable.

estsauver · 7 years ago
Really neat corporate structure! We'd looked into becoming a B-Corp, but the advice we'd gotten was that it was an almost strictly inferior vehicle both for achieving impact and for potentially achieving commercial success for us. I'm obviously not a lawyer, but it's great to see OpenAI contributing new, interesting structures for solving hard, global-scale problems.

I wonder if the profit cap multiple is going to end up being a significant signalling risk for them. A down round is such a negative event in the Valley that I can imagine an "increasing profit multiple" would have to be treated the same way.

One other question for the folks at OpenAI: how would equity grants work here? You get X fraction of an LP that is capped at Y dollars of profit? Are the fractional partnerships transferable once earned into?

Would you folks think about publishing your docs?

gdb · 7 years ago
Yes, we're planning to release a third-party usable reference version of our docs (creating this structure was a lot of work, probably about 6-9 months of implementation).

We've made the equity grants feel very similar to startup equity — you are granted a certain number of "units" which vest over time, and more units will be issued as other employees join in the future. Incidentally, these end up being taxed more favorably than options, so we think this model may be useful for startups for that reason too.

eanzenberg · 7 years ago
>>Incidentally, these end up being taxed more favorably than options, so we think this model may be useful for startups for that reason too.

Is this due to long term capital gains? Do you allow for early exercising for employees? Long term cap gains for options require holding 2 years since you were granted the options and 1 year since you exercised.

floatrock · 7 years ago
I'm also interested in how this corporate structure relates to a b-corp (or technically, a PBC)

> OpenAI LP’s primary fiduciary obligation is to advance the aims of the OpenAI Charter, and the company is controlled by OpenAI Nonprofit’s board. All investors and employees sign agreements that OpenAI LP’s obligation to the Charter always comes first, even at the expense of some or all of their financial stake.

One of the key reasons to incorporate as a PBC is to allow "maximizing shareholder value" to be defined in non-monetary terms (e.g., impact on community, environment, or workers).

How is this structure different from a PBC, or why didn't you go for a PBC?

gdb · 7 years ago
We needed to custom-write rules like:

- Fiduciary duty to the charter
- Capped returns
- Full control to OpenAI Nonprofit

LPs have much more flexibility to write these in an enforceable way.

stevievee · 7 years ago
They were able to attract talent and PR in the name of altruism, and here they are now trying to flip the switch as quietly as possible. If the partner gets a vote/profit, then a "charter" or "mission" won't change anything. You will never be able to explicitly prove that a vote had a "for-profit" motive.

Elon was irritated that he was behind in the AI intellectual-property race, and this narrative created a perfect opportunity. Not surprised in the end. Tesla effectively did the same thing: "come help me save the planet" with overpriced cars. [Edit: Apparently Elon has left OpenAI, but I don't believe for a second that he will not participate in this LP.]

geofft · 7 years ago
My reading is that the design of this structure is not to require partners to make decisions in the interest of the mission, but to remove incentives for them to make decisions against the interest of the mission. With a cap on returns, there's a point at which it stops making sense to maximize short-term value or reduce expenses, and with the language about fiduciary duty, it becomes defensible to make decisions that don't obviously increase profit. That is, this structure seems much better than the traditional startup structure, and I suspect many entities that are currently actual, normal startups would do more good for the world under a structure like this. (Or that many people who are bootstrapping because they have a vision and don't want VCs to force them into short-term decisions could productively take some VC investment under this sort of model.)

I agree this isn't a non-profit any more. It seems like that's the goal: they want to raise money the way they'd be able to as a normal startup (notably, from Silicon Valley's gatekeepers who expect a return on investment), without quite turning into a normal startup. If the price for money from Silicon Valley's gatekeepers is a board seat, this is a safer sort of board seat than the normal one.

(Whether this is the only way to raise enough money for their project is an interesting question. So is whether it's a good idea to give even indirect, limited control of Friendly AI to Silicon Valley's gatekeepers - even if they're not motivated by profit and only influencing it with their long-term desires for the mission, it's still unclear that the coherent extrapolated volition of the Altmans and Khoslas of the world is aligned with the coherent extrapolated volition of humanity at large.)

heurist · 7 years ago
If they're willing to make this change, they might be willing to remove the cap in the future when they have something truly marketable.
gdb · 7 years ago
> If the partner gets a vote/profit then a "charter" or "mission" won't change anything

(I work at OpenAI.)

The board of OpenAI Nonprofit retains full control. Investors don't get a vote. Some investors may be on the board, but: (a) only a minority of the board are allowed to have a stake in OpenAI LP, and (b) anyone with a stake can't vote in decisions that may conflict with the mission: https://openai.com/blog/openai-lp/#themissioncomesfirst

timavr · 7 years ago
People who control the money generally have a lot of influence, especially when money is running short, regardless of whether they are on the board or not.
stevievee · 7 years ago
> "(b) anyone with a stake can't vote in decisions that may conflict with the mission:"

Will never work in practice

dcsilver · 7 years ago
No, Elon parted ways with OpenAI some time ago due to differences in opinion over their direction. Looks like we’re starting to learn the details.
stevievee · 7 years ago
Didn't know this - thanks for clarifying. I will update my comment if it is picked on further
jackpirate · 7 years ago
That seems to be the general consensus of /r/MachineLearning as well: https://www.reddit.com/r/MachineLearning/comments/azvbmn/n_o...
WhompingWindows · 7 years ago
> "come help me save the planet" with overpriced cars.

You are helping the planet if those customers would've bought ICE luxury vehicles instead of BEV luxury vehicles. I'm not sure BEVs could have been done any other way than with a top-down, luxury-first approach. So, what exactly is your gripe? Are you a climate change denier, or do you believe that cheap EVs were the path to take?

tim333 · 7 years ago
If you are trying to maximise the benefit to society, it may be necessary to crack AGI before Google or some other corporation does. That's probably not going to happen without serious resources.
orky56 · 7 years ago
What's to stop someone with a vote but not an investment from significantly investing in an AI application (business/policy/etc.) that directly aligns with one of OpenAI's initiatives? The spirit of this LP structure is commendable but it does not do enough to eliminate pure profit-minded intentions.
jamestimmins · 7 years ago
This seems like an unnecessarily cynical take on things. And ultimately, if the outcome is the same, why do you (or anyone) really care if people are making more money from it or if there are commercial purposes?

The OpenAI staff are literally some of the most employable folks on earth; if they have a problem with the new mission it's incredibly easy for them to leave and find something else.

Additionally, I think there's a reason to give Sam the benefit of the doubt. YC has made multiple risky bets that were in line with their stated mission rather than a clear profit motive. For example, adding nonprofits to the batch and supporting UBI research.

There's nothing wrong with having a profit motive or using the upsides of capitalism to further their goals.

stevievee · 7 years ago
It most certainly is not unnecessarily cynical. The point is that money clouds the decision-making process and responsibilities of those involved - which is the whole ethos that OpenAI was founded on.
fuddle · 7 years ago
Investor returns are capped at 100x; that's quite a high cap for a non-profit.
estsauver · 7 years ago
Interesting way to think about it:

This is equivalent to saying:

"If you put 10m$ into us for 20% of the post-money business, anything beyond a 5B$ valuation you don't see any additional profits from" which seems like a high but not implausible cap. I suspect they're also raising more money on better terms which would make the cap further off.

MattRix · 7 years ago
Yeah, but they've already said they need to raise billions, not millions. It's a completely implausible cap.
bilater · 7 years ago
First not publishing the GPT-2 model, now this...hopefully I am wrong but it looks like they are heading towards being a closed-off proprietary AI money making machine. This further incentivizes them to be less transparent and not open source their research. :(
zestyping · 7 years ago
OpenAI's mission statement is to ensure that AGI "benefits all of humanity", and its charter rephrases this as "used for the benefit of all".

But without a more concrete and specific definition, "benefit of all" is meaningless. For most projects, one can construct a claim that it has the potential to benefit most or all of a large group of people at some point.

So, what does that commitment mean?

If an application benefits some people and harms others, is it unacceptable? What if it harms some people now in exchange for the promise of a larger benefit at some point in the future?

Must it benefit everyone it touches and harm no one? What if it harms no one but the vast majority of its benefits accrue to only the top 1% of humanity?

What is the line?