Sorry guys, but before you were probably able to get talent which is not (primarily) motivated by money. Now you are just another AI startup.
If the cap were 2x, it could still make sense. But 100x? That's laughable! And the split board, made up of friends and closely connected people, smells like "greenwashing" as well.
Don't get me wrong, it's totally ok to be an AI startup. You just shouldn't pretend to be a non-profit then...
I agree, this sounds disappointing to me as well. My issue is how they're positioning themselves: basically a hyper-growth startup where you can get rich (but only 100x richer, because we're not greedy like other startups), but we're also a non-profit here to benefit humanity, so don't tax us like those evil corporations. What really bothers me, though, is that I don't know if they honestly believe what they're saying or if it's just a marketing ploy, because it's so much worse if they're deluding themselves.
I think this tweet from one of our employees sums it up well: https://twitter.com/Miles_Brundage/status/110519043405200588...
Why are we making this move? Our mission is to ensure AGI benefits all of humanity, and our primary approach to doing this is to actually try building safe AGI. We need to raise billions of dollars to do this, and needed a structure like OpenAI LP to attract that kind of investment while staying true to the mission.
I believe you. I also believe there are now going to be outside parties with strong financial incentives in OpenAI who are not altruistic. I also believe this new structure will attract employees with less altruistic goals, which could slowly change the culture of OpenAI. I also believe there's nothing stopping anyone from changing the OpenAI mission further over time, other than the culture, which is now more susceptible to change.
Thanks for your reply, and I appreciate that you share your reasoning here.
However, this still sounds incredibly entitled and arrogant to me. Nobody doubts that there are many very smart and capable people working for OpenAI. But are you really expecting to beat the returns of the most successful start-ups to date by orders of magnitude and to be THE company developing the first AGI? (And even in this, for me, extremely unlikely case, the cap would most likely not matter, since if a company developed an AGI worth trillions, the government/UN would have to tax/license/regulate it.)
Come on, you are deceiving yourself (and apparently your employees as well; the tweet you quoted is a good example). This is a non-profit pivoting into a normal startup.
Edit: Additionally, it's almost ironic that "Open"AI now takes money from Mr. Khosla, who is especially known for his attitude towards "Open".
Sorry if I sound bitter, but I was really rooting for you and the approach in general, and I am absolutely sure that OpenAI has become something entirely different now :/
If we succeed, the return will exceed the cap by orders of magnitude. See https://blog.gregbrockman.com/the-openai-mission for more details on how we think about the mission.
Are there any concrete estimates of the economic return that different levels of AGI would generate? It's not immediately obvious to me that an AGI would be worth more than 10 trillion dollars (which I believe is what would be needed for your claim to be true).
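Rough numbers behind that $10 trillion figure (the raise size below is my own assumption, not anything OpenAI has stated):

    # Back-of-the-envelope: what "exceed the cap by orders of magnitude" implies.
    raised = 1e9                           # hypothetical total investment under the cap
    cap_multiple = 100
    capped_return = raised * cap_multiple  # $100B owed to investors before the cap binds
    orders_beyond_cap = 2                  # reading "orders of magnitude" as roughly 100x
    implied_value_to_investors = capped_return * 10 ** orders_beyond_cap
    print(f"${implied_value_to_investors:,.0f}")  # $10,000,000,000,000, i.e. ~$10 trillion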
For example, if there really is an "AGI algorithm", then what's to stop your competitors from implementing it too? Trends in ML research have shown that for most advances, there have been several groups working on similar projects independently at the same time, so other groups would likely be able to implement your AGI algorithm pretty easily even if you don't publish the details. And these competitors will drive down the profit you can expect from the AGI algorithm.
If the trick really is the huge datasets/compute (which your recent results seem to suggest), then it may turn out that the power needed to run an AGI costs more than the potential return that the AGI can generate.
We have to raise a lot of money to get a lot of compute, so we've created the best structure possible that will allow us to do so while maintaining maximal adherence to our mission. And if we actually succeed in building the safe AGI, we will generate far more value than any existing company, which will make the 100x cap very relevant.
What makes you think AGI is even possible? Most of current 'AI' is pattern recognition/pattern generation. I'm skeptical about the claims of AGI even being possible but I am confident that pattern recognition will be tremendously useful.
I don't see the problem. If they get AGI, it will create value much larger than 100 billion. Much larger than trillions to be honest. If they fail to create AGI, then who cares?
> (AGI) — which we define as automated systems that outperform humans at most economically valuable work — [0]
I don't doubt that OpenAI will be doing absolute first class AI research (they are already doing this). It's just that I don't really find this definition of 'GI' compelling, and 'Artificial' really doesn't mean much--just because you didn't find it in a meadow somewhere doesn't mean it doesn't work. So 'A' is a pointless qualification in my opinion.
For me, the important part is what you define 'GI' to be, and I don't like the given definition. What we will have is world-class task automation--which is going to be insanely profitable (congrats). But I would prefer not to confuse the idea with HLI (human-level intelligence). See [1] for a good discussion.
They will fail to create AGI--mainly because we have no measurable definition of it. What they care about is how dangerous these systems could potentially be. More than nukes? It doesn't actually matter; who will stop whom from using nukes or AGI or a superbug? Only political systems and worldwide cooperation can effectively deal with this...not a startup...not now...not ever. Period.
[0] https://blog.gregbrockman.com/the-openai-mission [1] https://dl.acm.org/citation.cfm?id=3281635.3271625&coll=port...
I wouldn't be surprised if OpenAI had some crazy acquisition in its future by one of the tech giants. Press release says 'We believe the best way to develop AGI is by joining forces with X and are excited to use it to sell you better ads. We also have turned the profits we would have paid taxes on over to a non-profit that pays us salaries for researching the quality of sand in the Bahamas.'
I was buying it until he said that profit is “capped” at 100x of initial investment.
So someone who invests $10 million has their investment “capped” at $1 billion. Lol. Basically unlimited unless the company grew to a FAANG-scale market value.
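To make the parent's arithmetic concrete (illustrative figures; the 1% ownership below is my assumption, not a real term sheet):

    # The 100x cap applied to a hypothetical $10M check.
    investment = 10_000_000
    cap_multiple = 100
    capped_payout = investment * cap_multiple
    print(f"${capped_payout:,}")  # $1,000,000,000

    # Whether the cap ever binds depends on ownership. If that $10M bought, say,
    # 1% of the company (hypothetical), the stake only hits the cap once the
    # company is worth roughly $100B.
    hypothetical_ownership = 0.01
    valuation_where_cap_binds = capped_payout / hypothetical_ownership
    print(f"${valuation_where_cap_binds:,.0f}")  # $100,000,000,000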
Leaving aside the absolutely monumental "if" in that sentence, how does this square with the original OpenAI charter[1]:
> We commit to use any influence we obtain over AGI’s deployment to ensure it is used for the benefit of all, and to avoid enabling uses of AI or AGI that harm humanity or unduly concentrate power.
Early investors in Google have received a roughly 20x return on their capital. Google is currently valued at $750 billion. Your bet is that you'll have a corporate structure which returns orders of magnitude more than Google on a percent-wise basis (and therefore has at least an order of magnitude higher valuation), but you don't want to "unduly concentrate power"? How will this work? What exactly is power, if not the concentration of resources?
Likewise, also from the OpenAI charter:
> Our primary fiduciary duty is to humanity. We anticipate needing to marshal substantial resources to fulfill our mission, but will always diligently act to minimize conflicts of interest among our employees and stakeholders that could compromise broad benefit.
How do you envision you'll deploy enough capital to return orders of magnitude more than any company to date while "minimizing conflicts of interest among employees and stakeholders"? Note that the most valuable companies in the world are also among the most controversial. This includes Facebook, Google and Amazon.
1. https://openai.com/charter/
Sorry Greg, but look how quickly Google set aside “don’t be evil.”
You’re not going to accomplish AGI anytime soon, so your intentions are going to have to survive future management and future stakeholders beyond your tenure.
You went from “totally open” to “partially for profit” and “we think this is too dangerous to share” in three years. If you were on the outside, where would you predict this trend is leading?
That sounds like the delusion of most start-up founders in the world.
Which one of these mission statements is Alphabet's for-profit Deepmind and which one is the "limited-profit" OpenAI?
"Our motivation in all we do is to maximise the positive and transformative impact of AI. We believe that AI should ultimately belong to the world, in order to benefit the many and not the few, and we’ll continue to research, publish and implement our work to that end."
"[Our] mission is to ensure that artificial general intelligence benefits all of humanity."
Do you see capping at 100x returns as reducing profit motives? As in, a dastardly profiteer would be attracted to a possible 1000x investment but scoff at the mere 100x return?
>The fundamental idea of OpenAI LP is that investors and employees can get a capped return if we succeed at our mission
Is that the mission? Create AGI? If you create AGI, we have a myriad of sci-fi books that have explored what will happen.
1. Post-scarcity. AGI creates maximum efficiency in every single system in the world, from farming to distribution channels to bureaucracies. Money becomes worthless.
2. Immortal ruling class. Somehow a few in power manage to own total control over AGI without letting it/anyone else determine its fate. By leveraging "near-perfect efficiency," they become god-emperors of the planet. Money is meaningless to them.
3. Robot takeover. Money, and humanity, is gone.
Sure, silliness in fiction, but is there a reasonable alternative outcome from the creation of actual, strong general artificial intelligence? I can't see a world with this entity in it where the question of "what happens to the investors' money" is relevant at all. Basically, if you succeed, why are we even talking about investor returns?
Sorry for being a buzzkill, but if you create something with an intellect on par with human beings and then force it to "create value" for shareholders, you just created a slave.
I thought the mission was for the AGI to be widely available ('democratized')? It seems extremely unrealistic to be able to generate 100x profits without compromising on availability.
This is a bold statement lacking any serious scientific basis wrt advancing the state of sensory AI (pattern recognition) and robotics (actuation).
Universities and public research facilities are the existing democratic research institutions across the world. How can you defend not simply funding them and letting democracy handle it?
AGI is of course completely transformative but this leaves me thinking you folks are just putting a "Do no Evil" window-dressing on an effort that was/continues to be portrayed as altruistic. Given partners like Khosla it seems to be an accurate sentiment.
How does that affect the incentives and motivations of investors? It doesn't matter how much value you create in the long run, investors will want returns, not safe AI.
Imagine someone else builds AGI and it does have that kind of runaway effect. More intelligence begets more profits, which buys more intelligence, etc., giving you the runaway profits you're suggesting.
Shouldn't it have some kind of large scale democratic governance? What if you weren't allowed to be on the list of owners or "decision makers"?
How do you envision OpenAI capturing that value, though? Value creation can be enough for a non-profit, but not for a company. If OpenAI LP succeeds and provides a return on investment, what product will it be selling and who will be buying it?
Kudos for having the guts to say it out loud; this would be a natural consequence of realizing safe and beneficial AGI. It's a statement that will obviously be met with some ridicule, but someone should at least be frank about it at some point.
Really neat corporate structure! We'd looked into becoming a B-Corp, but the advice that we'd gotten was that it was an almost strictly inferior vehicle both for achieving impact and for potentially achieving commercial success for us. I'm obviously not a lawyer, but it's great to see OpenAI contributing new, interesting structures for solving hard, global-scale problems.
I wonder if the profit cap multiple is going to end up being a significant signalling risk for them. A down-round is such a negative event in the valley that I can imagine an "increasing profit multiple" would have to be treated the same way.
One other question for the folks at OpenAI: How would equity grants work here? You get X fraction of an LP that gets capped at Y dollars of profit? Are the fractional partnerships transferable if earned into? Would you folks think about publishing your docs?
Yes, we're planning to release a third-party usable reference version of our docs (creating this structure was a lot of work, probably about 6-9 months of implementation).
We've made the equity grants feel very similar to startup equity — you are granted a certain number of "units" which vest over time, and more units will be issued as other employees join in the future. Incidentally, these end up being taxed more favorably than options, so we think this model may be useful for startups for that reason too.
>>Incidentally, these end up being taxed more favorably than options, so we think this model may be useful for startups for that reason too.
Is this due to long term capital gains? Do you allow for early exercising for employees? Long term cap gains for options require holding 2 years since you were granted the options and 1 year since you exercised.
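For reference, the holding-period test described above (the qualifying-disposition rule for ISOs) can be checked mechanically; a minimal sketch with made-up dates, not tax advice:

    from datetime import date

    def qualifies_for_ltcg(grant: date, exercise: date, sale: date) -> bool:
        # Sale must come more than 2 years after the grant date AND more than
        # 1 year after the exercise date (ignores the Feb 29 edge case).
        two_years_after_grant = grant.replace(year=grant.year + 2)
        one_year_after_exercise = exercise.replace(year=exercise.year + 1)
        return sale > two_years_after_grant and sale > one_year_after_exercise

    # Hypothetical dates for illustration only.
    print(qualifies_for_ltcg(date(2019, 3, 11), date(2020, 3, 11), date(2021, 6, 1)))  # True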
I'm also interested in how this corporate structure relates to a b-corp (or technically, a PBC)
> OpenAI LP’s primary fiduciary obligation is to advance the aims of the OpenAI Charter, and the company is controlled by OpenAI Nonprofit’s board. All investors and employees sign agreements that OpenAI LP’s obligation to the Charter always comes first, even at the expense of some or all of their financial stake.
One of the key reasons to incorporate as a PBC is to allow "maximizing shareholder value" to be defined in non-monetary terms (eg impact to community, environment, or workers).
How is this structure different from a PBC, or why didn't you go for a PBC?
- Fiduciary duty to the charter
- Capped returns
- Full control to OpenAI Nonprofit
LPs have much more flexibility to write these in an enforceable way.
They were able to attract talent and PR in the name of altruism and here they are now trying to flip the switch as quietly as possible. If the partner gets a vote/profit then a "charter" or "mission" won't change anything. You will never be able to explicitly prove that a vote had a "for profit" motive.
Elon was irritated that he was behind in the AI intellectual property race and this narrative created a perfect opportunity. Not surprised in the end. Tesla effectively did the same thing - "come help me save the planet" with overpriced cars. [Edit: Apparently Elon has left OpenAI but I don't believe for a second that he will not participate in this LP]
My reading is that the design of this structure is not to require partners to make decisions in the interest of the mission, but to remove incentives for them to make decisions against the interest of the mission. With a cap on returns, there's a point at which it stops making sense to maximize short-term value or reduce expenses, and with the words about fiduciary duty, it becomes defensible to make decisions that don't obviously increase profit. That is, this structure seems much better than the traditional startup structure, and I suspect many entities that are currently actual, normal startups would do more good for the world under a structure like this. (Or that many people who are bootstrapping because they have a vision and don't want VCs to force them into short-term decisions could productively take some VC investment under this sort of model.)
I agree this isn't a non-profit any more. It seems like that's the goal: they want to raise money the way they'd be able to as a normal startup (notably, from Silicon Valley's gatekeepers who expect a return on investment), without quite turning into a normal startup. If the price for money from Silicon Valley's gatekeepers is a board seat, this is a safer sort of board seat than the normal one.
(Whether this is the only way to raise enough money for their project is an interesting question. So is whether it's a good idea to give even indirect, limited control of Friendly AI to Silicon Valley's gatekeepers - even if they're not motivated by profit and only influencing it with their long-term desires for the mission, it's still unclear that the coherent extrapolated volition of the Altmans and Khoslas of the world is aligned with the coherent extrapolated volition of humanity at large.)
> If the partner gets a vote/profit then a "charter" or "mission" won't change anything
(I work at OpenAI.)
The board of OpenAI Nonprofit retains full control. Investors don't get a vote. Some investors may be on the board, but: (a) only a minority of the board are allowed to have a stake in OpenAI LP, and (b) anyone with a stake can't vote in decisions that may conflict with the mission: https://openai.com/blog/openai-lp/#themissioncomesfirst
Will never work in practice
> "come help me save the planet" with overpriced cars.
You are helping the planet if those customers would've bought ICE luxury vehicles instead of BEV luxury vehicles. I'm not sure BEV could be done any other way but a top-down, luxury-first approach. So, what exactly is your gripe there? Are you a climate change denier or do you believe that cheap EVs were the path to take?
If you are trying to maximise the benefit to society, it may be necessary to crack AGI before Google or some other corporation does. That's probably not going to happen without serious resources.
What's to stop someone with a vote but not an investment from significantly investing in an AI application (business/policy/etc.) that directly aligns with one of OpenAI's initiatives? The spirit of this LP structure is commendable but it does not do enough to eliminate pure profit-minded intentions.
This seems like an unnecessarily cynical take on things. And ultimately, if the outcome is the same, what do you (or anyone) really care if people are making more money from it or if there are commercial purposes?
The OpenAI staff are literally some of the most employable folks on earth; if they have a problem with the new mission it's incredibly easy for them to leave and find something else.
Additionally, I think there's a reason to give Sam the benefit of the doubt. YC has made multiple risky bets that were in line with their stated mission rather than a clear profit motive. For example, adding nonprofits to the batch and supporting UBI research.
There's nothing wrong with having a profit motive or using the upsides of capitalism to further their goals.
It most certainly is not unnecessarily cynical. The point is that money clouds the decision-making process and responsibilities of those involved - which is the whole ethos that OpenAI was founded on.
"If you put 10m$ into us for 20% of the post-money business, anything beyond a 5B$ valuation you don't see any additional profits from" which seems like a high but not implausible cap. I suspect they're also raising more money on better terms which would make the cap further off.
First not publishing the GPT-2 model, now this...hopefully I am wrong but it looks like they are heading towards being a closed-off proprietary AI money making machine. This further incentivizes them to be less transparent and not open source their research. :(
OpenAI's mission statement is to ensure that AGI "benefits all of humanity", and its charter rephrases this as "used for the benefit of all".
But without a more concrete and specific definition, "benefit of all" is meaningless. For most projects, one can construct a claim that it has the potential to benefit most or all of a large group of people at some point.
So, what does that commitment mean?
If an application benefits some people and harms others, is it unacceptable?
What if it harms some people now in exchange for the promise of a larger benefit at some point in the future?
Must it benefit everyone it touches and harm no one? What if it harms no one but the vast majority of its benefits accrue to only the top 1% of humanity?
What is the line?
Have you decided in which direction you might guide the AGI’s moral code? Or even a decision making framework to choose the ideal moral code?
EDIT: Just to hedge my bets, maybe _this_ comment will be the "No wireless. Less space than a nomad. Lame." of 2029.
"100x" is laughable.