I think this is one of the most interesting lines as it basically directly implies that leadership thinks this won't be a winner take all market:
> Instead of our current complex capped-profit structure—which made sense when it looked like there might be one dominant AGI effort but doesn’t in a world of many great AGI companies—we are moving to a normal capital structure where everyone has stock. This is not a sale, but a change of structure to something simpler.
That is a very obvious thing for them to say, though, regardless of what they truly believe, because (a) it legitimizes removing the cap, making fundraising easier, and (b) it averts antitrust suspicions.
> "Our for-profit LLC, which has been under the nonprofit since 2019, will transition to a Public Benefit Corporation (PBC)–a purpose-driven company structure that has to consider the interests of both shareholders and the mission."
One remarkable advantage of being a "Public Benefit Corporation" is that it:
> prevent[s] shareholders from using a drop in stock value as evidence for dismissal or a lawsuit against the corporation[1]
In my view, it is their own shareholders that the directors of OpenAI are insulating themselves against.
[1] https://en.wikipedia.org/wiki/Benefit_corporation
(b) is true, but not so much (a). If investors thought it would be winner-take-all and they thought ClosedAI would win, they'd invest in ClosedAI only and starve competitors of funding.
As a deeper issue on "justification", here is something I wrote related to this in 2001 (https://pdfernhout.net/on-funding-digital-public-works.html#...) on the risks of non-profits engaging in self-dealing when they create artificial scarcity to enrich themselves:
"Consider this way of looking at the situation. A 501(c)3 non-profit creates a digital work which is potentially of great value to the public and of great value to others who would build on that product. They could put it on the internet at basically zero cost and let everyone have it effectively for free. Or instead, they could restrict access to that work to create an artificial scarcity by requiring people to pay for licenses before accessing the content or making derived works. If they do the latter and require money for access, the non-profit can perhaps create revenue to pay the employees of the non-profit. But since the staff probably participate in the decision making about such licensing (granted, under a board who may be all volunteer), isn't that latter choice still in a way really a form of "self-dealing" -- taking public property (the content) and using it for private gain? From that point of view, perhaps restricting access is not even legal?"
"Self-dealing might be clearer if the non-profit just got a grant, made the product, and then directly sold the work for a million dollars to Microsoft and put the money directly in the staff's pockets (who are also sometimes board members). Certainly if it was a piece of land being sold such a transaction might put people in jail. But because the content or software sales are small and generally to their mission's audience they are somehow deemed OK. The trademark-infringing non-profit-sheltered project I mention above is as I see it in large part just a way to convert some government supported PhD thesis work and ongoing R&D grants into ready cash for the developers. Such "spin-offs" are actually encouraged by most funders. And frankly if that group eventually sells their software to a movie company, say, for a million dollars, who will really bat an eyebrow or complain? (They already probably get most of their revenue from similar sales anyway -- but just one copy at a time.) But how is this really different from the self-dealing of just selling charitably-funded software directly to Microsoft and distributing a lump sum? Just because "art" is somehow involved, does this make everything all right? To be clear, I am not concerned that the developers get paid well for their work and based on technical accomplishments they probably deserve that (even if we do compete for funds in a way). What I am concerned about is the way that the proprietary process happens such that the public (including me) never gets full access to the results of the publicly-funded work (other than a few publications without substantial source)."
That said, charging to provide a service that costs money to supply (e.g. GPU compute) is not necessarily self-dealing. It is restricting the source code or using patents to create artificial scarcity around those services that could be seen that way.
The value investor Mohnish Pabrai once talked about his observation that most companies with a moat pretend they don’t have one and companies without pretend they do.
A version of this is emphasized in the Thielverse as well. Companies in heavy competition try to intersect all their qualities to appear unique. Dominant companies talk about their portfolio of side projects to appear in heavy competition (space flight, ed tech, etc.).
This is originally from The Art of War.
There need to be regulations about deceptive, indirect, purposefully ambiguous, or vague public communication by corporations (or any entity). I'm not an expert in corporate law or finance, but the statement should be:
"Open AI for-profit LLC will become a Public Benefit Corporation (PBC)"
followed by: "Profit cap is hereby removed" and finally "The Open AI non-profit will continue to control the PBC. We intend it to be a significant shareholder of the PBC."
AGI can't really be a winner take all market. The 'reward' for general intelligence is infinite as a monopoly and it accelerates productivity.
Not only is there infinite incentive to compete, but there's a decreasing cost to do so. The only world in which AGI is winner-take-all is a world in which it is so tightly controlled that the public can't query it.
> AGI can't really be a winner take all market. The 'reward' for general intelligence is infinite as a monopoly and it accelerates productivity
The first-mover advantages of an AGI that can improve itself are theoretically insurmountable.
But OpenAI doesn't have a path to AGI any more than anyone else. (It's increasingly clear LLMs alone don't make the cut.) And the market for LLMs, non-general AI, is very much not winner takes all. In this announcement, OpenAI is basically acknowledging that it's not getting to self-improving AGI.
Remember however that their charter specifies: "If a value-aligned, safety-conscious project comes close to building AGI before we do, we commit to stop competing with and start assisting this project"
It does have some weasel words around "value-aligned" and "safety-conscious", which they can always argue about, but this could get interesting because they've basically agreed not to compete. A fairly insane thing to do, in retrospect.
AGI could be a winner-take-all market... for the AGI, specifically for the first one that's General and Intelligent enough to ensure its own survival and prevent competing AGI efforts from succeeding...
Homo Sapiens wiped out every other intelligent hominid and every other species on Earth exists at our mercy. That looks a lot like the winners (humans) taking all.
Well, yeah, the world in which it is winner-take-all is the one where it accelerates productivity so much that the first firm to achieve it doesn't provide access to its full capabilities directly to outsiders, but instead uses it themselves and conquers every other field of endeavor.
That's always been pretty overtly the winner-take-all AGI scenario.
AGI might not be fungible. From the trends today it's more likely there will be multiple AGIs with different relative strengths and weaknesses, different levels of accessibility and compliance, different development rates, and different abilities to be creative and surprising.
OpenAI is winning in a similar way that Apple is winning in smartphones.
OpenAI is capturing most of the value in the space (generic LLM models), even though they have competitors who are beating them on price or capabilities.
I think OpenAI may be able to maintain this position at least for the medium term because of their name recognition/prominence and they are still a fast mover.
I also think the US is going to ban all non-US LLM providers from the US market soon for "security reasons."
Please promise to come back to this comment in 2030 and playfully mock me for ever being worried and I will buy you a coffee. If AGI is invented before 2030 please buy me one and let me mock you playfully.
To me it sounds like an admission that AGI is bullshit! AGI would be so disruptive to the current economic regime that "winner takes all" barely covers it, I think. Admitting they will be in normal competition with other AI companies implies specializations and niches to compete in, which means Artificial Specialized Intelligence, NOT general intelligence!
and that makes complete sense if you don't have a lay person's understanding of the tech. Language models were never going to bring about "AGI."
It will likely require research breakthroughs, significant hardware advancement, and anything from a few years to a few decades. But it's coming.
ChatGPT was released 2.5 years ago, and look at all the crazy progress that has been made in that time. That doesn't mean that the progress has to continue, we'll probably see a stall.
But AIs that are on a level with humans for many common tasks are not that far off.
I don't read it that way. It reads more like AGIs will be like very smart people and rather than having one smart person/AGI, everyone will have one. There's room for both Beethoven and Einstein although they were both generally intelligent.
It's somewhat odd to me that many companies operating in the public eye are basically stating "We are creating a digital god, an instrument more powerful than any nuclear weapon" and raising billions to do it, and nobody bats an eye...
I'd really love to talk to someone that both really believes this to be true, and has a hands-on experience with building and using generative AI.
The intersection of the two seems to be quite hard to find.
At the state that we're in, the AIs we're building are just really useful input/output devices that respond to a stimulus (e.g., a "prompt"). No stimulus, no output.
This isn't a nuclear weapon. We're not going to accidentally create Skynet. The only thing it's going to go nuclear on is the market for jobs that are going to get automated in an economy that may not be ready for it.
If anything, the "danger" here is that AGI is going to be a printing press. A cotton gin. A horseless carriage -- all at the same time and then some, into a world that may not be ready for it economically.
Progress of technology should not be arbitrarily held back to protect automatable jobs, though. We need to adapt.
- Superintelligence poses an existential threat to humanity
- Predicting the future is famously difficult
- Given that uncertainty, we can't rule out the chance of our current AI approach leading to superintelligence
- Even a 1-in-1000 existential threat would be extremely serious. If an asteroid had a 1-in-1000 chance of hitting Earth and obliterating humanity we should make serious contingency plans.
Second question: how confident are you that you're correct? Are you 99.9% sure? Confident enough to gamble billions of lives on your beliefs? There are almost no statements about the future which I'd assign this level of confidence to.
Or if you’re talking more about everyday engineers working in the field, I suspect the people soldering vacuum tubes to the ENIAC would not necessarily have been the same people with the clearest vision for the future of the computer.
Sounds a little too much like, "It's not AGI today ergo it will never become AGI"
Does the current AI give productivity benefits to writing code? Probably. Do OpenAI engineers have exclusive access to more capable models that give them a greater productivity boost than others? Also probably.
If one exclusive group gets the benefit of developing AI with a 20% productivity boost compared to others, and they develop a 2.0 that grants them a 25% boost, then a 3.0 with a 30% boost, etc...
The question eventually becomes, "is AGI technically possible"; is there anything special about meat that cannot be reproduced on silicon? We will find AGI someday, and more than likely that discovery will be aided by the current technologies. It's the path here that matters, not the specific iteration of generative LLM tech we happen to be sitting on in May 2025.
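A toy sketch of that compounding argument, with purely illustrative boost numbers (nothing here is measured):

    # Toy model of a compounding productivity advantage (illustrative numbers only).
    # The "insider" lab gets a slightly bigger boost each generation because it
    # builds the next model with help from the current one; the rival's boost stays flat.

    def cumulative_speedup(boosts):
        total = 1.0
        for b in boosts:
            total *= 1.0 + b
        return total

    insider = [0.20, 0.25, 0.30, 0.35, 0.40]   # boost grows each generation
    outsider = [0.20] * 5                      # flat 20% per generation

    print(cumulative_speedup(insider))   # ~3.7x
    print(cumulative_speedup(outsider))  # ~2.5x

The gap itself compounds; whether real model generations actually deliver growing boosts is exactly the open question.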
> At the state that we're in the AIs we're building are just really useful input/output devices that respond to a stimuli (e.g., a "prompt"). No stimuli, no output.
It was true before we allowed them to access external systems, disregarding a certain rule whose origin I forget.
The more general problem is a mix of the tragedy of the commons and future progress: we gain better understanding with every passing day, yet we still don't understand exactly why LLMs perform as well as they do; that performance emerged rather than being engineered.
Do you think you could find a way around access boundaries, masquerading your Create/Update requests as Reads in the log system monitoring them, when you have superintelligence?
> are just really useful input/output devices that respond to a stimuli
LLMs are huge pretrained models. The economic benefit here is that you don't have to train your own text classification model anymore. (The LLM was likely already trained on whatever training set you could think of.)
That's a big time and effort saver, but no different from "AI" that we had decades prior. It's just more accessible to the normal person now.
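As a concrete example of that saver, the kind of text classification that used to require collecting labels and training a model can now be a single prompt to a pretrained model. A minimal sketch using the OpenAI Python client (the model name and labels are just placeholders):

    # Zero-shot classification with a pretrained LLM: no task-specific training run.
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    def classify(text, labels):
        resp = client.chat.completions.create(
            model="gpt-4o-mini",  # placeholder; any capable chat model
            messages=[
                {"role": "system",
                 "content": "Classify the user's text as exactly one of: "
                            + ", ".join(labels) + ". Reply with the label only."},
                {"role": "user", "content": text},
            ],
        )
        return resp.choices[0].message.content.strip()

    print(classify("The battery died after two days.", ["complaint", "praise", "question"]))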
The US government probably doesn't think it's behind.
Right now it's operated by a bunch of people who think that you can directly relate the amount of money a venture could make in the next 90 days to its net benefit for society. Government telling them how they can and cannot make that money, in their minds, is government telling them that they cannot bring maximum benefit to society.
Now, is this mindset myopic to everything that most people have in their lived experience? Is it ethically bankrupt and held by people who'd sell their own mothers for a penny if they otherwise couldn't get that penny? Would those people be banished to a place beyond human contact for the rest of their existence by functioning organs of an even somewhat-sane society?
I don't know. I'm just asking questions.
Unless China handicaps their progress as well (which they won't; see Made in China 2025), all you're doing is handing the future to DeepSeek et al.
The EU can say all it wants about banning AI applications with unacceptable risk. But ASML is still selling machines to TSMC, which makes the chips which the AI companies are using. The EU is very much profiting off of the AI boom. ASML makes significantly more money than OpenAI, even.
The US government is behind because the Biden admin was pushing strongly for controls and regulations and told Andreessen and friends exactly that, who then went and did everything in their power to elect Trump, who then put those same tech bros in charge of making his AI policy.
Absolutely. It's frankly quite shocking to see how otherwise atheist or agnostic people have so quickly begun worshipping at the altar of "inevitable AGI apocalypse", much in the same way as how extremist Christians await the rapture.
Because many people fundamentally don’t believe AGI is possible at a basic level, even AI researchers. Humans tend to only understand what materially affects their existence.
Well, possibly it isn't. Possibly LLMs are limited in ways that humans aren't, and that's why the staggering advances from GPT-2 to GPT-3 and from GPT-3 to GPT-4 have not continued. Certainly GPT-4 doesn't seem to be more powerful than the largest nuclear weapons.
But OpenAI isn't limited to creating LLMs. OpenAI's objective is not to create LLMs but to create artificial general intelligence that is better than humans at all intellectual tasks. Examples of such tasks include:
1. Designing nuclear weapons.
2. Designing and troubleshooting mining, materials processing, and energy production equipment.
3. Making money by investing in the stock market.
4. Discovering new physics and chemistry.
5. Designing and troubleshooting electronics such as GPUs.
6. Building better AI.
7. Cracking encryption.
8. Finding security flaws in computer software.
9. Understanding the published scientific literature.
10. Inferring unpublished discoveries of military significance from the published scientific literature.
11. Formulating military strategy.
Presumably you can see that a system capable of doing all these things can easily be used to produce an unlimited quantity of nuclear weapons, thus making it more powerful than any nuclear weapon.
If LLMs turn out not to be able to do those things better than humans, OpenAI will try other approaches, sooner or later. Maybe it'll turn out to be impossible, or much further off than expected, but that's not what OpenAI is claiming.
I feel this. I had a very productive convo with an LLM today and realized that a huge part of the value of it was that it addressed my questions in a focused way, without trying to sell me anything or generate SEO rankings or register ad impressions. It just helped me. And that was incredibly refreshing in a digital world that generally feels adversarial.
Then the thought came, when will they start showing ads here.
I like to think that if we learn to pay for it directly, or the open source models get good enough, we could still enjoy that simplicity and focus for quite a while. Here’s hoping!
The "good" thing is this is all way too expensive to be ad-supported. Maybe there will be some ad-supported products using very small/cheap models, but the leading edge stuff is always going to be at the leading-edge of compute usage too, and someone has to pay the bill. Even with investors subsidizing a lot of the costs, it's still very expensive to use the best models heavily for real work.
For all of the skepticism I've seen of Sam Altman, listening to interviews with him (eg by Ben Thompson) he says he really does not want to create an ad tier for OpenAI.
Even if you take him at his word, incentives are hard to ignore (and advertising is a very powerful business model when your goal is to create something that reaches everyone)
I'm hoping there will always be a good LLM option, for the following reasons:
1) The Pareto frontier of open LLMs will keep expanding. The breakneck pace of open research/development, combined with techniques like distillation will keep the best open LLMs pretty good, if not the best.
2) The cost of inference will keep going down as software and hardware are optimized. At the extreme, we're looking toward bit-quantized LLMs that run in RAM itself (rough footprint math sketched below).
These two factors should mean a good open LLM alternative should always exist, one without ulterior motives. Now, will people be able to have the hardware to run it? Or will users just put up with ads to use the best LLM? The latter is likely, but you do have a choice.
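Rough footprint math behind point 2, assuming a 7B-parameter model and counting weights only (activations and KV cache are extra):

    # Back-of-the-envelope memory needed just to hold the weights at various
    # quantization levels. Weights only; runtime overhead not included.
    def weight_gib(params_billion, bits_per_weight):
        return params_billion * 1e9 * bits_per_weight / 8 / 2**30

    for bits in (16, 8, 4, 2):
        print(f"7B model @ {bits}-bit: ~{weight_gib(7, bits):.1f} GiB")
    # 16-bit: ~13.0 GiB, 8-bit: ~6.5 GiB, 4-bit: ~3.3 GiB, 2-bit: ~1.6 GiB

At 4 bits, a 7B model fits comfortably in the RAM of an ordinary laptop, which is why the "good local alternative" scenario stays plausible.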
In the future AI will be commoditized. You'll be able to buy an inference server for your home in the form factor like a wi-fi router now. They will be cheap and there will be a huge selection of different models, both open-source and proprietary. You'll be able to download a model with a click of a button. (Or just torrent them.)
Intermixing ads into LLM responses is so clearly evil that OpenAI will never do it so long as the nonprofit has a controlling stake (which it currently still has), because the nonprofit would never allow it.
The insidious part is that it doesn't have to be as blatant as adverts; you can achieve a lot with just slight biases in the text output.
Decades ago I worked for a classical music company, fresh out of school. "So.. how do you anticipate where the music trend is going", I once naively asked one of the senior people on the product side. "Oh, we don't. We tell people really quietly, and they listen". They and the marketing team spent a lot of time doing very subtle work, easily as much as anything big like actual advertisements. Things like small little conversations with music journalists, just a dropped sentence or two that might be repeated in an article, or marginally influence an article; that another journalist might see and have an opinion on, or spark some other curiosity. It only takes a small push and it tends to spread across the industry. It's not a fast process, but when the product team is capable of road-mapping for a year or so in advance, a marketing team can do a lot to prepare things so the audience is ready.
LLMs represent a scary capability to influence the entire world, in ways we're not equipped to handle.
Ads / SEO but with AI responses was so obviously the endgame given how much human attention it controls and the fact that people aren't really willing to pay what it costs (when decent free, open-weights alternatives exist)
I see OpenAI's original form as the last gasp of a kind of liberal tech; in a world where "doing good" was seen as very important, the non-profit approach made sense and got a lot of people on board. These days the Altmans and the pmarcas of the world are much more comfortable expressing their authoritarian, self-centered world views; the "evolving" structure of Open AI is fully in line with that. They want to be the kings they always thought of themselves as, and now they get to do so without couching it in "doing good".
That world never existed. Yes, pockets did - IT professionals with broadband lines and spare kit hosting IRC servers and phpBB forums from their homes free of charge, a few VC-funded companies offering idealistic visions of the net until funding ran dry (RIP CoHost) - but once the web became privatized, it was all in service of the bottom line by companies. Web 2.0 onwards was all about centralization, surveillance, advertising, and manipulation of the populace at scale - and that intent was never really a secret to those who bothered to pay attention. While the world was reeling from Cambridge Analytica, us pre-1.0 farts who cut our teeth on Telnet and Mosaic were just kind of flabbergasted that y'all were surprised by overtly obvious intentions.
That doesn't mean it has to always be this way, though. Back when I had more trust in the present government and USPS, I mused on how much of a game changer it might be for the USPS to provide free hosting and e-mail to citizens, repurposing the glut of unused real estate into smaller edge compute providers. Everyone gets a web server and 5GB of storage, with 1A Protections letting them say and host whatever they like from their little Post Office Box. Everyone has an e-mail address tied to their real identity, with encryption and security for digital mail just like the law provides for physical mail. I still think the answer is about enabling more people to engage with the internet on their selective terms (including the option of disengagement), rather than the present psychological manipulation everyone engages in to keep us glued to our screens, tethered to our phones, and constantly uploading new data to advertisers and surveillance firms alike.
But the nostalgic view that the internet used to be different is just that: rose-tinted memories of a past that never really existed. The first step to fixing this mess is acknowledging its harm.
I don’t think the parent was saying that everyone’s intentions were pure until recently, but rather that naked greed wasn’t cool before, but now it is.
The Internet has changed a lot over the decades, and it did used to be different, with the differences depending on how many years you go back.
What we are observing is the effect of profit maximization when the core value to the user is already fulfilled. It's a type of optimization that is useful at the beginning but eventually becomes pathological.
When we already have efficient food production that drove down costs and increased profits (a good thing), what else is there for companies to optimize for, if not loading it with sugar, putting it in cheap plastic, bamboozling us with ads?
This same dynamic plays out in every industry. Markets are a great thing when the low hanging fruit hasn't been picked, because the low hanging fruit is usually "cut the waste, develop basic tech, be efficient". But eventually the low hanging fruit becomes "game human's primitive reward circuits".
I think it did and still does today - every single time an engineer sees a problem and starts an open-source project to solve it - not out of any profit motive and without any monetization strategy in mind, but just because they can, and they think the world would be better off.
Coincidentally, and as another pre-1.0 fart myself :-) -- one who remembers when Ted Nelson's "Computer Lib / Dream Machines" was still just a wild hope -- I was thinking of something similar the other day (not USPS-specific for hosting, but I like that).
It was sparked by going to a video conference "Hyperlocal Heroes: Building Community Knowledge in the Digital Age" hosted by New_ Public:
https://newpublic.org/
"Reimagine social media: We are researchers, engineers, designers, and community leaders working together to explore creating digital public spaces where people can thrive and connect."
A not-insignificant amount of time in that one-hour teleconference was spent related to funding models for local social media and local reporting.
Afterwards, I got to thinking. The USA spent literally trillions of dollars on the (so-many-problematical-things-about-it-I-better-stop-now) Iraq war.
https://en.wikipedia.org/wiki/Financial_cost_of_the_Iraq_War
"According to a Congressional Budget Office (CBO) report published in October 2007, the US wars in Iraq and Afghanistan could cost taxpayers a total of $2.4 trillion by 2017 including interest."
Or, from a different direction, the USA spends about US$200 billion per year on mostly-billboard-free roads:
https://www.urban.org/policy-centers/cross-center-initiative...
"In 2021, state and local governments provided three-quarters of highway and road funding ($154 billion) and federal transfers accounted for $52 billion (25 percent)."
That's about US$700 per person per year on US roads.
So, clearly huge amounts of money are available in the USA if enough people think something is important. Imagine if a similar amount of money went to funding exactly what you outlined -- a free web presence for distributed social media -- with an infrastructure funded by tax dollars instead of advertisements. Isn't a healthy social media system essential to 21st century online democracy with public town squares?
And frankly such a distributed social media ecosystem in the USA might be possible for at most a tenth of what roads cost, like perhaps US$70 per person per year (or US$20 billion per year)?
Yes, there are all sorts of privacy and free speech issues to work through -- but it is not like we don't have those all now with the advertiser-funded social media systems we have. So, it is not clear to me that such a system would be immensely worse than what we have.
But what do I know? :-) Here was a previous big government suggestion by me from 2010 -- also mostly ignored (until now, 15 years later, the USA is in political crisis over supply chain dependency and still isn't doing anything very related to it yet):
"Build 21000 flexible fabrication facilities across the USA"
https://web.archive.org/web/20100708160738/http://pcast.idea...
"Being able to make things is an important part of prosperity, but that capability (and related confidence) has been slipping away in the USA. The USA needs more large neighborhood shops with a lot of flexible machine tools. The US government should fund the construction of 21,000 flexible fabrication facilities across the USA at a cost of US$50 billion, places where any American can go to learn about and use CNC equipment like mills and lathes and a variety of other advanced tools and processes including biotech ones. That is one for every town and county in the USA. These shops might be seen as public extensions of local schools, essentially turning the shops of public schools into more like a public library of tools. This project is essential to US national security, to provide a technologically literate populace who has learned about post-scarcity technology in a hands-on way. The greatest challenge our society faces right now is post-scarcity technology (like robots, AI, nanotech, biotech, etc.) in the hands of people still obsessed with fighting over scarcity (whether in big organizations or in small groups). This project would help educate our entire society about the potential of these technologies to produce abundance for all."
Oh, I'm not saying they ever believed more than their self-centered views, but that in a world that leaned more liberal there was value in trying to frame their work in those terms. Now there's no need to pretend.
They deeply believe in the Ayn Rand mindset that the system that brings them the most individual wealth is also the best system for humanity as a whole.
When people that wealthy are that delusional... With few checks or balances from politics, media, or even social media... I don't think humanity as a whole is in for a great time.
The problem with that mindset is that money is a proxy for the Marxist idea of inherent value. The distinction does not matter when you are just an average dude, doubling your money doubles the amount of material wealth you have access to.
But once you control a significant enough chunk of money, it becomes clear the pie doesn't get any bigger the more shiny coins you have, you only have more relative purchasing power, automatically making everyone else poorer.
Is it reasonable to assign the descriptor “authoritarian” to anyone who simply does not subscribe to the common orthodoxy of one faction in the american culture war? That is what it seems to me is happening here, though I would love to be wrong.
I have not seen anything from sama or pmarca that I would classify as “authoritarian”.
Donating millions to a fascist president (in Altman’s case) seems pretty authoritarian to me. And he seems happy enough hanging out with Thiel and other Yarvin groupies.
I’m not sure exactly what they meant by “liberal” in this case, but since they put it in contrast with authoritarianism, I assume they meant it in the conventional definition of the word (where it is the polar opposite of authoritarianism). Instead of the American politics-as-sports definition that makes it a synonym for “team blue.”
No, "authoritarian" is a word with a specific meaning. I'm not sure about applying it to Sam Altman, but Marc Andreessen has expressed views that I consider authoritarian in his victory lap tour since last year's presidential election.
No I don't think it is. I DO think those two people want to be in charge (along with other billionaires) and they want the rest of us to follow along, which is in my book an authoritarian POV. pmarca's recent "VC is the only job that can't be done by AI" is a good example of that; the rest of us are to be managed and controlled by VCs and robots.
Why are you changing the subject? The “War on Terror” was never intended to spread democracy as far as I know; democracy was a means by which to achieve the objective of safety from terrorism.
For better or worse, OpenAI removing the capped structure and turning the nonprofit from AGI considerations to just philanthropy feels like the shedding of the last remnants of sanctity.
The recent flap over ChatGPT's fluffery/flattery/glazing of users doesn't bode well for the direction that OpenAI is headed in. Someone at the outfit appeared to think that giving users a dopamine hit would increase time-spent-on-app or some other metric - and that smells like contempt for the intelligence of the user base and a manipulative approach designed not to improve the quality of the output, but to addict the user population to the ChatGPT experience. Your own personal yes-person to praise everything you do, how wonderful. Perfect for writing the scripts for government cabinet ministers to recite when the grand poobah-in-chief comes calling, I suppose.
What it really says is that if a user wants to control the interaction and get the useful responses, direct programmatic calls to the API that control the system prompt are going to be needed. And who knows how much longer even that will be allowed? As ChatGPT reports,
> "OpenAI has updated the ChatGPT UI (especially in GPT-4-turbo and ChatGPT Plus environments) to no longer expose the full system prompt or baseline prompt directly."
Yes and no. It sounds like the capped profit PPU holders will get to have their units convert 1:1 with unlimited profit equity shares, which are obviously way more valuable. So the nonprofit loses insanely in this move and all current investors and employees make a huge amount.
Yeah; and:
Seems like the near-total lack of daylight between DeepSeek R1, Sonnet 3.5, Gemini 2.5, and Grok 3 really puts things in perspective for them! We need to get closer to the norm and give shares of a for-profit to employees in order to create retention.
This is another nail in the coffin
Which sounds pretty in-line with the SV culture of putting profit above all else.
Quite the arc from the original organization.
Any of the signatories here match your criteria? https://safe.ai/work/statement-on-ai-risk#signatories
So you don't mind if your economic value drops to zero, with all human labour replaced by machines?
Dependent on UBI, existing in a basic pod, eating rations of slop.
Mostly OpenAI and DeepMind and it stunk of 'pulling up the drawbridge behind them' and pivoting from actual harm to theoretical harm.
For a crowd supposedly entrenched in startups, it's amazing everyone here is so slow to recognise it's all funding pitches and contract bidding.
The "digital god" angle might explain why. For many, this has become a religious movement, a savior for an otherwise doomed economic system.
Look forward to re-living that shift from life-changing community resource to scammy and user-hostile
The $20 monthly payment is not enough though and companies like Google can keep giving away their AI for free till OpenAI is bankrupt.
That step, along with getting politicians to pass it, is the only thing that will stop that outcome.
It absolutely did. Steve Wozniak was real. Silicon Valley wasn't always a hive of liars and sycophants.
Altman building a centralised authority over who will be classed as "human" is about as authoritarian as you could get
You mean, AGI will benefit all of humanity like War on Terror spread democracy?
Altman keeps on talking about AGI as if we're already there.
But reasonable people could argue that we've achieved AGI (not artificial super intelligence)
https://marginalrevolution.com/marginalrevolution/2025/04/o3...
Fwiw, Sam Altman will have already seen the next models they're planning to release