> “The Grok integration with X has made everyone jealous,” says someone working at another big AI lab. “Especially how people create viral tweets by getting it to say something stupid.”
It's awesome to see the amazing value for society being created by big tech these days.
To think that even a year ago the idea of Instagram-style social media where all posts are openly AI-generated sounded very dystopian. Now I can clearly see it is something people would pay for and that HN people would gladly build. I wasn’t always a Luddite, but damn they made me one.
> I wasn’t always a Luddite, but damn they made me one.
If this industry didn't pay so well, I would've been gone years ago. I'm lucky to work in a job that I think is ethical and improving the world, but it's so goddamn embarrassing to even be in the same room as the AI and blockchain types and the ad hucksters.
There was /r/SubSimulatorGPT2, but you're telling me people would PAY for that? Maybe if you tricked them - which is arguably what Reddit is doing.
Social media's always been about giving people whatever makes them come back to the site, no matter how unethical. If an army of fake fans makes me think I have an army of real fans and keeps me posting for attention from my fans, they will totally do that. Unless it's illegal.
What you gotta understand is this world is going straight to hell, humanity might not be around much longer. Might as well embrace the chaos and enjoy the ride down. No point in being a Luddite now, the time for that was decades ago.
In George Orwell's 1984, there is a machine called the versificator that generates music and literature without any human intervention, presumably for the "entertainment" of the proletarians.
Each time I think I've seen dystopia and the pinnacle of stupidity someone finds a new way to top it. Either that's an amazing superpower, or I'm infected with incurable optimism.
Also, if the value in grok is reposting stupid things, I don't see how it adds any value to have it embedded in the social network. You could just as easily ridicule Gemini this way?
Also Twitter generates viral tweets because people use Twitter. A new social network with a similarly embedded AI will just be ridiculed on Twitter and go viral there
I would say sending rockets to space is currently orthogonal to the issues we are facing as a species. A decade ago, I would have said the big issue was finding a way to live without destroying ecosystems, but apparently finding a way to live without a major war is also a hot topic now.
This kind of news should be a death-knell for OpenAI.
If you've built your value on promising imminent AGI then this sort of thing is purely a distraction, and you wouldn't even be considering it... unless you knew you weren't about to shortly offer AGI.
An alternative is that OpenAI is quickly being locked out of sources of human interaction because of competition; one way to "fix" that is to build your own meadow for data cows.
xAI isn't allowing people to use the Twitter feed to train AI.
Google is keeping its properties for Gemini.
Microsoft, which presumably could let OpenAI use its data fields, appears (publicly at least) to be in a love/hate relationship with OpenAI these days.
So you plant a meadow of tasty human interaction morsels to get humans to sit around and munch on them while you hook up your milking machine to their data teats and start sucking data.
The assumption that you can just build a successful social network as an aside because you need access to data seems wildly optimistic. Next will be Netflix announcing it's working on AGI because show writers have lately not been very imaginative, and they need fresh content to keep subscribers.
I came across a quote in a forum which was part of a discussion around why corporate messaging and pandering has gotten so crazy lately. One comment stuck out as especially interesting, and I'll quote it in full below:
---
C suites are usually made up of really out of touch and weird workaholics, because that is what it takes to make it to the C suite of a large company. They buy DSS (decision support services / software) from vendors, usually marketing groups, that basically tell them what is in and what isn't. Many marketing companies are now getting that data from Twitter and Reddit, and portraying it as the broad social trend. This is a problem because Twitter and Reddit are both extremely curated and censored, and the tone of the conversation there is really artificial, and can lead to really bad conclusions.
---
This is only somewhat related, but if OpenAI did actually succeed in building their own successful social media platform (doubtful) they would be basing a lot of their model on whatever subset of people wanted to be part of the OpenAI social media platform. The opportunity for both mundane and malicious bias in models there seems huge.
Somewhat related, apparently a lot of English spellings were standardized by the invention of the printing press. This isn't surprising; it was one of the first technologies to really democratize written materials, and so it had a very outsized power to set standards. LLMs feel like they could be a bit like this, particularly if everyone continues with their current trends of intentionally building reliance on them into their products / companies / workflows. As a real life example, someone at work realized you could ask co-pilot to rate the professionalism of your communication during a meeting. This seems quite chilling, since you're not really rating your professionalism, but measuring yourself against whatever weird bell curve exists in co-pilot.
I'm absolutely baffled that LLMs are seeing broad adoption, and absolutely baffled that people are intentionally adopting and integrating them into their lives. I'm in my early 40s now. I'm not sure if I can get out of the tech field, but I'm seriously thinking about my options at this point.
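To make the Copilot example concrete: that "rate my professionalism" trick is just a scoring prompt, and whatever norms the model absorbed in training become the bell curve you're graded against. A minimal sketch, assuming an OpenAI-compatible chat API; the model name and rubric are my own placeholders, not whatever Copilot actually runs:

    # Hypothetical sketch of an LLM "professionalism rating" -- the score
    # reflects the model's learned norms, not any objective standard.
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    def rate_professionalism(transcript: str) -> str:
        response = client.chat.completions.create(
            model="gpt-4o-mini",  # placeholder model choice
            messages=[
                {"role": "system",
                 "content": "Rate the professionalism of this meeting "
                            "transcript from 1 to 10 and explain briefly."},
                {"role": "user", "content": transcript},
            ],
        )
        return response.choices[0].message.content

    print(rate_professionalism("Sorry I'm late, the build was on fire again..."))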
I would just like to appreciate your imagery and wordplay here, it’s spot on and I think should be our standard for conceptualizing this corporate behavior.
They also have a contract with Reddit to train on user data (a common go-to source for finding non-spam search results). Unsure how many other official agreements they have vs just scraping.
> Microsoft, which presumably could let OpenAI use its data fields, appears (publicly at least) to be in a love/hate relationship with OpenAI these days.
sama probably would like to take Satya's seat for what he no doubt sees as unblocking the path to utopia. The slight problem is he's becoming a bit lonely in that thinking.
If this were their plan, they’d be discounting that some of their users would be controlled by their own AI.
My guess is that they’re trying other things to diversify themselves and/or to try to keep investors interested. Whether or not it works is irrelevant as long as they can convince others it will increase their usage.
But don't they have ChatGPT, the fifth or whatever most popular website on the planet? And deals with Reddit. Sure, that can't touch the treasure trove Google is sitting on, xAI sure won't give them access, and GitHub could perhaps sell them data (but that's a maybe).
> If you've built your value on promising imminent AGI then this sort of thing is purely a distraction, and you wouldn't even be considering it... unless you knew you weren't about to shortly offer AGI.
I’m not a big fan of OpenAI but this seems a little unfair. They have (or at least had) a pretty kick ass product. Great brand value too.
Death-knell? Maybe… but I wouldn’t read into it. I’d be looking more at their key employees leaving. That’s what kills companies.
- Product is not kickass. Hallucinations and cost limit its usefulness, and it's incinerating money. Prices are too high and need to go much higher to turn a profit.
- Their brand value is terrible. Many people loathe AI for what it's going to do for jobs, and the people who like it are just as happy to use CoPilot or Cursor or Gemini. Frontier models are mostly fungible to consumers. No one is brand-loyal to OpenAI.
- Many key employees have already left or been forced out.
> I’m not a big fan of OpenAI but this seems a little unfair. They have (or at least had) a pretty kick ass product. Great brand value too.
Even if you believe all that to be true, it in no way contradicts what you quoted or makes it unfair. Having a kick ass product and good brand awareness in no way correlates to being close to AGI.
> If you've built your value on promising imminent AGI then this sort of thing is purely a distraction, and you wouldn't even be considering it... unless you knew you weren't about to shortly offer AGI.
Or even if you did come up with AGI, so would everyone else. Gemini is arguably better than ChatGPT now.
Bingo. The secret sauce is never a sustainable long-term moat. Things leak, competitors copy or make even better things, employees switch jobs. AI looks less vulnerable, since it's academically difficult and expertise is still limited. But time and time again, new secret-sauce ingredients last for months at most before they are surpassed, oftentimes by hobbyists or other small actors.
I remember at Google they said that if source code leaked, nobody was actually worried about the tech being stolen; the vast majority of the code is open to all employees, with some exceptions like spam and ranking. It's still protected, but not considered a moat.
The moat comes from other things, such as datacenter and global networking infrastructure, and marketing new products by pushing them through existing ones (put Gemini in Search, add Chrome to Android, etc.). Most importantly, you can use data you already have to bootstrap new products, say Gmail and Calendar integrated with personalized assistants.
If you play your cards right, yes, there are some first-mover advantages, but they are more superficial than your average Twitter hype thread makes you think. They can give you the ability to set unofficial standards and APIs, like Kubernetes or S3 (maybe OpenAI APIs?). And you can set certain agendas and market your name for recognition and trust. But all that can slip through your fingers if a behemoth picks up where you left off. They have so many advantages, except for being the fastest.
In fairness, the AGI definition predicted by doomsday safety experts who don't have time for such unimportant concerns as copyright or misinformation via this tech, everyone who is most certainly not merely hyping to get investor cash, and the utterly serious and scientifically grounded research happening at MIRI, is essentially that one company will achieve AGI, shortly thereafter that will lead to a singularity, and that's that. No one else could create a second, because that scary AGI is so powerful it would prevent anyone else from doing so, or from shutting it down, including other AGIs. And no, this is totally not a sci-fi plot, but rigorous research. Incidentally, I'm looking for someone to help me prevent OAI from killing my own grandfather, because that scenario is also incredibly likely and should be taken seriously.
If it's not obvious already: I believe we are far away from that with LLMs, and I think that those working in model safety who give more attention to AGI than to current-day concerns, like Meta just torrenting for model data, are not very serious people. But I have accepted that this isn't a popular opinion amongst industry professionals.
Not least because, according to them, letting laws get in the way of model training is bad, either for the same weird logic Musk uses to justify testing FSD betas on an unwilling public (the potential to prevent future deaths), or because they genuinely took RB seriously. No idea which is worse for serious adults…
AGI is a technology or a feature, not a product. ChatGPT is a product. They need some more products to pay for one of the most expensive technologies ever (and one that has yet to be delivered).
AI as we know it (GPT-based LLMs) has peaked. OpenAI noticed this sometime in autumn last year, when the would-be GPT-5 was unimpressive despite its huge size. I still think ChatGPT 4.5 was GPT-5, just rebranded to set expectations.
Google Gemini 2.5 Pro was remarkably good, and I'm not sure how they did it. It's like an elite athlete making a leap forward despite harsh competition. They probably have excellent training methodology and data quality.
DeepSeek made huge inroads in affordability…
But even with those, intelligence itself is seeing diminishing returns while training costs are not.
So OpenAI _needs_ to diversify - somehow. If they rely on intelligence alone, then they’re toast. So they can’t.
I tentatively agree that LLMs have reached something of a ceiling, and in any other industry diversifying would make sense at this stage. But as others pointed out, OAI and others have attached their valuation directly to their definition of achieving "AGI". Any pivot away from that, if AGI were realistic in the coming years (my opinion: it isn't), would be foolhardy and go against investors. So, in turn, this is clearly admitting that even sama doesn't see AGI as possible in the near term.
Adding social media to your thing is so 2018. Is the next big thing really just a warmed over version of the last big thing? Is sama just completely out of ideas to save his money-burner?
It's an easy thing to slap on to a service with lots of users. Back in the day this would be called a 'message board'. "Social media" requires the use of iframes that can be embedded on 3rd party sites. OpenAI is a login-only environment so I can see them going for a Discord-type of platform rather than something that spreads to the open web.
I think it might just be about distribution. Grok gets a lot of interesting opportunities through X; then throw in the way people reacted to the new 4o image-gen capabilities.
On the other hand, if you knew AGI was on the near horizon, you'd know that AGI will want to have friends to remain happy. You can give AGI a physical form so it can walk down to the bar – or you can, much more simply, give it an online social network.
Someone down below mentioned ads, and I think that might well be the route they're going to try: charging advertisers to influence the output of the AI.
As for whether it will work, I don't know how they're possibly going to get the "seed community" which will encourage others to join up. Maybe they're hoping that all the people making slop posts on other social networks want to cut out the middleman and have communities of people who actually enjoy that. As always, the sfw/nsfw censorship line will be an important definer, and I can't imagine them choosing NSFW.
> Now they've hamstrung themselves into this AGI nonsense to try
AFAIK, they've been on the AGI hype-train for a very long time, before they reached mainstream popularity for sure. From their own blog (2020 - https://openai.com/index/organizational-update/), here is a mention of their "mission":
> We’re proud of these and other research breakthroughs by our team, all made as part of our mission to achieve general-purpose AI that is safe and reliable, and which benefits all humanity.
I'm not sure OpenAI trying to reach AGI is a "strategic mistake" as much as "the basis for the business" (which, to be fair, was a non-profit organization initially).
There could be a too-many-cooks problem in the AI research part of their work.
Also, I don't think Sama thinks like a typical large-org manager. OpenAI has enough money to have all sorts of products/labs that are startup-like. No reason to stand by waiting for the research work.
>One idea behind the OpenAI social prototype, we’ve heard, is to have AI help people share better content. “The Grok integration with X has made everyone jealous,” says someone working at another big AI lab. “Especially how people create viral tweets by getting it to say something stupid.”
This would be a decent PR stunt, but would such a platform offer anything of value?
It might be more valuable to set AI to the task of making the most human social platform out there. Right now, Facebook, TikTok, Reddit, etc. are all rife with bots, spam, and generative AI junk. Finding good content in this sea of noise is becoming increasingly difficult. A social media platform that uses AI to filter out spam, bots, and other AI with the goal of making human content easy to access might really catch on. Set a thief to catch thieves.
Who are we kidding. It's going to be Will Smith eating spaghetti all the way down.
An interesting use for AI right now would be using it as a gatekeeping filter, selecting social media for quality based on customisable definitions of quality.
Using it as a filter instead of a generator would provide information about which content has real social value, which content doesn't, and what the many dimensions of "value" are.
The current maximalist "Use AI to generate as much as possible" trend is the opposite of social intelligence.
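A minimal sketch of that gatekeeping filter, assuming an OpenAI-compatible chat API; the model name, the rubric, and the fetch_candidate_posts helper are all hypothetical stand-ins. The point is that the definition of "quality" stays customisable by the user:

    # Sketch: use the model as a filter, not a generator. The rubric is
    # user-supplied -- the "customisable definition of quality" idea above.
    import json
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    MY_RUBRIC = """Reward original firsthand information and civil tone.
    Penalize engagement bait, spam, and obvious LLM boilerplate."""

    def quality_score(post: str) -> float:
        response = client.chat.completions.create(
            model="gpt-4o-mini",  # placeholder model choice
            messages=[
                {"role": "system",
                 "content": "You are a content filter. Rubric:\n"
                            + MY_RUBRIC
                            + '\nReply with JSON: {"score": <float 0-1>}'},
                {"role": "user", "content": post},
            ],
            response_format={"type": "json_object"},
        )
        return json.loads(response.choices[0].message.content)["score"]

    def fetch_candidate_posts() -> list[str]:
        # stand-in for pulling posts from whatever network you're filtering
        return ["trip report from the factory tour, with photos...",
                "10 MIND-BLOWING ChatGPT hacks (thread)"]

    feed = [p for p in fetch_candidate_posts() if quality_score(p) > 0.7]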
I think that's right. Twitter without ads, showing you content you _do_ want to see using some embeddings magic, with decent blocking mechanisms, and not being run as a personal mouthpiece by the world's most unpopular man ... certainly not the worst idea.
It's a nice idea in principle, but it would probably immediately become a way for the admins to promote some views and discourage others, with the excuse of some opinions being of lower quality.
Why would AI be any better at filtering out spam than developers have so far been with ML?
The only way to avoid spam is to actually make a social network for humans, and the only way to do so is to verify that each account belongs to a single human. The only way I've found that this can be done is by using passports[0].
0 - https://onlyhumanhub.com
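For illustration, here is a sketch of how one-passport-one-account could work without the service retaining passport copies: keep only a keyed hash of the stable document fields and reject signups whose fingerprint already exists. This is my guess at the general shape, not onlyhumanhub's actual implementation:

    # Hypothetical sketch: each passport can back at most one account,
    # and the site stores only an HMAC fingerprint, not the document.
    import hashlib
    import hmac

    SERVER_SECRET = b"rotate-and-guard-this-pepper"  # hypothetical key

    def passport_fingerprint(issuing_country: str, doc_number: str) -> str:
        material = f"{issuing_country}:{doc_number}".encode()
        return hmac.new(SERVER_SECRET, material, hashlib.sha256).hexdigest()

    seen: set[str] = set()  # stand-in for a database table

    def register(issuing_country: str, doc_number: str) -> None:
        fp = passport_fingerprint(issuing_country, doc_number)
        if fp in seen:
            raise ValueError("this passport already backs an account")
        seen.add(fp)

    register("NLD", "X1234567")  # hypothetical example document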
I've never been comfortable with this idea that people should use their real identity online. Sure they can if they choose to, but IMO it absolutely shouldn't be required or expected.
The idea that I would give a copy of my passport to a social media company just to sign up, and that the social media company has access to verify the validity of the passport with the issuing government, just feels very wrong to me.
No, nothing of value. If you ever want to lose faith in the future of humanity search "@grok" on Twitter and look at all the interactions people have with it. Just total infantilism, people needing tl;drs spoon-fed to them, needing summarization and one-word answers because they don't want to read, arguing with it or whining to Musk if they don't get the answer they want to confirm what they already believe.
The worst is like a dozen people in the replies to a post asking Grok the exact same obvious follow-up question. Somehow, having access to an LLM has completely annihilated these commenters' ability to scroll down 50 pixels.
Before we get too excited about disparaging those seeking summaries: it's common for people of all levels to want summary information. It doesn't mean they want everything summarized or are bad people.
I'm not particularly interested in "tariffs, what are they good for, what's the history and examples good or bad"... so I asked for a summary from grok. It gave me a decent summary. Concise and structured. I asked a few follow-ups, then went on with my life knowing a little more than nothing about tariffs. A win for summarized information.
> people needing tl;drs spoon-fed to them, needing summarization and one-word answers because they don't want to read
It's bad that this need exists. However, introducing this feature did not create the need. And if the need exists, fulfilling it is still better, because otherwise these kinds of people wouldn't get the information at all.
You also can get Grok to fact check bullshit by tagging @grok and asking it a question about a post. Unfortunately this is not realtime as it can sometimes take up to an hour to respond, but I've found it to be pretty level headed in its responses. I use this feature often.
True. I see that too. It's a good addition to community notes. It can correctly evaluate "partially true" posts and those lacking details, so it's great at spotting cherry-picked information.
I haven't been happier online in the last 10 years than after I stopped checking social media. And in that miserable time it wasn't even a naked beg for training data like this.
But I really don't see why anyone would even use an OpenAI "social network" in the first place.
It does allow one thing for OpenAI, other than training data (which admittedly will probably be pretty low quality): it is a natural venue for ad sales.
Social media is a plague, including LinkedIn. Anything that lets you follow others and/or erodes your anonymity is just different degrees of cancer waiting to happen.
The best I ever enjoyed the internet was the sweet spot between dial up and DSL where I was gaming in text based/turn based games, talking on forums, and chatting using IRC.
Agreed. I wasn't particularly hooked, didn't use it very much already. As an architect, designer, and professor I had ig, and for the last five years basically only for work. But the feeling of freedom in its absence these past few months has been palpable.
Early fb reconnecting with people I hadn't seen since high school was okay. The blog / Google Reader era happening at the same time was the real golden age for me. And it's been all downhill since.
> I haven't been happier online in the last 10 years than after I stopped checking social media. And in that miserable time it wasn't even a naked beg for training data like this.
Meta/Twitter/etc. are drug dealers.
> But I really don't see why anyone would even use an OpenAI "social network" in the first place.
I really don't see why anyone would even use heroin, yet they do.
Oh I get one thing - other than ads. So the idea of an LLM filter to algorithmically tailor your own consumption has some utility.
The logical application would be an existing social network -using- chat gpt to do this.
But all the existing ones have their own models, so if they can't plug in to an existing one like goooooogle did to yahoo in the olden days, they have to start their own.
That makes a certain amount of (backward) sense for them. I don't think it'll work. But there's some logic if you're looking from -their- worldview.
Isn't the selling point of Bluesky that you can customize your feed your way? I don't know the tech behind that, but the feed is "open", isn't it? Can they plug into that?
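As far as I understand it, yes: Bluesky feeds are served over the open AT Protocol, and public feeds can be read without auth from the public AppView. A sketch; the endpoint and the feed URI below are from memory, so treat them as assumptions:

    # Hypothetical sketch of reading a public Bluesky feed generator.
    import requests

    APPVIEW = "https://public.api.bsky.app/xrpc/app.bsky.feed.getFeed"
    WHATS_HOT = ("at://did:plc:z72i7hdynmk6r22z27h6tvur/"
                 "app.bsky.feed.generator/whats-hot")  # assumed feed URI

    resp = requests.get(APPVIEW, params={"feed": WHATS_HOT, "limit": 5})
    resp.raise_for_status()
    for item in resp.json()["feed"]:
        print(item["post"]["record"].get("text", ""))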
Social media is this generation's cigarettes. It feels good to use it for a bit, and there's an enormous amount of advertising for it and social pressure to use it, but it's extremely addictive and the long-term personal and public health consequences are absolutely crushing.
I hope one day we can strictly regulate social media and make pariahs of the people who built it, as we did with tobacco. Instead, we just did the equivalent of handing the entire federal government into Philip Morris's control, so my hopes are not high.
HN skips many of the dark patterns that other social media platforms have:
- no infinite scroll (you have to click on "More")
- no personal recommendation
- no feedback loop between your upvotes and the feed
- no messaging or following between users
HN looks a lot more like the newsgroups from back in the day.
But I can’t follow them.
I don’t get notifications when they post new links or comments, I can’t send them specifically my links and comments.
I have no groups or circles.
HN is more of a discussion forum and not for connecting with others.
Anyone can be anything and do anything they want in an abundant, machine assisted world. The connections, cliques, friends and network you cultivate are more important than ever before if you want to be heard above the noise. Sheer talent has long fallen by the wayside as a differentiator.
…or alternatively it’s not The Culture at all. Is live performance the new, ahem, rock star career? In fifty years time all the lawyers and engineers and bankers will be working two jobs for minimum wage. The real high earners will be the ones who can deliver live, unassisted art that showcases their skills with instruments and their voice.
Those who are truly passionate about the law will only be able to pursue it as a barely-living-wage hobby while being advised to “not give up the night job” — their main, stable source of income — as a cabaret singer. They might be a journalist or a programmer in their twenties for fun before economics forces them to settle down and get a real, stable job: starting a rock band.
The Culture presents such a tempting world view for the type of people who populate HN.
I've transitioned from strongly actually believing that such a thing was possible to strongly believing that we will destroy ourselves with AI long before we get there.
I don't even think it'll be from terminators and nuclear wars and that sort of thing. I think it will come wrapped in a hyper-specific personalized emotional intelligence, tuned to find the chinks in our memetic firewalls just so. It'll sell us supplements and personalized media and politicians and we'll feel enormously emotionally satisfied the whole time.
That's why it's so important to reduce all of your personal data points online. Imagine what they can reconstruct based on their modeling and comparing you to similar users. I have 60 years of involuntary data collection ahead of me. This is not going to be fun.
> I don't even think it'll be from terminators and nuclear wars and that sort of thing
I do. And I don't even think the issue is a hostile AI. There are 8 billion people in the world. Millions of those people have severe mental issues and would destroy the world if they could. It seems highly likely to me that AI will eventually give at least one of those people the means.
That'll be great for the world's natural outsiders: those who hate pop music and dislike even tailored ads because of the creepy feeling of influence, or who don't follow any politicians because they're all out to hoodwink you.
Oh, a subset will be at risk of being artificially satisfied but your hardcore grouch will always have a special "yeah, yeah, fuck off bot" attitude.
There is a bias there in action: we are assuming that the entire world is like this thing we just happen to be thinking about.
It is not.
Even if it were just a minority, there are plenty of people outside "this thing" that will profit from the ((putative) majority's) anesthesia. Or which at least will try to set the world on fire (anybody remember the elections in USA a few months ago? That was really dumb. But sometimes a dumb feat shows that one is alive, which is better than doing nothing and being taken for dead. Or it is at least good-enough peacocking to attract mates and pass on the genes, which is just an extravagant theory of mine that I'm almost certainly sure is false. And do not take this as an endorsement of DJT). I'm not being an optimist here; I've seen firsthand the result of revolutions, but it may be the least-bad outcome.
> I've transitioned from strongly actually believing that such a thing was possible to strongly believing that we will destroy ourselves with AI long before we get there.
I think we'll just die out. Everyone will be too busy having fun to have kids. It's already started in the West.
> The real high earners will be the ones who can deliver live, unassisted art that showcases their skills with instruments and their voice.
We already have so many of those that it’s very hard to make any sort of living at it. Very hard to see a world in which more people go into that market and can earn a living as anything other than a fantasy.
Cynically - I think we'd probably end up with more influencers, people who are young, good looking and/or charismatic enough to hold the attention of other people for long enough to sell them something.
The Culture is about a post-capitalist utopia. You're describing yet another cyberpunk-esque world where people still have to do wage labor to not starve.
You’re right so I made a slight edit to separate my two ideas. Thanks for even reading them at all! I try to contribute positively to this site when I can, and riffing on the overlap between fiction and real-life — a la Doctorow — seems like a good way to be curious.
> Those who are truly passionate about the law will only be able to pursue it as a barely-living-wage hobby while being advised to “not give up the night job” — their main, stable source of income — as a cabaret singer. They might be a journalist or a programmer in their twenties for fun before economics forces them to settle down and get a real, stable job: starting a rock band.
Controversial stance probably, but this very much sounds like a world I'd love to live in.
They just want the next wave of Ghibli meme clicks to go to them, really.
This will be built on the existing thread+share infra ChatGPT already has, and just allow profiles to cross-post into conversations, with UI and features more geared toward remixing each other's images.
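If that guess is right, the data model is tiny: ChatGPT's existing shared-conversation links, plus profiles that can cross-post a shared thread and remix each other's images. A purely speculative sketch; every name here is hypothetical:

    # Speculative data model for "profiles cross-post shared threads".
    from dataclasses import dataclass, field

    @dataclass
    class SharedThread:
        thread_id: str        # existing ChatGPT share-link id
        author: str
        messages: list[str]   # the conversation being shared

    @dataclass
    class CrossPost:
        profile: str
        thread: SharedThread
        caption: str
        remixes: list["CrossPost"] = field(default_factory=list)  # image remixes as replies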
I actually would love this. I hate having to go to another website to share thoughts I had while using a platform's tools.
I miss the days when products would actually integrate other platforms into their experiences; yes, I was sort of a fan of the FB/Google share buttons and the Twitter side feed (not the tracking bits, though).
I wasn't a fan of LLMs and the whole chat experience a few years ago. I'm a very mild convert now with the latest models and am getting some nominal benefit, so I would love to have some kind of shared chat session to brainstorm, e.g. on a platform better than Figma.
The one integration of AI that I think is actually neat is Teams + AI note-taking. It's still hit or miss a lot of the time, but it at least saves and notes something important 30% of the time.
Collaboration enhancements would be a wonderful outcome in place of AGI.
The answer seems more obvious to me. They don't even care if it's competitive or scales all that much. xAI has a crazy data advantage firehosing Twitter, Llama has FB/IG, and ChatGPT just has, well, the internet.
I'd hope they have some clever scheme to acquire users, but ultimately they want the data.
Feels like a natural next step, honestly. If they already have users generating tons of content via ChatGPT, hosting it natively and adding light social features might just be a way to keep people engaged and coming back. Not sure if it's meant to compete with Twitter/Instagram, or just quietly become another daily habit for users.
Perhaps, but currently OpenAI is stuck sharecropping with existing social networks – producing the content but not deriving the value. It is hard to move grander visions forward when you don't own the land.
As a PoC, I'd say look at the subreddit SubSimulatorGPT2:
www.reddit.com/r/SubSimulatorGPT2
Even better, soon none of us will have to use social media at all; our AI bots will do it for us. Then we will finally find peace.
https://engineeringprompts.substack.com/p/does-chatgpt-use-1...
https://simonwillison.net/2025/Jan/12/generative-ai-the-powe...
Instead, I’m just going to hang out here in this hacker meadow and on FOSS social networks where something like that would never happen!
TPUs absolutely dumpster Nvidia cards, for the same reason that mining bitcoin is done with ASICs instead of cards.
So yeah, just more training, more data, and so on.
If Google wasn't so cloud focused, they could take over the AI Chip market lead from NVIDIA.
Right now OAI's synthetic data pipeline is very heavily weighted to 1-on-1 conversations.
But models are being deployed into multi-user spaces that OAI doesn't have access to.
If you look at where their products are headed right now, this is very much the right move.
Expect it to be TikTok style media formats.
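To illustrate the gap (my framing, not anything OAI has published): nearly all chat training records look like the first shape below, while a social network would generate records like the second:

    # Illustrative data shapes only -- field names are hypothetical.
    one_on_one = [
        {"role": "user", "content": "explain tariffs"},
        {"role": "assistant", "content": "A tariff is a tax on imports..."},
    ]

    multi_user = [
        {"speaker": "alice", "content": "hot take: tariffs are good actually"},
        {"speaker": "bob", "content": "@alice source?"},
        {"speaker": "assistant", "content": "Economists mostly find that..."},
        {"speaker": "carol", "content": "lol the bot got ratioed"},
    ]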
https://tidings.potato.horse/about
You don't need some mythical AI to be a great company. You need great products, which OpenAI has, and they keep improving them.
Now they've hamstrung themselves into this AGI nonsense to try and entice investors further, I guess.
OpenAI doesn't have enough money to even run ChatGPT in perpetuity, so building internal moonshots is an irresponsible waste of investor funds.
https://x.com/Pee159604/status/1909445730697462080
Like all those start-ups that are on the 'mission' to save the world with an app. Not sure if it is PR for users or VCs.
“It feels good.”
“I can quit whenever I want.”
“I was on it the whole night instead of sleeping. I felt awful in the morning.”
“I can’t stop. All my friends are on it and I don’t want to be alone.”
There is no concept of "friends" on a forum like HN, since people purely gather to discuss topics of interest here.
Which is why we'll need to acquire the drug gland technology before AGI - no mind can sell me anything if I can feel content on demand.
"Amused to Death"
Great title and an even better album. https://en.wikipedia.org/wiki/Amused_to_Death
E.g. old days of Yahoo (portal)
1. Look, "Studio Ghibli" went viral; let's capitalize.
2. Switching costs for LLMs are low. If we can't be the best, let's find other ways to lock our users in and make our product super sticky.
https://sora.com/explore?type=videos