The study is based on having LLMs decide to amplify one of the top ten posts on their timeline or share a news headline. LLMs aren’t people, and the authors have not convinced me that they will behave like people in this context.
The behavioral options are restricted to posting news headlines, reposting news headlines, or being passive. There’s no option to create original content, and no interventions centered on discouraging reposting. Facebook has experimented[0] with limits to reposting and found such limits discouraged the spread of divisive content and misinformation.
I mostly use social media to share pictures of birds[1]. This contributes to some of the problems the source article[2] discusses. It causes fragmentation; people who don’t like bird photos won’t follow me. It leads to disparity of influence; I think I have more followers than the average Mastodon account. I sometimes even amplify conflict[3].
> LLMs aren’t people, and the authors have not convinced me that they will behave like people in this context.
This was my initial reaction as well, before reading the interview in full. They admit that there are problems with the approach, but they seem to have designed the simulation in a very thoughtful way. There really doesn't seem to be a better approach, apart from enlisting vast numbers of people instead of using LLMs/agent systems. That has its own problems as well of course, even leaving cost and difficulty aside.
> There’s no option to create original content...
While this is true, I'd say the vast majority of users don't create original content either, but still end up shaping the social media environment through the actions that they did model. Again, it's not perfect but I'm more convinced that it might be useful after reading the interview.
I'm not sure the experiment can be done other than to try interventions on real users of a public social media service as Facebook did in the article I linked. Of course people running those services usually don't have the incentives to test harm reduction strategies and certainly don't want to publicize the results.
> the vast majority of users don't create original content
That's true now at least most of the time, but I think it's as much because of design and algorithmic decisions by the platforms to emphasize other types of content. Early Facebook in particular was mostly original content shared between people who knew each other. The biggest problem with that was it wasn't very profitable.
> > There’s no option to create original content...
> While this is true, I'd say the vast majority of users don't create original content either, but still end up shaping the social media environment through the actions that they did model. Again, it's not perfect but I'm more convinced that it might be useful after reading the interview.
Ok, but this is by design. Other forms of social media, places like Mastodon, have a far, far higher rate of people creating original content.
The fundamental problem with social media (and many other things) is humans, specifically our biological makeup and (lack of) overriding mechanisms. One could argue that pretty much everything we call 'civilised behavior' is an instance of applying a cultural override for a biological drive. Without it, we are very close to shit-flinging murderous apes.
For so many of our problems, what goes wrong is that we fail to stop our biological drive from taking the wheel, to the point where we consciously observe ourselves doing things we rationally and culturally know we should not be doing.
Now the production side of media/content/goods evolves very fast and does not have a similarly strong legacy biological drive holding it back, so it is very, very good (and ever improving) at exploiting the sitting duck that is our biological makeup (food engineering, game engineering etc. are very similar to social media engineering in this regard).
The only reliable defense against that is training ourselves to not give in to our biological drives when they are counterproductive. For some that might be 'disconnect completely' (i.e. take away the temptations altogether), but having a healthy approach to encountering the temptations is far more robust. I am of the opinion that labeling the social media purveyors and producers in general as evil abusers is not necessarily inaccurate, but counterproductive in that it tends to absolve individuals of their responsibility in the matter. Imagine telling a heroin addict: "you can't help it, it's those evil dealers that are keeping you hooked to the heroin".
Social media as a concept can definitely be fixed. Just stop doing algorithms, period.
Strip the platforms of any and all initiative. Make them dumb pipes. Stop pretending that people want to use social media for entertainment and news and celebrities. Stop trying to turn it into interactive TV. Stop forcing content from outside of my network upon me. Make the chronological feed the only option.
Social media is meant to be a place where you get updates about the lives of people you follow. You would visit several times a day, read all new updates, maybe post your own, and that's it. The rest of the time, you would do something else.
> Stop pretending that people want to use social media for entertainment and news and celebrities
People actually want media, social and otherwise, for exactly that.
> Stop forcing content from outside of my network upon me.
There are social media apps and platforms that don't do that. They are persistently less popular. People, by and large, do want passive discovery from outside of their network, just like they do, in aggregate, want entertainment and news and celebrities.
> Make the chronological feed the only option.
Chronological by what? Original post? Most recent edit? Most recent response? Most recent reaction?
> There are social media apps and platforms that don't do that. They are persistently less popular.
I think that's a bit murky. Facebook became popular when it was like that. They changed how it worked after it was already popular, seeking to make more money.
Facebook could provide you options to only see friend's content. People have certainly asked for it. They absolutely refuse to.
People want addictions. The answer is to regulate addictive algorithms, not to give them more things to be addicted to.
But addictions are wonderfully useful politically, so that's unlikely to happen.
The point is simple - an algorithm is a form of meta-content. It's curated and designed, not neutral. And so are its commercial, psychological, and political effects.
Currently SM companies have been allowed to use algorithms with little or no oversight. The emphasis has been on regulating privacy, not influence.
In the same way the media need to have a Fairness Doctrine restored to restore sanity and quality to journalism, algorithm providers need to be able to demonstrate an equivalent for their platforms.
This is very much against the spirit of the times, but that spirit is algorithmically created - which just proves the point.
If you're thinking "Yes, but government..." - how do you know that's a spontaneous original thought, and not something you've been deliberately conditioned to believe?
The trouble is people want to smoke and drink, too.
The human mind is more easily exploitable than any computer.
Of course people want to see divisive content. Divisive content is engineered to trigger large emotions, which humans naturally respond to. That's why heroin exists - it feels good. Really, really good. And you don't need only good feelings; bad feelings can shape behavior too.
The problem comes in when we start shifting from making products that simply cause these behavior changes to instead making products engineered to elicit the biggest response possible.
It's very easy to do, too. Want to sell more cigarettes? Add butane to the wrapping so it burns better. Concentrate the nicotine. Do all the obvious stuff. The worse you make it, the better it will sell.
Same thing with, say, Facebook. Want people to use Facebook more? More doomerism, more lies, more misinformation, more Jesus Christ, more click bait. People want the addictive stuff, that's why we call it addictive.
So yes, social media apps without these tools are unpopular and generally fail. In the same way a nicotine-free cigarette would fail.
> Strip the platforms of any and all initiative. Make them dumb pipes. Stop pretending that people want to use social media for entertainment and news and celebrities. Stop trying to turn it into interactive TV. Stop forcing content from outside of my network upon me. Make the chronological feed the only option.
Congrats, you now have platforms no one will care about, as attention span gets sniped by competitors who want to maximize engagement and don't care about your arbitrary rules (ie, literally what happened 15 years ago).
More addictive products will always win, because they're more addictive.
That doesn't mean we need to one-up each other and repeatedly hyper optimize until we're making the digital equivalent of fentanyl-laced cigarettes.
Also, we can actually ban stuff. I know it's not a physical product, but that doesn't mean we just have to fucking put up with whatever and deal with it.
We'd never let a company sell fentanyl-laced cigarettes. But when it comes to tech, we all just throw up our hands and say "well we tried nothing and we're all out of ideas!"
I like to think that humanity is starting to build up some amount of immunity and aversion towards algorithmic feeds. Non-tech friends and family in my life (Norway and the Netherlands) express a rapidly growing discomfort being on Google and Meta's platforms, and some have gone cold turkey, others are actively formulating an exit strategy.
I dropped out of these platforms some years ago and have been very happy having the Fediverse (and HN!) as my only social media. It is just the right amount of engagement and impulse for me. I do not check my feeds compulsively, but occasionally – and the people I follow is a diverse bunch, giving me food for thought and keeping me up to date with topics and software projects.
It is still a niche place to hang out, but I'm OK with that. Now and then, friends get curious enough to join and check it out.
I would care, and I imagine there are others who would too. I don’t use social media anymore (at all!) because of this. If I could have the chronological feed restored and no intrusion of other content I’d redownload immediately. There must be a market for this.
You know the "retweet" feature on Twitter didn't originally exist? Before the feature was implemented, people would just write "RT" followed by the author's username, then paste in the text of the tweet they wanted to retweet.
There's nothing wrong with reposts that are made knowingly by people you follow. My issue is with the current dominant social media platforms all focusing on forcing people to see content from outside of their network that they would've otherwise never seen, because neither they nor the people they follow would follow anything like that.
> Nothing is preventing people from doing exactly this.
I try my best to do this but it's futile.
See, even if you don't use the algorithm, the algorithm still uses you. So, for example, you can't just discuss something on Twitter with your followers. Sometimes it would decide to show your tweet to a bunch of random people with the radical version of the views opposite to yours, and you will be forced to use the block button way too damn much. You can't opt out of having your tweets recommended to strangers.
Even when that doesn't happen, many of your followers would still miss your posts if you don't appease the algorithm because they would still use the algorithmic feed — whether knowingly or because of all the dark patterns around that setting (it's very hidden, never sticks, or both).
So no, the very existence of the algorithmic feed is the problem, it ruins the experience for everyone regardless of whether they use it or not.
That's the old vBulletin forum approach. Recommendation algorithms solve the problem of scale when communities get too large. Winding back the clock doesn't work; things evolve to the next stage, especially if people already know what it looks like.
I'm not talking about literal television, but more about something where you don't really get to decide what you see, you go there to get entertained with whatever.
It will fix very little. The "problems" of social media are rooted in selfish human behavior; it's like a giant high school. You can't "fix" that because it's ingrained in humans.
Social media as a vessel for diverse discussion is a tall order. It’s too public, too tied to context, and ultimately a no-win game. No matter how carefully you present yourself, you’ll end up being the “bad guy” to someone. The moment a discussion touches even lightly on controversy, healthy dialogue becomes nearly impossible.
Think of it this way: you’re hosting a party, and an uninvited stranger kicks the door open, then starts criticizing how you make your bed. That’s about what it feels like to try to “fix” social media.
I really liked the Circles feature in Google+: you defined groups of friends, and you could make your posts visible to particular groups.
They were not like group chats or subreddits, the circles were just for you, it was just an easy way to determine which of your followers would see one of your posts.
This kind of interaction was common in early Facebook and Twitter too, where only your friends or followers saw what you posted, often just whitelisted ones. It was not all public all the time. Google+ just made that a bit more granular.
I suppose that these dynamics have been overtaken by messaging apps, but it's not really the same thing. It's too direct, too real-time and all messages mixed-in, I like the more async and distributed nature of posts with comments.
Granted, if you really want a diverse discussion and to talk with everyone in the world at once, indeed that's a different problem and probably fundamentally impossible to make non-toxic, people are people.
Whitelisting solves the problem for me. I curate every tweet I see with a browser extension. Strangers can't kick down the door. I only see content from my direct follows. It dramatically reduces the stress. Maybe a little like horse blinkers.
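The filtering step itself is simple in principle. Here's a toy sketch in Python (the item structure, handle names, and follow list are illustrative assumptions, not any real extension's API):

```python
# Toy sketch of the whitelist idea: given a fetched timeline, keep only
# items authored by accounts you explicitly follow. The item structure
# and handles here are made-up for illustration, not a real platform API.

FOLLOWED = {"@alice", "@bob"}  # hypothetical follow list

def whitelist_feed(timeline, followed=FOLLOWED):
    """Drop every timeline item whose author isn't directly followed."""
    return [item for item in timeline if item["author"] in followed]
```

In practice a browser extension does the same thing by hiding DOM nodes rather than filtering a list, but the blinkers logic is the same.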
Do you mind sharing this extension? I would prefer if it also shows the retweets of people you follow as that is an endorsement no matter what people say.
> Whitelisting solves the problem for me. I curate every tweet I see with a browser extension. Strangers can't kick down the door. I only see content from my direct follows. It dramatically reduces the stress. Maybe a little like horse blinkers.
So you are fighting against the platform that you're using. It reminds me of people constantly fighting with their own computer (Windows) to remove ads and crap. In both cases viable alternatives exist that don't require this huge effort.
> Social media as a vessel for diverse discussion is a tall order. It’s too public, too tied to context, and ultimately a no-win game. No matter how carefully you present yourself, you’ll end up being the “bad guy” to someone. The moment a discussion touches even lightly on controversy, healthy dialogue becomes nearly impossible.
Worth reading Jaron Lanier's book Ten Arguments for Deleting Your Social Media Accounts Right Now.
I still think old school linear forums are the best format for online discussion. They’re not perfect by any means, but I think they still beat all the alternatives I’ve tried.
The old school forums also centered around a single topic or interest, which I think helped keep things focused and more civil. Part of the problem with social media is that it wants to be everything for everyone.
Linear is awful. Discussions are always so bad with them. The only advantage is that following the newest message is easier, which still doesn't prevent people from ignoring them if the thread becomes too long.
The main reason you might think that way is probably less the format and more the moderation and the smaller number of people in those forums. I mean, take this forum here: it's far better than Twitter or any other social media slop. Smaller subreddits, especially with good moderation, are also far better than big subreddits.
The internet has become a primary battlefield for making money, and we can't go back to the days when it was just a non-commercial hobby that people enjoyed. To make money online, it's crucial to spread content as widely as possible, and the most effective methods for this are clickbait and ragebait. That's why the enshittification of the internet was inevitable.
"Diverse discussion" is just something I don't want. Of course I've made up my mind about all kinds of things and I don't really need to see opposing points of view as though they are novel thoughts that I've never considered before. Sure, tell me again why your religion or your conspiracy theory proves that the scientific consensus is a hoax. Maybe you'll convince me this time?
I don't mind Mastodon, but I'm pretty selective in who I follow, and diversity of opinions isn't one of my criteria.
> I don't really need to see opposing points of view as though they are novel thoughts that I've never considered before
I mean, that's fine, if you think that you can consider all conceivable angles thoroughly, by yourself. I for one welcome opposing views, but I suppose if my idea of that meant "religion or conspiracy theories" I'd probably be avoiding it too.
A lot of talk goes into how Facebook or other social media use algorithms to encourage engagement, that often includes outrage type content, fake news, rabbit holes and so on.
But here's the thing ... people CHOOSE to engage with that, and users even produce that content for social media platforms for free.
It's hard to escape that part.
I remember trying Bluesky and while I liked it better than Twitter, for me it was disappointing that it was just Twitter, but different. Outlandish short posts, same lame jokes / pithy appeals to our emotions, and so on. People on there want to behave the same way they wanted to on Twitter.
> But here's the thing ... people CHOOSE to engage
Kinda, but they also don't really realise that they have much more control over the feed than they expect (in certain areas)
For the Reels/TikTok/For You-style Instagram feeds, it shows you subjects that you engage with. It will A/B test other subjects that similar people engage with. That's all it's doing: continual A/B testing to see if you like whatever flavour of bullshit is popular.
Most people don't realise that you can banish posts from your feeds by doing a long press on the "I don't like this" equivalent. It takes a few tries for the machine to work out whether it's an account, a group of accounts, or a theme that you don't like, and it'll stop showing it to you. (Threads, for example, took a very long time to stop showing me fucking sports.)
Why don't more people know this? Because it hurts short-term metrics for whatever bollocks the devs are working on, so it's not that well advertised. Just think how unsuccessful the experiments in the Facebook app would have been if you were able to block the "other posts we think you might like" experiments. How sad Zuckerberg would be that his assertion was actually bollocks.
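The engage/suppress loop described above can be modeled as a toy weighted sampler: topics you engage with gain weight, topics you long-press away lose it. This is a deliberately crude sketch of the feedback dynamic, not how any real ranker works:

```python
import random

# Crude model of the loop described above: engagement boosts a topic's
# weight, "I don't like this" cuts it. Real feed rankers are vastly more
# complex; this only illustrates the feedback dynamic.

class ToyFeed:
    def __init__(self, topics):
        self.weights = {t: 1.0 for t in topics}

    def next_topic(self, rng=random):
        # Sample a topic proportionally to its current weight.
        topics = list(self.weights)
        return rng.choices(topics, weights=[self.weights[t] for t in topics])[0]

    def engage(self, topic):
        self.weights[topic] *= 1.5   # watching/liking boosts the topic

    def dislike(self, topic):
        self.weights[topic] *= 0.25  # long-press "I don't like this"
```

After a handful of dislikes a topic's weight is effectively zero, which matches the "takes a few times" behaviour described above.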
There's definitely a mass of people who can't/won't/don't get past passive/least-effort relationships with things on screens. These would be the type that in the TV days would simply leave the TV on a specific channel all day and just watch whatever was on, and probably haven't changed their car radio dial from the station they set it to when they bought the car. In modern times they probably have their cable TV they still pay for on a 24 hour news channel and simply have that going all day.
To be fair, in times far past, you really didn't have much choice in TV or radio channels, and I suspect it's this demographic that tend to just scroll down Facebook and take what it gives without much thought other than pressing Like on stuff.
Transparency would prove or disprove this. Release the algorithm and let us decide for ourselves. In my experience, Instagram made an algorithm change 3-4 years ago. It used to be that my feed was exactly my interests. Then overnight my feed changed. It became a mix of 1. interracial relationship success stories 2. scantily clad women clickbait, 3. east asian "craft project" clickbait, and just general clickbait. It felt as if "here's what other people like you are clicking on" became part of the algorithm.
Maybe the TikTok algorithm is better, but the "I don't like this" action on Meta properties just blatantly does not work. I still get the same type of clickbait content no matter how many times I try to get rid of it. Maybe watching other types of Reels would do it, but no thanks.
Personally I really enjoy Mastodon and Bluesky but I am very deliberate at avoiding negative people, I do not follow and often mute or block “diss abled” people who complain about everything or people who think I make their life awful because I am cisgender or who post 10 articles an hour about political outrage. The discover page on Bluesky is algorithmic and respects the “less like this” button and last time I looked has 75% less outrage than the following page. (A dislike button that works is a human right in social media!)
Once I get my database library reworked, a project I have in the queue is a classifier which filters out negative people so I can speed follow and not add a bunch of negativity to my feed, this way I get to enjoy real gems like
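A first cut at that negativity filter could be as simple as keyword scoring before graduating to a trained model. The marker list and threshold below are made-up assumptions, purely for illustration:

```python
# Naive first cut at the negativity filter described above: score an
# account by the fraction of recent posts containing outrage-style
# keywords. The marker list and threshold are illustrative assumptions;
# a real version would use a trained sentiment classifier.

NEGATIVE_MARKERS = {"outrage", "disgusting", "ashamed", "boycott", "furious"}

def negativity_score(posts):
    """Fraction of posts containing at least one negative marker."""
    if not posts:
        return 0.0
    hits = sum(
        1 for post in posts
        if any(marker in post.lower() for marker in NEGATIVE_MARKERS)
    )
    return hits / len(posts)

def keep_account(posts, threshold=0.3):
    """True if the account's recent posts are mostly non-negative."""
    return negativity_score(posts) < threshold
```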
FWIW, I've been consistently posting quality stuff on Bluesky for the last year, and despite having a few hundred followers, I get ZERO engagement.
People in the Bluesky subreddit tell me it's not a "post and ghost" platform in that you have to constantly interact with people if you want to earn engagement, but that's too time consuming.
In other words, the discovery algorithm(s) on BlueSky sucks.
What gets me about some platforms is all the text-in-images and video with senseless motion. I've been dipping my toes into just about any social network where I could possibly promote my photography, and the worst of them all is Instagram, where all the senseless motion drives me crazy.
Brains are wired that way. Gossip and rage bait are not something that people actively decide on; they're subconscious. It's weird to say this is a problem of individuals: propaganda is effective not because people are choosing to believe it.
Right. When we're talking about the scale of humanity itself, we've moved far past individual actions.
At the scale we're operating, if only 1% is susceptible to these algorithms, that's enough to translate to noticeable social issues and second-order effects.
Current social media have basically found the "bliss point" of online engagement to generate revenue and keep the eyes attached. These companies found a way to keep people hooked, and strong emotions seem to be a major tool.
It really isn't a choice. It is very accessible. Many friends are on social networks and you slowly get sucked into shorts. Then it becomes an addiction as your brain craves the dopamine hits.
In the same way a smoker "chooses" to engage with cigarettes. Let's not underestimate the fact that core human programming is being exploited to enable such behavior. Similar to telling a smoker to "just put the cigarette down", we can't just suddenly tell people on social media to "stop being angry".
> people on [BlueSky] want to behave the same way they wanted to on Twitter.
Yes. Changing established habits is even harder to address. You can lead a horse to water, but you can't make it drink (I'm sure anyone who has ever had to deal with a disengaged captive audience feels this in their soul). While it's become many people's primary "news source", aka the bread, most people came there for the circus.
I don't really have an answer here. Society needs to understand social media addiction the same way they understand sugar addictions; have it slammed in there that it's not healthy and to use sparingly. That's not something you can fix with laws and regulation. Not something you fix in even a decade.
Technically correct, but choice is here very simplified. The system is unable to understand WHY people engage with something, and in which way. That's poisoning the pool, and enforcing certain content and types of presentation.
> while I liked it better than Twitter, for me it was disappointing that it was just Twitter, but different
I feel exactly the same way.
I think there needs to be a kind of paradigm shift into something different, probably something that people in general don't have a good schema for right now.
Probably something decentralized or federated is necessary in my opinion, something in between email and twitter or reddit? But there's always these chicken and egg issues with adoption, who are early adopters, how that affects adoption, genuine UX-type issues etc.
> Probably something decentralized or federated is necessary in my opinion, something in between email and twitter or reddit?
So, Usenet? The medium is the message and all that, sure, but unless you change where the message originates you are ultimately going to still end up in the same place.
Sounds like a return to old-school, long-term forums. They still exist, but there's a reason Reddit and Twitter took over the "forum space". They took the core ideas and injected them with "engagement": in this case, the voting system of Reddit and the follower system of Twitter. Gamifying the act of interacting with people had effects beyond anyone's comprehension in 2007.
- Widespread adoption before understanding risks - embraced globally before fully grasping the mental health, social, and political consequences, especially for young people.
- Delayed but significant harm - can lead to gradual impacts like reduced attention span, increased anxiety, depression, loneliness, and polarization.
- Corporate incentives misaligned with public health - media companies design platforms for maximum engagement, leveraging psychological triggers while downplaying or disputing the extent of harm.
- Smoking feels good but doesn't provide any useful function.
- Some social media use feels good and doesn't provide any useful function, but social media is extremely useful to cheaply keep in touch with friends and family and extremely useful for discovering and coordinating events.
Fortunately the "keep in touch" part can be done with apps that don't have so much of the "social media" part, like Telegram, Discord, and even Facebook Messenger versus the main app.
Smoking, too, also has useful functions related to socialization. It was a product and drug first, but a social tool second. Half the reason people smoked is because other people did. They wanted to go outside, have a chitchat, and have a smoke.
Now that smoking is gone in the US, a lot of that is dead, and for the worse. A lot of places now don't even have balconies or other outdoor sections. Every piece of area is dedicated towards productive activity, whether that be shopping, working, or eating. There's very little "just sit around" places in modern businesses and workplaces. The joy of random encounter has been significantly decreased. Everything is now in passing.
I think most of the social media power users don't connect with friends and family at all through the platforms. Young Gen Zers just scroll Tiktok (or whatever clone they prefer) and share the ones they like through snapchat/discord/telegram/messenger/sms/whatsapp. Some will post stuff for their friends to see through "close friends" or whatever, but it's much less personal than it once was with Facebook groups and whatnot
This analogy undersells the negative impact of social media. Smoking wasn't a propaganda machine at the hands of a few faceless corpos with no clear affiliation, for example, nor did it form a global spynet.
I'd like to see more software that amplifies local social interactions.
There are apps like Meetup, but a lot of people just find it too awkward. Introverts especially do not want to meet just for the sake of meeting people, so they fall back on social media.
Maybe this situation is fundamentally not helped by software. All of my best friendships organically formed in real-world settings like school, work, neighborhood, etc.
This isn't a technology problem. Technology can help accessibility, but fundamentally this is an on-the-ground, social coordination problem.
Functioning, welcoming, and well-run communities are the only thing that solves this. Unfortunately, technology often makes this worse, because it creates such a convenient alternative and also creates a paradox of choice. I.e. people think "when there's 1000 meetups to check out, and this one isn't perfect, I'll just move onto the next one" when actually it's the act of commitment that makes a community good.
Indeed, you're describing the lack of a 3rd place. These days, maybe even the lack of a 2nd place as you graduate school and work is now fully remote. Without that societal push towards being in a public spot, many people will simply withdraw to themselves.
A third place would fix this, especially for men who need "things". You go to a bar for "thing" and if you meet some others to yell at sports with, bonus. We have less "things" for gen Z, and those things happen rather infrequently in my experience. I'm not sure if a monthly Meetup is quite enough to form strong bonds.
Why can't it be fixed? Just remove algorithms and show only subscribed content in chronological order. That's how most of the early platforms worked and it was fine.
Probably because there's no monetary incentive for that, so "can't". It would mean the big social media companies collapsing, because their entire raison d'etre at this point is mass-manipulation.
That will fix barely anything. Early platforms were also bad, but they were new, filled with people still discovering things and trying out this new world, people who had more hope, a more positive view on the world, a better upbringing and experience from the offline-world. This is gone now and won't come back. The global village is settled, and it's burning.
I think it really is that simple. Have a discovery channel, recommendations side bar, just stop trying to add "shareholder value" through flawed machine learning attempts. Maintain a useful piece of software, is it too much to ask an earnings-driven corp? Probably.
Why do you treat it like absolute voodoo? It's a website that shows you videos, with the same algorithm they've been using for 20 years. It's only now a problem because basically iOS came out and every single clueless non-technical person got on the internet and started discovering decade-old memes for the first time.
I work in survey research and I'm rather appalled at how many people would rather survey a sample of AIs than a sample of people and claim they can come to some valid conclusion as a result.
There are many ways AIs differ from real people, and any conclusions you can draw from them are limited at best -- we've had enough bad experiments done with real people.
Appalling. The entire question of "fixing social media", for any definition of "fixing", involves not just the initial reaction to some change but the second-and-greater-order effects. LLMs are point-in-time models and intrinsically can not be used for even guessing at second-order effects of a policy over time. This shouldn't have gotten past the proposal phase.
The worst problem with people these days seems to be that they don’t pick up the phone. Probability-based polls are still pretty good about most things unless they involve Donald Trump; it seems some Trump supporters either don’t pick up the phone or lie to pollsters. Some polls correct for this with aggressive weighting, but how scientific that really is is up in the air.
>I work in survey research and I'm rather appalled at how many people would rather survey a sample of AIs than a sample of people and claim they can come to some valid conclusion as a result.
It doesn’t surprise me if they found that the emergent behaviors didn’t change given their method. Modifying the simulation to make the agents behave differently would mean your rules have caused the model to “jump tracks” into simulating a different sort of person who would generate different outputs. It’s not quite analogous to the same Bob who likes fishing responding to different stimuli. Sort of like how Elon told Grok to be “unfathomably based” and to “stop caring about being PC” and it suddenly turned into a neo-Nazi chan-troll. Changing the inputs for an LLM isn’t taking a core identity and tweaking it; it’s completely altering the relationships between all the tokens it’s working with.
I would assume there is so much in the corpus based on behavior optimized for the actual existing social media we have that the behavior of the bots is not going to change. The bot isn’t responding to incentives like a person would; it’s mimicking the behavior it’s been trained on. And if there isn’t enough training data of behavior under the different inputs you’re trying to test, you’re not actually applying the “treatment” you think you are.
Genuine question: are you scared for your job?
I see this tendency to use "synthetic personas" growing and frankly, having to explain why this sucks is insulting in itself. Decision makers are just not interested in having this kind of thought argument.
Yes and mostly no. No, because I work in games and I've seen enough people thinking that a "good game" just needs pretty graphics and a facsimile of "fun" to know that AI can't ever simulate this. Most humans can't even seem to do it consistently, at any organizational level.
But i have a footnote of "yes" because as you said, decision makers are just not interested in having this discussion about "focus on making fun games". So it will unfortunately affect my job in the short and even medium terms. Because so much of big money in games these days is in fact not focused on making a game, but on trying to either generate a gambling simulator, an engagement trap, or (you guessed it) AI hype. Both to try and claim you can just poof up assets, and to try and replace labor.
Knowing this, I do have long term plans to break out into my own indie route.
Not really. Sales is doing better than it ever has since I’ve been here. For one thing, AI folks want our data. Despite challenges in the industry, public opinion is more relevant than ever, and the areas where we are really unsurpassed are (1) historical data and (2) the most usable web site, the latter of which I am a part of.
The behavioral options are restricted to posting news headlines, reposting news headlines, or being passive. There’s no option to create original content, and no interventions centered on discouraging reposting. Facebook has experimented[0] with limits to reposting and found such limits discouraged the spread of divisive content and misinformation.
I mostly use social media to share pictures of birds[1]. This contributes to some of the problems the source article[2] discusses. It causes fragmentation; people who don’t like bird photos won’t follow me. It leads to disparity of influence; I think I have more followers than the average Mastodon account. I sometimes even amplify conflict[3].
[0] https://www.socialmediatoday.com/news/internal-research-from...
[1] https://social.goodanser.com/@zaktakespictures/
[2] https://arxiv.org/html/2508.03385v1#S3
[3] https://social.goodanser.com/@zaktakespictures/1139481946021...
This was my initial reaction as well, before reading the interview in full. They admit that there are problems with the approach, but they seem to have designed the simulation in a very thoughtful way. There really doesn't seem to be a better approach, apart from enlisting vast numbers of people instead of using LLMs/agent systems. That has its own problems as well of course, even leaving cost and difficulty aside.
> There’s no option to create original content...
While this is true, I'd say the vast majority of users don't create original content either, but still end up shaping the social media environment through the kinds of actions the simulation does model. Again, it's not perfect, but I'm more convinced that it might be useful after reading the interview.
> the vast majority of users don't create original content
That's true now at least most of the time, but I think it's as much because of design and algorithmic decisions by the platforms to emphasize other types of content. Early Facebook in particular was mostly original content shared between people who knew each other. The biggest problem with that was it wasn't very profitable.
> While this is true, I'd say the vast majority of users don't create original content either, but still end up shaping the social media environment through the actions that they did model. Again, it's not perfect but I'm more convinced that it might be useful after reading the interview.
Ok, but, this is by design. Other forms of social media, places like Mastodon etc have a far, far higher rate of people creating original content.
The fundamental problem with social media (and many other things) is humans, specifically our biological makeup and (lack of) overriding mechanisms. One could argue that pretty much everything we call 'civilised behavior' is an instance of applying a cultural override for a biological drive. Without it, we are very close to shit-flinging murderous apes.
For so many of our problems what goes wrong is that we fail to stop our biological drive from taking the wheel to the point where we consciously observe ourselves doing things we rationally / culturally know we should not be doing.
Now the production side of media/content/goods evolves very fast and does not have a similarly strong legacy biological drive holding it back, so it is very, very good (and ever improving) at exploiting the sitting duck that is our biological makeup (food engineering, game engineering etc. are very similar to social media engineering in this regard).
The only reliable defense against that is training ourselves to not give in to our biological drives when they are counterproductive. For some that might be 'disconnect completely' (i.e. take away the temptations altogether), but having a healthy approach to encountering the temptations is far more robust. I am of the opinion that labeling the social media purveyors and producers in general as evil abusers is not necessarily inaccurate, but counterproductive in that it tends to absolve individuals of their responsibility in the matter. Imagine telling a heroin addict: "you can't help it, it's those evil dealers that are keeping you hooked to the heroin".
Strip the platforms of any and all initiative. Make them dumb pipes. Stop pretending that people want to use social media for entertainment and news and celebrities. Stop trying to turn it into interactive TV. Stop forcing content from outside of my network upon me. Make the chronological feed the only option.
Social media is meant to be a place where you get updates about the lives of people you follow. You would visit several times a day, read all new updates, maybe post your own, and that's it. The rest of the time, you would do something else.
People actually want media, social and otherwise, for exactly that.
> Stop forcing content from outside of my network upon me.
There are social media apps and platforms that don't do that. They are persistently less popular. People, by and large, do want passive discovery from outside of their network, just like they do, in aggregate, want entertainment and news and celebrities.
> Make the chronological feed the only option.
Chronological by what? Original post? Most recent edit? Most recent response? Most recent reaction?
I think that's a bit murky. Facebook became popular when it was like that. They changed how it worked after it was already popular, seeking to make more money.
Facebook could provide you options to only see friend's content. People have certainly asked for it. They absolutely refuse to.
But addictions are wonderfully useful politically, so that's unlikely to happen.
The point is simple - an algorithm is a form of meta-content. It's curated and designed, not neutral. And so are its commercial, psychological, and political effects.
Currently SM companies have been allowed to use algorithms with little or no oversight. The emphasis has been on regulating privacy, not influence.
In the same way the media need to have a Fairness Doctrine restored to restore sanity and quality to journalism, algorithm providers need to be able to demonstrate an equivalent for their platforms.
This is very much against the spirit of the times, but that spirit is algorithmically created - which just proves the point.
If you're thinking "Yes, but government..." - how do you know that's a spontaneous original thought, and not something you've been deliberately conditioned to believe?
The human mind is more easily exploitable than any computer.
Of course people want to see divisive content. Divisive content is engineered to trigger large emotions, which humans naturally respond to. That's why heroin exists: it feels good. Really, really good. Of course it isn't only good feelings; bad feelings can shape behavior too.
The problem comes in when we start shifting from making products that simply cause these behavior changes to instead making products engineered to elicit the biggest response possible.
Its very easy to do, too. Want to sell more cigarettes? Add butane to the wrapping so it burns better. Concentrate the nicotine. Do all the obvious stuff. The more bad you make it, the better it will sell.
Same thing with, say, Facebook. Want people to use Facebook more? More doomerism, more lies, more misinformation, more Jesus Christ, more click bait. People want the addictive stuff, that's why we call it addictive.
So yes, social media apps without these tools are unpopular and generally fail. In the same way a nicotine-free cigarette would fail.
Congrats, you now have platforms no one will care about, as attention span gets sniped by competitors who want to maximize engagement and don't care about your arbitrary rules (ie, literally what happened 15 years ago).
That doesn't mean we need to one-up each other and repeatedly hyper optimize until we're making the digital equivalent of fentanyl-laced cigarettes.
Also, we can actually ban stuff. I know it's not a physical product, but that doesn't mean we just have to fucking put up with whatever and deal with it.
We'd never let a company sell fentanyl-laced cigarettes. But when it comes to tech, we all just throw up our hands and say "well we tried nothing and we're all out of ideas!"
I dropped out of these platforms some years ago and have been very happy having the Fediverse (and HN!) as my only social media. It is just the right amount of engagement and impulse for me. I do not check my feeds compulsively, but occasionally – and the people I follow is a diverse bunch, giving me food for thought and keeping me up to date with topics and software projects.
It is still a niche place to hang out, but I'm OK with that. Now and then, friends get curious enough to join and check it out.
I would care, and I imagine there are others who would too. I don’t use social media anymore (at all!) because of this. If I could have the chronological feed restored and no intrusion of other content I’d redownload immediately. There must be a market for this.
Every other form of mass media is regulated, for good reason.
Algorithms only make them worse; they aren't the root cause. The flaws are there because of people.
> Stop pretending that people want to use social media for entertainment and news and celebrities.
No, people buy this content because they want it, not because it's there.
> Social media is meant to be a place where you get updates about the lives of people you follow.
Strange claim. Nothing is preventing people from doing exactly this.
I try my best to do this but it's futile.
See, even if you don't use the algorithm, the algorithm still uses you. So, for example, you can't just discuss something on Twitter with your followers. Sometimes it would decide to show your tweet to a bunch of random people with the radical version of the views opposite to yours, and you will be forced to use the block button way too damn much. You can't opt out of having your tweets recommended to strangers.
Even when that doesn't happen, many of your followers would still miss your posts if you don't appease the algorithm because they would still use the algorithmic feed — whether knowingly or because of all the dark patterns around that setting (it's very hidden, never sticks, or both).
So no, the very existence of the algorithmic feed is the problem, it ruins the experience for everyone regardless of whether they use it or not.
Wait, are you talking about social media or sites that play videos?
While we're at it, shall we stop storing data?
Think of it this way: you’re hosting a party, and an uninvited stranger kicks the door open, then starts criticizing how you make your bed. That’s about what it feels like to try to “fix” social media.
They were not like group chats or subreddits, the circles were just for you, it was just an easy way to determine which of your followers would see one of your posts.
This kind of interaction was common in early Facebook and Twitter too, where only your friends or followers saw what you posted, often just whitelisted ones. It was not all public all the time. Google+ just made that a bit more granular.
I suppose that these dynamics have been overtaken by messaging apps, but it's not really the same thing. It's too direct, too real-time and all messages mixed-in, I like the more async and distributed nature of posts with comments.
Granted, if you really want a diverse discussion and to talk with everyone in the world at once, indeed that's a different problem and probably fundamentally impossible to make non-toxic, people are people.
This extension? https://github.com/rxliuli/mass-block-twitter
Worth reading Jaron Lanier's Ten Arguments for Deleting Your Social Media Accounts Right Now book:
https://www.amazon.com/Arguments-Deleting-Social-Media-Accou...
The era of sites with a phpbb forum and an irc channel was really fun for me and I miss it a lot
I made lots of friends that way in the past, close friends, and it's unlike anything I've encountered since then with social media
The main reason you might think that way is less the format and probably more the moderation, and the lower number of people in those forums. I mean, take this forum here: it is far better than Twitter or any other social media slop. Smaller subreddits, especially with good moderation, are also far better than big subreddits.
Where are good discussions between really different viewpoints anywhere?
I don't mind Mastodon, but I'm pretty selective in who I follow, and diversity of opinions isn't one of my criteria.
I mean, that's fine, if you think that you can consider all conceivable angles thoroughly, by yourself. I for one welcome opposing views, but I suppose if my idea of that meant "religion or conspiracy theories" I'd probably be avoiding it too.
But here's the thing ... people CHOOSE to engage with that, and users even produce that content for social media platforms for free.
It's hard to escape that part.
I remember trying Bluesky and while I liked it better than Twitter, for me it was disappointing that it was just Twitter, but different. Outlandish short posts, same lame jokes / pithy appeals to our emotions, and so on. People on there want to behave the same way they wanted to on Twitter.
Kinda, but they also don't really realise that they have much more control over the feed than they expect (in certain areas)
For the reel/tiktok/foryou-instagram feeds, it shows you subjects that you engage with. It will a/b other subjects that similar people engage with. Thats all its doing. continual a/b to see if you like what ever flavour of bullshit is popular.
For the reel/TikTok/for-you Instagram feeds, it shows you subjects that you engage with. It will A/B test other subjects that similar people engage with. That's all it's doing: continual A/B testing to see if you like whatever flavour of bullshit is popular.
Most people don't realise that you can banish posts from your feeds with a long press and the "I don't like this" equivalent. It takes a few tries for the machine to work out whether it's an account, a group of accounts, or a theme that you don't like, and then it'll stop showing it to you. (Threads, for example, took a very long time to stop showing me fucking sports.)

Why don't more people know this? Because it hurts short-term metrics for whatever bollocks the devs are working on, so it's not that well advertised. Just think how unsuccessful the experiments in the Facebook app would have been if you were able to block the "other posts we think you might like" experiments. How sad Zuckerberg would be that his assertion was actually bollocks.
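The feedback loop described above can be sketched as a toy model (this is an illustration only, not any platform's actual ranking code): repeated "I don't like this" signals on posts sharing a topic eventually suppress that topic from the feed.

```python
# Toy sketch of a "not interested" feedback loop: after enough
# dislike signals on a topic, posts on that topic stop appearing.
from collections import defaultdict

class ToyFeed:
    def __init__(self, mute_after=3):
        self.dislikes = defaultdict(int)  # topic -> "not interested" count
        self.mute_after = mute_after      # signals needed before muting a topic

    def not_interested(self, topic):
        self.dislikes[topic] += 1

    def rank(self, posts):
        # posts: list of (topic, engagement_score) pairs.
        # Hide muted topics, then sort the rest by engagement.
        visible = [p for p in posts if self.dislikes[p[0]] < self.mute_after]
        return sorted(visible, key=lambda p: p[1], reverse=True)

feed = ToyFeed()
posts = [("sports", 9.1), ("birds", 4.2), ("sports", 8.7)]
for _ in range(3):
    feed.not_interested("sports")  # long-press "I don't like this" three times
print(feed.rank(posts))  # sports posts are gone: [('birds', 4.2)]
```

The `mute_after` threshold mirrors the commenter's observation that it "takes a few times" before the feed catches on.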
To be fair, in times far past, you really didn't have much choice in TV or radio channels, and I suspect it's this demographic that tend to just scroll down Facebook and take what it gives without much thought other than pressing Like on stuff.
Once I get my database library reworked, a project I have in the queue is a classifier that filters out negative people so I can speed-follow without adding a bunch of negativity to my feed. That way I get to enjoy real gems like
https://mas.to/@skeletor
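A rough sketch of the kind of follow filter described above; the keyword scorer here is a crude placeholder standing in for a real sentiment classifier, and all names are hypothetical:

```python
# Hypothetical follow filter: score an account's recent posts for
# negativity and skip accounts above a threshold. The keyword-based
# scorer is only a stand-in for a trained sentiment model.
NEGATIVE_WORDS = {"hate", "awful", "terrible", "doomed", "outrage"}

def negativity(post: str) -> float:
    # Fraction of words in the post that match the negative-word list.
    words = post.lower().split()
    if not words:
        return 0.0
    return sum(w.strip(".,!?") in NEGATIVE_WORDS for w in words) / len(words)

def should_follow(recent_posts, threshold=0.05):
    # Follow only if the average negativity across recent posts is low.
    if not recent_posts:
        return True
    avg = sum(negativity(p) for p in recent_posts) / len(recent_posts)
    return avg < threshold

print(should_follow(["Lovely heron by the river today!"]))        # True
print(should_follow(["Everything is awful.", "I hate Mondays!"]))  # False
```

In practice the scoring function would be swapped for a proper model; the filtering structure around it stays the same.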
Cross posting that would cure some of the ills of LinkedIn!
FWIW, I've been consistently posting quality stuff on Bluesky for the last year, and despite having a few hundred followers, I get ZERO engagement.
People in the Bluesky subreddit tell me it's not a "post and ghost" platform in that you have to constantly interact with people if you want to earn engagement, but that's too time consuming.
In other words, the discovery algorithm(s) on BlueSky sucks.
Facebook is not my page, it looks nothing like I want... my content is in many ways the least important thing featured.
Brains are wired that way. Gossip and rage bait are not things people actively decide on; it's subconscious. It's weird to say this is a problem of individuals. Propaganda is effective not because people are choosing to believe it.
At the scale we're operating, if only 1% is susceptible to these algorithms, that's enough to translate to noticeable social issues and second-order effects.
And it's not 1%.
It really isn't a choice. It is very accessible. Many friends are on social networks, and you slowly get sucked into shorts. Then it becomes an addiction as your brain craves the dopamine hits.
Similar to what Howard Moskowitz did with food.
In the same way a smoker "chooses" to engage with cigarettes. Let's not underestimate the fact that core human programming is being exploited to enable such behavior. Just as you can't tell a smoker to "just put the cigarette down", we can't suddenly tell people on social media to "stop being angry".
>people on [BlueSky] want to behave the same way they wanted to on Twitter.
Yes. Changing established habits is even harder to address. You can lead a horse to water, but you can't make it drink (I'm sure anyone who has ever had to deal with a disengaged captive audience feels this in their soul). While it's become many people's primary "news source", a.k.a. the bread, most people came there for the circus.
I don't really have an answer here. Society needs to understand social media addiction the same way they understand sugar addictions; have it slammed in there that it's not healthy and to use sparingly. That's not something you can fix with laws and regulation. Not something you fix in even a decade.
Technically correct, but choice is here very simplified. The system is unable to understand WHY people engage with something, and in which way. That's poisoning the pool, and enforcing certain content and types of presentation.
I feel exactly the same way.
I think there needs to be a kind of paradigm shift into something different, probably something that people in general don't have a good schema for right now.
Probably something decentralized or federated is necessary in my opinion, something in between email and twitter or reddit? But there's always these chicken and egg issues with adoption, who are early adopters, how that affects adoption, genuine UX-type issues etc.
So, Usenet? The medium is the message and all that, sure, but unless you change where the message originates you are ultimately going to still end up in the same place.
- Widespread adoption before understanding risks: embraced globally before fully grasping the mental health, social, and political consequences, especially for young people.
- Delayed but significant harm: can lead to gradual impacts like reduced attention span, increased anxiety, depression, loneliness, and polarization.
- Corporate incentives misaligned with public health: media companies design platforms for maximum engagement, leveraging psychological triggers while downplaying or disputing the extent of harm.
- Smoking feels good but doesn't provide any useful function.
- Some social media use feels good and doesn't provide any useful function, but social media is extremely useful to cheaply keep in touch with friends and family and extremely useful for discovering and coordinating events.
Fortunately the "keep in touch" part can be done with apps that don't have so much of the "social media" part, like Telegram, Discord, and even Facebook Messenger versus the main app.
Now that smoking is gone in the US, a lot of that is dead, and for the worse. A lot of places now don't even have balconies or other outdoor sections. Every piece of area is dedicated towards productive activity, whether that be shopping, working, or eating. There's very little "just sit around" places in modern businesses and workplaces. The joy of random encounter has been significantly decreased. Everything is now in passing.
There are apps like Meetup, but a lot of people just find it too awkward. Introverts especially do not want to meet just for the sake of meeting people, so they fallback on social media.
Maybe this situation is fundamentally not helped by software. All of my best friendships organically formed in real-world settings like school, work, neighborhood, etc.
This is, at its core, a third-places issue; we haven't had the capital to restart them post-COVID.
Functioning, welcoming, and well-ran communities are the only thing that solves this. Unfortunately, technology often makes this worse, because it creates such a convenient alternative and also creates a paradox of choice. I.e. people think "when there's 1000 meetups to check out, and this one isn't perfect, I'll just move onto the next one" when actually it's the act of commitment that makes a community good.
A third place would fix this, especially for men who need "things". You go to a bar for "thing" and if you meet some others to yell at sports with, bonus. We have less "things" for gen Z, and those things happen rather infrequently in my experience. I'm not sure if a monthly Meetup is quite enough to form strong bonds.
And what would be the downside of that? :D
This is also how Mastodon works today, and it is fine.
> There are many ways AIs differ from real people and any conclusions you can draw from them are limited at best -- we've had enough bad experiments done with real people
https://en.wikipedia.org/wiki/Stanford_prison_experiment#Int...
It's so weird to live in a time when what you just said needs to be said.
For us laymen, the flaw of using AI trained on people for surveys is the human element. Humans have a unique tendency to be spontaneous, wouldn't you say?
How would a focus group research team approach this when they’re bombarded by AI solutions that want their research funds?
A YC company just launched doing exactly that.
https://news.ycombinator.com/item?id=44755654