yapyap · a year ago
“Excessive moderation is a barrier to open and robust debate, ultimately undermining the diversity of perspectives that make meaningful discourse possible. Suppressing dissenting opinions will lead to an echo chamber effect. Would you like to join me in an upcoming campaign to restore Europe? Deus vult!”

ah social media, some people are truly as dumb as rocks

ranger_danger · a year ago
Not to mention all the people extremely confused over what "CSAM" is, seemingly without the ability to google it.
astrange · a year ago
I think your life is better off if you don't know what that means, so feel free not to look it up.
ASalazarMX · a year ago
Googling it gives the expected child porn definition as the first result. It's not a scarlet letter in your Google profile to google CSAM, there are plenty of legitimate reasons to be familiar with the term.
rustcleaner · a year ago
CSAM: the digital munitions glowies drop on a target they intend to frame and eliminate. Examples include free speech imageboards, blogs hosting unpopular opinions and the communities surrounding them, etc.

I can't run a free speech darknet site because some glowboy will upload CP to it and then netflow dox me to put me in a federal pen if I'm not fast enough on it! The only good fix is "there are no illegal numbers."


paulddraper · a year ago
It's the rightist, inverted version of the "paradox of tolerance".


voat · a year ago
I'm interested to see how Bluesky ends up handling bad actors in the long term. Will they have the resources to keep up? Or will it become polluted like the other large platforms?

Also, if part of their business model will be based on selling algorithmic feeds, won't that mean more spam is actually good for their bottom line, because they'll sell more algorithmic feeds that counter the spam?

paxys · a year ago
The AT Protocol already accounts for this. There will eventually be community-built content labelers and classifiers that you can subscribe to, letting you rank and moderate your own feed however you want.
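Roughly, the client side of that could look like the sketch below (the labeler name, label values, and post shape are all illustrative here, not the actual AT Protocol lexicon):

    # Minimal sketch: hide posts that a labeler you subscribe to has flagged.
    SUBSCRIBED_LABELERS = {"labeler.example.com"}  # hypothetical labeler
    HIDE_LABELS = {"spam", "bot-network"}          # hypothetical label values

    def visible(post: dict) -> bool:
        """Keep a post only if no subscribed labeler flagged it with a hidden label."""
        return not any(
            label["src"] in SUBSCRIBED_LABELERS and label["val"] in HIDE_LABELS
            for label in post.get("labels", [])
        )

    feed = [
        {"text": "hello world", "labels": []},
        {"text": "buy followers now",
         "labels": [{"src": "labeler.example.com", "val": "spam"}]},
    ]
    print([p["text"] for p in feed if visible(p)])  # ['hello world']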
Waterluvian · a year ago
I have a feeling that this is going to create a sizeable problem where accounts end up on popular blacklists for poor reasons and have no recourse.

I’m concerned that in time it might develop into zealous communities of samethink where you have to mind any slightly dissenting opinion or you’ll get blacklisted.

I think what I’m thinking about is essentially that judges cannot be replaced by community opinion. (Not that Twitter’s moderation was any better.)

luckylion · a year ago
I understand the moderators working for the big social networks have a terrible job and often see the worst the internet has to offer.

Who is going to do that job as a volunteer? Or is that expected to be solved by technology? It's hard to imagine volunteers reliably achieving what Google, Facebook, etc. could not.

evbogue · a year ago
If AT were distributed using a signed-hash message protocol combined with a simple replication strategy (perhaps replicating only a friend and that friend's friends) to spread posts out between PDSes, the burden of moderation would fall less heavily on the shoulders of the main PDS.

As always, I refer the conversation to the SSB API documentation[1] for an example of how AT could have been made.

[1] https://scuttlebot.io/
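To make "signed-hash message protocol" concrete, here is a minimal sketch of an SSB-style hash-chained, signed log entry (assuming PyNaCl for Ed25519 signing; the field names are mine, not ssb's or AT's):

    # Each entry embeds the hash of the previous one and is signed by the
    # author's key, so any replica can verify the feed without trusting
    # the host that served it.
    import hashlib
    import json
    from nacl.signing import SigningKey  # pip install pynacl

    def append_message(key, prev_hash, seq, content):
        body = {
            "previous": prev_hash,  # hash of the prior entry, chaining the log
            "sequence": seq,
            "author": key.verify_key.encode().hex(),
            "content": content,
        }
        payload = json.dumps(body, sort_keys=True).encode()
        body["signature"] = key.sign(payload).signature.hex()
        body["hash"] = hashlib.sha256(payload).hexdigest()
        return body

    key = SigningKey.generate()
    m1 = append_message(key, None, 1, {"type": "post", "text": "hello"})
    m2 = append_message(key, m1["hash"], 2, {"type": "post", "text": "world"})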

sojournerc · a year ago
Relevant username. Voat definitely fell victim to bad actors.

Are you a creator/founder?

dyauspitr · a year ago
Voat specifically selected for bad actors.
hipadev23 · a year ago
Due to how easy it is to set up accounts and post on Bluesky, it’s likely that many of the same operatives behind the propaganda and bot armies on Twitter are now pushing the same vitriolic content and triggering these reports. If they can negatively impact Bluesky at a critical moment, it’ll reduce the inflow of users, who will quickly surmise “oh, this is just like Twitter.”
citizenkeen · a year ago
This underestimates the effect of Bluesky’s culture of “block and move on”. There are curated block lists you can subscribe to. Individual communities do a pretty good job of shutting down toxicity they don’t want to engage with.
agoodusername63 · a year ago
It shares the same problem that Twitter had years ago back when it supported API blocklists.

Everyone you're blocking is at the whims of the blocklist owner, and it didn't take long for those owners to go off the rails and start using their lists as tools for their own unrelated personal crusades.

Bluesky is already starting to experience this, judging by a few lists I saw going around.

ks2048 · a year ago
You're right; they need to handle the bot problem well to really succeed.

But it won't be "just like twitter" unless the "Discover" tab ("For You" on X) is filled with the billionaire owner's non-stop, hyper-partisan political posts.

know-how · a year ago
What's really funny is the same people whining about Musk's views would be cheering him on if he shared their own.
ranger_danger · a year ago
How would they make it harder / reduce bots without sacrificing privacy (such as SMS/ID verification/etc.)?

I think if you could realistically solve that, you'd be a millionaire already.

hipadev23 · a year ago
I don’t think you realistically can. I’d instead approach it by limiting the reach of new accounts until they’ve proven to be good actors.

Or switch it back to invite-only; there’s a massive userbase now, and if you invite a problematic account, it becomes a problem for your account too. Operate on a vouch system, as sketched below.
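A rough sketch of that gating (the thresholds and the vouch rule here are invented for illustration):

    # New accounts get throttled distribution until they age or get vouched for.
    from dataclasses import dataclass, field
    from datetime import datetime, timedelta

    @dataclass
    class Account:
        created: datetime
        vouchers: set = field(default_factory=set)

    def reach_multiplier(acct: Account, now: datetime) -> float:
        aged = now - acct.created > timedelta(days=30)  # arbitrary threshold
        vouched = len(acct.vouchers) >= 2               # arbitrary threshold
        return 1.0 if (aged or vouched) else 0.1        # throttle the unproven

The part that makes a vouch system self-policing (not shown) is penalizing vouchers whenever an account they vouched for gets banned.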

jabroni_salad · a year ago
This IMO is why the group chat is the best social network. Anything with more than 20 people doesn't go on my phone. Sorry, marketers.
jacoblambda · a year ago
Moderation lists and labellers honestly already get you most of the way there. Labellers are very effective at flagging spam/botted content, and accounts that repeatedly show up on labellers as spam/bot sources get referred to moderation lists dedicated to specific types of spam and bot content.

So you can already start by using a labeller and just hiding that content behind a warning (kind of like the NSFW wall), hiding it entirely, or just attaching a visual tag to it (based on preferences). And then to filter out more consistent perpetrators you can rely on mute/block lists.
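In code, those per-label preferences amount to something like this sketch (the label and preference names are made up):

    # Map each label to the user's chosen treatment; most restrictive wins.
    PREFS = {"spam": "hide", "nsfw": "warn", "satire": "badge"}

    def render_decision(labels):
        actions = {PREFS[l] for l in labels if l in PREFS}
        for action in ("hide", "warn", "badge"):  # hide > warn > badge > show
            if action in actions:
                return action
        return "show"

    print(render_decision(["nsfw"]))            # warn: put behind a click-through
    print(render_decision(["satire", "spam"]))  # hide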

johnnyanmac · a year ago
No one's saying the quiet part out loud: pay for an account. Even $1, one time, is enough to cut almost all of those bot farms down.

Is it realistic? Yes. Is it viable? I'm not sure. People claim to care about privacy but will choose ads and trackers over a subscription any day of the week. Anyone operating a website or app with a subscription knows this.

DrillShopper · a year ago
That problem is unsolvable
ben_w · a year ago
> I think if you could realistically solve that, you'd be a millionaire already.

Please.

If I knew how to do that, or even how to reduce bots with SMS verification etc., I'd be a multi-billionaire at least.

Making a Twitter clone is relatively easy. Making a community with a good vibe that's actually worth spending time in is the one problem that keeps the clones from standing out to normal users.

beeflet · a year ago
hashcash
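I.e. attach a small proof-of-work stamp to each post or signup, so posting costs CPU time that is negligible for one user but expensive for a bot farm. A minimal sketch of the idea (the difficulty value is arbitrary):

    import hashlib
    from itertools import count

    DIFFICULTY = 20  # leading zero bits required; tune to taste

    def mint(resource):
        """Find a nonce whose hash has DIFFICULTY leading zero bits (~2**20 tries)."""
        for nonce in count():
            digest = hashlib.sha256(f"{resource}:{nonce}".encode()).digest()
            if int.from_bytes(digest, "big") >> (256 - DIFFICULTY) == 0:
                return nonce

    def verify(resource, nonce):
        digest = hashlib.sha256(f"{resource}:{nonce}".encode()).digest()
        return int.from_bytes(digest, "big") >> (256 - DIFFICULTY) == 0

    stamp = mint("post:hello-world")          # slow to mint...
    assert verify("post:hello-world", stamp)  # ...instant to verify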
jmyeet · a year ago
This is the big challenge of any platform for user-generated content, and it's incredibly difficult to do it at scale, do it well, and keep it economical. A bit like CAP, it's almost "pick two". You will have to deal with:

- CSAM

- Lower-grade offensive material, e.g. YouTube had an issue a few years ago where (likely) predators were commenting timestamps on innocuous videos featuring children, and on TikTok, videos featuring children get saved far more often. I would honestly advise any parent to never publicly post videos or photos of their children to any platform, ever.

- Compliance with regulation in different countries (e.g. NetzDG in Germany)

- Compliance with legal orders to take down content

- Compliance with legal orders to preserve content

- Porn, real or AI

- Weaponization of reporting systems to silence opinions. Anyone who uses TikTok is familiar with this: TikTok will clearly just take down comments and videos once they receive a certain number of reports, without a human ever reviewing them, leaving you only the option to appeal

- Brigading

- Cyberbullying and harassment

This is one reason why "true" federation doesn't really work. Either the content on Bluesky (or any other platform) has to go through a central review process, in which case it's not really federated, or these systems need to be duplicated across more than one node.

sailfast · a year ago
Agreed - moderation at scale is a tough and expensive problem to get right.

That said, I wonder what it would take these days to get it working well enough using existing LLMs. I'm not sure how much you would need beyond off-the-shelf tools if you were mostly trying to keep your safe harbor protections / avoid regulator scorn.
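One plausible version of that off-the-shelf first pass: run reported posts through a hosted moderation classifier and only queue the flagged ones for a human. A sketch using OpenAI's moderation endpoint (assumes the openai Python package and an API key in the environment; the triage policy itself is invented):

    from openai import OpenAI  # pip install openai; needs OPENAI_API_KEY set

    client = OpenAI()

    def triage(text: str) -> str:
        """Cheap first pass; humans only see what the classifier flags."""
        result = client.moderations.create(input=text).results[0]
        return "queue for human review" if result.flagged else "leave up"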

bakugo · a year ago
The audience Bluesky is currently cultivating is the kind of audience that mashes the report button every time they see something they disagree with, so this isn't surprising.

If the user base actually keeps growing at a steady rate, I don't see how they'll get the resources to deal with so many reports (especially since they don't seem to have a plan to monetize the site yet) without resorting to the usual low-effort solutions, such as using some sort of algorithm that bans automatically based on keywords or number of reports.

almatabata · a year ago
> without resorting to the usual low-effort solutions, such as using some sort of algorithm that bans automatically based on keywords or number of reports.

Or you prioritize reports from accounts with a proven track record. If I consistently report posts that clearly violate the rules, why shouldn't my report count for more than one from an account that was just created?

If an account consistently reports nonsense, it should accumulate negative karma until, at some point, you can safely ignore whatever it reports in the future.
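A sketch of that weighting (all the numbers here are invented):

    from collections import defaultdict

    karma = defaultdict(lambda: 1.0)  # new reporters start at a small weight

    def record_outcome(reporter, report_was_valid):
        # Accurate reporters gain weight; nonsense reporters decay toward zero.
        delta = 0.5 if report_was_valid else -1.0
        karma[reporter] = max(0.0, karma[reporter] + delta)

    def report_score(reporters):
        return sum(karma[r] for r in reporters)

    # Escalate to human review only once the weighted score crosses a
    # threshold, so a pile of zero-karma throwaways can't force a takedown.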

beeflet · a year ago
What should the karma be of a new account? That is the minimum karma that spammers can readily abuse.
idlewords · a year ago
Bluesky would really benefit from a notional ($1/year) signup fee. That small bit of friction makes a vast difference in knocking down all kinds of spam, at the price of being considered a bit uncool (for having a revenue stream).
squigz · a year ago
And at the price of anonymity, and making the platform inaccessible to those who can't afford the signup fee (which will certainly stay 1 USD per year forever, right?) (inb4 someone tells me how everyone can afford $1)

Not to mention that this won't solve the spam that actually matters. What's dropping a few thousand dollars to a dedicated attacker?

johnnyanmac · a year ago
No: $1, one time. Despite the owner, this was one thing SomethingAwful seemed to get right over 20 years ago. The goal isn't to make money but to discourage botting. Any paywall works, and $1 is about as low as you can go in a digital transaction without credit card brokers making it difficult for you.

And yes, it really shouldn't go up. SomethingAwful was 10bux back in 2005, and is still 10bux in 2025 (they monetized other things over the decades, but not the entry cost).

Can it be exploited? Sure, about as much as Bluesky can add "Bluesky Gold" at any time. When it enshittifies I hope it takes a shorter time to leave than Twitter.

> inb4 someone tells me how everyone can afford $1

If you have the time to be commenting on social media, you can afford $1. The cost of the electricity to run your phone for a month is probably $1.

johneth · a year ago
I doubt it would change anything. One of the first things Elon did after taking control of Twitter was make verification pay-to-play. Now the blue checkmark is basically a sign of a bot, a grifter, or an engagement farmer.

Putting up a paywall hasn't deterred bots, and it won't work.

hobobaggins · a year ago
Not sure why you're being downvoted. It's what MetaFilter and WhatsApp did (but delayed until the following year, IIRC). Maybe MetaFilter isn't the best example :)
amatecha · a year ago
Huh, almost as if hosting everyone on a centralized service isn't sustainable, and self-hosted, federated social media is more sustainable as more people come online?

The couple of times I’ve visited people’s Bluesky profiles, I’ve noticed they had been hit with false-positive moderation actions for completely innocuous stuff, which cemented my initial impression that the platform has fundamental problems that will probably just get worse over time.

People propose solutions like “Bsky should just have a user fee”. But this just reminds me how Mastodon servers are typically run by small communities and friend groups, which solicit a bit of donation here and there to keep things running; the money stays within the community rather than lining the pockets of some large, powerful central org/corp.

As an added bonus, each community gets to set its own rules, which can vary from what other servers choose, ensuring greater trust and agency within one's self-governed community. Adding to this, servers form relationships/rapport with other ethically and socially compatible communities. When a moderation action goes awry (not that I have personally seen this happen; I'm just giving a comparison to Bsky), you have direct communication with the person or people involved, because it's literally your social circle, not some stranger who will never know or care who you are.

BTW, how to prevent spam on Mastodon: block mastodon.social (the original "default server" people keep signing up to for some reason).
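For anyone wanting the mechanics, a personal domain block through Mastodon's client API looks roughly like this (the instance URL and token are placeholders):

    import requests

    INSTANCE = "https://your.instance"  # placeholder
    TOKEN = "YOUR_ACCESS_TOKEN"         # placeholder; needs write:blocks scope

    # Hides all accounts and posts from the blocked domain for your account.
    requests.post(
        f"{INSTANCE}/api/v1/domain_blocks",
        headers={"Authorization": f"Bearer {TOKEN}"},
        data={"domain": "mastodon.social"},
    )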

Waterluvian · a year ago
I suspect that when people love Bluesky so much, a lot of that is actually just that it’s free, has no ads, and the population has been quite manageable.

I don’t think I’ve seen a concrete plan for how it’s going to keep scaling and pay the bills.

CharlesW · a year ago
"With this fundraise, we will continue supporting and growing Bluesky’s community, investing in Trust and Safety, and supporting the ATmosphere developer ecosystem. In addition, we will begin developing a subscription model for features like higher quality video uploads or profile customizations like colors and avatar frames. Bluesky will always be free to use — we believe that information and conversation should be easily accessible, not locked down. We won’t uprank accounts simply because they’re subscribing to a paid tier."

https://bsky.social/about/blog/10-24-2024-series-a

Waterluvian · a year ago
I’m hoping subscription model without special uprank will be sufficient!

I’m very skeptical but I’m rooting for success!


toss1 · a year ago
If it is an influence operation, the people who want to wield influence pay the bills. Already the point of X/Twitter (large Saudi funding, likely to help prevent another Arab spring type event in their country), and the point of the hundreds of millions SBF spread around. Bluesky's Series A was Blockchain Capital; seems like part of this year's significant movement of crypto influencers into politics. If so, they don't need it to turn a profit, they'll profit off the influence. Just like the corporations who normally jettison any money-losing department, but buy and keep permanently loss-making news departments for the influence they can create.