So, in other words, the list of people I have blocked on Bluesky, for whatever reason, is readable to everyone on the Internet without the use of any external service.
Is this a design I'd like to use? Or a gaping privacy hole that's being explained away with "implementation reasons"?
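For anyone who doubts how readable: blocks are ordinary public app.bsky.graph.block records in the blocker's repo, so two unauthenticated XRPC calls are enough to dump a list. A minimal sketch, assuming the account lives on bsky.social (the handle is a placeholder):

```python
import requests

HOST = "https://bsky.social"    # PDS/entryway hosting the repo; no auth needed
handle = "someone.example.com"  # placeholder handle

# Resolve the handle to a DID (public endpoint).
did = requests.get(
    f"{HOST}/xrpc/com.atproto.identity.resolveHandle",
    params={"handle": handle},
).json()["did"]

# Block records are plain repo records, so anyone can list them
# (paginate with the returned cursor for longer lists).
records = requests.get(
    f"{HOST}/xrpc/com.atproto.repo.listRecords",
    params={"repo": did, "collection": "app.bsky.graph.block", "limit": 100},
).json()["records"]

for r in records:
    print(r["value"]["subject"])  # DID of each blocked account
```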
I would love the ability to copy someone else's block list. Various popular personalities on ActivityPub have had to deal with abuse from servers and individuals that I would like to preemptively filter out of my timeline.
However, that shouldn't be the default.
I'm not sure why this page keeps bringing up that "people on other platforms can find out if they're blocked or not". You're always going to have ways to detect that, and it doesn't support the conclusion "so it may as well just be public".
There's something to be said for server-local blocking ("muting"), which this post also advocates. However, if you're going with that approach, why put "real" blocks in your protocol?
The second party can find out if they're blocked by the first party. A third party cannot find out all the people that the first party blocked. That's a massive usability difference.
You should have both, I think: one so the other person knows, one so the other person doesn't know explicitly. Maybe I want to grab a list of those I also want to block, or negatively weight. Conversely, the follow list can add positive weight until I say otherwise. Like distributed email spam and trust lists. There's the potential for creating a strong filter bubble, but if the user can control their own system's behavior it still seems like a win.
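A sketch of that weighting idea (hypothetical scoring, not something either protocol specifies):

```python
def score(author: str, my_follows: set[str], imported_blocklists: list[set[str]]) -> float:
    """Rank posts by author: follows add weight, borrowed block lists subtract it."""
    s = 1.0 if author in my_follows else 0.0
    # Each imported block list naming the author subtracts weight
    # instead of hiding their posts outright.
    s -= 0.5 * sum(author in bl for bl in imported_blocklists)
    return s
```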
Yes. For all the cases you've mentioned, I might want to share my blocklist - but it should be an option rather than the default. I may want to share it only with select individuals rather than with everyone who might want to scrape it, archive it, monetize it, and stir up online drama on my behalf AND on behalf of whomever I've blocked.
Seriously, it's worse than on Twitter. Bluesky is slowly starting to look like "let's redo ActivityPub, but so we can make money off it".
Enforcement means the block list needs to be readable by the relevant software. "Optionally" public means that software that can't see the block list can't enforce it, and that means the only way to enforce the block list reliably is in your client (which presumably always had read permission). This is in no way controversial.
It is a good feature. All bans by users and by entities (subreddits/forums/etc) should be visible to everyone, and also in reverse - "banned by" info should be available. This keeps moderators/admins in check somewhat.
PS: I don't use Bluesky at all; this is just from previous experience on forums.
I, as the moderator of my own Fediverse timeline, do not need to be kept "in check" by having a list of my blocks published anywhere. Besides, I trust my admins enough to do a good moderating job without needing to inspect the exact list of bans they apply.
Who is the entity that is supposed to keep mods/admins "in check"?
I wonder why they even bothered. Only their client-side "mute" feature actually works in the presence of rogue federated servers, and it only takes one person to set up such an instance. It seems like both the federation design and the decision to be public-only fundamentally can't support blocks like this, and they just slammed it in anyway and hoped for the best. The part that doesn't work accounts for like 99% of the effort they expended here; the client-side mute is trivial and is the only part that always works.
The rogue client question is addressed in the article:
> In theory, a bad actor could create their own rogue client or interface which ignores some of the blocking behaviors, since the content is posted to a public network. But showing content or notifications to the person who created the block won’t be possible, as that behavior is controlled by their own PDS and client. It’s technically possible for a rogue client to create replies and mentions, but they would be invisible or at least low-impact to the recipient account for the same reasons. Protocol-compliant software in the ecosystem will keep such content invisible to other accounts on the network.
That is, compliant clients are expected to mute any interactions between blocked users, even if rogue clients continue to generate such interactions. Only users on other rogue clients would be able to see them. So the hope is that almost all recipients would not be using such clients.
Yes, I know; where do you think I got the idea from? :)
Client-side mute is a different feature than the blocking discussed here. They have both (Ctrl+F "mute"; it's the second usage in the article). I am suggesting only having the client-side mute and not bothering with all this server-side blocking stuff, which is loaded with caveats.
> “Mute” behavior can be implemented entirely in a client app because it only impacts the view of the local account holder. Blocks require coordination and enforcement by other parties, because the views and actions of multiple (possibly antagonistic) parties are involved.
Only having mute is still a bad outcome; it's just the corner they've written themselves into here. Private blocks are the feature people actually want, and they're not possible with their architecture.
> As we currently understand it, on Mastodon, you only see content when there is an explicit follow relationship between accounts and servers, and follows require mutual consent.
You couldn't even create a Mastodon test account to test (and disprove) this theory?
It explains a lot about Bluesky if they are that ignorant about the alternatives.
You don't even need an account. You can disprove this nonsense by visiting any Mastodon server, looking at the local feed, and confirming that most posts are accessible to anyone.
Your instance won't get posts pushed to it if nobody on that instance follows a given account, but you can still pull unless an account is private, and at least one instance implements a kind of follow that pulls posts so people can keep tabs on accounts without openly following them.
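For example, most Mastodon servers will serve their local public timeline to a plain unauthenticated request (the host is a placeholder, and a few servers do disable this):

```python
import requests

# No account, no token: just the public timelines endpoint.
resp = requests.get(
    "https://mastodon.example/api/v1/timelines/public",
    params={"local": "true", "limit": 5},
)
for status in resp.json():
    print(status["account"]["acct"], status["url"])
```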
On-the-record blocks are a protocol mistake that has been ported over from Scuttlebot. They'd be better off with a client-side mute button.
It's an algorithmic choice, but most who've used these platforms for years know public blocking can also be a form of abuse on these kinds of protocols.
BS could, for example, choose to list all of the people who block you on your profile page as an act of shaming and exclusion (this was actually done by the people who sank Scuttlebot/Patchwork).
Assuming BS is p2p, which is another topic.
yes, and the key difference in distributed/decentralized systems is if you post your blocks they are broadcast to the public. On Twitter you can figure out you're blocked/shadowbanned by logging out, but in this case a peer can generate a list of all of the people who block you and advertise it.
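Concretely, once the public block records have been collected, inverting them into a "who blocks you" list takes a few lines (the DIDs are made up):

```python
from collections import defaultdict

# (blocker, blocked) pairs harvested from public block records.
block_records = [
    ("did:plc:alice", "did:plc:carol"),
    ("did:plc:bob", "did:plc:carol"),
    ("did:plc:alice", "did:plc:dave"),
]

blocked_by: dict[str, list[str]] = defaultdict(list)
for blocker, blocked in block_records:
    blocked_by[blocked].append(blocker)

# A peer can now advertise exactly who blocks a given account.
print(blocked_by["did:plc:carol"])  # ['did:plc:alice', 'did:plc:bob']
```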
On the app I'm writing, blocks are private. Only the blocker knows they have blocked someone. If they block someone, they disappear from that member's view. It's as if they didn't exist.
As mentioned, someone can figure out that they've been blocked, but they can't be certain. There's no way to know who another member is connected to. That's also private to the member, so someone logging in as an anonymous user can see that you are visible to unblocked members, but not who you are connected to. You can infer that you are blocked. Also, nothing is truly public. Only members can even see other members, and, currently, every member request is vetted by a human. We're not going for scale, which means that getting that sockpuppet account might not be so easy.
That's mainly because of the demographic we serve. There are quite a few dangerous people in it, so we need to be pretty circumspect about privacy and security. We make sure that every member has full control of their privacy and data, and we also default to the most secure settings. No dark patterns to trick people into divulging information.
That said, it's a simple community app, so we can't throw too much friction into the way members interact with each other. If someone is really worried, they shouldn't use our app (or any other social media app, because ours is more anal than most).
Then the person blocked doesn't even know.
People who aren't logged in can't see anything. The app is pretty much useless unless you have an account.
One important thing to note here: in decentralized social networks, blocks make very little sense and are nothing more than a social convention.
There's nothing stopping you from writing a Bluesky (or Mastodon) server that doesn't respect blocks, shows a list of users you have been blocked by, or gives you block notifications. On centralized networks with closed-off APIs, you can make first-party apps respect block semantics, and the point of blocks is accomplished (friction increases). On decentralized networks, users can just migrate to non-block-respecting instances, and nobody else will ever know whether another instance respects blocks or not.
I think what you are describing is mostly accurate for public content (arguably https://docs.joinmastodon.org/admin/config/#authorized_fetch addresses some of it, though it's not watertight). However, quite a lot of people set up their profiles/posts with access controls.
Honestly the more I hear about AT/Bluesky and some of these other new protocols, the more I keep coming back to thinking of ActivityPub as the one with the most potential for the future.
BlueSky is a better social media service than any ActivityPub system out there right now because of the content recommendation algorithms BS was designed to support.
You need to gather the messages from various huge AP instances for anything close to a good recommendation algorithm.
I think the whole algorithm situation is exactly what's wrong with social media today, but it's also what makes people come back for more. Sadly, I think BS will beat AP in this regard because of that.
I certainly understand where you're coming from but isn't that only demonstrable right now because BS is the only "huge" instance running at the moment? They haven't federated yet so we assume this is just going to work across the entire federation of servers?
Further, development on ActivityPub is not "done," correct? As in, someone could technically still write/develop their own client and attach its own algorithm over the top of ActivityPub, right?
To be clear, I hope I'm not sounding like my opinion on this is definitive or anything; I'm just having a conversation. There are definitely problems with ActivityPub; it just sounds like BlueSky is trying to solve/fix every problem that every individual person maybe/possibly/who knows will have before they even launch, which, in my experience, is generally a concerning approach.
> One proposed mechanism to make blocks less public on Bluesky is the use of bloom filters. The basic idea is to encode block relationships in a statistical data structure, and to distribute that data structure instead of the set of actual blocks. The data structure would make it easy to check if there was a block relationship between two specific accounts, but not make it easy to list all of the blocks.
a bloom filter provides no false negatives, but allows some false positives
how do you model an "A-blocks-B" relationship with this data structure?
specifically, how do you ensure that "A-blocks-B" blocks only B, and never C, D, or E?
As I understand it, you enter a fact into the data structure (A blocks B). Now every time you query whether “A blocks B” is in the data structure, it will always return True. But very rarely it will also return True for “A blocks C” or anything else that isn’t true. It would make it harder to dump the full list, but it doesn’t solve the case of “does this user block me?”, because you can still easily query that.
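A toy version of the mechanics (not Bluesky's actual proposal), encoding each block pair as a single key:

```python
import hashlib

class BloomFilter:
    def __init__(self, m: int = 4096, k: int = 3):
        self.m, self.k = m, k
        self.bits = [False] * m

    def _positions(self, item: str):
        # Derive k bit positions from salted sha256 digests.
        for i in range(self.k):
            digest = hashlib.sha256(f"{i}:{item}".encode()).hexdigest()
            yield int(digest, 16) % self.m

    def add(self, item: str) -> None:
        for p in self._positions(item):
            self.bits[p] = True

    def __contains__(self, item: str) -> bool:
        # False is definitive (no false negatives);
        # True only means "probably present" (false positives possible).
        return all(self.bits[p] for p in self._positions(item))

blocks = BloomFilter()
blocks.add("did:alice->did:bob")         # encode "alice blocks bob" as one key

print("did:alice->did:bob" in blocks)    # True: bob can check he's blocked
print("did:alice->did:carol" in blocks)  # almost always False
# Dumping everyone alice blocks would require guessing every possible key,
# which is exactly what makes listing the full set harder.
```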
> But very rarely it will also return True for “A blocks C” or anything else that isn’t true. It would make it harder to dump the full list, but it doesn’t solve the case of “does this user block me?”, because you can still easily query that.
if "A blocks C" returns true when A doesn't block C, then this is a problem, right?
it means you can trust false responses (no false negatives), but you can't trust true responses (some false positives)
if you get a true response, you have to confirm it's actually true through some other source, which must not return false positives
that other source therefore must be authoritative, and must also be query-able by any client that can query the bloom filter -- it's no panacea
a bloom filter is an optimization, not a source of truth
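to put a number on "very rarely": with the textbook false-positive rate p ≈ (1 - e^(-kn/m))^k, even a modest filter gives a measurable chance that "A blocks C" spuriously returns true (parameters below are made up):

```python
from math import exp

m, k, n = 4096, 3, 300          # bits, hash functions, inserted block pairs
p = (1 - exp(-k * n / m)) ** k  # textbook bloom-filter false-positive rate
print(f"{p:.4%}")               # roughly 0.77% with these parameters
```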