I got verified in the initial round of verification.
On a technical level, this sort of works like a Root CA: anyone can verify anyone by publishing an `app.bsky.graph.verification` record to their PDS. Bluesky then chooses which of those records, from trusted accounts, turn into the blue check, similar to browsers bundling root CAs.
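A minimal sketch of that split, with hypothetical DIDs and simplified record fields (the real lexicon has more detail): every verification record lives on the open protocol, and the blue check is just the AppView filtering records against an allowlist of verifier DIDs.

```python
# Sketch: any account can publish a verification record, but the AppView
# only turns records from its trusted set into a blue check.
# DIDs and field names here are illustrative, not the exact lexicon.

TRUSTED_VERIFIERS = {"did:plc:bskyappview"}  # hypothetical DID for at://bsky.app

def shows_blue_check(records, subject_did):
    """True if any trusted verifier has published a record for this subject."""
    return any(
        rec["issuer"] in TRUSTED_VERIFIERS and rec["subject"] == subject_did
        for rec in records
    )

records = [
    {"issuer": "did:plc:bskyappview", "subject": "did:plc:steve"},  # official -> check
    {"issuer": "did:plc:randomuser", "subject": "did:plc:alice"},   # on-protocol, but no check
]
```

A different client could swap in its own `TRUSTED_VERIFIERS` set without touching the records themselves.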
I am not 100% sure how I feel about this feature overall, but it is something that a lot of users are clamoring for, and I'm glad it's at least "on-protocol" instead of tacked on the side somehow. We'll see how it goes.
Initially I just thought they verified people working at Bluesky, which made enough sense, but this initial batch seeming so arbitrarily decided isn't a good look. It feels all too similar to the "I know someone at Twitter" verification in the SF tech community.
Unfortunately that’s how I’m beginning to see this too, a sign of old school nepotism and struggle to regain lost status. We’ve seen how this unfolded for Twitter.
I hear you. I haven't investigated every account that got the badge, but it feels to me like they picked people who are both technical and engaged with the protocol, so not entirely arbitrary. That naturally will have some correlation with "I know someone at bsky". I know I've seen accounts that I think are cooler than I am who didn't get verified yet! I'm sure they'll be expanding soon, which will dilute this sort of association.
An imperfect system is still better than nothing. Look what happened to Twitter with the removal of its verification (before feckless Musk had driven it fully into the ground).
It seems to me this feature would be much better if users could subscribe to verifiers the way they can labelers, perhaps with the official verifier subscribed by default. The current implementation feels centralized in a way that conflicts with BlueSky's stated goals.
I'd agree that would be nice, but at least they can evolve into that in the future if they want.
Hilariously, it's kind of less centralized than I expected: there's no "Bluesky is the root of the web of trust" here, only "Bluesky chooses which records convert to UI", which leaves the whole system open for others.
It's great for preventing notable accounts from being impersonated. I spend a lot of time on Bluesky, and impersonation of notable accounts has been a real pain; verification largely solves this problem, and I'm very happy about it.
I wish it'd work like labelers and other moderation features: with users able to choose which verifiers to use. I trust the NYT as far as I can throw them when it comes to verification, for example, whereas I'd be interested in something flagging Bluesky employees or contributors to a given GitHub repository or whatever other bizarre things people would use this for like they already use labels.
What's your concern with the NYT? Do you think they are incompetent and might verify people who are not who they say they are, or do you think that they are malicious and will deliberately verify bad actors, or something else?
The NYT account on Bluesky does nothing besides make automated posts linking to their own articles. Why would account verification even matter in that case? It is in effect just a spambot. It posts links and doesn't engage with responses.
Bluesky could always revoke NYT or any other 3rd party verification site if they abused it. The bsky community would identify bad verifications very quickly.
I have no real insight. I do know that I am a big fan of Bluesky/atproto and post about it fairly regularly, and enjoy being friendly with the devs. They verified just over 200 accounts, and most of them are news organizations and their employees, and the rest are programmers who regularly use the site and/or engage with the protocol.
I think this makes sense, because 1. most people want this sort of feature for news and 2. the kinds of people they verified technically are likely to play around with it and see how sound it is, which is who I'd want to be kicking the tires.
I'm not sure when they'll verify more people, but this is only the beginning, for sure.
In the core team's clients, if the 'verified' account changes its display-name and/or handle, does the blue check stay, disappear, or do some secret third thing?
> Bluesky’s moderation team reviews each verification to ensure authenticity.
How is this compatible with Bluesky's internal cultural vision of "The company is a future adversary"[1][2][3]? With Twitter, we've seen what happens with the bluecheck feature when there's a corporate power struggle.
The problem with Twitter (before the whole blue check system was gutted into meaninglessness) was that not enough verification badges were handed out. It's not exactly a dangerous situation.
Bluesky's idea of verified orgs granting verification badges to its own org members would be an example of a much more robust and hands off system than what Twitter had.
The dangerous scenario is what happened to Twitter after the Elon takeover: verification becomes meaningless overnight while users still give the same gravity to verification badges which causes a huge impersonation problem. But that possibility is not a reason to have zero verification.
The problem I had with Twitter was that the check was supposed to mean one thing and one thing only: that the person was who he or she claimed to be.
What Twitter started doing was removing blue checks from people who were causing problems for the platform (but not behaving badly enough to kick off). This made no sense, because people still needed to know whether a person was who he claimed to be (e.g., Milo Yiannopoulos) even if the person was controversial or problematic or just plain nasty.
Blue Checks weren't "gutted". Now they just mean something else -- you're a premium subscriber.
The problem with Twitter (before the whole blue check system was gutted into meaninglessness) was that verification badges were merit- and nepotism-based, not identity-based.
Same as the current labeling/moderation service: any participant can verify any other participant. Which verifiers' checks appear is a property of the AppView.
If Bluesky becomes evil, you just configure your AppView not to trust their verifications.
Of course, that's the problem: right now we mostly have one AppView (bsky.app), which is the current SPOF in the mitigation plan against the "Bsky becomes the baddies" scenario.
We need a way to reflect that human "social trust" is born distributed, and centralising trust subverts it. But here, while they introduce third party verifiers, rather than individuals deciding which verifiers to trust, bsky is going to bless some. So this is just centralised trust with delegation.
What's missing from the blog announcement is that on the at protocol, anyone can publish a verification of any account. It is then up to each client to decide which verifiers to display / trust / etc.
With that in mind, it seems like bluesky is trying to thread the needle on providing tools for the community to do their own verification (via the protocol) while also making their own client "notable user" friendly (via blessed verifications that show blue checks).
I also don't see why it wouldn't be possible for someone to build a labeler that shows verifications from non-bluesky blessed sources. Then community members could subscribe to that labeler to get non-blessed verifications that they choose to show. It wouldn't show up as a blue check but it would still show up on the user's profile in bluesky.
It would look something like this existing "verification" labeler that doesn't use the underlying verification feature on the protocol but instead has to maintain the data in a 3rd party store: https://imgur.com/a/tXR4FUu
Additionally, third-party clients like Pinksky or Skylight could choose to show blue checks or whatever UI for any verifiers they choose. All the data is on the protocol now, so the 3rd party clients wouldn't need to do the verification themselves.
Human social trust works great at small scale. You go to Jim the butcher because everybody you know in town regularly goes to Jim the butcher and they know he's been making meat pretty well for a long time.
An automated version of this system might say "we verify anybody who at least N people within 3-4 steps of your followers graph are also following."
In a big city, you go to the store that's labeled "Butcher" and figure that, because the building is quite permanent and there's a lot of butchery stuff in there and it seems clean and there are people going in and out, then it's probably a fine butcher shop. No real "social" trust involved.
An automated version of this is probably domain checking, follower count, checking that N other 'verified' accounts follow it, some basic "is this a known malicious actor" checks, waiting until the account has some age, etc. Still kind of distributed, but not really relying on your own social trust metrics.
What's fun is that Bluesky allows you to implement both of those mechanisms yourself.
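The impersonal, big-city version of those checks could be sketched like this (thresholds and field names are made up for illustration):

```python
def heuristic_verified(account, verified_dids,
                       min_age_days=90, min_verified_followers=3):
    """Toy impersonal checks: a custom domain handle, some account age,
    and a few already-verified followers. No personal social graph needed.
    All thresholds here are arbitrary illustrations."""
    has_domain_handle = not account["handle"].endswith(".bsky.social")
    old_enough = account["age_days"] >= min_age_days
    verified_followers = sum(1 for f in account["followers"] if f in verified_dids)
    return has_domain_handle and old_enough and verified_followers >= min_verified_followers
```

Because all follows and verification records are public on the protocol, a third-party service could compute something like this today without Bluesky's involvement.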
I don't see what the problem was with using domains. If you're trying to claim you work for NYT then get a NYT verified account?
And whatever happened to Keybase? That seemed like a good solution: verify by public/private key. It really seems like that could be extended. I mean, we have things like attribute keys and signing keys. It seems like a solvable problem; the platforms just need to create a means for private bodies to integrate their keys.
Hell, I wish we had verification keys for cops and government employees. Show me a badge? No, show me a badge with a key I can scan to verify your identity. It's much harder to forge a badge with a valid key than it is to forge a badge that just looks good enough.
> If you're trying to claim you work for NYT then get a NYT verified account?
Part of the problem here is consistent identity over time. People do not like changing their handles unless they want to. I'm steveklabnik.com now, but if I started working at the NYT, and had to switch to steveklabnik.nyt.com, old links break to my posts, etc. And what happens if I want to be verified by more than one org at a time? Domains (at present) can't do that.
Are there any good examples of a working "vouch" system? I vouch for a few friends, they vouch others, etc. But if my credibility is revoked, everyone downstream of me is either yanked or needs a new voucher.
A long time ago there was this "web of trust"; I don't think it exists anymore. It was run by one of the big CAs, and you could get different certificates through some form of vouching. I think it even went as far as meeting people in person to show your ID, after which they would sign you, or something like that. As it was run by a big CA it wasn't really distributed, but IIRC they kept their involvement minimal. It's been a long time, but if you're curious, maybe look into that.
There is a p2p social network (as in, people offering their services and so on) in France that does exactly this: it's called "Gens de confiance". It works well, although it creates kind of a gated community (as intended: it is mainly meant for upper-class social circles).
My initial thought is about GPG's "Web of Trust" system for trusting strangers' keys. But I don't know if that's a very good example since it always seemed somewhat esoteric and maybe not very successful in general.
"Working" would be a stretch, but this is how "web of trust" systems like PGP are supposed to function. Although I would say the BlueSky system sounds like it could skirt some of the pitfalls of web of trust because verifiers can also be trusted to revoke verification.
Apps on ATProto get to decide for themselves. Another Bluesky client, or a completely different app, can make different choices. Users can then decide which interface they want to use. All part of the design of ATProto
IMO a system of "I vouch for these accounts" and "I trust the accounts these accounts vouch for, and the accounts those vouched for vouch for up to x levels deep" would be a workable solution.
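That transitive scheme is just depth-limited reachability over the vouch graph; a sketch, with names as placeholders:

```python
from collections import deque

def trusted_accounts(vouches, roots, max_depth):
    """BFS over a vouch graph: trust everyone reachable from `roots`
    within `max_depth` hops. `vouches` maps an account to the accounts
    it vouches for."""
    trusted = set(roots)
    frontier = deque((r, 0) for r in roots)
    while frontier:
        account, depth = frontier.popleft()
        if depth == max_depth:
            continue
        for vouchee in vouches.get(account, ()):
            if vouchee not in trusted:
                trusted.add(vouchee)
                frontier.append((vouchee, depth + 1))
    return trusted
```

Revocation is then automatic: delete the edge and recompute, and anyone whose only path to a root ran through the revoked voucher drops out.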
I built handles.net[1] to make it easy for organisations to manage their members' handles. I think that using domain names for identity is neat and valuable, and I have a vested interest in its success as a paradigm, but... domain name "verification" is not the right solution today for non-technical people. I shared this sentiment a few months ago[2] and I have only become more confident in that assessment since.
The approach they've taken ("trusted verifiers") is an approach aligned with their values, as it is an extension of the labelling concept that is already well established in the ecosystem. As an idealist, it is a shame that they gave up, I think they could have had an impact on shifting how non-technical people view domain names and understand digital identity... but as a pragmatist, this is the right choice. Bluesky has to pick their battles, and this isn't a hill to die on.
> The approach they've taken ("trusted verifiers") is an approach aligned with their values, as it is an extension of the labelling concept that is already well established in the ecosystem.
That just leaves me wondering why they bothered with a new separate system instead of just using the existing label system. A "verified by bsky.social" or "verified by nyt.com" or whatever label would do the job perfectly well, no?
I would have liked to have seen a justification for this as well. One thing about labels is that they can apply at per-post granularity as well as per-account granularity, but verification is purely account-level. Another is that they have slightly different semantics: you can lose your blue check if you change your handle or display name, but labels stay the same no matter what. That's probably the real justification for making it its own feature.
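That invalidation semantic could be as simple as the record snapshotting the handle and display name at verification time; a hypothetical sketch of a client-side check:

```python
def check_still_valid(record, profile):
    """Hide the blue check if the handle or display name has changed since
    the verification record was created. Field names are illustrative of
    the idea, not the exact lexicon."""
    return (record["handle"] == profile["handle"]
            and record["displayName"] == profile["displayName"])
```

Labels carry no such snapshot, which would make this per-account staleness check awkward to bolt onto the existing label system.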
Yeah my initial reaction was not too positive. There's something weird to me about simply delegating verification to a third party organization. I'd prefer a more pure solution. Maybe we don't have a solution yet that is simple enough for widespread adoption. The domain based identity does seem a bit too complicated for the average user.
It’s ironic that many comments are skeptical of strong centralized moderation, but they’re posting these comments on a forum with perhaps the strongest and most centralized moderation team of the entire internet.
All I’m saying is that if weak moderation has had a positive effect somewhere, it’s worth showcasing that. Otherwise the evidence is decisively in favor of strong moderation.
In terms of how to keep the moderation team from deteriorating, other platforms could learn a thing or two from HN: put someone competent in charge of the team, and give them lots of incentives to do well.
HN moderation is easy mode because it's a small site and politics is "banned". Trying to do HN-quality moderation of political discourse among millions of users seems impossible.
There are a lot of users that have complained about the s-banning on this site. While the moderation team of this site seems to be well-intentioned, it does inevitably lead to a very strong slant. S-banning users doesn't make them or their viewpoints go away. They just end up happening elsewhere.
Because those conversations do end up happening elsewhere, this site is famous for leaving readers with a strongly false impression of what viewpoints are actually popular among whatever you would want to call this Silicon Valley hacker / VC scene space.
The highly insidious thing about censorship is not only you don't know what you're not seeing but you don't know you're not seeing it -- you don't know what's missing.
>Because those conversations do end up happening elsewhere
All research shows the opposite, in fact. Adding friction to something causes a chilling effect in nearly every example we have ever paid attention to. When you remove easy access to guns, people kill themselves less, despite there being other easy ways to do so. When Reddit banned a bunch of toxic communities, the entire site had less toxicity, even on subreddits unrelated to the banned communities.
Friction works. It works insanely effectively too.
This is better than Twitter's nonsensical verification but still does not close the loop all the way. I think what is needed is a set of equivalency verifications, sort of like the domain verification used in getting a TLS certificate.
Something like:

* Bluesky user X is equivalent to (has control of):
* domain A (domain verification)
* YouTube account B (YouTube verification)
* Mastodon account C (Mastodon verification)
* D@nytimes.com (email verification)
So logically I would expect a protocol that allows cross-domain verification. The best I can come up with is something that works sort of like domain verification extended to user@domain verification; that is, a better-engineered version of "make a YouTube video with the string 'unique uuid code' in the comment, so that we can verify you own that YouTube account".
The problem is that some domains would have no problem standing up this sort of verification. The Times only benefits from verifying its employees. However, I can see fellow social media sites balking, as this equivalency weakens the walls that keep people in.
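The challenge-response core of that user@platform verification is the same as DNS-based domain validation; a sketch, with the actual fetching of the public post left out:

```python
import secrets

def make_challenge(platform, account):
    """Issue a one-time token the claimed account must post publicly,
    e.g. in a YouTube comment or a DNS TXT record."""
    return f"verify:{platform}:{account}:{secrets.token_hex(8)}"

def check_proof(posted_text, token):
    """The ability to post the token on the platform demonstrates control
    of the account; the verifier just fetches the post and looks for it."""
    return token in posted_text
```

Keybase went a step further by making each proof a signed statement, so anyone could re-verify it later without trusting the original checker.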
What you're proposing is reminiscent of Keybase's account verification system. You make a post or equivalent on each platform with cryptographic proof that it's you. (e.g here's mine for GitHub https://gist.github.com/ammaraskar/0f2714c46f796734efff7b2dd...).
> Additionally, through our Trusted Verifiers feature, select independent organizations can verify accounts directly.
As someone who believes in equal access and privilege, this is just horrible. "Trusted Verifiers" - how does the bsky team decide which orgs can be trusted? One could argue that this is worse than Twitter. And of course, the echo chamber is going to get worse.
It's the same as "Trusted flaggers" under the EU's DSA. Nobody trusts them. Just like when non-democratic countries call themselves "Democratic Republic".
> It's the same as "Trusted flaggers" under the EU's DSA. Nobody trusts them. Just like when non-democratic countries call themselves "Democratic Republic"
Trusted flaggers literally need to publish transparency records and are approved by orgs in EU countries under elected governments.
If you're saying that's all bullshit, that the EU is North Korea and North Korea is a shining example of democracy, then you should probably remove your dig at the DPRK's self-naming ;) Your own comment measures non-democratic countries by the standard of democratic countries. If you're going to be wrong, at least be consistent.
Hamartia: the tragic flaw that takes the hero to the top will lead to their downfall.
It seems to me that BlueSky is trying to rewind the clock and be the pre-Elon Twitter. They had a decent chance to become what Signal is to messaging, but looks like they are trying to be just another Social Media company.
Two concrete verification records on the protocol:

* https://pdsls.dev/at://did:plc:z72i7hdynmk6r22z27h6tvur/app.... <- Bluesky verifying me. It's coming from at://bsky.app, and therefore, blue check.
* https://pdsls.dev/at://did:plc:3danwc67lo7obz2fmdg6jxcr/app.... <- me verifying people I know. It's coming from at://steveklabnik.com, and therefore, no blue check.
You don't trust the NYT to verify its own reporters?
Also, why do you say that in any circumstance? Who do you trust?
I’m on Bsky as well but haven’t seen any such updates.
[1]: https://news.ycombinator.com/item?id=35012757 [2]: https://bsky.app/profile/pfrazee.com/post/3jypidwokmu2m [3]: https://www.newyorker.com/magazine/2025/04/14/blueskys-quest...
They got acquired by Zoom and promptly put Keybase into maintenance mode.
DNS for your average user is too complicated. Also what should the domain name be for a journalist at the NYT? What if they leave the NYT?
Could use follows, retweets, etc instead of page links
[1] https://handles.net [2] https://news.ycombinator.com/item?id=42749786
They didn't really give up, though: the domain verification still stands and is just as powerful as ever.
Well, the “wrong” politics are.
Read again, slowly perhaps, the part about the first layer of verification.
There's nothing surprising about this.
"Censorship" is a negative frame for what often amounts to healthy platform moderation and safety.
We’re truly in the post-social media age.