Authoritarian jurisdictions with a modus operandi of compelling their businesses and citizens by force are thus much riskier than Western democracies, even flawed ones. I at least expect it's a lot harder to say no to demands to break your promises that come with credible threats of torturing your family.
I'll also say that it's quite hard to make a messaging app without the servers that run the service having a great deal of power in the protocol. Many types of flaws or bugs in a client or protocol go from "theoretical problem" to "major issue" in the presence of a malicious server.
So if end-to-end security is a goal, you must pay attention not only to the protocol/algorithms and client codebase. The software publisher's risks are important too (e.g., Zoom carries a lot of risk from a China-centric development team), as are those of the hosting provider (if different from the publisher).
And also less obvious risks, like the mobile OS, mobile keyboard app, or AI assistant that are processing your communications even though they're sent between clients with E2EE.
Reflections on Trusting Trust is still a great read for folks curious about these issues.
We once had a Zulip update rejected by Apple because we had a link to our GitHub project with the source code for the app in the app itself. And it turns out, if you then click around GitHub, you can find a "Pricing" page that doesn't pay Apple's tax.
Details are here for anyone curious: https://news.ycombinator.com/item?id=28175759
For a community: A lot of communities want anyone in the public to be able to join their spaces and read their channels. For that use case, E2EE makes the chat system slower and less usable, with limited security benefits over using web standard encryption.
What E2EE may protect you from is a malicious server operator reading the organization's messages. But if the server operator is already a leader in the organization, that person may already be a direct recipient of all the interesting messages anyway. The practical benefit I see for E2EE in Zulip is mainly for Zulip Cloud or other settings where a third party is hosting the Zulip server for you.
As for decentralization, a self-hosted Zulip server only talks to external infrastructure for sending notifications (emails, mobile push) and to implement user-controlled features (e.g., outgoing webhooks). The Zulip API is fully documented, easy to understand and implement, and has community-developed clients.
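To give a sense of how simple the API is to work with, here's a sketch of preparing a send-message request using only the Python standard library (the Zulip REST API uses HTTP Basic auth with a bot email and API key). The server URL and credentials below are placeholders; substitute your own.

```python
import base64
import urllib.parse
import urllib.request

ZULIP_SITE = "https://chat.example.com"  # placeholder: your self-hosted server
BOT_EMAIL = "my-bot@chat.example.com"    # placeholder credentials
API_KEY = "abcd1234"                     # placeholder credentials

def build_send_message_request(channel, topic, content):
    """Prepare (but don't send) a POST to Zulip's send-message endpoint."""
    body = urllib.parse.urlencode(
        {"type": "stream", "to": channel, "topic": topic, "content": content}
    ).encode()
    # HTTP Basic auth: base64("email:api_key")
    token = base64.b64encode(f"{BOT_EMAIL}:{API_KEY}".encode()).decode()
    return urllib.request.Request(
        f"{ZULIP_SITE}/api/v1/messages",
        data=body,
        headers={"Authorization": f"Basic {token}"},
        method="POST",
    )

req = build_send_message_request("general", "greetings", "Hi from the API!")
print(req.full_url)  # https://chat.example.com/api/v1/messages
```

In practice you'd use the official Python bindings or a community client rather than raw HTTP, but the point stands: the surface area is small enough that writing your own client is realistic.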
Maybe you're thinking of federation? Matrix has fancier mirroring functionality, which can be important if your use case requires sharing dozens of channels with dozens of other servers. But Zulip is supported by Matterbridge and has several nicer mirroring integrations for sharing a channel with another server over various protocols.
But I don't know of many use cases where the usability of the core chat system isn't far more important than the usability of federation features.
Society is within its rights to demand that financial institutions both (1) protect their customers' sensitive personal information and (2) fight money laundering, which AFAIK is impractical without KYC rules at institutions like Coinbase that connect crypto to the traditional monetary system.
I'm not going to build an APK from source, and force a community of folks to do the same, just to access an essential technical capability of a messaging app. That renders Zulip essentially useless. It is wrong and unethical.
I can't say that the mobile notifications model is perfect. I have my frustrations with it. One of them that might not be obvious is that various military/government installations on airgapped networks are all freeriding.
Alternative options are (1) having no meaningful monetization of self-hosting or (2) moving away from being fully FOSS.
What would you like to see us do?
I think they've had a pretty frustrating experience with corporations freeriding on their work. See, for example: https://news.ycombinator.com/item?id=34129623.
I wish them the best of luck, and hope they're seeing success. Certainly in 2025 Europe should be thinking hard about the risks of a huge portion of workplace communication happening in cloud-based chat and email applications owned by a few US companies (Microsoft/Google/Salesforce).
Honestly, we've had the same problem that Arathorn highlights in that thread. When I first started, I imagined businesses self-hosting Zulip would pay for support and fund the project that way. In practice, Zulip is too easy to operate for self-hosters to do that, and the better we make the product, the less support people actually need. The support-only business model can work, but I don't think it's workable for chat.
Your team made the decision, now you get to own it. And I'll keep shining a light on it until it changes. You call it vitriol; I call it fair warning to anyone considering adopting Zulip.
I am not a business and I certainly would not touch Zulip with a ten-foot pole given your team's decision.
For example, Mattermost's $10/user/month plan is proprietary software with roughly the features that Zulip provides as entirely FOSS (with a $3.50/user/month push notifications service).
By the way, since you mentioned Signal: Signal is great, but it's really just not comparable.
Signal is an SMS replacement/messenger app with minimal features and very low COGS per user, and it launched with a $50M grant from a billionaire. Zulip is a team chat app designed to replace much more complex and capable products (Discord, Slack, Microsoft Teams).
For some use cases where self-hosting is required for compliance reasons, this is a deal breaker. And spinning up your own mobile apps isn't really practical.
While I'm here, the Flutter rewrite of the mobile app is launching next month, and while the initial launch won't add much functionality over the previous React Native apps, the rewrite is way faster, less buggy, and a lot more pleasant to add new features to.
I think you misinterpreted the most important nuance in this post. The rest of your comment is about jurisdiction in the context of who develops the client software.
The blog post is talking about jurisdiction in the context of where ciphertext is stored, and only calls that mostly irrelevant. The outro even acknowledges that when jurisdiction does matter at all, it's about the security of the software running on the end machine. (The topic at hand is end-to-end encryption, after all!)
So, no, this isn't a dangerous view. I don't think we even disagree at all.
What's dangerous is the framing; many E2EE messengers give the server a LOT more power than "just stores the ciphertext". https://news.ycombinator.com/item?id=33259937 is a discussion of a relevant example that's gotten a lot of attention, with Matrix giving the server control over "who is in a group", which can be the whole ball game for end-to-end security.
And that's not even getting into the power of side channel information available to the server. Timing and other side channel attacks can be powerful.
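As a toy illustration of the timing class of side channel (not code from Zulip or any real messenger): a naive byte-by-byte secret comparison returns sooner the earlier a mismatch occurs, so response timing leaks how much of an attacker's guess is correct. Python's stdlib provides `hmac.compare_digest` precisely to avoid this.

```python
import hmac

def naive_equal(a: bytes, b: bytes) -> bool:
    """Insecure comparison: exits on the first mismatching byte, so the
    time taken depends on how long the matching prefix is."""
    if len(a) != len(b):
        return False
    for x, y in zip(a, b):
        if x != y:  # early exit leaks timing information
            return False
    return True

secret = b"correct-horse-battery"
assert naive_equal(secret, secret)
assert not naive_equal(secret, b"xorrect-horse-battery")

# Constant-time alternative: runtime doesn't depend on where inputs differ.
assert hmac.compare_digest(secret, secret)
assert not hmac.compare_digest(secret, b"xorrect-horse-battery")
```

Real attacks in this vein measure far subtler signals (message sizes, delivery timing, traffic patterns), none of which encryption of the payload hides from the server.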
Standard security practice is defense in depth, because real-world systems always have bugs and flaws, and cryptographic algorithms have a history of being eventually broken. Control over the server and access to ciphertext are definitely capabilities that, in practice, can often be combined with vulnerabilities to break secure systems.
If the people who develop the software are different from those who host the server, that's almost certainly software you can self-host. Why not mention self-hosting in the article?
If you're shopping for a third party to host a self-hostable E2EE messenger for you, the framing of the server as just "storing ciphertext" would suggest the trustworthiness of that hosting provider isn't relevant. I can't agree with that claim.