This is idiotic. Signal didn't take money from the NSA, they took money from the Broadcasting Board of Governors, which funded virtually all Internet privacy projects during the time period we're talking about, both American and otherwise; some of those dollars went to development, and still more went to contracting commercial pentests from firms like Matasano, the one I co-ran, and iSEC Partners.
This article says it's not rehashing DeVault's arguments, and it isn't; it's making an even dumber set of arguments.
The NSA thing is the only thing I didn't check out, and it appears you are right, so I am dumb. -- I wrote this before Marlinspike stepped down and I never thought to check that either, so second point to you: I am idiotic.
Regardless, that particular point stands even if he's not the messiah anymore: he was heralded as a saint who bestowed on humanity the right to privacy, and we were told to trust in him.
From what I can find, the government entity that funded Signal seems to have a lot to do with the CIA and anti-censorship products designed to disrupt other countries... which actually fits the narrative of censorship-resistant messaging, at least -- so there's no reason to think it betrays Signal's stated mission.
The foundation that funded them used to be called Radio Free Asia (which on inspection seems to be considered propaganda, though it markets itself as free media) and is now called the OTF. If you look at the list of other software they sponsor, it's very much in the same category: https://en.wikipedia.org/wiki/Open_Technology_Fund
So I recant those statements about the NSA. All I actually know is that a number of people in the NSA are not using Signal; I had heard about NSA funding from somewhere, and that is evidently not true.
Correct the article, and note what you originally said when you do so, so your readers can make their own decisions about how seriously to take your arguments.
The details you're providing about BBG, RFA, and OTF aren't relevant, and just add detail to what I said. In case you were relating them to educate me: there's no need, I have firsthand knowledge of the programs you're slandering (whether you mean to or not).
The lack of Signal’s use within the NSA is not positive evidence of its compromise. The NSA has operational constraints that don’t even remotely map onto a phone-number-identified messaging scheme.
Reports from 2007/2008 had already indicated significant interference and government spying between agencies and with private corporations. Also, at that time it was widely believed but not yet confirmed Dual_EC_DRBG was backdoored via NIST/NSA collaboration.
Many folks equate any US government money with NSA money, and did so especially around this time, which is likely why you made this mistake. Taking any US-backed government money, even at the time Signal took it, was and should be suspect.
IIRC Signal continued to take money even after Snowden. So it is a fair point and not at all idiotic, and the over-reaction after you agreed to correct the article is suspect.
I do think Signal needed that money to survive, and it probably was put to good use, but I would have acted differently and more transparently about how my use of that money was communicated to the public.
That doesn’t necessarily mean it’s cryptographically insecure. I can imagine that for any NSA employee, installing typical strong-crypto software like Signal, PGP, or Tor on a personal device is a massive red flag worth investigating. If I were in that position I would not want to attract that kind of attention, even if the crypto itself is fine.
I have to imagine using Signal with a high level clearance might itself arouse suspicion, for one thing. People inside the intelligence community really can’t expect much privacy for themselves.
> The NSA thing is the only thing I didn't check out
You might want to check again, because your post is full of inaccuracies. From Signal being on F-Droid to their backend being relevant to security, you got almost everything wrong.
I’m personally not a fan of requiring phone numbers or disallowing third party clients. I’m not really sure how I could characterize those concerns as being “dumb” even if they didn’t particularly bother me.
There are valid reasons to prefer other apps, but this is not really one of them; it is the Signal idiosyncrasy that has most clearly been vindicated, by what happened to Matrix, which has the opposite model and has suffered for it and will continue to.
In the unlikely event that a set of vulnerabilities as devastating as those from the Nebuchadnezzar paper were found in Signal, Signal controls the whole platform end-to-end, and can simply publish software updates to fix them. Matrix has to do a coordinated multivendor update of their entire protocol.
Early on, this article makes a major factual error: it claims that Signal is on F-Droid. It is not, and the Signal team’s refusal to authorize the app on F-Droid is a well-known and longstanding decision. The author then claims that the team’s development of new functionality behind closed doors (payments) represents some kind of betrayal of Signal being open source. Yet all clients running on people’s phones in the meantime were based on source that was published, with verifiable builds. Finally, Moxie Marlinspike is no longer leading Signal.
There are nevertheless legitimate reasons to not trust Signal and to worry about compromise. I am pleased to see that the author mentions one major but usually overlooked one: the (optional, but encouraged) sharing of people’s contact lists with the server through Intel SGX functionality, which has been repeatedly found to be insecure.
> Yet all clients running on people’s phones in the meantime were based on source that was published, with verifiable builds.
"Based on" meaning they're built from that source but have changes to it? Because given that the published source had known bugs and was not updated for months, either they're not fixing known bugs for months, or they're publishing builds based on unpublished source. (And the article does address the verifiable builds point, pointing out that it doesn't and can't really work).
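To make the verifiable-builds dispute above concrete, here is a minimal sketch of what the usual workflow is supposed to buy you: rebuild the app from the published source, strip the signing data (which differs by design), and byte-compare your build against the distributed binary. The artifact bytes below are stand-ins, not real APKs.

```python
# Sketch of "verifiable builds": if the published source really produced
# the shipped binary, an independent rebuild should hash identically
# (after removing signatures). Stand-in bytes used in place of real APKs.
import hashlib

def digest(artifact: bytes) -> str:
    """SHA-256 hex digest of a build artifact's bytes."""
    return hashlib.sha256(artifact).hexdigest()

# Hypothetical artifacts: what the app store served vs. what you built
# locally from the published source tree.
store_apk = b"\x00compiled-app-bytes"
local_apk = b"\x00compiled-app-bytes"

verified = digest(store_apk) == digest(local_apk)
print("builds match" if verified else "binary NOT from published source")
```

The commenter's point is that this check is only as good as the published source being current: if the shipped binary is built from unpublished revisions, the comparison fails (or is never run) and the guarantee evaporates.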
Thank you. I’ve always highlighted most of these points here on HN and I get heavily downvoted and attacked as “paranoid”. How is it paranoid when you don’t even have a way to confirm the physical servers are secure from memory-injection attacks, boot attacks, etc.? How would you verify that nothing was changed after the audit? Phone numbers have inherently broken security by protocol design (I personally have done a lot of work on attacking the GSM protocol, and I know how easy it is for three-letter agencies with enough funding and access to exploit it), so why don’t I have the option to choose? Why was it shilled so hard back when WhatsApp went dark? There are a LOT of sketchy things; there’s no way I would trust it, and I never will.
The point of an E2E cryptographic design is that you don’t ever need to trust the server — the server can be as malicious as it pleases, so long as you can affirmatively demonstrate that the client never communicates anything that isn’t encrypted with a key the server has no access to.
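The claim above can be illustrated with a toy sketch (illustrative only, NOT a real cipher: real clients use vetted AEAD constructions and the X3DH/Double Ratchet handshake). The point is structural: the relay only ever handles ciphertext, so its honesty is irrelevant to confidentiality.

```python
# Toy model of the E2E argument: the server is a dumb relay that only
# ever sees ciphertext plus routing metadata. Do NOT use this cipher.
import hashlib
import hmac
import secrets

def keystream(key: bytes, nonce: bytes, length: int) -> bytes:
    """Toy SHA-256 counter-mode keystream derived from key + nonce."""
    out = b""
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + nonce + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:length]

def encrypt(key: bytes, plaintext: bytes) -> bytes:
    nonce = secrets.token_bytes(16)
    ct = bytes(a ^ b for a, b in zip(plaintext, keystream(key, nonce, len(plaintext))))
    tag = hmac.new(key, nonce + ct, hashlib.sha256).digest()  # detects relay tampering
    return nonce + ct + tag

def decrypt(key: bytes, blob: bytes) -> bytes:
    nonce, ct, tag = blob[:16], blob[16:-32], blob[-32:]
    if not hmac.compare_digest(tag, hmac.new(key, nonce + ct, hashlib.sha256).digest()):
        raise ValueError("ciphertext modified in transit")
    return bytes(a ^ b for a, b in zip(ct, keystream(key, nonce, len(ct))))

# Alice and Bob share a key the server never learns (in Signal it comes
# out of the key-agreement handshake; here it's just a random value).
shared_key = secrets.token_bytes(32)

# The relay can log, delay, drop, or corrupt this blob -- but corruption
# is detected, and the plaintext is never exposed to it.
blob = encrypt(shared_key, b"meet at noon")
assert decrypt(shared_key, blob) == b"meet at noon"
```

Note what the sketch does not hide: the relay still learns who sent a blob to whom and when, which is exactly the metadata concern raised elsewhere in this thread.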
That was the original argument. But Signal has now rolled out functionality (sharing of contact lists and other details with the server through Intel SGX) that does force one to trust the server.
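Contact discovery is the operation that forced this compromise: naive "anonymization" by hashing phone numbers doesn't work, because the phone-number space is small enough to brute-force, which is why Signal reached for SGX in the first place. A toy demonstration over a small number block (invented numbers):

```python
# Why hashed contact discovery fails: the server can precompute or
# brute-force the entire (tiny) phone-number space. Toy demo.
import hashlib

def h(number: str) -> str:
    return hashlib.sha256(number.encode()).hexdigest()

# Client uploads "anonymized" hashed contacts...
uploaded = {h("+15550001234"), h("+15550005678")}

# ...and the server trivially inverts them by enumerating candidates
# (here just one 10,000-number block; a real attacker enumerates all).
recovered = [f"+1555000{i:04d}" for i in range(10_000)
             if h(f"+1555000{i:04d}") in uploaded]
print(recovered)  # both "hidden" contacts recovered
```

SGX was meant to let the server run the intersection without being able to look at the inputs; the commenter's objection is that SGX itself has been repeatedly broken, so the trust has merely moved, not been eliminated.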
> I do not believe that end-to-end encryption means anything at all when the network and the client are the same entity.
That's textbook black-and-white thinking, and it's bullshit. I agree that network and client being controlled by the same entity raises questions, but that doesn't imply that E2EE "doesn't mean anything".
A very basic argument that shows why you are wrong: It's much easier for the government to compel a company to hand over data from their servers than it is to compel the company to write and publish a backdoored client. The two scenarios are not equivalent in practice, and this is what matters. Threat models that ignore how the real world operates are useless.
Please read up on the concept of "defense in depth", central to modern information security, which is built around the insight that security mechanisms can be valuable even if they don't work perfectly in all circumstances.
> It's much easier for the government to compel a company to hand over data from their servers than it is to compel the company to write and publish a backdoored client. The two scenarios are not equivalent in practice, and this is what matters. Threat models that ignore how the real world operates are useless.
There is a very real threat when
- Signal servers operate in the US and clients on app stores run by large US companies
- Signal can be compelled to ship government-imposed backdoors and gagged from disclosing them
- Signal stops releasing their open-source version, but patches it arbitrarily in production
I've read an interview saying there is anti-spam code you can't share if you're a large network, because it's an arms race. But because Signal does not make their operations transparent beyond what absolutely must stay secret for anti-spam, this creates distrust: it leaves the sense that they care more about uptime than trust, because they got big. So it's not the messenger of choice for political dissidents, whose threat model involves the government to some degree (passive or active).
My primary issue with Signal is that they’re a US-based org, and AFAIK an NSL would allow the US government to collect the same metadata it could with SMS or iMessage
They might not have the contents of your messages, but they know who you’re talking to, and when
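That metadata is enough on its own: a relay's delivery logs reconstruct the communication graph without touching a single message body. A toy illustration with invented log entries (note that Signal's sealed-sender feature is specifically aimed at reducing the sender field below):

```python
# Even with E2E payloads, routing logs alone yield a social graph:
# who talks to whom, how often, and when. All entries are invented.
from collections import Counter

# (sender, recipient, unix_timestamp): the fields a generic relay
# handles to route a message, even when the body is opaque ciphertext.
delivery_log = [
    ("alice", "bob",   1700000000),
    ("alice", "bob",   1700000300),
    ("bob",   "alice", 1700000310),
    ("alice", "carol", 1700090000),
]

edges = Counter((s, r) for s, r, _ in delivery_log)
print(edges.most_common(1))  # heaviest edge: alice -> bob, 2 messages
```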
Remember that if you're not USA based, NSA doesn't need an NSL to collect information from you; they can just own you up, and that is literally their chartered purpose.
That's not to say you should use US providers! Just that NSLs aren't a good reason to pick a provider. Pick a service that doesn't have information to share about you in the first place as your high order bit.
> Remember that if you're not USA based, NSA doesn't need an NSL to collect information from you; they can just own you up, and that is literally their chartered purpose.
"Owning you up" is harder (not impossible, but harder) when they can't simply send a letter and bring the force of the law to bear. NSLs are a very good reason to avoid any system that requires you to use a provider that has a presence in the US (and there are analogous concerns about e.g. AU, and obviously any country where legal and practical protections are weak enough that a strongman can just send a team of thugs round is a nonstarter). But really any specific country is beside the point; it should be table stakes for a serious cryptosystem that one can avoid depending on any single point (and make choices based on one's own trust base vs available resources) whether that's for relay servers, app maintenance, or anything else.
> Pick a service that doesn't have information to share about you in the first place as your high order bit.
True enough; obviously trusting your security to a system that requires you to use a phone number identity is laughable in the first place.
For sure! I’m assuming TAO or whomever can come after me at any point for manufactured reasons, if nothing else. Nothing I can do there…
I do think this matters in a general sense, because state actors targeting individual users is a completely different threat from state actors collecting the communication graph of a major hub.
> Of course signal is open source, it’s on f-droid
Even with their criticism, the author is giving Signal too much credit.
Signal is not on F-Droid. Signal sends their lawyers after open-source app repos for including their app.
I think the only claim they have is their trademark name "Signal". I wonder what's a good name for packagers to use for apps like this. Reminds me of Firefox and IceCat, or Rust Lang and Crab Lang.
It would merely be annoying if it was just about a trademarked name, but it's worse than that. They actually forbid using their infrastructure from non-official clients.
Trademark doesn't mean you can't say “Signal” at all. You can officially call it Whatever but then always say “Whatever (like Signal)” or “Whatever (Signal-compatible)”.
This is an excellent post, but I want to point out that what really matters here is your threat model.
If you are trying to hide from the NSA or other nation states, you have a LOT of work cut out for you. There are basically two sub threat models: are you trying to hide from the dragnet (in which case, just using any obscure and relatively obfuscated communications mechanism will work) or the scenario in which you’re being actively targeted (in which case you need rock solid security from end to end). Keep in mind that the Security version of https://en.m.wikipedia.org/wiki/Analog_hole means the security of your networked device is just as important as your messaging protocol, and… good luck with that on mobile.
If you are just a small fish trying to avoid something with a court-admissible record (and don’t care about parallel construction) you’re probably fine with Signal, provided you understand that your counterparty can just give you up.
I hate to bring out the “nothing to hide” argument because I disagree with its premise from a moral standpoint, but from a practical standpoint, I recommend avoiding having “directly targeted by the NSA and needing to avoid it” as your threat model to begin with.
Unless you are also building your entire hardware from scratch, including the CPU, and writing or auditing all firmware and device drivers, "zero trust" is a fantasy.
Zero trust means verifying everything. Not only has no living person verified the entire technology stack they are using, it is literally impossible to do so for any modern consumer device, since they all contain closed hardware and software that affects the trust model yet cannot be verified in any meaningful sense.
I personally think all the major VPNs are honeypots, that’s my conspiracy. Subpoenaing my ISP is far more work legally than just having data being fed to them for free.
For those who think that’s too far, Crypto AG. A company that actually wasn’t founded by the CIA, but was slowly bought out by intelligence agencies with shell companies. Also they were Swiss by every appearance! Good thing there isn’t a modern Swiss company many people here use and trust because they are Swiss and not US…
Also, paying for VPNs with cash is, in my opinion, overrated when they know your actual IP address. Sure, visit the coffee shop, but if the coffee shop has cameras…
EDIT: It appears the link to the submission has been slightly altered to break it; it should be: https://blog.dijit.sh/i-don-t-trust-signal
To be fair, as I read it, the criticism was directed at your work and not you. We all make mistakes.
The ball is in Signal's court, but they clearly aren't interested in improving further; they're content to stagnate so long as they retain control.
That's the server code, not the client code.
> the article does address the verifiable builds point, pointing out that it doesn't and can't really work
IIRC (the article is now down for me for some reason), the author was again talking about server code.
SIGINT is one half of their chartered purpose. The other half is SIGSEC. And I have good reason to know that directly.
Or Beacon.
Or Flare.
Or FOSSE2EEMsgApp.
Zero Trust Security Model - Trust no One (Internet Security) : https://en.wikipedia.org/wiki/Zero_trust_security_model
VPN services are such easy targets that even if they weren't honeypots they would effectively be so.