Well, I kind of feel that I have to repost my comment from this old thread[1] regarding the Egyptian government blocking the Signal app:
"Isn't it "weird" that they chose to block Signal app and not the signal-protocol based Whatsapp?
If Whatsapp really implements the same kind of security and privacy measures that Signal does, why is Whatsapp allowed to continue operating?
If Signal is preventing them from spying on users and they ban it, isn't it safe to assume that WhatsApp is NOT preventing them from spying on users, which is why they let it operate? Wouldn't you expect WhatsApp to also be targeted, especially considering the broad user base it has compared to Signal?
Yes, I know they had blocked WhatsApp in the past, but they didn't block it now. Which means that something has changed in the relationship between the Egyptian government and WhatsApp since 2015."
A simple explanation would be that activists use Signal. [1]
They don't trust WhatsApp and rely on Signal for secure messaging. Blocking Signal means they are able to target activists without impacting much of the rest of the population.
[1] Many of the people I know who are activists in countries where they need to protect their identities use Signal
There is no logical way to verify that all activists (or even a majority of them) use Signal over WhatsApp. The perception that activists use Signal may have been enough reason to block it, while having a huge backdoor in WhatsApp is reason enough not to take action against it.
> Simple explanation would be that activists use Signal.
But why do activists simply not use WhatsApp instead of Signal? If both were supposed to be fully encrypted and secure, why not use the tool that is already available? I assume the need for encryption is to prevent the government from snooping and eavesdropping on your plans, rather than a matter of "liking the UI/UX of one system over the other"?
Maybe the activists know something we don't, and are right to be paranoid...
WhatsApp is used by over a billion people. I'm sure some activists in Egypt use WhatsApp, too. That said, I think WhatsApp was blocked in Egypt, too, at least for a while. I don't know if they later "fixed" that or not, and how they did it.
Good point, but there is an explanation: blocking WhatsApp would lead to more intense backlash. See what happened in Brazil.
Not to say it isn't both, but the price of blocking (one of) the most popular messaging apps is higher to a government than blocking one in the low low percentiles of usage.
If they blocked Signal just because it was less trouble to block than WhatsApp, then all the people who were on Signal can easily switch to WhatsApp...
What you have at that point is a government paying the price of blocking a less popular messaging app it cannot control, while the people it is after can just switch to a MASSIVELY used messaging app the government also cannot control and which, additionally, is too expensive to block.
If this were the case, it would actually work against the government. Do not underestimate government authorities; they are not THAT naive. If they had not blocked Signal at all, they could at least track Signal users and have that information: that this small group of people (Signal users) contains the group of people they are after. They could have their honeypot there. Mixing the "dangerous" Signal userbase with the chaotic, massive userbase of WhatsApp makes no sense, unless you really have WhatsApp on your side.
This is a nice idea, but it's also baseless speculation.
You're implying that WhatsApp, Inc. gave the Egyptian government the ability to remotely retrigger this backdoor whenever they want to (for those who haven't actually read the article: this backdoor only works when WhatsApp issues a key change for a conversation, and only then in certain circumstances). In other words, you imply that Egypt said "Hey WhatsApp, please actively hack into your Egyptian users' messages and send us the results" and WhatsApp said "ok sure here ya go".
It might be true, but Zuckerberg might be a FSB informant and I might be Elvis reincarnate. These are all baseless, yet not entirely implausible claims.
Well, it's only baseless speculation if you can provide at least one plausible alternative, so we can say "we don't know which is true".
niksakl's point is that the go-to "probably nothing going on" or the other "WhatsApp too popular to block so we block Signal instead" explanations are just not plausible at all.
So I don't think it's entirely baseless, and with this new information, even less so.
As for Egypt making such a deal with a large company: you make it sound like you believe that's implausible, but it has in fact happened before, when Egypt hired Nokia and Siemens to develop, build and implement its DPI infrastructure, later claiming "gosh, we never expected they'd actually use this to hunt down, torture and kill dissidents". Maybe governments aren't that naive, but corporations surely will try and claim to be.
> You're implying that WhatsApp, Inc. gave the Egyptian government the ability to remotely retrigger this backdoor whenever they want to (for those who haven't actually read the article: this backdoor only works when WhatsApp issues a key change for a conversation, and only then in certain circumstances). In other words, you imply that Egypt said "Hey WhatsApp, please actively hack into your Egyptian users' messages and send us the results" and WhatsApp said "ok sure here ya go".
No, the private hackers governments hire were able to use an exploit to snoop on WhatsApp. That's very probable.
There's a cost/benefit tradeoff to blocking each service and different governments have different thresholds.
It is more likely that the cost of blocking Signal was negligible in contrast to the benefit, while blocking WhatsApp would likely have a huge cost - especially in a country that has only recently experienced a number of citizen-driven coups.
It is also possible that they're specifically targeting a group (Muslim Brotherhood, or Jund al Islam and other Sinai insurgency groups) that utilize Signal.
To add to those who have referenced the cost to the government: consider who else uses WhatsApp besides just activists - it's likely many government employees use WhatsApp as well.
Anecdotal tidbit: I worked at the Rio 2016 Olympics. My team consisted of Brazilians, Americans, Britons, and Koreans. WhatsApp was how we communicated[1], I'm sure the same was true for most of the other thousands of people working setup for the Olympics.
When a power-hungry judge forced WhatsApp to be blocked a couple weeks before the opening ceremonies, it was rather problematic for the Olympics staff. My first thought was "uhhh. This isn't going to last for long," and it didn't.
I can't say for sure that it's because the IOC president called up the Brazilian president, and the Brazilian president yelled at the judge, but I like to think that's what happened.
[1] Integrated language translation would be a FANTASTIC feature to add.
There was a different commenter, possibly in a different HN thread, who explained that, as an Egyptian resident, he thought the government was blocking things like WhatsApp and Signal to protect one of the non-government employers in Egypt: the telecommunications industry, which makes money from charging for phone calls and SMS messages.
I remember receiving the downvote brigade[1] when Moxie himself said that I should trust WhatsApp without having the source code and the ability to put it on my device.
We (even a "smart" community like HN) clearly do not have the ability to think critically about security, and even when our leaders are sincere -- and I really don't mean to suggest Moxie/Signal was complicit in this move -- we still rush to defend our champions so quickly that we don't even think about what's going on.
However, something really important is that this might be mere incompetence: Facebook might not have any mechanism for launching this attack; they may just have thought the notification message was annoying, so they didn't display it. To that end, we need to be vigilant about stupidity as well.
Where does it end? Will we actually stop being okay with buffer overflows and sloppy programming? Or are we going to keep trying to "be safer" with "safe languages", trying to solve the problem of having too much code to read clearly by writing yet more code?
> when Moxie himself said that I should trust WhatsApp without having the source code and the ability to put it on my device.
What are you talking about? All I can see there is that you asked for the source code of the QR generator and he delivered. He does not say you should trust WhatsApp.
That's not what geocar asked. He didn't ask anything, actually.
Rather he pointed out that what you see in the WhatsApp UI is meaningless because you have no way of knowing that the app you're running matches the code Moxie linked, or that the code your friends are running does. Moxie replied with a link to the QR generation code but this didn't answer geocar's question, probably because there is no answer.
Here's a simple way to put it. End-to-end messaging security is assumed to be (at least traditionally) about moving the root of trust. Before you had to trust Facebook. Now you don't. A closed source app that can update on demand doesn't move the root of trust and this probably doesn't match people's intuitive expectations of what it does.
Many people have pointed out similar things to what geocar has pointed out: E2E encryption is essentially meaningless if you can't verify what your computers are actually doing. Unfortunately fixing this is a hard problem.
I wrote about this issue extensively in 2015 in the context of encrypted email and bitcoin apps (where you can steal money by pushing a bad auto update):
I proposed an app architecture that would allow for flexible control over the roots of trust of programs using sandboxing techniques. Unfortunately there's been very little (no?) research into how to solve this problem, even though it's the next step in completing the journey that Signal has started.
By the way, just to make it super clear, the work Moxie has done on both Signal and WhatsApp is still excellent. It is, as I said, necessary work. But the Guardian has unfortunately not understood security well enough, nor have people from the cryptography community really helped journalists understand the limits of this approach. Nor has Facebook, I think.
> All I can see there is that you asked for the source code of the QR generator and he delivered.
Eh, I kind of agree with geocar's point in the original thread. Moxie shared source code to "a" QR generator. Is there any way to verify that this code is what's running inside of WhatsApp?
More interestingly, I stand by my prediction that WhatsApp would have a backdoor in it or start selling information once Facebook acquired it, regardless of Moxie's improvements. Looks like I called it again. People need to stay away from this messenger unless they absolutely have to be on it for friends and family. I still encourage them to download Signal for anything private.
As many argue (e.g. tptacek), and I find myself increasingly convinced by this:
- source code can be looked at, even verified, but it's hard. (Remember many bugs in OpenSSL, for example.)
- but binaries, too, can be disassembled, even verified. It might be harder, but it's shades of grey, not binary (ha).
- even if you have the source code, you have to ensure that the binaries actually distributed to your phone correspond to the source code. That muddles the issue further.
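One partial answer to the last point, assuming the build is reproducible: rebuild the app from source yourself and compare digests with the artifact the store actually shipped. A minimal sketch (the file paths are hypothetical, and this only proves anything when builds are bit-for-bit reproducible):

```python
# Compare a locally reproduced build against the distributed binary.
# Meaningful only for reproducible builds; paths are illustrative.
import hashlib

def sha256_file(path, chunk=1 << 16):
    """Stream a file through SHA-256 and return its hex digest."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        while True:
            block = f.read(chunk)
            if not block:
                break
            h.update(block)
    return h.hexdigest()

def same_build(local_build_path, store_apk_path):
    """True iff the locally rebuilt artifact is bit-identical to the
    distributed one."""
    return sha256_file(local_build_path) == sha256_file(store_apk_path)
```

Of course this still doesn't help against a store serving a special per-user build, which is the "muddling" the parent bullet is pointing at.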
I'd go further and say Moxie is complicit by way of negligence. It's unethical to assist in the implementation of your protocol when you can't guarantee its privacy protections will actually stand. Otherwise it's free PR for Facebook to tout "Snowden-approved crypto".
I have no doubt Moxie acted in good faith and wanted to expand encryption to a large number of users, but this is just another example of why proprietary software cannot be trusted.
Any and all proprietary implementations of the Signal protocol are now suspect. OWS should denounce these implementations at least as firmly as they do interoperable open source Signal client forks.
On a completely unconnected note, what was the name of that technique that GCHQ uses to disrupt online forums and subtly undermine people's reputations?
> I'd go further and say Moxie is complicit by way of negligence
Your "further" stance is not supported by the evidence. You might disagree with the design choices, but they're not negligence or "complicity". Moxie answered, in the other thread, that
> a fact of life is that the majority of users will probably not verify keys. That is our reality. Given that reality, the most important thing is to design your product so that the server has no knowledge of who has verified keys or who has enabled a setting to see key change notifications. That way the server has no knowledge of who it can MITM without getting caught. I've been impressed with the level of care that WhatsApp has given to that requirement.
> I think we should all remain open to ideas about how we can improve this UX within the limits a mass market product has to operate within, but that's very different from labeling this a "backdoor."
The key part is this (it was apparently reported back in April 2016, with Facebook replying that it's "expected behavior"): it's not something a general attacker can do, but it would enable WhatsApp/Facebook to read conversations:
> WhatsApp has the ability to force the generation of new encryption keys for offline users, unbeknown to the sender and recipient of the messages, and to make the sender re-encrypt messages with new keys and send them again for any messages that have not been marked as delivered.
It's worth noting, as the article says, that this is built on top of the Signal protocol. In Signal, a similar situation with a user changing keys while offline will result in a failure of delivery. Within WhatsApp, under Settings > Account > Security, there is an option to Show Security Notifications, which will notify you if a user's key has changed.
According to the article, however, the notification is given only after the messages are resent. There is seemingly nothing the user can do to prevent retransmission on a forced key change. The notification prevents further information from being sent, but any messages that were undelivered at the time could already have been snooped on.
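To make the difference between the two behaviors concrete, here is a small Python sketch of the two resend policies as described above. The function and field names are invented for illustration; this is not real client code:

```python
# Illustration (not actual client code) of the two resend policies for
# a message that was still undelivered when the recipient's key changed.

def signal_policy(undelivered, old_key, new_key):
    """Signal (as described): delivery fails on a key change; nothing is
    re-encrypted until the sender acts on the warning."""
    if new_key != old_key:
        return {"resent": [], "status": "blocked: key changed"}
    return {"resent": undelivered, "status": "delivered"}

def whatsapp_policy(undelivered, old_key, new_key, show_notifications=False):
    """WhatsApp (as reported): undelivered messages are automatically
    re-encrypted to the new key and resent; the user is notified only
    afterwards, and only if the opt-in setting is enabled."""
    resent = [(msg, new_key) for msg in undelivered]  # resent before any warning
    changed = new_key != old_key
    note = "security code changed" if (changed and show_notifications) else None
    return {"resent": resent, "notification": note}

msgs = ["meet at 6", "bring the documents"]
print(signal_policy(msgs, old_key="K1", new_key="K2")["status"])
print(len(whatsapp_policy(msgs, "K1", "K2")["resent"]), "messages resent, no warning shown")
```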
I happened to have the Security Notifications on for a while now. I see the message: "X's security code has changed." pretty often.
Under what circumstances does a new pair of encryption keys get generated?
One circumstance is when you put your sim card in a different phone. The new phone recognises that you already have a WhatsApp account, as it's tied to your phone number, but it doesn't have your private key, so it will generate a new pair and start exchanging the public part.
I don't think this is as serious as it seems: this exploit only applies to undelivered messages, which, granted, is not great, but is at least something.
And any WhatsApp update could potentially include code to snoop on decrypted messages, so exploits that can only be performed from the WhatsApp server side - i.e. the example in the article about snooping on entire conversations - are not really that relevant.
Having said that, it's disappointing and they should adopt Signal's approach.
> Boelter said: “[Some] might say that this vulnerability could only be abused to snoop on ‘single’ targeted messages, not entire conversations. This is not true if you consider that the WhatsApp server can just forward messages without sending the ‘message was received by recipient’ notification (or the double tick), which users might not notice. Using the retransmission vulnerability, the WhatsApp server can then later get a transcript of the whole conversation, not just a single message.”
In other words, what seems like "a vulnerability that only affects some messages" could be turned into a full blown interception capability with very little change.
It's possible for any message that is not marked as delivered.
All Facebook has to do is not mark messages as delivered, i.e. lie to the device, which can probably be done easily. So they could ask a device to regenerate keys and send the same message again, over and over.
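Boelter's scenario can be sketched as a toy model (hypothetical classes, not WhatsApp's actual code): a server that withholds delivery receipts and then forces a key change it controls gets every "undelivered" message re-encrypted and retransmitted to it, i.e. a whole-conversation transcript rather than a single message:

```python
# Toy model of the delivery-receipt + forced-rekey attack described above.
# All names are invented for illustration.

class MaliciousServer:
    def __init__(self):
        self.undelivered = []   # messages deliberately never marked delivered
        self.transcript = []

    def relay(self, msg):
        self.undelivered.append(msg)
        return None  # withhold the delivery receipt (no "double tick")

    def intercept_all(self, sender):
        # Force a key change to a key the server controls; the client
        # automatically re-encrypts and resends everything "undelivered".
        self.transcript += sender.retransmit(new_key="SERVER_KEY")
        return self.transcript

class Sender:
    def __init__(self, server):
        self.server, self.outbox = server, []

    def send(self, msg):
        self.outbox.append(msg)
        self.server.relay(msg)

    def retransmit(self, new_key):
        # Re-encrypts to new_key and resends without user interaction.
        return [(m, new_key) for m in self.outbox]

server = MaliciousServer()
alice = Sender(server)
for m in ["msg1", "msg2", "msg3"]:
    alice.send(m)
print(len(server.intercept_all(alice)), "messages harvested")
```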
> The supposed “backdoor” the Guardian is describing is actually a feature working as intended, and it would require significant collaboration with Facebook to be able to snoop on and intercept someone’s encrypted messages, something the company is extremely unlikely to do.
What? You must be some sort of conspiracy theorist. Just be rational and extrapolate from your beliefs: if you admit that Facebook might do this, then why not Google, AT&T, Microsoft? There would be no end to it. Basically it would mean that all businesses are spying on you and handing the information over to the government.
I have complete faith that that is untrue based upon just the history of the last 5 years.
At the risk of stating the obvious: there is real benefit to using an entirely decentralised open source comms system like Riot.im (Matrix) or Conversations (XMPP), where you can pick precisely which app to run, who to trust to build that app, who to trust to advertise your public keys, and who to host your server.
It's inevitable that big centralised services like WhatsApp or even Signal are going to be under pressure from governments to support lawful intercept; in many countries it's essentially illegal to run a communication service that can't be snooped under a court order. Multinationals like Facebook are neither going to want to break the law (as that ends up with their senior management getting arrested: https://www.theguardian.com/technology/2016/mar/01/brazil-po...) nor to pull out of those territories (given WhatsApp market penetration in Brazil is 98.5% or similar).
No matter what IM service you use: As long as they manage the public keys for their users, they will be vulnerable to exactly this problem. This isn't just WhatsApp. This applies to iMessage and Signal too.
In all cases, we rely on the word of the service provider that they don't sneak additional public keys to encrypt to into the clients, and in all cases we hear that doing so would cause a message dialog to appear. But we have zero control over that, as this is just additional software functionality. (Yes, Signal is open source, but do you know whether the software you got from the App Store is the software that's on GitHub?)
Also imagine the confusion and warning-blindness it would cause if every time one of my friends gets a new device I'd get huge warnings telling me that public keys have changed.
This is a hard problem to solve in a user-friendly way and none of the current IM providers really solve it. Maybe Threema does it best with their multiple levels of authenticity.
As such I think it's unfair to just complain about WhatsApp here.
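The mechanism behind those "security code has changed" warnings is trust-on-first-use key pinning. A minimal sketch (not any real client's code); what a client does on "CHANGED" - block sending, as Signal does, or warn after the fact and only if enabled, as WhatsApp reportedly does - is exactly the policy question being debated here:

```python
# Trust-on-first-use (TOFU) key pinning, in miniature.
pinned = {}  # contact -> public key first seen for that contact

def check_key(contact, public_key):
    """Return 'new' on first contact, 'ok' if the pinned key matches,
    and 'CHANGED' if the advertised key differs from the pinned one."""
    if contact not in pinned:
        pinned[contact] = public_key  # trust on first use
        return "new"
    return "ok" if pinned[contact] == public_key else "CHANGED"

print(check_key("bob", "KeyA"))  # new
print(check_key("bob", "KeyA"))  # ok
print(check_key("bob", "KeyB"))  # CHANGED
```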
'As such I think it's unfair to just complain about WhatsApp here.'
I disagree. WhatsApp has a known vulnerability which they won't fix (indeed, they deliberately added this vulnerability on top of the Signal protocol), and there has been no denial that they have used it in the past.
They made a big PR song and dance about this feature only to backdoor it. That deserves criticism.
Exactly -- plus, the "notify key changes" setting is off by default. When I was looking through WhatsApp on my girlfriend's phone (it's useful to know what popular applications look like to be able to help others, even if I don't use them myself) I was very surprised to learn it was off by default. That's the same as disabling certificate checking on https and hoping that the pubkey you got is valid. It took me a while to believe it was actually the default and she hadn't turned it off herself (probably by accident), but it seems to be true. I just can't imagine how people call Whatsapp encrypted when whatsapp can push a new key à la "here, go encrypt your messages to this pubkey please".
The encryption protects you against others snooping on your messages in transit which is what it is meant to do.
Absolutely nothing really stops any of WhatsApp, Apple or even Signal itself from reading your messages if they want to or are compelled to. The only way to protect yourself against the service provider is to manage public keys yourself, manually, using GPG-like workflows, which have proven to be unworkable.
The trade-off is: do you want free and easy-to-use messaging that protects you from other snoopers but not from the service provider/government itself, or do you want much more secure systems that no one outside the technology priesthood will use?
I don't think people switch phones often enough for the warnings to be a nuisance or be ignored.
I agree that a lot of people would be very confused when they see the error, though, and while it's easy enough to explain even in layman's terms, I don't think it would help.
I think it's totally fair to complain about WhatsApp, since the issue mentioned is separate from the more general problem you describe; they could easily have done it the way Signal does, and I suspect they opted to do it the way they do for the same reason they don't have the security notifications on by default: they don't want to deal with the confusion.
> I don't think people switch phones often enough for the warnings to be a nuisance or be ignored.
Apparently in some countries they do, and that's a reason to compromise the rest of the world...
Just summarising how bizarre this excuse really is.
Make it an opt-in setting. In some countries, reliable connectivity in a situation of frequently changing devices (the more I think about it, the more contrived it sounds) might be more important than privacy, but in others it very much isn't, and the consequences of failing privacy are much worse than missing a message between swapping devices.
That's not a tradeoff you should get to make for everyone.
The error message itself (I have it on) is not at all obtrusive, btw; it's a friendly yellow (like the old Google ads), small type, which a user will either ignore or get a vague sense of unease about not being secure (which is exactly correct). I don't see how this could be further confusing.
... minus the libraries in native code which also are considerably harder to reverse-engineer than the java parts.
Also, unless you're suspicious and actually check, you could be served a special version by the App Store that was compiled only for you and contains the required add-a-key-but-dont-show-a-popup feature.
I'm not saying that Signal and/or Google are shipping a backdoor. I'm saying that we have to trust them that they don't.
> As long as they manage the public keys for their users, they will be vulnerable to exactly this problem.
Indeed, the most secure way is to generate and confirm each other's keys physically. It occurred to me that those with whom you'd truly want to communicate securely are likely people you have already met via other means, including in person, and so you should already have an effectively independent channel over which to share keys. It seems like the level of trust you have with someone is proportional to the probability of that being true: if you've never actually met someone in person, how do you know they are who they say they are? In some sense, how secure your communication with someone is doesn't matter if you don't already have that relationship of trust established.
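In-person confirmation usually reduces to comparing a short fingerprint derived from both parties' public keys, which is the idea behind Signal's "safety numbers". A sketch of the concept - the derivation below is made up for illustration and is not Signal's actual scheme:

```python
# Derive an order-independent fingerprint from two public keys, so both
# parties compute the same short string and can compare it out of band.
# Illustrative only; not Signal's real safety-number derivation.
import hashlib

def fingerprint(my_pubkey: bytes, their_pubkey: bytes) -> str:
    # Sort the keys so both sides get identical input material.
    material = b"".join(sorted([my_pubkey, their_pubkey]))
    digest = hashlib.sha256(material).hexdigest()
    # Group into short chunks that are easy to read aloud.
    return " ".join(digest[i:i + 5] for i in range(0, 30, 5))

alice_view = fingerprint(b"alice-pub", b"bob-pub")
bob_view = fingerprint(b"bob-pub", b"alice-pub")
print(alice_view == bob_view)  # True: same value on both phones
```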
The solution to this is to have multiple independent clients all implementing the same protocol. That way it doesn't matter if an IM service handles your public keys, because if it sends different ones, it can't prevent the client from notifying you. The service simply doesn't control the client.
In general, it is Facebook's control over the WhatsApp client where the vulnerabilities lie.
>As long as they manage the public keys for their users, they will be vulnerable
On the other hand, as long as users are required to manage their public keys, there won't be end-to-end encryption for the masses (which WhatsApp had declared as their goal and to some degree achieved).
At least until key management and other security basics are taught in elementary school, around the time the multiplication table is.
> As such I think it's unfair to just complain about WhatsApp here.
I think it would be wrong to start complaining about other apps. We don't know of vulnerabilities in other apps. We DO know of one in WhatsApp. Let's focus on what we know and take WhatsApp to task on it instead of wasting energy on what we don't know.
"Isn't it "weird" that they chose to block Signal app and not the signal-protocol based Whatsapp? If Whatsapp really implements the same kind of security and privacy measures that Signal does, why is Whatsapp allowed to continue operating? If signal is preventing them spy on users and they ban it, is in't it safe to assume that Whatsapp is NOT preventing them spy on users, so they let it operate? Wouldn't you expect Whatsapp to be also targeted, especially considering the broad user-base it has compared to Signal? Yes, I know they had blocked Whatsapp in the past, but they didn't block it now. Which means that something has changed in the relationship of the Egyptian gov and Whatsapp since 2015."
1. https://news.ycombinator.com/item?id=13219304
I would never trust a closed source messaging app if I was an activist, regardless of what encryption they claim to implement.
Not all speculation is inappropriate; sometimes it is the seed from which a correct conclusion ultimately grows.
It doesn't have to be THIS particular backdoor. "Why build one when you can build two at twice the price? Only, this [second] one can be kept secret."
[1]: https://news.ycombinator.com/item?id=11669395
https://moderncrypto.org/mail-archive/messaging/2015/001510....
I proposed an app architecture that would allow for flexible control over the roots of trust of programs using sandboxing techniques. Unfortunately there's been very little (no?) research into how to solve this problem, even though it's the next step in completing the journey that Signal has started.
By the way, just to make it super clear, the work Moxie has done on both Signal and WhatsApp is still excellent. It is, as I said, necessary work. But the Guardian has unfortunately not understood security well enough, nor have people from the cryptography community really helped journalists understand the limits of this approach. Nor has Facebook, I think.
Eh, I kind of agree with geocar's point in the original thread. Moxie shared source code to "a" QR generator. Is there any way to verify that this code is what's running inside of WhatsApp?
Deleted Comment
- source code can be looked at, even verified, but it's hard. (Remember many bugs in OpenSSL, for example.)
- but binaries, too, can be disassembled, even verified. It might be harder, but it's shades of grey, not binary (ha).
- even if you have the source code, you have to ensure that the binaries actually distributed to your phone correspond to the source code. That muddles the issue further.
I have no doubt Moxie acted in good faith and wanted to expand encryption to a large number of users, but this is just another example of why proprietary software cannot be trusted.
Any and all proprietary implementations of the Signal protocol are now suspect. OWS should denounce these implementations at least as firmly as they do interoperable open source Signal client forks.
They don't. Moxie does not want the forks to use his servers or the name of his app, that is all.
I just want to voice my opinion that maybe 1 in 100 people have Moxie's integrity and ethics.
Your "further" stance is not supported by the evidence. You might disagree with the design choices, but they're not negligence or "complicity". Moxie answered, in the other thread, that
a fact of life is that the majority of users will probably not verify keys. That is our reality. Given that reality, the most important thing is to design your product so that the server has no knowledge of who has verified keys or who has enabled a setting to see key change notifications. That way the server has no knowledge of who it can MITM without getting caught. I've been impressed with the level of care that WhatsApp has given to that requirement. I think we should all remain open to ideas about how we can improve this UX within the limits a mass market product has to operate within, but that's very different from labeling this a "backdoor."
https://news.ycombinator.com/item?id=13394900
https://tobi.rocks/2016/04/whats-app-retransmission-vulnerab...
EDIT:
Sorry, misread.
That is why I said I really don't mean to suggest Moxie/Signal was complicit in this move
This was presented in the lightning talks at 33c3, starting around minute 48: https://media.ccc.de/v/33c3-8089-lightning_talks_day_4
Here's the congress wiki with some more links: https://events.ccc.de/congress/2016/wiki/Lightning:A_Backdoo...
And a blogpost: https://tobi.rocks/2016/04/whats-app-retransmission-vulnerab...
What do you call a known vulnerability that can be used for eavesdropping that a company refuses to fix ?
1) A mistake
2) A bug
3) A backdoor
> WhatsApp has the ability to force the generation of new encryption keys for offline users, unbeknown to the sender and recipient of the messages, and to make the sender re-encrypt messages with new keys and send them again for any messages that have not been marked as delivered.
It's worth noting, as the article says, that this is built on top of the Signal protocol. In Signal, the same situation of a user changing keys while offline will result in failure of delivery. Within WhatsApp, under Settings > Account > Security, there is an option to Show Security Notifications, which will notify you if a user's key has changed.
And any WhatsApp update could potentially include code to snoop on decrypted messages, so exploits that can only be performed from the WhatsApp server side - i.e. the example in the article about snooping on entire conversations - are not really that relevant.
Having said that, it's disappointing and they should adopt Signal's approach.
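To make the difference concrete, here is a toy sketch (all names invented, not the actual Signal or WhatsApp code) contrasting the two delivery policies when a recipient's key changes while a message is still in flight:

```python
class KeyChangedError(Exception):
    pass


def encrypt(msg: str, key: str) -> str:
    # stand-in for real encryption; just tags the message with the key used
    return f"enc[{key}]({msg})"


def deliver_signal_style(msg: str, known_key: str, current_key: str) -> str:
    """Signal: if the recipient's identity key changed, fail loudly.
    The sender must verify the new key before anything is resent."""
    if current_key != known_key:
        raise KeyChangedError("recipient's key changed; not resending")
    return encrypt(msg, current_key)


def deliver_whatsapp_style(msg: str, known_key: str, current_key: str,
                           show_notification: bool = False) -> str:
    """WhatsApp: silently re-encrypt under the new key and resend.
    The security notification is opt-in and off by default."""
    if current_key != known_key and show_notification:
        print("security notification: contact's key changed")
    return encrypt(msg, current_key)  # resent regardless
```

The blocking behaviour of the Signal-style path is exactly what the retransmission issue removes: in the WhatsApp-style path the message goes out under whatever key the server announces.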
> Boelter said: “[Some] might say that this vulnerability could only be abused to snoop on ‘single’ targeted messages, not entire conversations. This is not true if you consider that the WhatsApp server can just forward messages without sending the ‘message was received by recipient’ notification (or the double tick), which users might not notice. Using the retransmission vulnerability, the WhatsApp server can then later get a transcript of the whole conversation, not just a single message.”
In other words, what seems like "a vulnerability that only affects some messages" could be turned into a full blown interception capability with very little change.
All Facebook has to do is not mark messages as delivered, i.e. lying to the device, which can probably be done easily. So they could ask a device to regenerate keys and send the same message again, over and over.
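The scenario Boelter describes can be modelled in a few lines (everything here is illustrative, not real client or server code): a server that withholds delivery receipts, then announces a key it controls, and a client that obligingly re-encrypts every "undelivered" message to it.

```python
class MaliciousServer:
    """A server that withholds delivery receipts, then harvests a transcript."""

    def __init__(self):
        self.held = []        # ciphertexts accepted but never acknowledged
        self.transcript = []

    def receive(self, ciphertext: str) -> None:
        self.held.append(ciphertext)
        # crucially: no "delivered" receipt (the double tick) is sent back

    def harvest(self, sender) -> None:
        # announce a fresh key the server itself controls; a client that
        # auto-retransmits re-encrypts every unacknowledged message to it
        server_key = "key-owned-by-server"
        for ct in sender.retransmit_unacked(server_key):
            # the server holds server_key, so it can read all of these
            self.transcript.append(ct)


class AutoRetransmitClient:
    def __init__(self):
        self.unacked = []  # plaintexts kept until a delivery receipt arrives

    def send(self, server: MaliciousServer, msg: str, recipient_key: str) -> None:
        self.unacked.append(msg)
        server.receive(f"enc[{recipient_key}]({msg})")

    def retransmit_unacked(self, new_key: str) -> list:
        # WhatsApp-style behaviour: re-encrypt everything not yet delivered
        return [f"enc[{new_key}]({m})" for m in self.unacked]
```

Because no message is ever acknowledged, the whole backlog, i.e. the entire conversation, ends up re-encrypted to the server's key in one go.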
Deleted Comment
So it downgrades "end-to-end encryption" to "transport layer security".
I, for one, certainly cannot imagine Facebook collaborating to such an extent with the government.
I have complete faith that that is untrue based upon just the history of the last 5 years.
https://whispersystems.org/blog/there-is-no-whatsapp-backdoo...
It's inevitable that big centralised services like WhatsApp or even Signal are going to be under pressure from governments to support lawful intercept; in many countries it's essentially illegal to run a communication service that can't be snooped under a court order. Multinationals like Facebook are neither going to want to break the law (as it ends up with their senior management getting arrested: https://www.theguardian.com/technology/2016/mar/01/brazil-po...) - nor pull out of those territories (given WhatsApp market penetration in Brazil is 98.5% or similar).
In all cases, we rely on the word of the service provider that they don't sneak additional public keys to encrypt to into the clients. In all cases we hear that doing so would cause a warning dialog to appear, but we have zero control over that, as it's just additional software functionality. (Yes, Signal is open source, but do you know whether the software you got from the App Store is the software that's on GitHub?)
Also imagine the confusion and warning-blindness it would cause if every time one of my friends gets a new device I'd get huge warnings telling me that public keys have changed.
This is a hard problem to solve in a user-friendly way and none of the current IM providers really solve it. Maybe Threema does it best with their multiple levels of authenticity.
As such I think it's unfair to just complain about WhatsApp here.
I disagree. WhatsApp have a known vulnerability which they won't fix (indeed they deliberately added this vuln on top of the Signal protocol), and no denial that they have used this vulnerability in the past.
They made a big PR song and dance about this feature only to backdoor it. That deserves criticism.
how would you fix it without causing notification-blindness?
Absolutely nothing really stops any of WhatsApp, Apple or even Signal itself from reading your messages if they want to or are compelled to. The only way to protect yourself against the service provider is to manage public keys yourself manually, using GPG-like workflows which have proven to be unworkable.
The trade-off is: do you want free and easy-to-use messaging which protects you from other snoopers but not from the service provider/government itself, or do you want much more secure systems that no one outside the technology priesthood will use?
I agree that a lot of people would be very confused when they see the error, though, and while it's easy enough to explain even in layman's terms, I don't think it would help.
I think it's totally fair to complain about WhatsApp, since the issue mentioned is separate from the more general problem you describe; they could easily have done it the way Signal does, and I suspect they opted to do it the way they do for the same reason they leave the security notifications off -- they don't want to deal with the confusion.
apparently in some countries they do and that's a reason to compromise the rest of the world..
just summarising how bizarre this excuse really is.
make it an opt-in setting, in some countries reliable connectivity in a situation of frequently changing devices (the more I think about it, the more contrived it sounds) might be more important than privacy, but in others it very much isn't and the consequences for failing privacy are much worse than missing a message between swapping of devices.
that's not a tradeoff you should get to make for everyone.
the error message itself (I have it on) is not at all obtrusive btw, it's a friendly yellow (like the old Google ads) small type, which a user will either ignore or get a vague sense of unease about not being secure (which is exactly correct), I don't see how this can be further confusing.
Yes: https://whispersystems.org/blog/reproducible-android/
Also, unless you're suspicious and actually check, you could be served a special version by the App Store that was compiled only for you and contains the required add-a-key-but-dont-show-a-popup feature.
I'm not saying that Signal and/or Google are shipping a backdoor. I'm saying that we have to trust them that they don't.
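What reproducible builds buy you is exactly this check: build the app yourself from the published source and compare it byte for byte against what the store delivered. A minimal sketch (file paths are placeholders; a real comparison of Android APKs involves normalising signatures first, which is what Signal's reproducible-build tooling handles):

```python
import hashlib


def sha256_of(path: str) -> str:
    """Hash a file in chunks so large APKs don't need to fit in memory."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()


def builds_match(store_apk: str, local_apk: str) -> bool:
    """True if the store-delivered binary matches your own build."""
    return sha256_of(store_apk) == sha256_of(local_apk)
```

If the hashes differ, either the build isn't reproducible or you were served something other than what the source produces.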
Indeed, the most secure way is to generate and confirm each other's keys physically. The thought occurred to me that those whom you'd want to communicate truly securely with are likely people you have already met via other means, including in person, and so you should already have an effectively independent channel to share keys. It seems like the level of trust you have with someone is proportional to the probability of that being true: if you've never actually met someone in person, how do you know they are who they say they are? In some sense, you could say that how secure the communication with someone is doesn't matter if you don't already have that relationship of trust established.
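The usual way to use that independent channel is a short fingerprint both parties can read aloud. A sketch of the idea (this is not Signal's actual safety-number algorithm, just the general shape): derive a short string from the pair of public keys, order-independently, so both sides compute the same value.

```python
import hashlib


def fingerprint(key_a: bytes, key_b: bytes) -> str:
    """Short, human-comparable digest of a key pair.
    Sorting makes it order-independent, so both parties get the same string."""
    material = b"".join(sorted([key_a, key_b]))
    digest = hashlib.sha256(material).hexdigest()
    # group into 5-character blocks that are easy to read aloud
    return " ".join(digest[i:i + 5] for i in range(0, 30, 5))
```

If the string you read aloud in person matches what your contact's app shows, no man in the middle has substituted keys, regardless of what the server claims.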
In general, it is FB's control over the WhatsApp client where the vulnerabilities lie.
On the other hand, as long as users are required to manage their public keys, there won't be end-to-end encryption for the masses (which WhatsApp had declared as their goal and to some degree achieved).
At least until key management and other security basics are taught in elementary school, alongside the multiplication table.
I've tried in the past to get friends to switch over to Telegram, but there are issues, since they rolled their own encryption protocol.
I've looked into using Mumble for voice, it seems quite secure because you host it yourself, and it's open source.
There's also a good list from the EFF: https://www.eff.org/node/82654
Deleted Comment
I think it would be wrong to start complaining about other apps. We don't know of vulnerabilities in other apps. We DO know of one in WhatsApp. Let's focus on what we know and take WhatsApp to task on it instead of wasting energy on what we don't know.