m1el · 6 years ago
I appreciate the effort, but this is one of those cases in which there isn't much security added by encryption.

If I assume the website owner is malicious (or subverted by powers that be), I cannot trust the JS code provided by the website. Key disclosure is as trivial as one line hidden somewhere in megabytes of JS code delivered from the server. Which makes the "end-to-end" part nearly meaningless.

_bxg1 · 6 years ago
How does that differ from a native app? In any E2E context you have to trust the client code.

Sure, there are a couple extra steps needed here:

- The website author needs to avoid casually throwing in dependencies (unlike perhaps the average JS project)

- HTTPS for loading code resources is crucial; maybe even CDNs need to be avoided

- The user needs to avoid having browser extensions enabled

But the fundamental problem doesn't seem different nor intractable.

thanksforfish · 6 years ago
I agree, not intractable but there are meaningful differences:

* A native app can be installed once. A browser app is effectively downloaded fresh each time you use it. That gives more exposure to attack.

* A native app is downloaded, then run. A browser app does both in one step. Separation gives the user a chance to verify using hashes, signatures, etc., ideally cross-checked from multiple different domains in case only the download server/primary domain is hacked. If there are sensitive user cookies, malicious javascript can steal them immediately on page load.

Native apps that update themselves can be a mess, but OS or package manager updates are generally carefully managed (e.g. apt).

For the CDN concern, sub resource integrity should work well (of course you need to verify that the version you hash is actually safe, which people sometimes skip). I think browser support is good these days.

But I agree, not intractable.

zimbatm · 6 years ago
As a website it's trivial to serve different content to different users. Each page load can or cannot contain a compromised payload. And since the website also identifies the users it's also easy to target specific users.

To reduce that risk, and temptation, we want to be in a model where the likelihood of detecting a compromised payload is increased. This is why software distributions like Debian are important, because they serve the same content to all their users, and there is a chain of verification that is established. One vigilant user is enough to detect that breach.

An even better solution is if the source code is available, because that makes it easier for vigilant users to find those undesirable changes. This, paired with reproducible builds, also allows independently verifying that the compiled output is indeed produced from that given source code.

Somewhere along that line we also have app stores, which don't give many more guarantees than the website but transfer that trust from the publishers to the application distribution platform. A compromised user still has the opportunity to capture the binary and submit it for analysis.

m1el · 6 years ago
I can build a native app from the codebase myself. I can be sure that a native app won't change in-between launches.

> In any E2E context you have to trust the client code.

I don't have to trust code which is continuously being delivered from the server. This is an intractable problem.

derefr · 6 years ago
> In any E2E context you have to trust the client code.

You have to trust it not to exfiltrate your local plaintext data, sure; but encryption and key management in a native app might be outsourced to a TPM chip, in a way where the native app can't steal the keys, nor decrypt anything "behind your back", in practical terms meaning there's a smaller surface-area of code to audit.

dheera · 6 years ago
It differs from a native app in that every time they change the JS code behind the website, you would not know. You could have thoroughly inspected the source, and then when you go to use it tomorrow, that one line of spyware could be inserted.

For native apps, you can install a particular version from source and not change it.

dylkil · 6 years ago
>As the maintainer of Excalidraw, I now sleep much better at night. If the hosting service gets compromised, it doesn’t really matter as none of the content can be decrypted without the key.

Seems the maintainer made the point of protecting himself more than anything.

henriquez · 6 years ago
Right. "Javascript cryptography considered harmful" is a trope at this point, but that doesn't make it an axiom. The author is using this to _not store users' private data_ on their own infrastructure.

Even if Excalidraw _was_ compromised and someone put malicious JS code in the app, the total scope of the breach would be very limited. Records could only be exfiltrated on an individual basis, and only if the users opened their URLs and exposed the decryption key to the malicious Javascript payload.

This is a smart way to offer a free service without worrying too much about liability or compliance with the ever-expanding set of regional privacy laws.

xvector · 6 years ago
Are there any current browser standards for validating that the served JS is ‘safe’? I can see such a thing also being useful for applications like ProtonMail, for example.

What about something like a browser extension that queries an audit server for a list of signed hashes of ‘safe’ JS?

- Well-known code auditors could perform reviews of JS

- They could sign JS they find safe with their PGP keys and upload it to some server

- Users could choose to trust certain auditors

- Every time you visit a site that you choose to require this kind of validation, you could check that the hashed JS matches the key

I guess we’re going the way of PKI+SHA hashes of distributed binaries all over again though. Also, if the website updates JS, you’d need to wait for auditors to review it, and there’s a whole mess there (websites would probably have to serve beta versions of their code ahead of release so auditors could have time to review them). Finally, JS would have to be static across all users and I’m not sure how feasible this is.

There is some benefit, though? Now you are distributing the trust over ProtonMail and your trusted auditors. This could be useful if we find ProtonMail to be compromised one day. This might even spawn businesses aimed solely at reviewing websites’ code.

There has to be a better way to do this. How can we bring ‘code review’ to web applications?

vbezhenar · 6 years ago
You need to sign the entire chain of HTML+JS+CSS+everything else, as you can build a keylogger with CSS. The Web is weird. I wouldn't be surprised to find out that one can build a keylogger with some tricky font file. But it definitely should be possible to build an addon like that, although it would require some good cryptographers so as not to make mistakes.

I don't think there are any browser standards for that. I guess that such a webapp is too niche and this threat is extremely niche, so very few people would care for it to be a general purpose standard.

cxr · 6 years ago
dat:// can give you that.
geofft · 6 years ago
It's using end-to-end encryption to address a slightly different threat model than the usual one: the website operator doesn't want the liability/danger of holding cleartext data. E2E does solve this even in the browser scenario.

Now an attacker who dumps the server's disks doesn't compromise user data; they'd have to actively modify the website. This raises the bar for a successful targeted attack and basically eliminates risk from untargeted attacks, which is well worth doing.

(Also now warrants that can compel disclosure have nothing they can target, eliminating Lavabit-style attacks. There's still the possibility of court orders making you write software like the proposed use of the All Writs Act against Apple, but that's on much less certain legal ground.)

It's true that this doesn't let you avoid trusting the provider, but you're not going to get that anyway - and this scheme is certainly no worse. (Arguably you're not going to get that on native apps either these days, thanks to closed-source app stores and automatic updates, and automatic updates are a very good thing.)

est31 · 6 years ago
> It's true that this doesn't let you avoid trusting the provider, but you're not going to get that anyway - and this scheme is certainly no worse. (Arguably you're not going to get that on native apps either these days, thanks to closed-source app stores and automatic updates, and automatic updates are a very good thing.)

There is no guarantee, but security researchers often check the contents of apps like WhatsApp, Threema, etc. However, that doesn't help you if you specifically get a special version of the app that sends your content to the app writers. For websites, how hard this is depends on your infrastructure, but it is more or less trivial. For app stores like Google Play or apple app store, there is no such feature to push a special version to a subset of the population specified by name. You can only push it to entire classes of devices.

So suddenly Google, Apple, etc. have to be in on the attack which drastically reduces the number of people who can pull off supply chain attacks. Maybe you should be still worried about the US government, but while Saudi princes can bribe the Threema creator, they can't compel Apple to push a malicious update the Threema creator signed to select people only. So they'll have to hack the device via other means.

duxup · 6 years ago
I'm not refuting your point here, but it feels like on HN every time we hit any web topic there are these kinds of comments: "wait, but this could happen / what if the website owner is malicious / included some wonky code he doesn't know about, etc."

They're not wrong, but also not the point of the article.

>As the maintainer of Excalidraw, I now sleep much better at night. If the hosting service gets compromised, it doesn’t really matter as none of the content can be decrypted without the key.

This seems to be the point of the article, and valid IMO. Yet on HN more and more we get dragged off onto these larger state-of-the-web topics. They're not wrong either, but I feel like the volume sort of drowns out the point / valid topics too.

rixtox · 6 years ago
I think the key problem here is where and how do we establish a root of trust for Web applications.

Currently we have some form of root of trust for HTTPS/TLS, code signing, and trusted execution, in that the OS or browser or chipset distributes a set of "trusted" root certificates. For consumers, these are implicitly backed by the big companies that manage the screening, auditing, and distribution of these certificates. But of course, these certificates have limited use cases that don't yet cover the end-to-end encryption use case for the Web.

One way or another, we have to start our root of trust at some layer, whether it's hardware, OS, drivers, or applications. But in general, the lower the layer, the lower the risk, because it's easier for an evil actor to target specific users at higher layers.

There is more to be solved than just the root of trust for E2E on the Web. For example, even if we can use a trusted execution environment in a Web application to ensure secure key generation and key escrow, we still have to face the problem of how to input or present the cleartext data to the user. If we still let the JS code handle the cleartext in any way, there could still be a chance that the distributor of the JS code might steal it.

With that said, not only should we have a root of trust, but we also have to trust the UI provider that operates on the sensitive data for user input or display. The lowest UI layer is usually the OS, so even with a hardware root of trust, if the trusted UI is in the OS layer, we would still be limited by the level of trust in the OS layer.

thekyle · 6 years ago
So just load the JS locally from a browser extension. I know MEGA (end-to-end encrypted cloud storage) has an extension for that. I'm sure other end-to-end encrypted web apps (ProtonMail, Bitwarden, etc) could do the same.

https://mega.nz/extensions

thanksforfish · 6 years ago
> there isn't much security added by encryption

Compared to just using HTTPS?

While there are ways the encryption can be removed or subverted that's a far cry from adding no benefit. Compared to just using HTTPS for client to server encryption, this protects the user from a bunch of server-based attacks. Certainly not all, but it does meaningfully raise the bar.

Keep in mind that security needs to be usable, and needs to exist in tools users actually use. If I'm a user of a web app, then browser-based e2e encryption helps me. Downloading the diagram, installing PGP tools, figuring out how to use them, sending the file via a different mechanism... probably not something an average user wants to try.

smolder · 6 years ago
Modifying source to extract keys runs a risk of detection; it is overt. So it's preferable to passively snoop. If someone has a way to passively snoop TLS contents, the extra encryption protects data from them.
dwheeler · 6 years ago
Agreed. If it stored data on a different server (allowed via CORS) then there could be some additional security, since then as a long as the JavaScript isn't subverted on the same site, you're storing encrypted data on a different server.

But yes, if you run JavaScript, you're always trusting that site to not do a "quiet update" of the code to something malicious. I don't see any obvious way to counter that, short of downloading the JavaScript code & running it yourself.

jstanley · 6 years ago
I like this kind of design in conjunction with delivering the code over IPFS. That way you know the code has not been tampered with, as long as you trust your IPFS gateway.
nwsm · 6 years ago
All JS is executed in the browser. If the malicious site wanted to steal the data, it would have to send the key to the server.

With enough inspecting, debugging, and network watching you would be able to see what they're doing and how.

While I agree you can obfuscate this in the JS payload, it doesn't make e2e encryption in web apps "meaningless". It would just take one user doing some due diligence to expose the malice.

vbezhenar · 6 years ago
It does not work this way if the attack is targeted at very few users, and it's trivial to serve different scripts to different users.
lipis · 6 years ago
It's open source :)
m1el · 6 years ago
And how can I be sure that the code delivered by the server is the same code as in the public codebase?

Nothing stops the owner of the service from running arbitrary JavaScript in users' browsers.

bconnorwhite · 6 years ago
Why encrypt passwords? E2E encryption is similar in that it is as much for the website owner as for the client. Data is a liability.
MaxBarraclough · 6 years ago
LastPass has the same problem.
lukeschlather · 6 years ago

    const encrypted = await window.crypto.subtle.encrypt(
      { name: "AES-GCM", iv: new Uint8Array(12) /* don't reuse key! */ },
      key,
      new TextEncoder().encode(JSON.stringify(content))
    );
This looks wrong. The iv should be a randomly generated string that is only used once. I'm not super familiar with modern Javascript, but I think you're just initializing a bunch of null bytes.

Honestly, I feel like whoever designed AES intended for people to make this mistake, because half of the sample code I see has this error, and it would have been easily avoided by specifying the IV length in the standard and requiring libraries to automatically generate it and prepend it to the ciphertext rather than letting callers have control over how it is initialized and stored.

prophesi · 6 years ago
Yeah, the WebCrypto API has a built-in function for this. I hope the author fixes the code, because this is otherwise a dangerous tutorial.

window.crypto.getRandomValues(new Uint8Array(12));

If the author followed the MDN guides for it, they would have noticed that the example code also uses a random array. https://developer.mozilla.org/en-US/docs/Web/API/SubtleCrypt...

dchest · 6 years ago
No, it doesn't need to be random if the key is used only once.

PS The author didn't just copy something from a tutorial without knowing what he was doing, he actually asked for advice https://github.com/excalidraw/excalidraw/issues/610

dchest · 6 years ago
It is not wrong. In AES-GCM, the "iv" is a nonce — any value (random, counter, cat picture) that is different for different encryptions with the same key. If you use the key only for one encryption, as it is used here, it can be static (e.g. all zeros).

BTW, regarding the AES comment: the authors of AES designed the block cipher, they didn't design the mode (GCM, which is CTR and GMAC). Nonce is a standard requirement for a stream cipher/stream mode of a block cipher/AEAD. In WebCrypto it's actually hard to skip setting the iv. And again, the author didn't make a mistake here.

lioeters · 6 years ago
> I think you're just initializing a bunch of null bytes.

You're right. From MDN:

> The Uint8Array typed array represents an array of 8-bit unsigned integers. The contents are initialized to 0.

..And the function signature for crypto.subtle.encrypt() describes the iv parameter for the algorithm:

> iv - A BufferSource — the initialization vector. This must be unique for every encryption operation carried out with a given key.

> Put another way: never reuse an IV with the same key.

https://developer.mozilla.org/en-US/docs/Web/API/AesGcmParam...

The code comment implies that the author knew the iv parameter should be unique - and yet passed an array of zeros every time.

dchest · 6 years ago
never reuse an IV with the same key

Exactly, the key is always different.

wildduck · 6 years ago
Well on the site it was said

"We encrypt the content with that random key. In this case, we only encrypt the content once with the random key so we don’t need an iv and can leave it filled with 0 (I hope…)."

Anyone think that is a good idea?

dchest · 6 years ago
It's a good idea if you encrypt with the same key _once_ — you can avoid attaching nonces to your ciphertext (less code and data), and have only a 16-byte key in the URL.

In fact, using a random IV with AES-GCM is not exactly safe: 12-byte nonce is too small to avoid collisions with many encryptions. The recommendation is to not encrypt more than 2^32 messages with the same key if you use the random nonce.
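For the curious, the 2^32 figure is the standard birthday-bound back-of-the-envelope for 96-bit random nonces (not spelled out in the comment itself): after $q$ encryptions under one key,

```latex
\Pr[\text{nonce collision}] \;\approx\; \frac{q(q-1)}{2\cdot 2^{96}} \;\approx\; \frac{q^{2}}{2^{97}},
\qquad
q = 2^{32} \;\implies\; \Pr \approx \frac{2^{64}}{2^{97}} = 2^{-33},
```

which keeps the collision probability below the commonly used $2^{-32}$ ceiling.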

SAI_Peregrinus · 6 years ago
If the key is securely random AND only used once, it won't compromise the encryption. But it's a bad idea, since it requires enforcing that the key is a nonce, instead of just a key. It's a bad habit, and can easily lead to compromise (when someone inevitably uses it as example code in a situation where those guarantees don't hold, for instance.)
vjeux · 6 years ago
Super excited to see that excalidraw made the front page! We published an article yesterday explaining how the end to end encryption works when sharing a link: https://blog.excalidraw.com/end-to-end-encryption/
mkl · 6 years ago
Any chance you could add actual pen-like drawing? That would make it much more versatile, and make it fit its description better ("Excalidraw is a whiteboard tool that lets you easily sketch diagrams that have a hand-drawn feel to them."). Bonus points for using PointerEvents to change the line width with the pen's pressure.

Saving the diagram automatically and restoring when the site is revisited is a nice touch.

lipis · 6 years ago
seemslegit · 6 years ago
As with every in-browser encryption deployment - what's the threat model here ?

> I now sleep much better at night. If the hosting service gets compromised, it doesn’t really matter as none of the content can be decrypted without the key.

Which can be easily exfiltrated by the compromiser as they are now in a position to deliver and run arbitrary javascript in your users browser where the keys reside

Also, why can't I draw an actual non-flawed circle ?

SamBam · 6 years ago
> As with every in-browser encryption deployment - what's the threat model here ?

I believe the biggest win is that existing content will not be accessible to a hacker even if they fully compromise the website, unless users re-open them. So, sure, plenty of content may get compromised if the site gets hacked, but some large percentage of old content will not be.

> Also, why can't I draw an actual non-flawed circle ?

Adding sloppiness when whiteboarding is a common pattern to indicate roughness. It subtly cues the viewer not to treat it as a completed product, and allows them more freedom to make changes. If you wanted clean lines, there are many web drawing tools for that too.

seemslegit · 6 years ago
> I believe the biggest win is that existing content will not be accessible to a hacker even if they fully compromise the website, unless users re-open them. So, sure, plenty of content may get compromised if the site gets hacked, but some large percentage of old content will not be.

The attacker won't need to wait for users to open a specific drawing - once a user browses to the website, the attacker can grab the keys for all drawings that user has, assuming not that many of them exist in the first place and the attacker has the list of key ids from the compromised backend.

It does call for a much more noisy and visible attack which is by itself a valuable mitigation.

3pt14159 · 6 years ago
Well, it helped one of those password manager companies during Cloudbleed. There are a couple of threat models where it does help. Mitigates bitsquatting too.
geofft · 6 years ago
Depends on the attack, really. Some realistic threats:

- The hosting service doesn't wipe physical disks they discard.

- The hosting service doesn't wipe virtual disks between customers.

- Your offsite backup provider gets compromised.

- Someone conducts a non-targeted attack, dumps whatever SQL database they see, and leaves before they get noticed.

- Someone conducts a targeted attack but gets noticed before they can develop a working patch to compromise the service.

- Someone (e.g., a government) pressures your hosting provider for data but doesn't want you to know. Modifying your JS files would risk being noticed.

smolder · 6 years ago
Let's say someone vacuums up all the packets going over the wire passively and has some way of breaking TLS. Redundant encryption would be a further obstacle to reading the contents. TLS gets MITM'd in some environments, so it's useful there.
seemslegit · 6 years ago
MITM'd == not passive, so can inject malicious js.
1f60c · 6 years ago
> Also, why can't I draw an actual non-flawed circle ?

On my laptop (a MacBook Pro) holding down the Shift key accomplishes that. As for the hand-drawn edges, that's just part of the aesthetic :-)

seemslegit · 6 years ago
Yeah, I was referring to the edges; if you're gonna add a "sloppiness" level in the toolbar you might as well have proper edges
DVassallo · 6 years ago
Looks neat!

End-to-end encryption in the browser is not perfect security, as other comments are saying, but it’s significantly better than having user data in clear on a MySQL database somewhere.

PS. I’m the founder of Userbase.com — an open source service to help you build end-to-end encrypted web apps like this.

j4_hnews · 6 years ago
One of the problems with hacking the hashtag (#) in the URL is that this fragment identifier is supposed to be used to identify a portion of the document. As such unexpected things can happen when you try to exploit the hash tag for other purposes.

For example, with older versions of Microsoft Office you cannot use a pound character in a hyperlink:

https://support.microsoft.com/en-us/help/202261/you-cannot-u...

Some email applications may unexpectedly strip the tag, attempting to be smart about handling it in the RFC 3986 sense (not anticipating this use-case at all). When the data appended to the # is a dependency to view the link, it can more easily render the link unusable.
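For context on why the key rides in the fragment at all: browsers never include the part after `#` in the HTTP request, so it stays out of server logs. A sketch of the split (the helper name and the example fragment format are illustrative, not Excalidraw's exact scheme):

```javascript
// Sketch: which parts of a share URL reach the server vs. stay client-side.
// The fragment is stripped by the browser before the request is sent.
function splitShareUrl(shareUrl) {
  const url = new URL(shareUrl);
  return {
    requestedFromServer: url.origin + url.pathname + url.search,
    keptInBrowser: url.hash.slice(1), // e.g. "documentId,decryptionKey"
  };
}
```

Which is exactly why intermediaries that "helpfully" rewrite or strip the fragment, as described above, can break the link entirely.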

jakearmitage · 6 years ago
What would be a valid alternative?
rattray · 6 years ago
For anyone else uninitiated:

> Excalidraw is a whiteboard tool that lets you easily sketch diagrams that have a hand-drawn feel to them.

It appears to be free open-source software on an MIT license.

https://github.com/excalidraw/excalidraw

ejstronge · 6 years ago
More information about the project (I’m unaffiliated, just wanted to learn more!) https://blog.excalidraw.com/reflections-on-excalidraw/