I should be happy about this -- who wouldn't want the entire web to be encrypted -- but SSL is so broken for normal people. SSL is expensive (wildcard certificates run $70 a year and up), confusing (how does one pick between the 200 different companies selling certificates?), and incredibly difficult to set up (what order should I cat the certificate pieces in again?).
If SSL doesn't change, this move will cut the little folks out of the internet. What are Mozilla's values?
That's just one project, and it doesn't even exist yet.
The web is moving faster every day, apparently. I sure do hope that project will be all it's cracked up to be.
For example, I need IP-only certs for a new project I'm working on (waiting for DNS to propagate to all clients is too unreliable and slow). If letsencrypt doesn't do that... well then I'd have to hope real hard for a competent CA out there who has an automated process available that allows IP-only certs. And whatever their price, if companies start following Mozilla's lead too soon, I'll have to pay up.
The wording in the article is perhaps not so damning yet, but it's still making me uneasy that they put out this press release while there are currently ZERO viable solutions for this.
How well does that work on a corporate intranet? How well does it work with Windows?
If the friction for testing, say, an enterprise LOB app on an internal-only QA IIS server is any higher than "basically zero" with Firefox, and the same friction doesn't apply to Chrome or IE, well.
The next problem is IP addresses. How well is SNI supported these days? I've been meaning to experiment with it, but the lack of extra personal certs has prevented me.
SSL should be a universally available free resource. I expect that it will be in the near future. That said, it is still very cheap for small sites too:
$9 - $11 / year for perfectly good certs. Less than $1 per month is a small burden.
Those are today's prices, based on past demand. As demand for SSL hosting goes up, gradually replacing plaintext hosting, the price will come down.
I actually expect the price of plaintext HTTP hosting to go up a bit; partly due to reduced demand, but also due to increased risk/liability. With SSL as the "industry best practice", I expect at least a few bean counters to start pricing in the risk of private-information leaks, or hypothetical legal liability for enabling DDoS (similar to the "attractive nuisance" doctrine).
There will be a turbulent transition period, of course. As someone currently living at the poverty line, I have argued against the CA system many times. An SSL cert (and annual renewal) may be an insignificant cost to some people, but it is a real barrier when that cost represents days/weeks of food. Unfortunately, none of this removes the need for encryption or the risks of plaintext. This is why I'm very excited about Let's Encrypt; it might solve the cost problem, and it might avoid the StartSSL "no second-source" problem because it is a protocol first.
Internet use is only going up, so these transition costs are only going to go up. We can pay it now, or pay even more in the future.
The original idea with SSL was to give out certificates to organizations, not domain holders. The labour involved made this an expensive process and today domain validated certificates are the most common.
The idea was that users should want to validate that they are speaking with the organization McDonald's, not with mcdonalds.com, which may or may not belong to them. Turns out users don't, and the distinction gets even less important over time. Domain names are now an important identifier for an organization. You can still see the old process at work in EV certificates, however, which normally carry an extra cost.
If SSL had been designed for domain validation from the start, it would have looked like DNSSEC. Cryptographically verified domain assignments are a good idea, and infinitely more secure than the domain validation schemes we use today.
Here at HN there are a handful who can't resist going on about NSA every time DNSSEC is mentioned, so I expect a few of those now. Please do understand the whole picture and how the complete certificate stack works before taking those statements at face value.
It's because all CAs are able to sign certs for all domains. Browsers simply do not trust your average domain registrar enough to give them power over all domains.
Now you might say: don't trust the registrar, trust the people who run the .com (or whatever) TLD. That's getting close to what DNSSEC does, which some people say is better. But the CA system wasn't designed for this the way DNSSEC was. The way CAs work, we would have to give the operators of .com power over all domains, which some people might not think is so bad. But it would also mean giving the operators of .sucks power over all domains, which most people would be against.
No. Consider that DNS request/responses are simple, cleartext UDP packets. There's DNSSEC of course but nobody uses it (and also most security experts don't like it).
This is a legitimate concern, but I think the so-called dire consequences are a bit overblown.
Major browser vendors like Google and Mozilla don't change their policies in a vacuum while the rest of the world stays static. The move to "deprecate" HTTP is an explicit attempt to manipulate the rest of the world into making SSL easier and more affordable. It is unfair to evaluate this proposal in isolation without considering the market upheaval that it is very much intended to trigger.
Currently, most web hosts charge a hefty markup on SSL certificates and charge even more to enable them on a website hosted with them. This practice may no longer be sustainable as more and more people begin to demand SSL. "Free SSL with every 1-year contract!" could well become a standard marketing slogan, just as "Free domain with every 1-year contract!" has been for the last 10+ years.
Some domain registrars already offer free or low-cost (~$1.99) SSL certificates with the purchase of every domain. This may become more widespread as registrars scramble to remain competitive.
Android 2.x and Windows XP are major excuses for not adopting SNI, but the upcoming release of Windows 10 will reduce the market share of XP even further, and old Android's lifespan is also running out thanks to the planned obsolescence of mobile devices. By 2017-18, nobody will care about these platforms anymore, and if anyone still does, we can tell them to get Firefox.
Even without StartSSL or Let's Encrypt, existing CAs may be forced to cut their prices drastically as a horde of super-price-conscious consumers begin to flood their once prestigious trading floor. Some CAs have already been offering $20 wildcard certs through selected resellers. Expect more of these offers in the near future. This is a race to the bottom, and I'm thoroughly enjoying it!
To top it off, CloudFlare is offering free SSL (SNI required) to everyone. Expect services like this to become more common as SSL comes to be seen as an essential component of every online service.
Of course, there's no guarantee that these changes will occur. But I can guarantee that most of them will not occur unless there's massive, organized pressure on the lazy, greedy incumbents. Google and Mozilla are doing the world a great service by adding their weight to this much-needed pressure. Remember when the rest of the world basically ran an extortion racket to force the web hosting industry into upgrading to PHP 5? That was glorious. I want to see it happen again, this time for easy and affordable SSL.
If the deadline arrives and the world still isn't ready for the transition, we'll think again and adjust our strategies accordingly. Nothing wrong with that. In the meantime, let's be optimistic and go bully some web hosts!
> The move to "deprecate" HTTP is an explicit attempt to manipulate the rest of the world into making SSL easier and more affordable. It is unfair to evaluate this proposal in isolation without considering the market upheaval that it is very much intended to trigger.
I'd love to believe this but I've never once seen the https-only zealots bring up this issue on their own, or show any concern for the fact that it will limit speech on the web. They mostly work for companies where getting SSL certs is no big deal, and they put their personal projects on github or heroku anyway.
The backbone of the web was the fact that you could put up a website on your own computer within a matter of minutes. That is now going to be gone and I've never seen the biggest advocates of this change show any concern whatsoever.
This article from 2014 [0] suggested Google Domains [1] may offer free SSL certificates. As far as I can see, that's not the case at this time. Does anyone have any information on this? How likely is it that this feature will come in the near future?
I always wondered: why does this certificate order matter? The web server can (and does) parse certificates. It should reorder them correctly itself and log warnings if any issues are found.
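For reference, the order most servers expect when you cat the pieces together is leaf certificate first, then each intermediate, ending nearest the root. A minimal sketch with stand-in files (real ones would be PEM certificates; the filenames are illustrative):

```shell
# Stand-ins for real PEM files, just to show the ordering
printf -- '-----LEAF CERT-----\n' > example.com.crt
printf -- '-----INTERMEDIATE-----\n' > intermediate.crt

# Your own certificate goes first, then intermediates in order,
# ending nearest the root (the root itself is usually omitted)
cat example.com.crt intermediate.crt > fullchain.pem
```

Get the order wrong and some clients cope silently while others fail the handshake, which is exactly why a server that reordered and warned, as suggested above, would be friendlier.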
As an aside, I really don't like wildcard certs. If the private key is compromised, the consequences are so much worse than if you lose a regular cert.
That's true if you're trying to save money by putting a ton of domains behind a single wildcard cert using a single private key. But there are security advantages to using multiple wildcard certs based on different private keys. One of them is that you can develop a nearly infinite number of sites without exposing the domain name via the certificate, so they can't be crawled or pentested until they are deployed publicly. The number of certs you buy should be based on the number of private keys you can securely deploy.
$70 for a wildcard cert!? Where are you looking? There's a shitload of AlphaSSL resellers that are much cheaper. I got 2 wildcard certs for $20/yr. Of course, there's really no need for a wildcard certificate, and StartCom gives out free, valid non-wildcard certs right now. On top of that, Let's Encrypt should simplify the process greatly.
I agree with trying to phase out HTTP, but I think their method is "annoying." What do features have to do with HTTP Vs. HTTPS? It just seems like an arbitrary punishment.
Wouldn't it be significantly easier to simply change the URL art style to make clear that HTTP is "insecure"? Like a red broken padlock on every HTTP page?
That has the following advantages:
- HTTP remains fully working for internal/development/localhost/appliance usage (no broken features).
- Users are reminded that HTTP is not secure.
- Webmasters are "embarrassed" into upgrading to HTTPS.
- Fully backwards compatible.
Seems like a perfect solution where everyone wins.
What features have to do with encryption is this. If a browser asks a user "Do you want http://example.com to be able to access your camera", what it is really asking is "Do you want http://example.com, plus anybody on your local network, state actors, anybody between you and example.com, people who can mess with BGP, and your DNS provider, to be able to access your camera?". TLS mostly makes the first question more truthful.
You might explicitly include "employees of the coffee shop/hotel/library providing wifi" in the list of actors who can intercept your traffic. Honestly, I think that one will get the most attention of the average person.
It sounds like this is mostly going to be focused on features that require user consent. The article gives the example of media devices (camera and microphone) but there are plenty of others: fullscreen API, geolocation, notifications, large amounts of offline storage, etc.
These capabilities are sensitive enough that you want to give users control over who is granted access. But if pages are being loaded over HTTP, the user has no way of establishing the authenticity of the Javascript code they're granting permissions to.
I can imagine a lot of personal sites will suffer from this. Most of them are sitting on something like Eleven2 or Dreamhost, which require a dedicated IP for an SSL certificate, which the user then has to buy and figure out for himself (it's not trivial for the average "webmaster"), or else buy the certificate from their host at a hefty markup.
Yes, the hosts could wildcard. Yes, there are other solutions out there. But for the average Joe who is blogging about his vacations and family? They're going to be completely lost.
Why don't shared hosts just wildcard? Shared certificate? Well, let's think about it... Charging ~$5/month/dedicated IP is a nice upsell, and getting $70 for an installed SSL cert that costs them $10 from their SSL cert reseller, that takes them 2 minutes to configure... That's a nice slice of pie. I'd take that bet any day.
I think you're overstating how bad things are. Dreamhost, for example, no longer requires a dedicated IP for SSL, though they do still recommend it for e-commerce. They are charging $15/year for a CA-signed certificate. Granted, that's for a single-site cert and they don't support wildcards under this scenario, but the vacation blogger isn't likely to need that anyway.
It's only an upsell now. If in the future SSL is required to get access, it stops being an upsell and starts having to be part of the basic package. Whether that will raise prices significantly is yet to be seen.
The actions Mozilla proposes sound awful. I believe that a secure (from the NSA) Internet is the way forward. But this seems so goofy to me. There are legitimate reasons for a site not to be hosted on HTTPS.
* It is a static site with no forms or logins
* It is non-critical info
* The site operator can't afford a certificate (Let's Encrypt is only one site...)
As you say: Color-code sites with a bit more granularity. Don't cripple the cleartext web.
All browsing behavior can be used to build a profile about someone, whether for advertising, surveillance, or whatever. There's a lot more information in the fact that person A visited pages 1-6 on unencrypted website B than one might realize. This reason alone should be enough for us to demand encryption (not necessarily via CA certificates) for any connection that isn't demonstrably local and unintercepted.
I was thinking that was the way to go too, for a while, but then I realized that marking HTTP as insecure will just get users used to clicking through security warnings and assuming that they're "normal".
While we're making art style changes, why don't we change the experience for self-signed certs?
When the user first visits an HTTPS page with a self-signed cert, they get the content, and the URL art style has a broken lock or something warning it's not known to be secure. (It's better than raw HTTP, but it's not trusted.) With certificate pinning by the browser, the next time the user visits that page, if the cert is different, they get the current experience: big scary text and several clicks to get past. There's a question of whether a warning should show when the server owner upgrades to a paid SSL cert, but if there's a way to sign that upgrade with the old cert so the browser can know about it, there shouldn't be a problem.
So if I have to renew my (self-signed) certificate, all my current users will now get scary warnings? I'm not sure we should be encouraging people to hold on to their possibly-compromised certs.
> When the user first visits an HTTPS page with a self-signed cert, they get the content, and the URL art style has a broken lock or something warning it's not known to be secure.
Do we assume the user is going to notice that URL art style, and actually heed it? Because if the answer is "no" (and I think in reality, the answer would be "no"), then pick a high value site, and MitM it with a self-signed cert. The user misses the indicator, and proceeds to interact with the site; does JS work? (let's steal the user's cookies) do forms work? (please log in!)
Not a bad idea, in theory, but... suppose I visit a site on Monday and see certificate A. Then when I return on Tuesday, I see a different certificate B. What reason is there to think that A is likely to be the "true" certificate, and B isn't?
Showing a big scary warning in one case and not in the other implies to the user that the browser has some reason to think one is more secure, which is misleading.
This is stupid. There are all kinds of use cases where you don't care who knows what you're looking at, or whether it is authentic.
Say I navigate to some restaurant's web page using HTTP. Even if I used HTTPS, someone spying on my traffic would know what I'm reading, if the IP address is a dedicated server for that web site only. Whether I use HTTP or HTTPS, they could infer that I'm interested in visiting the restaurant.
Secondly, I'm only interested in the opening hours. That is not classified information.
I suppose that a MITM attack could be perpetrated whereby the attackers rewrite the opening hours. I end up going to the place while it is in fact closed (and the area happens to be deserted), making me an easy target for the attackers to rob me.
Okay, okay, please deprecate HTTP; what was I thinking!
And that restaurant better get a properly signed certificate; no "self signed" junk! Moreover, I'm not going to accept it over the air the first time I visit, no siree. DNS could be redirecting me to a fake page which also has a signed certificate. I'm going to physically go the restaurant one time first, and obtain their certificate from them in person, on a flash drive, then install it in my devices. Then I'm going to pretend I was never there and don't know their opening hours, and obtain that info again using a nearly perfectly secured connection!
Or one of your browser tabs containing an HTTP-delivered page (any one, really) could arbitrarily be rewritten by the MITM to look the same at first, but carry some injected Javascript such that, a few minutes after it detects you've unfocused the page, it turns itself into a Gmail phishing site[1].
Since the SSL negotiation happens before the HTTP request, either there's only one certificate for that IP or you need to use SNI, which reveals the domain you're requesting.
You could have multiple domains in the certificate to avoid identification, but that has its own problems.
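You can watch the SNI leak locally with the openssl CLI (port and hostnames here are arbitrary): the -servername value is sent in the ClientHello before any keys are negotiated, so a passive observer sees it even though everything after the handshake is encrypted.

```shell
# Throwaway self-signed cert for a local test server
openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
  -subj "/CN=localhost" -keyout s.key -out s.crt 2>/dev/null

# Local TLS server in the background
openssl s_server -key s.key -cert s.crt -accept 4443 -quiet &
SERVER=$!; sleep 1

# -servername sets the SNI extension; sniff port 4443 with tcpdump
# while this runs and you'll see "secret.example.com" in cleartext
# in the ClientHello, before any encryption starts
echo | openssl s_client -connect 127.0.0.1:4443 \
  -servername secret.example.com 2>/dev/null | grep 'subject='

kill $SERVER
```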
It has become so tiresome to deal with the likes of you - people who will say how they don't need or want SSL, how they don't care about privacy.
This is the techie version of "nothing to hide, nothing to fear". It's a pathetic argument and brings nothing to the table.
Just because you don't care about the NSA knowing you like McDonald's when you browse their menu doesn't mean everybody else in the world shouldn't care about their government knowing they are gay (which, need I remind you, is an offense punishable by death in certain countries) when they browse an article on LGBT rights.
Because, if McDonalds doesn't need SSL for their menu, why would a writer need it for his small-audience blog?
I have to say, I actually disagree with this move. While I think the intentions sound noble, and I'm all for a more secure web, I also believe that a web browser has no business dictating that the entire web should be forced onto HTTPS.
I don't see any benefit in this blanket, all-or-nothing approach. In fact, I see it doing more damage than good. Encrypting blogs, news websites, etc. still makes no sense to me. I'm actually disappointed in Mozilla for looking at doing this. As a developer I respect many of their products and see them as champions of the web in a lot of ways.
HTTPS does not:
- protect a user from malware on their own system with keylogging taking place
- increase security in outdated and insecure websites (eg: old known exploitable code)
- prevent any browser drive-by downloaders or exploits
- increase the security of the web server itself (the web stack that's serving requests) - yeah, that's you using a private VPS without doing kernel updates.
These are likely the major factors of why people have security issues.
What is forcing HTTPS on the entire web actually doing? Who is it benefiting?
The government can still snoop your data in flight. If someone is connected to a fake wifi endpoint, there are on-the-fly SSL interception tools out there.
Do we still need TLS for actual secure transactions that deal with personal data? Yes, of course. That's what it is intended for.
Do we need TLS to read the latest TMZ post about Miley Cyrus?
You decide... (oh and it's http if you were wondering)
HTTPS provides authentication, not just confidentiality.
When you visit "blogs, news websites, etc" do you think there's no value in being able to know for sure that the content is exactly what the owner of the site intended? Even though ISPs have proven themselves willing to intercept and modify that content in transit?
You're oversimplifying and being dismissive without cause.
>a web browser has no business dictating that the entire web should be forced onto HTTPS.
1. that isn't what is happening as per the article. They are going to begin picking features that shouldn't be allowed over HTTP (like, say, geo location, web camera access, etc).
2. a browser is precisely the actor that should push for these things. If not browser vendors, who?
>What is forcing HTTPS on the entire web actually doing?
Encrypting streams of data that were previously unencrypted.
>Who is it benefiting?
Users.
>The government can still snoop your data in-flight.
So your argument is 'this isn't perfect for all attack vectors, so it isn't useful at all'?
>Do we need TLS to read the latest TMZ post about Miley Cyrus?
Yes, because the same unencrypted connection that carries the gossip can carry injected JavaScript.
> I don't see any benefit in this blanket, all-or-nothing approach.
Imagine you're making some meatballs. You've got pigs, spices, and a stove.
If you're in Germany, there's no problem -- kill some pigs, grind some pork, mix in the spices, and cook your meatballs. You could make sausages the same way (as long as you've got tubing). And you're free to sample your food as you cook it to make sure it suits your tastes.
If you're in the US, you've got two options:
1. Give up on sausage entirely. Make sure your ground pork is well cooked before you even think of eating any of it.
2. Carefully vet the pigs for trichinosis before introducing their pork into your kitchen.
Unsurprisingly, we use option 1.
Germany, like the rest of Europe, has opted for a blanket solution where they're not allowed to have pigs with trichinosis. The US has opted for a different blanket solution where you can't eat raw pork. Nobody is suggesting that we carefully inspect individual pigs and treat the meat according to whether they had trichinosis.
The recent attack on Github, where malicious JavaScript was injected into a plain unencrypted http connection, is enough to convince me that requiring https everywhere is the right move.
Meanwhile, OpenBSD 5.7 came out today, with the following security fixes in LibreSSL (arguably the most secure SSL library so far):
"Multiple CVEs fixed including CVE-2014-3506, CVE-2014-3507, CVE-2014-3508, CVE-2014-3509, CVE-2014-3510, CVE-2014-3511, CVE-2014-3570, CVE-2014-3572, CVE-2014-8275, CVE-2015-0205 and CVE-2015-0206."
So if I were running a TLS-enabled site using LibreSSL from OpenBSD 5.6, I'd have been exposed to potentially 11+ CVEs. A little sooner with OpenSSL, and I would have been exposed to Heartbleed. And who knows how many CVEs will arise before 5.8 is released?
Why is it so impossible to write a secure TLS library? Why should I put my entire server at risk to appease the attempts of Mozilla and Google to prop up the CA business? Sorry, but I'll stick to parsing lines of text.
Let 'em remove HTTP completely. Hopefully after they break 90% of the web, we'll get some real user revolt, and some real competitors in the web browser space might emerge. Maybe from some people who actually listen to what their users are asking for.
I guess now we know what that "signed extensions only" change was for: what do you think they're going to do when someone submits a "Restore HTTP Functionality" add-on in the future?
Well, frequently the vulnerability of those CVEs is breaking or downgrading the crypto... or in other words: if exploited, the connection could become as insecure as HTTP.
So your argument is that since locks can occasionally be picked, doors shouldn't have locks? What exactly is the massive burden with HTTPS? The computational cost is tiny and will continue to become tinier, there are free cert providers like StartSSL and more coming soon, and the implementation is simple enough that anyone managing a server should be able to handle it easily.
The number of websites where I wouldn't prefer encryption and identity authentication is around zero, and the number of websites where I'm okay with someone injecting arbitrary JavaScript is exactly zero. The time people spend making flawed "if you have nothing to hide, you have nothing to fear" or "crypto libraries/CAs are bad, scary, and hard to use" arguments would be much better spent actually trying to improve those circumstances for the inevitable and necessary shift to HTTPS everywhere.
> So your argument is that since locks can occasionally be picked, doors shouldn't have locks?
A faulty lock on my house doesn't turn into Heartbleed.
The thing is, I don't need a lock on my server that serves up static, legal content. You might think it's a problem, that the NSA is going to spy on you, or China is going to inject attacks into your requests to my server, but that's your problem.
I'm not going to run a massively buggy TLS library with an API guide that would take a whole team of engineers weeks to decipher, just because you're intensely paranoid about accessing game-related data over HTTP.
Seriously, look at the GnuTLS documentation sometime. It's psychotic. As is MatrixSSL, PolarSSL, OpenSSL, and NSS. The closest to sanity I've ever seen was libtls, which is only on OpenBSD, still has lots of CVEs popping up, and can't do non-blocking mode.
> What exactly is the massive burden with HTTPS?
1. write your own HTTPS server. I'll wait a few months, or
2. find a library that's easy to use and won't expose my server to Heartbleed-like attacks, and
3. pay me $70/yr for the wildcard cert I would need.
I'll cover the extra CPU costs, since you say they're so small. (even though when people say "small", they're counting overhead as a percentage against a site running a bloated beast like Wordpress in PHP + MySQL.)
> there are free cert providers like StartSSL and more coming soon
That don't provide wildcard certs (and I have a wildcard CNAME entry, and I make use of it).
> The number of websites where I wouldn't prefer encryption and identity authentication is around zero
And you're free to not visit my site, just like I wouldn't ever patronize a webstore that wasn't HTTPS. That's how markets are supposed to work. I don't see why your browser has to make the decision for the both of us.
> and the number of websites where I'm okay with someone injecting arbitrary JavaScript is exactly zero
Honestly ... I would be okay with blocking Javascript over HTTP. But I think that's more because I just hate Javascript :P
> would be much better spent actually trying to improve those circumstances
You seriously want me to write a TLS library?
My dream goal would actually be to have it built-into the sockets layer. If it could be enabled as easily as a setsockopt(SO_TLS_CERTIFICATE, (void*)certificatedata, ...); and OS updates could fix the security, I'd be a lot more inclined to get on board with the programming side.
I don't have a solution to the wildcard cert issue. I can't well start up my own CA to give them out for free. I guess it would at least be nice to see if they ever tone down self-signed certs from "WORSE THAN HITLER" to "at least equal to HTTP" in terms of warning messages. People keep talking about it, but it's been what? Over a decade now? I'll believe it when I see it.
Can someone explain why HTTPS is necessary for a webpage where I don't log in or submit any information?
For example, take the xkcd homepage. Not only do I not log into it, there's nowhere I _could_ log in. The only input is a search box (which seems to be disabled at the moment anyway). Is it really a security risk if my communication with xkcd's servers is unencrypted? (Yes, xkcd has a store and a forum, and I understand why you'd need HTTPS on those subdomains - but I don't see why the main domain needs it.)
I agree with the parts of their plan to disable browser features that could be a security risk to non-HTTPS pages - that makes total sense. But it seems absurd to prevent static pages from using future CSS layout features just because they're not using HTTPS.
Intermediaries can (and already do) silently cause the content to be tracked, altered or otherwise modified against both your and the site owner's interests.
How would you feel if they inserted javascript to mine bitcoins?
> Can someone explain why HTTPS is necessary for a webpage where I don't log in or submit any information?
What about a site giving out health info? No login there, but could have consequences if tampered with. Or recipes (same as health info in some cases). Or news (could make investors jump).
Not that HTTPS fixes all of this, but there's no reason to think that a non-interactive or "static" page can never benefit from security.
If things like "python -m SimpleHTTPServer" don't work, then developers will switch browsers. I don't think anyone is seriously considering what it will take to migrate the long tail of development tools that use HTTP on localhost.
Chrome has been pushing the same thing (deprecating plain-text HTTP and/or visually marking it as non-secure) for quite some time, and they've been very clear that "localhost" will still be considered a secure origin. I don't see any reason to think that Firefox would behave differently.
And what about testing small applications on remote servers like "dev.my-personal-site.com"? I don't want to pay $15 for an SSL certificate and 15 minutes of my time just so I can get my dumb lunch break tetris HTML app running on the machine I SSH into from my tablet.
I am long past confused and heading toward awed, at this point, that it's not a common-sense practice for every web developer to generate a personal self-signed root-CA cert, and install it on all of their machines. It's as basic as having an SSH or PGP key.
Setting up a new box? Put your CA-cert in its trust roots. Then use your CA to generate a server cert for it; plop that in /etc/nginx and wherever else. Now it's secure!
This is exactly the original use-case for X.509 certificate authorities: pairing devices on a private network without having to give each of them a set of their peers' keys in advance. You have a private network that you run services on? You're a CA.
And really, in the dev-environment case, you actually want client-auth, too, because then you get "clients who don't have a CA-issued client cert can't connect" for free.
In proper X.509, the server auths the client just like the client auths the server—it's really more of an equal-peers "we're both trusted by the CA—the network owner—so we should both trust each other" kind of thing. The public Internet centralized X.509 model—where the client has a huge list of CAs that the user doesn't even know the contents of, and the server doesn't check anything—is a very strange and non-idiomatic implementation of the premise.
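The whole personal-CA workflow above is about four openssl commands; a sketch, with illustrative filenames and subject names:

```shell
# 1. Root key + self-signed CA cert; the .crt is what you add to
#    each machine's trust store
openssl req -x509 -newkey rsa:2048 -nodes -days 3650 \
  -subj "/CN=My Dev CA" -keyout ca.key -out ca.crt

# 2. Server key + certificate signing request
openssl req -newkey rsa:2048 -nodes \
  -subj "/CN=dev.example.test" -keyout dev.key -out dev.csr

# 3. Sign the CSR with the CA; dev.key + dev.crt is the pair that
#    goes in /etc/nginx or wherever
openssl x509 -req -in dev.csr -CA ca.crt -CAkey ca.key \
  -CAcreateserial -days 825 -out dev.crt

# 4. Sanity check: the server cert verifies against the personal root
openssl verify -CAfile ca.crt dev.crt
```

Repeat steps 2-3 per service, and any browser or OS that trusts ca.crt will accept them all without warnings.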
Has Mozilla indicated whether HTTP2 connections with opportunistic encryption will get access to secure-site features? If so, then SimpleHttpServer could be updated to use HTTP2+oe.
Here's a proposed way of phasing this plan in over time:
1. Mid-2015: Start treating self signed certificates as unencrypted connections (i.e. stop showing a warning, but the UI would just show the globe icon, not the lock icon). This would allow website owners to choose to block passive surveillance without causing any cost to them or any problems for their users.
2. Late-2015: Switch the globe icon for http sites to a gray unlocked lock. The self signed certs would still be the globe icon. This would incentivize website owners to at least start blocking passive surveillance if they want to keep the same user experience as before. Also, this new icon wouldn't be loud or intrusive to the user.
3. Late-2016: Change the unlocked icon for http sites to a yellow icon. Hopefully, by the end of 2016, Let's Encrypt has taken off and a lot of platforms like WordPress include tutorials on how to use it. This increased uptake of free authenticated https, plus the ability to still use self-signed certs for unauthenticated https (remember, this still blocks passive adversaries), would give website owners enough alternative options to start switching to https. The yellow icon would push most over the edge.
4. Late-2017: Switch the unlocked icon for http to red. After a year of yellow, most websites should already have switched to https (authenticated or self-signed), so now it's time to drive the nail in the coffin and kill http on any production site with a red icon.
5. Late-2018: Show a warning for http sites. This experience would be similar to the self-signed cert experience now, where users have to manually choose to continue. Developers building websites would still be able to choose to continue to load their dev sites, but no production website in its right mind would choose to use http only.
I would personally rather see those promoted and methods developed to securely bootstrap them than make us all reliant on centralised CA infrastructure. The centralised CAs are all at the mercy of their governments and hence, in my opinion, ought to be considered almost as insecure as self-signed certs.
EDIT: I think I misunderstood your comment - reading again it sounds like you are also in favour of self-signed (hopefully so).
Until support for DANE and DNSSEC becomes widespread, self-signed certs can't really be trusted by third parties unless it's a site for personal use.
(BTW, if you're not using a conventional CA, you'd be best off being your own CA, signing your certs with a CA certificate you've generated rather than simply self-signing each cert. It's a little more trouble in the short term, but it means that each time you subsequently need to generate a new cert, you don't need to put up with warnings everywhere, because it'll be validated by your own CA cert. The downside is having to install the CA cert everywhere. That's what I do for my private stuff. There are tonnes of tutorials online on how to do it.)
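For reference, a bare-bones version of that setup with openssl. File names and the domain are placeholders; modern browsers also want a subjectAltName, added here via `-extfile`:

```shell
# 1. One-time: generate the CA key and a self-signed CA cert.
openssl req -x509 -newkey rsa:2048 -nodes -subj "/CN=My Private CA" \
  -keyout ca.key -out ca.crt -days 3650

# 2. Per server: key + certificate signing request
#    (the domain is a placeholder).
openssl req -newkey rsa:2048 -nodes -subj "/CN=dev.example.com" \
  -keyout server.key -out server.csr

# 3. Sign the CSR with your CA, including a subjectAltName.
printf "subjectAltName=DNS:dev.example.com\n" > san.ext
openssl x509 -req -in server.csr -CA ca.crt -CAkey ca.key \
  -CAcreateserial -extfile san.ext -out server.crt -days 825

# 4. Sanity check: the server cert chains to your CA.
openssl verify -CAfile ca.crt server.crt
```

Then install ca.crt into each machine's trust roots and point nginx (or whatever you serve with) at server.key and server.crt.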
Users should still get a warning if they requested an HTTPS URL. If they requested HTTP and there was opportunistic encryption, fine. But under no circumstances should HTTPS URLs, which indicate secure intent, silently downgrade to insecure (self-signed or otherwise).
From the site: Let’s Encrypt is a new Certificate Authority: It’s free, automated, and open. Arriving Mid-2015
Not great for people running small websites.
$9 - $11 / year for perfectly good certs. Less than $1 per month is a small burden.
https://www.namecheap.com/security/ssl-certificates/domain-v...
I actually expect the price of plaintext HTTP hosting to go up a bit; partly due to reduced demand, but also due to increased risk/liability. With SSL being the "industry best practice", I expect at least a few bean counters will view the risk of private information leaks or hypothetical legal liability for enabling DDOS (similar to the "attractive nuisance" doctrine).
There will be a turbulent transition period, of course. As someone currently living at the poverty line, I have argued against the CA system many times. An SSL cert (and annual renewal) may be an insignificant cost to some people, but it is a real barrier when that cost represents days or weeks of food. Unfortunately, none of this removes the need for encryption or the risks of plaintext. This is why I'm very excited about Let's Encrypt; it might solve the cost problem, and it might avoid the StartSSL "no second-source" problem because it is a protocol first.
Internet use is only going up, so these transition costs are only going to go up. We can pay it now, or pay even more in the future.
The idea was that users should want to validate that they're speaking with the organization McDonald's, not with mcdonalds.com, which may or may not belong to them. Turns out users don't, and the distinction gets even less important over time. Domain names are an important identifier for an organization now. You can still see the old process at work in EV certificates, which normally carry an extra cost.
If SSL had been designed for domain validation from the start, it would have looked like DNSSEC. Cryptographic verification of domain assignments is a good idea, and infinitely more secure than the domain validation schemes we use today.
Here at HN there are a handful who can't resist going on about NSA every time DNSSEC is mentioned, so I expect a few of those now. Please do understand the whole picture and how the complete certificate stack works before taking those statements at face value.
Now you might say don't trust the registrar, trust the people who run the .com (or whatever) TLD. That's getting close to what DNSSEC does, which some people say is better. But CAs weren't designed for this like DNSSEC. With the way CAs work, we would have to give the runners of .com power over all domains, which some people might not think is so bad. But it would also mean we would have to give the owners of .sucks power over all domains as well, which most people would be against.
Clarity, because there will be so much distilled information and tooling around setup.
Affordability, because there will be so much volume that companies will start to compete on price in the same way that domain companies do.
Major browser vendors like Google and Mozilla don't change their policies in a vacuum while the rest of the world stays static. The move to "deprecate" HTTP is an explicit attempt to manipulate the rest of the world into making SSL easier and more affordable. It is unfair to evaluate this proposal in isolation without considering the market upheaval that it is very much intended to trigger.
Currently, most web hosts charge a hefty markup on SSL certificates and charge even more to enable them on a website hosted with them. This practice may no longer be sustainable as more and more people begin to demand SSL. "Free SSL with every 1-year contract!" could well become a standard marketing slogan, just as "Free domain with every 1-year contract!" has been for the last 10+ years.
Some domain registrars already offer free or low-cost (~$1.99) SSL certificates with the purchase of every domain. This may become more widespread as registrars scramble to remain competitive.
Android 2.x and Windows XP are major excuses for not adopting SNI, but the upcoming release of Windows 10 will reduce the market share of XP even further, and old Android's lifespan is also running out thanks to the planned obsolescence of mobile devices. By 2017-18, nobody will care about these platforms anymore, and if anyone still does, we can tell them to get Firefox.
Even without StartSSL or Let's Encrypt, existing CAs may be forced to cut their prices drastically as a horde of super-price-conscious consumers begin to flood their once prestigious trading floor. Some CAs have already been offering $20 wildcard certs through selected resellers. Expect more of these offers in the near future. This is a race to the bottom, and I'm thoroughly enjoying it!
To top it off, CloudFlare is offering free SSL (SNI required) to everyone. Expect services like this to become more common as SSL comes to be seen as an essential component of every online service.
Of course, there's no guarantee that these changes will occur. But I can guarantee that most of them will not occur unless there's massive, organized pressure on the lazy, greedy incumbents. Google and Mozilla are doing the world a great service by adding their weight to this much-needed pressure. Remember when the rest of the world basically ran an extortion racket to force the web hosting industry into upgrading to PHP 5? That was glorious. I want to see it happen again, this time for easy and affordable SSL.
If the deadline arrives and the world still isn't ready for the transition, we'll think again and adjust our strategies accordingly. Nothing wrong with that. In the meantime, let's be optimistic and go bully some web hosts!
I'd love to believe this but I've never once seen the https-only nazis bring up this issue on their own, or show any concern for the fact that it will limit speech on the web. They mostly work for companies where getting ssl certs is no big deal, and they put their personal projects on github or heroku anyways.
The backbone of the web was the fact that you could put up a website on your own computer within a matter of minutes. That is now going to be gone and I've never seen the biggest advocates of this change show any concern whatsoever.
[0] http://techcrunch.com/2014/06/24/with-google-domains-its-tim...
[1] https://domains.google.com/about/
This "certificates are expensive" argument was only valid a decade ago, we have free certs now.
Mozilla pushing forward aggressively forces people to address the pain points.
Dead Comment
StartCom offers the free (for personal use) Class 1 X.509 SSL certificate "StartSSL Free"
http://en.wikipedia.org/wiki/StartCom
I can't find anything lower than $60.
Wouldn't it be significantly easier to simply change the URL art style to make clear that HTTP is "insecure"? Like a red broken padlock on every HTTP page?
That has the following advantages:
- HTTP remains fully working for internal/development/localhost/appliance usage (no broken features).
- Users are reminded that HTTP is not secure.
- Webmasters are "embarrassed" into upgrading to HTTPS.
- Fully backwards compatible.
Seems like a perfect solution where everyone wins.
These capabilities are sensitive enough that you want to give users control over who is granted access. But if pages are being loaded over HTTP, the user can have no way of establishing the authenticity of the Javascript code they're granting permissions to.
Yes, the hosts could wildcard. Yes, there are other solutions out there. But for the average Joe who is blogging about his vacations and family? They're going to be completely lost.
Why don't shared hosts just wildcard? Shared certificate? Well, let's think about it... Charging ~$5/month/dedicated IP is a nice upsell, and getting $70 for an installed SSL cert that costs them $10 from their SSL cert reseller, that takes them 2 minutes to configure... That's a nice slice of pie. I'd take that bet any day.
It's only an upsell now. If in the future SSL is required to get access, it stops being an upsell and starts having to be part of the basic package. Whether that will raise prices significantly is yet to be seen.
Average Joe uses Facebook, Tumblr, Wordpress, or any number of existing hosts to blog to his family.
When the user first visits an HTTPS page with a self-signed cert, they get the content, and the URL art style shows a broken lock or something warning that it's not known to be secure. (It's better than raw HTTP, but it's not trusted.) With certificate pinning by the browser, the next time the user visits that page, if the cert is different, they get the current experience that warns them in big scary text and requires several clicks to get past. There's a question of whether a warning should be shown when the difference is just the server owner upgrading to a paid SSL cert, but if there's a way to sign that upgrade with the old cert so the browser can recognize it, there shouldn't be a problem...
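The primitive behind that kind of pinning is just remembering a fingerprint of the cert seen on the first visit and comparing it on the next one. A rough sketch, using a throwaway self-signed cert (all names are arbitrary):

```shell
# First visit: generate (stand-in for "receive") a cert and pin
# its SHA-256 fingerprint.
openssl req -x509 -newkey rsa:2048 -nodes -subj "/CN=pin.test" \
  -keyout pin.key -out pin.crt -days 1
PIN=$(openssl x509 -in pin.crt -noout -fingerprint -sha256)

# Later visit: recompute the fingerprint of the presented cert.
SEEN=$(openssl x509 -in pin.crt -noout -fingerprint -sha256)

# Same fingerprint: no warning. A mismatch would trigger the
# big scary interstitial instead.
if [ "$PIN" = "$SEEN" ]; then
  echo "pin matches"
else
  echo "PIN MISMATCH"
fi
```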
Do we assume the user is going to notice that URL art style, and actually heed it? Because if the answer is "no" (and I think in reality, the answer would be "no"), then pick a high value site, and MitM it with a self-signed cert. The user misses the indicator, and proceeds to interact with the site; does JS work? (let's steal the user's cookies) do forms work? (please log in!)
Showing a big scary warning in one case, and not in the other, implies to the user that the browser has some reason to think one is more secure, which is misleading.
Say I navigate to some restaurant's web page using HTTP. Even if I used HTTPS, someone spying on my traffic would know what I'm reading, if the IP address is a dedicated server for that web site only. Whether I use HTTP or HTTPS, they could infer that I'm interested in visiting the restaurant.
Secondly, I'm only interested in the opening hours. That is not classified information.
I suppose that a MITM attack could be perpetrated whereby the attackers rewrite the opening hours. I end up going to the place while it is in fact closed (and the area happens to be deserted), making me an easy target for the attackers to rob me.
Okay, okay, please deprecate HTTP; what was I thinking!
And that restaurant better get a properly signed certificate; no "self signed" junk! Moreover, I'm not going to accept it over the air the first time I visit, no siree. DNS could be redirecting me to a fake page which also has a signed certificate. I'm going to physically go the restaurant one time first, and obtain their certificate from them in person, on a flash drive, then install it in my devices. Then I'm going to pretend I was never there and don't know their opening hours, and obtain that info again using a nearly perfectly secured connection!
[1] http://www.azarask.in/blog/post/a-new-type-of-phishing-attac...
All that that attack requires, to be successful, is the ability for pages served over HTTP to run Javascript and submit forms.
Or a MITM attack could be perpetrated whereby your computer is -however briefly- part of a JavaScript powered DDOS machine: http://arstechnica.com/security/2015/03/31/massive-denial-of...
You could have multiple domains in the certificate to avoid identification, but that has its own problems.
This is the techie version of "nothing to hide, nothing to fear". It's a pathetic argument and brings nothing to the table.
Just because you don't care about the NSA knowing you like McDonalds when you browse their menu, everybody else in the world shouldn't care about their government knowing they are gay (which, need I remind you, is an offense punishable by death in certain countries) when they browse an article on LGBT rights.
Because, if McDonalds doesn't need SSL for their menu, why would a writer need it for his small-audience blog?
I don't see any benefit in this type of blanket, all or nothing, type of approach. In fact, I see it doing more damage than good. Encrypting blogs, news websites, etc still makes no sense to me. I'm actually disappointed in Mozilla for looking at doing this. As a developer I respect many of their products and see them as champions of the web in a lot of ways.
HTTPs does not:
- protect a user from malware on their own system with keylogging taking place
- increase security in outdated and insecure websites (eg: old known exploitable code)
- prevent any browser drive-by downloaders or exploits
- increase the security of the web server itself (the web stack that's serving requests) - yeah, that's you using a private VPS without doing kernel updates.
These are likely the major factors of why people have security issues. What is forcing HTTPS on the entire web actually doing? Who is it benefiting? The government can still snoop your data in-flight. If someone is connected to a fake wifi endpoint there is on the fly SSL decryption out there.....
Do we still need TLS for actual secure transactions that deal with personal data? Yes, of course. That's what it is intended for.
Do we need TLS to read the latest TMZ post about Miley Cyrus? You decide... (oh and it's http if you were wondering)
When you visit "blogs, news websites, etc" do you think there's no value in being able to know for sure that the content is exactly what the owner of the site intended? Even though ISPs have proven themselves willing to intercept and modify that content in transit?
http://arstechnica.com/tech-policy/2013/04/07/how-a-banner-a...
http://arstechnica.com/tech-policy/2014/09/08/why-comcasts-j...
Deleted Comment
>a web browser has no business dictating that the entire web should be forced in HTTPs.
1. that isn't what is happening as per the article. They are going to begin picking features that shouldn't be allowed over HTTP (like, say, geo location, web camera access, etc).
2. a browser is precisely the actor that should push for these things. If not browser vendors, who?
>What is forcing HTTPS on the entire web actually doing?
Encrypting streams of data that were previously unencrypted.
>Who is it benefiting?
Users.
>The government can still snoop your data in-flight.
So your argument is 'this isn't perfect for all attack vectors, so it isn't useful at all'?
>Do we need TLS to read the latest TMZ post about Miley Cyrus?
Yes. See how easy that is?
Imagine you're making some meatballs. You've got pigs, spices, and a stove.
If you're in Germany, there's no problem -- kill some pigs, grind some pork, mix in the spices, and cook your meatballs. You could make sausages the same way (as long as you've got tubing). And you're free to sample your food as you cook it to make sure it suits your tastes.
If you're in the US, you've got two options:
1. Give up on sausage entirely. Make sure your ground pork is well cooked before you even think of eating any of it.
2. Carefully vet the pigs for trichinosis before introducing their pork into your kitchen.
Unsurprisingly, we use option 1.
Germany, like the rest of Europe, has opted for a blanket solution where they're not allowed to have pigs with trichinosis. The US has opted for a different blanket solution where you can't eat raw pork. Nobody is suggesting that we carefully inspect individual pigs and treat the meat according to whether they had trichinosis.
"Multiple CVEs fixed including CVE-2014-3506, CVE-2014-3507, CVE-2014-3508, CVE-2014-3509, CVE-2014-3510, CVE-2014-3511, CVE-2014-3570, CVE-2014-3572, CVE-2014-8275, CVE-2015-0205 and CVE-2015-0206."
So if I were running a TLS-enabled site using LibreSSL from OpenBSD 5.6, I'd have been exposed to potentially 11+ CVEs. A little sooner with OpenSSL, and I would have been exposed to Heartbleed. And who knows how many CVEs will arise before 5.8 is released?
Why is it so impossible to write a secure TLS library? Why should I put my entire server at risk to appease the attempts of Mozilla and Google to prop up the CA business? Sorry, but I'll stick to parsing lines of text.
Let 'em remove HTTP completely. Hopefully after they break 90% of the web, we'll get some real user revolt, and some real competitors in the web browser space might emerge. Maybe from some people who actually listen to what their users are asking for.
I guess now we know what that "signed extensions only" change was for: what do you think they're going to do when someone submits a "Restore HTTP Functionality" add-on in the future?
So your argument is that since locks can occasionally be picked, doors shouldn't have locks? What exactly is the massive burden with HTTPS? The computational cost is tiny and will continue to become tinier, there are free cert providers like StartSSL and more coming soon, and the implementation is simple enough that anyone managing a server should be able to handle it easily.
The number of websites where I wouldn't prefer encryption and identity authentication is around zero, and the number of websites where I'm okay with someone injecting arbitrary JavaScript is exactly zero. The time people spend making flawed "if you have nothing to hide, you have nothing to fear" or "crypto libraries/CAs are bad, scary, and hard to use" arguments would be much better spent actually trying to improve those circumstances for the inevitable and necessary shift to HTTPS everywhere.
A faulty lock on my house doesn't turn into Heartbleed.
The thing is, I don't need a lock on my server that serves up static, legal content. You might think it's a problem, that the NSA is going to spy on you, or China is going to inject attacks into your requests to my server, but that's your problem.
I'm not going to run a massively buggy TLS library with an API guide that would take a whole team of engineers weeks to decipher, just because you're intensely paranoid about accessing game-related data over HTTP.
Seriously, look at the GnuTLS documentation sometime. It's psychotic. As is MatrixSSL, PolarSSL, OpenSSL, and NSS. The closest to sanity I've ever seen was libtls, which is only on OpenBSD, still has lots of CVEs popping up, and can't do non-blocking mode.
> What exactly is the massive burden with HTTPS?
1. write your own HTTPS server. I'll wait a few months, or
2. find a library that's easy to use and won't expose my server to Heartbleed-like attacks, and
3. pay me $70/yr for the wildcard cert I would need.
I'll cover the extra CPU costs, since you say they're so small. (even though when people say "small", they're counting overhead as a percentage against a site running a bloated beast like Wordpress in PHP + MySQL.)
> there are free cert providers like StartSSL and more coming soon
That don't provide wildcard certs (and I have a wildcard CNAME entry, and I make use of it.)
> The number of websites where I wouldn't prefer encryption and identity authentication is around zero
And you're free to not visit my site, just like I wouldn't ever patronize a webstore that wasn't HTTPS. That's how markets are supposed to work. I don't see why your browser has to make the decision for the both of us.
> and the number of websites where I'm okay with someone injecting arbitrary JavaScript is exactly zero
Honestly ... I would be okay with blocking Javascript over HTTP. But I think that's more because I just hate Javascript :P
> would be much better spent actually trying to improve those circumstances
You seriously want me to write a TLS library?
My dream goal would actually be to have it built-into the sockets layer. If it could be enabled as easily as a setsockopt(SO_TLS_CERTIFICATE, (void*)certificatedata, ...); and OS updates could fix the security, I'd be a lot more inclined to get on board with the programming side.
I don't have a solution to the wildcard cert issue. I can't well start up my own CA to give them out for free. I guess it would at least be nice to see if they ever tone down self-signed certs from "WORSE THAN HITLER" to "at least equal to HTTP" in terms of warning messages. People keep talking about it, but it's been what? Over a decade now? I'll believe it when I see it.
For example, take the xkcd homepage. Not only do I not log into it, there's nowhere I _could_ log in. The only input is a search box (which seems to be disabled at the moment anyway). Is it really a security risk if my communication with xkcd's servers is unencrypted? (Yes, xkcd has a store and a forum, and I understand why you'd need HTTPS on those subdomains - but I don't see why the main domain needs it.)
I agree with the parts of their plan to disable browser features that could be a security risk to non-HTTPS pages - that makes total sense. But it seems absurd to prevent static pages from using future CSS layout features just because they're not using HTTPS.
How would you feel if they inserted javascript to mine bitcoins?
What about a site giving out health info? No login there, but could have consequences if tampered with. Or recipes (same as health info in some cases). Or news (could make investors jump).
Not that HTTPS fixes all of this, but there's no reason to think that a non-interactive or static page can never benefit from security.
https://citizenlab.org/2014/08/cat-video-and-the-death-of-cl...
[1]: http://www.w3.org/TR/powerful-features/#is-origin-trustworth...
openssl s_server -accept 8000 -key key.pem -cert cert.pem -HTTP
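The key.pem and cert.pem that one-liner expects can be produced with a single throwaway self-signed cert (CN=localhost is an arbitrary choice for local testing only):

```shell
# Throwaway self-signed key + cert for local testing.
openssl req -x509 -newkey rsa:2048 -nodes -subj "/CN=localhost" \
  -keyout key.pem -out cert.pem -days 1
```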
Self-signed certificates are treated as errors: https://bugzilla.mozilla.org/show_bug.cgi?id=431386
Switch generic icon to negative feedback for non-https sites: https://bugzilla.mozilla.org/show_bug.cgi?id=1041087