The people who are pushing back against HTTPS really bug me to be honest. They say silly things like “I don’t care if people see most of my web traffic like when I’m browsing memes.”
That presumes that the ONLY goal of HTTPS is to hide the information transferred. However, you have to recognize the fact that you run JITed code from these sites. And we have active examples of third parties (ISPs, WiFi providers) injecting code into your web traffic. When browsing the web over HTTP you are downloading remote code, JITing it, and running it on every site you visit, unless you are 100% NoScript with no HTTP exceptions. You have no way of knowing where that code actually came from.
Now consider that things like Meltdown and Spectre have JavaScript PoCs. How is this controversial?
My primary concern is local servers, which of course are irrelevant if you are a centralised service provider such as Google.
To provide some context, I'm currently working on a web application where the server is intended to be running inside a home network (where the server requires zero configuration by the user). As of now, some of the JS APIs I'm using are only available if the site is running in a secure context, so the server has to serve the application using HTTPS, otherwise some functionality won't be available. However, it is impossible to obtain a valid TLS certificate for this local connection -- I don't even know the hostname of my server, and IP-based certificates aren't a thing. So basically, to get a "green lock symbol" in the browser, the server would have to generate a random CA and get the user to install it, which comes with its own severe security risks and is not an option.
So my current plan is to have a dual-stack HTTP/HTTPS server, which on first startup generates a random, self-issued certificate. When the server is first accessed using HTTP, the client automatically tries to obtain some resources via HTTPS. If this succeeds, the user is redirected to the HTTPS variant. If it fails due to a certificate error, the user is presented with a friendly screen telling her that upon clicking "next" an ugly error message will appear, and that this is totally fine. Oh, and here's how to permanently store an exception in your browser.
Still, the app will forever be marked as insecure. Although it isn't. It is trivial for the user to verify that the connection is secure by comparing the certificate fingerprint with that displayed by the server program she just started.
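For what it's worth, that fingerprint comparison is easy to script. Here's a minimal Python sketch (the helper names are mine, and `ssl.get_server_certificate` assumes the server is reachable) that computes the SHA-256 fingerprint a user would compare against the one printed by the server program:

```python
import hashlib
import ssl

def fingerprint_from_pem(pem_cert: str) -> str:
    """SHA-256 fingerprint of a PEM-encoded certificate, colon-separated hex."""
    der = ssl.PEM_cert_to_DER_cert(pem_cert)
    digest = hashlib.sha256(der).hexdigest().upper()
    return ":".join(digest[i:i + 2] for i in range(0, len(digest), 2))

def server_fingerprint(host: str, port: int = 443) -> str:
    """Fetch the server's certificate and fingerprint it (no CA validation)."""
    pem = ssl.get_server_certificate((host, port))
    return fingerprint_from_pem(pem)
```

The fingerprint only proves anything if the user actually compares all of it against the value shown out-of-band by the server program, not just the first few characters.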
This sucks. It just seems that Google and co don't care about people running their own decentralised infrastructure; and marking your own local servers as "insecure" does definitively not help.
Yes, this reminds me of the Mozilla IoT gateway from yesterday, which seemed to drag exactly that kind of long tail of requirements behind it. Something like:
- We'd like to make an IoT gateway that you can use from a browser.
- To get access to necessary APIs, we have to provide it via HTTPS.
- To get HTTPS we need a certificate. Because no one is going to pay for one, we'll use Let's Encrypt.
- To get a Let's Encrypt cert, we need a verifiable hostname on the public internet. Ok, let's offer subdomains on mozilla-iot.com.
- To verify that hostname, Let's Encrypt needs to talk to the gateway. Ok, let's provide a tunnel to the gateway.
- Now the gateway is exposed to the internet and could be hacked. So we need to continuously update it to close vulnerabilities.
So in the end all your IoT devices are reachable from the internet. But hey, you can use Firefox to turn your lights on!
I don't think there's a way to do what you want in a secure manner.
I think fundamentally your issue here is with secure contexts, not with the site labeling. In the end, you can have a site like you describe, but you have to avoid using APIs that require secure contexts.
Any sort of avoidance of this, as by the method you describe ("please ignore the ugly warning you are about to see") is a mistake, because you're helping to train the users to ignore these messages.
> Still, the app will forever be marked as insecure. Although it isn't. It is trivial for the user to verify that the connection is secure by comparing the certificate fingerprint with that displayed by the server program she just started.
Is it, though? Assuming your server hasn't been compromised (nobody is monitoring it to make sure!), and assuming that the self-signed cert cannot be easily exfiltrated, and assuming that they don't do the same thing the next time they get an ugly warning from chase-bank.ru because they're sure that it's spurious -- then maybe?
Let's maybe hope that they'll make an exception for the RFC1918/4193 ranges. Of course, the other side of the coin is that even a "private" network could be anything from your private home to your workplace intranet to an airport wi-fi hotspot, and can't be assumed to be safe from snooping/injection.
As for your particular hassle, it makes sense to me for a browser to mark sites that mix http/https as insecure from the point of view that once the data is on the plain http page you can no longer be sure that it won't be handed off over an unencrypted connection some place else by some rogue javascript.
Perhaps a rather drastic change like this will lead to more user friendly ways to install self-signed certificates on home networks. Say, a method for routers to discover certificates announced by devices on the network to list them in its management interface where you can enable or disable them.
It would require you to run outside services to support it, but you could most certainly rig something together that lets each "installation" claim randomsubdomain.domainyoucontrol.com, phone home with the local network IP of the "installation", phone home the Lets Encrypt DNS-01 details, and then get a valid certificate for a domain that points to the local instance.
If you control the local DNS server, you can install a certificate for localserver.example.com on the device, then have the DNS server return a local IP for localserver.example.com.
It’s not impossible to obtain an SSL certificate for a local connection. You can add an entry for a fictional domain that maps to localhost in your hosts file, then self-sign a certificate for that domain and install it.
I sort of have the reverse problem. I would like to use a websocket to connect to an insecure host on the local network from a secure context. I realise that this is incredibly niche and would probably need independent confirmation through the browser to prevent abuse. But it's needed to connect to a local weechat instance from Glowing Bear, which is essentially a web UI for WeeChat, an IRC client: https://github.com/glowing-bear/glowing-bear. Right now we have an https and a non-https-version of the website, which is arguably even worse.
If you have a web service counterpart you should consider looking into WebRTC... the SDP exchange can happen through your site, and then the peers connect directly.
Your problem is isolated to local development servers, which can easily be exempted from the blocking of non-HTTPS sites. The potential privacy/security gains totally outweigh the inconvenience of seeing "Not Secure" in the URL bar of your browser on an app you are developing.
This isn't as big of a problem as you'd like to believe. IMHO.
Certificates in which the subject is one or more IP addresses rather than DNS names _are_ a thing, but not that many get issued by public CAs and almost certainly your laundry list of objections about how you don't want to require any setup or Internet connection will ensure you wouldn't be able to qualify.
You are deluding your users if you convey the idea that home networks are separated from the internet, or that traffic on a home network is safe and doesn't need TLS. Can't you just put up a domain and give your users subdomains on it?
You also could stop misusing the browser as an application frontend, and write a proper frontend with a cross-platform toolkit, and distribute that.
I don't understand why developers so often choose the browser as a frontend. Are there better rationales besides having at least some frontend for tyrant-controlled devices like iOS ones, and just using the skills one already has?
For the first, just tell the people to get proper devices.
Because of the second reason, I view the tech giants' JavaScript schooling efforts so negatively. They lead to masses of people using JavaScript where it shouldn't be used.
I think it can be summed up in one old but very relevant-in-our-times quote: "Those who give up freedom for security deserve neither."
At first, the idea that something is being done "for your safety and security" sounds good, but like all utopian goals, it has deeper connotations that are truly dystopian.
As mentioned in another of the comments here, this is yet another instance of companies using the "more secure" argument to gain control over the masses and ostracise anything they don't like. They're harnessing fear and exploiting it to their advantage.
To give an analogy in the real world, we don't lock ourselves in bulletproof cages and expend great efforts in hiding from others (for the most part), and I'm sure if your car's GPS indicated locations with high crime rates as "not safe" and prevented you from going there, there would be much outrage. We shouldn't let companies (and governments, which try very hard and unfortunately don't always fail) dictate every detail of how we should live our lives offline, and the same should apply online.
There's a very long tail of sites, many sadly disappearing from Google[1], of old yet extremely useful information, which are probably going to stay HTTP for the foreseeable future. I made a comment about this in a previous "HTTP/S debate" article:
You have javascript disabled. I do not. I happen to use plenty of sites that require javascript, with HN being one of them. Most of the users out there do not have javascript disabled either. How do you account for their security?
> and I'm sure if your car's GPS indicated locations with high crime rates as "not safe" and prevented you from going there, there would be much outrage.
This is just a warning though, no one is preventing you from "going there", it's merely a warning that it might not be safe. Your analogy is more similar to "there's a slow down on that road, let me navigate you somewhere else, but feel free to go there if you want". You have exactly the same amount of freedom.
How exactly does using HTTPS "gain control over the masses"? Google does not control the HTTPS infrastructure.
It's not like those sites are gone either. There's always archive.org.
I think your last statement highlights the real issue here. Everyone is afraid of malicious javascript. I don't know why they're conflating that with http injection.
The most straightforward thing to do would be to disable javascript on non https sites by default or warn if a nonhttps site has javascript. Most of the old sites we want to keep around don't have javascript in them (or much javascript in them).
Ideally people should only be enabling javascript on sites they trust (and are running https for "real" "trust") but having a trusted whitelist for enabling javascript brings back your big brother arguments.
> I think it can be summed up in one old but very relevant-in-our-times quote: "Those who give up freedom for security deserve neither."
Now if you could explain to me how using secure connections and showing a correct warning for insecure connections is restricting your freedom that'd be interesting.
My ISP offers security certificates for $145 more than I currently pay per year. The result is that I'll have to pay up, or face a drop off in traffic for my sites from people who will be too scared to view them because of this new browser-based warning... My sites are all public information, no secure data is displayed on them, and there are no user accounts beyond those of my team's editing accounts. It would be sort of overkill to require HTTPS on them.
Cloud hosting is much more expensive than my current hosting plan. It also seems highly convenient for ISPs that HTTP will be phased out: either way, ISPs make a lot more money out of the web site business under the newly required standard.
This is the future we knew was coming, where it becomes so expensive for individuals to do the same as companies do. It's how Radio, TV, and many other things were taken away over the years, it simply became too much of a legal hurdle and way too expensive to run until large companies became the only channel owners.
It's just history repeating itself, but now to shut out individual web site and application makers who don't have resources to compete with big business. :\
> Now consider that things like Meltdown and Spectre have JavaScript PoCs. How is this controversial?
Please explain to me how Javascript delivered from a malicious ad delivered over HTTPS is somehow safer. Most malicious code is delivered with the help of the website.
I would have nothing against it if browsers just disabled Javascript on plain-HTTP web sites. I do it anyway by default, enabling it on domains where it serves some useful purpose. Treating plain-HTTP content as untrusted makes absolute sense (though I don't think that HTTPS really makes a https://random.l33t.site/ a trusted source of anything).
The old simple plain-HTTP plain-HTML web is still useful and practical for showing text (don't care much even about CSS - HTML 3.2 is perfectly suitable for showing readable information). It seems to become a victim of collateral damage in the pursuit of "better web", which is sad.
My only concern is that the end user can always control which Certificate Authorities his/her browser accepts, and that anyone can set up their own CA. It seems like both of these conditions are vital for the future of decentralized web.
People who cheer for tons of stupid fucking HTTP warnings all over the place really bug me to be honest. Anyone who uses the web needs to be taught the difference between a secure and insecure connection and told where to look for signs (this assumes we burn stupid shit browsers that look to mangle URL bar by removing data from it).
The grandma will not make a better decision if you put in another dumb warning like this one https://i.imgur.com/rxmyWtF.png to waste more and more of my time when I enter my password on the same website over and over. I KNOW IT'S INSECURE, JUST STFU already.
I'm at the point where I will look to build my browsers from source after removing that shit (along with removing inability to add certificate exceptions for certain situations). Good thing they're open source.
I don't get how so many people miss the point here, but alas.
It's actually not so complicated: obviously, there's nothing actionable for the user to do here. The message is for the user, but only indirectly: it's there to push developers toward better practices. Your boss may not care about encryption or privacy, but will definitely care about hundreds of phone calls asking why users are warned that the form is insecure when they try to log in.
With plenty of obscure pages accepting everything from passwords to credit card numbers on plain HTTP, this is important. There may not be anyone knowledgeable enough browsing the page to catch this, but if any end user can see that what the site is doing is wrong, then it's much more likely it will be discovered.
While mostly not actionable directly for users, one thing a user can, and probably should, do is close the page, if they can.
> this assumes we burn stupid shit browsers that look to mangle URL bar by removing data from it
That's just about every browser now. I think that full URLs are only available in Firefox, and only if you set `browser.urlbar.trimURLs` to `false` in about:config. Hiding URL information is a very bad trend.
If this is just a warning, that's one thing. If it becomes a solid block, it means more hassle for everyone running an intranet, and a huge liability for millions of long-lived devices with embedded web servers for browser-based configuration.
Actually, I want a way to turn off the warning on my intranet, because of all the printers, sensors, and control units on it that are not going to get upgraded. I don't think I can find an HVAC unit or sensors with HTTPS, and I doubt anyone will authorize a replacement costing tens of thousands of dollars. I do not want a call every time people have to deal with these devices and get a scary warning. Telling them to ignore it is not going to instill the right attitude for when they go on the internet.
The fact that people want to force HTTPS really bugs me. HTTPS centralizes publication rights to browser vendors and cert vendors. Why should you require permission before publishing content on the web? IMHO, they should rename the S in HTTPS to $.
I believe that Google's intentions here are to block other pass-through internet entities from collecting advertising data. Obviously, Google would never encrypt user data so that they couldn't mine it. Personally, I am more worried about Google mining our data than some rinky dink ISP.
Also, could you link to the PoCs? I have not seen a reliable PoC, but maybe I haven't looked that hard. The ones I've seen only work on specific CPUs, and only if certain preconditions are met. But anyway, that is a separate discussion.
I browse a popular gaming forum that has not implemented HTTPS. Chatting with the admin, the reason is simple: Ads.
We can definitely blame the ad networks. Some have switched, but many won't work on https, and websites relying on ad revenue must stay with HTTP or make less money with HTTPS-friendly ad networks.
This hasn’t been true for at least a year. There is zero difference in ad revenue for HTTP vs HTTPS now. Join any major online community for publishers and ask anyone if you don’t believe me.
>Now consider that things like Meltdown and Spectre have JavaScript PoCs. How is this controversial?
Seems totally irrelevant, since any "legit" site with a $10 certificate will still be able to inject malicious code, either by its operators putting it there directly, or by it being hacked -- whether there's a man in the middle or not. And with something like Spectre out there, HTTPS won't do anything.
> However, you have to recognize the fact that you run JITed code from these sites.
I guess browsers could then make an effort to help sites use scripts only when absolutely necessary and give users easy to use tools to disable scripting. But, oh wait, they do the exact opposite.
Or the users could switch to browsers that disable Javascript, third-party cookies by default instead of the ones controlled by advertising and DRM monopolies. But, oh wait, they do the exact opposite.
You don't need HTTPS to verify integrity. HTTPS actually adds attack vectors, complexity, and removes useful functionality like proxies. And like another commenter mentioned, most malware is delivered from an authentic site anyway.
HTTPS evangelists are basically playing a game of political ideology shrouded as concern for safety. I think they care more about their own privacy than they do the functionality, security, and maintenance required for HTTPS sites.
It's also not coincidental that Google has a vested interest in keeping all traffic surrounding their services hidden or obfuscated: traffic content and metadata is money. Google is basically eating the lunch money of ISPs' passive ad revenue. (This is also part of why they want to serve DNS over HTTPS)
How else would you practically verify integrity for web browsing?
What's wrong about caring about your privacy?!
Why the hell do ISPs deserve ad revenue? I don't like Google either, but ISPs that want to tamper with connections to inject their ads can fuck off and die in a fire. That is more unethical than anything Google has ever done.
The imaginary user cited does not need that code in order to "browse memes".
The code is there for advertising, e.g., to attract advertisers as customers by gathering data about users.
Hence the push to HTTPS is for companies that aim to generate revenue from selling access to or information about users to advertisers.
I have no problem with HTTPS on the public web, to the extent that it is the concept of encrypted html pages, and perhaps these are authenticated pages (PGP-signed was an early suggestion).
Encrypt a page of information (a file), sign it with your private key, and then send it over the wire. The wire (network) does not necessarily need to be secure.
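The integrity half of that idea is easy to illustrate without any channel security. A minimal Python sketch (function names are mine; a real scheme would use an actual signature, e.g. PGP, rather than a bare digest): as long as the trusted digest reaches the user over a channel the attacker can't rewrite, the content itself can travel over a hostile wire:

```python
import hashlib
import hmac

def page_digest(content: bytes) -> str:
    """Hex SHA-256 of the page body."""
    return hashlib.sha256(content).hexdigest()

def verify_page(content: bytes, trusted_digest: str) -> bool:
    """True iff the fetched content matches the out-of-band digest."""
    # compare_digest avoids timing side channels on the comparison
    return hmac.compare_digest(page_digest(content), trusted_digest)
```

A man in the middle can still read the page in this sketch; only tampering is detectable, which is the point being made about integrity versus confidentiality.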
However I do have a problem with SSL/TLS.
I would like to leave open the option to not use it in favor of alternative encryption schemes that may exist now or in the future. It seems one allegedly "user-focused" company wants to remove this option. Comply with their choice or be penalized.
The issue I have with TLS is only to the extent TLS is the idea of setting up a "secure channel" to some "authenticated" endpoint (cf. page), with this authentication process firmly under the control of commercial third parties, using an overly complex protocol suite and partial implementation that is continually evolving (moving target) while people scramble to try to fix every flaw that arises out of this complexity.
To the extent it is not what I describe, I have no issue. (That is, I'm pro-TLS.)
We have one company aiming to replace HTTP with their own HTTP/2 protocol, which to no surprise has features that benefit web app developers and the advertisers they seek to attract far more than they benefit users.
Could we design a scheme to encrypt users' web usage that would not benefit advertisers? I think yes. But this is not what is being developed. Encryption today is closely coupled with the "ad-supported web". If we are not careful, this sort of policy pushing by Google could cripple the non-ad-supported web that existed before the company was incorporated.
Encrypted "channels" are not the only way to protect information transferred via the web. TLS is not the only game in town.
The only use I know for captive portals is EULAs, and I'm not sure those ever had legal weight (though obviously IANAL).
But honestly they were starting to be outdated (technologically) even before this. Since a lot of popular sites use HTTPS, I usually have to stop and think of a plain-HTTP site before I can get through. They're just a nuisance at this point.
This is not to defend Google's actions as "altruistic" in any way. But sometimes Google's interests and the public's do align.
When macOS connects to a Wi-Fi network it makes an HTTP connection to captive.apple.com, and if it doesn't get the expected page in response it pops up a little browser window. I much prefer this solution to hijacking page loads in my browser (which tends to have lots of annoying side effects).
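A rough Python sketch of that probe logic (the URL and the "Success" marker follow Apple's known scheme, but treat the exact details as an assumption; the function names are mine):

```python
import urllib.request

PROBE_URL = "http://captive.apple.com/hotspot-detect.html"  # plain HTTP on purpose
EXPECTED_MARKER = "Success"  # the body the probe endpoint normally returns

def is_captive(body: str, marker: str = EXPECTED_MARKER) -> bool:
    """If the expected marker is missing, something rewrote the response."""
    return marker not in body

def probe(url: str = PROBE_URL, timeout: float = 5.0) -> bool:
    """Fetch the probe URL and decide whether we're behind a portal."""
    with urllib.request.urlopen(url, timeout=timeout) as resp:
        return is_captive(resp.read().decode("utf-8", "replace"))
```

Note the probe has to use plain HTTP: a portal can't rewrite an HTTPS response without triggering a certificate error, which is exactly why OS vendors keep one unencrypted endpoint around for this.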
If you have to use an internet provider that does not provide reasonably direct access to the internet, you should tunnel your traffic through the service that does (e.g. a VPN).
The idea that all internet sites have to compensate for the low quality of the last mile of some users simply does not make sense. If a site accepts sensitive input from the users then sure, it needs an authenticated and encrypted connection; but if it serves static content, it may hold the internet infrastructure and the receivers responsible for the correct delivery.
I would have trouble following security advice from a company that was serving mining ads directly from its websites last week.
The main argument in favor of HTTP, from a privacy standpoint, would be that it makes browser fingerprinting harder. HTTPS by itself lets the server identify its clients individually without cookies (e.g. via TLS session resumption).
As for reasons, there are tons of folks with personal websites on providers that only offer expensive yearly certificate options. Most CDN providers don't support Let's Encrypt. Most shared hosts don't. The Cloud Sites platform for multi-site shared hosting (formerly RackSpace, now at Liquid Web) that I use for a dozen of my friends' sites I host for free doesn't. So, essentially, this means that personal hosted websites either stay 'not secure', get ditched, or increase in price quite a lot. Yes, it's just 5 to 10 bucks a month to spin up another instance on Digital Ocean, but it's yet another 'server' to manage when I'd rather decrease the number than increase it.
My only issue is HTTP/2 client and servers only implementing TLS. This makes it unnecessarily hard to reverse-proxy HTTP/2 on a loopback connection where it's reasonable to assume being safe from MITM.
The existence of vulnerabilities doesn’t negate the bullshit security theatre of certificate authorities.
Banning http is a great example of tossing the baby with the bathwater.
Https is better, but there are still valuable use cases for unencrypted web traffic.
I am sorry this bugs you, but please note that your straw man is not the argument I'd make. Sometimes you need a low-hassle web server. Renewing Let's Encrypt certs is not low hassle.
> The existence of vulnerabilities doesn’t negate the bullshit security theatre of certificate authorities.
I agree that the current CA system has flaws, but there are efforts such as Certificate Transparency[1] and DANE[2] attempting to improve or bypass the CA system. That said, just having encryption defeats passive eavesdropping, and even the current CA system of authentication raises the bar for active eavesdropping.
> Banning http is a great example of tossing the baby with the bathwater.
I'm not sure what you mean by that.
> Https is better, but there are still valuable use cases for unencrypted web traffic.
To be clear the ONLY reason this is happening is to make sure that any ad served from Google is not tampered with. This is protection for their money making machine. Plain and simple.
The attack surface of browsers is relatively small if you keep them up to date. Browsers were early in shipping Meltdown and Spectre mitigations.
Running JavaScript is pretty harmless.
On the other hand, HTTPS makes web performance horrible.
As long as this is just a non-obnoxious warning that translates http to something in plain English I can sort of live with it. But as soon as browsers start to inject security warnings, this is going to be very, very bad.
Our company manufactures devices for use in labs and industrial processes. They need complex user interfaces and there is a push to implement them with HTTP, HTML and JS, with the device as the web server. The devices are usually installed in controlled, isolated internal networks.
Our clients run these devices for a long time. 20 or 30 years are not unheard of. They also do not update them if there is no malfunction (they work and our clients love it that way).
Now how the hell do we create a webserver that will work reliably for the next 30 years if we have to embed SSL certificates with built-in expiration dates in the range of 1 to 3 years?
> how the hell do we create a webserver that will work reliably for the next 30 years if we have to embed SSL certificates with built-in expiration dates in the range of 1 to 3 years
If there's a requirement that your code needs to run for 30 years without an update then current web technology is probably the wrong choice.
A common use case is a device serving a status report as a single simple, static, read-only (i.e. the backend has no intended ability to receive data) html file with limited or no markup. This can reasonably be expected to run for 30 years without an update - there's nothing to update, you have a fixed hardware system showing the same type of numbers to the user until it dies.
Serving this over http in a secure network would be reasonable.
Serving this over https would be reasonable if you can embed a certificate with an appropriate lifetime. Which you can't.
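Even if you could embed a long-lived certificate, the device would want to warn its users well before expiry rather than fail cold. A small sketch using the Python stdlib's `ssl.cert_time_to_seconds` (the function name `days_until_expiry` is mine, and this assumes you can read the cert's notAfter field on the device):

```python
import ssl
import time
from typing import Optional

def days_until_expiry(not_after: str, now: Optional[float] = None) -> float:
    """Days left before a cert's notAfter date, e.g. 'Jun 10 00:00:00 2049 GMT'.

    Negative values mean the certificate has already expired.
    """
    expiry = ssl.cert_time_to_seconds(not_after)
    current = time.time() if now is None else now
    return (expiry - current) / 86400.0
```

On a device with a 30-year service life, you would surface this number in the status UI long before browsers start refusing the connection.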
Sadly it's more of a 'wrong' requirement from the customer. Today's enterprise customers expect things to Just Work. They say 'just sell us a goddam appliance, we'll point a browser at it and we'll call it a day!'. I'm quoting real customers' documents here: 'it should be as easy as Apple'. We've done it to ourselves.
But I recently had to watch from the sidelines as our management decided against .Net and Qt as the future stack for user interfaces. In the case of Qt the arguments were clearly misinformed, but I only got to comment on that after the fact. So now we will have to deal with web stacks by decree.
Sadly, medical and industrial "isolated networks" have turned out to be huge cans of worms. Never trust the network. Doubly so when you're faced against threats 30 years into the future.
> Now how the hell do we create a webserver that will work reliably for the next 30 years if we have to embed SSL certificates with built-in expiration dates in the range of 1 to 3 years?
You can use self-signed certificates. Your clients will have to trust them (by updating their stores). That's hardly an ideal solution (deployment and security wise).
More broadly speaking, you can't rely on anything staying the same for 30 years, in terms of infrastructure. Many companies therefore deliver both the devices themselves as well as the systems to control them (i.e. custom laptops/tablets). More costly for everyone involved.
Sidebar: Why are you looking at web technologies if you aren't expecting to make any updates for 20-30 years? Is the interest for speedier development?
Welcome to the world of web compatibility: lots of companies make a nice buck off its quick evolution. And this can be a very, very good thing for you too if you play it well.
For internal use the answer is a self-signed certificate. However, I wouldn't count on web-browser-based software compatibility lasting even 10 years. I think browser vendors will at some point decide to ditch <512-bit certs or some encryption scheme, and one will need to make a cert/server upgrade. One option may be a Firefox ESR [0] requirement to ease adoption.
Moreover, the browser is the UI part of your solution and you need to define some requirements; this should be all right for well-administrated environments. If the user for any reason upgrades/changes the browser but not your device, then that is user-introduced incompatibility. Your EULA should cover that. You should also send notifications, or just show a message in the device panel, telling customers that they need an upgrade if they want to use a given browser past version X.
Customers need to learn that web means edge tech, and that needs frequent upgrades. Their 'isolated internal network' of the past, e.g. one based on WiFi WEP, would be practically a network open to bad actors today.
Actually, in industrial environments that seems like the perfect choice.
Over 30 years, you'll find vulnerabilities in pretty much anything, so you don't want that device on an open network anyway. But you can put it on an isolated VLAN behind a TLS-terminating reverse proxy, configure it so that it's accessible only by that proxy and no other computers, and then all the HTTP traffic stays in the isolated part while the users access the proxy through HTTPS.
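As a sketch of that layout, the TLS-terminating proxy in front of the plain-HTTP device might look like this in nginx (the hostname, cert paths, and device IP are all placeholders, not a vetted production config):

```nginx
server {
    listen 443 ssl;
    server_name hmi.plant.example;

    # Certificate lives on the proxy and can be renewed without touching the device
    ssl_certificate     /etc/nginx/certs/hmi.plant.example.pem;
    ssl_certificate_key /etc/nginx/certs/hmi.plant.example.key;

    location / {
        # Device sits on the isolated VLAN; its plain HTTP never leaves that segment
        proxy_pass http://10.10.50.2:80;
        proxy_set_header Host $host;
    }
}
```

The point is that the short-lived certificate problem moves to a box you *can* update, while the 30-year device keeps serving unchanged HTTP behind it.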
>Our clients run these devices for a long time. 20 or 30 years are not unheard of.
Have you actually had clients run those html/js based devices "for 20 or 30 years" (e.g. have some running since 1997) or is this a totally hypothetical scenario?
About a month ago I attempted to restore a DOS-based industrial HMI panel that we shipped mid-1995. The 2 MiB of proprietary custom 72-pin SIMM flash memory had gone corrupt, and neither we nor the panel's OEM possessed the original software for restoration. We did have a later version of the software from 1997, but it needed a whopping 4 MiB to install. The machine was in active production until about two years ago, according to the customer.
I fully expect most of the systems that we are shipping now to be run for the next twenty years as well. Until security issues start losing customers buckets of money, they're not going to care one bit.
Our customers do indeed run devices from us and from competitors for decades. These lifespans are normal and expected. We currently manufacture and sell devices that were developed 10 to 15 years ago with virtually no changes.
Internally, we have a push to transition user interfaces for future products to HTML/JS over HTTP. This was sold to our management as a solution with long term stability.
Networks which use the private network address blocks are not necessarily trusted networks (consider open Wi-Fi hotspots). Also, private networks do not necessarily use private network address blocks—intranets frequently use publicly-routable IP addresses where possible, as it simplifies linking physically distinct intranets. With IPv6 eliminating the scarcity of public addresses we can expect this to become much more common, even among home users.
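The distinction is easy to check mechanically; for instance, Python's standard `ipaddress` module encodes the RFC 1918 blocks, which is handy when reasoning about what "private" actually covers (a sketch, with arbitrary example addresses):

```python
import ipaddress

# The RFC 1918 blocks (10/8, 172.16/12, 192.168/16) are what most people
# mean by "private network addresses"; is_private flags those (and also
# loopback and link-local), while public addresses come back False.
for addr in ["10.1.2.3", "172.16.0.1", "192.168.1.1", "8.8.8.8"]:
    print(addr, ipaddress.ip_address(addr).is_private)
```

Note that `is_private` says nothing about trust: an open hotspot handing out 192.168.x.x addresses still passes this check.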
So don’t use Chrome 68. Use whatever version you have now. Since you clearly aren’t intending to update anything for “30 years,” then it shouldn’t matter. If you were intending on keeping systems updated, could you reasonably depend on a web browser developer keeping things constant enough for 30 YEARS? If it’s a closed system, you control everything right? So use what works now.
The parent may not be able to dictate browser compatibility, actually. Customers will use whatever they want, and if it doesn't work they'll just whine to the sales rep the next time he tries to sell them something. Let that happen too often and it's a recipe for disaster.
Our clients are unwilling to update device firmware, not us. They will happily upgrade their usual IT infrastructure over time, though. So we cannot realistically dictate browsers or browser versions.
Of all the complexity and unknowns embedded in projects with decades of lifespan, this one in particular has a fairly straightforward solution: roll your own certificate authority, along with the associated certificate distribution and deployment scripts.
This only works as long as SHA-256 certs remain supported, and I doubt they will last the next 30 years.
I know plenty of devices which can't be used correctly any more since SHA-1 and SSLv2/SSLv3 were deprecated, because they generated a CA and certificate with 512/1024-bit keys.
And then we'd have to tell our clients how to add the root cert to their browsers and operating systems. I doubt that this would fly with many customers.
I'm glad this is happening, although I'm more excited for the day when they start very obviously marking password input fields as "NOT SECURE" when they are used on HTTP sites. Although I am genuinely surprised how much Google and others like Mozilla have had to drag many site owners kicking and screaming into HTTPS. I never would have imagined that end-to-end encryption by default would be considered "controversial".
Also, those HTTPS numbers are amazing!
* Over 68% of Chrome traffic on both Android and Windows is now protected
* Over 78% of Chrome traffic on both Chrome OS and Mac is now protected
* 81 of the top 100 sites on the web use HTTPS by default
> Although I am genuinely surprised how much Google and others like Mozilla have had to drag many site owners kicking and screaming into HTTPS
Personally I am very concerned about how much power Google has over the web: all it takes to change how millions of web sites work and look is a random decision by Google.
What bothers me even more is that most people do not seem to care much, because they happen to agree with what Google is doing (so far). However, I think sites should make decisions about their configuration, layout, mobile friendliness and other things because they want to, not because Google forces them by taking advantage of its position and biasing what people see towards what it believes they should see.
I do not like that Google basically dictates how the web should behave.
(Actually it isn't only the web, but also mail: just recently a mail of mine ended up in the spam folder of someone else's GMail account because of who knows which of the 1983847813 things I didn't configure exactly as Google likes on my mail server... and of course the "solution" that many people present to this is to hand all my communications over to Google.)
> However i think that sites should make decisions on their configuration
It's OK when they know what they are doing, but that's not always the case.
Before my trip to Kaua'i I googled some dive shops on the island to book a scuba diving trip. Every dive shop on the island seems to use the same vendor to process online reservations, which has the same form for entering your credit card number, with some text around it saying "it's secure". They are not secure. They are on HTTP: https://twitter.com/fishywang/status/895133987525476354
This is possible because of how ubiquitous the idea of HTTPS-all-the-things is now, not the other way around. Further, HTTP IS insecure (the S literally means secure), so...
You may be lacking SPF, DKIM or DMARC records. The absence of these records is a very reliable way to detect spambots that forge the From: field, so many mail providers now treat such messages as spam by default.
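For reference, these are plain DNS TXT records; a minimal sketch (the domain, IP, selector name, policy, and truncated key are all placeholders):

```
example.com.                      IN TXT "v=spf1 mx ip4:203.0.113.5 -all"
_dmarc.example.com.               IN TXT "v=DMARC1; p=quarantine; rua=mailto:dmarc@example.com"
selector1._domainkey.example.com. IN TXT "v=DKIM1; k=rsa; p=MIIBIjANBgkq..."
```

SPF and DMARC are just zone entries; DKIM additionally requires the sending MTA to sign outgoing mail with the private key matching the published one.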
I do agree with you - it is appalling how much power Google has. I think a bit of perspective is helpful, though.
There hasn't really been much of a span where there wasn't a de facto 900 lb gorilla browser that threw its weight around. I think a lot of us watched this happen, and the various charts on this page are instructive:
It appears that somehow the browser market trends toward an unstable near-monopoly of sorts, at least so far.
So, meet the new boss, same as the old boss. At least https everywhere is a long-term public good that they're willing to take grief over forcing. It beats some of the other unilateral changes various once-dominant browsers forced.
Search engines and browser vendors have had a big influence on configuration, layout, mobile friendliness and other things long before HTTPS All The Things started to gain traction.
It's not random and that it just might serve security is a side effect.
It is about competitive advantage over other Ad Networks which might not implement HTTPS for AdSense. It is about raising both monetary and technical cost of server setup to make Google Cloud offer look even cheaper. It is very self serving.
Required maintenance for a simple, static http site: none.
Required maintenance for a simple, static https site: configure Let's Encrypt and keep the cron job running.
Big difference? Not for some, but it sure is something that offers very little value for very many site owners. Even the top 100 sites are only at 80% https by default, and they do it for a living!
>it sure is something that offers very little value for very many site owners.
Because it's not meant to offer value for a site owner, it's meant to offer value to the user.
When I type a password on your website, I'm the one that has the most to lose there. When I type my credit card or other personal information into a site, I'm the one that will need to spend time and money getting control of my information if it was stolen.
When I am browsing the web and ads are being injected into the HTTP request, or my ISP is dragnet datamining, or a compromised router is injecting malware into every page, I'm the one that loses, not you.
The argument that an insecure website is easier to maintain than a secure website is like saying "a car without airbags is easier to work on". The vast majority of users don't care how easy the site is to maintain; they care about their privacy and security.
And your counterpoint is needing to download and run some open-source software once a month (or automate the process and never touch it again). A few years ago that would have been a much larger list, but developers were listening to the complaints, and realized that the only way to a fully secure web was to make this process easier, so they did!
It's easier than ever to enable HTTPS on every website, and in the vast majority of cases it's a net improvement for users.
Required maintenance for a simple, static https site: Install certbot, press enter a few times, forget about it.
"keep a cron job running" sounds like it you're running the cron job by hand.
>Even the top 100 sites are only at 80% https by default, and they do it for a living!
That's entirely separate from your "simple, static site" example, and yes, rolling any sort of large change out to a big site is a big deal; if there isn't business motivation to do it, it likely won't happen. Google is providing everyone a business motivation by threatening to point out to users that insecure sites are insecure.
If you have a simple static site, you can have Google's Firebase hosting thing do the HTTPS for you, without management, for free. Netlify[1] is another one I see recommended around here decently often which has the same service.
If you are running your own webserver even a static simple site has required maintenance. You need to keep the server and OS patched. So adding a letsencrypt cron job is not any worse than configuring something like Debian's unattended-upgrades.
But I don't think most site owners should be doing even that much. They should just pay for static hosting, which is cheap and ensures somebody else will keep the server, os, and cert all safe.
If you truly have a simple, static site, the required maintenance should most of the time be exactly the same: pay your hosting provider, nowadays they should be providing HTTPS.
A small static site I operate recently got HTTPS with zero action on my part. My hosting provider just did it of their own initiative and notified me that it was now done. I suppose they're going to do whatever ongoing maintenance it requires.
How do I run that cronjob on a simple, static site? I very deliberately create static sites and store them on e.g. s3 to be served without any effort on my part. I do not want to have to run a 24/7 server just for a monthly cronjob!
>I'm glad this is happening, although I'm more excited for the day when they start very obviously marking password input fields as "NOT SECURE" when they are used on HTTP sites.
> I never would have imagined that end-to-end encryption by default would be consitered "controversial"
The whole point of Google's https crusade is to secure users from ISPs profiling their browsing activity, which for them is about eliminating the competition because they still monitor and track everyone and so if they knock the ISPs out they solidify their monopoly position.
HTTPS is not bad but Google's motives (in the context of their business model and monopoly position) are and that gives some people pause.
> The whole point of Google's https crusade is to secure users from ISPs profiling their browsing activity, which for them is about eliminating the competition because they still monitor and track everyone and so if they knock the ISPs out they solidify their monopoly position.
Even if that's Google's motivation, I'm OK with the end result. I already use a VPN on my iPhone when on LTE because Verizon's been caught sniffing and manipulating traffic a few too many times. At least I can (and generally do) opt out of using Google's services, so I genuinely appreciate them helping me out of Verizon's unwelcome inspections.
Wow, I'm genuinely surprised someone managed to spin something like Google advocating for a secure web into a monopoly agenda. I get it, corporations are evil. But not every move. Why can't this simply be something out of goodwill?
"obviously marking password input fields as "NOT SECURE""
- Firefox has been doing this for a while; it confuses some of my users. I am glad it's there, but I wish it did not completely cover the login button on WordPress login forms. Very glad there is a "learn more" attached to this.
"Google and others like Mozilla have had to drag many site owners kicking and screaming"
- I was dragged into this by the Google threats. Spent hours on it. Come to find out, our most popular few pages use a script that just will not function over HTTPS; there's no way to make it happen.
Then I spent hours crafting .htaccess rules to make some pages HTTPS (home and password pages) and force others to stay non-HTTPS (the 5 pages with the chat script we need), plus more hours updating links on all pages and everything,
then come to find out the browsers have a rule where, if your home page is HTTPS-only, it can't pull the sub-pages as non-HTTPS (maybe it's the other way around, it's been a while),
So I had to go and undo all the changes. I've been spending time trying to help develop newer chat scripts that have all the functionality of the old one our users prefer, to no avail. So as Google forces HTTPS on sites to appear in its results, and now to avoid being labeled insecure, we currently have to choose between removing our most popular functions or losing the Google battle completely.
We are still trying to get a newer chat system up and running that has our old familiar functions, but we don't have the resources that google and others have obviously.
We want HTTPS so badly; we love, love, love encryption, the more the better. It just has not been an easy thing for us to implement, and we've tried many things, including pushing our users to newer HTML5-based chat systems and such. Nothing has panned out quite yet. Fingers crossed we make strides in these areas before it gets worse.
No offense, but this can't have come as a surprise. The writing has been on the wall for a very long time. I'm glad that Google is forcing companies' hands here, because it is obvious that if they didn't, some would never "find time" to get it done.
You may want to update the website linked via your HN profile as well. To start with, every page includes unicode replacement characters mixed into the text.
To be completely honest, this kind of counter-argument (not that you are advocating against HTTPS, but bear with me) always reads to me like "people forget passwords, and don't like having to type usernames, so we just rely on the honor system".
Yes, security has downsides, but in my opinion those downsides are well worth it for the benefits.
Similarly: I have a static site (no tracking, no PII) that's currently hosted on a friend's VPS. HTTPS is out, because they're already using it for something else, and Apache can't know which vhost the user is requesting in time to present the appropriate certificate. Maybe there's a workaround for that, but neither I nor my friend know of one; we've both already spent a few hours looking, and I do not feel like spending even more time figuring it out, or moving the site to a dedicated VPS with its own IP.
Basically, on the one hand I'm all happy. On the other hand, I totally do not want to do the work.
And beyond that, all the extra complexity introduced everywhere that you mention, and that I also had happen to me.
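For what it's worth, SNI (Server Name Indication) is the usual way around the one-certificate-per-IP limitation mentioned above: Apache 2.2.12 and later, built against an OpenSSL with TLS extension support, selects the certificate by the hostname the client sends. A sketch with hypothetical hostnames and paths:

```apache
<VirtualHost *:443>
    ServerName site-one.example
    SSLEngine on
    SSLCertificateFile    /etc/ssl/site-one.pem
    SSLCertificateKeyFile /etc/ssl/site-one.key
</VirtualHost>

<VirtualHost *:443>
    ServerName site-two.example
    SSLEngine on
    SSLCertificateFile    /etc/ssl/site-two.pem
    SSLCertificateKeyFile /etc/ssl/site-two.key
</VirtualHost>
```

The caveat is that very old clients which don't send SNI (e.g. IE on Windows XP) are served the first vhost's certificate.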
I'm actually quite shocked that 19 of the top 100 pages still use http by default. For small or internal pages that's fine but top 100 pages on the internet? Any idea which pages they're talking about?
Google's Transparency Report shows a list of which top sites do modern HTTPS by default, which do it but only if you explicitly request the HTTPS site, and which have crappy (e.g. TLS 1.0 only, or they require 3DES) cryptography, and which have nothing at all.
The last category features many Chinese sites. I could speculate about why that is, maybe the Great Firewall gives citizens no reason to bother trying to achieve security, maybe Chinese culture opposes privacy, maybe everybody in China is running Windows 95 still. But whatever the reason, that's an observable fact.
There's also a whole bunch of crappy British tabloid newspapers there. Given their print editions are specifically printed on the worst quality paper that will take print ink, and they are routinely accused of "gutter" journalism, perhaps it isn't a surprise that defending their reputations through the use of encryption isn't a priority? Or you know, maybe British culture... British great firewall... etcetera. No idea.
Fine, as long as they don't eventually make it difficult or impossible to ignore the warnings (as they've done with SSL sites with invalid certs). I have numerous devices with web interfaces that are 100% internal to my network and not reachable from the open Internet, but Chrome still refuses to let me access them (side note: thanks, Firefox, for respecting my decision as a user!). I can envision a near future where Chrome treats HTTP sites the same way.
You can still visit websites that have an invalid CA or invalid certificate DNS match, but if the website is set up for HSTS/HSTS preload then chrome respects the website's decision to not allow insecure connections.
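As a sketch of what the browser actually consumes here, HSTS is just a response header with a handful of directives; a toy parser (deliberately simplified, not Chrome's implementation, and ignoring quoted values):

```python
def parse_hsts(header):
    """Parse a Strict-Transport-Security header value into its directives.

    Simplified: real parsers also handle quoted directive values.
    """
    out = {"max_age": None, "include_subdomains": False, "preload": False}
    for part in header.split(";"):
        part = part.strip().lower()
        if part.startswith("max-age="):
            out["max_age"] = int(part.split("=", 1)[1])
        elif part == "includesubdomains":
            out["include_subdomains"] = True
        elif part == "preload":
            out["preload"] = True
    return out

# The preload-eligible form a site would send:
print(parse_hsts("max-age=31536000; includeSubDomains; preload"))
```

Once a browser has seen such a header over a valid HTTPS connection (or the site is on the baked-in preload list), it refuses plain HTTP and click-through certificate overrides for that host until max-age expires.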
I was accessing an internal router with firmware from the distant past of 2011. Note that this is an internal-only router that connects a couple of trusted subnets, so security isn't an issue and there's no requirement to replace it yet. The problem that bit me yesterday was that I literally could not find a way to get Chrome to open https://ro.ut.er.ip/ because the router's ancient cert is invalid.
I would have been perfectly fine with Chrome alerting me to that fact and providing a "click here to continue anyway" link, but that seems to no longer be an option. FWIW, Safari did the exact same thing. Only Firefox gave me the "I promise that's really what I want to do step aside plz" button I needed.
The problem is when Google goes and throws a commonly used internal-only TLD like .dev in their HSTS list. And of course, the whole problem that HSTS is the hosts file all over again.
That being said, the real fault in that incident is ICANN selling .dev in the first place. They should've been well aware of its common use, and opted not to sell it.
Can you self-sign a certificate from the internal server and install it as trusted on the work computers? Or does Chrome only trust certificates that Google trusts?
Generally speaking, Chrome uses whatever is in the OS trust store, with certain exceptions for CAs that have been naughty (e.g. StartCom, WoSign, Symantec) or subCAs that were revoked via CRLSets. Private CAs present in the OS trust store will generally just work, the only exceptions being things like Superfish.
One small annoyance/drawback with everything moving to https: I travel a fair bit, and hotel wifi usually relies on users connecting to their AP and then using a DNS-based redirect to send the user to the login page. That only happens when on http, as https sites which are redirected get the MITM warning from the browser. I used to be able to just type in "google.com" in the address bar and be redirected accordingly; nowadays I struggle to remember a site I use which isn't https. Looking up the gateway address is kind of a pain too.
example.com has SSL (https://example.com/), so browsers may one day default to using that. I'd really recommend http://neverssl.com/ for this purpose; the homepage explains it's literally designed for situations like captive portals.
Windows 10 detects this by trying to go to msftconnecttest.com in a browser if it detects using some heuristics that the wifi requires signin which should redirect you to the wifi signin page. Android also detects this as well by going to some Google-owned page that redirects you to the wifi signin if it detects that it is needed. What are you using that this is still a problem?
iOS and macOS both have captive-portal detection (in fact I think they pioneered it) but it's not 100% fool-proof and sometimes doesn't show up when it should.
Also supposedly some captive portals trap the well-known URLs used by captive portal detection for whatever reason (which is why Apple uses a huge list of seemingly-random domains)
But the browser could default that to https since that's allowed. If you use a real domain that has no https equivalent you should be safer (neverssl.com is one of those).
This totally sucks for web based services and sites which don't have a (user friendly) chance to use HTTPS.
Think of LAN-only IoT devices which aren't proxied through an external company site, have no domain, are accessed through a local-network IP, and maybe run in an environment with no internet at all.
I wish there were a solution for web-based encryption in this application domain, and that browser vendors would start to think outside their internet-only box... same goes for service workers.
Unfortunately, this is a problem that will never go away, no matter how slowly and gracefully we transition. There is no alternative that allows these devices to continue to operate without friction that doesn’t also enable current device manufacturers to kick the can down the road by releasing new HTTP-only devices.
If we want to keep making the web a safer place, these kinds of cutoffs have to happen. Infinite backward compatibility simply holds everyone back for the sake of decreasingly-relevant devices and irresponsible manufacturers of new hardware and the customers who purchase their products.
so the solution to the warning is to make your offline IoT devices secure, but how do you actually do that? i have an IoT product that runs a web server and needs to be accessible by users when the internet connection isn't available. how do I enable HTTPS on it? (this is not a hypothetical. it's a problem i actually need to solve)
as far as i can tell, my options are:
- install a self-signed cert on the device and force my users to click through all the warnings chrome throws up about untrusted certs
- create my own CA cert and sign the cert with that, and convince the user to install my CA cert as a trusted cert (which is not possible on iOS)
- get a cert signed by a trusted authority, and get the user to add an entry to their /etc/hosts file that maps the domain the cert is valid for to whatever address the device is assigned
- distribute a native (electron?) app that interfaces with my device and trusts my cert, and disallow direct browser access.
- find some sketchy SSL issuer who is willing to issue certs for *.local domains and run an mDNS resolver on my device
- Use HTTP instead of HTTPS and the only downside is a little badge in the address bar saying "not secure"
I'd love to have HTTPS everywhere, but i honestly don't know how to make it happen.
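For the first option, generating the self-signed cert itself is the easy part; a sketch that shells out to the openssl CLI (assumed installed, version 1.1.1+ for `-addext`; the common name, SAN, and address below are placeholders):

```python
import os
import subprocess
import tempfile

def make_self_signed(cn, san, directory):
    """Generate a 10-year self-signed cert/key pair via the openssl CLI.

    Assumes the openssl binary is on PATH (1.1.1+ for -addext support).
    Returns the paths of the cert and key PEM files.
    """
    key = os.path.join(directory, "key.pem")
    cert = os.path.join(directory, "cert.pem")
    subprocess.run(
        ["openssl", "req", "-x509", "-newkey", "rsa:2048", "-nodes",
         "-keyout", key, "-out", cert, "-days", "3650",
         "-subj", f"/CN={cn}", "-addext", f"subjectAltName={san}"],
        check=True, capture_output=True)
    return cert, key

# Example: a LAN-only device reachable by IP and an mDNS name
# (hostname and address are illustrative).
cert, key = make_self_signed(
    "device.local", "DNS:device.local,IP:192.168.1.50", tempfile.mkdtemp())
```

The hard part is the rest of the list above: getting browsers to trust the result without click-through warnings.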
They should be doing the same thing for Javascript then. Release a version of Javascript without the crappy bits and not run the old version of Javascript.
> Think of LAN only IoT devices which aren't proxy through a external company site, have no domain, are accessed through (local area network IP) and maybe run in non internet environment.
As long as it's a “Not Secure” omnibox warning, I don't see it as a problem even there. Now, if they adopted the “block with non-obvious escape hatch” approach used for certificate errors for HTTP, that would be a problem.
I didn't read anything in the article that said that HTTP will stop working, only that it'll be marked as not secure in the new Chrome. If you understand the security risks involved and are okay with continued use of bare HTTP, it shouldn't make any difference to you.
the future of IoT is HTTPS with client side X.509 authentication. you don’t need internet to make that happen. but if you are web based and not using HTTPS... i can only ask why not? internal CAs are free
It's impossible to get a valid SSL certificate for an appliance running within someone's LAN without opening ports, and opening ports would make the appliance even more vulnerable to attack.
So much work for so many people to update so many old websites that will see absolutely zero benefit from serving content via ssl.
I have an old travel blogging site with a few hundred thousand posts in read only mode. Thinking about how much work it was to upgrade my other sites to https, chase down and work around every http request in the code base, purchase and install certs for silly things like cloudfront that you wouldn't think would suck away two days of your life.
I'll probably just let the site die in July. It doesn't make money, so it's going to be a tough decision whether to dump thousands of dollars of otherwise billable time into upgrading it to accommodate google's silly whim.
It makes me a bit sad to hear you are distressed from this news. Would you like some help setting up SSL? We could go over Certbot & Let's Encrypt, as well as provide some advice on things like HSTS, Mixed Content, etc...
My email address is in my profile. I'm happy to help you however I can. I may not be able to make any code-level changes, but there may not be much work that needs to be done.
Edit: I just saw your comment https://news.ycombinator.com/item?id=16338576 -- I understand there's potentially a lot of one-time work, but it could be done very gradually, and with this announcement, I think CDNs and widget makers would adopt SSL as well. Once the site does support SSL, you can make SSL-related upgrades (like disabling old ciphers or protocol versions) without much disruption.
You could always consider a free service like Cloudflare which can sit in front of your site and serve the site via SSL to your customers. Yes, it's still unencrypted between CF and your site, however it would resolve the poor "insecure" UX.
As a bonus, CF also has functionality that can rewrite HTTP URIs to HTTPS.
If you do what you describe, the site will load minus all of its imagery and scripts, since those will be linked from a CDN as http://img.whatever.com/ or whatever. Anything linked with a full URL, no matter how deep in your codebase, will surface at some point in the future and throw up a scary warning for your users.
And you get to find homes for those 3rd party scripts hosted on http only domains.
And in my case I'll probably get to rewrite a Google Maps integration because that will have taken the opportunity to deprecate something important.
There really is a ton of work to pull this off. For every site on the internet more than a few years old.
And again, for zero benefit whatsoever except to clear the scary warning that Google plans on introducing.
If one strongly held that position, one could make a killing selling fixed-price contracts to audit and fix all SSL issues for any website running on any stack, regardless of age.
$1,000, fixed price, guaranteed no Google Chrome warnings or your money back.
Personally, that would not be a business I'd take on. Would you? If so, I (and a lot of other people) have some consulting work for you.
"To continue to promote the use of HTTPS and properly convey the risks to users, Firefox will eventually display the struck-through lock icon for all pages that don’t use HTTPS, to make clear that they are not secure." [1]
Still, the app will forever be marked as insecure. Although it isn't. It is trivial for the user to verify that the connection is secure by comparing the certificate fingerprint with that displayed by the server program she just started.
This sucks. It just seems that Google and co don't care about people running their own decentralised infrastructure; and marking your own local servers as "insecure" does definitively not help.
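The fingerprint comparison really is mechanical; a sketch using only the Python standard library, producing the colon-separated SHA-256 form that browser certificate viewers display:

```python
import hashlib
import ssl

def cert_fingerprint(pem_cert):
    """SHA-256 fingerprint of a PEM-encoded certificate, in the
    colon-separated uppercase hex form browsers show in their
    certificate details dialog."""
    der = ssl.PEM_cert_to_DER_cert(pem_cert)  # strip armor, base64-decode
    digest = hashlib.sha256(der).digest()
    return ":".join(f"{b:02X}" for b in digest)
```

A server could print `cert_fingerprint(open("cert.pem").read())` at startup next to the exception instructions, so the user has something concrete to compare against the browser's error page.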
- We'd like to make an IoT gateway that you can use from a browser.
- To get access to necessary APIs, we have to provide it via HTTPS.
- To get HTTPS we need a certificate. Because no one is going to pay for one, we'll use Let's Encrypt.
- To get a Let's Encrypt cert, we need a verifiable hostname on the public internet. OK, let's offer subdomains on mozilla-iot.com.
- To verify that hostname, Let's Encrypt needs to talk to the gateway. Ok, let's provide a tunnel to the gateway.
- Now the gateway is exposed to the internet and could be hacked. So we need to continuously update it to close vulnerabilities.
So in the end all your IoT devices are reachable from the internet. But hey, you can use Firefox to turn your lights on!
I think fundamentally your issue here is with secure contexts, not with the site labeling. In the end, you can have a site like you describe, but you have to avoid using APIs that require secure contexts.
Any sort of avoidance of this, as by the method you describe ("please ignore the ugly warning you are about to see") is a mistake, because you're helping to train the users to ignore these messages.
> Still, the app will forever be marked as insecure. Although it isn't. It is trivial for the user to verify that the connection is secure by comparing the certificate fingerprint with that displayed by the server program she just started.
Is it, though? Assuming your server hasn't been compromised (nobody is monitoring it to make sure!), and assuming that the self-signed cert cannot be easily exfiltrated, and assuming that they don't do the same thing the next time they get an ugly warning from chase-bank.ru because they're sure that it's spurious -- then maybe?
As for your particular hassle, it makes sense to me for a browser to mark sites that mix http/https as insecure from the point of view that once the data is on the plain http page you can no longer be sure that it won't be handed off over an unencrypted connection some place else by some rogue javascript.
Perhaps a rather drastic change like this will lead to more user friendly ways to install self-signed certificates on home networks. Say, a method for routers to discover certificates announced by devices on the network to list them in its management interface where you can enable or disable them.
Set up a DNS server that, for a domain of 10-0-0-1.$rand.X.com returns 10.0.0.1
Generate DNS challenges for your domain, issue lets encrypt certificates for said domain.
Voila: private, local, publicly trusted, SSL-encrypted.
Have http://10.0.0.1/ redirect to https://10-0-0-1.$rand.x.com/
The biggest issue with this is getting LetsEncrypt to issue you enough certificates.
PS: this idea is somewhat borrowed from Plex.
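The hostname scheme in that setup is purely mechanical; a sketch of the two mappings with a made-up base domain (the real one would carry the per-user random label, and the wildcard cert would cover `*.rand123.example.com`):

```python
import ipaddress

BASE_DOMAIN = "rand123.example.com"  # hypothetical per-user subdomain

def ip_to_hostname(ip):
    """10.0.0.1 -> 10-0-0-1.rand123.example.com (covered by the wildcard)."""
    addr = ipaddress.IPv4Address(ip)  # raises ValueError on garbage input
    return str(addr).replace(".", "-") + "." + BASE_DOMAIN

def hostname_to_ip(host):
    """Inverse mapping, as the custom DNS server would resolve queries."""
    label = host.split(".", 1)[0]
    return str(ipaddress.IPv4Address(label.replace("-", ".")))
```

The DNS server answers any `a-b-c-d.rand123.example.com` query with `a.b.c.d`, so the certificate name matches even though the traffic never leaves the LAN.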
This isn't as big of a problem as you'd like to believe. IMHO.
Deleted Comment
I don't understand why developers so often choose the browser as a frontend. Are there better rationales besides having at least some frontend for tyrant-controlled devices like iOS, and using the skills one already has?
For the first, just tell people to get proper devices.
Because of the second, I view the tech giants' JavaScript schooling efforts so negatively. They lead to masses of people using JavaScript where it shouldn't be used.
I think it can be summed up in one old but very relevant-in-our-times quote: "Those who give up freedom for security deserve neither."
At first, the idea that something is being done "for your safety and security" sounds good, but like all utopian goals, it has deeper connotations that are truly dystopian.
As mentioned in another of the comments here, this is yet another instance of companies using the "more secure" argument to gain control over the masses and ostracise anything they don't like. They're harnessing fear and exploiting it to their advantage.
To give a real-world analogy: we don't lock ourselves in bulletproof cages or expend great effort hiding from others (for the most part), and I'm sure if your car's GPS marked locations with high crime rates as "not safe" and prevented you from going there, there would be much outrage. We shouldn't let companies or governments (and we try very hard, unfortunately not always successfully, to restrain the latter) dictate every detail of how we should live our lives offline, and the same should apply online.
There's a very long tail of sites, many sadly disappearing from Google[1], of old yet extremely useful information, which are probably going to stay HTTP for the foreseeable future. I made a comment about this in a previous "HTTP/S debate" article:
https://news.ycombinator.com/item?id=14751540
Perhaps the only good thing that may come of this is that people will just start completely ignoring "not secure".
And I have JS off by default and whitelisted on a very small number of sites, if you were wondering...
[1] https://news.ycombinator.com/item?id=16153840
> and I'm sure if your car's GPS indicated locations with high crime rates as "not safe" and prevented you from going there, there would be much outrage.
This is just a warning though, no one is preventing you from "going there", it's merely a warning that it might not be safe. Your analogy is more similar to "there's a slow down on that road, let me navigate you somewhere else, but feel free to go there if you want". You have exactly the same amount of freedom.
How exactly is using HTTPS "gaining control over the masses"? Google does not control the HTTPS infrastructure.
It's not like those sites are gone either. There's always archive.org.
The most straightforward thing to do would be to disable javascript on non https sites by default or warn if a nonhttps site has javascript. Most of the old sites we want to keep around don't have javascript in them (or much javascript in them).
Ideally people should only be enabling javascript on sites they trust (and are running https for "real" "trust") but having a trusted whitelist for enabling javascript brings back your big brother arguments.
As another commenter noted, Comcast has been observed to inject content into its customers' traffic; would you rather have that on HTTP, or run HTTPS?
Now if you could explain to me how using secure connections and showing a correct warning for insecure connections is restricting your freedom that'd be interesting.
Cloud hosting is much more expensive than my current hosting plan. It also seems highly convenient for ISPs that HTTP will be phased out: either way, ISPs make a lot more money out of the web-site business under the newly required standard.
This is the future we knew was coming, where it becomes so expensive for individuals to do the same as companies do. It's how Radio, TV, and many other things were taken away over the years, it simply became too much of a legal hurdle and way too expensive to run until large companies became the only channel owners.
It's just history repeating itself, but now to shut out individual web site and application makers who don't have resources to compete with big business. :\
“Those who would give up essential Liberty, to purchase a little temporary Safety, deserve neither Liberty nor Safety”
And it was about taxes.
Please explain to me how Javascript delivered from a malicious ad delivered over HTTPS is somehow safer. Most malicious code is delivered with the help of the website.
This post was written with no javascript enabled.
The old simple plain-HTTP plain-HTML web is still useful and practical for showing text (don't care much even about CSS - HTML 3.2 is perfectly suitable for showing readable information). It seems to become a victim of collateral damage in the pursuit of "better web", which is sad.
Grandma will not make a better decision if you put up another dumb warning like this one (https://i.imgur.com/rxmyWtF.png) to waste more and more of my time when I enter my password on the same website over and over. I KNOW IT'S INSECURE, JUST STFU already.
I'm at the point where I will look to build my browsers from source after removing that shit (along with removing the inability to add certificate exceptions in certain situations). Good thing they're open source.
It's actually not so complicated: obviously, there's nothing actionable for the user to do here. The message is for the user, but only indirectly: it's there to push developers into better practices. Your boss may not care about encryption or privacy, but definitely will care about hundreds of phone calls asking why they are warned that the form is insecure when they try to login.
With plenty of obscure pages accepting everything from passwords to credit card numbers on plain HTTP pages, this is important. There may not be someone browsing the page knowledgeable enough to catch this, but if any end user can know what they're doing is wrong, then it's much more likely it will be discovered.
While mostly not actionable directly for users, one thing a user can, and probably should, do is close the page, if they can.
That's just about every browser now. I think that full URLs are only available in Firefox, and only if you set `browser.urlbar.trimURLs` to `false` in about:config. Hiding URL information is a very bad trend.
I believe that Google's intentions here are to block other pass-through internet entities from collecting advertising data. Obviously, Google would never encrypt user data so that they couldn't mine it. Personally, I am more worried about Google mining our data than some rinky dink ISP.
Also, could you link to the PoCs? I have not seen a reliable PoC, but maybe I haven't looked that hard. The ones I've seen only work on specific CPUs, and only if certain preconditions are met. But anyway, that is a separate discussion.
This is an absurd thing to say when Let's Encrypt exists.
We can definitely blame the ad networks. Some have switched, but many won't work on https, and websites relying on ad revenue must stay with HTTP or make less money with HTTPS-friendly ad networks.
Seems totally irrelevant, since any "legit" site with a $10 certificate will still be able to inject malicious code, either by its operators putting it there directly, or by it being hacked -- whether there's a man in the middle or not. And with something like Spectre out there, HTTPS won't do anything.
I guess browsers could then make an effort to help sites use scripts only when absolutely necessary and give users easy to use tools to disable scripting. But, oh wait, they do the exact opposite.
HTTPS evangelists are basically playing a game of political ideology shrouded as concern for safety. I think they care more about their own privacy than they do the functionality, security, and maintenance required for HTTPS sites.
It's also not coincidental that Google has a vested interest in keeping all traffic surrounding their services hidden or obfuscated: traffic content and metadata is money. Google is basically eating the lunch money of ISPs' passive ad revenue. (This is also part of why they want to serve DNS over HTTPS)
What's wrong about caring about your privacy?!
Why the hell do ISPs deserve ad revenue? I don't like Google either, but ISPs that want to tamper with connections to inject their ads can fuck off and die in a fire. That is more unethical than anything Google has ever done.
The imaginary user cited does not need that code in order to "browse memes".
The code is there for advertising, e.g., to attract advertisers as customers by gathering data about users.
Hence the push to HTTPS is for companies that aim to generate revenue from selling access to or information about users to advertisers.
I have no problem with HTTPS on the public web, to the extent that it is the concept of encrypted html pages, and perhaps these are authenticated pages (PGP-signed was an early suggestion).
Encrypt a page of information (a file), sign it with a public-key signature, and then send it over the wire. The wire (network) does not necessarily need to be secure.
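As a toy illustration of "the wire need not be secure": an HMAC stands in here for the public-key PGP-style signature the comment actually proposes, just so the sketch is self-contained; the key and file names are made up:

```shell
# Toy sketch: sign a page, then verify it after transfer over an
# untrusted wire. A real deployment would use a public-key signature
# (e.g. gpg --detach-sign) so readers don't need a shared secret;
# openssl's HMAC stands in so this runs anywhere.
printf 'hello, plain HTML page\n' > page.html

# "Publisher" computes a tag over the page with a secret key.
tag="$(openssl dgst -sha256 -hmac 'demo-key' page.html | awk '{print $NF}')"

# ... page.html and the tag travel over an insecure network ...

# "Reader" recomputes the tag and compares; a tampered page won't match.
check="$(openssl dgst -sha256 -hmac 'demo-key' page.html | awk '{print $NF}')"
[ "$tag" = "$check" ] && echo "page intact"
```

The point is that integrity and authenticity come from the signature over the file, not from the channel it traveled on.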
However I do have a problem with SSL/TLS.
I would like to leave open the option to not use it in favor of alternative encryption schemes that may exist now or in the future. It seems one allegedly "user-focused" company wants to remove this option. Comply with their choice or be penalized.
The issue I have with TLS is only to the extent TLS is the idea of setting up a "secure channel" to some "authenticated" endpoint (cf. page), with this authentication process firmly under the control of commercial third parties, using an overly complex protocol suite and partial implementation that is continually evolving (moving target) while people scramble to try to fix every flaw that arises out of this complexity.
To the extent it is not what I describe, I have no issue. (That is, I'm pro-TLS.)
We have one company aiming to replace HTTP with their own HTTP/2 protocol, which to no surprise has features that benefit web app developers and the advertisers they seek to attract far more than they benefit users.
Could we design a scheme to encrypt users web usage that would not benefit advertisers? I think yes. But this is not what is being developed. Encryption today is closely coupled with the "ad-supported web". If we are not careful, this sort of policy pushing by Google could cripple the non-ad-supported web that existed before the company was incorporated.
Encrypted "channels" are not the only way to protect information transferred via the web. TLS is not the only game in town.
The only use I know for captive portals is EULAs, and I'm not sure those ever had legal weight (though obviously IANAL).
But honestly, they were starting to be outdated (technologically) even before this. Since a lot of popular sites use HTTPS, I usually have to try to think of a non-HTTPS site before I can get through. They're just a nuisance at this point.
This is not to defend Google's actions as "altruistic" in any way. But sometimes Google's interests and the public's do align.
But hopefully, many captive portals will just die off.
The idea that all internet sites have to compensate for the low quality of the last mile of some users simply does not make sense. If a site accepts sensitive input from the users then sure, it needs an authenticated and encrypted connection; but if it serves static content, it may hold the internet infrastructure and the receivers responsible for correct delivery.
The main argument in favor of HTTP, from a privacy standpoint, would be that it makes browser fingerprinting harder. HTTPS by itself lets the server identify its clients individually without cookies.
Banning http is a great example of tossing the baby with the bathwater.
Https is better, but there are still valuable use cases for unencrypted web traffic.
I am sorry this bugs you, but please note that your straw man is not the argument I'd make. Sometimes you need a low-hassle web server. Renewing Let's Encrypt certs is not low hassle.
I agree that the current CA system has flaws, but there are efforts such as Certificate Transparency[1] and DANE[2] attempting to improve or bypass the CA system. That said, just having encryption defeats passive eavesdropping, and even the current CA system of authentication raises the bar for active eavesdropping.
> Banning http is a great example of tossing the baby with the bathwater.
I'm not sure what you mean by that.
> Https is better, but there are still valuable use cases for unencrypted web traffic.
Like what?
[1] https://en.wikipedia.org/wiki/Certificate_Transparency
[2] https://en.wikipedia.org/wiki/DNS-based_Authentication_of_Na...
On the other hand, HTTPS makes web performance horrible.
Our company manufactures devices for use in labs and industrial processes. They need complex user interfaces, and there is a push to implement them with HTTP, HTML and JS, with the device as the web server. The devices are usually installed in controlled, isolated internal networks.
Our clients run these devices for a long time. 20 or 30 years are not unheard of. They also do not update them if there is no malfunction (they work and our clients love it that way).
Now how the hell do we create a webserver that will work reliably for the next 30 years if we have to embed SSL certificates with built-in expiration dates in the range of 1 to 3 years?
If there's a requirement that your code needs to run for 30 years without an update then current web technology is probably the wrong choice.
Serving this over http in a secure network would be reasonable.
Serving this over https would be reasonable if you can embed a certificate with an appropriate lifetime. Which you can't.
A 30 year old no-updates secure webserver seems almost impossible given current security research progress.
If you can get control over the computer attached to the equipment, you already won. You don't need to exploit the web server.
More broadly speaking, you can't rely on anything to stay the same in 30 years, in terms of infrastructure. Many companies therefore deliver both the devices themselves, as-well as the systems to control them (ie. custom laptops/tablets). More costly for everyone involved.
For internal use, the answer is a self-signed certificate. However, I wouldn't count on browser-based software compatibility lasting even 10 years. I think browser vendors at some point will decide to ditch <512-bit certs or some encryption scheme, and one will need a cert/server upgrade. One option may be requiring Firefox ESR [0] to ease adoption.
Moreover, the browser is the UI part of your solution, and you need to define some requirements; this should be all right for well-administrated environments. If the user for any reason upgrades or changes the browser but not your device, then this is user-introduced incompatibility. Your EULA should cover that. You should also send notifications, or just show a message in the device panel, telling customers they need an upgrade if they want to use a given browser past version X.
Customers need to learn that the web means edge tech, and that needs frequent upgrades. Their isolated internal network of the past, e.g. based on WiFi WEP, would be practically an open network to bad actors today.
[0] https://www.mozilla.org/en-US/firefox/organizations/
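On the lifetime point specifically: for a self-signed certificate the expiry is entirely under the manufacturer's control (whether browsers 30 years from now still accept the key size or signature algorithm is a separate gamble). A sketch, with placeholder names:

```shell
# Generate a self-signed certificate valid for ~30 years (10950 days).
# The CN and file names are placeholders; clients still have to be told
# to trust this cert, and future browsers may cap accepted lifetimes.
openssl req -x509 -newkey rsa:4096 -sha256 -days 10950 -nodes \
  -subj "/CN=device.local" \
  -keyout device.key -out device.crt

# Show the expiry date that was baked in.
openssl x509 -in device.crt -noout -enddate
```

The hard part, as the surrounding comments note, isn't generating such a cert: it's betting that its algorithms stay acceptable to browsers for the device's whole service life.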
I imagine you'll be able to flip a Chrome flag to turn it off anyway.
Over 30 years, you'll find vulnerabilities in pretty much anything, so you don't want that device on an open network anyway. But you can put it on an isolated VLAN behind a reverse proxy, configured so that it's accessible only by that proxy and no other computers; then all the HTTP traffic stays in the isolated part, and users access the proxy over HTTPS.
Have you actually had clients run those html/js based devices "for 20 or 30 years" (e.g. have some running since 1997) or is this a totally hypothetical scenario?
I fully expect most of the systems that we are shipping now to be run for the next twenty years as well. Until security issues start losing customers buckets of money, they're not going to care one bit.
Internally, we have a push to transition user interfaces for future products to HTML/JS over HTTP. This was sold to our management as a solution with long term stability.
I know plenty of devices which can't be used correctly any more since SHA-1 and SSLv2/SSLv3 were deprecated, because they generated a CA+cert with 512/1024-bit keys.
Also, those HTTPS numbers are amazing!
* Over 68% of Chrome traffic on both Android and Windows is now protected
* Over 78% of Chrome traffic on both Chrome OS and Mac is now protected
* 81 of the top 100 sites on the web use HTTPS by default
Personally I am very concerned about how much power Google has over the web - all it takes to change how millions of web sites work and look is a random decision by Google.
What bothers me even more is that most people do not seem to care much, because they happen to agree with what Google is doing (so far). However, I think that sites should make decisions on their configuration, layout, mobile friendliness and other things because they want to, not because they are forced by Google taking advantage of its position and biasing what people see towards what it believes people should see.
I do not like that Google basically dictates how the web should behave.
(Actually it isn't only the web, but also mail - just recently a mail of mine ended up in the spam folder of someone else's GMail account because of who knows which of the 1983847813 things I didn't configure exactly as Google likes in my mail server... and of course the "solution" that many people present to this is to hand all my communications to Google.)
What do you think is a random decision?
> However i think that sites should make decisions on their configuration
It's OK when they have ideas what they are doing, but that's not always the case.
Before my trip to Kaua'i I googled some dive shops on the island to book a scuba diving trip. Every dive shop on the island seems to be using the same vendor to process online reservation orders, with the same form to input a credit card number, and some text around it saying "it's secure". They are not secure. They are on HTTP: https://twitter.com/fishywang/status/895133987525476354
There hasn't really been much of a span where there wasn't a de facto 900 Lb gorilla browser that threw its weight around. I think a lot of us watched this happen, but the various charts on this page are instructive:
https://en.wikipedia.org/wiki/Browser_wars
It appears that somehow the browser market trends toward an unstable near-monopoly of sorts, at least so far.
So, meet the new boss, same as the old boss. At least https everywhere is a long-term public good that they're willing to take grief over forcing. It beats some of the other unilateral changes various once-dominant browsers forced.
It's not random and that it just might serve security is a side effect.
It is about competitive advantage over other Ad Networks which might not implement HTTPS for AdSense. It is about raising both monetary and technical cost of server setup to make Google Cloud offer look even cheaper. It is very self serving.
And by that you mean?
Required maintenance for a simple, static https site: configure let’s encrypt and keep the cron job running.
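For the record, "configure Let's Encrypt and keep the cron job running" usually amounts to a single crontab line; the reload hook here assumes an nginx setup and is only an example:

```shell
# Attempt renewal twice a day; certbot only actually renews certs
# that are within ~30 days of expiry. The deploy hook is
# setup-specific (here, an assumed nginx install).
0 3,15 * * * certbot renew --quiet --deploy-hook "systemctl reload nginx"
```

Once this is in place there is nothing to touch by hand until the server itself changes.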
Big difference? Not for some, but it sure is something that offers very little value for very many site owners. Even the top 100 sites are only at 80% https by default, and they do it for a living!
Because it's not meant to offer value for a site owner, it's meant to offer value to the user.
When I type a password on your website, I'm the one that has the most to lose there. When I type my credit card or other personal information into a site, I'm the one that will need to spend time and money getting control of my information if it was stolen.
When I am browsing the web and ads are being injected into the HTTP request, or my ISP is dragnet datamining, or a compromised router is injecting malware into every page, I'm the one that loses, not you.
The argument that an insecure website is easier to maintain than a secure website is like saying "a car without airbags is easier to work on". The vast majority don't care about how easy it is to maintain the site; they care about their privacy and security.
And your counterpoint is needing to download and run some open-source software once a month (or automate the process and never touch it again). A few years ago that would have been a much larger list, but developers were listening to the complaints, and realized that the only way to a fully secure web was to make this process easier, so they did!
It's easier than ever to enable HTTPS on every website, and in the vast majority of cases it's a net improvement for users.
"Keep a cron job running" sounds like you're running the cron job by hand.
>Even the top 100 sites are only at 80% https by default, and they do it for a living!
That's entirely separate from your "simple, static site" example, and yes, rolling any sort of large change out to a big site is a big deal; if there isn't business motivation to do it, it likely won't happen. Google is providing everyone a business motivation by threatening to point out to users that insecure sites are insecure.
[0] https://firebase.google.com/docs/hosting/
[1] https://www.netlify.com/docs/ssl/
But I don't think most site owners should be doing even that much. They should just pay for static hosting, which is cheap and ensures somebody else will keep the server, os, and cert all safe.
It's better to use a content-hosting service, and as a bonus you don't have to keep cron jobs running in order to get TLS.
Mozilla already does this with Firefox: https://support.mozilla.org/en-US/kb/insecure-password-warni...
The whole point of Google's HTTPS crusade is to keep ISPs from profiling users' browsing activity, which for Google is about eliminating the competition: they still monitor and track everyone, so if they knock the ISPs out, they solidify their monopoly position.
HTTPS is not bad but Google's motives (in the context of their business model and monopoly position) are and that gives some people pause.
Even if that's Google's motivation, I'm OK with the end result. I already use a VPN on my iPhone when on LTE, because Verizon's been caught sniffing and manipulating traffic a few too many times. At least I can (and generally do) opt out of using Google's services, so I genuinely appreciate them helping me opt out of Verizon's unwelcome inspections.
"Google and others like Mozilla have had to drag many site owners kicking and screaming" - I was dragged into this by the Google threats. Spent hours on it. Come to find our most popular few pages use a script that just will not function over https - no way to make it happen.
Then I spent hours crafting htaccess rules to make some pages https (home and password pages) - and some pages forced non-https (the 5 pages we have with needed chat script on them) - more hours into updating links on all pages and everything -
then come to find out the browsers have a function where if your home page is https only then it can't pull the sub pages as non-https (maybe it's the other way around, it's been a while) -
So I had to go and undo all the changes. I've been spending time trying to help develop newer chat scripts with all the functionality of the old one our users prefer - to no avail. So as Google forces HTTPS use on sites to rank in its results, and now to not be labeled as insecure - we currently have to choose between removing our most popular functions from our site or losing the Google battle completely.
We are still trying to get a newer chat system up and running that has our old familiar functions, but we don't have the resources that google and others have obviously.
We want https so bad, we love, love, love more encryption the better. It just has not been an easy thing for us to implement, and we've tried many things, included pushing our users to newer html5 based chat systems and such. Nothing has panned out quite yet. Fingers crossed we make strides in these areas before it gets worse.
Computer goes out of sync with some kind of cache server; welcome to HSTS errors and not being able to surf to any site with cached content there.
Visit customers that have self signed certificates on their guest network, hello LTE to be able to do anything.
Yes, security has downsides, but in my opinion those downsides are well worth it for the benefits.
Visit customers that try to MITM you, get protected. That is, in fact, the point.
Similarly: I have a static site (no tracking, no PII) that's currently hosted on a friend's VPS. HTTPS is out, because they're already using it for something else, and Apache can't know which vhost the user is requesting in time to present the appropriate certificate. Maybe there's a workaround for that, but neither I nor my friend know of any; we've both already spent a few hours looking, and I do not feel like spending even more time figuring it out, or moving to a dedicated VPS with its own IP.
Basically, on the one hand I'm all happy. On the other hand, I totally do not want to do the work.
And beyond that, all the extra complexity introduced everywhere that you mention, and that I also had happen to me.
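For what it's worth, Apache has supported name-based TLS virtual hosts via SNI since httpd 2.2.12 (built against OpenSSL 0.9.8j or later), so two HTTPS sites can share one IP. A sketch with hypothetical hostnames and cert paths:

```apache
# Two TLS vhosts on one IP: the client's SNI extension tells Apache
# which certificate to present. Very old clients without SNI get the
# first vhost's cert. All names and paths below are made up.
<VirtualHost *:443>
    ServerName friends-site.example.org
    SSLEngine on
    SSLCertificateFile    /etc/ssl/friends-site.crt
    SSLCertificateKeyFile /etc/ssl/friends-site.key
</VirtualHost>

<VirtualHost *:443>
    ServerName my-static-site.example.org
    SSLEngine on
    SSLCertificateFile    /etc/ssl/my-static-site.crt
    SSLCertificateKeyFile /etc/ssl/my-static-site.key
</VirtualHost>
```

Whether this applies depends on the Apache version and client mix in question.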
I find it disappointing that there are any sites that large that are using HTTP.
https://transparencyreport.google.com/https/top-sites
The last category features many Chinese sites. I could speculate about why that is, maybe the Great Firewall gives citizens no reason to bother trying to achieve security, maybe Chinese culture opposes privacy, maybe everybody in China is running Windows 95 still. But whatever the reason, that's an observable fact.
There's also a whole bunch of crappy British tabloid newspapers there. Given their print editions are specifically printed on the worst quality paper that will take print ink, and they are routinely accused of "gutter" journalism, perhaps it isn't a surprise that defending their reputations through the use of encryption isn't a priority? Or you know, maybe British culture... British great firewall... etcetera. No idea.
I would have been perfectly fine with Chrome alerting me to that fact and providing a "click here to continue anyway" link, but that seems to no longer be an option. FWIW, Safari did the exact same thing. Only Firefox gave me the "I promise that's really what I want to do step aside plz" button I needed.
That being said, the real fault in that incident is ICANN selling .dev in the first place. They should've been well aware of its common use, and opted not to sell it.
https://www.chromium.org/chromium-os/chromiumos-design-docs/...
In fact, I'm not happy that there is anything responding to example.com at all.
Also supposedly some captive portals trap the well-known URLs used by captive portal detection for whatever reason (which is why Apple uses a huge list of seemingly-random domains)
Think of LAN-only IoT devices which aren't proxied through an external company site, have no domain, are accessed through a local-network IP, and may run in a non-internet environment.
I wish there were a solution for web-based encryption for this application domain, and that browser vendors would start to think outside their internet-only box... same goes for service workers.
If we want to keep making the web a safer place, these kinds of cutoffs have to happen. Infinite backward compatibility simply holds everyone back for the sake of decreasingly-relevant devices and irresponsible manufacturers of new hardware and the customers who purchase their products.
The sooner we get to universal HTTPS the better.
As far as I can tell, my options are:
- install a self-signed cert on the device and force my users to click through all the warnings chrome throws up about untrusted certs
- create my own CA cert and sign the cert with that, and convince the user to install my CA cert as a trusted cert (which is not possible on iOS)
- get a cert signed by a trusted authority, and get the user to add an entry to their /etc/hosts file that maps the domain the cert is valid for to whatever address the device is assigned
- distribute a native (electron?) app that interfaces with my device and trusts my cert, and disallow direct browser access.
- find some sketchy SSL issuer who is willing to issue certs for *.local domains and run an mDNS resolver on my device
- Use HTTP instead of HTTPS and the only downside is a little badge in the address bar saying "not secure"
I'd love to have HTTPS everywhere, but I honestly don't know how to make it happen.
As long as it's a “Not Secure” omnibox warning, I don't see it as a problem even there. Now, if they adopted the “block with non-obvious escape hatch” approach used for certificate errors for HTTP, that would be a problem.
And why would they think outside of their internet only box when they're providing an internet browser?
So much work for so many people to update so many old websites that will see absolutely zero benefit from serving content via ssl.
I have an old travel blogging site with a few hundred thousand posts in read-only mode. I think about how much work it was to upgrade my other sites to HTTPS: chasing down and working around every HTTP request in the code base, purchasing and installing certs for silly things like CloudFront that you wouldn't think would suck away two days of your life.
I'll probably just let the site die in July. It doesn't make money, so it's going to be a tough decision whether to dump thousands of dollars of otherwise billable time into upgrading it to accommodate google's silly whim.
My email address is in my profile. I'm happy to help you however I can. I may not be able to make any code-level changes, but there may not be much work that needs to be done.
Edit: I just saw your comment https://news.ycombinator.com/item?id=16338576 -- I understand there's potentially a lot of one-time work, but it could be done very gradually, and with this announcement, I think CDNs and widget makers would adopt SSL as well. Once the site does support SSL, you can make SSL-related upgrades (like disabling old ciphers or protocol versions) without much disruption.
Let's Encrypt also do free certificates. There's no reason to pay for one.
As a bonus, CF also has functionality that can rewrite HTTP URIs to HTTPS.
If you do what you describe, the site will load minus all of its imagery and scripts, since those will be linked from a CDN as http://img.whatever.com/ or similar. Anything linked with a full URL, no matter how deep in your codebase, will surface at some point in the future and throw up a scary warning for your users.
And you get to find homes for those 3rd party scripts hosted on http only domains.
And in my case I'll probably get to rewrite a Google Maps integration because that will have taken the opportunity to deprecate something important.
There really is a ton of work to pull this off. For every site on the internet more than a few years old.
And again, for zero benefit whatsoever except to clear the scary warning that Google plans on introducing.
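Some of that URL-chasing can at least be mechanized. A rough first pass (the hostnames are examples; each one must be verified to actually serve HTTPS, and the resulting diff reviewed by hand):

```shell
# Make a sample template with a hard-coded http:// asset URL.
printf '<img src="http://img.example-cdn.com/a.png">\n' > page.tpl

# Rewrite known-good CDN hosts from http:// to https:// in place,
# keeping a .bak copy for review.
sed -E -i.bak \
  's~http://(img\.example-cdn\.com|static\.example-cdn\.com)~https://\1~g' \
  page.tpl

cat page.tpl   # now references https://img.example-cdn.com/a.png
```

This doesn't help with third-party scripts that have no HTTPS home at all, which is the genuinely hard part described above.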
OTOH, site will continue to work even if you do not do anything, it will just be marked as not secure.
$1,000, fixed price, guaranteed no Google Chrome warnings or your money back.
Personally, that would not be a business I'd take on. Would you? If so, I (and a lot of other people) have some consulting work for you.
"To continue to promote the use of HTTPS and properly convey the risks to users, Firefox will eventually display the struck-through lock icon for all pages that don’t use HTTPS, to make clear that they are not secure." [1]
[1] https://blog.mozilla.org/security/2017/01/20/communicating-t...