When I'm on free airport wifi they'll often limit the session to half an hour and then start MITMing your traffic to try to bounce you back out to the sign-in page to agree to the terms and conditions again. When that happens, the redirect won't work if you're trying to browse to an HTTPS URL. Worse, for many sites you can't tell your browser to "just go along with the MITM" because of HSTS.
So in airports I rack my brain for a website that might serve content over HTTP, not HTTPS. Alas, they're getting harder to find, and it's getting harder to keep those airport wifi sessions going. I half considered registering "alwaysinsecure.com" or "alwayshttp.com" so I could reliably find one... Now I'll probably just use qq.com, it's short and easy to remember.
EDIT: Thanks all :-)
Interesting. What's the prescription here: to simply use http://neverssl.com to see if you need to re-authenticate to the captive portal? Is that correct?
It's not a technical or even a standards-setting problem. IIRC there's already an RFC for a DHCP-announced login page. It's a business and legal problem - layers 8 and 9.
Operating system maintainers have added captive portal detection (CPD) to improve the user experience. Hotspot owners often sabotage it by whitelisting Apple's and Google's CPD URLs! Apple and Google have to change these URLs periodically, and 802.11 vendors add the new ones to their whitelists, in a cat-and-mouse game. So the customer's phone, and thus the customer, thinks it's connected, but they end up hitting a captive portal anyway. Why sabotage it? Because the CPD browser closes immediately upon receiving unencumbered Internet access. While many owners just want the legal-liability-limiting "I Agree" button to be tapped, many others want customers to see their store promotion/ad flyer page for longer than a second, place a tracking cookie, and do other "value adds".
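In rough terms, CPD boils down to something like this sketch (Python; http://neverssl.com/ stands in as the probe URL and the marker string is my assumption - in reality Apple fetches its own hotspot-detect page and Google expects an empty 204 from its connectivity-check endpoint):

    # Minimal captive-portal check: fetch a known plain-HTTP page and see
    # whether we got the real thing or a substituted sign-in page.
    import urllib.request

    PROBE_URL = "http://neverssl.com/"  # assumed stable, HTTP-only probe page

    def behind_captive_portal(timeout=5):
        try:
            with urllib.request.urlopen(PROBE_URL, timeout=timeout) as resp:
                body = resp.read()
        except OSError:
            return True  # no connectivity at all; treat as captive/offline
        # A portal intercepts or redirects the request and serves its own
        # page, so the expected marker (assumed) will be missing.
        return b"NeverSSL" not in body

    print("captive portal" if behind_captive_portal() else "online")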
The interests of Wi-Fi network operators and users are not aligned. Right now only equipment manufacturers directly profit off Wi-Fi service. Hotspot owners can only earn money indirectly, by attracting more hotel guests, cafe customers, etc. Some now harvest customers' MAC addresses and in-store walking behavior. Ideally hotspot owners could receive a revenue share for offloading traffic from LTE, which would give them an incentive to be more loyal to users and maintain quality of experience.
I use example.org for that. I wouldn't be surprised if it has to stay available on http for hysterical raisins, such as everyone using it in their tests and outage detectors.
Almost all? You've had access to some pretty crappy captive portals.
In my experience most portals' firewalls block all traffic (at the IP level) except ports 80 and 443, which are transparently redirected to their auth server. Tunnelling isn't an option because you just can't contact anything else.
And it's not like I'm describing something that's hard to do. You've been able to script this stuff in iptables forever.
www.com is a short, easy to remember, parked domain I tend to use. It has some text ads, but is probably reasonably light on bandwidth.
neverssl.com, mentioned below, is one whose name I always seem to struggle to remember when I need it. I end up typing things like nossl.com or neverhttps.com and such...
For a long time I used my bank's homepage to get redirected to the login page. Until they (reluctantly) forced HTTPS. Since then I'm using http://neverssl.com :)
I have public content that I'm willing to share over the web. To offer it over HTTP, I only need a few thousand lines of code on top of TCP. It's realistic to prove that that code has no memory-safety problems.

To offer it over HTTPS, I need to add tens of thousands of lines of extra code related to cryptography. No existing implementation is known to be correct. The popular implementations don't have a strong history of code quality. It's completely reasonable to expect that, over time, new vulnerabilities will be found that will allow attackers to read or write to unintended memory locations on my server.
I totally understand that, by adding a popular but historically unsafe cryptography implementation, I can help web clients to avoid MITM shenanigans while reading my content. MITM indeed is a problem. But, I'm not willing to make that my problem. The vast majority of my customers don't experience MITM data modification while reading my content. The vast majority of my customers wouldn't care if the whole world knew that they have read my content.

Perhaps in the future, code for all the required cryptography layers will be just as provably correct as code for reading the cleartext HTTP protocol. When that day comes, I will add HTTPS. Not before.
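For a sense of scale of the parent's "HTTP on top of TCP" claim, a toy sketch in Python (illustrative only, certainly not the poster's actual server; a real one needs far more care):

    # Toy static-file server speaking just enough HTTP/1.0 over raw TCP.
    import socket
    from pathlib import Path

    DOCROOT = Path("public").resolve()  # hypothetical directory of shared files

    def serve(port=8080):
        with socket.create_server(("", port)) as srv:
            while True:
                conn, _ = srv.accept()
                with conn:
                    # Request line looks like: b"GET /page.html HTTP/1.1"
                    parts = conn.recv(4096).split(b"\r\n", 1)[0].split(b" ")
                    name = parts[1].lstrip(b"/").decode("latin-1") if len(parts) > 1 else ""
                    path = (DOCROOT / (name or "index.html")).resolve()
                    try:
                        if DOCROOT not in path.parents and path != DOCROOT:
                            raise FileNotFoundError  # refuse ../ escapes
                        body = path.read_bytes()
                        status = b"200 OK"
                    except OSError:
                        body, status = b"not found\n", b"404 Not Found"
                    conn.sendall(b"HTTP/1.0 " + status +
                                 b"\r\nContent-Length: %d\r\n\r\n" % len(body) + body)

    serve()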
I can't understand this opinion. So what if it only negatively affects a few users? How do you feel about ML that produces problematic results for a portion of users? Security is correctness.
> you saying that since HTTPS isn’t perfectly secure you’re gonna use the definitely insecure HTTP? What kind of logic is that?
Different sort of security. While I don't agree with OP, the logic here is somewhat sound: he is concerned about the threats against the server. It is reasonable to claim that TLS increases the attack surface, because any vulns in the TLS implementation would be purely additive to the potential vulns in the HTTP server.
HTTP worst case: your reader is harmed.
HTTPS worst case: you yourself are harmed (e.g., if someone finds a zero-day in a cryptography library and uses it to run arbitrary code on your server)
An option is to write your own web server and have the likes of Cloudflare provide TLS (and a bunch of other stuff!) on top of it for free. Best of both worlds.
>The vast majority of my customers don't experience MITM data modification while reading my content.
[citation needed]. You have no idea whether that is the case or not. Too many ISPs inject ads, JavaScript, or other horrible technologies (Flash) into customers' HTTP streams.
Users have no idea whether what you are serving them is what they are receiving. If it's content like code samples, it can be corrupted by malicious actors. If it's political content, the message can be altered.
<hyperbolic>I mean, we all know rurcliped is a Nazi, their website said so the other day. </hyperbolic>
It's worth noting that the HTTP->HTTPS redirect is, in some sense, game over already. A malicious actor upstream can MITM you by intercepting the redirect and serving an HTTP version of the site to the client while speaking HTTPS to the server. sslstrip is one implementation.
A quick solution that covers many cases is to also use HTTP Strict Transport Security (HSTS), and declare it for at least a year. Then once the user visits the site (say, from home) and gets the HSTS header, later visits within that window will always use HTTPS.
The best solution is to get your site into the HSTS preload list. You can do that here: https://hstspreload.org/ - this adds the site to the Chrome list, and most other major browsers (Firefox, Opera, Safari, IE 11 and Edge) have an HSTS preload list based on the Chrome list. Once that happens and the browser updates, users will always use https, even if they ask for http.
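For the record, the header itself is a one-liner. A minimal sketch using the stdlib's wsgiref purely for illustration (normally you'd set this in your web server's config; note that browsers only honor HSTS when it arrives over HTTPS, and preload-list eligibility requires, at the time of writing, a max-age of at least a year plus the includeSubDomains and preload directives):

    # Attach Strict-Transport-Security to every response.
    from wsgiref.simple_server import make_server

    ONE_YEAR = 31536000  # seconds

    def app(environ, start_response):
        start_response("200 OK", [
            ("Content-Type", "text/plain"),
            # Ignored by browsers unless received over HTTPS.
            ("Strict-Transport-Security",
             "max-age=%d; includeSubDomains; preload" % ONE_YEAR),
        ])
        return [b"hello\n"]

    # TLS termination is assumed to happen in front of this toy server.
    make_server("", 8000, app).serve_forever()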
That's why HSTS preloading is what you want to achieve. We've recently added openstreetmap.org and it was a fun project to make sure that everything would properly reply over https.
This is also why it's preferable to use TLS-only ports for SMTP (465) and IMAP (993) instead of the STARTTLS protocol extensions. Mail clients aren't required to enforce STARTTLS and might fall back to plain text when a MITM blocks the extension.
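The difference is easy to see with Python's smtplib (mail.example.com is a placeholder):

    import smtplib
    import ssl

    ctx = ssl.create_default_context()

    # Implicit TLS (port 465): the handshake happens before any SMTP bytes,
    # so there is no plaintext phase for a MITM to tamper with.
    with smtplib.SMTP_SSL("mail.example.com", 465, context=ctx) as s:
        s.noop()

    # STARTTLS (port 587/25): the session starts in plaintext and upgrades.
    # A MITM can strip the STARTTLS capability from the EHLO response, so a
    # client must fail hard instead of falling back to plaintext.
    with smtplib.SMTP("mail.example.com", 587) as s:
        s.ehlo()
        if not s.has_extn("starttls"):
            raise RuntimeError("no STARTTLS offered (server config or a MITM)")
        s.starttls(context=ctx)
        s.ehlo()  # capabilities must be re-read over the encrypted channel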
Indeed the HTTP -> HTTPS redirect is only the first step in solving the problem.
A 301 redirect will offer some lasting protection, since it can be cached, but it's not really that great. The goal here is to take the first step and get onto HTTPS; longer term, sites can consider HSTS and eventually preloading.
The 'web' is kind of for the commons, but HTTPS is just a few steps away from being 'easy'. The steps required are frankly just a little too weird and ugly.
Most people making web sites, even many devs, don't necessarily want to, or have the wherewithal to deal with SSL.
Setting up SSL on AWS recently was a monster pain in the rear for our little nothing project site; granted, a lot of that was AWS-specific.
It needs to be easier. If it were easy, everyone would be doing it.
> many devs, don't [...] have the wherewithal to deal with SSL.
I can buy this for a non-dev clicking "Install Wordpress" in a legacy cpanel of a cheap shared web host (though in that case the web host should be setting up certs automatically for them), but what exactly is complicated about setting up certs for a dev? https://certbot.eff.org/ holds your hand through the entire ~30 second process. It's simpler than most other tasks a dev needs to do while setting up a website.
Sometimes it comes down to whether user-generated content with externally linked resources is allowed on the site. E.g. forums often allow users to embed images, and because users don't care about HTTP vs. HTTPS and just copy the image link from somewhere, you end up with "mixed content" warnings and broken "locks" in the browser address bar all over the place.
The current alternative is to simply use HTTP, which doesn't yield any warnings. Hopefully this will change when Chrome starts marking all non-HTTPS sites as insecure.
certbot has failed continually on my DO instance. It cannot renew my cert from cron, so I have to renew it manually when I remember to. In fact, I had forgotten to for the last several months, so my site gets to display a nice scary warning for anyone who might venture forth and try https. I've searched for how to fix it and nothing comes up. The site is static HTML and I've considered just removing https.
I use a cheap shared web host. My current and previous host both have LetsEncrypt implementations that take a couple of clicks. The former one was just default cpanel I think.
It is difficult if you want to understand the process, though no more so than, say, using Git IMO.
In short, cpanel and other shared-host panels make it as easy as the clicky-clicky Wordpress install.
>but what exactly is complicated about setting up certs for a dev?
Seeing as how he mentioned AWS, it's a bit more complicated if you have a cluster of servers that are automatically managed. You have to set up a reverse proxy and integrate that into your cloud provider's workflow.
ACM combined with the various other services you need, especially Route53 or whatever, plus their byzantine new networking and security rules ... means you need to read a few sections of several different manuals to just do basic things.
AWS has become very complex over the last few years - there was a period where you didn't need admins. Now it seems you do again, just cloud admins, not local machine admins.
Our accuracy rate is currently around 99.6% so we're doing pretty well but of course I don't think it can ever be 100% either.
The biggest thing we've come across so far is geo-sensitive handling of requests. Some sites will redirect you to HTTPS or not based on where you are making the request from! This of course means you might see HTTPS and we see HTTP.
I think it's still fair to include those sites in this case because they aren't serving all traffic securely.
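For the curious, the basic probe is simple. A rough sketch of this kind of check (my guess at the shape of it, not the site's actual scanner):

    # Does http://host/ bounce you to HTTPS? Ask without following redirects.
    import http.client

    def redirects_to_https(host, timeout=10):
        conn = http.client.HTTPConnection(host, 80, timeout=timeout)
        try:
            conn.request("GET", "/")
            resp = conn.getresponse()
            location = resp.getheader("Location", "")
            return resp.status in (301, 302, 303, 307, 308) and \
                   location.startswith("https://")
        finally:
            conn.close()

    # Geo-sensitive sites can answer differently per vantage point, so the
    # same host may pass from one country and fail from another.
    print(redirects_to_https("example.com"))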
Hey, thanks for the extra insight, I do appreciate it. Interesting to know that those sites show these behaviours too. Why would a geolocation-based redirect be useful to them?
It doesn't work every time for me. If I open it in a new private session (Chrome or Firefox) there is no redirection... Probably a faulty configuration on their side.
In Germany I thought it kind of sad that a major automotive forum (motortalk.de) isn't HTTPS-secured. I was asking myself how they handle their users' logins.
Then I checked. They are secured. Not sure since when, but maybe the data from whynohttps isn't as fresh as one might think.
And a lot of bigger press sites (spiegel.de, faz.net, computerbild.de) still aren't secured. Kind of a shame imho.
The main reason big media sites and forums aren't HTTPS-secured is the interests of the advertising community. There was a huge community discussion on sites like golem.de and heise.de, which only recently switched to HTTPS. Login pages are protected almost all the time.
The site I was debugging was a WordPress site that had somehow gotten the images in its opening carousel set to http:// despite the fact that the "header" module had a default of https://. Very useful to just feed it the URL and notice these were the broken images; I could have grep'd through a "show source" page, but whynopadlock.com made it easy for me to identify that it was the header module of a site for which I had little familiarity.
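For anyone who wants to do that scan by hand, a quick-and-dirty sketch (stdlib only; regex scraping is approximate, and whynopadlock.com presumably checks far more than this):

    # List resources a page still references over plain HTTP.
    import re
    import urllib.request

    def find_insecure_urls(page_url):
        with urllib.request.urlopen(page_url) as resp:
            html = resp.read().decode("utf-8", errors="replace")
        # src= is what actually triggers mixed-content warnings; href= links
        # are included only as things worth reviewing.
        return sorted(set(re.findall(r'(?:src|href)=["\'](http://[^"\']+)', html)))

    for url in find_insecure_urls("https://example.com/"):
        print("insecure reference:", url)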
I also created http://httpforever.com/ for this purpose (a page guaranteed to stay on plain HTTP) until such a time that the problem is solved properly.
https://news.ycombinator.com/item?id=13369038
Thanks for making me smile!
(I'm still sad that example.com responds to anything at all. It was originally defined as a completely unused, thus unresponsive, domain.)
I love this site.
1. You don’t have to write any code to use HTTPS, it has already been written.
2. Are you saying that since HTTPS isn’t perfectly secure you’re gonna use the definitely insecure HTTP? What kind of logic is that?
It may not be complicated, but you cannot beat doing nothing.
Our biggest problem with the transition was mixed content issues with hardcoded HTTP URLs hidden away on long-forgotten pages.
https://aws.amazon.com/blogs/aws/new-aws-certificate-manager...
Doesn't seem that bad
Those devs need to be shamed and taught a lesson.
>many train operators don't [...] have the wherewithal to read the safety manual
>many police officers don't [...] have the wherewithal to attend gun safety classes
>many senators don't [...] have the wherewithal to listen to their constituents
Example: http://www.leboncoin.fr/ redirects to https://www.leboncoin.fr/
The most pressing ones are uva.nl (a major university) and at5.nl (a local but relatively large news site).