> This initial approach, at least, does not cache the intermediate CNAMEs nor does it care about the CNAME TTL values.
That's a total violation of the standard and will break A LOT of things. Example: my.domain.com -> CNAME ec2-1-2-3-4.aws.com (30s TTL) -> A 1.2.3.4 (30-day TTL).
So Firefox will now cache my.domain.com as 1.2.3.4 for 30 days? Today, when you update the record for my.domain.com, the change is picked up within 30s; with this flawed heuristic it won't expire for 30 days.
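The standard-conforming behavior is to cap the cached answer's lifetime at the smallest TTL anywhere in the CNAME chain, not the final A record's TTL. A minimal sketch (the chain tuples are illustrative):

```python
def effective_ttl(chain):
    """Cache lifetime for a resolved name: the minimum TTL across
    every record followed in the CNAME chain, not just the final
    A record's TTL."""
    return min(ttl for _name, ttl in chain)

# my.domain.com -> CNAME (30s TTL) -> A (30-day TTL)
chain = [("ec2-1-2-3-4.aws.com", 30), ("1.2.3.4", 30 * 24 * 3600)]
print(effective_ttl(chain))  # 30 -- re-resolve after 30s, not 30 days
```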
I don't like the idea of this, but even the implementation is bad. If we're going to do DNS over HTTPS, then there should be a standalone application, and the system should be reconfigured to use it, so all running applications on the system use it.
I mean, do we really want all of our desktop applications to have their own built-in, custom ways of mapping domain names to IP addresses?
[edit] E.g. on Linux, it could install an application with a DNS interface listening on localhost port 53, which would convert each request into a "DNS over HTTPS" request, and resolv.conf would be updated to point at that resolver.
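Such a shim could be sketched as below. Everything here is an illustrative assumption, not an existing tool: the DoH endpoint URL, the unprivileged port (real port 53 needs root), and the single-threaded loop.

```python
import socket
import struct
import urllib.request

DOH_URL = "https://cloudflare-dns.com/dns-query"  # assumed public endpoint

def build_query(name, qtype=1, qid=0x1234):
    """Build a minimal DNS wire-format query: 12-byte header with the
    RD bit set, one question, QCLASS IN."""
    header = struct.pack(">HHHHHH", qid, 0x0100, 1, 0, 0, 0)
    qname = b"".join(
        bytes([len(label)]) + label.encode() for label in name.split(".")
    ) + b"\x00"
    return header + qname + struct.pack(">HH", qtype, 1)

def forward(wire):
    """POST the raw query to the DoH server (RFC 8484 media type) and
    return the raw DNS response."""
    req = urllib.request.Request(
        DOH_URL,
        data=wire,
        headers={"content-type": "application/dns-message",
                 "accept": "application/dns-message"},
    )
    return urllib.request.urlopen(req).read()

def serve(host="127.0.0.1", port=5353):
    # Listen for plain UDP DNS queries and relay them over HTTPS.
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.bind((host, port))
    while True:
        wire, addr = sock.recvfrom(4096)
        sock.sendto(forward(wire), addr)

if __name__ == "__main__":
    serve()
```

With resolv.conf pointed at 127.0.0.1, every application on the system would transparently resolve over HTTPS, with no per-application resolvers needed.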
And, like you said, this is a bad idea. Is there something wrong with efforts like DNSCrypt + DNSSEC? Those are supposed to provide authentication and encryption for DNS without sending everything over HTTP.
Did Mozilla just totally ignore the work that's already been done in this area?
DNSSEC doesn't do any of what DoH does. It doesn't provide query privacy. It doesn't even typically protect the last mile between end-user resolvers and servers; it's a server-to-server protocol. I'm deeply skeptical (to put it nicely) about DNSSEC, but you don't even have to share that perspective to see why DoH is useful --- it covers a set of problems DNSSEC simply doesn't address.
DNSCrypt is a niche protocol that basically does do what DoH does. Very few people use it. You can use it instead of DoH, if you like; just disable DoH in Firefox and set up DNSCrypt with your system resolver.
However, I disagree that it's a bad idea and that the implementation is bad. Regardless of how software _should_ behave, Firefox has to operate in the world where software is actually run by its users. DNS is a source of security vulnerabilities and headaches.
Demanding a higher-level abstraction is not an option for many, but using Firefox often is. This is especially important on mobile, where a lot of people don't have the access or knowledge to set up a system-wide proxy after rooting their phones, but it is very easy to install Firefox mobile.
What about web browser usage on library or campus computers? Often they will have several browsers installed as well.
The point is that making security more available and easier to use where it matters most is a good idea.
I just spent some time searching, and I actually don't see much in the way of clients. Most search results are either talking about how whizz-bang DNS-over-HTTPS is, or about Firefox's implementation.
If you know of a DNS over HTTPS client for Windows, please link it!
I'm using Cloudflare's cloudflared [0] on all of my machines; it's working well and does what you're looking for. A nice bonus is being able to collect metrics from each of the agents in Prometheus.
I am in Indonesia where Reddit, Vimeo, The Pirate Bay and other sites are blocked. I just enabled TRR in Firefox 60 (They mention best support is in 62) and now I have full unblocked access to all those sites. Awesome.
It depends how the block is implemented. If it's done by intercepting DNS requests, it doesn't matter what server you _try_ to reach. There's also the Virgin (UK) approach of allowing the correct DNS response, but tampering with HTTP requests (though switching to DNS-over-HTTPS wouldn't help in that case).
I was already using 1.1.1.1, did not unblock. Would be very curious if the ISPs were actually proxying my DNS requests. Any tips on how I might test that on Ubuntu?
Well, shame on you. All the armchair experts at hacker news say that the implementation is really bad and you should feel really bad for using it. So please stop using it and make them happy.
> There's no way to exclude or white list specific domains
For me, the primary advantage of HOSTS/DNS is the ability to control answers to application queries for addresses and block ads.
This seems to remove all control a user might have through controlling such lookups. Yikes.
I think DOH is useful but in a different way. For example, it is useful for retrieving bulk DNS data using RFC 2616 pipelining, alleviating dependence on piecemeal DNS lookups, thus increasing speed and privacy. Data can be stored locally and refreshed periodically, if necessary (I have been doing this without problems for 15 years). It's also useful for retrieving data from a variety of caches, allowing answers to be compared.
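The compare-across-caches idea could look something like this sketch. The JSON endpoints and response field names are assumptions modeled on the public DoH JSON interfaces, not part of the comment:

```python
import json
import urllib.request

# Assumed public DoH JSON endpoints (illustrative, not exhaustive):
RESOLVERS = {
    "cloudflare": "https://cloudflare-dns.com/dns-query?name={}&type=A",
    "google": "https://dns.google/resolve?name={}&type=A",
}

def lookup(url_template, name):
    """Fetch A-record answers for `name` from one DoH JSON endpoint,
    returned as a sorted list so answer sets compare cleanly."""
    req = urllib.request.Request(
        url_template.format(name),
        headers={"accept": "application/dns-json"},
    )
    body = json.load(urllib.request.urlopen(req))
    return sorted(a["data"] for a in body.get("Answer", []))

def answers_agree(results):
    """True when every cache returned the same answer set."""
    values = list(results.values())
    return all(v == values[0] for v in values)

if __name__ == "__main__":
    name = "example.com"
    results = {r: lookup(url, name) for r, url in RESOLVERS.items()}
    print(answers_agree(results))
```

A disagreement between caches is a useful signal: it can mean a stale cache, geo-dependent answers, or tampering somewhere along the path.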
> TRR doesn't read or care about /etc/hosts
> There's no way to exclude or white list specific domains
Sigh. This is aggressively breaking normal DNS behavior (and will be an absurd hassle for a very large number of organizations, both in terms of extremely normal split-horizon setups and in terms of orgs with regulatory obligations to inspect HTTPS traffic).
Applications should not contain their own encapsulated resolvers, let alone resolvers that default to sending all of my DNS traffic to for-profit companies that have previously experienced massive data leaks (and fun CF fact, they invited the then-CTO of Cambridge Analytica to talk at their Internet Summit event in SF last year).
They're trying to improve the security of a fundamental protocol - if we waited for committees every time we wanted something new, we wouldn't have HTTP2, HTML5 or a dozen other technologies.
I agree they shouldn't take away the "god-mode" /etc/hosts, which is only ever populated very intentionally by sysadmins and power users. If anything, that should be a flag just like the various modes of using TRR.
And finally - it's an open protocol in development, and anyone can set it up who wants to. If you don't want to use Google or Cloudflare, you don't have to. And FWIW millions of people are already using 8.8.8.8 and 1.1.1.1 and Cisco's OpenDNS as their primary resolver. That GOOG and CF are at the forefront of another increment of Internet standards should not be surprising.
Firefox's DOH client ignores /etc/hosts, but it shouldn't be too hard to host your own DOH server [1][2] that you could then configure how you see fit. I can see this pattern becoming widespread someday, and with DOH, people can re-use their experience in setting up webservers.
You think regular end users having to set up and maintain server software in order to force a name for an IP is going to become a widespread pattern? That's horrifying. I don't want to live in that world.
I think there’s great value in DOH caching servers running on home routers; all the benefits of DOH but “regular DNS” between clients and your home router.
"0 - Off (default). use standard native resolving"
...
"5 - Explicitly off. Also off, but selected off by choice and not default."
It seems that the plan for "0 - default" is to be able to switch users to other modes without them knowing it, and that to keep the behavior off, the user must specifically select "5."
No, it is not problematic; it's good engineering. Imagine that in the future DNS over HTTPS is supported by the OS and there is an OS-wide setting for it. Then it will make sense to change the default setting in Firefox to follow the OS-wide setting.
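For reference, the modes quoted above map onto a single Firefox preference; a hedged `user.js` sketch (the pref names follow the TRR documentation; the resolver URI is just an example):

```js
// user.js -- sketch; mode 2 = try TRR first, fall back to native DNS
user_pref("network.trr.mode", 2);
user_pref("network.trr.uri", "https://mozilla.cloudflare-dns.com/dns-query");

// mode 5 = explicitly off by choice, distinct from the "0" default
// user_pref("network.trr.mode", 5);
```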
"I better speculate on the reason here, because surely Daniel is part of a conspiracy meant to destroy the browsing experience of millions"
or...
It could be preparation for when the user gets asked what they want: Firefox can then remember an explicit "no," as compared to no selection ever having been made.
/ Daniel (author of the blog post)
I'm strongly against this. Bypassing the system's DNS is a no-go.
If this passes, it's going to be a nightmare to system administrators. Basically, each and every split horizon will be broken.
But promoting DNS-over-HTTPS in the browser and providing an easy-to-install, separate tool for Windows/OSX to resolve through DNS-over-HTTPS is something I could get behind.
Like how on my network I do use dnscrypt-proxy, so everything is already using DNS-over-HTTPS.
> It also makes it easy to use a name server of your choice for a particular application instead of the one configured globally (often by someone else) for your entire system.
I can see app developers wanting this, but as a user I really hope this doesn't happen. It's bad enough that many applications today manage their own certificate stores; making the next part of internet infrastructure app-specific seems to me a way to more fragmentation and less understanding or oversight of my own system.
If anything we want _more_ trust stores rather than fewer, although I'd certainly take "uses the latest Mozilla NSS trust store" over "I pasted in this list I found on the Internet fifteen years ago and have never updated it" in most applications.
One reason to desire separate trust stores is that your model of trust is almost certainly not "I wish my application trusted exactly the same CAs as [say] the Firefox web browser, and I will incorporate all the same special rules and exceptions as that browser."
Example: Back in 2016 the US government expressed interest in operating its own public CA. This probably won't happen under Trump of course, but in Firefox this would have been no problem if they met its other criteria, it's easily able to accept a new CA and apply constraints to it. So that in Firefox a US Federal Web PKI cert for whitehouse.gov works, but one for gov.uk or google.com does not. But does your application have that logic? Or would it just blindly trust the new CA because Mozilla added it to their trust store?
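The constraint logic described here can be sketched as a tiny per-application policy check. The CA names and the `allowed_suffixes` structure are invented for illustration; they are not Mozilla's actual mechanism:

```python
# Hypothetical per-application trust policy: a known root either carries
# name constraints (a tuple of domain suffixes) or is unconstrained (None).
TRUST_STORE = {
    "US Federal Web PKI": {"allowed_suffixes": (".gov",)},
    "Example Global CA": {"allowed_suffixes": None},  # unconstrained
}

def is_trusted(root_ca, hostname):
    """Trust a certificate only if its root is known AND the hostname
    falls inside that root's name constraints."""
    policy = TRUST_STORE.get(root_ca)
    if policy is None:
        return False  # unknown root: reject outright
    suffixes = policy["allowed_suffixes"]
    if suffixes is None:
        return True   # no constraints on this root
    return any(hostname == s.lstrip(".") or hostname.endswith(s)
               for s in suffixes)

print(is_trusted("US Federal Web PKI", "whitehouse.gov"))  # True
print(is_trusted("US Federal Web PKI", "google.com"))      # False
```

An application that blindly imports a vendor's trust store gets the roots but not this per-root logic, which is exactly the gap the comment describes.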
The other reason is that your application is (especially if you aren't on the ball enough to run your own trust store) probably not able to keep up with the security treadmill, and falling off may be painful.
Example: If your system depended upon SHA-1 certificates to function, but you used the Web PKI CAs from somewhere like Mozilla's NSS store, magically in 2016 no more new certificates were available. No problem for the browser vendors, they had voted for exactly this outcome. Too bad for your application.
The intent is good: to give users of the applications more power in their relation with their employer and state, similar to what the GNU project and the GNU Hurd kernel promote. Of course, there is a danger that developers will misuse this and hardwire their preferred DNS into their application. If this becomes a problem, users will have to either reject the application or apply some MITM remedy.
I'm not very happy we're now going to send all DNS traffic to 6 centralized DNS-over-HTTPS servers[1]. We can't trust our ISP, but we can trust Google and Cloudflare?
I also noticed that when I configure my Android's proxy settings to point at a Privoxy container that routes through a VPN, I still get DNS-hijacked to my provider's "thepiratebay.org has been blocked for you" page -- this only happens in Chrome mobile, not Firefox mobile. I was used to DNS resolving through the proxy server.
Mozilla actually has a contract with CloudFlare to protect user data. It's stricter than CF's normal privacy policy which applies to other users of the DNS-over-HTTPS service. Only 3 types of aggregate information will be kept for more than 24 hours. https://developers.cloudflare.com/1.1.1.1/commitment-to-priv...
That depends on your use case. If you live in a country where you could be prosecuted for making a DNS request to a politically sensitive website, yeah, you're probably better off trusting Google with your DNS history.
[0] https://github.com/cloudflare/cloudflared
They can disable it; any organization that modifies /etc/hosts can also change Firefox's preferences file.
[1] https://github.com/st3fan/tinydoh [2] https://github.com/m13253/dns-over-https
It's disappointing to see this coming from someone so steeped in Internet contributions and history, and to see him believe it to be a good idea.
Bypassing system DNS is not just an enterprise no-go, it probably will end up reducing privacy.
The idea that ISPs sniff DNS is largely a red herring, and furthermore it's already easily addressed by DNSCrypt or DNS over (D)TLS.
Furthermore, ISPs in some countries, like India, force all port 53 traffic to their own censored servers. DNS over TLS won't solve that.
[1] https://github.com/curl/curl/wiki/DNS-over-HTTPS#publicly-av...