CarlHoerberg · 8 years ago
> This initial approach, at least, does not cache the intermediate CNAMEs nor does it care about the CNAME TTL values.

That's a total violation of the standard and will break A LOT of things. Example: my.domain.com -> CNAME ec2-1-2-3-4.aws.com 30s TTL -> A 1.2.3.4 30days TTL.

So Firefox will now cache my.domain.com as 1.2.3.4 for 30 days? When you update the record for my.domain.com, the change today takes effect within 30s, but with this flawed heuristic it won't expire for 30 days.
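The failure mode is easy to model (a toy Python sketch of the two caching policies, not Firefox's actual code):

```python
# Toy model of the caching bug described above: a resolver that follows
# a CNAME chain must not cache the final answer longer than the
# shortest TTL along the chain.

# Chain for my.domain.com: a CNAME with a 30s TTL pointing at an
# A record with a 30-day TTL.
chain = [
    ("CNAME", "ec2-1-2-3-4.aws.com", 30),  # 30 seconds
    ("A", "1.2.3.4", 30 * 24 * 3600),      # 30 days
]

def correct_cache_ttl(chain):
    """Standard semantics: the assembled answer is only valid as long
    as every record in the chain is still valid."""
    return min(ttl for _, _, ttl in chain)

def flawed_cache_ttl(chain):
    """The heuristic criticised above: ignore the intermediate CNAME
    TTLs and keep only the final A record's TTL."""
    return chain[-1][2]

print(correct_cache_ttl(chain))  # 30 (seconds)
print(flawed_cache_ttl(chain))   # 2592000 (30 days)
```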

mike-cardwell · 8 years ago
I don't like the idea of this, but even the implementation is bad. If we're going to do DNS over HTTPS, then there should be a standalone application, and the system should be reconfigured to use it, so all running applications on the system use it.

I mean, do we really want all of our desktop applications to have their own built in custom ways of mapping domain names to IP addresses?

[edit] E.g. on Linux, it could install an application with a DNS interface listening on localhost port 53, which would then convert each request into a "DNS over HTTPS" request, and resolv.conf would be updated to use that resolver.
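As a sketch of the translation step such a forwarder would perform, assuming Cloudflare's documented `application/dns-json` response format (a real daemon would also have to speak wire-format DNS on port 53):

```python
import json
from urllib.parse import urlencode

# Core translation step of a hypothetical localhost:53 forwarder:
# take a queried name, build a "DNS over HTTPS" request URL, and
# extract A-record answers from the JSON reply. Assumes Cloudflare's
# JSON DoH API shape (application/dns-json).

DOH_ENDPOINT = "https://cloudflare-dns.com/dns-query"

def build_doh_url(name, rrtype="A"):
    return DOH_ENDPOINT + "?" + urlencode({"name": name, "type": rrtype})

def parse_doh_json(body):
    """Return (address, ttl) pairs for A-record answers."""
    reply = json.loads(body)
    return [(a["data"], a["TTL"])
            for a in reply.get("Answer", [])
            if a.get("type") == 1]  # type 1 == A record

# Example response body in the documented dns-json format:
sample = ('{"Status":0,"Answer":[{"name":"example.com","type":1,'
          '"TTL":300,"data":"93.184.216.34"}]}')

print(build_doh_url("example.com"))
print(parse_doh_json(sample))  # [('93.184.216.34', 300)]
```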

madmax96 · 8 years ago
Totally agree re: implementation details.

And, like you said, this is a bad idea. Is there something wrong with efforts like DNSCrypt + DNSSec? That's supposed to provide authentication and encryption to DNS without sending everything over HTTP.

Did Mozilla just totally ignore the work that's already been done in this area?

tptacek · 8 years ago
DNSSEC doesn't do any of what DoH does. It doesn't provide query privacy. It doesn't even typically protect the last mile between end-user resolvers and servers; it's a server-to-server protocol. I'm deeply skeptical (to put it nicely) about DNSSEC, but you don't even have to share that perspective to see why DoH is useful --- it covers a set of problems DNSSEC simply doesn't address.

DNSCrypt is a niche protocol that basically does do what DoH does. Very few people use it. You can use it instead of DoH, if you like; just disable DoH in Firefox and set up DNSCrypt with your system resolver.

HurrdurrHodor · 8 years ago
Just a guess but maybe they wanted to build this in a way that it would actually get used.
Leon · 8 years ago
There are lots of Open Source projects that will do what you are asking. Here is the top hit on using bind to do that - https://github.com/wrouesnel/dns-over-https-proxy

However I disagree that it is a bad idea and that the implementation is bad. Regardless of how software _should_ behave, Firefox has to deal with how software is actually run by its users. DNS is a source of security vulnerabilities and headaches.

Demanding a higher-level abstraction is not always an option for many, but using Firefox often is. This is especially important for mobile, where a lot of people don't have the access or knowledge to put in place a system-wide proxy after rooting their phones, but it is very easy to install Firefox mobile.

What about web browser usage on library or campus computers? Often they will have several browsers installed as well.

The point is that making security more available and easier to use where it matters most is a good idea.

novaleaf · 8 years ago
I just spent some time searching, and I actually don't see much in the way of clients. Most search results seem to be talking about how whizz-bang DNS-over-HTTPS is, or talking about Firefox's implementation.

If you know of a DNS over HTTPS client for Windows, please link it!

moderation · 8 years ago
I'm using Cloudflare's cloudflared [0] on all of my machines; it is working well and does what you are looking for. A nice bonus is being able to collect metrics from each of the agents in Prometheus.

[0] https://github.com/cloudflare/cloudflared

jedisct1 · 8 years ago
dnscrypt-proxy is probably the most popular DNS-over-HTTPS client. https://github.com/jedisct1/dnscrypt-proxy
codewithcheese · 8 years ago
I am in Indonesia where Reddit, Vimeo, The Pirate Bay and other sites are blocked. I just enabled TRR in Firefox 60 (They mention best support is in 62) and now I have full unblocked access to all those sites. Awesome.
dtech · 8 years ago
Using an alternative DNS resolver like 8.8.8.8 (Google) or 1.1.1.1 (Cloudflare) could solve that already, and not only in Firefox.
flamemyst · 8 years ago
That will not work. Some ISPs run a transparent DNS proxy (forwarding all UDP port 53 packets to their own DNS server). You need DNSCrypt or a VPN.
stordoff · 8 years ago
It depends how the block is implemented. If it's done by intercepting DNS requests, it doesn't matter what server you _try_ to reach. There's also the Virgin (UK) approach of allowing the correct DNS response, but tampering with HTTP requests (though switching to DNS-over-HTTPS wouldn't help in that case).
gsich · 8 years ago
Or Quad9, but this could be intercepted.
codewithcheese · 8 years ago
I was already using 1.1.1.1, did not unblock. Would be very curious if the ISPs were actually proxying my DNS requests. Any tips on how I might test that on Ubuntu?
eknkc · 8 years ago
They generally inspect DNS packets and block hostnames from resolving correctly. Does not matter the destination.
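One rough way to check this from any machine is to send the same raw query to several unrelated IPs and compare the answers. A sketch using only the Python standard library (the resolver IPs in the comment are just examples):

```python
import socket
import struct

# Probe for a transparent DNS proxy: send the *same* raw query on UDP
# port 53 to several unrelated IPs. If a blocked name gets an identical
# (hijacked) answer everywhere -- or a non-resolver IP answers at all --
# port 53 is being intercepted.

def build_query(name, txid=0x1234):
    """Build a minimal wire-format DNS query for an A record."""
    header = struct.pack(">HHHHHH", txid, 0x0100, 1, 0, 0, 0)
    qname = b"".join(struct.pack("B", len(p)) + p.encode()
                     for p in name.split("."))
    return header + qname + b"\x00" + struct.pack(">HH", 1, 1)  # A, IN

def ask(server, name, timeout=2):
    """Send the query to `server` and return the raw reply (or None)."""
    s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    s.settimeout(timeout)
    s.sendto(build_query(name), (server, 53))
    try:
        return s.recvfrom(512)[0]
    except socket.timeout:
        return None

# e.g. compare ask("1.1.1.1", "reddit.com"), ask("8.8.8.8", "reddit.com")
# and ask("192.0.2.1", "reddit.com") -- 192.0.2.1 is a TEST-NET address
# that should never answer; a reply from it proves interception.
```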
KwanEsq · 8 years ago
Be aware that using it in 60 you may run into frequent crashes from https://bugzilla.mozilla.org/show_bug.cgi?id=1441131
codewithcheese · 8 years ago
Thanks for the heads up, I'll keep that in mind if I start seeing issues. So far so good.
ksec · 8 years ago
Ok that is news to me, why Indonesia blocks it?
flamemyst · 8 years ago
Porn, hate speech, and piracy-related stuff is blocked by the government.
sildur · 8 years ago
Well, shame on you. All the armchair experts at hacker news say that the implementation is really bad and you should feel really bad for using it. So please stop using it and make them happy.
tototomtoboro · 8 years ago
Reddit, Vimeo? What is their justification for blocking those?
textmode · 8 years ago
> Caveats
>
> TRR doesn't read or care about /etc/hosts
>
> There's no way to exclude or white list specific domains

For me, the primary advantage of HOSTS/DNS is the ability to control answers to application queries for addresses and block ads.

This seems to remove all control a user might have through controlling such lookups. Yikes.

I think DOH is useful but in a different way. For example, it is useful for retrieving bulk DNS data using RFC 2616 pipelining, alleviating dependence on piecemeal DNS lookups, thus increasing speed and privacy. Data can be stored locally and refreshed periodically, if necessary (I have been doing this without problems for 15 years). It's also useful for retrieving data from a variety of caches, allowing answers to be compared.

dogecoinbase · 8 years ago
> TRR doesn't read or care about /etc/hosts
>
> There's no way to exclude or white list specific domains

Sigh. This is aggressively breaking normal DNS behavior (and will be an absurd hassle for a very large number of organizations, both in terms of extremely normal split-horizon setups and orgs with regulatory obligations to inspect HTTPS traffic).

Applications should not contain their own encapsulated resolvers, let alone resolvers that default to sending all of my DNS traffic to for-profit companies that have previously experienced massive data leaks (and fun CF fact, they invited the then-CTO of Cambridge Analytica to talk at their Internet Summit event in SF last year).

unethical_ban · 8 years ago
They're trying to improve the security of a fundamental protocol - if we waited for committees every time we wanted something new, we wouldn't have HTTP2, HTML5 or a dozen other technologies.

I agree they shouldn't take away the "god-mode" /etc/hosts, which is only ever populated very intentionally by sysadmins and power users. If anything, that should be a flag just like the various modes of using TRR.

And finally - it's an open protocol in development, and anyone can set it up who wants to. If you don't want to use Google or Cloudflare, you don't have to. And FWIW millions of people are already using 8.8.8.8 and 1.1.1.1 and Cisco's OpenDNS as their primary resolver. That GOOG and CF are at the forefront of another increment of Internet standards should not be surprising.

riquito · 8 years ago
> will be an absurd hassle for a very large number of organizations

they can disable it; any organization that modifies /etc/hosts can also change Firefox's preferences file

niftich · 8 years ago
Firefox's DOH client ignores /etc/hosts, but it shouldn't be too hard to host your own DOH server [1][2] that you could then configure how you see fit. I can see this pattern becoming widespread someday, and with DOH, people can re-use their experience in setting up webservers.

[1] https://github.com/st3fan/tinydoh [2] https://github.com/m13253/dns-over-https

peterwwillis · 8 years ago
You think regular end users having to set up and maintain server software in order to force a name for an IP is going to become a widespread pattern? That's horrifying. I don't want to live in that world.
toomuchtodo · 8 years ago
I think there’s great value in DOH caching servers running on home routers; all the benefits of DOH but “regular DNS” between clients and your home router.
chupasaurus · 8 years ago
Just run local dnscrypt-proxy (it supports DoH) on your machine/router and everything would be fine.
fanf2 · 8 years ago
That [1] implements a very old draft so I doubt it is compatible with Firefox.

Deleted Comment

acqq · 8 years ago
Also problematic:

"0 - Off (default). use standard native resolving"

...

"5 - Explicitly off. Also off, but selected off by choice and not default."

It seems that the plan for the "0 - default" is to switch the users to other modes without the user knowing it, and to keep the behavior off the user must specifically change the option to "5."
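For what it's worth, these modes map onto the `network.trr.mode` preference, so an explicit opt-out can be pinned in a prefs file rather than relying on the "0 - default" value whose meaning may later change (pref name and values as described in the blog post):

```js
// user.js sketch: pin TRR to "explicitly off" (mode 5)
user_pref("network.trr.mode", 5);
```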

anonymfus · 8 years ago
No, it's not problematic, it's good engineering. Imagine that in the future DNS over HTTPS is supported by the OS and there is an OS-wide setting for it. Then it will make sense to change the default setting in Firefox to follow the OS-wide setting.
bagder · 8 years ago
"I better speculate on the reason here because surely Daniel is part of a conspiracy meant to destroy the browsing experience of millions"

or...

It could be preparation for when the user gets asked what they want, so that Firefox can remember an explicit "no" as distinct from no selection ever having been made.

/ Daniel (author of the blog post)

snvzz · 8 years ago
I'm strongly against this. Bypassing the system's DNS is a no-go.

If this passes, it's going to be a nightmare to system administrators. Basically, each and every split horizon will be broken.

But, promoting DNS-over-HTTPS in the browser and providing an easy-to-install, separate tool for Windows/OSX to resolve through DNS-over-HTTPS is something I could get behind.

That's how it works on my network: I use dnscrypt-proxy, so everything is already using DNS-over-HTTPS.

davidu · 8 years ago
This further drives centralization of the Internet and the idea that the Internet == Web == Browser.

It's disappointing to see this coming from someone so steeped in Internet contributions and history and believing it to be a good idea.

Bypassing system DNS is not just an enterprise no-go, it probably will end up reducing privacy.

The idea that ISPs sniff DNS is largely a red-herring, and further it's already easily addressed by DNSCrypt or DNS over (d)TLS.

dannyw · 8 years ago
My ISP in Australia already censors various domain names due to copyright lawsuits. ISPs tampering with DNS is not a red herring, but a real issue.

Furthermore, ISPs in some countries like India force all port 53 traffic to their own censored servers. DNS over TLS won't solve that.

zackbloom · 8 years ago
People also don't realize that their ISP literally sells their DNS traffic. There is a strong market for it, it's not theoretical.
xg15 · 8 years ago
> It also makes it easy to use a name server of your choice for a particular application instead of the one configured globally (often by someone else) for your entire system.

I can see app developers wanting this, but as a user, I really hope this doesn't happen. It's bad enough that many applications today manage their own certificate stores; making the next part of internet infrastructure app-specific seems to me a path to more fragmentation and less understanding of, or oversight over, my own system.

tialaramex · 8 years ago
If anything we want _more_ trust stores rather than fewer, although I'd certainly take "uses the latest Mozilla NSS trust store" over "I pasted in this list I found on the Internet fifteen years ago and have never updated it" in most applications.

One reason to desire separate trust stores is that your model of trust is almost certainly not "I wish my application trusted exactly the same CAs as [say] the Firefox web browser and I will incorporate all the same special rules and exception as that browser".

Example: Back in 2016 the US government expressed interest in operating its own public CA. This probably won't happen under Trump of course, but for Firefox it would have been no problem, provided the CA met its other criteria: Firefox can easily accept a new CA and apply constraints to it, so that a US Federal Web PKI cert for whitehouse.gov works, but one for gov.uk or google.com does not. But does your application have that logic? Or would it just blindly trust the new CA because Mozilla added it to their trust store?
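The per-CA name constraint being described boils down to a trust decision keyed on both issuer and hostname, not just chain validity. A toy sketch, with all CA names and constraint sets purely illustrative:

```python
# Toy model of a per-CA name constraint: the application decides not
# only *which* CAs it trusts, but *for which names* each one may vouch.
# All names below are illustrative, not real trust-store entries.

CONSTRAINED_CAS = {
    "US Federal PKI": {".gov", ".mil"},  # hypothetical constraint set
    "Example Global CA": None,           # None: no name constraint
}

def trusted_for(issuer, hostname):
    """Is a cert from `issuer` acceptable for `hostname`?"""
    if issuer not in CONSTRAINED_CAS:
        return False
    allowed = CONSTRAINED_CAS[issuer]
    if allowed is None:
        return True  # unconstrained CA
    return any(hostname.endswith(suffix) for suffix in allowed)

print(trusted_for("US Federal PKI", "whitehouse.gov"))  # True
print(trusted_for("US Federal PKI", "google.com"))      # False
print(trusted_for("US Federal PKI", "gov.uk"))          # False
```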

The other reason is that your application is (especially if you aren't on the ball enough to run your own trust store) probably not able to keep up with the security treadmill, and falling off may be painful.

Example: If your system depended upon SHA-1 certificates to function, but you used the Web PKI CAs from somewhere like Mozilla's NSS store, magically in 2016 no more new certificates were available. No problem for the browser vendors, they had voted for exactly this outcome. Too bad for your application.

effie · 8 years ago
The intent is good - to give users of the applications more power in their relationship with their employer and state, similar to what the GNU project and the GNU Hurd kernel promote. Of course, there is a danger that developers will misuse this and hardwire their preferred DNS into their application. If this becomes a problem, the users will have to either reject the application or apply some MITM remedy.
j0057 · 8 years ago
I'm not very happy we're now going to send all DNS traffic to 6 centralized DNS-over-HTTPS servers[1]. We can't trust our ISP, but we can trust Google and Cloudflare?

I also noticed that when I configure my Android's proxy settings to point at a Privoxy container that routes through a VPN, I still get DNS-hijacked to my provider's "thepiratebay.org has been blocked for you" page -- this only happens in Chrome mobile, not Firefox mobile. I was used to DNS resolving through the proxy server.

[1] https://github.com/curl/curl/wiki/DNS-over-HTTPS#publicly-av...

sp332 · 8 years ago
Mozilla actually has a contract with CloudFlare to protect user data. It's stricter than CF's normal privacy policy which applies to other users of the DNS-over-HTTPS service. Only 3 types of aggregate information will be kept for more than 24 hours. https://developers.cloudflare.com/1.1.1.1/commitment-to-priv...
apatters · 8 years ago
That depends on your use case. If you live in a country where you could be prosecuted for making a DNS request to a politically sensitive website, yeah, you're probably better off trusting Google with your DNS history.
ectospheno · 8 years ago
The good news is that blocking traffic to those six servers is trivial. You can stick the rule right after the one blocking traffic to port 853.