A word of warning: client-side support of name constraints may still be incomplete. I know it works on modern Firefox and Chrome, but there's lots of other software that uses HTTPS.
This repo links to BetterTLS, which previously audited name constraint support, but BetterTLS only checked name constraints at the intermediate certificates, not at the trust anchors. I reported[1] the oversight a year back, but Netflix hasn't re-engineered the tests.
Knowing how widely adopted name constraints are on the client side would be really useful, but I haven't seen a sound caniuse-style analysis.
Personally, I think the public CA route is better and I built a site that explores this[2].
[1] https://github.com/Netflix/bettertls/issues/19
[2] https://www.getlocalcert.net/
I prefer to assign an external name to an internal device and grab a free SSL cert from Let's Encrypt, using the DNS challenge instead of HTTP, since internal IP addresses aren't reachable by their servers.
Yep. I tried the custom-root-CA approach for a long time, but there were just too many problems with it:
* Loading it into every device was more work than it sounds. We have Android, iOS, Mac, Windows, and Linux, all of which have their own rules.
* Even once loaded, some applications come with their own set of root CAs. Some of those have a custom way of adding a new one (Firefox), others you just had to accept the invalid cert each time, and still others just refused to work.
* I deploy my self-hosted stuff with Docker, which means that not only does each device need to have the root CA added to it but every Docker image that talks to the internal network needs to have it as well. This ends up being a mix of the previous two problems, as I now have to figure out how to mount the CA on an eclectic bunch of distros and I often then have to figure out why the dockerized application isn't using the CA.
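For the Docker side, what eventually sort of worked for Debian-ish images was mounting the CA into the image's trust directory and refreshing the bundle at startup. A sketch, with a made-up image name and binary path:

    # assumes a Debian/Ubuntu-based image with the ca-certificates package
    # installed; Alpine works the same way, but Java apps need keytool instead
    docker run --rm \
      -v "$PWD/home-root-ca.crt:/usr/local/share/ca-certificates/home-root-ca.crt:ro" \
      --entrypoint sh myapp:latest \
      -c 'update-ca-certificates && exec /usr/local/bin/myapp'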
In the end I settled on a DNS-challenge wildcard SSL cert loaded into Caddy, with Caddy terminating TLS for everything that's on my home server. It's way simpler to configure the single server (or even 2-3 servers) than every single client.
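For reference, the shape of that setup (hostnames and the backend address are placeholders; the dns directive needs a Caddy build that includes your provider's DNS plugin, e.g. via xcaddy):

    # Caddyfile: one wildcard cert via DNS challenge, many internal services
    cat > Caddyfile <<'EOF'
    *.home.example.com {
        tls {
            dns cloudflare {env.CF_API_TOKEN}
        }
        @jellyfin host jellyfin.home.example.com
        handle @jellyfin {
            reverse_proxy 192.168.1.10:8096
        }
        handle {
            respond "unknown service" 404
        }
    }
    EOF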
> I deploy my self-hosted stuff with Docker, which means that not only does each device need to have the root CA added to it but every Docker image that talks to the internal network needs to have it as well. This ends up being a mix of the previous two problems, as I now have to figure out how to mount the CA on an eclectic bunch of distros and I often then have to figure out why the dockerized application isn't using the CA.
FWIW, I solve this problem with wildcards + a central reverse proxy for containerized apps. I host most services on a subdomain of the machine that hosts containers, like "xxx.container.internal", "xxx2.container.internal", etc. Instead of each container doing its own SSL I have one central reverse proxy container that binds to 443, and each app container gets put on an internal Docker network with the reverse proxy. The reverse proxy has a wildcard certificate for the host system domain name "*.container.internal" and you can just add an endpoint for each service SNI. I'm using Zoraxy, which makes it very easy to just add a new endpoint with a couple clicks if I install a new app, but this works with lots of other reverse proxies like Caddy, Nginx, etc. If containers need to talk to each other over the external endpoint for some reason and thus need the root CA, you can mount the host system's certificate store into the container, which seems to work pretty well the one or two times I needed to do it.
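Stripped down to plain docker commands, the layout is roughly this (container and image names are made up; check Zoraxy's docs for the real image):

    # one shared network; only the proxy publishes a port
    docker network create proxy-net
    docker run -d --name proxy --network proxy-net -p 443:443 zoraxy-image
    docker run -d --name myapp --network proxy-net myapp:latest
    # the proxy reaches the app at http://myapp:<port> by container name;
    # the app itself never touches TLS or the host's cert store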
I haven't really solved the annoyance of deploying my root CA to all the devices that need it, which truly is a clusterfuck, but I only have to do it once a year so it isn't that bad. Very open to suggestions if people have good ways to automate this, especially in a general way that can cover Windows/Mac/iOS/Android/various Linuxes uniformly since I have a lot of devices. I've experimented with Ansible, but that doesn't cover mobile devices, which are the ones that make it most difficult.
Historically, before wildcard certificates became available for free, this leaked all internal domains to the internet, but now it's mostly a solved problem.
I've used this method for development successfully (generating CAs and certs on Mac with mkcert), but Apple has broken certificates in iOS 18. Root CAs are not showing up in the trust UI on iPhones after you install them. It's a big issue for developers, and has broken some people's E-mail setups as well. Also some internal software deployments.
Apple is aware of it, but it's still not fixed in iOS 18.1.
These are exactly the challenges and toil I ran into over time with my self-hosted/homelab setup. I use regular domains now as well with DNS challenges for Let's Encrypt. I've been experimenting lately with CloudFlare Tunnel + Zero Trust Access as well for exposing only the endpoints I need from an application for local development like webhooks, with the rest of the site locked behind Access.
I do this as well, but be aware that these external names you're using for internal devices become a matter of public record this way. If that's okay for you (it is for me), then this is a good solution. The advantage is also that you run no risk of name clashes, because you actually own the domain.
https://www.merklemap.com/
I decided to try split DNS to avoid leaking the internal IPs, but it turned out a bit more fragile than I imagined.
Android especially is finicky, ignoring your DNS server if it doesn't like your setup. For example, if it gets an IPv6 address, it requires the DNS server to also have an IPv6 address, or it'll use Google's DNS servers instead.
It works now but I'm not convinced it's worth it for me.
> be aware that these external names you're using for internal devices become a matter of public record this way
Yes, I sometimes think about that, but have come to the conclusion that it's not likely to make any difference. If someone is trying to infiltrate my home network, then it's not going to really help them to know internal IP addresses as by the time they get to use them, they're already in.
You can use a wildcard of type *.internal.example.com, or use names that do not relate to the service name if you want to obfuscate the tech stack used.
The only thing public is that you may have an internal network with nodes.
I last looked at LetsEncrypt maybe 8-9 years ago. I thought it was awesome, but not suitable for my internal stuff due to the HTTP challenge requirement, so I went down the self-signed CA route, stuck with that, and didn't really keep up with developments in the space.
It was only recently that someone told me about the DNS challenge, and I immediately ported everything over with a wildcard cert - it's been great!
Let's Encrypt + DNS challenge + a DNS provider whose record-modification API your ACME client supports works fantastically well for getting "real" HTTPS/SSL working for private IP addresses; the automatic renewals make it largely set-and-forget with very little config or setup required.
I've had working validly signed SSL on literally all my private home self-hosted services and load-balancers internally for years this way.
It also easily switches to a production-like setup if you later did decide to host something on the public internet.
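For anyone wanting the concrete shape of it, something like this with acme.sh (Cloudflare shown as an example provider; each provider has its own credential env vars, per acme.sh's dnsapi docs):

    # DNS-01 challenge: no inbound connectivity needed, works for private IPs
    export CF_Token="..."     # scoped API token for the zone
    acme.sh --issue --dns dns_cf -d 'home.example.com' -d '*.home.example.com'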
This sounds like something I'd want to do! Is the idea that you'd have a public domain name like "internal.thatcherc.com" resolve to an internal IP address like 10.0.10.5? I've wondered about setting this up for some local services I have but I wasn't sure if it was a commonly-done thing.
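i.e., presumably just a plain A record in the public zone that happens to point at a private address, so that from anywhere it resolves like this (hypothetical name):

    $ dig +short A internal.thatcherc.com
    10.0.10.5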
Obligatory: if DNS validation is good enough, DANE should've been too. Yes, MITM attacks could potentially ensue on untrusted networks without DNSSEC, but that's perfect-is-the-enemy-of-good territory IMO.
This would allow folks to have .internal with auto-discovered, decentralized, trusted PKI. It would also enable something like a DNSSEC on/off toggle switch for IoT devices to allow owners to MITM them and provide local functionality for their cloud services.
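For flavor, the certificate association DANE uses is just a DNS record derived from the cert. E.g., computing the payload of a "3 1 1" (DANE-EE, SPKI, SHA-256) TLSA record with openssl, to be published at a name like _443._tcp.host.internal:

    # hash of the server cert's SubjectPublicKeyInfo, as carried in the TLSA record
    openssl x509 -in server.crt -noout -pubkey \
      | openssl pkey -pubin -outform DER \
      | openssl dgst -sha256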
DANE rollout was attempted. It didn't work reliably (middleboxes freak out about DNSSEC), slowed things down when it did, and didn't accomplish any security goals (even on its own terms) because it can't plausibly be deployed DANE-only on the modern Internet. Even when the DANE working group came up with a no-additional-RTTs model for it (stapling), it fell apart for security reasons (stripping). DANE is a dead letter.
It happens. I liked HPKP, which was also tried, and also failed.
This would be cool, but I think we're still a far way off from that being an option. DANE requires DNSSEC validation by the recursive resolver and a secure connection from the user's device to that resolver. DoH appears to be the leading approach for securing the connection between the user's device and the resolver, and modern browser support is pretty good, but the defaults in use today are not secure:
> It disables DoH when [...] a network tells Firefox not to use secure DNS. [1]
If we enabled DANE right now, then a malicious network could tell the browser to turn off DoH and to use a malicious DNS resolver. The malicious resolver could set the AD flag, so it would look like DNSSEC had been validated. They'd then be able to intercept traffic for all domains with DANE-validated TLS certificates. In contrast, it's difficult for an attacker to fraudulently obtain a TLS certificate from a public CA.
Even if we limit DANE to .internal domains, imagine connecting to a malicious network and loading webmail.internal. A malicious network would have no problem generating a DANE-validated TLS certificate to impersonate that domain.
[1] https://support.mozilla.org/en-US/kb/dns-over-https#w_defaul...
According to that, it's not supported by Chrome or Firefox.
Yeah, that's what I do. If you use anything other than Cloudflare it's really, really hard to get the authentication plugins going on every different web server though. Every server supports a different subset of providers and usually you have to install the plugins separately. It's a bit of a nightmare. But once it's dialled in it's OK.
I didn't like this approach because I don't like to leak information about my internal setup but I found that you don't even have to register your servers on a public DNS so it's ok. Just the domain has to exist. It does create very temporary TXT records though.
I use Dynu.com as my DNS provider (they're cheap, provide APIs and very fast to update which is great for home IP addresses that may change). Then, to get the certificates, I use https://github.com/acmesh-official/acme.sh which is a shell script that supports multiple certificate and DNS providers. Copying the certificates to the relevant machines is done by a custom BASH script that runs the relevant acme.sh commands.
One advantage of DNS challenge is that it can be run anywhere (i.e. doesn't need to run on the webserver) - it just needs the relevant credentials to add a DNS TXT record. I've got my automation wrapped up into a Docker container.
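In case it helps anyone, my script boils down to something like this (paths and the reload command are illustrative; acme.sh does ship a Dynu dnsapi hook, but double-check the exact hook name and credential env vars in its docs):

    # issue once, then acme.sh's cron job handles renewals and re-runs --reloadcmd
    acme.sh --issue --dns dns_dynu -d 'home.example.com' -d '*.home.example.com'
    acme.sh --install-cert -d 'home.example.com' \
      --key-file       /etc/ssl/private/home.key \
      --fullchain-file /etc/ssl/certs/home-fullchain.pem \
      --reloadcmd      "systemctl reload nginx"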
Not OP but I have a couple of implementations: one using caddyserver[0] as a reverse proxy in a docker-compose setup, and the other is a Kubernetes cluster using cert-manager[1].
[0] https://caddyserver.com/ [1] https://cert-manager.io/
Works great.
In my case everything points to a tailscale operator endpoint, which goes to nginx ingress, which routes to the appropriate pods.
It's very much a set-and-forget solution.
> The Trouble with Name Constraints
> The Name Constraints extension lives on the certificate of a CA but can’t actually constrain what a bad actor does with that CA’s private key
> Therefore, it is up to the TLS _client_ to verify that all constraints are satisfied
> However, as we extended our test suite beyond basic tests we rapidly began to lose confidence. We created a battery of test certificates which moved the subject name between the certificate’s subject common name and Subject Alternate Name extension, which mixed the use of Name Constraint whitelisting and blacklisting, and which used both DNS names and IP names in the constraint. The result was that every browser (except for Firefox, which showed a 100% pass rate) and every HTTPS client (such as Java, Node.JS, and Python) allowed some sort of Name Constraint bypass.
That’s the danger of any solution that requires trusting a self-signed CA. Better to just trust the leaf certificate, maybe make it a wildcard, so you only have to go through the trust-invalid-cert dance once?
The situation has improved since then, see the linked https://news.ycombinator.com/item?id=37544094
I want to be able to import a cert into my browser and specify what to trust it for myself. “Only trust this cert for domain.com”, for example.
The name constraints can give me a hint what it’s designed for, but if I import a cert to MITM devsite.org, I don’t want that cert working for mybank.com.
I did some research, a write-up, and scripting about the state of X.509 Name Constraints, so that people you give your CA cert to don't need to trust you not to MitM them on other domains.
It's packaged into a convenient one-liner to create a wildcard cert under the new .internal TLD.
Please scrutinize!
I use this at home to provide transport encryption of local services in the local WiFi. Friends and family can add the CA root to their devices without having to worry about me MitM'ing their other connections.
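The underlying technique, sketched with plain openssl (a simplified illustration, not necessarily the exact commands the repo uses; -addext needs OpenSSL 1.1.1+, -copy_extensions needs 3.0+):

    # 1) root CA whose certs are only valid under .internal; clients that
    #    enforce the critical nameConstraints extension will reject anything else
    openssl req -x509 -newkey ec -pkeyopt ec_paramgen_curve:P-256 -nodes \
      -keyout ca.key -out ca.crt -days 3650 -subj "/CN=My Internal CA" \
      -addext "nameConstraints=critical,permitted;DNS:.internal"
    # 2) wildcard leaf under the constraint
    openssl req -newkey ec -pkeyopt ec_paramgen_curve:P-256 -nodes \
      -keyout leaf.key -out leaf.csr -subj "/CN=*.internal" \
      -addext "subjectAltName=DNS:*.internal"
    openssl x509 -req -in leaf.csr -CA ca.crt -CAkey ca.key -CAcreateserial \
      -days 398 -copy_extensions copy -out leaf.crt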
For example, my government uses a non-standard CA and some websites rely on it. But importing the CA obviously makes them able to issue google.com and MITM me if they want to. And they already tried, so trust is broken.
I imagine something like generating a separate name-constrained certificate and signing the existing CA with it (I think it's called cross-signing or something like that), then importing things into the OS, expecting that the browser will use the name constraints of the "root-root" certificate. Could it work?
Yes, I do it in my work to restrict my company CA to company servers [1]. You generate your own CA, and cross sign other cert with any constraint you want. It works great, but requires some setup, and of course now you have your own personal CA to worry about.
[1] Yes, company is ok with it, most of my team does it, and this makes everyone more secure. Win-win.
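The mechanics of cross-signing a CA whose private key you don't have look roughly like this (a sketch with made-up filenames; the crucial bit is -force_pubkey, which swaps the real CA's public key into the new cert, and the subject DN in the CSR must byte-match the original for chain building to work):

    # grab the existing CA's public key
    openssl x509 -in their-ca.crt -noout -pubkey > their-ca.pub
    # throwaway key just to produce a CSR carrying the right subject
    openssl req -new -newkey rsa:2048 -nodes -keyout throwaway.key \
      -subj "/C=XX/O=Their Org/CN=Their Root CA" -out dummy.csr
    # re-issue under your own (name-constrained) root, forcing their public key
    openssl x509 -req -in dummy.csr -force_pubkey their-ca.pub \
      -CA my-root.crt -CAkey my-root.key -CAcreateserial -days 365 \
      -extfile <(printf 'basicConstraints=critical,CA:TRUE') -out cross.crt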
Niklas, if you are reading this - it was a pleasure to interview with you some 6 (or so) years ago :) thanks for the script and the research, I will make use of it.
Looks good, but I want to MitM my network. I want youtube.com to redirect to my internal server that only has a few approved videos. My kids do some nice piano lessons from YouTube, but every time I let them they wait until I'm out of the room and then switch to something else. There are lots of other great educational videos on YouTube, but also plenty to waste their time on. (I want this myself as well, since I won't have ads on my internal YouTube server - plus it will add an extra step and thus keep me from getting distracted by something that isn't a good use of my time to watch.)
Increasingly that kind of requirement puts you in the same camp as oppressive nation states. Being a network operator and wanting to MitM your DNS makes you a political actor. Devices you paid for, but don't actually own, will end-run your efforts by using their own hard-coded DNS servers. (See https://pc.nanog.org/static/published/meetings/NANOG77/2033/...)
Fortunately I own my firewall. Though mostly I'm talking about linux machines that I own and control the software on.
Though I fully understand I'm in the same camp as oppressive nation states. But until my kids get older I'm in charge, and I need to set them up for success in life, which is a complex balance of letting them have freedom without allowing them to make too many bad decisions. Not getting their homework done because they are watching videos is one bad decision I'm trying to prevent.
> Devices you paid for, but don't actually own, will end-run your efforts by using their own hard-coded DNS servers.
Not just devices; JetBrains software has hardcoded DNS too. I've had to resort to blocking its traffic entirely because of the sheer number of servers and ports it tries in order to work around my DNS blocking; now I allow traffic only during license/update checks. I'm sure other large vendors do something similar.
https://intellij-support.jetbrains.com/hc/en-us/community/po...
With MikroTik, and presumably other vendors, you can force DNS to your own DNS. I do this so I can Pi-hole everything and see what sneaky things devices are doing.
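On a plain Linux router the equivalent is a NAT redirect of all outbound port-53 traffic to your own resolver, roughly:

    # LAN interface name is illustrative
    iptables -t nat -A PREROUTING -i br-lan -p udp --dport 53 -j REDIRECT --to-ports 53
    iptables -t nat -A PREROUTING -i br-lan -p tcp --dport 53 -j REDIRECT --to-ports 53
    # note: DoH on port 443 can't be caught this way, hence the blunt blocking upthread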
What services are you self hosting for local YouTube? Right now I just hand pick videos and they get lifted by my plex server, but having a nice route to my internal YouTube will be great for when my kids get to that age!
Currently I'm not. I would like to, but I'm not sure how to make it work. If I have a youtube video that I downloaded, I can make youtube.com point to my own web server, but everything after the domain needs to point to the correct things to make it play and I'm not sure how to do that (I also haven't looked).
Dumb question: lots of folks are talking about name constraints not being understood by old clients since they don’t understand that extension. But is this not exactly the point of critical designation in extensions: is the client not supposed to fail if it comes across a critical extension it doesn’t understand?
For one thing, the fact something's supposed to fail on unexpected input doesn't always mean it will fail.
For another, some implementations thought they understood name constraints, but had bugs in their implementations. For example, applying name constraints correctly to the certificate's Subject Alternate Name but not applying them to the Common Name.
As for the overall X.509 ecosystem (not limited to name constraints), the certificate validation logic of common clients accepts various subtly, but completely, invalid certificates, because CAs used to sign (or even use as root certificates) various kinds of invalid certificates. One can probably even find a certificate that should logically be trusted but isn't even a valid DER encoding of the (TBS)Certificate.
I went down this path, but installing CA certificates is a pain. There isn't just one trust store per device, there are many. Make your own CA if you want to find out how many there are...
Like others I went with just having my own domain and getting real certs for things.