The home.arpa domain has been reserved for local use by RFC 8375 since 2018, which is long enough ago that most hardware and software currently in use should be able to handle it.
RFC 8375 seems to have approved it specifically for use with the Home Networking Control Protocol (HNCP), though it also states "it is not intended that the use of 'home.arpa.' be restricted solely to networks where HNCP is deployed. Rather, 'home.arpa.' is intended to be the correct domain for uses like the one described for '.home' in [RFC7788]: local name service in residential homenets."
Anyone familiar with HNCP? Are there any concerns of conflicts if HNCP becomes "a thing"? I have to say, .home.arpa doesn't exactly roll off the tongue like .internal. Some macOS users seem to have issues with .home.arpa too: https://www.reddit.com/r/MacOS/comments/1bu62do/homearpa_is_...
It's ugly and clunky, which is why after seven years it's had very little adoption. Home users aren't network engineers so these things actually do matter even if it seems silly in a technical sense.
Too much typing, and Chromium-based browsers don't understand it yet and try to search for mything.internal instead, which is annoying - you have to type out the whole http://mything.internal.
This can be addressed by hijacking an existing TLD for private use, e.g. mything.bb :^)
eh, you can just add a search domain via DHCP or static configuration and just type out http://mything/ - no need to enter the whole domain unless you need to do SSL
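For example, with dnsmasq serving both DHCP and DNS, a minimal sketch (the domain and address are placeholders):

    # /etc/dnsmasq.conf
    # Hand the search domain to clients (DHCP options 15 and 119)
    domain=home.arpa
    dhcp-option=option:domain-search,home.arpa
    # Answer for the name locally
    address=/mything.home.arpa/192.168.1.10

Clients that honor the search domain can then resolve plain mything.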
I wrote a super basic DNS server in Go (mostly for fun and Go practice) which allows you to specify hosts and IPs in a JSON config file. This eliminates the need for editing your /etc/hosts file. If a query matches a host in the JSON config file it returns that IP, else it uses Cloudflare's public DNS resolver as a fallback. Please, go easy on my Go code :-). I am a total beginner with Go.
https://github.com/nodesocket/godns
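For anyone curious, here is a minimal sketch of the same idea (not the linked project's actual code), using the github.com/miekg/dns package; hosts.json and the Cloudflare upstream are illustrative:

    // Sketch: answer A queries from a JSON host map, else forward upstream.
    package main

    import (
        "encoding/json"
        "log"
        "net"
        "os"

        "github.com/miekg/dns"
    )

    // hosts maps fully-qualified names (note the trailing dot) to IPs,
    // e.g. {"mything.internal.": "192.168.1.10"}
    var hosts map[string]string

    func handle(w dns.ResponseWriter, req *dns.Msg) {
        q := req.Question[0]
        if ip, ok := hosts[q.Name]; ok && q.Qtype == dns.TypeA {
            resp := new(dns.Msg)
            resp.SetReply(req)
            resp.Answer = append(resp.Answer, &dns.A{
                Hdr: dns.RR_Header{Name: q.Name, Rrtype: dns.TypeA, Class: dns.ClassINET, Ttl: 60},
                A:   net.ParseIP(ip),
            })
            w.WriteMsg(resp)
            return
        }
        // Not one of ours: fall back to Cloudflare's public resolver.
        resp, err := dns.Exchange(req, "1.1.1.1:53")
        if err != nil {
            dns.HandleFailed(w, req)
            return
        }
        w.WriteMsg(resp)
    }

    func main() {
        data, err := os.ReadFile("hosts.json")
        if err != nil {
            log.Fatal(err)
        }
        if err := json.Unmarshal(data, &hosts); err != nil {
            log.Fatal(err)
        }
        dns.HandleFunc(".", handle)
        log.Fatal(dns.ListenAndServe(":53", "udp", nil))
    }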
It would be great if there was an easy way to get trusted certificates for reserved domains without rolling out a CA. There are a number of web technologies that don't work without a trusted HTTPS origin, and it's such a pain in the ass to add root CAs everywhere.
*.localhost is reserved for accessing the loopback interface. It is literally the perfect use for it. In fact on many operating systems (apparently not macOS) anything.localhost already resolves to the loopback address.
> As of March 7, 2025, the domain has not been standardized by the Internet Engineering Task Force (IETF), though an Internet-Draft describing the TLD has been submitted. [1]
[1]: https://en.wikipedia.org/wiki/.internal
Ref: https://www.icann.org/en/board-activities-and-meetings/mater...
> Resolved (2024.07.29.06), the Board reserves .INTERNAL from delegation in the DNS root zone permanently to provide for its use in private-use applications.
So you don't need self-signed certs for HTTPS locally if you want to, for example, have a backend API and a frontend SPA running at the same time talking to each other on your machine (OAuth2 authentication, for example, requires a secure context):
https://developer.mozilla.org/en-US/docs/Web/Security/Secure...
> if you want to, for example, have a backend API and a frontend SPA running at the same time talking to eachother on your machine
Won't `localhost:3000` and `localhost:3001` also both be secure contexts? Just started a random Vite project, which opens `localhost:3000`, and `window.isSecureContext` returns true.
This is used for scenarios where you don't want to hardcode port numbers, like when running multiple projects on your machine at the same time.
Usually you'd have a reverse proxy running on port 80 that forwards traffic to the appropriate service, and an entry in /etc/hosts for each domain, or a catch-all in dnsmasq.
Example: a docker compose setup using traefik as a reverse proxy can have all internal services running on the same port (e.g. 3000) but with a different domain each. The reverse proxy then forwards traffic based on the Host header. As long as the host is set up properly, you could have any number of backends and frontends started like this, via docker compose scaling, or by starting the services of another project. Ports won't conflict with each other as they're only exposed internally.
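A sketch of such a compose file (service names, images and hostnames are made up for illustration; only traefik publishes a host port):

    # docker-compose.yml
    services:
      traefik:
        image: traefik:v3
        command:
          - --providers.docker=true
          - --entrypoints.web.address=:80
        ports:
          - "80:80"
        volumes:
          - /var/run/docker.sock:/var/run/docker.sock:ro
      api:
        image: my-api            # hypothetical backend, listens on 3000
        labels:
          - traefik.http.routers.api.rule=Host(`api.mything.localhost`)
          - traefik.http.services.api.loadbalancer.server.port=3000
      web:
        image: my-frontend       # hypothetical SPA, also on 3000 internally
        labels:
          - traefik.http.routers.web.rule=Host(`mything.localhost`)
          - traefik.http.services.web.loadbalancer.server.port=3000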
Now, whether you have a use for such a setup or not is up to you.
This nginx local dev config snippet is one-and-done:
    # Proxy to a backend server based on the hostname.
    if (-d vhosts/$host) {
        proxy_pass http://unix:vhosts/$host/server.sock;
        break;
    }
Your local dev servers must listen on a unix domain socket, and you must drop a symlink to them at e.g. /var/lib/nginx/vhosts/inclouds.localhost/server.sock.
Not a single command, and you still have to add hostname resolution. But you don't have to programmatically edit config files or restart the proxy to stand up a new dev server!
Chrome and I think Firefox resolve all <name>.localhost domains to localhost by default, so you don't have to add them to the hosts file. I set up a docker proxy on port 80 that resolves all requests for <containername>.localhost to the first exposed port of that container (in order of appearance in the docker compose file) automatically, which makes everything smooth without manual steps for docker compose based setups.
It's probably both. Browsers now have built-in DoH, so they usually do their own resolving. Only if you disable "secure DNS" (or you use group policies) do you fall back to the system resolver.
If you’re interested in doing local web development with “real” domain names, valid SSL certs, etc., you may enjoy my project Localias: https://github.com/peterldowns/localias
It’s built on top of Caddy and has a nice CLI and config file format that you can commit to your team’s shared repo. It also has some nice features like making .local domain aliases available to any other device on your network, so you can more easily do mobile device testing on a real phone. It also syncs your /etc/hosts so you never need to edit it manually.
Check it out and let me know what you think! (Free, MIT-licensed, single-binary install)
Basically, it wraps up the instructions in this blogpost and makes everything easy for you and your team.
How do valid certs for localhost work? Does that require installing an unconstrained root certificate to sign the dev certs? Or is there a less risky way (name constraints?)
- If Caddy has not already generated a local root certificate:
  - Generate a local root certificate to sign TLS certificates
  - Install the local root certificate to the system's trust stores, and the Firefox certificate store if it exists and can be accessed.
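Outside of Localias, plain Caddy exposes the same machinery through its internal issuer; a minimal Caddyfile sketch, where mything.localhost and the upstream port are placeholders:

    mything.localhost {
        tls internal          # sign with Caddy's local CA instead of ACME
        reverse_proxy localhost:3000
    }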
Any subdomain of .localhost works out-of-the-box on Linux, OpenBSD and plenty of other platforms.
Of note, it doesn't work on macOS. I recall having delivered a coding assignment for a job interview long ago, and the reviewer said it didn't work for them, although the code all seemed correct to them.
It turned out on macOS, you need to explicitly add any subdomains of .localhost to /etc/hosts.
I'm still surprised by this; I always thought that localhost was a highly standard thing covered in the RFC long long ago… apparently it isn't, and macOS still doesn't handle this TLD.
It's easy to be tricked into thinking macOS supports it, because both Chrome and curl support it. However, ping does not, nor do more basic tools like Python's requests library (and I presume urllib as well).
This usually happens because you have a Linux setup that doesn't use systemd-resolved and it also doesn't have myhostname early enough in the list of name resolvers. Not sure how many Linux systems default to this, but if you want this behavior, adjust your NSS configuration, most likely.
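On glibc systems that means putting nss-myhostname, which answers for any name ending in .localhost, ahead of dns in /etc/nsswitch.conf. The exact module list varies by distro; illustratively:

    # /etc/nsswitch.conf (hosts line only; ordering varies by distro)
    hosts: files myhostname resolve [!UNAVAIL=return] dns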
    $ ping hello.localhost
    PING hello.localhost (127.0.0.1): 56 data bytes
    64 bytes from 127.0.0.1: icmp_seq=0 ttl=64 time=0.057 ms
    64 bytes from 127.0.0.1: icmp_seq=1 ttl=64 time=0.162 ms
Against much well-informed advice, I use a vanity domain for my internal network at home. Through a combination of Smallstep CA, CoreDNS, and Traefik, any services I host in my Docker Swarm cluster are automatically issued a signed SSL certificate, load-balanced, and made resolvable. Traefik also allows me to configure authentication for any services that I may not wish to expose without it.
That said, I do recommend the use of the internal. zone for any such setup, as others have commented. This article provides some good reasons why (at least for .local) you should aim to use a standards-compliant internal zone: https://community.veeam.com/blogs-and-podcasts-57/why-using-...
I added a fake .com record in my internal DNS that resolves to my development server. All development clients within that network have an mkcert-generated CA installed.
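Roughly, assuming dnsmasq on the internal DNS server and mkcert's stock CLI (all names here are placeholders):

    # internal DNS (dnsmasq syntax): point the fake name at the dev server
    #   address=/dev.example.com/192.168.1.50
    # once per development client: create and trust the local CA
    mkcert -install
    # on the dev server: mint a cert for the fake name
    mkcert dev.example.com   # writes dev.example.com.pem and dev.example.com-key.pem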
Not so different from you, but without even registering the vanity domain. Why is this such a bad idea?
For home it's not that bad, but there could be conflicts at some point. Your clients will send data to the Internet unknowingly when DNS is misconfigured.
It's better to use a domain you control.
I'm a fan of buying whatever's cheapest to renew (like .ovh, great value) and using real Let's Encrypt certs (via DNS challenge) for any subdomain/wildcard, so that any device gets the "green padlock" for a totally local service.
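As a sketch with lego, one ACME client that does DNS-01 (the domain is a placeholder, and the OVH API credentials are supplied via environment variables per lego's provider docs):

    lego --email you@example.com \
         --dns ovh \
         --domains "*.home.example.ovh" \
         run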
To be clear, I didn’t register anything. I just have a configuration that serves records for a zone like “artichoke.” on my DNS server. Internal hosts are then accessible via https://gitlab.artichoke, for example.
What's the argument against using one's own actual domain? In these modern times where every device and software wants to force HTTPS, being able to get rid of all the browser warnings is nice.
I think this is ideal. You make a great point that even if you were to use the .internal TLD that is reserved for internal use, you wouldn't be able to use Let's Encrypt to get an SSL certificate for it. Not sure if there are other SSL options for .internal. But self-signed is a PITA.
I guess the lesson is to deploy a self-signed root CA in your infra early.
OP: If you're already using Caddy, why not just use a purchased domain (you can get some for a few dollars) with a DNS-01 challenge? This way you don't need to add self-signed certificates to your trust store and browsers/devices don't complain. You'll still keep your services private to your internal network, and Caddy will automatically keep all managed certificates renewed so there's no manual intervention once everything is set up.
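For example, a Caddyfile sketch; it assumes a Caddy build that includes the caddy-dns/cloudflare plugin, and the domain and upstream are made up:

    *.home.example.com {
        tls {
            dns cloudflare {env.CF_API_TOKEN}
        }
        reverse_proxy 192.168.1.10:3000
    }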
So basically pay protection money? We have engineered such a system that the only way to use your own stuff is to pay a tax for it and rely on centralized system, even though you don't need to be public at all?
If you really want to keep things local without paying any fees, you could also use Smallstep (https://smallstep.com/) to issue certificates for your services. This way you only need to add one CA to your trust store on your devices, and the certificates still renew periodically and satisfy the requirements for TLS.
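With step-ca that looks roughly like this (the hostname is a placeholder):

    # one-time: create the CA (interactive prompts)
    step ca init
    # trust its root certificate on each device
    step certificate install "$(step path)/certs/root_ca.crt"
    # issue a certificate for an internal name
    step ca certificate "mything.internal" mything.crt mything.key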
I suggested using a domain given they already have Caddy set up and it's inexpensive to acquire a cheap domain. It's also less of a headache in my experience.
I was on a similar thought process, but this leaves you only with the option to set the A record of the public DNS entry to 127.0.0.1, if you want to use it on the go.
Though you could register a name like ch.ch and get a wildcard certificate for *.ch.ch, and insert local.ch.ch in the hosts file and use the certificate in the proxy, that would even work on the go.
It's not, just a different way of satisfying the certificate challenge. Look into a DNS-01 challenge vs a HTTP-01 challenge. Let's Encrypt has a good breakdown: https://letsencrypt.org/docs/challenge-types/.
The OpenWrt wiki on Homenet suggests the project might be dead: https://openwrt.org/docs/guide-user/network/zeroconfig/hncp_...
That's hardly the only example of annoying MONOBAR behavior.
This problem could have been avoided if we had different widgets for doing different things. Someone should have thought of that.
1. not all browsers are the same
2. there is no official standard
3. even if there was, standards are often ignored
4. what is true today can be false tomorrow
5. this is mitigation, not security
No, not here.
Is that a new thing? I heard previously that if you wanted to do DNS/domains for a local network you had to expose the list externally.