hardaker · 9 months ago
You might check out .internal instead which was recently approved [1] for local use.

[1]: https://en.wikipedia.org/wiki/.internal

GrumpyYoungMan · 9 months ago
The *.home.arpa domain in RFC 8375 has been approved for local use since 2018, which is long enough ago that most hardware and software currently in use should be able to handle it.
johnmaguire · 9 months ago
RFC 8375 seems to have approved it specifically to use in Home Networking Control Protocol, though it also states "it is not intended that the use of 'home.arpa.' be restricted solely to networks where HNCP is deployed. Rather, 'home.arpa.' is intended to be the correct domain for uses like the one described for '.home' in [RFC7788]: local name service in residential homenets."

The OpenWrt wiki on Homenet suggests the project might be dead: https://openwrt.org/docs/guide-user/network/zeroconfig/hncp_...

Anyone familiar with HNCP? Are there any concerns about conflicts if HNCP becomes "a thing"? I have to say, .home.arpa doesn't exactly roll off the tongue like .internal. Some macOS users seem to have issues with .home.arpa too: https://www.reddit.com/r/MacOS/comments/1bu62do/homearpa_is_...

Mountain_Skies · 9 months ago
It's ugly and clunky, which is why after seven years it's had very little adoption. Home users aren't network engineers so these things actually do matter even if it seems silly in a technical sense.
styfle · 9 months ago
Why use that over *.localhost, which has been available since 1999 (introduced in RFC 2606)?

alexvitkov · 9 months ago
Too much typing, and Chromium-based browsers don't understand it yet and try to search for mything.internal instead, which is annoying - you have to type out the whole http://mything.internal.

This can be addressed by hijacking an existing TLD for private use, e.g. mything.bb :^)

nsteel · 9 months ago
Isn't just typing the slash at the end enough to avoid it searching? e.g. mything/
thaumasiotes · 9 months ago
> Chromium-based browsers don't understand it yet and try to search for mything.internal instead, which is annoying

That's hardly the only example of annoying MONOBAR behavior.

This problem could have been avoided if we had different widgets for doing different things. Someone should have thought of that.

tepmoc · 9 months ago
eh, you can just add a search domain via DHCP or static configuration and type out http://mything/; no need to enter the whole domain unless you need to do SSL
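For reference, a sketch of that approach with dnsmasq as the DHCP/DNS server (the `internal` search domain and the `mything` name/address are illustrative):

```
# /etc/dnsmasq.conf (sketch)
# Push "internal" as the DHCP search domain so clients expand the
# bare name "mything" to "mything.internal":
dhcp-option=option:domain-search,internal
# Resolve .internal names locally instead of forwarding upstream:
local=/internal/
address=/mything.internal/192.168.1.10
```

With that in place, clients that honor the search domain can open http://mything/ directly.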
codetrotter · 9 months ago
In that case I would prefer naming as

  <virtual>.<physical-host>.internal
So for example

  phpbb.mtndew.internal
And I’d probably still add

  phpbb.localhost 
to /etc/hosts on that host, like OP does

nodesocket · 9 months ago
I wrote a super basic DNS server in Go (mostly for fun and Go practice) which lets you specify hosts and IPs in a JSON config file. This eliminates the need to edit your /etc/hosts file. If a query matches a host in the JSON config file it returns that IP; otherwise it falls back to Cloudflare's public DNS resolver. Please, go easy on my Go code :-). I am a total beginner with Go.

https://github.com/nodesocket/godns

eddyg · 9 months ago
.home, .corp and .mail are on ICANN's "high risk" list and will never become gTLDs, so they're also good (short) options.

Ref: https://www.icann.org/en/board-activities-and-meetings/mater...

candiddevmike · 9 months ago
It would be great if there was an easy way to get trusted certificates for reserved domains without rolling out a CA. There are a number of web technologies that don't work without a trusted HTTPS origin, and it's such a pain in the ass to add root CAs everywhere.
GoblinSlayer · 9 months ago
You can configure them to send requests through http proxy.
kevincox · 8 months ago
*.localhost is reserved for accessing the loopback interface. It is literally the perfect use for it. In fact on many operating systems (apparently not macOS) anything.localhost already resolves to the loopback address.
MaKey · 9 months ago
It seems like it has not been standardized yet:

> As of March 7, 2025, the domain has not been standardized by the Internet Engineering Task Force (IETF), though an Internet-Draft describing the TLD has been submitted.

jwilk · 9 months ago
It's been reserved by ICANN:

https://www.icann.org/en/board-activities-and-meetings/mater...

> Resolved (2024.07.29.06), the Board reserves .INTERNAL from delegation in the DNS root zone permanently to provide for its use in private-use applications.

sdwolfz · 9 months ago
Note: browsers also give you a Secure Context for .localhost domains.

https://developer.mozilla.org/en-US/docs/Web/Security/Secure...

So you don't need self-signed certs for HTTPS locally if you want to, for example, have a backend API and a frontend SPA running at the same time talking to each other on your machine (authentication via OAuth2, for example, requires a secure context).

c-hendricks · 8 months ago
> if you want to, for example, have a backend API and a frontend SPA running at the same time talking to eachother on your machine

Won't `localhost:3000` and `localhost:3001` also both be secure contexts? Starting a random Vite project, which opens `localhost:3000`, `window.isSecureContext` returns true.

sdwolfz · 8 months ago
This is used for scenarios where you don't want to hardcode port numbers, like when running multiple projects on your machine at the same time.

Usually you'd have a reverse proxy running on port 80 that forwards traffic to the appropriate service, plus an entry in /etc/hosts for each domain, or a catch-all in dnsmasq.

Example: a docker compose setup using Traefik as a reverse proxy can have all internal services listening on the same port (e.g. 3000) but with different domains. The reverse proxy then forwards traffic based on the Host header. As long as the hosts are set up properly, you can have any number of backends and frontends started like this, via docker compose scaling, or by starting the services of another project. Ports won't conflict with each other as they're only exposed internally.

Now, whether you have a use for such a setup or not is up to you.
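A compose sketch of that Traefik pattern (the images and the `api.localhost`/`web.localhost` names are made up for illustration):

```yaml
# docker-compose.yml (sketch): two services behind Traefik, both
# listening on port 3000 internally, routed by Host header only.
services:
  traefik:
    image: traefik:v3
    command:
      - --providers.docker=true
      - --entrypoints.web.address=:80
    ports:
      - "80:80"
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock:ro
  api:
    image: my-api            # hypothetical backend image
    labels:
      - traefik.http.routers.api.rule=Host(`api.localhost`)
      - traefik.http.services.api.loadbalancer.server.port=3000
  web:
    image: my-web            # hypothetical frontend image
    labels:
      - traefik.http.routers.web.rule=Host(`web.localhost`)
      - traefik.http.services.web.loadbalancer.server.port=3000
```

Only Traefik publishes a host port; the services are reachable solely through it, so their internal ports never collide.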

bolognafairy · 9 months ago
Well shit. TIL. Time to go reduce the complexity of our dev environment.
jrvieira · 9 months ago
you should never trust browsers' default behavior

1. not all browsers are the same

2. there is no official standard

3. even if there was, standards are often ignored

4. what is true today can be false tomorrow

5. this is mitigation, not security

sigil · 9 months ago
This nginx local dev config snippet is one-and-done:

  # Proxy to a backend server based on the hostname.
  if (-d vhosts/$host) {
    proxy_pass http://unix:vhosts/$host/server.sock;
    break;
  }
Your local dev servers must listen on a unix domain socket, and you must drop a symlink to them at e.g. /var/lib/nginx/vhosts/inclouds.localhost/server.sock.

It's not a single command, and you still have to add hostname resolution. But you don't have to programmatically edit config files or restart the proxy to stand up a new dev server!
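Standing up a new dev server is then just a directory and a symlink. A sketch, with illustrative paths (in practice the vhosts root lives under the nginx prefix, e.g. /var/lib/nginx/vhosts):

```shell
#!/bin/sh
# Register a dev server with the vhost snippet above (sketch).
VHOSTS=./vhosts                 # stand-in for /var/lib/nginx/vhosts
APP=inclouds.localhost          # hostname the dev server answers to
mkdir -p "$VHOSTS/$APP"
# Point server.sock at wherever the dev server actually listens:
ln -sf /tmp/inclouds-dev.sock "$VHOSTS/$APP/server.sock"
```

The next request for that hostname hits the `-d vhosts/$host` check and gets proxied, with no nginx reload.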

hn92726819 · 9 months ago
I'm not that familiar with nginx config. Does this protect against path traversal? Ex: host=../../../docker.sock
sigil · 9 months ago
nginx validates hostnames per the spec, and to your question specifically it rejects requests that would put a slash in $host: https://github.com/nginx/nginx/blob/b6e7eb0f5792d7a52d2675ee...
ku1ik · 9 months ago
This is neat!
jFriedensreich · 9 months ago
Chrome and, I think, Firefox resolve all <name>.localhost domains to localhost by default, so you don't have to add them to the hosts file. I set up a docker proxy on port 80 that automatically resolves requests for <containername>.localhost to the first exposed port of that container (in order of appearance in the docker compose file), which makes everything smooth, with no manual steps, for docker compose based setups.
globular-toast · 9 months ago
Source for this? Are you sure it's not your system resolver doing it?
TingPing · 8 months ago
There is a draft spec for it (I'll find it later), but they do hardcode it now and never touch DNS.
kbolino · 9 months ago
It's probably both. Browsers now have built-in DoH, so they usually do their own resolving. Only if you disable "secure DNS" (or you use group policies) do you fall back to the system resolver.
jFriedensreich · 8 months ago
Pretty sure it's hard-coded in the browser and never touches any resolvers. It does not work the same in Safari, for example.

peterldowns · 9 months ago
If you’re interested in doing local web development with “real” domain names, valid ssl certs, etc, you may enjoy my project Localias. It’s built on top of Caddy and has a nice CLI and config file format that you can commit to your team’s shared repo. It also has some nice features like making .local domain aliases available to any other device on your network, so you can more easily do mobile device testing on a real phone. It also syncs your /etc/hosts so you never need to edit it manually.

Check it out and let me know what you think! (Free, MIT-licensed, single-binary install)

Basically, it wraps up the instructions in this blogpost and makes everything easy for you and your team.

https://github.com/peterldowns/localias

bestham · 9 months ago
There is also mkcert by Filippo Valsorda (no relation to mkcert.org) at https://github.com/FiloSottile/mkcert
peterldowns · 8 months ago
Yup, mkcert is used by caddy which is used by localias :)
CodesInChaos · 9 months ago
How do valid certs for localhost work? Does that require installing an unconstrained root certificate to sign the dev certs? Or is there a less risky way (name constraints?)
sangeeth96 · 9 months ago
It's mentioned in the README:

  - If Caddy has not already generated a local root certificate:
     - Generate a local root certificate to sign TLS certificates
     - Install the local root certificate to the system's trust stores, and the Firefox certificate store if it exists and can be accessed.
So yes. I had written about how I do this directly with Caddy over here: https://automagic.blog/posts/custom-domains-with-https-for-y...
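For a single local site, the Caddyfile equivalent is short. A sketch (hostname and upstream port are illustrative); `tls internal` tells Caddy to issue the cert from its local CA rather than a public one:

```
myapp.localhost {
    tls internal
    reverse_proxy localhost:3000
}
```

On first run Caddy generates the local root and attempts to install it into the system trust store, which is the step that makes the browser warnings go away.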

worewood · 9 months ago
I think an alternative to local root certs would be to use a public cert + dnsmasq on your LAN to resolve the requests to a local address.
novoreorx · 8 months ago
After reading this blog, I immediately thought of Localias. I use it frequently, preferring the .test domain.
WhyNotHugo · 9 months ago
Any subdomain of .localhost works out-of-the-box on Linux, OpenBSD and plenty of other platforms.

Of note, it doesn't work on macOS. I recall having delivered a coding assignment for a job interview long ago, and the reviewer said it didn't work for them, although the code all seemed correct to them.

It turned out on macOS, you need to explicitly add any subdomains of .localhost to /etc/hosts.

I'm still surprised by this; I always thought localhost was a highly standard thing, covered by an RFC long, long ago… apparently it isn't, and macOS still doesn't handle this TLD.
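On macOS that means listing each subdomain explicitly in hosts entries like these (names illustrative):

```
# /etc/hosts on macOS: *.localhost is not resolved automatically,
# so every subdomain you use must appear here.
127.0.0.1   myapp.localhost api.localhost
::1         myapp.localhost api.localhost
```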

telotortium · 9 months ago
It's easy to be tricked into thinking macOS supports it, because both Chrome and curl support it. However, ping does not, nor do more basic tools like Python's requests library (and I presume urllib as well).
jwilk · 9 months ago
> Any subdomain of .localhost works out-of-the-box on Linux

No, not here.

jchw · 9 months ago
This usually happens because you have a Linux setup that doesn't use systemd-resolved and it also doesn't have myhostname early enough in the list of name resolvers. Not sure how many Linux systems default to this, but if you want this behavior, adjust your NSS configuration, most likely.
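Concretely, that means checking the `hosts:` line in /etc/nsswitch.conf. A sketch of an ordering that makes `*.localhost` resolve locally (the exact line varies by distro):

```
# /etc/nsswitch.conf (sketch): nss-myhostname before "dns", so any
# name ending in .localhost is answered with the loopback address
# without ever reaching an external resolver.
hosts: files myhostname resolve [!UNAVAIL=return] dns
```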
oulipo · 9 months ago
Just did that on my mac and it seems to work?

    $ ping hello.localhost
    PING hello.localhost (127.0.0.1): 56 data bytes
    64 bytes from 127.0.0.1: icmp_seq=0 ttl=64 time=0.057 ms
    64 bytes from 127.0.0.1: icmp_seq=1 ttl=64 time=0.162 ms

tedunangst · 9 months ago
That's because your DNS server sends back 127.0.0.1. The query isn't resolved locally.
Shywim · 9 months ago
Not for me, MacOS 15.4:

    $ ping hello.localhost
    ping: cannot resolve hello.localhost: Unknown host

parasti · 9 months ago
I am doing this on macOS with no problem.
octagons · 9 months ago
Against much well-informed advice, I use a vanity domain for my internal network at home. Through a combination of Smallstep CA, CoreDNS, and Traefik, any service I host in my Docker Swarm cluster is automatically issued a signed SSL certificate, load-balanced, and made resolvable. Traefik also lets me configure authentication for any services that I may not wish to expose without it.

That said, I do recommend the use of the .internal zone for any such setup, as others have commented. This article provides some good reasons why (at least for .local) you should aim to use a standards-compliant internal zone: https://community.veeam.com/blogs-and-podcasts-57/why-using-...

hobo_mark · 9 months ago
I added a fake .com record in my internal DNS that resolves to my development server. All development clients within that network have an mkcert-generated CA installed.

Not so different from you, but without even registering the vanity domain. Why is this such a bad idea?

szszrk · 9 months ago
For home use it's not that bad, but there could be conflicts at some point. Your clients will unknowingly send data to the Internet when DNS is misconfigured.

It's better to use domain you control.

I'm a fan of buying the cheapest domain available (like .ovh, great value) and using real Let's Encrypt certificates (via the DNS challenge) for any subdomain/wildcard. That way any device shows a "green padlock" for a totally local service.

octagons · 9 months ago
To be clear, I didn’t register anything. I just have a configuration that serves records for a zone like “artichoke.” on my DNS server. Internal hosts are then accessible via https://gitlab.artichoke, for example.
thot_experiment · 9 months ago
I alias home.com to my local house stuff. I don't really understand why anyone thinks it's a bad idea either.
kreetx · 9 months ago
I run a custom (unused) tld with mkcert the same way, with nginx virtual hosts set up for each app.
tbyehl · 9 months ago
What's the argument against using one's own actual domain? In these modern times where every device and software wants to force HTTPS, being able to get rid of all the browser warnings is nice.
waynesonfire · 9 months ago
I think this is ideal. You make a great point that even if you were to use the .internal TLD that is reserved for internal use, you wouldn't be able to use Let's Encrypt to get an SSL certificate for it. Not sure if there are other SSL options for .internal. But self-signed is a PITA.

I guess the lesson is to deploy a self-signed root ca in your infra early.

smjburton · 9 months ago
OP: If you're already using Caddy, why not just use a purchased domain (you can get some for a few dollars) with a DNS-01 challenge? This way you don't need to add self-signed certificates to your trust store and browsers/devices don't complain. You'll still keep your services private to your internal network, and Caddy will automatically keep all managed certificates renewed so there's no manual intervention once everything is set up.
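A Caddyfile sketch of that setup, assuming a Caddy build with the Cloudflare DNS plugin (the domain, token variable, and upstream address are placeholders):

```
# Cert is obtained via DNS-01, so the site never has to be reachable
# from the Internet; only the DNS record has to be controllable.
git.example.com {
    tls {
        dns cloudflare {env.CF_API_TOKEN}
    }
    reverse_proxy 192.168.1.10:3000
}
```

Point the domain's A record at the internal address (or override it in local DNS) and clients get publicly trusted certs for a service that's only reachable on the LAN.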
whatevaa · 9 months ago
So basically pay protection money? We have engineered such a system that the only way to use your own stuff is to pay a tax for it and rely on centralized system, even though you don't need to be public at all?
smjburton · 9 months ago
If you really want to keep things local without paying any fees, you could also use Smallstep (https://smallstep.com/) to issue certificates for your services. This way you only need to add one CA to your trust store on your devices, and the certificates still renew periodically and satisfy the requirements for TLS.

I suggested using a domain given they already have Caddy set up and it's inexpensive to acquire a cheap domain. It's also less of a headache in my experience.

qwertox · 9 months ago
I went down a similar thought process, but this leaves you with only the option of setting the A record of the public DNS entry to 127.0.0.1, if you want to use it on the go.

Though you could register a name like ch.ch and get a wildcard certificate for *.ch.ch, and insert local.ch.ch in the hosts file and use the certificate in the proxy, that would even work on the go.

shadowpho · 9 months ago
> You'll still keep your services private to your internal network,

Is that a new thing? I heard previously that if you wanted to do DNS/domains for your local network you had to expose the host list externally.

smjburton · 9 months ago
It's not, just a different way of satisfying the certificate challenge. Look into a DNS-01 challenge vs a HTTP-01 challenge. Let's Encrypt has a good breakdown: https://letsencrypt.org/docs/challenge-types/.