I think certificates for IP addresses can be useful.
However, if Let's Encrypt were to support S/MIME certificates, it would have a far greater impact. For a few years now, we have had an almost comical situation with email encryption: most major mail user agents (aka mail clients) finally support S/MIME encryption out of the box. But you need a certificate from a CA to have a smooth user experience, just like on the web. However, all CAs that offered free, trustworthy¹ S/MIME certificates with a validity of a year or more² have disappeared. The result: hardly any private individuals use email encryption.
(PGP remains unused outside of geek circles because it is too awkward to use.)
Let's encrypt our emails!
¹ A certificate isn't trustworthy if the CA generates the secret key for you.
² With S/MIME you need to keep your old certificates around to decrypt old mails, so having a new one frequently is not practical.
> ² With S/MIME you need to keep your old certificates around to decrypt old mails, so having a new one frequently is not practical
You don't need to change your decryption key: the new certificate can use the same key pair as the old one (certbot even has a flag for this: --reuse-key). Whether that is a good idea is a separate question.
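As a concrete sketch of key reuse across renewals, here is a minimal example with the pyca/cryptography package (an assumption of this sketch; the name, validity periods, and self-signed issuance are placeholders for a real CA issuance): two successive certificates can wrap the same key pair, so anything encrypted to the first stays decryptable after "renewal".

```python
import datetime

from cryptography import x509
from cryptography.x509.oid import NameOID
from cryptography.hazmat.primitives import hashes, serialization
from cryptography.hazmat.primitives.asymmetric import rsa

# One long-lived key pair, reused across certificate renewals.
key = rsa.generate_private_key(public_exponent=65537, key_size=2048)

def self_signed(key, days):
    """Issue a short self-signed cert around the given key (stand-in for a CA issuance)."""
    name = x509.Name([x509.NameAttribute(NameOID.COMMON_NAME, "alice@example.org")])
    now = datetime.datetime.now(datetime.timezone.utc)
    return (
        x509.CertificateBuilder()
        .subject_name(name)
        .issuer_name(name)
        .public_key(key.public_key())
        .serial_number(x509.random_serial_number())
        .not_valid_before(now)
        .not_valid_after(now + datetime.timedelta(days=days))
        .sign(key, hashes.SHA256())
    )

def spki(cert):
    """DER-encoded public key, for comparing which key pair a cert wraps."""
    return cert.public_key().public_bytes(
        serialization.Encoding.DER, serialization.PublicFormat.SubjectPublicKeyInfo)

old_cert = self_signed(key, days=90)
new_cert = self_signed(key, days=90)   # "renewal" with --reuse-key semantics

print(spki(old_cert) == spki(new_cert))  # same key pair, distinct certificates
```

The two certificates differ (serial numbers, validity windows) while sharing one decryption key, which is exactly the property the parent comment relies on.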
I think the biggest benefit would be ACME-like automatic certificate issuance. Currently getting a new certificate is just too much friction.
The other thing I would hope for is wildcard certificates. I stopped using S/MIME because I usually create a new email address (on the same domain) for each vendor I deal with. I would find it useful to be able to get a single certificate covering all email on that domain. Obviously that does mean that anyone else using an address on that domain would have to share the certificate, but for private use that can be acceptable - I don't worry about my wife reading my currently unencrypted email!
PGP has WKD[1] (Web Key Directory) now if you want to use the TLS web of trust for email. TLS certificates are much easier to get than S/MIME certificates. Having a third party do the identity management can be good, but many people do not work for a company where that makes sense. ... and if you do work for such a company, it is better to do the identity management within the company.
I am currently appalled at how little of a wake-up call Signalgate 1.0 was and is[2]. Signalgate was, yet again, a failure of identity management in end-to-end encryption. You know, the exact thing that S/MIME certificates (or WKD) could help solve in a government environment, if the resulting system were actually usable.
[1] https://datatracker.ietf.org/doc/draft-koch-openpgp-webkey-s...
[2] https://articles.59.ca/doku.php?id=em:sg (my article)
A beautiful vision, but not practically viable. The average user isn't ready to handle private keys -- many can barely be trusted with their email passwords.
This means you either need centrally issued certificates for each domain, or face situations where legitimate users fail to obtain certificates, while cyber criminals send S/MIME-signed emails on the users' behalf.
Once a few generations of users have been trained to use passkeys then we can consider letting users handle key pairs.
If Let's Encrypt sets up an automated system, then MUAs could automatically request certificates for you, so the user wouldn't have to deal with the issue.
There may be a need to distribute your certificate (and its private key) if you read and write mail on multiple devices.
I know little about S/MIME encryption. But why do we need to decrypt old emails with the same protocol? In my head, I imagine certs would be for transport, and your server or host should handle encryption at rest, no? So short-lived transport certs, and whatever storage encryption you want. What am I missing here?
S/MIME is about the mail (content) itself, not the transport. For the transport part there are things like (START)TLS and MTA-STS. With S/MIME you include your certificate in the mail, and you can either sign the mail (with your private key; others verify the signature using your public key from the certificate) or encrypt the mail (with the receiver's public key, so only the receiver can decrypt it using their private key). Certificate trust is normally determined via the CA chain and trusted CAs.
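The signing half of that can be sketched with the pyca/cryptography package (an assumption of this sketch; a self-signed certificate stands in for a CA-issued one, and the name is a placeholder):

```python
import datetime

from cryptography import x509
from cryptography.x509.oid import NameOID
from cryptography.hazmat.primitives import hashes, serialization
from cryptography.hazmat.primitives.asymmetric import rsa
from cryptography.hazmat.primitives.serialization import pkcs7

# Sender's key pair plus a self-signed certificate standing in for a CA-issued one.
key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
name = x509.Name([x509.NameAttribute(NameOID.COMMON_NAME, "alice@example.org")])
now = datetime.datetime.now(datetime.timezone.utc)
cert = (
    x509.CertificateBuilder()
    .subject_name(name)
    .issuer_name(name)
    .public_key(key.public_key())
    .serial_number(x509.random_serial_number())
    .not_valid_before(now)
    .not_valid_after(now + datetime.timedelta(days=365))
    .sign(key, hashes.SHA256())
)

body = b"Content-Type: text/plain\r\n\r\nHello, signed world.\r\n"

# Detached S/MIME signature: the mail body stays readable as-is, and the
# recipient verifies it against Alice's public key from the attached certificate.
signed = (
    pkcs7.PKCS7SignatureBuilder()
    .set_data(body)
    .add_signer(cert, key, hashes.SHA256())
    .sign(serialization.Encoding.SMIME, [pkcs7.PKCS7Options.DetachedSignature])
)
```

Encrypting instead of signing works the other way around, as the parent comment says: the builder would take the receiver's certificate, so only the receiver's private key can open the result.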
https://www.latacora.com/blog/2020/02/19/stop-using-encrypte...
The first two major points they pose against email can be summed up as 'people don't use security unless it is by default, and because it wasn't built into email we shouldn't try.' To which I respond: perfect is the enemy of progress. Clearly, email is sticky (many other things have tried to replace it), and it has grown to do more than just send plaintext messages. People use it for document transfer, agreements, as a way to send commands over the internet, etc. Email encryption and authentication is simply an attempt to add some cryptographic tooling to a tool we already use for so many things. Thus, these points feel vacuous to me.
The last two points have less to do with email and more to do with encryption in general, and they are the most defeatist: the observation that there is no 'permanent encryption' is really an argument against encryption as a whole. It paints the picture for me that the author would find other reasons to dislike email encryption because they already dislike encryption. These last two points amount to wanting an ideal solution and refusing to settle for anything less.
This is totally off-topic. With all respect, if you want to discuss email encryption I would suggest that you write a blog post or start a separate thread.
So all the addressing bodies (e.g., ISPs and cloud providers) are on board then, right? They sometimes cycle through IPs with great velocity. Faster than 6 days, at least.
Lots of sport here, unless perhaps they cool off IPs before reallocating, or perhaps query and revoke any certs before reusing the IP?
If the addressing bodies are not on board, then it's the user's responsibility to validate the Host header and reject unwanted IP-address-based connections until any legacy certs are gone, or to revoke any legacy certs. Or just wait to use your shiny new IP?
I wonder how many IP certs you could get for how much money with the different cloud providers.
>So all the addressing bodies (e.g., ISPs and cloud providers) are on board then right? They sometimes cycle through IP's with great velocity. Faster than 6 days at least.
>Lots of sport here, unless perhaps they cool off IPs before reallocating, or perhaps query and revoke any certs before reusing the IP?
I don't see how this is any different from the custom/vanity domains you can get from cloud providers. For instance, on Azure you can assign a DNS name to your VMs in the form myapp.westus.cloudapp.azure.com, and CAs will happily issue certificates for it[1]. There's no cool-off for those domains either, so theoretically someone else can snatch the domain from you if your VM gets decommissioned.
[1] https://crt.sh/?identity=westus.cloudapp.azure.com&exclude=e...
There are in fact weird cool-off times for these cloud resources. I'm less familiar with AWS, but I know that in Azure, once you delete/release one of these subdomains, it remains tied to your organization/tenant for 60 or 90 days.
You can reclaim it during that time, but any other tenant/organization will get an error that the name is in use. You can ping support to help you there if you show them you own both organizations. I was doing a migration of some resources across organizations and ran into that issue.
The attack here isn't "snatch a domain from somebody who previously got a cert for it", it's "get an IP, get a cert issued, then release it and let somebody interesting pick it up".
In practice, this seems relatively low risk. You'd need to get a certificate for the IP, release it, have somebody else pick it up, have that person actually be doing something that is public-facing and also worth MITMing, then hijack either BGP or DNS to MITM traffic towards that IP. And you have ~6 days to do it. Plus if you can hijack your target's DNS or IPs... you can just skip the shenanigans and get a valid fresh cert for that target.
This is why Azure now appends a unique hash to the hostname by default (this can be changed if desired). You can't attack a dangling CNAME subdomain record if it points to a hash-appended hostname, and Azure lets you control the uniqueness scope (global/tenant/region), so you can have common names within a tenant or region if you wish.
You can already renew a 90-day HTTPS certificate the day before your domain registration expires. Your CA can't see whether the credit card attached to your auto-renewal has hit its limit or not.
I don't think the people using IP certificates will be the same people that abandon their IP address after a week. The most useful thing I can think of is either some very weird legacy software, or Encrypted Client Hello/Encrypted SNI support without needing a shared IP like with Cloudflare. The former won't drop IPs on a whim, the latter wouldn't succeed in setting up a connection to the real domain.
> So all the addressing bodies (e.g., ISPs and cloud providers) are on board then right?
My guess is that it's going to be approached the other way around. After all, it's not the ISPs' job to issue IP addresses in conformance with TLS; it's a TLS provider's job to "validate identity" — i.e. to issue TLS certificates in conformance with how the rest of the ecosystem attaches identity to varyingly-ephemeral resources (IPs, FQDNs, etc.)
The post doesn't say how they're going to approach this one way or the other, but my intuition is that Let's Encrypt is going to maintain some gradually growing whitelist of long-lived IP-issuer ASNs, will then only issue certs for IPs that exist under those ASNs, and will invalidate IP certs if their IP is ever sold to an ASN not on the list. This list would likely be a public database akin to Mozilla's Public Suffix List, which Let's Encrypt would expect to share (and possibly co-maintain) with other ACME issuers that want to do IP issuance.
IIRC, AWS Application Load Balancers (HTTP/L7) will cycle through IPs as fast as every 30 minutes (based on me tracking them ~5 years ago). I think they set a 10-minute TTL on their DNS records.
> So all the addressing bodies (e.g., ISPs and cloud providers) are on board then right? They sometimes cycle through IP's with great velocity. Faster than 6 days at least.
... or put multiple customers on the same IP address at the same time. But presumably they wouldn't be willing to complete the dance necessary to actually get certs for addresses they were using that way.
Just in case, though, it's probably a good idea to audit basically all software everywhere to make sure that it will not use IP-based SANs to validate connections, even if somebody does, say, embed a raw IP address in a URL.
There was a prior concern in the history of Let's Encrypt about hosting providers that have multiple customers on the same server. In fact, phenomena related to that led to the deprecation of one challenge method and the modification of another one, because it's important that one customer not be able to pass CA challenges on behalf of another customer just because the two are hosted on the same server.
But there was no conclusion that customers on the same server can't get certificates just because they're on the same server, or that whoever legitimately controls the default server for an IP address can't get them.
This would be a problem if clients would somehow treat https://example.com/ and https://96.7.128.175/ as the same identifier or security context just because example.com resolves to 96.7.128.175, but I'm not aware of clients that ever do so. Are you?
If clients don't confuse these things in some automated security sense, I don't see how IP address certs are worse than (or different from) certs for different customers who are hosted on the same IP address.
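To make the distinction concrete: under the usual matching rules (RFC 2818/RFC 6125), an IP literal in a URL is checked against iPAddress SANs and a hostname against dNSName SANs; the two are never cross-matched. A small stdlib sketch of that dispatch:

```python
import ipaddress
from urllib.parse import urlsplit

def required_san_type(url: str) -> str:
    """Which SAN type a TLS client should match the server certificate against."""
    host = urlsplit(url).hostname
    try:
        ipaddress.ip_address(host)   # raises ValueError for hostnames
        return "iPAddress"
    except ValueError:
        return "dNSName"

# The two URLs name different identities, even if example.com resolves to this IP.
print(required_san_type("https://example.com/"))    # dNSName
print(required_san_type("https://96.7.128.175/"))   # iPAddress
```

So a cert for 96.7.128.175 never vouches for example.com or vice versa, which is the parent comment's point.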
I recall that one of the main arguments against encrypted Server Name Indication (ESNI) was that it would only be effective for giant HTTPS proxy services like Cloudflare, and that the idea of IP certs was floated as a solution but dismissed as a pipe dream. With IP address certificates, every server can now participate in ESNI, not just the giants. If it becomes common enough for clients to assume that all web servers have an IP cert and to attempt ESNI on every connection, it could be a boon for privacy across the internet.
1. Want to connect to https://www.secret.com.
2. Resolve using DNS, get 1.2.3.4
3. Connect to 1.2.3.4, validate cert
4. Send ESNI, get separate cert for www.secret.com, validate that
... and the threat you're mitigating is presumably that you don't want to disclose the name "www.secret.com" unless you're convinced you're talking to the legitimate 1.2.3.4, so that some adversary can't spoof the IP traffic to and from 1.2.3.4, and thereby learn who's making connections to www.secret.com. Is that correct?
But the DNS resolution step is still unprotected. So, two broad cases:
1. Your adversary can subvert DNS. In this case IP certificates add no value, because they can just point you to 5.6.7.8, and you'll happily disclose "www.secret.com" to them. And you have no other path to communicate any information about what keys to trust.
2. Your adversary cannot subvert DNS. But if you can rely on DNS, then you can use it as a channel for key information; you include a key to encrypt the ESNI for "www.secret.com" in a DNS record. Even if the adversary can spoof the actual IP traffic to and from 1.2.3.4, it won't do any good because it won't have the private key corresponding to that ESNI key in the DNS. And those keys are already standardized.
So what adversary is stopped by IP certificates who isn't already stopped by the ESNI key records in the DNS?
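For reference, the standardized channel in case 2 is the HTTPS/SVCB DNS record, where the ECH configuration rides along as a SvcParam next to the address hints. A hypothetical zone entry might look like:

```
; Hypothetical zone data: the ECH key for www.secret.com is published in an
; HTTPS record, alongside address hints. <base64-ECHConfigList> is a
; placeholder for the actual encoded key material.
www.secret.com. 300 IN HTTPS 1 . ech=<base64-ECHConfigList> ipv4hint=1.2.3.4
```

A client that trusts this record can encrypt the inner SNI to that key before any TLS handshake happens, which is what makes the IP certificate redundant in this scenario.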
Plenty of other responses with good use cases, but I didn’t see NTS mentioned.
If you want to use NTS but can't get an IP cert, then you are left requiring DNS before you can get a trusted time. If DNS is down, then you can't get the time. A common issue with DNSSEC is having the wrong time, causing validation failures. If you have DNSSEC enforced and have the wrong time, but NTS depends on DNS, then you are out of luck with no way to recover. Having an IP in your cert allows trusted time without the DNS requirement, which can then fix your broken DNSSEC enforcement.
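For illustration, chrony (one common NTS client) enables NTS per time source; with an IP certificate, the source could be pinned by address so that no DNS lookup is needed at all. A sketch, using a documentation IP:

```
# chrony.conf sketch (hypothetical server address): an NTS-secured time
# source pinned by IP. This only works if the server's certificate
# contains a matching iPAddress SAN.
server 192.0.2.10 iburst nts
```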
This seems avoidable without IP certs: have the configuration supply both an IP and a hostname, with the hostname used for the TLS validation.
Sometimes you want to have valid certs while your DNS is undergoing a major redesign, for instance to keep your dashboards available, or to be triple sure no old automation will fail due to DNS issues.
In other cases DNS is just not needed at all. You might prefer simplicity and independence from DNS propagation, so you can have your, say, Cockpit exposed instantly on a test env.
Only our imagination limits us here.
It might be interesting for "opportunistic" DoTLS towards authdns servers, which might listen on the DoTLS port with a cert containing a SAN that matches the public IP of the authdns server. (You can do this now with authdns server hostnames, but there could be many varied names for one public authdns IP, and this kinda ties things together more-clearly and directly).
It might also be useful to hide the SNI in HTTPS requests. With the current status of ESNI/ECH you need some kind of proxy domain, but for small servers that only host a few sites, every domain may be identifiable (as opposed to, say, a generic Cloudflare certificate or a generic Azure certificate).
One use-case is connecting to a DoT (DNS-over-TLS) server directly rather than using a hostname. If you make a TLS connection to an IP address via OpenSSL, it will verify the IP SAN and fail if it's not there.
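Python's stdlib shows the same behavior: when server_hostname is an IP literal, certificate matching is done against iPAddress SANs. A sketch (the connection itself is not made here, and any resolver IP passed in would be a placeholder):

```python
import socket
import ssl

# Default context: verification on, hostname/IP checking on.
ctx = ssl.create_default_context()

def dot_connect(ip: str, port: int = 853, timeout: float = 5.0) -> ssl.SSLSocket:
    """Open a DNS-over-TLS connection to a resolver addressed by raw IP.

    Passing an IP literal as server_hostname makes the handshake require a
    matching iPAddress SAN in the server's certificate (supported since
    Python 3.7).
    """
    raw = socket.create_connection((ip, port), timeout=timeout)
    return ctx.wrap_socket(raw, server_hostname=ip)

# dot_connect("203.0.113.53") would succeed only against a certificate
# carrying that exact iPAddress SAN.
```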
What about internal IPv4 addresses? Can we have browsers ignore 192.168.x.x, 172.16.x.x and 10.x.x.x if we can't get certs for those or can we get a public wildcard for internal networks?
I don’t believe that makes sense in the context of certificates generated by a public CA. Unlike domain names, there’s not one owner of 10.10.10.10, there are millions of “owners”…
But what problem is it that you want to solve?
For local development, one can use a tool such as mkcert. For shared internal resources (e.g. within a company), it’s probably easier to use a TLS cert tied to a domain instead of using naked IP addresses.
Every time I open a browser, I need to click two buttons to get past the certificate error. Sure, I could configure a real domain, do split DNS, and get a certificate, but these cameras require manually uploading a certificate. I would need to do this every three months for every camera, and eventually even more frequently.
If you can somehow prove that your device can manage to have a private key that will never be extractable then you should already be able to do that with any regular CA.
The problem with certificates for internal addresses is that every single time someone tries to pull it off, it doesn't take long for someone to buy one of those devices, extract the private key, and then post about it online, requiring the key to be revoked immediately.
There is a solution to that, of course. If you trust your device, import its certificate manually so you can visit the page without errors, or if you have a lot of devices, set up a certificate authority to distribute these certificates. There are open source ACME servers that'll let you publish certificates the exact same way you'd do with Let's Encrypt, except now you can keep everything local.
Point your DNS at an existing public server, get a cert, copy it to the internal server, point your DNS at the 192.168... address, and copy the cert and key over.
Only problem is that some routers blackhole DNS responses pointing to local addresses, so you need to test it.
With automated certs having shorter and shorter expirations, this becomes a tedious waste of time just so one can access one's cameras without having to click past the browser warnings.
No but maybe yes:
It would be impossible, and undesirable, to issue certificates for local addresses. There's no way to verify local addresses because, inherently, they're local and not globally routable.
However, if a router manufacturer was so inclined, they _could_ have the device request a certificate for their public IPv4 address, given that it's not behind CG-NAT. v6 should be relatively easy since (unless you're at a cursed ISP) all v6 is generally globally routable.
Even behind CGNAT, you could probably get away with DNS here. If you provide your customers with customeraccount.manufacturerrouters.com, you can then use DNS validation to get a valid certificate for *.customeraccount.manufacturerrouters.com. Put a record in there that points to the local router IP (e.g., settings.customeraccount.manufacturerrouters.com) and you can get HTTPS logins on your local network, even with local IP addresses, if the CA/Browser Forum still allows that.
It's not exactly user friendly, but it'll work.
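The reason a single wildcard suffices per customer is that browser wildcard matching only covers one label. A simplified sketch of that rule (it deliberately ignores the public-suffix restrictions real clients also apply):

```python
def wildcard_matches(pattern: str, hostname: str) -> bool:
    """Single-label wildcard matching roughly as browsers apply it.

    The wildcard may only appear as the entire leftmost label and never
    spans multiple labels, so *.customer.example.com covers
    settings.customer.example.com but not a.b.customer.example.com.
    """
    p, h = pattern.lower().split("."), hostname.lower().split(".")
    if len(p) != len(h):
        return False
    return (p[0] == "*" and p[1:] == h[1:]) or p == h

print(wildcard_matches("*.customeraccount.manufacturerrouters.com",
                       "settings.customeraccount.manufacturerrouters.com"))  # True
```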
Personally, I have a private CA that I use. My home router has a domain name pointing towards it and has been loaded up with my private certificate. I get the certificate error once a year when the thing expires but in the mean time I can access my router securely.
No, and it shouldn't. You can just run a proxy with a real domain and a real cert, and then use DNS rewrites to point that domain at a local host.
For example, you can use nginx manager if you want a UI, and AdGuard for DNS. Set your router to use AdGuard as the exclusive DNS. Add a rewrite rule for your domain to point to the proxy. Register the domain and get a real cert. Problem solved.
No, they won't issue a certificate for a private IP address because you don't have exclusive control over it (i.e., the same IP address would point to a different machine on someone else's network).
Then I realised that when my internet was down, 192-168-1-1.foo.com wouldn't resolve. And when my internet is down is exactly when I want to access my router's admin page.
I decided simply using unencrypted HTTP is a much better choice.
Cloudflare DNS (probably others as well) allows you to enter private IPs for subdomains, so you don't have to run your own DNS. There's no AXFR enabled, so no issues with privacy unless you have someone really determined to dictionary-attack your subdomains.
No, on the contrary. You can't get a valid certificate for non-global IP, but you can already get a certificate for a domain name and point it to 192.168.0.1.
Nice, another exploit for TLS. The previous ones were all about generating valid certs for a domain you don't own. This one will be for generating a cert for an IP you don't own. The blackhats will be hooting and hollering on telegram :)
I guess a bunch of "roll your own X.509 validation" logic will have that bug, but to exploit it you need a misbehaving CA to issue you such a cert (i.e., low likelihood).
I wonder if they'll offer wildcard certs at some point.
> Just in case, though, it's probably a good idea to audit basically all software everywhere to make sure that it will not use IP-based SANs to validate connections, even if somebody does, say, embed a raw IP address in a URL.
This stuff is just insanity.
> If it becomes common enough for clients to assume that all web servers have an IP cert and attempt to use ESNI on every connection, it could be a boon for privacy across the internet.
That's never going to be a safe assumption; private and/or dynamically assigned IP addresses are always going to be a thing.
> In other cases dns is just not needed at all.
There's no reason AT ALL to bring IP addresses into the mix.
All of my local services use https:
- get a domain name (foo.com) and get certificates for *.foo.com
- run a DNS resolver that maps a.b.c.d.foo.com (or a-b-c-d.foo.com) to the corresponding private IP a.b.c.d
- install the foo.com certificate on that private IP's device
then you can connect to devices on your local network via IP by using https://192-168-1-1.foo.com
Since you need to install the certificate in step 3 above, this works better with long-lived certificates, of course, but automation helps there.
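The name-to-address mapping that resolver performs can be sketched in a few lines (the suffix and the dash encoding are assumptions taken from the comment above):

```python
import ipaddress

def embedded_ip(hostname: str, suffix: str = ".foo.com") -> ipaddress.IPv4Address:
    """Recover the private IPv4 address encoded in names like 192-168-1-1.foo.com."""
    label = hostname.removesuffix(suffix)
    return ipaddress.IPv4Address(label.replace("-", "."))

print(embedded_ip("192-168-1-1.foo.com"))  # 192.168.1.1
```

A local DNS resolver would apply this mapping when answering queries for *.foo.com, returning the embedded private address.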
Why can't local internet registries issue certificates?
Why are third parties needed to verify RIR and LIR registrants?
Apparently the job is too much for domain name registrars.
Are the reasons the same?