Hey, I just decided to run a DNS server and a couple of web services on my LAN from a Raspberry Pi over the weekend. I used Nginx as the reverse proxy so all of the services could be addressable without port numbers. It was very easy to set up. It's funny how, when you learn something new, you start seeing it all over the place.
This idea we seem to have moved towards, where every application ALSO includes its own ACME support, really annoys me. I much prefer the idea that there are well-written clients whose job it is to do the ACME handling.
Is my Postfix mail server soon going to have an ACME client shoehorned in? I've already seen GitHub issues asking for AdGuard Home (a DNS server that supports blocklists) to have an ACME client built in, thankfully thus far ignored.
Proxmox (a VM hypervisor!) has an ACME client built in.
I realise of course that the inclusion of an ACME client in a product doesn't mean I need to use their implementation; I'm free to keep using my own independent client. But it seems to me that adding ACME clients to everything is going to cause those projects more PRs, more baggage to drag forward, etc. And confusion for users, as now there are multiple places they could/should be generating certificates.
Anyway, grumpy old man rant over. It just seems Zawinski's Law ("Every program attempts to expand until it can read mail. Those programs which cannot so expand are replaced by ones which can.") can be replaced these days with MuppetMan's law: "Every program attempts to expand until it can issue ACME certificates."
Careful posting systemd satire here; there is a high likelihood that your comment becomes the reason this feature gets built and PRed by someone bored enough to also read the HN comment section.
If we teach systemd socket activation to do TLS handshakes we can completely offload TLS encryption to the kernel (and network devices) and you get all of this for free.
It's actually not a crazy idea, in a world with kTLS, to centralize TLS handshaking in systemd.
Unironically, I think having a systemd-something util that would provide TLS certs for .service units upon encountering a specific config knob in the [Service] section would be much better than having a multitude of uncoordinated ACME clients that will quickly burn through allowed rate limits. Even just as a courtesy to LE/ISRG's computational resources.
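A sketch of what such a knob might look like. This is entirely hypothetical: neither the `ACMECertificate=` directive nor a central systemd ACME helper exists today.

```ini
# /etc/systemd/system/myapp.service (hypothetical sketch)
[Service]
# Imaginary directive: ask a central systemd ACME helper to obtain
# and renew a certificate for this domain before starting the unit.
ACMECertificate=myapp.example.com
# The helper could hand the key/cert to the unit via systemd's
# existing credentials mechanism ($CREDENTIALS_DIRECTORY).
ExecStart=/usr/bin/myapp --cert ${CREDENTIALS_DIRECTORY}/myapp.example.com.pem
```

One helper process would then own the account keys and rate-limit bookkeeping for the whole machine, instead of every daemon reimplementing it.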
I'm with you on this. I run my ACME clients as least-privileged standalone applications.
On a machine where you're only running a webserver, I suppose having Nginx do the ACME renewal makes sense.
On many of the machines I support, I need certificates for other services too. In many cases I also have to distribute the certificate to multiple machines.
I find it easy to manage and troubleshoot a single application handling the ACME process. I can't imagine having multiple logs to review and monitor would be easier.
The idea that the thing that needs the certificate gets the certificate doesn't seem that perverse to me. The interface/port-bound httpd needs to know what domains it's serving and what certificates it's using.
Automating this is pure benefit to those that want it, and a non-issue to those who don't — just don't use it.
I personally think nginx is the kind of project I'd allow to have its own acme client. It's extremely extremely widely used software and I would be surprised if less than 50% of the certs LE issues are not exclusively served via nginx.
Now if Jenkins adds ACME support, then yes, I'll say maybe that one is too far.
But it's a webserver. I'm sure it farms out sending emails from forms it serves. I doubt it has a PHP interpreter built in; surely it farms that out to php-fpm? It doesn't have a Redis library or Node.js built in. Why's ACME different?
It makes sense to me. If an application needs a signed certificate to function properly, why shouldn't it include code to obtain that certificate automatically when possible?
Maybe if there were OS level features for doing the same thing you could argue the applications should call out to those instead, but at least on Linux that's not really the case. Why should admins need to install and configure a separate application just to get basic functionality working?
Proxmox is not a hypervisor. It is a Linux distribution. As such it has a web server, kvm, zfs, and many other pieces. Maybe the acme client is built in to the web server. Maybe the acme client is built into their custom management software. Maybe they're just scripting around certbot.
I do tend to find that I need multiple services with TLS on the same machine, such as a web server and RabbitMQ, or Postfix and Dovecot. I don't know how having every program carry its own ACME client would end up working out. That seems like it could be a mess. On the other hand, I have been having trouble getting them all to take updated certificates correctly without me manually restarting services after certbot's cron job does an update.
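For the reload problem, certbot runs any executable placed in its deploy-hooks directory after each successful renewal, so the reloads can be automated. A sketch, with the service names adjusted to this commenter's stack (your paths and unit names may differ):

```shell
#!/bin/sh
# /etc/letsencrypt/renewal-hooks/deploy/reload-services.sh
# Certbot runs this after each successful renewal; it exposes
# RENEWED_LINEAGE (the live/<domain> directory) and RENEWED_DOMAINS
# in the environment if you need to act per-certificate.
# Prefer reload over restart so in-flight connections aren't dropped.
systemctl reload nginx
systemctl reload postfix dovecot
# Some services (e.g. RabbitMQ, depending on version) only re-read
# certificates on restart rather than reload.
systemctl restart rabbitmq-server
```

Make the script executable; certbot picks it up on the next renewal without any crontab changes.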
I’m of the opposite opinion, really: automatic TLS certificate requests are just an implementation detail of software able to advertise as accepting encrypted connections. Similarly, many applications include an OAuth client that automatically requests access tokens and refreshes them, all using a discovery URI and client credentials.
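The refresh bookkeeping such embedded clients implement is genuinely small, which is part of the argument. A minimal sketch (hypothetical names, stdlib only; the real `fetch_token` would POST client credentials to the token endpoint found via the discovery URI):

```python
import time

class TokenCache:
    """Caches an OAuth access token and decides when to refresh it."""

    def __init__(self, fetch_token, leeway=30.0):
        # fetch_token() -> (access_token, expires_in_seconds)
        self._fetch = fetch_token
        self._leeway = leeway          # refresh this many seconds early
        self._token = None
        self._expires_at = 0.0

    def get(self):
        # Refresh if we have no token or it is about to expire.
        if self._token is None or time.time() >= self._expires_at - self._leeway:
            self._token, expires_in = self._fetch()
            self._expires_at = time.time() + expires_in
        return self._token

# Demo with a fake fetcher; a real one would do an HTTP request.
calls = []
def fake_fetch():
    calls.append(1)
    return f"token-{len(calls)}", 3600
cache = TokenCache(fake_fetch)
print(cache.get())  # fetches: token-1
print(cache.get())  # cached: still token-1, no second fetch
```

The same shape (cache, expiry, early refresh) is what an embedded ACME client does with certificates, just on a scale of weeks instead of minutes.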
Lots of apps should support this automatically, with no intervention necessary, and just communicate securely with each other. And ACME is the way to enable that.
Why should every piece of software need to support encrypted connections? That is a rabbit hole of complexity which can easily be implemented incorrectly, and is a security risk of its own.
Instead, it would make more sense for TLS to be handled centrally by a known and trusted implementation, which proxies the communication with each backend. This is a common architecture we've used for decades. It's flexible, more secure, keeps complexity compartmentalized, and is much easier to manage.
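A sketch of that architecture with nginx terminating TLS in front of a plaintext backend (hostnames, ports, and paths are placeholders):

```nginx
# nginx terminates TLS centrally; the backend speaks plain HTTP
# on localhost and never touches certificates or key material.
server {
    listen 443 ssl;
    server_name app.example.com;

    ssl_certificate     /etc/ssl/app.example.com/fullchain.pem;
    ssl_certificate_key /etc/ssl/app.example.com/privkey.pem;

    location / {
        proxy_pass http://127.0.0.1:8080;   # plaintext backend
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-Proto https;
    }
}
```

One TLS stack to patch, one place certificates live, and the backends stay simple.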
I believe Caddy was the first standalone software to include automated ACME. It's a web server (and a proxy), so it's a very good fit: one piece of software, many domains. Proxmox likewise is a hypervisor hosting many VMs (hence domains). Another good fit. Though as far as I know they don't provide the service for the VMs "yet".
You just don't load the module and use certbot, and that will work, which is what I'm doing. People get carried away with this stuff. The software is quite modular. It's fine for people to simplify it.
For a bunch of tech-aware people the inability for you all here to modify your software to meet your needs is insane. As a 14 year old I was using the ck patch series to have a better (for me) scheduler in the kernel. Every other teenager could do this shit.
In my 30s I have a low-friction setup where each bit of software only does one thing and it's easy for me to replicate. Teenagers can do this too.
Somehow you guys can't do either of these things. I don't get it. Are you stupid? Just don't load the module. Use stunnel. Use certbot. None of these things are disappearing. I much prefer. I much prefer. I much prefer. Christ. Never seen a userbase that moans as much about software (I moan about moaning - different thing) while being unable to do anything about it as HN.
Congratulations to the folks involved. I'm sure this wasn't a trivial lift. And the improvement to free security posture is a net positive for our community.
I have moved most of my personal stuff to caddy, but I look forward to testing out the new release for a future project and learning about the differences in the offerings.
Nginx introduces native support for ACME protocol - https://news.ycombinator.com/item?id=44889941 - Aug 2025 (298 comments)
triple-negative, too hard to parse
To avoid a splintered/disjoint ecosystem, library code can be reused across many applications.
Thanks for this!
nginx-module-acme is available there, too, so you don't need to compile it manually.
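Roughly, the preview module is configured in two parts: an issuer block and a per-server directive. A sketch based on the module's announced interface; directive names may change as the module matures, so check the current module documentation before relying on this:

```nginx
# Minimal sketch of nginx's native ACME module (preview).
resolver 1.1.1.1;

acme_issuer letsencrypt {
    uri        https://acme-v02.api.letsencrypt.org/directory;
    contact    admin@example.com;
    state_path /var/lib/nginx/acme-letsencrypt;
    accept_terms_of_service;
}

server {
    listen 443 ssl;
    server_name example.com;

    acme_certificate letsencrypt;          # issue/renew via the issuer above
    ssl_certificate     $acme_certificate;
    ssl_certificate_key $acme_certificate_key;
}
```

nginx handles the HTTP-01 challenge itself, since it already owns port 80/443 for the domain.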