One area where we have found Caddy invaluable is local testing of APIs with HTTP/2 during development. Most dev servers are HTTP/1.1 only, so you are limited to a maximum of six concurrent connections to localhost. HTTP/2 requires TLS, which would normally make it a PITA to set up locally for development.
Throw a Caddy reverse proxy in front of your normal dev server and you immediately get HTTP2 via the root certificate it installs in your OS trust store. (https://caddyserver.com/docs/automatic-https)
We (ElectricSQL) recommend it to our users, as our APIs do long polling, which over HTTP/2 doesn't tie up those six concurrent connections.
I've also found that placing it in front of Vite for normal development makes reloads much faster. Vite uses the JS module system to load individual files in the browser, with support for HMR (hot module replacement); for larger apps this can result in a lot of concurrent requests, creating a queue for those files on the six connections. Other bundlers/build tools bundle the code during development, reducing the number of files loaded into the browser, which sparked a bit of a debate last year over which approach is better. With HTTP/2 via Caddy in front of Vite you solve all those problems!
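A minimal Caddyfile sketch of this setup (assuming Vite on its default port 5173; Caddy's local CA handles the certificate for the `localhost` address automatically):

```Caddyfile
localhost {
	reverse_proxy localhost:5173
}
```

With that running, https://localhost serves the Vite dev server over HTTP/2.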
Strictly speaking it doesn't: unencrypted HTTP/2 (h2c) is allowed per the spec (and Caddy supports that mode), but the browsers chose not to support it, so it's only really useful when testing non-browser clients or routing requests between servers. HTTP/3 does require encryption for reals though; there's no opting out anymore.
Another way is to create a regular DNS name and have it resolve to localhost. If you are unable or unwilling to do so, there are free DNS services like https://traefik.me/ that provide you with a real domain name and matching certificates.
I personally use traefik.me for my hobbyist fiddling, and I have a working HTTP/2 local development experience. It's also very nice to be able to build for production and test the performance locally, without having to deploy to a dev environment.
Just a note, because this comment made me curious and prompted me to look into it:
Vite does use HTTP2 automatically if you configure the cert, which is easy to do locally without Caddy. In that case specifically there's no real reason to use Caddy locally that I can see, other than wanting to use Caddy's local cert instead of mkcert or the Vite plugin that automatically provides a local cert.
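A sketch of that Vite setup, assuming localhost certs were generated with something like mkcert (the file names here are illustrative):

```js
// vite.config.js
import fs from 'node:fs'
import { defineConfig } from 'vite'

export default defineConfig({
  server: {
    // with a key/cert pair configured, Vite serves over HTTPS
    // and HTTP/2 automatically
    https: {
      key: fs.readFileSync('./localhost-key.pem'),
      cert: fs.readFileSync('./localhost.pem'),
    },
  },
})
```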
Completely agree. If you want a nice way to do this with a shared config that you can commit to a git repo, check out my project, Localias. It also lets you visit dev servers from other devices on the same wifi network — great for mobile testing!
Localias is built on Caddy; my whole goal is to make local web dev with https as simple as possible.
https://github.com/peterldowns/localias
That only works on localhost, right? I am looking for a solution for an intranet that doesn't require complex sysadmin skills such as setting up DNS servers and installing root certificates. This is for my customers, who need to run my web server on the intranet while encrypting traffic (no need to verify that the server is who it claims to be).
The six connections thing is just a default that you can change in about:config. Really it should probably have a higher default in $currentYear, but I don't expect major browser vendors to care.
I assumed almost everyone (product, enterprise) uses ngrok to expose a development/localhost server to get HTTP2 nowadays, but it's good to realize Caddy can do the job well.
> so you are limited to max of 6 concurrent connections to localhost.
I think a web server listening on 0.0.0.0 will accept “localhost” connections on 127.0.0.2, 127.0.0.3, 127.0.0.4 … etc., and that you could have six connections to each.
https://superuser.com/questions/393700/what-is-the-127-0-0-2...
(a comment there says “not on macOS” though)
This does not sound like the kind of feature I would want in a web server
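On Linux this is easy to check with a few lines of Python (a sketch; the helper name is mine, and as noted above this may not work on macOS without configuring the alias first):

```python
import socket
import threading

def loopback_alias_connect(alias: str = "127.0.0.2") -> str:
    """Bind a listener on 0.0.0.0, then connect to it via a loopback alias.

    On Linux the whole 127.0.0.0/8 block routes to the loopback
    interface, so the connection succeeds without any extra setup.
    Returns the peer address the client ended up connected to.
    """
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.bind(("0.0.0.0", 0))   # OS picks a free port
    srv.listen(1)
    port = srv.getsockname()[1]

    # accept exactly one connection in the background
    t = threading.Thread(target=lambda: srv.accept()[0].close())
    t.start()

    cli = socket.create_connection((alias, port))
    peer = cli.getpeername()[0]   # the alias we dialed
    cli.close()
    t.join()
    srv.close()
    return peer

print(loopback_alias_connect())   # on Linux: 127.0.0.2
```

So a browser that caps connections per origin at six would still open six more to 127.0.0.2, another six to 127.0.0.3, and so on.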
After switching from nginx to caddy-docker-proxy a year ago I just recently made the move to Pangolin[0] and am really enjoying the experience. It's a frontend to traefik with built-in auth and ability to tunnel traffic through Wireguard. I needed the TCP forwarding for my Minecraft server and this made it very simple.
Would recommend it for anyone wanting a better version of Nginx Proxy Manager. The documentation is a little lacking so far but the maintainers are very helpful in their Discord.
[0] github.com/fosrl/pangolin
And all your traffic would be watched and monitored by a US company which already has access to vast amounts of your internet browsing behaviour.
Thanks for this comment. I've recently been looking to use a domain for a server (instead of an ISP-assigned address) to make it publicly accessible. The server machine still physically sits in a residential location, so I don't want that exposed. This is another setup solution I can look into.
I have been looking into doing an EC2 instance or DO droplet with a static IP, with Tailscale Funnel for the traffic proxy. I just like that it's easy to go into the web interface for the EC2 instance/droplet and control which IPs are allowed SSH connections.
What is the use of SSO there? How would this work with other self-hosted applications that require their own auth? If you need to authenticate twice, that would not be good.
A lot of positivity in this thread. I don't have anything bad to say about Caddy, but the only advantage I'm hearing over Nginx is easier cert setup. If you're struggling with that, I can see how that's a benefit.
I configured my kubernetes cluster to automatically create and renew certs a few years ago. It's all done through Ingress now. I just point my Nginx load balancer to my new domain and it figures it out.
I don't often need local https but when I do I also need outside access so Stripe or whatever can ping my dev server (testing webhooks). For that I have a server running Nginx which I use to proxy back to localhost, I just have to run 1 command to temporarily expose my machine under a fixed domain.
Works for me. Maybe not everyone but I'll keep doing this since I don't have any reason to switch
Here's one: it does not support dynamically loadable modules, like most (all?) Go programs. So if you need e.g. geoip, you have to build your own, and then maintain it, tracking CVEs, etc. You can't rely on your distribution's package maintainer to do the work.
It's not like you have to maintain a fork; it's pretty minimal. All you need is a Dockerfile with what you want, then build the container. Other than that you just keep bumping the version like you would with the standard distribution.
For example, to use rate limiting I just have a Dockerfile like this:
FROM caddy:2.9.1-builder AS builder
RUN xcaddy build --with github.com/mholt/caddy-ratelimit

FROM caddy:2.9.1
COPY --from=builder /usr/bin/caddy /usr/bin/caddy
Golang fundamentally doesn't support dynamically loaded libraries. It appears at first that it does (the standard library has a plugin package), which can waste your time, but in practice it doesn't work.
This was the big deal-breaker for me when I last looked a little while ago.
I need Route 53 and a few other DNS providers built in for Let's Encrypt support, and the docs implied that I was going to have to build those plugins myself?!
I stopped reading at that point, because certbot is trivial to install and just works with the web server, which was also one command to install. At no point did I have to create an ephemeral container just to build nginx or certbot...
Caddy is an opinionated alternative to nginx with modern defaults.
I’m perfectly able to configure all of the bits and pieces of nginx or Apache, but instead of spending 15 minutes or so doing it, I tell Caddy “here’s my domain name” and move on with my life. The massive benefit is that the features are easily replaced or replicated, so if I do decide I want to use Traefik or nginx for a specific feature, I can do that when I care about it. But Caddy is just batteries included.
> but instead of spending 15 minutes or so doing it I
You forgot to add on the months or years of experience you already have that lets you do that in 15 minutes. Maybe today with an LLM I could figure out certs, but every time I've tried it in the past there were tons and tons of jargon, tons and tons of options, and everything was written from the POV of someone who already knows it all.
It's alright. The main upside for me is that it supports parameterized includes, letting you reuse large chunks of configuration without relying on something like Ansible or bash + envsubst.
https://caddyserver.com/docs/caddyfile/directives/import
This is it for me. I got frustrated trying to do something with Nginx, which I had ~five years of experience with at the time.
Someone recommended I try Caddy, I was surprised I could just `chmod +x caddy; caddy start`, and I had replaced my laborious Nginx configuration + the new reverse proxy I wanted in ten minutes.
If I already knew Nginx in-and-out, I'd not have had the impetus to use Caddy. If Nginx config is a daunting task or something that takes longer than two minutes, I'd recommend taking a few minutes to try out Caddy.
This is your package manager's job, which even Windows has these days. Other operating systems solved this problem decades ago.
https://winstall.app/apps/nginxinc.nginx
I haven't tried Caddy yet, but to me there are obvious downsides to nginx that make me want to move away from it eventually. First and foremost: slow query detection. Out of the box it doesn't let you do that easily; you either need to hack your own log format (and parse it) or get the paid version. Another is simple stuff like log rotation (access.log/error.log); it could just use journald, but it doesn't. There are others, but these are enough to look for a better alternative.
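As a sketch of the "hack your own log format" route (these are standard nginx variables; the format name and log path are placeholders, and log_format goes in the http{} context):

```nginx
# expose request timing in the access log so slow queries
# can be grepped or parsed out of access.log
log_format timed '$remote_addr "$request" $status '
                 'rt=$request_time urt=$upstream_response_time';
access_log /var/log/nginx/access.log timed;
```

$request_time is the full time nginx spent on the request; $upstream_response_time isolates the backend's share.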
I mostly don't look at the logs outside of dev. Errors are caught by Sentry. Still, I can see the use for that... I'd probably try to ingest it into Grafana or some silly k8s solution if I cared enough.
I'd used nginx for work stuff for close to 10 years and still didn't trust myself writing configs from scratch without comparing with known-good ones. I never had that problem with Apache, so take from that what you want.
I'm not doing SRE stuff at work anymore (or it's on AWS) - so I've been using caddy for my own stuff for a couple of years with nearly zero problems.
For work I still might use Traefik or nginx; my only reason against Caddy was bad experiences in their support forum, but that was years ago.
I think nginx is great if you're an enterprise and want to squeeze the most utility out of your boxes. The issue is there's a large disconnect between nginx and NGINX Plus, and you quickly end up writing cursed configs to do basic things if you're using the former. That's literally what drove me to seek out alternatives and settle on Caddy years ago.
I absolutely love Caddy. Used it for years. Very reliable and so easy to set up once you learn the basics. The documentation is a bit hard to grok, but it saved me so much time and energy compared to trying to get Let's Encrypt working reliably on top of nginx.
I used Caddy for a couple of years but eventually went back to Nginx.
For the Let's Encrypt certs I use certbot and have my Nginx configs set up to point to the appropriate directories for the challenges for each domain.
The only difficulty I sometimes have is when I am setting up a new domain or subdomain, and Nginx refuses to start altogether because I don’t have the cert yet.
It’s probably not too complicated to get the setup right so that Nginx starts listening on port 80 only, instead of refusing to start just because it doesn’t have the cert for TLS needed to start up the listener on port 443.
But for me it happens just rarely enough that I instead first write the config with the TLS/:443 parts commented out and start it, so that I can respond to the request from Let’s Encrypt for the /.well-known/blah blah stuff, and then I re-enable listening with TLS and restart Nginx.
I also used DNS verification for a while as well, so I’m already aware that’s an option too. But I kind of like the response on :80 method. Even if I’ve managed to make it a bit inconvenient for myself to do so.
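For what it's worth, the port-80-only bootstrap described above can be a dedicated server block (a sketch; the domain and webroot are placeholders matching certbot's --webroot mode):

```nginx
server {
    listen 80;
    server_name new.example.com;

    # answer ACME HTTP-01 challenges even before any cert exists
    location /.well-known/acme-challenge/ {
        root /var/www/certbot;
    }

    # everything else goes to HTTPS once the :443 block is live
    location / {
        return 301 https://$host$request_uri;
    }
}
```

Since this block never mentions a certificate, nginx starts cleanly before the first issuance, and the :443 server block can be added afterwards.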
I tried to set up Caddy last year as a reverse proxy for all paths matching "/backend", serving the rest as static files from a directory. I had to give up, because the documentation was not good enough.
I tried the JSON config format, which seems to be the recommended format, but most examples on Google use the old format. To make it even more complicated, the official documentation mentions configuration options without noting that they require plugins that are not necessarily installed on Ubuntu. Apparently they just assume that you will compile it from scratch with all options included. A lot of time was wasted before I found a casual mention of this in some discussion forum (maybe Stack Overflow, I don't remember). I just wanted the path to be rewritten to remove the "/backend" prefix before proxying to the service. I guess that is uncommon for a reverse proxy and has to be placed in a separate module.
I may appear overly critical, but I really did spend a lot of time and made an honest attempt.
I'll go back to nginx. Setting up Let's Encrypt requires some additional steps, but at least it's well documented, and answers can be found via Google searches if necessary.
I had the same experience. And it also somewhat bothered me that even very basic and common functionality like rate limiting is not built in.
https://caddyserver.com/docs/caddyfile/directives/handle_pat...
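For what it's worth, current Caddy can do the prefix-stripping described above with the built-in handle_path directive (no plugin needed). A sketch, assuming the backend listens on localhost:8080 and the static files live in /srv/site:

```Caddyfile
example.com {
	# handle_path matches /backend/* and strips the /backend
	# prefix before handing the request to the proxy
	handle_path /backend/* {
		reverse_proxy localhost:8080
	}

	# everything else: static files from a directory
	handle {
		root * /srv/site
		file_server
	}
}
```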
Reading the website top to bottom, I’m now unsure about the trustworthiness of a project that seems so full of itself. Passage after passage about how great it is leaves a bad aftertaste. Maybe it’s just me—unsure.
I no longer trust the authors to be honest about known shortcomings, let alone be upfront, truthful, and transparent when dealing with security issues and reported vulnerabilities.
I hope I’m wrong. Does anyone know how they’ve handled disclosures in the past?
https://hn.algolia.com/?dateRange=pastYear&page=0&prefix=fal...
https://en.wikipedia.org/wiki/Caddy_(web_server)
I dislike this style of documentation as well, but Caddy is a proven piece of technology. It can easily replace nginx or any other reverse proxy unless you're using a really niche configuration. Not needing to deal with certbot is also pretty nice.
Caddy's writing style isn't necessarily big-enterprise-middle-management-friendly, but luckily for big enterprises that want lengthy, dry, and boring, there are plenty of alternatives.
I just had my first experience with Caddy setting it up as a reverse proxy in front of Vaultwarden. Following along with Vaultwarden's documentation it worked like a charm and I was left thinking, "What a neat little project for hobbyists who want to get going quickly with the basics."
Then I checked out the home page and it's all "The most advanced HTTPS server in the world Raaawwrrr!"
Quite the divergence, but as other comments in the thread say, it's a legit good project.
You're unsure about a product because the landing page is positive, and even go so far as to not trust the authors any more? That does sound like a strange expectation for a landing page, which is usually intended to make you want to use a project.
I agree with the GP that hyperbole on a landing page (or anywhere else in the project’s communication) makes me not want to use the project. It communicates that the project lacks confidence that a down-to-earth description would speak for itself.
I understand the attitude because there are a lot of corporate websites which similarly claim the moon and the stars and when you dig right down a lot of it is bullshit. I have worked in places like this.
Such companies tend to imply that their product can do anything and tend to have pages of verbiage rather than the brass tacks README with examples you get on a good open source project's github page.
The friendly licensing (Apache v2) is important too, especially w/ Caddy's modular architecture (single, static binary compiled for any platform).
Meaning, ecosystems form around Caddy to make it even simpler and more secure, e.g. keeping your server private while serving Internet clients: VPNs like Tailscale (1), or zero-implicit-trust overlays like OpenZiti (also Apache v2) (2). Similar to what we have seen with the open source k8s ecosystem, for example.
(1) https://tailscale.com/blog/caddy (and other VPNs but the proprietary bits in the commercial TS service make it easier to use)
(2) https://github.com/openziti-test-kitchen/ziti-caddy (disclosure: maintainer...there may be other open source zero implicit trust options with these types of Caddy integrations)
> single, static binary compiled for any platform
Huh? Aren't these exact opposites?
I prefer to keep certificate management separate from individual applications like web servers, mail servers, XMPP servers, database servers and all the other services I run. All of these need certificates so I have centralised certificate management and distribution. This comes down to running certbot in a container with some hook scripts to distribute new or updated certificates to services (running on different containers and machines) which need them, restarting those services when needed. Adding a new site to nginx comes down to copying a template configuration, changing the site name to the correct one, adding whatever configuration needed for the specific service and requesting a new certificate for it. The new certificate automatically gets copied to the container or machine running the service so it is available after reloading the nginx configuration. The same is true for most other services, several of which share certificates because they're running in the same domain. I used the same scheme back when I used lighttpd and will probably use it should I move to another web (or mail or XMPP or whatnot) server.
Same here (not certbot and containers, but the part about reusing certificates for multiple services): it feels wrong to couple certificate acquisition with a web server. It is apparently convenient when the web server is the only TLS-using service, or at least when it sits at the center of the setup and HTTP-based certificate acquisition is used, which seems to be a common enough case to justify this, but it is still an odd coupling in general.
I also do this same thing, but my Nginx configs are templated out via automation. It gives me the best of both worlds: 95% of my sites and their certs are templated out from 3 lines of config each, then for the last special 5% I can insert literal Nginx config. For most uses I have the same experience as someone with Caddy, but for that last 5% I love the "access the full power of Nginx config from the same place" escape hatch.
I migrated all my Nginx hosts to use Caddy a while back. It doesn't do anything Nginx can't, but the default configuration is identical to the way I'd previously manually configured servers. It's so pleasant to get an HTTPS site up and running with 3 lines of setup.
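Those three lines look roughly like this (domain and upstream port are placeholders; Caddy provisions and renews the certificate on its own):

```Caddyfile
mysite.example.com {
	reverse_proxy localhost:3000
}
```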