geenat · 2 years ago
At this point I'd rather be good at Caddyfile and have a project folder of:

  /home/me/project/caddy
  /home/me/project/Caddyfile
No sudo, no config spew across my filesystem. Competition is good, and I had a lot of fun with nginx back in the day, but it's too little, too late for me.
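
For reference, a minimal Caddyfile for this kind of setup looks something like this (the domain and upstream port are just placeholders):

  example.com {
    encode gzip
    reverse_proxy localhost:8080
  }

With that, Caddy serves the site over HTTPS with automatically provisioned certificates, and the whole config lives right next to the binary.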

klabb3 · 2 years ago
This is more of a Linux-ism, no? I agree though. I have used Linux for decades, but I never remember the usr, bin, local, etc. permutations of magical paths, nor do I think they make any sense. It's a mess, honestly, and almost never what I want. When I was younger I thought I was holding it wrong, but these days I'm sure it will never, ever map well to my mental model. It feels like a lot of distro-specific trivia leaking all over the floor.

It seems like a lot of the hipster tooling dropped those things from the past and honestly it’s so much nicer to have things contained in a single file/dir or at most two. That may be a big reason why kids these days prefer them, honestly.

As for nginx itself, it's actually much better suited for high-performance proxies with many connections, imo. I ran some benchmarks and the Go variants (Traefik, Caddy) eat a lot of memory per connection. Some of that's unavoidable because of the minimum per-goroutine stacks. Now, I'm sure they're better in many ways, but I was very impressed with nginx's footprint.

kbenson · 2 years ago
Windows has the same thing, it's just much less exposed. And none of the paths are magical; they're well defined and mostly adhered to by all major distros. The core of how it works in Linux is fairly straightforward.

The main difference is in how additional software is handled. Windows, because of its history of mostly third-party software being installed, generally installed applications into a folder, and that folder contained the application... mostly. Uninstalling was never as simple as that might imply.

Linux distros had an existing filesystem layout (from Unix) to conform to, so when they started developing package managers, they had to support files all over the place, which is why they make sure packages include manifests. Want to know where user executables are? Check bin. Superuser executables? Check sbin (you don't want those cluttering the available utils in the PATH of regular users). Libraries go in lib.

/bin and /usr/bin and the others are holdovers from the distant past when disks were small, and recent distros often just symlink /bin and friends into /usr (the "usr merge"), so they're different in name only. /usr/local is for local, admin-installed stuff that isn't handled through a package. /opt is for whatever, and is often used for software installed into a contained folder, like on Windows.

Just know what bin, sbin, lib, opt and etc are for, and most of the rest is irrelevant as long as you know how to query the package manager for what files a package provides, or ask it what package a specific file belongs to. If you looked into Windows and the various places it puts things, I suspect you'd find it at least as complicated, if not much more.
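
For example, on a Debian-style system those queries look something like this (rpm-based distros have equivalents in rpm -qf and rpm -ql):

  dpkg -S /usr/bin/ssh     # which package owns this file?
  dpkg -L openssh-client   # which files does this package provide?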

Note: what I said may not match the FHS (which someone else usefully posted) perfectly, but for the most part it should work as a simple primer.

antifa · 2 years ago
> I ran some benchmarks and the Go variants (traefik, caddy) eat a lot of memory per conn.

I'm pretty annoyed with how many people on HN are shouting "caddy is so much better!" in every thread, when the only material benefit to caddy I can glean from these threads is that it's easier for noobs. Which, to be clear, as far as I can tell it does a good job of, and it will probably win the next decade over nginx for that reason alone, but nginx really isn't that hard to set up, and I'm surprised there isn't more push-back against the pro-caddy narrative. It's just an uphill battle for an application written in golang to be faster than a mature application written in C. Obviously I will continue to use nginx until hard evidence of on-par performance is published, but at the same time I'm more likely to hold out for a competitor written in Rust.

jeffreygoesto · 2 years ago
I think the reason is that you needed to save on the number of places to look for files, and sort them by function, in the LFS. The package manager converts a package view into a system view; it kind of "rotates the filesystem layout by 90 degrees". Both views have their pros and cons.

It is annoying, however, that configuration is not standardized across distros.

gjvc · 2 years ago
honestly
nevermore24 · 2 years ago
I don't know if not being able to remember filesystem conventions is Linux's fault. Computers have a lot of esoterica and random facts to recall. How is this one any different?

See also: https://refspecs.linuxfoundation.org/FHS_3.0/fhs/index.html

sofixa · 2 years ago
I'm really not a fan of Caddy. It tries to be too smart and make things simple for users, which means that the second something goes wrong or you step out of the expected box, you get weird behaviour which is hard to debug.

Fun example from last week: a colleague was trying out ACME with a custom ACME server and configured it accordingly. For some reason Caddy was not using it and instead used its own internal cert issuer, even when explicitly told to use the ACME provider as configured. Turns out that if you use a .local domain, Caddy will insist on using its own cert issuer even if there's an ACME provider configured. Does that make sense? Yeah, somewhat, but it's the kind of weird implicit behaviour that makes me mistrust it.
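
For illustration, the kind of config involved would look roughly like this (the hostname and ACME directory URL are made up):

  something.local {
    tls {
      ca https://acme.internal.example/directory
    }
    reverse_proxy localhost:3000
  }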

My go-tos are nginx for static stuff, Traefik for dynamic stuff.

vgb2k18 · 2 years ago
I bought into Caddy last year for its simplicity. Loved it, from the moment I saw the off-horizontal default page. I switched back to nginx last month because, like you said, I stepped outside of the expected box. Skill issue? Maybe... I moved my gunicorn webapp from WAN to LAN. No more dotcom, just a 10.x.x.x IP. Suffice it to say Caddy didn't like it: error messages were non-specific, and community knowledge (i.e. Stack Overflow etc.) was lacking. Then nginx... it worked perfectly with an almost-default config. Skill issue or Caddy issue, the point is: Caddy is only simpler sometimes.
mholt · 2 years ago
What was the config? It should never override explicit configuration...
cpach · 2 years ago
AFAIK, nginx doesn’t require root. If you’re thinking about the ability to bind port 80/443, you should be able to do that via CAP_NET_BIND_SERVICE.
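
For example, something along these lines should do it (the binary path and unit snippet are illustrative):

  # grant the binary the capability directly
  sudo setcap 'cap_net_bind_service=+ep' /usr/sbin/nginx

  # or, in the systemd unit that runs nginx as a non-root user
  AmbientCapabilities=CAP_NET_BIND_SERVICE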

With that said, Caddy is pretty rad.

kaptainscarlet · 2 years ago
Caddy sounds like the go-to tool for people who care a lot about getting things done. It's time for me to try it.
diarrhea · 2 years ago
A coworker of mine dislikes that it bundles everything into a single binary. For example, to get ACME DNS-01 challenges for certificate issuance working, I need to compile in a Google DNS-specific plugin.

But then it... just works. Try the same with most other web servers/proxies and you're in for a world of pain. Having this much functionality bundled into a single binary is as much a curse as it is a blessing.

That said, having your own little 'Cloudflare Workers' in the form of Nginx Unit with wasm sounds great. Not sure Caddy can do that.

9dev · 2 years ago
If you want to see a real-life example of what Caddy can do, feel free to check the configuration of my iss-metrics project:

https://github.com/Radiergummi/iss-metrics/blob/main/caddy/C...

I was in the same boat as you and wanted to try out what Caddy is capable of. I was immediately convinced. So many features, where you expect them. Consistent configuration language. Environment interpolation, everywhere. Flexible API. It’s really all there.

page_fault · 2 years ago
It's a fine project right up to the point where you need additional functionality that's split out into one of the plugins. Since Go applications don't support proper .so plugins in practice, you have to build your own binaries or rely on their build service, and this puts the responsibility of supporting and updating such a custom build on you.

So no setting up unattended-upgrades and forgetting about it.
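
For reference, the custom build step being described is usually done with the xcaddy tool, roughly like this (the plugin module below is only an example):

  # build a Caddy binary with an extra plugin compiled in
  xcaddy build --with github.com/caddy-dns/cloudflare

And that custom binary is exactly what you then have to keep rebuilding yourself whenever you upgrade.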

NetOpWibby · 2 years ago
I recently set up a Flarum forum and the instructions mentioned Apache and Nginx. I sighed until I saw Caddy immediately below.

Caddy really is the most pleasant webserver software I’ve ever used.

bavell · 2 years ago
Eh, it's a bit overhyped imo, although I do like the config format and built-in ACME. My production clusters all run nginx, though, and give me minimal fuss with a lot of flexibility.
asmor · 2 years ago
Has anyone figured out why Caddy is substantially slower (throughput, not latency) than nginx at reverse proxying? I've switched between the two for my Seafile instance and it's a night-and-day difference.
ac130kz · 2 years ago
Garbage collection pauses might have something to do with that.

Deleted Comment

renewiltord · 2 years ago
These are fair points, for sure, but nginx config files are well understood by LLMs, so I get good advice from them. That's really the limiting factor for most equivalent tools for me these days: how well the LLM handles them.

If someone hooks them up to a man page I think it might level the playing field.

gedw99 · 2 years ago
I also prefer Caddy with the wazero plugin to run WASM.

So easy and works on everything

gnaman · 2 years ago
Unfortunately, Caddy does not support, and does not plan on supporting, anything other than HTTP/HTTPS. These days I find myself going back to nginx only for TCP/UDP reverse proxying.
m_sahaf · 2 years ago
It supports it with the caddy-l4 plugin: https://github.com/mholt/caddy-l4. We've also indicated we might move the plugin into standard Caddy once we've received enough feedback from the user base and are comfortable with the solidity of the implementation.
callahad · 2 years ago
Hi! I'm currently in charge of Unit. If you're using it, I'd love to chat with you to understand what sucks, what doesn't, and what's missing. I have my own ideas, but external validation is always nice. :)

Contact info is in my profile.

Deleted Comment

ngrilly · 2 years ago
Neat! What is the benefit of using this over "standalone" nginx? The HTTP API enabling configuration changes at runtime without downtime (like Caddy)? No need for a process supervisor like supervisord or systemd, since nginx Unit manages the backends?
9dev · 2 years ago
It’s pretty much like Caddy vs. nginx: Language runtime, static asset serving, TLS, routing and so on bundled in a single package. That makes it very easy to deploy a container, for example.

Think of a typical PHP app, which exposes both dynamically routed endpoints and static assets. With a traditional setup, you'd let nginx handle all paths as static assets and fall back to the index.php file to serve the app. When you package that as a container, you'll either have to use separate PHP-FPM and nginx containers, or run two processes in a single container, neither of which is ideal. And it gets even more complex with TLS, and so on.

Using unit or caddy, you can simplify this to a single container that achieves it all, easily.
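
To make that concrete, a rough sketch of what the Unit side of such a setup might look like (paths and the app name are made up, and exact options can vary by version):

  {
    "listeners": {
      "*:8080": { "pass": "routes" }
    },
    "routes": [
      {
        "action": {
          "share": "/www/public$uri",
          "fallback": { "pass": "applications/myapp" }
        }
      }
    ],
    "applications": {
      "myapp": {
        "type": "php",
        "root": "/www/public",
        "script": "index.php"
      }
    }
  }

Static files are served directly, anything else falls through to the PHP front controller, and the whole thing runs as one process tree in one container.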

SahAssar · 2 years ago
What is Caddy's language runtime? Which languages does it support? Or are you thinking of the FrankenPHP plugin?
callahad · 2 years ago
Exactly, they're complements: you'd deploy your application on Unit and put that behind NGINX or another reverse proxy like Caddy or Traefik.

Unit can serve static assets, directly host Python / PHP / WebAssembly workloads, automatically scale worker processes on the same node, and dynamically reconfigure itself without downtime.

Unit cannot do detailed request/response rewriting, caching, compression, HTTP/2, or automatic TLS... yet. ;)

attentive · 2 years ago
It's an app server. It can run your asgi or wsgi app.
supriyo-biswas · 2 years ago
For me I'd rather ship a single binary with PHP support in it when using containers.
la_fayette · 2 years ago
Can you elaborate on that? In particular, where do the PHP runtime and the web server live?
simonw · 2 years ago
For some reason I had a thought lodged in my head that Unit wasn't open source, but I just checked their GitHub repo and it's been Apache 2 since they first added the license file seven years ago.

I must have been confusing it with NGINX Plus.

rvnx · 2 years ago
“Oops, sorry, thank you for letting us know, we will change that to the proprietary license instead”
callahad · 2 years ago
I wouldn't bet on that. :)

F5 isn't the most visible corporation in terms of grassroots engagement, but NGINX itself has remained F/OSS all these years and newer projects like the Kubernetes Ingress Controller [0], Gateway Fabric [1], and NGINX Agent [2] are all Apache 2.0 licensed. Just like Unit.

We do have commercial offerings, including the aforementioned NGINX Plus, but I think we've got a decent track record of keeping useful things open.

[0]: https://github.com/nginxinc/kubernetes-ingress

[1]: https://github.com/nginxinc/nginx-gateway-fabric

[2]: https://github.com/nginx/agent

vdfs · 2 years ago
Usually it's done on purpose: they wait until it gets very popular and used everywhere before pulling the rug.
casperb · 2 years ago
I tried a setup with Nginx Unit and php-fpm inside a Docker container, but the way to load the config is so cumbersome that I was never confident enough to use it in production. It feels like I am doing something wrong. Is there a way to just load a config file from the filesystem?
callahad · 2 years ago
We're very actively working on improving Unit's UX/DX along those lines. Our official Docker images will pick up and read configuration files from `/docker-entrypoint.d/`, so you can bind mount your config into your container and you should be off to the races. More details at https://unit.nginx.org/installation/#initial-configuration
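
In practice that can be as simple as something like this (the image tag and paths are illustrative):

  docker run -d -p 8080:8080 \
    -v "$(pwd)/config.json:/docker-entrypoint.d/config.json:ro" \
    -v "$(pwd)/app:/www:ro" \
    unit:php8.2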

But that's still kinda rough, so we're also overhauling our tooling, including a new (and very much still-in-development) `unitctl` CLI which you can find at https://github.com/nginx/unit/tree/master/tools/unitctl. With unitctl today, you can manually run something like `unitctl --wait-timeout-seconds=3 --wait-max-tries=4 import /opt/unit/config` to achieve the same thing, but expect further refinements as we get closer to formally releasing it.

casperb · 2 years ago
That sounds much better, thanks for the effort.
jonatron · 2 years ago
https://unit.nginx.org/howto/docker/#apps-in-a-containerized...

> We’ve mapped the source config/ to /docker-entrypoint.d/ in the container; the official image uploads any .json files found there into Unit’s config section if the state is empty.

casperb · 2 years ago
I saw that, but I like to build my own container, so I followed roughly the same steps as they do. It still feels complicated.
ajayvk · 2 years ago
I am building https://github.com/claceio/clace. It allows you to install multiple apps. Instead of messing with routing rules, each app gets a dedicated path (can be a domain). That way you cannot break one app while working on another.

Clace manages the containers (using either Docker or Podman), with a blue-green (staged) deployment model. Within the container, you can use any language/framework.

gawa · 2 years ago
The docs mention:

> The control API is the single source of truth about Unit’s configuration. There are no configuration files that can or should be manipulated; this is a deliberate design choice

(https://unit.nginx.org/controlapi/#no-config-files)

So yeah, the way to go is to run something like `curl -X PUT --data-binary @/config.json --unix-socket /var/run/control.unit.sock http://localhost/config/` right after you start your nginx-unit.

The way to manage that separate config step depends on how you run the nginx-unit process (systemd, Docker, Podman, Kubernetes...). Here's an example I found where the command is put in the entrypoint script of the container (see toward the end): https://blog.castopod.org/containerize-your-php-applications...

casperb · 2 years ago
I did that, but sometimes it takes a short moment before Unit has started, so you need a loop to check that Unit is responding before you can send the config. In total it was around 20 lines just to load the config. It feels like I'm doing something wrong. Or using the wrong tool.
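
For what it's worth, the loop ends up looking something like this (socket path and retry counts are whatever you picked):

  #!/bin/sh
  SOCK=/var/run/control.unit.sock
  # wait until Unit's control socket answers, then push the config once
  for i in $(seq 1 30); do
    curl -sf -o /dev/null --unix-socket "$SOCK" http://localhost/ && break
    sleep 1
  done
  curl -X PUT --data-binary @/config.json \
    --unix-socket "$SOCK" http://localhost/config/
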
egberts1 · 2 years ago
Ouch

From NGINX Unit webite: "Also on Linux-based systems, wildcard listeners can’t overlap with other listeners on the same port due to rules imposed by the kernel. For example, :8080 conflicts with 127.0.0.1:8080; in particular, this means :8080 can’t be immediately replaced by 127.0.0.1:8080 (or vice versa) without deleting it first."

Systemd (PID 1) also needs to stop opening network sockets itself. In short, systemd now needs a systemd-socketd to arbitrate socket allocations.

I am never a fan of PROM, firmware, or PID 1 opening network sockets; it is very bad security practice.

ethagnawl · 2 years ago
I've been dabbling with Unit when I've had some downtime over the last few days. It's definitely compelling and I quite like that it could potentially replace language-centric (I know there are ways to bend them to your will ...) web servers like gunicorn, unicorn, Puma, etc. It's also compelling that you can easily slot disparate applications, static assets, etc. alongside each other in a simple and straightforward way within a single container.

As others have said and the team has owned up to, the current config strategy is not ideal but the Docker COPY strategy has been working well enough for my experiments.

The other somewhat annoying part of the experience for me has been logging. I would want access logs to be enabled by default and it'd be great to (somehow) more easily surface errors to stderr/out when using the unit Docker images. I know you can tap into `docker logs ...` but, IMO, it'd be ideal if you didn't have to. It's possible there's a way to do this at the config level and I just haven't come across it yet.
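
(Something like this might be the config-level knob for the access-log part, though I haven't verified it against the Docker images:)

  # point Unit's access log at the container's stdout (path is an assumption)
  curl -X PUT -d '"/dev/stdout"' \
    --unix-socket /var/run/control.unit.sock \
    http://localhost/config/access_log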

Also, and I know this is a bit orthogonal, but it'd be great if the wasm image exposed cargo, wasmtime, et al so you could use a single image to build, run and debug* your application. *This was a pain point for me and I got hung up on a file permissions issue for a few hours.

On the whole, though, I think it's pretty compelling and I was able to stand up Flask, Django and Rust WASM applications in short order. I'm planning to add some documentation and publish the code samples as time permits.

random_savv · 2 years ago
How does this compare to OpenResty? Could it somehow help with OIDC support (e.g. by integrating a relevant nodejs lib)?