evgpbfhnr · 3 years ago
FWIW gixy (nginx configuration checker) catches this: https://github.com/yandex/gixy/blob/master/docs/en/plugins/a...

(and nixos automatically runs gixy on a configuration generated through it, so the system refuses to build <3)
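For context, the misconfiguration that gixy's alias_traversal check flags looks roughly like this (paths here are illustrative):

```nginx
# Vulnerable: "location /imgs" (no trailing slash) also matches
# "/imgs../flag.txt", which maps to /var/www/images/../flag.txt
location /imgs {
    alias /var/www/images/;
}

# Safer: with a trailing slash, only "/imgs/..." requests can match
location /imgs/ {
    alias /var/www/images/;
}
```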

BitPirate · 3 years ago
If a webserver requires additional tools for the user to avoid all these pitfalls, maybe, just maybe, it should re-evaluate its defaults.
jgalt212 · 3 years ago
Yeah, the config checker should be built-in, and if it does not pass, then one must use --force or similar to start the server.
ndocjdn · 3 years ago
But then how will nginx continue to pretend that it is still 1995?

nginx was once amazing, but it’s decidedly bad now when compared to modern webservers.

tsak · 3 years ago
Thank you. I didn't know about gixy and ran it on my home server; it found a vulnerability ($uri in a 301 redirect).
wredue · 3 years ago
I just gave nix a go and so far it seems great.

But do you know if there's a nicer options finder? The one I found, where you just search all several thousand options, kinda sucks. I want to just see my package (say, ssh) and just the ssh options, but the results get littered with irrelevancy.

evgpbfhnr · 3 years ago
When I roughly know what I'm doing I use search.nixos.org; if you give it the full services.foo prefix it's usually relevant enough, e.g. for ssh you'd want "services.openssh", which you can find skimming through the results of just searching 'ssh' first:

https://search.nixos.org/options?channel=unstable&from=0&siz...

For anything I'm not 100% sure will be obvious, I search through a local clone of the nixpkgs repo directly, but I'll be honest and say I just never took the time to search for a better tool.

smoldesu · 3 years ago
> if there's a nicer options finder?

https://mynixos.com/

> I want to just see my package (say, ssh) and just the ssh options

https://mynixos.com/nixpkgs/options/programs.ssh

licebmi__at__ · 3 years ago
I would suggest using man and searching it like any other piece of documentation. Specifically, you are looking for `man configuration.nix`.
JasonSage · 3 years ago
My main usage of Nix is on non-NixOS machines with Home Manager. It has a similar problem, but searching the options across the packages it provides configuration for is a smaller issue.

Not sure if this helps you at all or not, it really depends on your usage of Nix, but for managing user configuration I do recommend Home Manager.

bembo · 3 years ago
I found this a few weeks ago: https://github.com/mlvzk/manix
GlitchMr · 3 years ago
NixOS doesn't run Gixy anymore, see https://github.com/NixOS/nixpkgs/pull/209075.
SuperSandro2000 · 3 years ago
NixOS core maintainer here. That's about nginx' own test. Gixy is still run when writing any nginx config file with the writer helper function https://github.com/NixOS/nixpkgs/blob/b6cc06826812247fe54655...
542458 · 3 years ago
At risk of asking a dumb question, is there any good reason that you’d want nginx to allow traversing into “..” from a URL path? It just seems like problems waiting to happen.

Edit: Actually, I’m a bit lost as to what’s happening in the original vuln. http://localhost/foo../secretfile.txt gets interpreted as /var/www/foo/../secretfile.txt or whatever… but why wouldn’t a server without the vulnerability interpret http://localhost/foo/../secretfile.txt the same way? Why does “..” in paths only work sometimes?

lyu07282 · 3 years ago
That has been a known issue in nginx for a very long time, and it's a common attack vector at CTFs:

https://book.hacktricks.xyz/network-services-pentesting/pent...

magicalhippo · 3 years ago
There is an LFI (local file inclusion) vulnerability because:

    /imgs../flag.txt
Transforms to:

    /path/images/../flag.txt
I've only implemented a handful of HTTP servers for fun, but I've always resolved relative paths and constrained them. So I'd turn "/path/images/../flag.txt" into "/path/flag.txt", which would not start with the root "/path/images/" and would hence be denied without further checks.

Am I wrong, or, why doesn't nginx do this?
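A minimal sketch of that resolve-and-constrain approach (the web root and file names are made up):

```python
import posixpath

WEB_ROOT = "/path/images"

def resolve(url_path):
    """Map a request path under WEB_ROOT, collapsing any ".." segments,
    and refuse any result that escapes the root."""
    full = posixpath.normpath(WEB_ROOT + "/" + url_path.lstrip("/"))
    if full != WEB_ROOT and not full.startswith(WEB_ROOT + "/"):
        return None  # resolved path escaped the root: deny
    return full

print(resolve("cat.png"))      # /path/images/cat.png
print(resolve("../flag.txt"))  # None (denied)
```

Because the check runs on the fully resolved path, it doesn't matter how the ".." was smuggled in; anything outside the root is rejected.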

pravus · 3 years ago
The problem is that a URL isn't actually a path. It's an abstract address to a resource which can be a directory or file (or an executable or stream or ...).

In this case part of the URL is being interpreted by nginx as a directory (http://localhost/foo) due to how that URL is mapped in the configuration to the local filesystem. Apparently it references a directory, so when nginx constructs the full path to the requested resource, it ends up with "${mapped_path}/../secretfile.txt" which would be valid on the local filesystem even if it doesn't make sense in the URL. Notice how the location of the slashes doesn't matter because URLs don't actually have path elements (even if we pretend they do), they are just strings.

This is a very common problem that I have noticed with web servers in general since the web took off. Mapping URLs directly to file paths was popular because it started with simple file servers with indexes. That rapidly turned into a mixed environment where URLs became application identifiers instead of paths since apps can be targeted by part of the path and the rest is considered parameters or arguments.

And no, it generally doesn't make sense to honor '.' or '..' in URLs for filesystem objects, and my apps sanitize the requested path to ensure a correct mapping. It's also good to be aware that browsers do treat URLs as path-like when building relative links, so you have to be careful with how and when you use trailing '/'s, because they can target different resources which have different semantics on the server side.

SahAssar · 3 years ago
Not in any "normal" use-case, no. It'd make sense to make this behavior opt-in, like having an `allow_parent_traversal on;` flag in the location.
aidenn0 · 3 years ago
Just guessing, but NginX probably either checks for "/foo/bar/.." and disallows it, or normalizes it to "/foo/" but "/foo/bar.." is a perfectly valid file name, so it doesn't get caught by the net checking for this.
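To illustrate the guess (a hypothetical check, not nginx's actual code): a filter that only blocks ".." as a complete path segment lets "bar.." straight through:

```python
def has_dotdot_segment(path):
    # Block only ".." appearing as a whole path segment.
    return ".." in path.split("/")

print(has_dotdot_segment("/foo/bar/../x"))  # True: caught
print(has_dotdot_segment("/foo/bar../x"))   # False: "bar.." is a valid
                                            # file name, so it slips through
```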
dumpsterdiver · 3 years ago
> Why does “..” in paths only work sometimes?

That fully depends upon the file permissions. In this case, let's assume that a user has permissions to read files all the way from the web index directory (../index.html) back to the root directory (/). At that point, since they have permission to traverse down to the root directory, they now have permission to view any world viewable file that can be traversed to from the root directory, for instance /etc/passwd.

In other words, imagine a fork with three prongs, and your web server resides on the far right prong. Imagine that the part of the fork where the prongs meet (the "palm" of the fork) is the file system. If your web server residing on the far right prong of that fork allows file permission to files and directories that lead all the way to the palm of the fork, at that point you could continue accessing files on other prongs once you have reached the palm.

komali2 · 3 years ago
Isn't setting correct permissions for www-data like, the first note in a bunch of "secure your web server" tutorials? I thought if read is only set for the directory with actual public files, and not for the parent directory, there should be no traversal possible like this?
amluto · 3 years ago
How is this not seen as a vulnerability in nginx? This behavior is utterly absurd, seems to have no beneficial purpose, and straightforwardly exploitable.
phendrenad2 · 3 years ago
It's done for speed. Straightforward text replacement is so much faster than checking to see if a path is properly terminated by a slash. And remember that Nginx became popular due to benchmarks that showed that it was more "web scale" than Apache2.
amluto · 3 years ago
I find it hard to believe that searching for “..” would even show up in a benchmark.

In any case, it seems that nginx does try to search for .. but has a bug in the corner case where the “location” doesn’t end with a slash. I assume there’s some kind of URL normalization pass that happens before the routing pass, and if the route matches part of a path component, nothing catches the ..

If I'm right, this is just an IMO rather embarrassing bug and should be fixed.

okeuro49 · 3 years ago
Your comment makes nginx sound like some fly-by-night server that only achieved its performance by making lots of tiny-yet-dangerous "optimisations" like this one.

More likely it is an omission, which could be rectified with a warning or failure running nginx -t (verify configuration).

The actual performance comes from an architectural choice between event vs process based servers, as detailed in the C10k problem article. [1]

[1] http://www.kegel.com/c10k.html

sofixa · 3 years ago
> And remember that Nginx became popular due to benchmarks that showed that it was more "web scale" than Apache2.

More like because it was much faster out of the box, and came with many batteries included, while Apache2 required mods to be installed separately.

hedora · 3 years ago
They could simply normalize the paths when parsing the configuration file. The overhead wouldn’t show up in benchmark because it only happens once at startup (and maybe when the conf file changes)

technion · 3 years ago
OK, hear me out: a Linux-capability-like option that removes the ".." option from the kernel's file name parser.

Web apps have seen various bypasses involving somehow smuggling two dots somewhere since we were on dial-up modems. It's time to look for a way to close this once and for all, as the Linux kernel has done with several other classes of user-land bugs.

loeg · 3 years ago
https://man7.org/linux/man-pages/man2/openat2.2.html RESOLVE_BENEATH

(FreeBSD has this in ordinary openat(2) as O_RESOLVE_BENEATH.)
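A sketch of RESOLVE_BENEATH in action, calling openat2 directly via ctypes since glibc and Python expose no wrapper (assumes a Linux 5.6+ kernel; 437 is the asm-generic/x86-64 syscall number, and /tmp is just an example sandbox directory):

```python
import ctypes
import os

SYS_openat2 = 437        # asm-generic syscall number (x86-64, arm64)
RESOLVE_BENEATH = 0x08   # from <linux/openat2.h>

class OpenHow(ctypes.Structure):
    # struct open_how: three u64 fields
    _fields_ = [("flags", ctypes.c_uint64),
                ("mode", ctypes.c_uint64),
                ("resolve", ctypes.c_uint64)]

libc = ctypes.CDLL(None, use_errno=True)
libc.syscall.restype = ctypes.c_long

dirfd = os.open("/tmp", os.O_PATH | os.O_DIRECTORY)
how = OpenHow(flags=os.O_RDONLY, mode=0, resolve=RESOLVE_BENEATH)

# "../etc/passwd" tries to climb out of /tmp; RESOLVE_BENEATH makes the
# kernel reject the lookup (EXDEV) instead of following it.
fd = libc.syscall(SYS_openat2, dirfd, b"../etc/passwd",
                  ctypes.byref(how), ctypes.sizeof(how))
print(fd)  # -1: traversal refused (also -1/ENOSYS on pre-5.6 kernels)
```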

m00x · 3 years ago
That would break so many things that it would be insane to do.

You could just run nginx as a separate user with very limited rights, or just run it in Docker. This, plus updating regularly, usually fixes 90% of security issues.

archi42 · 3 years ago
Most (I hope all) distributions already run nginx as a separate user. It's best practice.

But that won't help if you alias to "/foo/bar/www" and the application has a SQLite database at "/foo/bar/db.db", which the nginx user has to have access to. Same if you run it in a container (or lock down permissions using systemd).

martinflack · 3 years ago
But the issue is -- would it break the things a web server is doing? It doesn't have to be a universal solution.
ilyt · 3 years ago

    /some/../path 
should pretty much 100% of the time be disallowed; there is no sensible use case that is not "someone wrote ugly code"

../some/path makes sense sometimes at least

... but I'd imagine it wouldn't be as useful as you think it is, because many apps resolve .. before passing it to the OS

vbezhenar · 3 years ago
I don't agree. Those kinds of paths are often the result of concatenating several configuration options, like APP_DIR=/some/app/bin; LOG_DIR="$APP_DIR/../logs". And APP_DIR comes to you from distro scripts, so you're not going to fork those scripts and maintain your own fork across updates; you just build upon them.
junon · 3 years ago
That makes no difference. Code often normalizes paths before they ever touch the filesystem API.
ikekkdcjkfke · 3 years ago
It's something else in the kernel; there we have the permission system, which we rely on.

If you are serving files to the web from a folder, the web framework should handle not traversing out of the public root folder it was tasked to serve. If you are rolling your own, well, now you have to consider all kinds of stuff, including this.

dxuh · 3 years ago
I don't think this would have prevented it. Removing ".." segments from paths is part of URL normalization (RFC 3986's "remove dot segments" algorithm, which the HTTP specification builds on). Nginx very likely does this too.
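For illustration (using Python's posixpath as a stand-in for the URL normalizer): dot-segment removal collapses a real ".." segment, but leaves "imgs.." untouched, since in this bug the ".." only materializes after nginx maps the location prefix onto the alias path:

```python
import posixpath

# A genuine dot-segment is collapsed before routing...
print(posixpath.normpath("/imgs/../flag.txt"))  # /flag.txt

# ...but "imgs.." is an ordinary segment, so nothing is removed here.
print(posixpath.normpath("/imgs../flag.txt"))   # /imgs../flag.txt
```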
HenriTEL · 3 years ago
> The Google VRP Team recognized our work by awarding us a $500 reward for uncovering this vulnerability. They believed the impact on the application wasn't severe enough to warrant a larger reward.

Exposing email and private keys of GCP accounts only gives you $500 reward? WTF. Google being Google I guess.

Decabytes · 3 years ago
Glad that the leaks are still encrypted. Even companies that specialize in this sort of stuff are not immune to leaks, so this is honestly the best case scenario.
gostsamo · 3 years ago
The title is significantly editorialized. The post title is:

Hunting for Nginx Alias Traversals in the wild

and the hn submission highlights the bitwarden vulnerability while there is a google one discussed as well.

dang · 3 years ago
Ok, we've reverted the title. Submitted title was "Leaking Bitwarden's Vault with a Nginx vulnerability".
kibwen · 3 years ago
If all you need is a simple way to serve static files that minimizes resource consumption and is reliably secure, what is the state of the art these days? In the past I would probably reach for Nginx, but I wonder if a more focused/less configurable tool would be preferable from a security standpoint.
cyrnel · 3 years ago
I use https://static-web-server.net/

Cross-platform, written in Rust, straightforward configuration, secure defaults, also has a hardened container image and a hardened NixOS module.

I wouldn't recommend Caddy. Their official docker image runs as root by default [1], and they don't provide a properly sandboxed systemd unit file [2].

[1]: https://github.com/caddyserver/caddy-docker/issues/104

[2]: https://github.com/caddyserver/dist/blob/master/init/caddy.s...

EDITED: phrasing

trillic · 3 years ago
I use this...

    [Unit]
    Description=Caddy webserver
    Documentation=https://caddyserver.com/docs/
    After=network-online.target
    Wants=network-online.target systemd-networkd-wait-online.service
    StartLimitIntervalSec=14400
    StartLimitBurst=10

    [Service]
    User=caddy
    Group=caddy

    # environment: store secrets here such as API tokens
    EnvironmentFile=-/var/lib/caddy/envfile
    # data directory: uses $XDG_DATA_HOME/caddy
    # TLS certificates and other assets are stored here
    Environment=XDG_DATA_HOME=/var/lib
    # config directory: uses $XDG_CONFIG_HOME/caddy
    Environment=XDG_CONFIG_HOME=/etc

    ExecStart=/usr/bin/caddy run --config /etc/caddy/Caddyfile
    ExecReload=/usr/bin/caddy reload --config /etc/caddy/Caddyfile

    # Do not allow the process to be restarted in a tight loop.
    Restart=on-abnormal

    # Use graceful shutdown with a reasonable timeout
    KillMode=mixed
    KillSignal=SIGQUIT
    TimeoutStopSec=5s

    # Sufficient resource limits
    LimitNOFILE=1048576
    LimitNPROC=512

    # Grants binding to port 443...
    AmbientCapabilities=CAP_NET_BIND_SERVICE
    # ...and limits potentially inherited capabilities to this
    CapabilityBoundingSet=CAP_NET_BIND_SERVICE

    # Hardening options
    LockPersonality=true
    NoNewPrivileges=true
    PrivateTmp=true
    PrivateDevices=true

    ProtectControlGroups=true
    ProtectHome=true
    ProtectKernelTunables=true
    ProtectKernelModules=true
    ProtectSystem=strict

    ReadWritePaths=/var/lib/caddy
    ReadWritePaths=/etc/caddy/autosave.json
    ReadOnlyPaths=/etc/caddy
    ReadOnlyPaths=/var/lib/caddy/envfile
    [Install]
    WantedBy=multi-user.target

ptx · 3 years ago
What's wrong with the unit file?
mholt · 3 years ago
If you want a sandboxed unit file, why not just sandbox it yourself?
francislavoie · 3 years ago
Shameless plug: Caddy does a great job here. Automatic HTTPS, written in Go so memory safety bugs are not a concern, has a solid file_server module.
princevegeta89 · 3 years ago
+1 to Caddy. Just tried it recently and I was very happy to forget all the nginx jargon the next moment.
username135 · 3 years ago
Isn't everything forced to HTTPS now?
alexalx666 · 3 years ago
Im using caddy, it's great!
pepa65 · 3 years ago
I have used Caddy for years, automatic SSL certificates, does file serving, does reverse proxy, very easy and clear to configure. Single-binary (Go) so easy to "install", single configfile.
adventured · 3 years ago
Caddy is pretty simple to configure and serve static files from.
housemusicfan · 3 years ago
pepa65 · 3 years ago
Last release 2016??
calvinmorrison · 3 years ago
werc, shttpd, etc.

Treat any web request like you would a real user on a Linux system you'd need to give access to download files via scp. Chroot, strict permissions, etc. Can't escape what you can't escape. A ../ should return the same as it would in the shell: permission denied.

dylan604 · 3 years ago
how is a static site served from S3 considered in these parts of the interweb? i've never done this, but see it as an option, yet i never really hear others using it either.
sofixa · 3 years ago
In my view, it's perfect (okay, maybe slightly less than perfect; dedicated platforms like Netlify, Cloudflare Pages, Firebase Hosting, etc. take it to the next level with their added services and tools, as well as their generous free tiers). It's pay as you go, scales from zero to infinity, and has zero attack surface or maintenance.

I've run a couple of websites (WordPress or Hugo based, including my personal blog) like that and it's great.

crote · 3 years ago
You probably want some kind of CDN to avoid an HN frontpage link making you go bankrupt, but it's a pretty decent solution.

I personally prefer something like Github Pages, though - it doesn't get much more hands-off than that!

chrisweekly · 3 years ago
Good Q. Using S3 as origin behind Cloudfront seems like a pretty standard AWS CDN setup for static assets... but S3 isn't a traditional web server.
BOOSTERHIDROGEN · 3 years ago
Could you give a commentary on Traefik also? In terms of security and reliability, thanks.