thesorrow · 6 years ago
FYI : If you have a NextCloud or Owncloud installation. The recommended nginx configuration is vulnerable [1]

[1] https://nextcloud.com/blog/urgent-security-issue-in-nginx-ph...

swiley · 6 years ago
I wish there were a WebDAV server that wasn't a huge PHP thing and had decent authentication/authorization.

Almost everything has SFTP built in now anyway; it's only a matter of time before OSes other than Linux-based ones integrate it into their shells, and then WebDAV won't matter so much.

Lorkki · 6 years ago
Seafile has been working for me as a personal Dropbox replacement, with s3ql for mass storage. It's very light in relation to Nextcloud/Owncloud (a primary criterion for me trying to cheap out on servers), supports WebDAV, role-based access and a bunch of SSO options. The biggest possible drawback I can think of is that it doesn't store files in the plain, so you can't trivially tie in SFTP or serve files from the storage directly.
xienze · 6 years ago
Do you just want WebDAV and nothing else? There’s plenty of Docker images for that and most of them are just Apache with the relevant plugin and config.
jacquesm · 6 years ago
Or something that includes NextCloud or Owncloud even if you do not use them, such as Mailinabox.
jeremija · 6 years ago
Thanks for the link! The example in the link does not contain the

   set $path_info $fastcgi_path_info;
line after the `fastcgi_split_path_info` directive.

My old configuration used the `$fastcgi_path_info` variable, while the new one uses `$path_info`, so I got the following error when starting nginx:

    nginx: [emerg] unknown "path_info" variable
Might be worth checking out the sample from the Nextcloud Admin Manual[1]

[1]: https://docs.nextcloud.com/server/17/admin_manual/installati...
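
For reference, here's a minimal sketch of the corrected pattern (the location regex and socket path are placeholders; adjust to your install):

```nginx
location ~ \.php(?:$|/) {
    # split e.g. /index.php/foo/bar into the script name and its path info
    fastcgi_split_path_info ^(.+?\.php)(/.*)$;
    # save $fastcgi_path_info now: try_files clears it as a side effect
    set $path_info $fastcgi_path_info;
    # refuse to hand non-existent scripts to PHP-FPM
    try_files $fastcgi_script_name =404;
    include fastcgi_params;
    fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
    fastcgi_param PATH_INFO $path_info;
    fastcgi_pass unix:/run/php/php-fpm.sock;
}
```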

ralala · 6 years ago
The production-fpm docker image has not yet received any updates - correct?
heavyset_go · 6 years ago
This is a case study in why you shouldn't expose your self-hosted services to the internet.
nominated1 · 6 years ago
It’s more evidence that you should assume everything is vulnerable and layer protection.

For a home network, simple multi-port knocking should be enough (combined with --ctstate NEW, even better). If port knocking or SPA is too cumbersome, then at least consider limiting access based on GeoIP, blocking Tor exit nodes, etc. (ipset is pretty amazing).

This can be applied to any service on your network, btw, including WireGuard. I like knowing that a portscan of my network shows nothing open, and that I don’t end up on a list that gets used in the next ‘spray and pray’ attack.

Disclaimer: I’m not advocating this for serious use, due to replay attacks and IP spoofing via a VPS. This is for home-network protection (a boring Class C non-target).
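
For illustration, a single-knock version of that idea with ipset and --ctstate NEW might look something like this (the port numbers and set name are made up; real setups usually knock on several ports in sequence):

```shell
# a SYN to the secret knock port registers the source IP for 30 seconds
ipset create knocked hash:ip timeout 30
iptables -A INPUT -p tcp --dport 1234 -m conntrack --ctstate NEW \
         -j SET --add-set knocked src
# only recently-knocked IPs may open *new* connections to the real service
iptables -A INPUT -p tcp --dport 22 -m conntrack --ctstate NEW \
         -m set ! --match-set knocked src -j DROP
```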

UnoriginalGuy · 6 years ago
Google has gone the opposite direction.

I feel like throwing everything behind a VPN and pretending it is secure is a crutch.

Several famous break-ins over the last ten years reportedly happened on the inside of that wall.

Better to isolate services from each other, limiting cross-service jumping, than to build security around a single point of failure.

noja · 6 years ago
No it's not.
kuzimoto · 6 years ago
I have been thinking about this a lot lately. What is the best alternative, only accessing your services through a VPN?
dvfjsdhgfv · 6 years ago
No, it was definitely not true in the past and is not true now. First, technically there isn't much difference between a given app self-hosted by you and one hosted by a company that charges you for it, except that in theory they should worry about these things instead of you. In practice, your experience will vary: companies happen to be as vulnerable as you, and for various reasons their reaction time might be longer.

Second, bugs are found every day, and your best bet is to use automatic security updates provided by your distro. Yes, if you host anything, you need to be a bit of a security guy and a small amount of paranoia won't hurt. But to say you must not self-host for security reasons is a gross oversimplification.

hnarn · 6 years ago
From the CVE:

> Solution

> On October 24, PHP 7.3.11 (current stable) and PHP 7.2.24 (old stable) were released to address this vulnerability along with other scheduled bug fixes. Those using nginx with PHP-FPM are encouraged to upgrade to a patched version as soon as possible.

> If patching is not feasible, the suggested workaround is to include checks to verify whether or not a file exists. This is achieved either by including the try_files directive or using an if statement, such as if (-f $uri).
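
Expressed as nginx config, the two workarounds from the advisory look roughly like this (a sketch only; the location regex and socket path are placeholders):

```nginx
location ~ \.php$ {
    # workaround 1: only pass requests whose target script actually exists
    try_files $uri =404;

    # workaround 2 (alternative): an explicit existence check
    # if (!-f $document_root$fastcgi_script_name) { return 404; }

    include fastcgi_params;
    fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
    fastcgi_pass unix:/run/php/php-fpm.sock;
}
```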

tetha · 6 years ago
Hmm, so looking at the exploit and the patch... do I read this right: there is a buffer underflow in php-fpm if the environment variables SCRIPT_FILENAME and PATH_INFO are in a state that violates an assumption, and a widespread nginx + php-fpm configuration lets the URL be mangled in a way that makes nginx set these parameters in exactly that violating manner?

However, that means anything using php-fpm in this version remains vulnerable, and it's just unknown if or how Apache + php-fpm, or other reverse proxies in front of php-fpm, are exploitable - right?

So while I don't need to panic right now, I'll certainly have to take a look at our setups running php-fpm on Monday.

arpa · 6 years ago
If I read the commit fixing the vulnerability correctly, you need to have the PATH_INFO env var set; SCRIPT_FILENAME is unaffected. I would expect other reverse proxies to strip newline characters and the like from the URL, but YMMV. In the bug tracker, someone suggested adding a URL rewrite that strips away \n and anything that follows. That might be a viable mitigation for versions that no longer receive patches.
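
One way to sketch that mitigation in nginx (my own guess at what the bug-tracker suggestion amounts to, not a tested rule) is to reject any request whose raw URI carries an encoded newline before it ever reaches fastcgi_split_path_info:

```nginx
# inside the server block, before the PHP location:
if ($request_uri ~* "%0a|%0d") {
    return 403;
}
```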
mantoto · 6 years ago
On Monday?

Assume your systems are compromised and act accordingly.

tetha · 6 years ago
In this case we're talking about 5 VMs running php-fpm with no nginx in sight, so these VMs aren't immediately affected by said exploit. Also, these VMs only consume some public, unauthenticated APIs of the company and render the content into some pretty HTML. These boxes have no persistence, no access to PII and no access to anything you can't get with curl right now.

Worst case, they could try sending some spam mails or launching DDoS attacks, in which case the hoster would zero-route/force-stop them as soon as that was detected. And then I'd have to rebuild them, with ~30 minutes of waiting for Terraform.

So yes, I'm going to act accordingly: by not bothering on a Sunday, because the systems are properly isolated and there are procedures in place.

root_axis · 6 years ago
Some people choose not to work on weekends. Work/life balance etc.

kijin · 6 years ago
Ubuntu has a try_files directive in /etc/nginx/snippets/fastcgi-php.conf that is included by default. It was put there years ago to guard against another problem (also mentioned by OP), but it seems that the try_files directive will block this one, too.

Unfortunately, too many people still copy & paste three-liners from random blogs and call it a day, often overwriting the safe defaults provided by their distro, er, I mean, Debian/Ubuntu. (edit: The RPM world is a whole different beast. When you install typical LEMP components on CentOS 7, both MySQL and Memcached listen on all interfaces by default. Seriously?!)
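
From memory, the snippet in question looks roughly like this (verify against your own /etc/nginx/snippets/fastcgi-php.conf; it may differ between releases):

```nginx
# regex to split $uri into $fastcgi_script_name and $fastcgi_path_info
fastcgi_split_path_info ^(.+?\.php)(/.*)$;
# check that the requested PHP file actually exists; this is the line
# that happens to block the exploit as well
try_files $fastcgi_script_name =404;
# save the path info, since try_files clears $fastcgi_path_info
set $path_info $fastcgi_path_info;
fastcgi_param PATH_INFO $path_info;
fastcgi_index index.php;
include fastcgi.conf;
```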

lioeters · 6 years ago
I confirmed that Ubuntu's default config in /etc/nginx/snippets/fastcgi-php.conf has a try_files directive that prevents this exploit.
wolco · 6 years ago
CentOS isn't marketed as a desktop distro. The listen-on-all-interfaces default is helpful there, and when I switched over to Ubuntu, the default of not listening confused me. I'm not sure I see the benefit; it's like installing Windows but with the internet disabled by default and needing manual configuration, or installing another browser and having to configure it manually.
Dylan16807 · 6 years ago
Listening on localhost, or a socket, is a reasonable default. Listening to nothing is annoying, and listening to everything is a terrible idea.

If you're spreading one service across multiple servers, you can spare the few seconds to open up IPs/ports. The default should keep things moderately secure on a single host.
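
As a concrete illustration of how cheap the localhost default is, pinning MySQL/MariaDB to loopback is a one-line config change (the file path varies by distro):

```ini
# e.g. /etc/mysql/mysql.conf.d/mysqld.cnf
[mysqld]
bind-address = 127.0.0.1
```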

calibas · 6 years ago
Should probably have specified in the title that it's a PHP-FPM bug, had me worried there.
dang · 6 years ago
Ok, we've added that to the title.
samat · 6 years ago
For those of you who don't speak Russian: the Russian words for ‘dick’ and ‘cunt’ (also meaning ‘something very bad happening’) are in the title.
fortran77 · 6 years ago
I'd say it's in Croatian. In Russian, it's "пизда"
owl57 · 6 years ago
No, that's certainly just transliterated Russian: https://github.com/neex/phuip-fpizdam/blob/d43b788a65f83ba6f... (those literals mean "Fucking: your mom").
throwawayRO1999 · 6 years ago
In Romanian as well
cnst · 6 years ago
How? Romanian is not a Slavic language, I thought both of these are Slavic-rooted words.
geoffmcc · 6 years ago
> If a webserver runs nginx + php-fpm and nginx have a configuration like

And it's the config every blog I've ever seen about nginx + php-fpm said to use. So I think a lot of sites are vulnerable right now.

cnst · 6 years ago
Well, for what it's worth, I think the best practice has always been to test for the existence of the PHP script, either with `try_files` or with `if`. If you do that, then you aren't vulnerable, according to the exploit.

E.g., if you follow the "PHP FastCGI Example" from nginx.com, then nginx would protect you from this vulnerability in PHP-FPM:

* http://web.archive.org/web/20150928021324/https://www.nginx....

Here's the current version of the page, which seems to have the same info as the archived one above:

* https://www.nginx.com/resources/wiki/start/topics/examples/p...

(I think it used to be at another URL prior to the involvement of the marketing department in 2015; not sure if it's worth finding at this point, because the bug is not even in nginx in the first place.)

jacquesm · 6 years ago
Mailinabox as well.
zubspace · 6 years ago
According to [1], Mailinabox seems not to be affected.

[1] https://github.com/mail-in-a-box/mailinabox/issues/1663#issu...