I wish there was a webdav server that wasn't a huge PHP thing and had decent authentication/authorization.
Almost everything has SFTP built in anyway now, though; it's only a matter of time before OSes other than Linux-based ones integrate it into their shells, and then WebDAV won't matter so much.
Seafile has been working for me as a personal Dropbox replacement, with s3ql for mass storage. It's very lightweight compared to Nextcloud/ownCloud (a primary criterion, since I try to cheap out on servers), and supports WebDAV, role-based access and a bunch of SSO options. The biggest drawback I can think of is that it doesn't store files in plain form, so you can't trivially tie in SFTP or serve files from the storage directly.
Do you just want WebDAV and nothing else? There’s plenty of Docker images for that and most of them are just Apache with the relevant plugin and config.
It’s more evidence that you should assume everything is vulnerable and layer protection.
For a home network, simple multi-port knocking should be enough (combined with --ctstate NEW, even better). If port knocking or SPA is too cumbersome, then at least consider limiting access by GeoIP, blocking Tor exit nodes, etc. (ipset is pretty amazing).
This can be applied to any service on your network btw, including WireGuard. I like knowing that a portscan of my network shows nothing open, and that I don’t end up on a list that gets used in the next ‘spray and pray’ attack.
Disclaimer: I’m not advocating this for serious use, due to replay attacks and IP spoofing via a VPS. This is for home network protection (a boring Class C non-target).
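For the record, here's a rough sketch of the multi-port knocking idea using the iptables `recent` match. The knock ports, timeouts, and the protected port 22 are made-up values for illustration; adapt before use:

```shell
# Let established flows through; only NEW connections face the knock.
iptables -A INPUT -m conntrack --ctstate ESTABLISHED,RELATED -j ACCEPT

# Knock step 1: hitting tcp/7000 tags the source address (still DROPped).
iptables -A INPUT -p tcp --dport 7000 -m conntrack --ctstate NEW \
  -m recent --name KNOCK1 --set -j DROP

# Knock step 2: tcp/8000 within 10s promotes the tagged address.
iptables -A INPUT -p tcp --dport 8000 -m conntrack --ctstate NEW \
  -m recent --name KNOCK1 --rcheck --seconds 10 \
  -m recent --name KNOCK2 --set -j DROP

# Only addresses that completed the sequence recently may open SSH.
iptables -A INPUT -p tcp --dport 22 -m conntrack --ctstate NEW \
  -m recent --name KNOCK2 --rcheck --seconds 30 -j ACCEPT
iptables -A INPUT -p tcp --dport 22 -j DROP
```

To a scanner that hasn't performed the knock, port 22 looks closed, which is the "nothing open" property mentioned above.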
No, it was definitely not true in the past and is not true now. First, technically there is not much difference between a given app self-hosted by you and one hosted by a company charging you for it, except that in theory they should worry about these things instead of you. In practice, your experience will vary: companies can be just as vulnerable as you, and for various reasons their reaction time may be longer.
Second, bugs are found every day, and your best bet is to use automatic security updates provided by your distro. Yes, if you host anything, you need to be a bit of a security guy and a small amount of paranoia won't hurt. But to say you must not self-host for security reasons is a gross oversimplification.
> On October 24, PHP 7.3.11 (current stable) and PHP 7.2.24 (old stable) were released to address this vulnerability along with other scheduled bug fixes. Those using nginx with PHP-FPM are encouraged to upgrade to a patched version as soon as possible.
> If patching is not feasible, the suggested workaround is to include checks to verify whether or not a file exists. This is achieved either by including the try_files directive or using an if statement, such as if (-f $uri).
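Applied to a typical setup, that workaround might look like the following sketch (the socket path and the location regex are placeholders; match them to your own config):

```nginx
location ~ [^/]\.php(/|$) {
    fastcgi_split_path_info ^(.+?\.php)(/.*)$;

    # The workaround: return 404 unless the target script really exists
    # on disk, so mangled URLs never reach PHP-FPM.
    try_files $fastcgi_script_name =404;

    include fastcgi_params;
    fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
    fastcgi_param PATH_INFO $fastcgi_path_info;
    fastcgi_pass unix:/run/php/php7.3-fpm.sock;  # placeholder socket path
}
```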
Hmm, so looking at the exploit and the patch... do I read it right? There is a buffer underflow in php-fpm if the environment variables SCRIPT_FILENAME and PATH_INFO are in a state that violates an assumption. And a widespread nginx + php-fpm configuration allows the URL to be sufficiently mangled that nginx sets these parameters in exactly that violating manner.
However, that means anything using php-fpm in this version remains vulnerable, and it's just unknown whether, or how, Apache + php-fpm or other reverse proxies in front of php-fpm are exploitable - right?
So while I don't need to panic right now, I'll certainly have to take a look at our setups running php-fpm on Monday.
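The regex failure at the heart of this can be sketched outside of nginx. This is plain Python rather than the real nginx/PHP code path, just to illustrate how an encoded newline makes the widely copied `fastcgi_split_path_info` regex fail, leaving PATH_INFO empty, which is the state php-fpm mishandles:

```python
import re
from urllib.parse import unquote

# The split regex that many nginx configs pass to fastcgi_split_path_info.
SPLIT_PATH_INFO = re.compile(r'^(.+?\.php)(/.*)$')

def split(uri: str):
    """Mimic fastcgi_split_path_info: return (script_name, path_info)."""
    m = SPLIT_PATH_INFO.match(uri)
    if m:
        return m.group(1), m.group(2)
    # No match: nginx leaves $fastcgi_path_info empty.
    return uri, ''

# A normal request splits as expected.
print(split(unquote('/index.php/some/path')))  # ('/index.php', '/some/path')

# '.' does not match a newline, so a %0A in the path makes the regex fail
# and path_info comes back empty -- the precondition for the bug.
print(split(unquote('/index.php/%0Adetectable')))
```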
If I read the commit fixing the vulnerability correctly, you need to have the env PATH_INFO set; SCRIPT_FILENAME is unaffected. I would expect other reverse proxies to strip newline characters and the like from the URL, but YMMV. In the bug tracker, someone suggested adding a URL rewrite that strips away \n and anything that follows. That might be a viable mitigation for versions that are no longer patched.
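I don't have the exact rewrite from the tracker handy, but a simpler variant of the same idea is to reject any request whose URI contains a newline before it reaches PHP-FPM. A sketch (in nginx, $uri holds the decoded path, so %0A shows up as a literal newline):

```nginx
location ~ \.php$ {
    # %0A/%0D decode into $uri; refuse such requests outright.
    if ($uri ~ "[\n\r]") {
        return 400;
    }
    # ... usual fastcgi_pass configuration follows ...
}
```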
In this case we're talking about 5 VMs running php-fpm with no nginx in sight, so these VMs aren't immediately affected by said exploit. Also, these VMs only consume some public, unauthenticated APIs of the company and render the content into some pretty HTML. These boxes have no persistence, no access to PII and no access to anything you can't get with curl right now.
Worst case, they can try sending some spam mails or joining DDoS attacks, in which case the hoster would null-route/force-stop them as soon as that's detected. And then I'd have to rebuild them, with ~30 minutes of waiting for Terraform.
So yes, I'm going to act accordingly: by not bothering on a Sunday, because the systems are properly isolated and there are procedures in place.
Ubuntu has a try_files directive in /etc/nginx/snippets/fastcgi-php.conf that is included by default. It was put there years ago to guard against another problem (also mentioned by OP), but it seems that the try_files directive will block this one, too.
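From memory, the relevant part of that snippet looks roughly like this; check /etc/nginx/snippets/fastcgi-php.conf on your own box, as details may differ between releases:

```nginx
# regex to split $uri into $fastcgi_script_name and $fastcgi_path_info
fastcgi_split_path_info ^(.+?\.php)(/.*)$;

# Check that the PHP script actually exists before passing it on.
try_files $fastcgi_script_name =404;

# Work around try_files resetting $fastcgi_path_info.
set $path_info $fastcgi_path_info;
fastcgi_param PATH_INFO $path_info;

fastcgi_index index.php;
include fastcgi.conf;
```

The `try_files` line is the guard in question: a URL that doesn't map to a real .php file is 404'd before PHP-FPM ever sees it.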
Unfortunately, too many people still copy & paste three-liners from random blogs and call it a day, often overwriting the safe defaults provided by their distro, er, I mean, Debian/Ubuntu. (edit: The RPM world is a whole different beast. When you install typical LEMP components on CentOS 7, both MySQL and Memcached listen on all interfaces by default. Seriously?!)
CentOS isn't marketed as a desktop distro. The listening default is helpful, and when I switched over to Ubuntu the default of not listening confused me. Not sure I see the benefit... it's like installing Windows but having the internet disabled by default until you configure it manually, or installing another browser that you must configure before it works.
Listening on localhost, or a socket, is a reasonable default. Listening to nothing is annoying, and listening to everything is a terrible idea.
If you're spreading one service across multiple servers, you can spare the few seconds to open up IPs/ports. The default should keep things moderately secure on a single host.
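Concretely, keeping a database on the loopback interface is a one-line setting. The file paths below are typical examples, not guaranteed for every distro:

```ini
; /etc/mysql/my.cnf (Debian/Ubuntu) or /etc/my.cnf (CentOS)
[mysqld]
bind-address = 127.0.0.1

; Memcached is usually bound via its startup options instead, e.g.
; OPTIONS="-l 127.0.0.1" in /etc/sysconfig/memcached on CentOS.
```

Opening it up later for a multi-host setup is then an explicit decision rather than an accident.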
Well, for what it's worth, I think the best practice was always to test for the existence of the PHP script, either with `try_files` or with `if`; so if you do that, then you aren't vulnerable, according to the exploit.
[1] https://nextcloud.com/blog/urgent-security-issue-in-nginx-ph...
My old configuration used the `$fastcgi_path_info` variable, and the new one uses `$path_info`, so I got the following error when starting nginx:
Might be worth checking out the sample from the Nextcloud Admin Manual[1].
[1] https://docs.nextcloud.com/server/17/admin_manual/installati...
I feel like throwing everything behind a VPN and pretending it is secure is a crutch.
Several famous break-ins over the last ten years arguably happened on the inside of exactly that kind of wall.
Better to isolate services from each other, limiting cross-service jumping, than to build security around a single point of failure.
Assume your systems are compromised and act accordingly.
And it's the config every blog I've ever seen about nginx + php-fpm said to use. So I think a lot of sites are vulnerable right now.
E.g., if you follow the "PHP FastCGI Example" from nginx.com, then nginx would protect you from this vulnerability in PHP-FPM:
* http://web.archive.org/web/20150928021324/https://www.nginx....
Here's the current version of the page, which seems to have the same info as the archived one above:
* https://www.nginx.com/resources/wiki/start/topics/examples/p...
(I think it used to be at another URL prior to the involvement of the marketing department in 2015; not sure if it's worth finding at this point, because the bug is not even in nginx in the first place.)
[1] https://github.com/mail-in-a-box/mailinabox/issues/1663#issu...