My open source project has some daily users; not thousands, but plenty to attract malicious content. I think a lot of people are sending it to themselves, though (e.g. onto a firewalled-off malware analysis VM, where a public website is the easiest way to do the transfer), but even then the content sits on the site for a few hours. After more than 10 years of hosting this, someone seems to have fed a page into a virus scanner, and now I'm getting blocks left and right with no end in sight. I'd be happy to give every user a unique subdomain instead of short links on the main domain, and then put the root on the PSL, if that's what solves this.
Folks around here are generally uneasy about tracking too, but remove the big-brother monitoring from Safe Browsing and this story could still be the same: the whole domain blacklisted by Google, only via manual reporting instead.
"Oh, but a human reviewer would've known `*.statichost.eu` isn't managed by us"—not in a lot of cases, not really.
But you're right, complaining about big tech surveillance didn't help with making that point at all.
I guess control-panel.statichost.eu is still possible, of course, but that already seems like a pretty long shot.
XKCD 1053 is not a valid excuse for what amounts to negligence in a production service.
Regarding the PSL - and I can't believe I'm writing this again: you cannot get on there before your service is big enough and "the request authentically merits such widespread inclusion"[1]. So it's kind of a chicken-and-egg situation.
Regarding the best practice of hosting user content on a separate domain, this has basically two implications:

1. Cookie scope of my own assets (e.g. the dashboard), which one should limit in any case and which I'm of course doing. So this is not an issue.

2. Blacklisting, which is what all of this has been about. I did pay the price here. This has nothing to do with security, though.
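To make point 1 concrete, here is a minimal sketch of what "limiting cookie scope" means in practice (the cookie name and value are placeholders, not from the actual service): leaving out the `Domain` attribute makes the cookie host-only, so it is never sent to user subdomains.

```javascript
// Build a host-only session cookie header value. With no Domain attribute,
// browsers treat the cookie as host-only: it is sent back only to the exact
// host that set it (e.g. the dashboard), never to user subdomains.
function hostOnlyCookie(name, value) {
  // Adding "Domain=statichost.eu" here would instead share the cookie
  // with every subdomain, including user-controlled sites.
  return `${name}=${value}; Path=/; Secure; HttpOnly; SameSite=Lax`;
}

console.log(hostOnlyCookie("session", "abc123"));
```

This scoping is independent of the PSL; the PSL matters for whether a user site can itself *set* a cookie on the parent domain.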
I'm sorry to be so frank, but you don't know anything about me or my security practices and your claim of negligence is extremely unfounded.
[1] https://github.com/publicsuffix/list/wiki/Guidelines#validat...
This reads like a dark twist in a horror novel - the .page TLD is controlled by Google!
And for what it's worth, it feels great to actually pay for something Google provides!
By putting UGC on the same TLD you also put your own security at risk, so they basically did you a favor…
Do you think I'm reading/writing sensitive data to/from subdomain-wide cookies?
Also, yes, the PSL is a great tool to mitigate (in practice, eliminate) the problem of cross-domain cookies between mutually untrusting parties. But getting on that list is non-trivial, and the volunteer maintainers even explicitly state that you can forget getting on there before your service is big enough.
The post author is throwing a lot of sand at Google for a process that has (a) been around for, what, over a decade now and (b) works. The fact of the matter is this hosting provider was too open, several of its users used it to put up content intended to attack people, and as far as Google (or anyone else on the web) is concerned, the domain is where the buck stops for that kind of behavior. This is one of the reasons why you host user-generated content off your main domain, and several providers have gotten the memo; it is unfortunate statichost.eu had not yet.
I'm sorry this domain admin had to learn an industry lesson the hard way, but at least they won't forget it.
What I'm trying to say in the post specifically about Google is that I personally think that they have too much power. They can and will shut down a whole domain for four billion users. That is too much power no matter the intentions, in my opinion. I can agree that the intentions are good and that the net effect is positive on the whole, though.
On the "different aspects" side of things, I'm not sure I agree with the _works_ claim you make. I guess it depends on your definition of works, but having a blacklist as your tool to fight bad guys is not something that works very well, in my opinion. Yes, my own assets specifically would not have been impacted had I used a separate domain earlier. But the point still stands.
The fact that it took so long to move user content off the main domain is of course on me. I'm taking some heat here for saying this is more important than one (including me) might think. But nonetheless, let it be a lesson for those of you out there who think that moving that forum / upload functionality / wiki / CMS to its own domain (not subdomain) can be done tomorrow instead of today.
TL;DR takeaway for HN techies: when executing resource-intensive workloads on Node.js, pay attention to its max heap size. It can be increased with the `--max-old-space-size` option, e.g. via the env var `NODE_OPTIONS="--max-old-space-size=16384"`.