If you're going to host user content on subdomains, then you should probably have your site on the Public Suffix List https://publicsuffix.org/list/ .
That should eventually make its way into various services so they know that a tainted subdomain doesn't taint the entire site....
In the past, browsers used an algorithm which only denied setting wide-ranging cookies for top-level domains with no dots (e.g. com or org). However, this did not work for top-level domains where only third-level registrations are allowed (e.g. co.uk). In these cases, websites could set a cookie for .co.uk which would be passed onto every website registered under co.uk.
Since there was and remains no algorithmic method of finding the highest level at which a domain may be registered for a particular top-level domain (the policies differ with each registry), the only method is to create a list. This is the aim of the Public Suffix List.
(https://publicsuffix.org/learn/)
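For a concrete sense of what consumers of the list actually do with it, here is a minimal sketch using the third-party Python library tldextract (my choice for illustration, not something mentioned in the thread), which ships a snapshot of the Public Suffix List. The hostnames are just the examples from this discussion.

    # Minimal sketch of what PSL consumers compute: the public suffix and the
    # "registrable" domain (eTLD+1). Uses the third-party tldextract library,
    # which bundles a snapshot of the Public Suffix List.
    import tldextract

    for host in ["forums.bbc.co.uk",
                 "my-bucket.s3-object-lambda.eu-west-1.amazonaws.com"]:
        ext = tldextract.extract(host)
        registrable = f"{ext.domain}.{ext.suffix}"
        print(f"{host}: suffix={ext.suffix!r}, registrable domain={registrable!r}")

Cookie scoping, site isolation, and reputation systems generally key off that registrable domain, which is exactly why being on the list changes how a tainted subdomain is treated.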
So, once they realized web browsers are all inherently flawed, their solution was to maintain a static list of websites.
God I hate the web. The engineering equivalent of a car made of duct tape.
> Since there was and remains no algorithmic method of finding the highest level at which a domain may be registered for a particular top-level domain
A centralized list like this, covering not just registry-level suffixes (e.g. co.uk) but also specific sites (e.g. s3-object-lambda.eu-west-1.amazonaws.com), is kind of crazy in that the list will bloat a lot over the years, and also a security risk for any platform that needs this functionality but would prefer not to leak any details publicly.
We already have the concept of a .well-known directory that you can use when talking to a specific site. Similarly, we know subdomains can be nested, like c.b.a.x, and it's more or less certain that you can't create a subdomain b without the involvement of a, so it should be possible to walk the chain.
Example:
c --> https://b.a.x/.well-known/public-suffix
b --> https://a.x/.well-known/public-suffix
a --> https://x/.well-known/public-suffix
Maybe ship the domains with the browsers and such and leave generic sites like AWS or whatever to describe things themselves. Hell, maybe that could also have been a TXT record in DNS as well.
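To make the proposal concrete, here is a rough sketch of how a client might walk such a chain. The /.well-known/public-suffix endpoint is purely hypothetical (it comes from the comment above, not from any standard), and the status-code convention is an assumption.

    # Hypothetical sketch: ask every parent of a hostname whether its children
    # should be treated as independent registrants. The "/.well-known/public-suffix"
    # endpoint does not exist today; assume a 200 response means "yes, I delegate
    # subdomains to other parties".
    import urllib.error
    import urllib.request

    def walk_public_suffix_chain(host: str, timeout: float = 5.0) -> list[str]:
        labels = host.split(".")
        claimed = []
        for i in range(1, len(labels)):          # for c.b.a.x: check b.a.x, a.x, x
            parent = ".".join(labels[i:])
            url = f"https://{parent}/.well-known/public-suffix"
            try:
                with urllib.request.urlopen(url, timeout=timeout) as resp:
                    if resp.status == 200:
                        claimed.append(parent)
            except (urllib.error.URLError, OSError):
                pass  # unreachable or no declaration: treat as "not a public suffix"
        return claimed

The DNS TXT variant mentioned above would look much the same, just querying a TXT record at each parent instead of fetching a URL.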
> God I hate the web. The engineering equivalent of a car made of duct tape.
Most of the complex things I have seen being made (or contributed to) needed duct tape sooner or later. Engineering is the art of trade-offs, of adapting to changing requirements (which can appear due to uncontrollable events external to the project), technology and costs.
I think it's somewhat tribal webdev knowledge that if you host user-generated content you need to be on the PSL, otherwise you'll eventually end up where Immich is now.
I'm not sure how people who haven't already hit this very issue are supposed to know about it beforehand, though; it's one of those things you don't really come across until you're hit by it.
Besides user-uploaded content, it's pretty easy to accidentally destroy the reputation of your main domain with subdomains.
For example:
1. Add a subdomain to test something out
2. Complete your test and remove the subdomain from your site
3. Forget to remove the DNS entry, so your A record still points to an IP address you no longer use
At this point if someone else on that hosting provider gets that IP address assigned, your subdomain is now hosting their content.
I had this happen to me once with PDF books being served through a subdomain on my site. Of course it's my mistake for not removing the A record (I forgot) but I'll never make that mistake again.
10 years of my domain having a good history may have been tainted irreparably. I don't get warnings visiting my site, but traffic has slowly gotten worse since around that time, despite me posting more and more content. The correlation isn't guaranteed, especially with AI taking away so much traffic, but it's something I do think about.
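A rough sketch of how one could periodically catch the stale-record situation described above: resolve every name in your zone and flag anything that no longer points at infrastructure you control. The hostnames and IPs below are placeholders, not a real zone.

    # Placeholder names/IPs; not tied to any real zone. Resolve each subdomain
    # and flag any that resolve to addresses outside the set you control.
    import socket

    MY_SUBDOMAINS = ["books.example.com", "test.example.com"]  # exported from your DNS zone
    MY_IPS = {"203.0.113.10", "203.0.113.11"}                  # addresses you actually control

    for name in MY_SUBDOMAINS:
        try:
            resolved = {info[4][0] for info in socket.getaddrinfo(name, None)}
        except socket.gaierror:
            print(f"{name}: no longer resolves (record removed or dangling target)")
            continue
        stray = resolved - MY_IPS
        if stray:
            print(f"{name}: resolves to {sorted(stray)}, which you don't control -> clean up the record")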
Looking through some of the links in this post, I think there are actually two separate issues here:
1. Immich hosts user content on their domain, and should thus be on the public suffix list.
2. When users host an open source self-hosted project like Immich, Jellyfin, etc. on their own domain, it gets flagged as phishing because it looks an awful lot like the publicly hosted version but is on a different domain, possibly one that looks suspicious to someone unfamiliar with the project because it includes the name of the software. Something like immich.example.com.
The first one is fairly straightforward to deal with, if you know about the public suffix list. I don't know of a good solution for the second though.
I don't think the Internet should be run by being on special lists (other than like, a globally run registry of domain names)...
I get that SPAM, etc., are an issue, but, like f* google-chrome, I want to browse the web, not some carefully curated list of sites some giant tech company has chosen.
A) you shouldn't be using google-chrome at all B) Firefox should definitely not be using that list either C) if you are going to have a "safe sites" list, that should definitely be a non-profit running that, not an automated robot working for a large probably-evil company...
> I don't know of a good solution for the second though.
I know the second issue can be a legitimate problem but I feel like the first issue is the primary problem here & the "solution" to the second issue is a remedy that's worse than the disease.
The public suffix list is a great system (despite getting serious backlash here in HN comments, mainly from people who have jumped to wildly exaggerated conclusions about what it is). Beyond that though, flagging domains for phishing for having duplicate content smells like an anti-self-host policy: sure, there are phishers making clone sites, but the vast majority of sites flagged are going to be legit unless you employ a more targeted heuristic, and doing so isn't incentivised by Google's (or most companies') business model.
> When users host an open source self hosted project like immich, jellyfin, etc. on their own domain...
I was just deploying your_spotify and gave it your-spotify.<my services domain>, and there was a warning in the logs that talked about this, linking the issue: https://github.com/Yooooomi/your_spotify/issues/271
The second is a real problem even with completely unique applications. If they have UI portions that have lookalikes, you will get flagged. At work, I created an application with a sign-in popup. Because it's for internal use only, the form in the popup is very basic, just username and password and a button. Safe Browsing continues to block this application to this day, despite multiple appeals.
Even the first one only works if there's no need for site-wide user authentication on the domain, because once the domain is on the PSL you can no longer have a domain cookie accessible from subdomains.
I thought this story would be about some malicious PR that convinced their CI to build a page featuring phishing, malware, porn, etc. It looks like Google is simply flagging their legit, self-created Preview builds as being phishing, and banning the entire domain. Getting immich.cloud on the PSL is probably the right thing to do for other reasons, and may decrease the blast radius here.
> Is that actually relevant when only images are user content?
Yes. For instance in circumstances exactly as described in the thread you are commenting in now and the article it refers to.
Services like google's bad site warning system may use it to indicate that it shouldn't consider a whole domain harmful if it considers a small number of its subdomains to be so, where otherwise they would. It is no guarantee, of course.
In another comment in this thread, it was confirmed that these PR host names are only generated from branches internal to Immich or labels applied by maintainers, and that this does not automatically happen for arbitrary PRs submitted by external parties. So this isn’t the use case for the public suffix list - it is in no way public or externally user-generated.
What would you recommend for this actual use case? Even splitting it off to a separate domain name as they’re planning merely reduces the blast radius of Google’s false positive, but does not eliminate it.
If these are dev subdomains that are actually for internal use only, then a very reliable fix is to put basic auth on them, and give internal staff the user/password. It does not have to be strong, in fact it can be super simple. But it will reliably keep out crawlers, including Google.
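As a toy illustration of how little is needed (this is a generic sketch, not Immich's actual stack), a few lines of WSGI middleware, or the equivalent auth_basic directive in a reverse proxy like nginx, is enough to make preview hosts return 401 to every crawler:

    # Minimal sketch: a preview server that returns 401 unless a (deliberately
    # weak) shared Basic-auth credential is supplied. Crawlers never get past it.
    import base64
    from wsgiref.simple_server import make_server

    USER, PASSWORD = "preview", "letmein"  # only needs to stop crawlers, not determined attackers

    def app(environ, start_response):
        expected = "Basic " + base64.b64encode(f"{USER}:{PASSWORD}".encode()).decode()
        if environ.get("HTTP_AUTHORIZATION") != expected:
            start_response("401 Unauthorized",
                           [("WWW-Authenticate", 'Basic realm="preview"')])
            return [b"auth required"]
        start_response("200 OK", [("Content-Type", "text/plain")])
        return [b"preview build"]  # in reality, proxy to the actual preview app here

    if __name__ == "__main__":
        make_server("", 8080, app).serve_forever()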
Browsers already do various levels of isolation based on domain / subdomains (e.g. cookies). The PSL tells them to treat each subdomain as if it were a top-level domain, because the subdomains are operated by (leased out to) different individuals / entities. With regard to blocking, it just means that if one subdomain is marked bad, it's less likely to contaminate the rest of the domain, since they know it's operated by different people.
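A toy illustration of the cookie part (not real browser code, and the suffix set below is made up for the example): a Domain cookie is rejected when the domain attribute is itself a public suffix, which is what stops one tenant's subdomain from setting cookies for all the others.

    # Toy model of the browser-side check for "Set-Cookie: ...; Domain=<d>".
    # The suffix set is illustrative only; real browsers use the full PSL,
    # and "immich.cloud" being listed is an assumption for the example.
    PUBLIC_SUFFIXES = {"com", "co.uk", "immich.cloud"}

    def cookie_domain_allowed(request_host: str, cookie_domain: str) -> bool:
        cookie_domain = cookie_domain.lstrip(".")
        if cookie_domain in PUBLIC_SUFFIXES:
            return False  # never allow a cookie scoped to a whole public suffix
        # Otherwise the Domain attribute must be the host itself or a parent of it.
        return request_host == cookie_domain or request_host.endswith("." + cookie_domain)

    print(cookie_domain_allowed("pr-123.preview.immich.cloud", "immich.cloud"))  # False once listed
    print(cookie_domain_allowed("app.example.com", "example.com"))               # True

That is also the trade-off mentioned elsewhere in the thread: once a domain is on the list, you give up domain-wide session cookies shared across its subdomains.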
This is not about user content, but about their own preview environments! Google decided their preview environments were impersonating... Something? And decided to block the entire domain.
I think this is only true if you host independent entities. If you simply construct deep names about yourself, with a demonstrable chain of authority back, I don't think the PSL wants to know. Otherwise there is no hierarchy: the dots are just convenience strings and it's a flat namespace the size of the PSL.
There is no law appointing that organization as a world wide authority on tainted/non tainted sites.
The fact it's used by one or more browsers in that way is a lawsuit waiting to happen.
Because they, the browsers, are pointing a finger to someone else and accusing them of criminal behavior. That is what a normal user understands this warning as.
Turns out they are wrong. And in being wrong they may well have harmed the party they pointed at, in reputation and / or sales.
It's remarkable how short-sighted this is, given that the web is so international. It's not a defense to say "some third party has a list, and you're not on it, so you're dangerous."
Never host your test environments as subdomains of your actual production domain.
You'll also run into email reputation issues as well as cookie hell: the test environment can receive a lot of cookies from the production environment if it's not managed well.
This. I cannot believe the rest of the comments on this are seemingly completely missing the problem here & kneejerk-blaming Google for being an evil corp. This is a real issue & I don't feel like the article from the Immich team acknowledges it. Far too much passing the buck, not enough taking ownership.
It's true that putting locks on your front door will reduce the chance of your house getting robbed, but if you do get robbed, the fact that your front door wasn't locked does not in any way absolve the thief for his conduct.
Similarly, if an organization deploys a public system that engages in libel and tortious interference, the fact that jumping through technical hoops might make it less likely to be affected by that system does not in any way absolve the organization for operating it carelessly in the first place.
Just because there are steps you can take to lessen the impact of bad behavior does not mean that the behavior itself isn't bad. You shouldn't have to restrict how you use your own domains to avoid someone else publishing false information about your site. Google should be responsible for mitigating false positives, not the website owners affected by them.
.cloud is used to host the map embedded in their webapp.
In fairness, in my local testing so far, it appears to be an entirely unauthenticated/credential-less service, so there's no risk to sessions right now for this particular use case. That leaves the only risk factors being phishing & deploy-environment credentials.
The one thing I never understood about these warnings is how they don't run afoul of libel laws. They are directly calling you a scammer and "attacker". The same for Microsoft with their unknown executables.
They used to be more generic, saying "We don't know if it's safe", but now they are quite assertive in stating that you are indeed an attacker.
"The people living at this address might be pedophiles and sexual predators. Not saying that they are, but if your children are in the vicinity, I strongly suggest you get them back to safety."
You can't possibly use the "they use the word 'might'" argument and not mention the big red death screen those words are printed over. If you are referring to the letter of the law, you are technically right. But only if we remove the human factor.
Imagine if you bought a plate at Walmart and any time you put food you bought elsewhere on it, it turned red and started playing a warning about how that food will probably kill you because it wasn't Certified Walmart Fresh™
Now imagine it goes one step further, and when you go to eat the food anyway, your Walmart fork retracts into its handle for your safety, of course.
No brand or food supplier would put up with it.
That's what it's like trying to visit or run non-blessed websites and software coming from Google, Microsoft, etc on your own hardware that you "own".
This is the future. Except you don't buy anything, you rent the permission to use it. People from Walmart can brick your carrots remotely even when you don't use this plate, for your safety ofc
> The one thing I never understood about these warnings is how they don't run afoul of libel laws. They are directly calling you a scammer and "attacker"
Being wrong doesn't count as libel.
If a company has a detection tool, makes reasonable efforts to make sure it is accurate, and isn't being malicious, you'll have a hard time making a libel case
There is a truth defence to libel in the USA, but there is no good-faith defence. Think about it like a traffic accident: you may not have intended to drive into the other car, but you still caused damage. Just because you meant well doesn't absolve you from paying for the damages.
If the false positive rate is consistently 0.0%, that is a surefire sign that the detector is not effective enough to be useful.
If a false positive is libel, then any useful malware detector would occasionally do libel. Since libel carries enormous financial consequences, nobody would make a useful malware detector.
I am skeptical that changing the wording in the warning resolves the fundamental tension here. Suppose we tone it down: "This executable has traits similar to known malware." "This website might be operated by attackers."
Would companies affected by these labels be satisfied by this verbiage? How do we balance this against users' likelihood of ignoring the warning in the face of real malware?
The problem is that it's so one sided. They do what they want with no effort to avoid collateral damage and there's nothing we can do about it.
They could at least send an email to the RFC 2142 abuse@ or hostmaster@ address with a warning and some instructions on the process for having the mistake reviewed.
The first step in filing a libel lawsuit is demanding a retraction from the publisher. I would imagine Google's lawyers respond pretty quickly to those, which is why SafeBrowsing hasn't been similarly challenged.
Happened to me last week. One morning we woke up and the whole company website did not work.
No advance notice with some time to fix any possible problem, just blocked.
It gave a very bad image to our clients and users, and we had to explain that it was a false positive from Google's detection.
The culprit, according to google search console, was a double redirect on our web email domain (/ -> inbox -> login).
After moving the webmail to another domain, removing one of the redirections just in case, and asking politely 4 times to be unblocked... it took about 12 hours. And no real recourse, feedback, or anything about when it was going to be solved. And no responsibility.
The worst part is the feeling of not being in control of your own business, and of depending on a third party, completely unrelated to us, which made a huge mistake, to let our clients use our platform.
It would be glorious if everybody unjustly screwed by Google did that. Barring antitrust enforcement, this may be the only way to force them to behave.
In all US states corporations may be represented by lawyers in small claims cases. The actual difference is that in higher courts corporations usually must be represented by lawyers whereas many states allow normal employees to represent corporations when defending small claims cases, but none require it.
I've been thinking for a while that a coordinated and massive action against a specific company by people all claiming damages in small claims court would be a very effective way of bringing that company to heel.
> The culprit, according to google search console, was a double redirect on our web email domain (/ -> inbox -> login).
I find it hard to believe that the double redirect itself tripped it: multiple redirects in a row is completely normal—discouraged in general because it hurts performance, but you encounter them all the time. For example, http://foo.example → https://foo.example → https://www.foo.example (http → https, then add or remove www subdomain) is the recommended pattern. And site root to app path to login page is also pretty common. This then leads me to the conclusion that they’re not disclosing what actually tripped it. Maybe multiple redirects contributed to it, a bad learned behaviour in an inscrutable machine learning model perhaps, but it alone is utterly innocuous. There’s something else to it.
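If you want to see what chain a URL actually returns, a quick diagnostic with the third-party requests library (my choice for illustration; the URL is a placeholder) prints every hop:

    # Quick diagnostic: follow redirects and print each hop's status and target.
    # Uses the third-party "requests" library; the URL is a placeholder.
    import requests

    resp = requests.get("http://example.com/", allow_redirects=True, timeout=10)
    for hop in resp.history:
        print(hop.status_code, hop.url, "->", hop.headers.get("Location"))
    print(resp.status_code, resp.url)

A two-hop chain like the http -> https -> www example above is exactly what this would show for a great many perfectly ordinary sites.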
Want to see how often Microsoft accounts redirect you? I'd love to see Google block all of Microsoft, but of course that will never happen, because these tech giants are effectively a cartel looking out for each other. At least in comparison to users and smaller businesses.
I suspect you're right... The problem is, and I've experienced this with many big tech companies, you never really get any explanation. You report an issue, and then, magically, it's "fixed," with no further communication.
I'm permanently banned from the Play Store because 10+ years ago I made a third-party Omegle client, called it Yo-megle (neither Omegle nor Yo-megle still exist now), got a bunch of downloads and good ratings, then about 2 years later got a message from Google saying I was banned for violating trademark law. No actual legal action, just a message from Google. I suppose I'm lucky they didn't delete my entire Google account.
I'm beginning to seriously think we need a new internet, another protocol, other browsers, just to break up the insane monopolies that have formed, because the way things are going, soon all discourse will be censored and competitors will be blocked.
We need something that's good for small and medium businesses again and for local news, and that gets an actual marketplace going: you know, what the internet actually promised.
The community around NOSTR is basically building a kind of semantic web, where users' identities are verified via their public keys, data is routed through content-agnostic relays, and trustworthiness is verified by peer recommendation.
They are currently experimenting with replicating many types of services, which are currently websites, as protocols with data types, with the goal being that all of these services can share available data with each other openly.
It's definitely more of a "bazaar" model than a "cathedral" model, with many open questions, and it's also tough to get a good overview of what is really going on there. But at least it's an attempt.
We have a “new internet”. We have the indie web, VPNs, websites not behind Cloudflare, other browsers. You won’t have a large audience, but a new protocol won't fix that.
Also, plenty of small and medium businesses are doing fine on the internet. You only hear about ones with problems like this. And if these problems become more frequent and public, Google will put more effort into fixing them.
I think the most practical thing we can do is support people and companies who fall through the cracks, by giving them information to understand their situation and recover, and by promoting them.
Stop trying to look for technological answers to political problems. We already have a way to avoid excessive accumulation of power by private entities, it's called "anti-trust laws" (heck, "laws" in general).
Any new protocol not only has to overcome the huge incumbent that is the web, it has to do so grassroots against the power of global capital (trillions of dollars of it). Of course, it also has to work in the first place and not be captured and centralised like another certain open and decentralised protocol has (i.e., the Web).
Is that easier than the states doing their jobs and writing a couple pages of text?
It's very, very hard to overcome the gravitational forces which encourage centralization, and doing so requires rooting the different communities that you want to exist in their own different communities of people. It's a political governance problem, not a technical one.
IPFS has been doing some great work around decentralization that actually scales (Netflix uses it internally to speed up container delivery), but a) it's only good for static content, b) things still need friendly URLs, and c) once it becomes mainstream, bad actors will find a way to ruin it anyway.
These apply to a lot of other decentralized systems too.
It won't get anywhere unless it addresses the issue of spam, scammers, phishing etc. The whole purpose of Google Safe Browsing is to make life harder for scammers.
I own what I think are the key protocols for the future of browsers and the web, and nobody knows it yet. I'm not committed to forking the web by any means, but I do think I have a once-in-a-generation opportunity to remake the system if I were determined to and knew how to remake it into something better.
I'm afraid this can't be built on the current net topology which is owned by the Stupid Money Govporation and inherently allows for roadblocks in the flow of information. Only a mesh could solve that.
But the Stupid Money Govporation must be dethroned first, and I honestly don't see how that could happen without the help of an ELE like a good asteroid impact.
It will take the same amount of time, or less, to get to where we are with the current Web.
What we have is the best sim env to see how stuff shapes up. So fixing it should be the aim; avoiding it will get us on similar spirals. We'll just go in circles.
This may not be a huge issue depending on mitigating controls, but are they saying that anyone can submit a PR (containing anything) to Immich, tag the PR with `preview`, and have the contents of that PR hosted on https://pr-<num>.preview.internal.immich.cloud?
Doesn't that effectively let anyone host anything there?
I think only collaborators can add labels on github, so not quite. Does seem a bit hazardous though (you could submit a legit PR, get the label, and then commit whatever you want?).
Exposure also extends not just to the owner of the PR but anyone with write access to the branch from which it was submitted. GitHub pushes are ssh-authenticated and often automated in many workflows.
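As a sketch of the kind of gate a preview-deploy job could apply (this is hypothetical, not Immich's actual workflow), using standard fields from a GitHub "pull_request" webhook payload for a "labeled" event; the maintainer allow-list and label name are placeholders:

    # Hypothetical gate for a preview deployment triggered by a PR label.
    # Reads standard fields from a GitHub "pull_request" (action=labeled)
    # webhook payload; MAINTAINERS is a placeholder allow-list.
    MAINTAINERS = {"alice", "bob"}

    def should_deploy_preview(payload: dict) -> bool:
        if payload.get("action") != "labeled":
            return False
        if payload["label"]["name"] != "preview":
            return False
        # Only deploy if a maintainer applied the label...
        if payload["sender"]["login"] not in MAINTAINERS:
            return False
        # ...and be extra careful with PRs coming from forks, since later pushes
        # to the source branch would otherwise ride along on the old label.
        if payload["pull_request"]["head"]["repo"]["fork"]:
            return False  # or require a fresh re-label after every new push
        return True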
It's the result of failures across the web, really. Most browsers started using Google's phishing site index because they didn't want to maintain one themselves but wanted the phishing resistance Google Chrome has. Microsoft has SmartScreen, but that's just the same risk model but hosted on Azure.
Google's eternal vagueness is infuriating, but in this case the whole setup is a disaster waiting to happen. Google's accidental fuck-up just prevented "someone hacked my server after I clicked on pr-xxxx.immich.app", because apparently the domain's security was set up to allow for that.
You can turn off safe browsing if you don't want these warnings. Google will only stop you from visiting sites if you keep the "allow Google to stop me from visiting some sites" checkbox enabled.
I really don't know how they got nerds to think scummy advertising is cool. If you think about it, the thing they make money on is something no user actually wants or wants to see, ever. Somehow Google has built some sort of nerd cult where people think it's cool to join such an unethical company.
If you ask, the leaders in that area of Google will tell you something like "we're actually HELPING users, because we're giving them targeted ads for the things they're looking for at the time they're looking for them, which only makes things better for the user." Then you show them a picture of YouTube ads or something and it transitions to "well, look, we gotta pay for this somehow, and at least it's free, and isn't free information for all really great?"
It's super simple. Check out all the Fediverse alternatives. How many people that talk a big game actually financially support those services? 2% maybe, on the high end.
Things cost money, and at a large scale, there's either capitalism, or communism.
The open internet is done. Monopolies control everything.
We've had an iOS app in the store for 3 years, and out of the blue Apple is demanding we provide new licenses that don't exist and threatening to kick our app out. Nothing has changed in 3 years.
Getting sick of these companies being able to have this level of control over everything; you can't even self-host anymore, apparently.
> We've had an iOS app in the store for 3 years, and out of the blue Apple is demanding we provide new licenses that don't exist and threatening to kick our app out.
This is mostly a browser security mistake, but also partly a product of ICANN policy & the design of the domain system, so it's not just the web.
Also, the list isn't really that long, compared to, say, certificate transparency logs; now that's a truly mad solution.
Kind of. But do you have a better proposition?
Related, this is how the first long distance automobile trip was done: https://en.wikipedia.org/wiki/Bertha_Benz#First_cross-countr... . Seems to me it had quite some duct tape.
Idk any other way to solve it for the general public (ideally each user would probably pick what root certs they trust), but it does seem crazy.
It's fun learning new things so often, but I had never once heard of the public suffix list.
That said, I do know the other best practices mentioned elsewhere.
I wish this comment were top ranked so it would be clear immediately from the comments what the root issue was.
This is very clearly just bad code from Google.
Normally I see the PSL in context of e.g. cookies or user-supplied forms.
I appreciate the issue it tries to solve but it doesn't seem like a sane solution to me.
Incredible
1. You should host dev stuff on separate domains.
2. Google shouldn't be blocking your preview environments.
No, they're not. The word "scammer" does not appear. They say "attackers on the site" and they use the word "might".
This includes third-party hackers who have compromised the site.
They never say the owner of the site is the attacker.
I'm quite sure their lawyers have vetted the language very carefully.
I think that might count as libel.
I’m not a lawyer, but this hasn’t ever been taken to court, has it? It might qualify as libel.
For instance: https://reason.com/volokh/2020/07/27/injunction-in-libel-cas... (That was a default judgment, though, which means Spamhaus didn't show up, probably due to jurisdictional questions.)
Small claims is actually pretty quick and easy. They cannot defend themselves with lawyers, so a director usually has to show up.
Anyone working on something like this?
Technical alternatives already exist, see for example GNUnet.
If you want to talk more, reach out!
It's more like sites.google.com.
Crazy! If you can elaborate here, please do.