arccy · 2 months ago
If you're going to host user content on subdomains, then you should probably have your site on the Public Suffix List https://publicsuffix.org/list/ . That should eventually make its way into various services so they know that a tainted subdomain doesn't taint the entire site....
0xbadcafebee · 2 months ago

  In the past, browsers used an algorithm which only denied setting wide-ranging cookies for top-level domains with no dots (e.g. com or org). However, this did not work for top-level domains where only third-level registrations are allowed (e.g. co.uk). In these cases, websites could set a cookie for .co.uk which would be passed onto every website registered under co.uk.

  Since there was and remains no algorithmic method of finding the highest level at which a domain may be registered for a particular top-level domain (the policies differ with each registry), the only method is to create a list. This is the aim of the Public Suffix List.
  
  (https://publicsuffix.org/learn/)
So, once they realized web browsers are all inherently flawed, their solution was to maintain a static list of websites.

God I hate the web. The engineering equivalent of a car made of duct tape.

KronisLV · 2 months ago
> Since there was and remains no algorithmic method of finding the highest level at which a domain may be registered for a particular top-level domain

A centralized list like this, covering not just registry suffixes (e.g. co.uk) but also specific sites (e.g. s3-object-lambda.eu-west-1.amazonaws.com), is both kind of crazy, in that the list will bloat a lot over the years, and a security risk for any platform that needs this functionality but would prefer not to leak any details publicly.

We already have the concept of a .well-known directory that you can use when talking to a specific site. Similarly, we know how subdomains nest, like c.b.a.x, and it's more or less certain that you can't create a subdomain b without the involvement of a, so it should be possible to walk the chain.

Example:

  c --> https://b.a.x/.well-known/public-suffix
  b --> https://a.x/.well-known/public-suffix
  a --> https://x/.well-known/public-suffix
Maybe ship the domains with the browsers and such and leave generic sites like AWS or whatever to describe things themselves. Hell, maybe that could also have been a TXT record in DNS as well.
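
A rough sketch of what walking that chain could look like (the /.well-known/public-suffix endpoint here is purely hypothetical, not an existing standard):

    import requests  # third-party package, assumed available

    def walk_public_suffix_chain(host: str) -> dict:
        # Hypothetical: ask each parent zone of `host` whether it treats its
        # children as independently operated, via a made-up
        # /.well-known/public-suffix endpoint.
        labels = host.split(".")
        results = {}
        for i in range(1, len(labels)):
            parent = ".".join(labels[i:])
            url = f"https://{parent}/.well-known/public-suffix"
            try:
                results[parent] = requests.get(url, timeout=5).status_code == 200
            except requests.RequestException:
                results[parent] = None  # unreachable: no claim either way
        return results

    # c.b.a.x -> queries b.a.x, then a.x, then x
    print(walk_public_suffix_chain("c.b.a.x"))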

lucideer · 2 months ago
> God I hate the web

This is mostly a browser security mistake but also partly a product of ICANN policy & the design of the domain system, so it's not just the web.

Also, the list isn't really that long, compared to, say, certificate transparency logs; now that's a truly mad solution.

modeless · 2 months ago
Show me a platform not made out of duct tape and I'll show you a platform nobody uses.
lukan · 2 months ago
"The engineering equivalent of a car made of duct tape"

Kind of. But do you have a better proposition?

jacquesm · 2 months ago
I think we lost the web somewhere between PageRank and JavaScript. Up to there it was just linked documents and it was mostly fine.
formerly_proven · 2 months ago
Why is it a centrally maintained list of domains, when there is a whole extensible system for attaching metadata to domain names?
a456463 · 2 months ago
I love the web. It's the corporate capitalistic ad fueled and govt censorship web that is the problem.
vladms · 2 months ago
> God I hate the web. The engineering equivalent of a car made of duct tape.

Most of the complex things I have seen being made (or contributed to) needed duct tape sooner or later. Engineering is the art of trade-offs, of adapting to changing requirements (which can appear due to uncontrollable events external to the project), technology and costs.

Related, this is how the first long distance automobile trip was done: https://en.wikipedia.org/wiki/Bertha_Benz#First_cross-countr... . Seems to me it had quite some duct tape.

starfallg · 2 months ago
That's the nature of decentralised control. It's not just DNS; phone numbers work the same way.
ApolloFortyNine · 2 months ago
All web encryption is backed by a static list of root certs that each browser maintains.

Idk any other way to solve it for the general public (ideally each user would probably pick what root certs they trust), but it does seem crazy.

thomasjb · 2 months ago
What we need is a web made in a similar way to the wicker-bodied cars of yesteryear

jrochkind1 · 2 months ago
I'm not sure I follow what inherent flaw you're suggesting browsers had, that the public suffix list's originators knew about.
samlinnfer · 2 months ago
Wait until you learn about the HSTS preload list.
CaptainOfCoit · 2 months ago
I think it's somewhat tribal webdev knowledge that if you host user generated content you need to be on the PSL otherwise you'll eventually end up where Immich is now.

I'm not sure how people who haven't already hit this very issue are supposed to know about it beforehand though; it's one of those things you don't really come across until you're hit by it.

hu3 · 2 months ago
This is the first time I've heard about https://publicsuffix.org
no_wizard · 2 months ago
I’ve been doing this for at least 15 years and it's the first I've heard of this.

It's fun learning new things so often, but I'd never once heard of the public suffix list.

That said, I do know the other best practices mentioned elsewhere

nickjj · 2 months ago
Besides user-uploaded content, it's pretty easy to accidentally destroy the reputation of your main domain with subdomains.

For example:

    1. Add a subdomain to test something out
    2. Complete your test and remove the subdomain from your site
    3. Forget to remove the DNS entry, so your A record still points to an IP address you no longer control
At this point if someone else on that hosting provider gets that IP address assigned, your subdomain is now hosting their content.

I had this happen to me once with PDF books being served through a subdomain on my site. Of course it's my mistake for not removing the A record (I forgot) but I'll never make that mistake again.

10 years of my domain having a good history may have gotten tainted in an irreparable way. I don't get warnings visiting my site, but traffic has slowly gotten worse since around that time, despite me posting more and more content. The correlation isn't guaranteed, especially with AI taking away so much traffic, but it's something I do think about.
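
A small way to catch that failure mode before someone else does (just a sketch; it assumes the third-party dnspython package, and the IPs and names below are placeholders):

    import dns.resolver  # third-party "dnspython" package

    OWNED_IPS = {"203.0.113.10", "203.0.113.11"}             # placeholder: IPs you still control
    SUBDOMAINS = ["books.example.com", "test.example.com"]   # placeholder subdomains to audit

    for name in SUBDOMAINS:
        try:
            answers = dns.resolver.resolve(name, "A")
        except (dns.resolver.NXDOMAIN, dns.resolver.NoAnswer):
            continue  # record already gone, nothing dangling
        for rr in answers:
            if rr.address not in OWNED_IPS:
                print(f"{name} -> {rr.address} (not an IP you control: possibly dangling)")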

bo0tzz · 2 months ago
The Immich domains that are hit by this issue are -not- hosting user generated content.
fn-mote · 2 months ago
Clearly they are not reading HN enough. It hasn’t even been two weeks since this issue last hit the front page.

I wish this comment were top ranked so it would be clear immediately from the comments what the root issue was.

tonyhart7 · 2 months ago
so is it a skill issue??? or just google being bad????
thayne · 2 months ago
Looking through some of the links in this post, I think there are actually two separate issues here:

1. Immich hosts user content on their domain, and should thus be on the public suffix list.

2. When users host an open source self-hosted project like Immich, Jellyfin, etc. on their own domain, it gets flagged as phishing because it looks an awful lot like the publicly hosted version, but it's on a different domain, and possibly a domain that might look suspicious to someone unfamiliar with the project because it includes the name of the software. Something like immich.example.com.

The first one is fairly straightforward to deal with, if you know about the public suffix list. I don't know of a good solution for the second though.

smaudet · 2 months ago
I don't think the Internet should be run by being on special lists (other than like, a globally run registry of domain names)...

I get that SPAM, etc., are an issue, but, like f* google-chrome, I want to browse the web, not some carefully curated list of sites some giant tech company has chosen.

A) you shouldn't be using google-chrome at all
B) Firefox should definitely not be using that list either
C) if you are going to have a "safe sites" list, it should definitely be run by a non-profit, not an automated robot working for a large probably-evil company...

lucideer · 2 months ago
> I don't know of a good solution for the second though.

I know the second issue can be a legitimate problem but I feel like the first issue is the primary problem here & the "solution" to the second issue is a remedy that's worse than the disease.

The public suffix list is a great system (despite getting serious backlash here in HN comments, mainly from people who have jumped to wildly exaggerated conclusions about what it is). Beyond that though, flagging domains for phishing for having duplicate content smells like an anti-self-host policy: sure, there are phishers making clone sites, but the vast majority of sites flagged are going to be legit unless you employ a more targeted heuristic, and doing so isn't incentivised by Google's (or most companies') business model.

VTimofeenko · 2 months ago
> When users host an open source self hosted project like immich, jellyfin, etc. on their own domain...

I was just deploying your_spotify and gave it your-spotify.<my services domain>, and there was a warning in the logs that talked about this, linking the issue:

https://github.com/Yooooomi/your_spotify/issues/271

liqilin1567 · 2 months ago
That means the Safe Browsing abuse could be weaponized against self-hosted services, oh my...
fuzzy2 · 2 months ago
The second is a real problem even with completely unique applications. If they have UI portions that have lookalikes, you will get flagged. At work, I created an application with a sign-in popup. Because it's for internal use only, the form in the popup is very basic, just username and password and a button. Safe Browsing continues to block this application to this day, despite multiple appeals.
asddubs · 2 months ago
Even the first one only works if there's no need for site-wide user authentication on the domain, because then you can no longer have a domain cookie that's accessible from subdomains.
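
For anyone unfamiliar with the trade-off: a "domain cookie" is one set with a Domain attribute so every subdomain can read it, and browsers typically refuse such a cookie once the domain is on the public suffix list. A quick illustration (example.com is just a placeholder):

    from http.cookies import SimpleCookie

    cookie = SimpleCookie()
    cookie["session"] = "abc123"
    cookie["session"]["domain"] = ".example.com"  # intended to be shared across *.example.com
    cookie["session"]["secure"] = True
    print(cookie.output())
    # roughly: Set-Cookie: session=abc123; Domain=.example.com; Secure
    # If example.com were listed as a public suffix, browsers would reject a
    # cookie scoped like this, which is exactly what breaks cross-subdomain auth.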
david_van_loon · 2 months ago
The issue isn't the user-hosted content - I'm running a release build of Immich on my own server and Google flagged my entire domain.
mixologic · 2 months ago
Is it on your own domain?

crtasm · 2 months ago
Is the subdomain named immich or something more general?
827a · 2 months ago
They aren't hosting user content; it was their pull request preview domains that were triggering it.

This is very clearly just bad code from Google.

antonvs · 2 months ago
Or anticompetitive behavior.
aftbit · 2 months ago
I thought this story would be about some malicious PR that convinced their CI to build a page featuring phishing, malware, porn, etc. It looks like Google is simply flagging their legit, self-created Preview builds as being phishing, and banning the entire domain. Getting immich.cloud on the PSL is probably the right thing to do for other reasons, and may decrease the blast radius here.
LennyHenrysNuts · 2 months ago
The root cause is bad behaviour by google. This is merely a workaround.

o11c · 2 months ago
Is that actually relevant when only images are user content?

Normally I see the PSL in context of e.g. cookies or user-supplied forms.

dspillett · 2 months ago
> Is that actually relevant when only images are user content?

Yes. For instance in circumstances exactly as described in the thread you are commenting in now and the article it refers to.

Services like Google's bad-site warning system may use it so that they don't consider a whole domain harmful when they consider only a small number of its subdomains to be so, where otherwise they would. It is no guarantee, of course.

jkaplowitz · 2 months ago
In another comment in this thread, it was confirmed that these PR host names are only generated from branches internal to Immich or labels applied by maintainers, and that this does not automatically happen for arbitrary PRs submitted by external parties. So this isn’t the use case for the public suffix list - it is in no way public or externally user-generated.

What would you recommend for this actual use case? Even splitting it off to a separate domain name as they’re planning merely reduces the blast radius of Google’s false positive, but does not eliminate it.

snowwrestler · 2 months ago
If these are dev subdomains that are actually for internal use only, then a very reliable fix is to put basic auth on them, and give internal staff the user/password. It does not have to be strong, in fact it can be super simple. But it will reliably keep out crawlers, including Google.
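
If anyone wants a concrete starting point, here's a minimal sketch of that kind of gate in Flask (the credentials and route are placeholders; in practice you'd more likely configure basic auth at the reverse proxy):

    from functools import wraps
    from flask import Flask, Response, request

    app = Flask(__name__)
    USERNAME, PASSWORD = "staff", "preview-pass"  # placeholder shared credentials

    def requires_basic_auth(view):
        @wraps(view)
        def wrapper(*args, **kwargs):
            auth = request.authorization
            if not auth or auth.username != USERNAME or auth.password != PASSWORD:
                # The 401 + WWW-Authenticate challenge is what keeps crawlers out
                return Response("Restricted", 401, {"WWW-Authenticate": 'Basic realm="preview"'})
            return view(*args, **kwargs)
        return wrapper

    @app.route("/")
    @requires_basic_auth
    def index():
        return "internal preview build"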
fc417fc802 · 2 months ago
How does the PSL make any sense? What stops an attacker from offering free static hosting and then making use of their own service?

I appreciate the issue it tries to solve but it doesn't seem like a sane solution to me.

arccy · 2 months ago
PSL isn't a list of dangerous sites per se.

Browsers already do various levels of isolation based on domains / subdomains (e.g. cookies). The PSL tells them to treat each subdomain as if it were a top level domain, because they are operated by (leased out to) different individuals / entities. WRT blocking, it just means that if one subdomain is marked bad, it's less likely to contaminate the rest of the domain, since they know it's operated by different people.
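
For example, libraries that bundle the PSL (such as the third-party tldextract package in Python) use it to work out the registrable part of a hostname, which is the boundary all this isolation hangs off:

    import tldextract  # third-party package that ships a copy of the PSL

    for host in ["foo.co.uk", "pr-1234.preview.internal.immich.cloud"]:
        ext = tldextract.extract(host)
        # suffix = the public-suffix part; registered_domain adds one more label
        print(host, "->", ext.suffix, "/", ext.registered_domain)

    # "co.uk" is on the list, so foo.co.uk counts as its own registrable domain.
    # Whether immich.cloud's subdomains get the same treatment depends on
    # whether immich.cloud itself is added to the list.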

fukka42 · 2 months ago
This is not about user content, but about their own preview environments! Google decided their preview environments were impersonating... Something? And decided to block the entire domain.
ggm · 2 months ago
I think this is only true if you host independent entities. If you simply construct deep names about yourself, with a demonstrable chain of authority back, I don't think the PSL wants to know. Otherwise there is no hierarchy: the dots are just convenience strings and it's a flat namespace the size of the PSL's length.
andrewstuart2 · 2 months ago
Aw. I saw Jothan Frakes and briefly thought my favorite Starfleet first officer's actor had gotten into writing software later in life.
r_lee · 2 months ago
Does Google use this for Safe Browsing though?
ZeWaka · 2 months ago
Oh - of course this is where I find the answer to why there's a giant domain list bloating my web bundles (tough-cookie/tldts).
BartjeD · 2 months ago
There is no law appointing that organization as a worldwide authority on tainted/non-tainted sites.

The fact it's used by one or more browsers in that way is a lawsuit waiting to happen.

Because they, the browsers, are pointing a finger at someone else and accusing them of criminal behavior. That is how a normal user understands this warning.

Turns out they are wrong. And in being wrong they may well have harmed the party they pointed at, in reputation and / or sales.

It's remarkable how short-sighted this is, given that the web is so international. It's not a defense to say that some third party has a list and you're not on it, so you're dangerous.

Incredible

snowwrestler · 2 months ago
I love all the theoretical objections to something that has been in use for nearly 20 years.
jtwaleson · 2 months ago
As far as I know there is currently no international alternative authority for this. So definitely not ideal, but better than not having the warnings.

mads_quist · 2 months ago
Never host your test environments as subdomains of your actual production domain. You'll also run into email reputation issues as well as cookie hell. Your test environments can pick up a lot of cookies from the production env if they're not managed well.
lucideer · 2 months ago
This. I cannot believe the rest of the comments on this are seemingly completely missing the problem here & kneejerk-blaming Google for being an evil corp. This is a real issue & I don't feel like the article from the Immich team acknowledges it. Far too much passing the buck, not enough taking ownership.
Gormo · 2 months ago
It's true that putting locks on your front door will reduce the chance of your house getting robbed, but if you do get robbed, the fact that your front door wasn't locked does not in any way absolve the thief of responsibility for his conduct.

Similarly, if an organization deploys a public system that engages in libel and tortious interference, the fact that jumping through technical hoops might make it less likely to be affected by that system does not in any way absolve the organization for operating it carelessly in the first place.

Just because there are steps you can take to lessen the impact of bad behavior does not mean that the behavior itself isn't bad. You shouldn't have to restrict how you use your own domains to avoid someone else publishing false information about your site. Google should be responsible for mitigating false positives, not the website owners affected by them.

kevincox · 2 months ago
Both things can be problems.

1. You should host dev stuff on separate domains.

2. Google shouldn't be blocking your preview environments.

a456463 · 2 months ago
Yes they could do better, but who appointed Google "chief of web security"? Google can eff right off.
mads_quist · 2 months ago
Yep. Still I feel bad for them.
breakingcups · 2 months ago
There are quite a few comments from people who have had this happen to them when they self-host Immich; the issue you point out seems minor in comparison.
bcye · 2 months ago
I think immich.app is the production domain, not cloud?
lucideer · 2 months ago
.cloud is used to host the map embedded in their webapp.

In fairness, in my local testing so far, it appears to be an entirely unauthenticated/credential-less service, so there's no risk to sessions right now for this particular use-case. That leaves the only risk factors being phishing & deploy environment credentials.

jdsully · 2 months ago
The one thing I never understood about these warnings is how they don't run afoul of libel laws. They are directly calling you a scammer and "attacker". The same for Microsoft with their unknown executables.

They used to be more generic, saying "We don't know if it's safe", but now they are quite assertive in stating you are indeed an attacker.

crazygringo · 2 months ago
> They are directly calling you a scammer and "attacker".

No they're not. The word "scammer" does not appear. They're saying attackers might be on the site, and they use the word "might".

This includes third-party hackers who have compromised the site.

They never say the owner of the site is the attacker.

I'm quite sure their lawyers have vetted the language very carefully.

msl · 2 months ago
"The people living at this address might be pedophiles and sexual predators. Not saying that they are, but if your children are in the vicinity, I strongly suggest you get them back to safety."

I think that might count as libel.

josfredo · 2 months ago
You can't possibly use the "they use the word 'might'" argument and not mention the big red death screen those words are printed over. If you are referring to abidance by the letter of the law, you are technically right. But only technically, and only if we remove the human factor.
pasteldream · 2 months ago
> The one thing I never understood about these warnings is how they don't run afoul of libel laws.

I’m not a lawyer, but this hasn’t ever been taken to court, has it? It might qualify as libel.

altairprime · 2 months ago
I know of no such cases, and would love to know if someone finds one.
modzu · 2 months ago
you only sue somebody poorer than you
heavyset_go · 2 months ago
Imagine if you bought a plate at Walmart and any time you put food you bought elsewhere on it, it turned red and started playing a warning about how that food will probably kill you because it wasn't Certified Walmart Fresh™

Now imagine it goes one step further, and when you go to eat the food anyway, your Walmart fork retracts into its handle for your safety, of course.

No brand or food supplier would put up with it.

That's what it's like trying to visit or run non-blessed websites and software coming from Google, Microsoft, etc on your own hardware that you "own".

yard2010 · 2 months ago
This is the future. Except you don't buy anything, you rent the permission to use it. People from Walmart can brick your carrots remotely even when you don't use this plate, for your safety ofc
shkkmo · 2 months ago
> The one thing I never understood about these warnings is how they don't run afoul of libel laws. They are directly calling you a scammer and "attacker"

Being wrong doesn't count as libel.

If a company has a detection tool, makes reasonable efforts to make sure it is accurate, and isn't being malicious, you'll have a hard time making a libel case

jdsully · 2 months ago
There is a truth defence to libel in the USA but there is no good faith defence. Think about it like a traffic accident, you may not have intended to drive into the other car but you still caused damage. Just because you meant well doesn't absolve you from paying for the damages.
acoustics · 2 months ago
This is tricky to get right.

If the false positive rate is consistently 0.0%, that is a surefire sign that the detector is not effective enough to be useful.

If a false positive is libel, then any useful malware detector would occasionally do libel. Since libel carries enormous financial consequences, nobody would make a useful malware detector.

I am skeptical that changing the wording in the warning resolves the fundamental tension here. Suppose we tone it down: "This executable has traits similar to known malware." "This website might be operated by attackers."

Would companies affected by these labels be satisfied by this verbiage? How do we balance this against users' likelihood of ignoring the warning in the face of real malware?

donmcronald · 2 months ago
The problem is that it's so one sided. They do what they want with no effort to avoid collateral damage and there's nothing we can do about it.

They could at least send an email to the RFC 2142 abuse@ or hostmaster@ address with a warning and some instructions on the process for having the mistake reviewed.

dpifke · 2 months ago
Spamhaus has been sued—multiple times, I believe—for publishing DNS-based lists used to block email from known spammers.

For instance: https://reason.com/volokh/2020/07/27/injunction-in-libel-cas... (That was a default judgment, though, which means Spamhaus didn't show up, probably due to jurisdictional questions.)

The first step in filing a libel lawsuit is demanding a retraction from the publisher. I would imagine Google's lawyers respond pretty quickly to those, which is why SafeBrowsing hasn't been similarly challenged.

dmoreno · 2 months ago
Happened to me last week. One morning we woke up and the whole company website did not work.

No advance notice with some time to fix any possible problem, just blocked.

We gave a very bad image to our clients and users, and had to explain that it was a false positive from Google's detection.

The culprit, according to google search console, was a double redirect on our web email domain (/ -> inbox -> login).

After moving the webmail to another domain, removing one of the redirections just in case, and asking politely 4 times to be unblocked, it took about 12 hours. And no real recourse, feedback, or anything about when it was going to be solved. And no responsibility.

The worst is the feeling of not being in control of your own business, and of depending on a third party that is not related to us at all, and which made a huge mistake, to let our clients use our platform.

MrDarcy · 2 months ago
File a small claim for damages of up to 10,000 to 20,000 USD depending on your local statutes.

It’s actually pretty quick and easy. They cannot defend themselves with lawyers, so a director usually has to show up.

mcv · 2 months ago
It would be glorious if everybody unjustly screwed by Google did that. Barring antitrust enforcement, this may be the only way to force them to behave.
voxic11 · 2 months ago
In all US states corporations may be represented by lawyers in small claims cases. The actual difference is that in higher courts corporations usually must be represented by lawyers whereas many states allow normal employees to represent corporations when defending small claims cases, but none require it.
Teever · 2 months ago
I've been thinking for a while that a coordinated and massive action against a specific company by people all claiming damages in small claims court would be a very effective way of bringing that company to heel.
dymk · 2 months ago
And now your Gmail account has been deleted as well as any other accounts you had with Google
account42 · 2 months ago
Do small claims apply to things like this where damages are indirect?
chrismorgan · 2 months ago
> The culprit, according to google search console, was a double redirect on our web email domain (/ -> inbox -> login).

I find it hard to believe that the double redirect itself tripped it: multiple redirects in a row are completely normal—discouraged in general because it hurts performance, but you encounter them all the time. For example, http://foo.example → https://foo.example → https://www.foo.example (http → https, then add or remove the www subdomain) is the recommended pattern. And site root to app path to login page is also pretty common. This then leads me to the conclusion that they're not disclosing what actually tripped it. Maybe multiple redirects contributed to it, a bad learned behaviour in an inscrutable machine learning model perhaps, but it alone is utterly innocuous. There's something else to it.
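
If you want to see exactly what chain a crawler would follow, a rough sketch using the third-party requests package (the URL is a placeholder) prints each hop:

    import requests

    resp = requests.get("http://www.example.com/", allow_redirects=True, timeout=10)
    for hop in resp.history:
        print(hop.status_code, hop.url, "->", hop.headers.get("Location"))
    print("final:", resp.status_code, resp.url)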

mcv · 2 months ago
Want to see how often Microsoft accounts redirect you? I'd love to see Google block all of Microsoft, but of course that will never happen, because these tech giants are effectively a cartel looking out for each other. At least in comparison to users and smaller businesses.
56J8XhH7voFRwPR · 2 months ago
I suspect you're right... The problem is, and I've experienced this with many big tech companies, you never really get any explanation. You report an issue, and then, magically, it's "fixed," with no further communication.
masafej536 · 2 months ago
This looks like the same suicide-inducing type of crap from Google that previously only Android devs on the Play Store were subject to.
immibis · 2 months ago
I'm permanently banned from the Play Store because 10+ years ago I made a third-party Omegle client, called it Yo-megle (neither Omegle nor Yo-megle still exist now), got a bunch of downloads and good ratings, then about 2 years later got a message from Google saying I was banned for violating trademark law. No actual legal action, just a message from Google. I suppose I'm lucky they didn't delete my entire Google account.
kossTKR · 2 months ago
I'm beginning to seriously think we need a new internet, another protocol, other browsers, just to break up the insane monopolies that have formed, because the way things are going, soon all discourse will be censored and competitors will be blocked.

We need something that's good for small and medium businesses again and for local news, and that gets an actual marketplace going - you know, what the internet actually promised.

Anyone working on something like this?

sharperguy · 2 months ago
The community around NOSTR is basically building a kind of semantic web, where users' identities are verified via their public key, data is routed through content-agnostic relays, and trustworthiness is verified by peer recommendation.

They are currently experimenting with replicating many types of services which are currently websites as protocols with data types, with the goal being that all of these services can share available data with each other openly.

It's definitely more of a "bazaar" model than a "cathedral" model, with many open questions, and it's also tough to get a good overview of what is really going on there. But at least it's an attempt.

armchairhacker · 2 months ago
We have a “new internet”. We have the indie web, VPNs, websites not behind Cloudflare, other browsers. You won’t have a large audience, but a new protocol won't fix that.

Also, plenty of small and medium businesses are doing fine on the internet. You only hear about ones with problems like this. And if these problems become more frequent and public, Google will put more effort into fixing them.

I think the most practical thing we can do is support people and companies who fall through the cracks, by giving them information to understand their situation and recover, and by promoting them.

andrepd · 2 months ago
Stop trying to look for technological answers to political problems. We already have a way to avoid excessive accumulation of power by private entities, it's called "anti-trust laws" (heck, "laws" in general).

Any new protocol not only has to overcome the huge incumbent that is the web, it has to do so grassroots against the power of global capital (trillions of dollars of it). Of course, it also has to work in the first place and not be captured and centralised like another certain open and decentralised protocol has (i.e., the Web).

Is that easier than the states doing their jobs and writing a couple pages of text?

pjc50 · 2 months ago
It's very, very hard to overcome the gravitational forces which encourage centralization, and doing so requires rooting the different spaces you want to exist in their own distinct communities of people. It's a political governance problem, not a technical one.
sureglymop · 2 months ago
You make it seem like the problem is of technical nature (instead of regulatory or other). Would you mind explaining why?

Technical alternatives already exist, see for example GNUnet.

fsflover · 2 months ago
How about the Invisible Internet Project, https://geti2p.net?
chuckadams · 2 months ago
IPFS has been doing some great work around decentralization that actually scales (Netflix uses it internally to speed up container delivery), but a) it's only good for static content, b) things still need friendly URLs, and c) once it becomes the mainstream, bad actors will find a way to ruin it anyway.

These apply to a lot of other decentralized systems too.

Timwi · 2 months ago
It won't get anywhere unless it addresses the issue of spam, scammers, phishing etc. The whole purpose of Google Safe Browsing is to make life harder for scammers.
conartist6 · 2 months ago
I'm not sure, but it's on my mind.

I own what I think are the key protocols for the future of browsers and the web, and nobody knows it yet. I'm not committed to forking the web by any means, but I do think I have a once-in-a-generation opportunity to remake the system if I were determined to and knew how to remake it into something better.

If you want to talk more, reach out!

account42 · 2 months ago
This is not a technical problem. You will not solve it with purely technical solutions.
wartywhoa23 · 2 months ago
I'm afraid this can't be built on the current net topology which is owned by the Stupid Money Govporation and inherently allows for roadblocks in the flow of information. Only a mesh could solve that.

But the Stupid Money Govporation must be dethroned first, and I honestly don't see how that could happen without the help of an ELE like a good asteroid impact.

simultsop · 2 months ago
It will take the same amount of time, or less, to get to where we are with the current Web.

What we have is the best sim env to see how stuff shapes up. So fixing it should be the aim; avoiding it will put us on similar spirals. We'll just go in circles.

account42 · 2 months ago
Have you talked to your lawyer? Making Google pay for their carelessness is the ONLY way to get them to care.
kevinsundar · 2 months ago
This may not be a huge issue depending on mitigating controls, but are they saying that anyone can submit a PR (containing anything) to Immich, tag the PR with `preview`, and have the contents of that PR hosted on https://pr-<num>.preview.internal.immich.cloud?

Doesn't that effectively let anyone host anything there?

daemonologist · 2 months ago
I think only collaborators can add labels on GitHub, so not quite. Does seem a bit hazardous though (you could submit a legit PR, get the label, and then commit whatever you want?).
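
If the deploy job wanted to be extra careful, it could gate on that label itself. A tiny sketch against the GitHub REST API (owner/repo/PR number are placeholders; a real workflow would probably also pin to an approved commit SHA):

    import requests  # third-party package

    OWNER, REPO, PR_NUMBER = "example-org", "example-repo", 1234  # placeholders

    resp = requests.get(
        f"https://api.github.com/repos/{OWNER}/{REPO}/issues/{PR_NUMBER}/labels",
        timeout=10,
    )
    labels = {label["name"] for label in resp.json()}
    if "preview" not in labels:
        raise SystemExit("no 'preview' label applied; skipping preview deploy")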
ajross · 2 months ago
Exposure also extends not just to the owner of the PR but to anyone with write access to the branch from which it was submitted. GitHub pushes are SSH-authenticated and often automated in many workflows.
rixed · 2 months ago
So basically like https://docs.google.com/ ?
jeroenhd · 2 months ago
Yes, except on Google Docs you can't make the document steal credentials or download malware by simply clicking on the link.

It's more like sites.google.com.

bo0tzz · 2 months ago
No, it doesn't work at all for PRs from forks.
tgsovlerkhgsel · 2 months ago
That was my first thought - have the preview URLs possibly actually been abused through GitHub?
warkdarrior · 2 months ago
Excellent idea for cost-free phishing.
heavyset_go · 2 months ago
Insane that one company can dictate what websites you're allowed to visit. Telling you what apps you can run wasn't far enough.
mmmpetrichor · 2 months ago
US congress not functioning for over a decade causes a few problems.
jeroenhd · 2 months ago
It's the result of failures across the web, really. Most browsers started using Google's phishing site index because they didn't want to maintain one themselves but wanted the phishing resistance Google Chrome has. Microsoft has SmartScreen, but that's just the same risk model but hosted on Azure.

Google's eternal vagueness is infuriating, but in this case the whole setup is a disaster waiting to happen. Google's accidental fuck-up just prevented "someone hacked my server after I clicked on pr-xxxx.immich.app", because apparently the domain's security was set up to allow for that.

You can turn off safe browsing if you don't want these warnings. Google will only stop you from visiting sites if you keep the "allow Google to stop me from visiting some sites" checkbox enabled.

liquid_thyme · 2 months ago
I really don't know how they got nerds to think scummy advertising is cool. If you think about it, no user actually wants the thing they make money on - nobody wants ads or wants to see them, ever. Somehow Google has some sort of nerd cult where people think it's cool to join such an unethical company.
jazzyjackson · 2 months ago
Turns out it's cool to make lots of money
CobrastanJorji · 2 months ago
If you ask, the leaders in that area of Google will tell you something like "we're actually HELPING users because we're giving them targeted ads for the things they're looking for, at the time they're looking for them, which only makes things better for the user." Then you show them a picture of YouTube ads or something and it transitions to "well, look, we gotta pay for this somehow, and at least it's free, and isn't free information for all really great?"
chrneu · 2 months ago
unfortunately nobody wants to sacrifice anything nowadays so everyone will keep using google, and microsoft, and tiktok and meta and blah blah
LinXitoW · 2 months ago
It's super simple. Check out all the Fediverse alternatives. How many people that talk a big game actually financially support those services? 2% maybe, on the high end.

Things cost money, and at a large scale, there's either capitalism, or communism.

fHr · 2 months ago
Absolutely fuck Google

zackify · 2 months ago
The open internet is done. Monopolies control everything.

We've had an iOS app in the store for 3 years, and out of the blue Apple is demanding we provide new licenses that don't exist and threatening to kick our app out. Nothing changed in 3 years.

Getting sick of these companies being able to have this level of control over everything; you can't even self-host anymore, apparently.

srik · 2 months ago
> We've had an iOS app in the store for 3 years, and out of the blue Apple is demanding we provide new licenses that don't exist and threatening to kick our app out.

Crazy! If you can elaborate here, please do.

gomox · 2 months ago
Story of when it happened to my company: https://news.ycombinator.com/item?id=25802366