I stopped letting Google filter my world when I saw that it had de-indexed conversations between extremely well credentialed scientists when they came to unpopular conclusions regarding Covid policy. This story appears to confirm that Google sees ideological influence as part of their core mission. I don't really object to that when the ideology is about transparency, neutrality, objectivity, so this isn't anti-ideology. But I do object when it's about censorship, advocacy and feelings, because my own ideology is pro-Enlightenment. So I've de-Googled, and I will be ready to de-Kagi, de-Zoho, de-LineageOS, etc. if they become similarly captured. I just hope that ends with me still having access to modern tech.
Your position is still kind of anti-ideology, because transparency and neutrality don't force a perspective; they just maximize available information. So there's a stark difference from how Google behaves, since transparency delegates decision-making to the people who ought to make those decisions. And people will do so with different results.
That's around the time the Google Now feed became blatantly opinionated. It would find articles about things I'm interested in that were also tied to political hot topics, and if I clicked the button to say I wasn't interested in the hot topic, the only option was to tell it I wasn't interested in the topic I actually cared about, even though there was plenty of content on that topic that wasn't being tied into the zeitgeist.
> This story appears to confirm that Google sees ideological influence as part of their core mission
That's confirmation bias. I could make the same argument going the other way: that Google profits off misinformation by promoting content that is demonstrably false and has caused harm in the real world. BTW, scientists can have opinions too, and such opinions can be wrong. What opinions were expressed? That closing borders doesn't work? Yes, once there are multiple spreading points, restricting movement on an international scale doesn't work and would be unpopular due to the economic harm it brings. That masks don't work? Yes, they don't work, because to use them correctly you literally have to wear them 100% of the time whenever you are outside your bubble, something the public isn't very disciplined about.
Treating content without bias and having some things you don't like crop up next to the things you do like isn't "promoting" anything one way or the other. The moment you start meddling with boosting or deboosting things, you get into censorship territory. This is what the people fixated on "misinformation" or "disinformation" or "malinformation" or "information I don't like" don't get. Doubly so when they act like their favorite scientists, journalists and pundits are always right. Speaking of confirmation bias...
And speaking of masks as well, recall how early on in the corona debacle, The Science said that they don't work... :)
> This story appears to confirm that Google sees ideological influence as part of their core mission.
Google isn't threatening to deindex them, they're threatening to pull ads on the site. This is likely not because Google-the-ad-company cares very much but rather because their advertisers are very sensitive about what kinds of content their brand is associated with.
Additionally, the content that is most likely triggering Google's let's-not-scare-off-advertisers alarms is most likely not in the content itself but rather in the comment sections. For example, one of the articles they call out as being flagged falsely [0] for vaccine misinformation and hateful speech doesn't mention vaccines in the body but does have comments that would get flagged to death on HN, one of which mentions vaccines in a hostile way.
So this really isn't a case of Google censoring Naked Capitalism's content, it's a case of Google's advertisers not wanting to be associated with an unmoderated right-wing comment section. We can discuss whether that is broken (for example are the advertisers less skittish about unmoderated left-wing comments?), but it's not as clear cut as you make it sound.
A) Advertisers requesting content changes before showing ads isn't forcing you to drop that content; it's just saying you have to do so if you want their advertising. Why support Google's ad business at all?
B) "Here's a spreadsheet of complaints; if we look at this one last entry, it's not right, therefore 'Google's demand is capricious, arbitrary, and demonstrably false'" is quite the hilarious leap.
Based on its own logic, I think I now have more than enough bad-faith argument examples to consider this site "capricious, malicious and demonstrably false".
30 years ago, all respected news organizations had a strict separation between ads and content. The ad side was not allowed to tell the news side what to publish. Either you bought an ad or you didn't, but the ad side would never tell the news side why someone pulled their ads.
Manufacturing Consent appeared almost 40 years ago, and it complains about exactly this kind of influence. News organizations have always been highly dependent on ads (at least the ones that weren't dependent on being some magnate's vanity rag), so the reality is that they have always been deeply aware of how to avoid making the ad people too angry with the news they publish.
Maybe today they are more brazen in this intermingling, but it has always been there.
I had the same initial reaction as you, but steel-manning it, I think it makes sense that they wrote it the way they did. I still think they're being overly dismissive and reactionary, but given that this affects their income, they're probably a little justified in taking this bot-driven algorithmic bad news poorly.
I'm guessing they mean something like, "let's look at the case with everything in the book on it and see if it's valid. It's not valid, therefore this is capricious and arbitrary". For that to work, I'm also assuming that they aren't software engineers and don't understand how ML and Google work. They might even be thinking that there is a human reviewing their site and giving that determination. If you think that it was a human, I could definitely see where you'd find it capricious and arbitrary.
But on that note, actually, I just touched one of my own nerves. Maybe we should be holding them to the "human" standard. I am absolutely sick of living in the algorithmic world, where some algorithm makes a decision about me and I'm stuck with the fallout of that decision with no recourse. I think we need to start treating bot activity from a company as carrying the same level of liability as a human's.
> I'm also assuming that they aren't software engineers and don't understand how ML and Google work
NC has superior tech journalism to most dedicated "tech journalism" outlets. Yves et al have been sounding the alarm on algorithmic rulemaking sans human intervention for some time.
>But on that note, actually, I just touched one of my own nerves. Maybe we should be holding them to the "human" standard. I am absolutely sick of living in the algorithmic world, where some algorithm makes a decision about me and I'm stuck with the fallout of that decision with no recourse. I think we need to start treating bot activity from a company as carrying the same level of liability as a human's.
I sometimes take on a very uncharitable view of Big Tech and its focus on scale: they want all the power and profit of technological force-multipliers, without extending any responsibility and humanity for what it brings. I don't think it's unreasonable at all to expect well-resourced organizations like Google to invest in mitigating the externalities and edge cases their scale naturally imposes on the world. The alternative is basically letting a sociopathic toddler run wild with a flamethrower.
Google needs to die, in order for the Internet to live; their near-monopoly is stifling and they have repeatedly proven they are not good stewards of the outsized amount of power they have.
The internet is dead, by the end of the year most of it will be generated, including the comments here.
Both Google and Microsoft are racing to put a 'generate' button on every single input field: emails, forms, documents, etc.
Nobody can index and properly rank the amount of vomit that is spilling out right now. The cost per byte created is basically zero, but the cost per byte indexed is far from it.
Let them force censorship, the transformer will generate whatever they want. Just paste the policy in gemini's 10m token model and ask it to rewrite it accordingly, no harm done.
They don't have to silence opinions; they could forgo the ads. And hosting services. And anyone else upset by their content.
and build their own!
There are some folks on the other side of the political spectrum who had similar problems not long ago. Perhaps their alternative networks provide an opportunity for you? They've been through this already.
Yes, it's not impossible, but let's be serious for a moment, Google does control an enormous amount of web traffic. Are we really comfortable with advertising dictating which content is easier to popularize? I know I'm not.
Or Google could perhaps not enforce censorship on people who want to show ads. On the contrary, it should put pressure on the advertisers who demand it, because it is the one in the dominant position.
It would even strengthen their own standing in a lot of ways, so this seems to be bad strategy overall, even if you want to maximize profit.
There are other ways to find websites, like HN and word of mouth; I wish people made more of an effort to create and maintain alternative indices of websites. Too many people have gotten used to googling things instead of using bookmarks and other ways to find a site.
His commenters are saying too many "bad words" that Google Ads doesn't like. It's been like this for over a decade, but from my perspective the "bad word list" has grown out of control, to the point where people can't have anything more than kindergarten conversations on pages hosting Google ads. I kind of wish they would just ban every word in the dictionary so we can move on from their control of the web.
If HN had google ads, the URL list in this spreadsheet would reach the moon.
> I kind of wish they would just ban every word in the dictionary so we can move on from their control of the web.
You know, when a company doing stupid things causes the entire world to self-censor, it means there's something very wrong with the supposed democracy where it's hosted, and with the other ones letting their people be preyed upon.
This is the one area where Google's massive power on web advertising could be used for good. If they would push back against this stuff to advertisers, the advertisers would (for the most part) just lighten up.
What really happens is that both Google and the advertisers want it to be this way. Many Google employees hate the idea that their search engine "surfaces harmful information," and they define "harmful" in a much more broad way than most people do.
So no, Google doesn't get to use this excuse about "we're just hamstrung by the advertisers!"
This is the danger of hosting your own comment sections: in the eyes of the powers that be (be they companies like Google or other corporations), you are held ultimately responsible for user-generated content. There's some leniency, of course, since you didn't write it yourself.
Personal anecdotes: members posted porn GIFs on my old-fashioned forums 20-odd years ago, and Google did not appreciate that.
Members posted stills from an unreleased game on my current forums; we got DMCA takedown notices from at least three separate legal representatives associated with the company that published said game. If we hadn't done the takedowns ourselves, they probably would have gone to the host and had our site shut down.
TL;DR: if you use a third party (host, service), your content has to comply with their terms of service.
And is this not also a danger of letting one advertiser become dominant over the entire internet and control a large part of human discussion?
Did you actually read the article? Google is literally telling them to remove content that it doesn't want. The article confirms the bias.
[0] https://www.nakedcapitalism.com/2022/06/blowback-for-the-twe...
pressure from people → pressure from advertisers → pressure from google → pressure on sites
Here's an example of the FB PORN:
https://m.facebook.com/profile.php/?id=61556223509538
https://m.facebook.com/profile.php/?id=61556194929290
https://m.facebook.com/profile.php/?id=61551211684729
I've tried reporting them, and I'm constantly told this is OK per community standards.
The real Eternal September has just begun.
Running the data centres has massive costs in terms of the energy required.
I know it is not Google. But since then, I've known that some links will fall into obscurity because of some form of moderation/censorship.