How many such stories do we have to come across before we, as a community, come together? Apple's and Google's monopolies have to be broken. It's insane that your livelihood depends on the mercy of one organization.
If they cannot support their customers at the scale at which they operate, they should not be allowed to do business at that scale. Google clearly cannot, and they trivially mow people down, as ruthlessly as any careless driver plowing through a street cart, with no accountability for their actions, and no recourse for the customer.
Yes, creators shouldn't be dependent on Alphabet; they should back up their content and diversify across platforms. But because we decided to allow the monetization of the web to be monopolized, and to vigorously encourage the surveillance-based adtech of Google and Facebook, those companies control the full stack and effectively hold audiences hostage. You have to play on their platforms in order to engage with the audience you build, and the vast majority of content consumers are ignorant of the roles platforms play. If you leave the platform, you lose the access; if you run multiple channels, you get shadowbans and other soft penalties to discourage you from being disloyal to Google.
We should have a massive diversity of federated, decentralized platforms, with compatible protocols and tools. People should have to think about CDNs and platforms as little as they think about what particular ISP is carrying their traffic between a server and their home.
There should be a digital bill of rights that curtails platforms' power to control access and reach, forces interoperability, eliminates arbitrary algorithmic enforcement, and allows due process, with mandatory backout periods giving people a reasonable opportunity to recover digital assets, communicate with their audience, and migrate to a new platform.
The status quo is entirely untenable; these companies should not have the power to so casually and arbitrarily destroy people's livelihoods.
It's not really that simple. There are already alternative video hosting and streaming sites; the article mentions that this creator is already using one, in fact. The reason YouTube is such a big deal is its market dominance. Everyone watches there, and therefore it is valuable. "Breaking it up" just turns it into another one of the many, many competitors that already exist.
Don't get me wrong, I'm not defending YouTube's behavior here. It's bad and shouldn't just be shrugged off. I just don't think that shouting "monopoly!" actually fixes anything. Video hosting and streaming sites with less market dominance and better moderation policies already exist, and everyone is free to use them.
> "breaking it up" just turns it into another one of the many many competitors that already exist.
That's very much the point: collaring and tranquilizing the 900-pound gorilla in the room so that the reasons people might have to interact with the 30 other monkeys become relevant.
There's a sort of circular problem where basically every creator's videos are on YouTube, but many don't replicate their videos to other video platforms. Viewers won't leave, in part because other sites lack content; creators won't cross-post, because other sites lack viewers.
Some of that would be alleviated if we separated hosting/serving videos from the frontend and indexing, perhaps with a radio-like agreement on what the host gets paid for serving the video to a customer of the frontend. Frontend/index makes money off ads, and then pays some of that back to the host. Creators could in theory be paid by the video hosts, since views make the host money.
Heavy-handed moderation could then become a disadvantage, because the moderating site would lack content other sites have (though some of that content would be distasteful enough that most frontends would ban it).
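To make that radio-like split concrete, here's a minimal sketch of how money could flow under such a model. Every rate and number below is invented purely for illustration; nothing here reflects any real platform's economics.

    # Hypothetical revenue split among frontend, host, and creator.
    # All rates are assumptions chosen only to illustrate the flow.
    AD_REVENUE_PER_VIEW = 0.004  # frontend's ad income per view (assumed)
    HOST_SHARE = 0.50            # share the frontend owes the host (assumed)
    CREATOR_SHARE = 0.70         # share the host owes the creator (assumed)

    def settle(views: int) -> dict:
        """Split one video's ad revenue among frontend, host, and creator."""
        gross = views * AD_REVENUE_PER_VIEW
        to_host = gross * HOST_SHARE          # frontend pays host per view served
        to_creator = to_host * CREATOR_SHARE  # host pays creator for the content
        return {
            "frontend_keeps": gross - to_host,
            "host_keeps": to_host - to_creator,
            "creator_gets": to_creator,
        }

    print(settle(1_000_000))
    # frontend keeps ~$2000, host keeps ~$600, creator gets ~$1400

The point is that the host gets paid whether or not a given frontend likes the content, so hosting and moderation policy become separable decisions.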
Or maybe breaking up YouTube would allow a syndication standard to take its place, and we'd get an explosion of value for consumers like we got in podcasting.
These companies simply have too much influence on a global scale for the US to ever kneecap them. For every valid worry the West has about TikTok, the exact same argument could be made about YouTube in reverse.
There's some kind of basic theorem about situations like this: doing something about injustice happens at a rate proportional to both (a) the injustice and (b) the ease of doing something about it. The injustice is pervasive (low-level, but constant, and indicative of a situation in which people have unaccountable power over the public). But doing something about it requires a type of organizing that... nobody knows how to do. Or at least nobody remembers how to do. So the barrier to it happening is extremely high.
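Loosely formalized (my own restatement, not an actual theorem):

    rate of redress ∝ (size of injustice) × (ease of acting on it)

Here the first factor is large but the second is near zero, so the product stays small.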
And do what, exactly? Personally I avoid YouTube as much as possible; I might watch two or three short videos per month. I also never bought an Apple product, save for an iPod years ago. No one needs any of those things.
It's not a consumer issue. The fix we would need is laws analogous to those that protect workers from their employers. That is pretty far away in the current US political economy, and would presumably require creators bringing some kind of organized pressure to bear on Big Tech or on their government, analogous to a union.
If an automated system is making the decision to cancel a customer's account, then companies should be required to give cancelled customers a way to speak to a human about the inevitable false positives.
They do what they want and you (or me or anyone else not on the board of directors) don't have any say over it. They could even have a daily lottery that randomly chooses a couple of people and have all their accounts permanently frozen/closed/cancelled with no recourse at all, ever.
We often make fun of stupid European regulations, like the AI ones, but it is precisely in cases like this that they are useful: to ensure this kind of thing cannot happen when companies hold such a monopoly that users have no power.
Do those regulations really "ensure that [incidents like this] could not happen"?
I ask this in good faith, because my observation of the last few years is that the incidents still occur, with all of the harms to individuals also occurring. Then, after N number of incidents, the company pays a fine*, and the company does not necessarily make substantive changes. Superficial changes, but not always meaningful changes that would prevent future harms to individuals.
*Do these fines tend to be used to compensate the affected individuals? I am not educated on that detail, and would appreciate info from someone who is.
I don't recall the full stack of EU regulations in detail, but a requirement that appeal to an actual human be possible after automated decisions is in there somewhere, AFAIK.
I don't think there's any regulation that can really help here. You can't force a plumber to do business with Rita, or American Airlines to accept Steve, who's been super rude to the stewards on board; you can't really force anybody to do business with you.
The only exception I know of, where some regulation means they legally can't just say "no", is banks. And trust me, if banks don't want you as a customer they will do everything in their power to maliciously comply, to the point that your account is useless and permanently frozen.
What is this lunacy about regulating Google? If Google doesn't want Enderman, you can't force them to have him.
I get it: what you really mean is regulation forcing companies to process and communicate via non-automated, non-AI systems for whatever a, b, c issue or reason. But this doesn't change anything, because of how simple and cheap malicious compliance is.
All Google needs to do is say "yeah, okay, we'll also review it with a human" and have some intern press a green button manually.
Unless you can prove discrimination, it's their house, it's their business, they can and should do what they want.
The issue is that Youtube is one of the strongest and hardest to break monopolies on the internet. It's the hardest part of the degoogling process.
Then they shouldn't be permitted to operate at a scale where their unwillingness to do business with you causes you to be unable to transact with entire business sectors.
If Digi-Key decides they don't want to do business with me, I am not suddenly unable to buy from 30% of the world's manufacturers, unable to sell to 70% of my customers, and locked out of my manufacturing line's PLC.
If Safeway decides to decline my business, I am not locked out of eating bread from anyone who buys their flour from them.
If Coca-Cola doesn't want to renew our contract because I mentioned to my customers that we also stock Pepsi, I can still buy Coca-Cola from a wholesaler and resell it; and regardless, I don't lose access to my accountant and my mailbox when they terminate that relationship.
> If Google or any other platform doesn't want you on their platform, nobody can force them to have you.
This is demonstrably false.
Where I live, stores aren't allowed to refuse a sale under most circumstances (barring some specifically listed exceptions, like selling alcohol to minors). Same for schools: we don't have a concept of "expulsion" unless it's court-mandated. There's no reason a similar regulation couldn't be applied to digital platforms.
Whether such a regulation should exist is a different matter entirely. Fighting fraud and scams is difficult enough already; making them harder to fight means we get more of them. Either that, or Google starts demanding rigorous ID verification from everybody who wants a YouTube channel.
> If Google or any other platform doesn't want you on their platform, nobody can force them to have you.
That's just not true.
Up till now, no government has (to my knowledge) tried to dictate to a major American platform owner that they may not ban certain users or classes of users, but that doesn't mean that they can't.
It's really not the same thing as the issue of forcing an employer to rehire an illegally-fired employee—where the employee then remains there under a cloud, because they have to continually interact with the people who wanted them gone. In 99.999% of cases, when a platform removes a user, there's zero relationship between that user and the people involved in making that decision.
If Congress made a law tomorrow (laughable in the current environment, I know) that said that any public video platform provider with over X users couldn't ban anyone except for specific reasons, then YouTube would, indeed, have to keep such people on their platform.
What's up with the TeamYouTube account advising him to delete his X post for security reasons because the post contains a channel ID? As if a channel ID weren't public information, but some secret private key or something?
The reason this happened will probably never be revealed, but I predict it's probably because the channel was uploading videos through a VPN, and wound up sharing an IP address with someone who was using the same VPN for piracy.
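Purely speculative, but here is a toy illustration of how that kind of guilt-by-shared-IP linking could sweep up an innocent channel. Every channel name and address below is made up; this is not YouTube's actual logic.

    # Naive account-linking sketch: channels are "linked" merely because
    # they uploaded from the same IP, and strikes propagate across links.
    from collections import defaultdict

    upload_log = [
        ("tech_channel",    "198.51.100.7"),   # VPN exit node (hypothetical)
        ("piracy_channel",  "198.51.100.7"),   # same exit node, unrelated user
        ("cooking_channel", "203.0.113.42"),
    ]
    struck_channels = {"piracy_channel"}  # channels with copyright strikes

    # Group channels by the IPs they uploaded from.
    channels_by_ip = defaultdict(set)
    for channel, ip in upload_log:
        channels_by_ip[ip].add(channel)

    # Naively ban every channel that ever shared an IP with a struck one.
    banned = set()
    for struck in struck_channels:
        for peers in channels_by_ip.values():
            if struck in peers:
                banned |= peers

    print(banned)  # tech_channel gets swept up as a false positive

A real system would presumably weigh many more signals, but any correlation this blunt will misfire exactly this way on shared VPN exits.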
Because the current hype cycle of "AI" has subsumed the terms "algorithm" and "machine learning" to describe automated decision-making processes that rely on some level of modeling and applied statistics instead of deterministic code.
I don't love the way the language around this is evolving; it's mostly a marketing tool to make these tools seem like much more than they are. Primarily this is driven by the current generative "AI" bubble.
"A popular tech YouTuber with over 350,000 subscribers has lost his channel after YouTube’s automated systems flagged him for an alleged connection to a completely unrelated Japanese channel that received copyright strikes."
This is a very difficult situation for creators. It is a hassle, but spreading videos out across different platforms seems to be the only viable solution. That's not great, as some platforms bring in more revenue than others.
This is only the beginning of fucking around and finding out how putting "AI" into everything will create all kinds of problems for humanity.
Relevant Idiocracy clip:
https://www.youtube.com/watch?v=7THG28GprSM
Regulations never prevent things from happening, and neither do laws; they offer recompense when things do happen.
As for fines being distributed to the affected individuals, that is rare.
https://x.com/TeamYouTube/status/1985378776562168037
That could be as simple as a database lookup against flagged accounts, or a basic heuristic score.
We're over-AI-ing everything.
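For what it's worth, the "AI" doing the terminating could be as dumb as the sketch below: a denylist lookup plus a hand-tuned score. Every signal, weight, and threshold here is invented; this is not YouTube's actual logic.

    # "Automated enforcement" that involves no machine learning at all.
    FLAGGED_CHANNELS = {"UC_some_flagged_id"}  # hypothetical denylist

    def should_terminate(channel_id: str, shared_ip_hits: int,
                         copyright_strikes: int) -> bool:
        if channel_id in FLAGGED_CHANNELS:  # plain database lookup
            return True
        # Hand-tuned linear score with an arbitrary cutoff.
        score = 2 * copyright_strikes + shared_ip_hits
        return score >= 5

    print(should_terminate("UC_innocent", shared_ip_hits=5, copyright_strikes=0))
    # True -- terminated by a heuristic, no "AI" required

Calling that "AI" is pure branding, which is exactly the point.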
Only when we have a competitor will this be a problem for YouTube.