This is the problem I had with all the content removal around Covid: it never ends with the one topic we might not mind seeing removed.
From another comment: "Looks like some L-whateverthefuck just got the task to go through YT's backlog and cut down on the mention/promotion of alternative video platforms/self-hosted video serving software."
This is exactly what YT did with Covid related content.
Here in the UK, Ofcom held their second day-long livestreamed seminar on their implementation of the Online Safety Act on Wednesday this week. This time it was about keeping children "safe", including with "effective age assurance".
Ofcom refused to give any specific guidance on how platforms should implement the regime they want to see. Their stated reasoning is that giving specific advice might restrict their ability to take enforcement action later.
So it's up to the platforms to interpret the extremely complex and vaguely defined requirements and impose a regime which Ofcom will find acceptable. It was clear from the Q&A that some pretty big platforms are really struggling with it.
The inevitable outcome is that platforms will err on the side of caution, bearing in mind the potential penalties.
Many will say this is good, that children should be protected. The second part is true. But in my opinion the way this is being done won't protect children; it will just push many more topic areas over the censorship threshold.
In my experience with Ofcom, child safety is just the gateway to a vague list of bullet points including “terrorism” and “hateful” content (vaguely defined); what could go wrong??