Discussion of offences related to: prostitution, drugs, abuse & insults, suicide, "stirring up of racial/religious hatred", fraud and "foreign interference".
So one imagines a university student discussing, say: earning money as a prostitute. Events/memories related to drug taking. Insulting their coursemates. Ridiculing the iconography of a religion. And, the worst crime of all, "repeating Russian propaganda" (eg., the terms of a peace deal) -- which Russians said it, and whether it is true, are -- of course -- questions never asked nor answered.
This free-thinking university student's entire online life seems to have been criminalised in mere discussion by the OSA; there may have been zero actual actions involved (consider, though, that a majority of UK students at the most prominent universities have taken class-A drugs).
This seems as draconian, censorious, illiberal, repressive and "moral panic"-y as the highs of repressive Christian moralism in the mid-20th century.
We grew up with the internet being a fun place where fun things happen and you don't need to take it so seriously. It was the symbol of freedom. Then the internet evolved into a business center, where everything is taken extremely seriously: don't you dare break the etiquette. It's a sad change to witness, but it is what it is.
I'm no fan of this act but your characterisation is highly misleading.
To pick two examples from the document you linked:
Discussion of being a sex worker would not be covered. The only illegal content relating to sex work would be if you were actively soliciting or pimping. From the document:
* Causing or inciting prostitution for gain offence
* Controlling a prostitute for gain offence
Similarly, discussion of drug use wouldn't be illegal either per se, only using the forum to buy or sell drugs or to actively encourage others to use drugs:
* The unlawful supply, offer to supply, of controlled drugs
* The unlawful supply, or offer to supply, of articles for administering or preparing controlled drugs
* The supply, or offer to supply, of psychoactive substances
* Inciting any offence under the Misuse of Drugs Act 1971
That's very different to criminalising content where you talk about being (or visiting) a prostitute, or mention past or current drug use. Those things would all still be legal content.
Those are indeed against the law. The issue is what these platforms are required to censor on behalf of these other laws.
Recall that we just spent several years in which discussion of major political issues of concern to society was censored across social media platforms. Taking an extremely charitable interpretation of what government demands will be made here isn't merely naïve but empirically false.
And the reason I chose those kinds of illegal activities was to show that these very laws themselves are plausibly oppressive as-is, plausibly lacking in "deep democratic" support (ie., perhaps surviving on very thin majorities) -- and so on.
And yet it is these laws for which mass interactive media will be censored.
> [..] as the highs of repressive christian moralism in the mid 20th C.
What makes you pick the mid-20th century as the high point of repressive christian moralism? That doesn't seem even close to the high point if you look back further in history.
I was specifically thinking of the censorship of mass media which took place in the west from the 20s-90s, which enforced a "family values" kind of christian moralism. Prior to the 20s, mass media wasn't particularly censored (https://en.wikipedia.org/wiki/Pre-Code_Hollywood):
> From 1737 to 1968, the Lord Chamberlain had the power to decide which plays would be granted a licence for performance; this meant that he had the capacity to censor theatre at his pleasure.
> To assist local authorities in identifying obscene films, the Director of Public Prosecutions released a list of 72 films the office believed to violate the Obscene Publications Act 1959.
This is because use of this data could create significant risks to the individual’s fundamental rights and freedoms. For example, the various categories are closely linked with:
- freedom of thought, conscience and religion;
- freedom of expression;
- freedom of assembly and association;
- the right to bodily integrity;
- the right to respect for private and family life; or
- freedom from discrimination.
Where does it say discussion of those offences is illegal content? It says "content that amounts to a relevant offence". Frustratingly that is nonsensical: content surely cannot "amount to an offence" in and of itself. Offences have elements, which fall into two categories: actus reus and mens rea. And "content" cannot be either. Perhaps posting some content or possessing some content is the actus reus of an offence but the content itself does not seem to me to sensibly be able to be regarded as "amounting to an offence" any more than a knife "amounts to an offence". A knife might be used in a violent offence or might be possessed as a weapons possession offence but it makes no sense to me to say that the knife "amounts to an offence".
Either way, the point of that document in aggregate seems to be that "illegal content" is content that falls afoul of existing criminal law already: (possession and distribution of) terrorist training material is already illegal and so it is illegal content. But saying that you committed an offence is not, in and of itself, an offence, so saying you took drugs at university doesn't seem to me like it could be illegal content. Encouraging people to do so might be, but it already is.
Maybe I missed the bit where it says discussing things is illegal, so correct me if I am wrong.
> This free-thinking university student's entire online life seems to have been criminalised in mere discussion by the OSA
There's nothing illegal about hosting a forum. The problem is that you as the site operator are legally required to take down certain kinds of content if and when it appears. Small sites with no money or staff don't have the resources to pay for a full-time moderator. That cost scales with the number of users. And who knows what's in those 2.6M historical posts.
From TFA:
> The act will require a vast amount of work to be done on behalf of the Forums and there is no-one left with the availability to do it
Maybe an LLM can carry some of the load here for free forums like this to keep operating?
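As a sketch of what that might look like: a small forum could route new posts through some external classifier (an LLM or a hosted moderation API) and hold anything that scores high for human review, so the moderator only sees the flagged fraction. The category names and thresholds below are made up for illustration; they are not from the Act or any real service.

```python
# Sketch: triage new forum posts using per-category risk scores in
# [0, 1] returned by some external classifier (LLM or moderation API).
# The categories and thresholds here are illustrative assumptions only.

REVIEW_THRESHOLDS = {
    "illegal_drugs_sale": 0.5,
    "incitement": 0.4,
    "csam": 0.01,  # effectively zero tolerance
}

def triage(scores: dict) -> str:
    """Return 'hold' if any category score crosses its threshold,
    otherwise 'publish'. Held posts go into a human moderator queue."""
    for category, threshold in REVIEW_THRESHOLDS.items():
        if scores.get(category, 0.0) >= threshold:
            return "hold"
    return "publish"

if __name__ == "__main__":
    # In practice the scores would come from the classifier call;
    # hard-coded dicts here just show the decision step.
    print(triage({"incitement": 0.9}))          # hold
    print(triage({"illegal_drugs_sale": 0.1}))  # publish
```

Whether a probabilistic classifier counts as a "proportionate system" under the Act is exactly the kind of question the vague wording leaves open, which is part of the problem being discussed.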
All you need to do is have a think about what reasonable steps you can take to protect your users from those risks, and write that down. It's not the end of the world.
1.36 Table 1.2 summarises the safety duties for providers of U2U services in relation to different types of illegal content. The duties are different for priority illegal content and relevant non-priority illegal content. Broadly they include:
a) Duties to take or use proportionate measures relating to the design or operation of the service to prevent individuals from encountering priority illegal content and minimising the length of time that such content is present on the service;
b) Duties to take or use proportionate measures relating to the design or operation of the service to design and operate systems in a way which mitigates and manages the risks identified in the service provider’s risk assessment;
c) A duty to operate the service using proportionate systems and processes designed to swiftly take down (priority or non-priority) illegal content when they become aware of it (the ‘takedown duty’); and
d) A duty to take or use proportionate measures relating to the design and operation of the service to mitigate and manage the risk of the service being used for the commission or facilitation of a priority offence
That is false. The post you replied to virtuously linked directly to the UK government's own overview of this law. Just writing down "reasonable steps" [1] is insufficient - you also have the following duties (quoting from the document):
- Duties to take or use proportionate measures relating to the design or operation of the service to prevent individuals from encountering priority illegal content and minimising the length of time that such content is present on the service;
- Duties to take or use proportionate measures relating to the design or operation of the service to design and operate systems in a way which mitigates and manages the risks identified in the service provider’s risk assessment;
- A duty to operate the service using proportionate systems and processes designed to swiftly take down (priority or non-priority) illegal content when they become aware of it
- A duty to take or use proportionate measures relating to the design and operation of the service to mitigate and manage the risk of the service being used for the commission or facilitation of a priority offence.
- The safety duty also requires providers to include provisions in their terms of service specifying how individuals are to be protected from illegal content, and to apply these provisions consistently.
Even if the language of this law were specific, it would require so many invasive and difficult steps that no hobbyist, or even small company, could reasonably meet them. But it's anything but specific -- it's full of vague, subjective language like "reasonable" and "proportionate" that would be ruinous to argue in court for anyone but billion-dollar companies. And even for them, the end result will be that they are forced to accede to whatever demands some government-sanctioned online safety NGO sets, establishing a never-ending treadmill of keeping up with what will become "industry standard" censorship. Because it's either that, or open yourself to huge legal risk: in rejecting "industry standard" and "broadly recognized" censorship guidance to try to uphold some semblance of free discussion, you will be found to have failed to be "reasonable" and "proportionate" -- to have "disregarded best practices and recognized experts in the field".
But, short of such an obvious breach, the rules regarding what can and can't be said, broadcast, forwarded, analysed are thought to be kept deliberately vague. In this way, everyone is on their toes and the authorities can shut down what they like at any time without having to give a reason. [2]
[1] Good luck arguing over what is "reasonable" in court if the government ever wants to shut you down.
That is a very tricky one to manage on an online forum. If an American expresses an opinion about UK policy, in a literal sense that is foreign interference. There isn't a technical way to tell propagandists from opinionated people. And the most effective propaganda, by far, is that which uses the truth to make reasonable and persuasive points -- if it is possible to make a point that way, then that is how it will be done.
The only way this works is to have a list of banned talking points from a government agency. I'd predict that effective criticism of [insert current government] is discovered to be driven mainly by foreign interference campaigns trying to promote division in the UK.
This runs into the same problem as all disinformation suppression campaigns: governments have no interest in removing the stuff everyone agrees is untrue -- what is the point? The flat earthers are never going to gain traction, and it doesn't matter if they do. The only topics worth suppressing are things that are plausible and persuasive -- the topics most likely to turn out to be true in hindsight.
> The only way this works is to have a list of banned talking points from a government agency.
How so? The "obvious" solution to me, from the perspective of a politician, would be to 1. require online identity verification for signup to any forum hosted in your country, and then 2. using that information, only allow people who are citizens of your country to register.
The British legal system is a common law one like the U.S., I believe, so it would be up to court interpretation.
Foreign interference would probably be interpreted as an organized campaign of interference being launched by a foreign power.
>This runs into the same problem as all disinformation suppression campaigns - governments have no interest in removing the stuff everyone agrees is untrue
At one time everyone agreed anti-vax was untrue, and now it's American government policy but still just as untrue.
The legislation follows the general structure of the Health and Safety Act of a couple of decades ago. That also caused a big right-wing press crisis, and then we all sort of moved on, did a bit more paperwork, and now fewer people die in factory accidents. It's really quite helpful to start practically implementing this stuff rather than philosophising about it.
Yeah it's all a series of no biggies. But one day citizens in your sinking ship of a country will be looking overseas at countries like Afghanistan in longing as they flip ends of the leaderboard with you.
> This seems as draconian, censorious, illiberal, repressive and "moral panic"y as the highs of repressive christian moralism in the mid 20th C.
Given what has happened to the US as a result of unbridled free broadcast of misinformation and disinformation, we definitely need more "draconian, censorious, illiberal, repressive" rules around the propagation of such media.
Moral panic is EXACTLY what's called for!
You have captains of industry and thought leaders of the governing party throwing fucking nazi salutes, and this is broadcast to the masses! Insanity to defend free speech after the country is circling a drain as a result of said free speech.
Related post with a large discussion from someone who said:
"Lfgss shutting down 16th March 2025 (day before Online Safety Act is enforced)
[...] I run just over 300 forums, for a monthly audience of 275k active users. most of this is on Linode instances and Hetzner instances, a couple of the larger fora go via Cloudflare, but the rest just hits the server.
Anyone* would be crazy to run a UK-based or somewhat UK-centric forum today. Whether it be for a hobby, profession, or just social interaction. The government doesn’t perceive these sites as having any value (they don't employ people or generate corporation tax).
[*] Unless you are a multibillion $ company with an army of moderators, compliance people, lawyers.
Well I'm on a forum run by a UK company, hosted in the UK, and we've talked about this, but they're staying online. And, no, they're not a multibillion dollar company.
I don't see our moderators needing to do any more work than they're already doing, and have been doing for years, to be honest.
As long as they don't upset anyone with influence (government, media, etc.), they'll probably be fine. Otherwise, at best they'll be looking at a ruinously expensive legal battle to justify if what they did was "reasonable" or "proportionate" - the vague terms used by the law.
For my friends, everything; for my enemies, the law.
At least they're a UK company though so presumably they've at least got some money to support this. If you're an individual running a hobby forum then you're SOL
More than just forums -- it's basically a failed state now. I knew when I left (I was the last of my school year to do so) that it was going to get bad once Elizabeth died, and that that would be soon, but I never imagined it would get this bad.
The plan for April is to remove the need for police to obtain a warrant to search peoples homes - that bad.
I'd say "there will be blood on the streets", but there already is...
No, the proposal is that there is a power of entry where the police have reasonable grounds to believe stolen property is on the premises and that this is supported by tracking data and that authority to enter is provided and recorded by a police inspector.
This is analogous to s18 PACE post-arrest powers, grafted onto s17 PACE.
The alternative is that we continue to require police to try and get a fast-time warrant while plotted up outside a premises; this is not a quick process, I've done it and it took nearly two hours.
The opposite is true. The new law makes it considerably more risky for large companies because the law is specifically designed to hold them to account for conduct on their platforms. The (perceived) risk for small websites is unintended and the requirements are very achievable for small websites. The law is intended for and will be used to eviscerate Facebook etc. for their wrongs. We are far more likely to see Facebook etc. leave the UK market than we are see any small websites suffer.
A small website operator can keep child pornography off their platform with ease. Facebook have a mountain to climb — regardless of their resources.
> A small website operator can keep child pornography off their platform with ease. Facebook have a mountain to climb — regardless of their resources.
Facebook can actually train AI to detect CSAM, and is probably already doing so in cooperation with NCMEC and similar organisations/authorities across the world.
Your average small website? No chance. Obtaining training material actively is seriously illegal everywhere, and keeping material that others upload is just as bad in most jurisdictions.
The big guys get the toys, the small guys have to worry all the goddamn time if some pedos are going to use their forum or whatnot.
HEXUS stopped publishing in 2021, and the company no longer exists. The forums were kept because they don't take much work to keep online. Now, there's a lot of work to do, like reading hundreds of pages of documents and submitting risk assessments. There's nobody to do that work now, so the idea was it could go into read only mode. The problem with that was, some users may want their data deleted if it becomes read only. Therefore, the only option is to delete it.
GDPR compliance depends a lot on who you ask, and only a court can make the final decision.
Stripping all usernames out of a forum certainly makes it safer, but I don't think anyone can say there still won't be a few pissed off users who wrote things they now regret on there, and can be tracked back to individuals based on context/writing style alone.
Summary: the UK has the Online Safety Act; any website that lets users interact with other users has to police illegal content on its site and must implement strong age verification checks. The law applies to any site that targets UK citizens or has a substantial number of UK users, where "substantial number" is not defined.
I'm going to guess this forum is UK-based just based on all the blimeys. Also, the forum seems to have been closed to new users for some time, so it was already in its sunset era.
The admin could just make it read only except to users who manually reach out somehow to verify their age, but at the same time, what an oppressive law for small UK forums. Maybe that's the point.
> any websites that let users interact with other users has to police illegal content on its site and must implement strong age verification checks.
But I believe you only need age verification if pornography is posted. There's also a bunch of caveats about the size of user base - Ofcom have strongly hinted that this is primarily aimed at services with millions of users but haven't (yet) actually clarified whether it applies to / will be policed for, e.g., single-user self-hosted Fediverse instances or small forums.
I don't blame people for not wanting to take the risk. Personally I'm just putting up a page with answers to their self-assessment risk questionnaire for each of my hosted services (I have a surprising number that could technically come under OSA) and hoping that is good enough.
> Ofcom have strongly hinted that this is primarily aimed at services with millions of users but haven't (yet) actually clarified [...]
This has echoes of the Snooper's Charter and Apple's decision to withdraw ADP from all of UK.
It is not enough for regulators to say they don't anticipate enforcing the law against smaller operators. As long as the law is on the books, it can (and will) be applied to a suitable target regardless of their size.
I saw this same bullshit play out in Finland. "No, you are all wrong, we will never apply this to anything outside of this narrow band" -- only to come down with the large hammer less than two years later because the target was politically inconvenient.
That's quite sizeable. How many sites can you name have 7 million monthly active UK users? That's over one-in-ten of every man, woman and child in the UK every month using your site.
Yes, the actual draft doesn't really add many requirements for non-"large" services: pretty much just having some kind of moderation system, some way of reporting complaints to it, and a filed "contact" individual. I note it doesn't require the proactive internal detection of "harmful" content that many people here seem to assume, just acting on content they already have 'reason to believe' is illegal. Even hash-based CSAM detection/blacklisted URLs isn't required until you're a larger provider or a file-share product.
It just seems like an overly formalized way of saying "all forums should have a 'report' button that actually goes somewhere", and I'd expect that to already be the case on pretty much every forum that ever existed. Even 4chan has moderators.
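For what it's worth, the two mechanisms mentioned for small services can be sketched in a few lines: a report queue behind a "report" button, plus (for larger providers) a hash-based blocklist check on uploads. Everything below is illustrative: real schemes use perceptual hashes supplied by bodies like the IWF or NCMEC, not plain SHA-256, and the storage here is just in-memory stand-ins for database tables.

```python
# Sketch of a minimal "report button that goes somewhere" and a
# hash-based upload blocklist. Hash values and data structures are
# illustrative assumptions, not any real blocklist.

import hashlib

# Hex digests of known-bad files (this one is just sha256(b"foo")).
BLOCKLIST = {
    "2c26b46b68ffc68ff99b453c1d30413413422d706483bfa0f98a5e886266e7ae",
}

def upload_allowed(data: bytes) -> bool:
    """Reject an upload whose exact hash is on the blocklist."""
    return hashlib.sha256(data).hexdigest() not in BLOCKLIST

reports: list[dict] = []  # stand-in for a moderation-queue table

def report_post(post_id: int, reason: str) -> None:
    """What the 'report' button does: record the complaint somewhere
    a human moderator will actually see it."""
    reports.append({"post_id": post_id, "reason": reason})
```

Note the obvious limitation the thread touches on: an exact-hash check is trivially defeated by re-encoding a file, which is why the real systems use perceptual hashing that small operators cannot easily obtain.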
Rather than shut it down, would it be possible to sell the forum to someone in the US for a little bit of money, like $20 or something?
Idea being the US-based owner migrates the DB with posts and user logins to servers hosted on US soil, then if the UK government comes knocking the former owners in the UK can say "Sorry it doesn't belong to us anymore, we sold it, here's the Paypal receipt." (Ideally they'd sell the domain too, but as long as you still have the DB you could always host the forum at a different domain.)
Any forum admins here willing to add another forum to their portfolio?
Or maybe open it up to scraping so someone can archive it--if the content is that useful, surely some hobbyist outside U.K. with a few GB of disk space would be willing to host it.
It's clear this law terribly affects bona fide grassroots online communities. I hope HN doesn't start geoblocking the UK away!
But then online hate and radicalization really is a thing. What do you do about it? Facebook seems overflowing with it, and their moderators can't keep up with the flow, nor can their mental health keep up. So it's real and it's going to surface somewhere.
At some level, I think it's reasonable that online spaces take some responsibility for staying clear of eg hate speech. But I'm not sure how you match that with the fundamental freedom of the Internet.
You don't. "Hate speech" is code for "the government knows better and controls what you say."
Yes, racism exists and people say hateful things.
Hate speech is in the interpretation. The US has it right with the first amendment - you have to be egregiously over the line for speech to be illegal, and in all sorts of cases there are exceptions and it's almost always a case-by-case determination.
Hateful things said by people being hateful are a culture problem, not a government problem. Locking people up because other people are offended by memes or shitposts is draconian, authoritarian, dystopian nonsense and makes a mockery of any claims about democracy or freedom. Europe and the UK seem hell-bent on silencing the people they should be talking with and to. The inevitable eventual blowback will only get worse if stifling, suppressing, and prosecuting is your answer to frustrations and legitimate issues felt deeply but badly articulated.
I see no reason why hate speech should be given the benefit of the doubt. And no, it's not because my government told me so, I have my own opinion, which is that freedom of speech ends where threats of violence appear.
If you don't want it tolerated online, which I don't, you need some kind of legal statement saying so. Like a law that says, you can't do it, and websites can't just shrug their shoulders and say it's not their problem.
I don't like this legislation, as it seems to be excessive, but I disagree that the root issue it tries to address is a made-up problem.
EDIT: it just struck me that in speech and otherwise, the US has a far higher tolerance for violence -- and yes, I do mean violence. Free speech is taken much further in the US, almost to the point of inciting violence. Liberal gun laws mean lots of people have guns, logically leading to more people being shot. School shootings are so much more common, and it appears there is no widespread conclusion to restrict gun ownership as a result.
Maybe that's a core difference. Europeans genuinely value lower violence environments. We believe all reasonable things can be said without it. That doesn't make this legislation good. But at least it makes sense in my head why some people glorify extreme free speech (bit of a tired expression in this age).
How would you feel about receiving daily credible death threats to you and your family? Should that be tolerated too in the name of the first amendment?
Point is, we must draw the line somewhere. It's never "everything goes". Tolerating intolerance always ends up reducing freedom of expression.
Look at the US, the government is doing everything it can to shove trans people back in the closet, their voices are silenced and government websites are rewritten to remove the T in LGBT. By the very same people who abused "the first amendment" to push their hateful rhetoric further and further until it's become basically fine to do nazi salutes on live TV.
"Free speech absolutism" is a mirage, only useful to hateful people who don't even believe in it.
Hate speech is the thing that played on the radio station that directly caused the mass graves of the Rwandan genocide. The physical call to violence is just the very last step in a long chain of escalating hate speech, but it is no more culpable than the preceding hate speech that created the environment in which that physical call to violence was acted on.
> But then online hate and radicalization really is a thing.
I'm not trying to be edgy, but genuinely why do you care if someone says or believes something you feel is hateful? Personally I'm not convinced this is even a problem. I'd argue this is something that the government has been radicalising people in the UK to believe is a problem by constantly telling us how bad people hating things is. Hate doesn't cause any real world harm – violence does. And if you're concerned about violence then there's better ways to address that than cracking down on online communities.
In regards to radicalisation, this is a problem imo. I think it's clear there is some link between terrorism and online radicalisation, but again, I'd question how big a problem this is and whether this is even right way to combat these issues... If you're concerned about things like terrorism or people with sexist views, then presumably you'd be more concerned about the tens of thousands of unvetted people coming into the country from extremist places like Afghanistan every year? It's not like online radicalisation is causing white Brits to commit terror attacks against Brits... This is obviously far more an issue of culture than online radicalisation.
So I guess what I'm asking is what radicalisation are you concerned with exactly and what do you believe the real world consequences of this radicalisation are? Do you believe the best way to stop Islamic terrorism in the UK is to crack down on content on the internet? Do we actually think this will make any difference? I don't really see the logic in it personally even if I do agree that some people do hold strange views these days because of the internet.
Hate and radicalization are products of existential purposelessness. You can’t make them go away by preventing existentially purposeless people from talking to each other.
> You can’t make them go away by preventing existentially purposeless people from talking to each other.
At least you can limit the speed of radicalization. Every village used to have its village loon; he was known and ignored or ridiculed. But now all the loons talk to each other and constantly reinforce their bullshit, and on top of that they begin to draw in the normies.
No, you can't, but there is also no reason why the law should allow such things to stay up. Plenty of people have racist thoughts, and that's not illegal (thoughts in general aren't), but go print a bunch of leaflets inciting racist violence and that is illegal.
> online hate and radicalization really is a thing
People have always had opinions. Some people think other people's opinions are poor. Talking online was already covered by the law (eg laws re slander).
Creating the new category of 'hate speech' is more about ensuring legal control of messages on a more open platform (the internet) in a way that wasn't required when newspapers and TV could be managed covertly. It is about ensuring that the existing control structures are able to keep broad control of the messaging.
I mean we had the holocaust, Rwandan genocide and the transatlantic slave trade without the internet.
The discovery, by the governing classes, that people are often less-than-moral is just as absurd as it sounds. More malign and insidious is that these governors think it is their job to manage and reform the people -- that people, oppressed in their thinking and association enough -- will be easy to govern.
A riot, from time to time -- a mob -- a bully -- are far less dangerous than a government which thinks it can perfect its people and eliminate these.
It is hard to say that this has ever ended well. It is certainly a very stupid thing in a democracy, when all the people you're censoring will unite, vote you out, and take revenge.
It is a thing for sure. How often it happens, I don't know.
I read a number of stories about school children being cyber-bullied on some kind of semi-closed forum. Some of these ended in suicide. Hell, it used to happen a lot on Facebook in the early days.
I totally understand a desire to make it illegal, past a certain threshold. I can see how you start off legislating with this in mind, then 20 committees later you end up with some kind of death star legislation requiring every online participant to have a public key and court-attested age certificate, renewed annually. Clearly that's nonsense, but I do understand the underlying desire.
Because without it, you have no recourse if you find something like this online. For action to be even available, there has to be a law that says it's illegal.
I mean, is it impossible that the commodified web is a sufficient but not necessary condition for atrocities? "But we had the Holocaust without it!" Okay, nobody said the internet was THE cause of ALL atrocities, just that it's actively contributing to today's atrocities. I think your logic is a bit... wrong.
Online hate is skyrocketing in large part because billionaires and authoritarian regimes are pumping in millions of dollars to uplift it. Let’s address this issue at its source.
The UK is sensitive about verbal manners; that is 'of utmost importance' (among all the others, of course), to use one of the most popular phrases there. If you suffer some outrageous impact on your life and complain in a bad manner, you may be punished further in some way, socially or even contractually. One example is the T&Cs of NatWest: they close your account immediately if your conduct is offensive or discriminatory towards the staff. What counts as offensive? That detail is not expanded on. It cannot be. It is a bit worrisome for those paying attention to being nice to others as well. How do you do that exactly? Where is the limit, nowadays or in that situation? People often get offended nowadays, for example by looking at upsetting things, or could feel discriminated against. The bbc.co.uk is overflowing with articles about people who felt very intensely about something unpleasant. Be very careful about your conduct or your bank will kick you out. We are not even talking about hatefulness or radicalization.
How so? This is just the UK. While the UK really does want to enforce this globally, they really have no enforcement power against non-UK citizens who do not reside in the UK.
Certainly it's possible (and perhaps likely!) that the EU and US will want to copycat this kind of law, but until that happens, I think your alarm is a bit of an overreaction.
A lot of people who travel internationally occasionally transit through UK jurisdiction, such as a connection at LHR. This potentially places forum operators in personal legal jeopardy. Would the UK authorities really go after some random citizen of another country for this? Probably not, but the risk isn't zero.
The USA has backdoor laws afaik. Sweden is targeting Signal to force them to create a backdoor. And this is only from regular news; I'm not even reading infosec industry updates. All governments are targeting privacy tools and the clock is ticking for them. I'm only hoping that one day these fuckers will be targeted themselves via the exploits they have forced on us.
Table 1.1: Priority offences by category ( https://www.ofcom.org.uk/siteassets/resources/documents/onli... )
Discussion of offences related to: prostitution, drugs, abuse & insults, suicide, "stirring up of racial/religious hatred", fraud and "foreign interference".
So one imagines a university student discussing, say: earning money as a prostitute. Events/memories related to drug taking. Insulting their coursemates. Ridiculing the iconography of a religion. And, the worst crime of all, "repeating Russian propaganda" (eg., the terms of a peace deal) -- which Russians said it, and whether it is true, are -- of course -- questions never asked nor answered.
This free-thinking university student's entire online life seems to have been criminalised by the OSA in mere discussion; there may have been zero actual actions involved (consider, though, that a majority of UK students at most prominent universities have taken class-A drugs).
This seems as draconian, censorious, illiberal, repressive and "moral panic"y as the highs of repressive christian moralism in the mid 20th C.
Maybe the future will be places guarded by real-life trust.
To pick two examples from the document you linked:
Discussion of being a sex worker would not be covered. The only illegal content relating to sex work would be if you were actively soliciting or pimping. From the document:
* Causing or inciting prostitution for gain offence
* Controlling a prostitute for gain offence
Similarly, discussion of drug use wouldn't be illegal either per se, only using the forum to buy or sell drugs or to actively encourage others to use drugs:
* The unlawful supply, offer to supply, of controlled drugs
* The unlawful supply, or offer to supply, of articles for administering or preparing controlled drugs
* The supply, or offer to supply, of psychoactive substances
* Inciting any offence under the Misuse of Drugs Act 1971
That's very different to criminalising content where you talk about being (or visiting) a prostitute, or mention past or current drug use. Those things would all still be legal content.
Recall that we just spent several years in which discussion of major political issues of concern to society was censored across social media platforms. Taking an extremely charitable interpretation of what government demands will be made here isn't merely naïve but empirically false.
And the reason I chose those kinds of illegal activities was to show that these very laws themselves are plausibly oppressive as-is, plausibly lacking in "deep democratic" support (ie., perhaps surviving on very thin majorities) -- and so on.
And yet it is these laws for which mass interactive media will be censored.
This is hardly a list with murder at the top.
What makes you pick the mid-20th century as the high point of repressive christian moralism? That doesn't seem even close to the high point if you look back further in history.
USA :
* https://en.wikipedia.org/wiki/Hays_Code
* https://en.wikipedia.org/wiki/Federal_Communications_Commiss...
UK : https://en.wikipedia.org/wiki/Lord_Chamberlain
> From 1737 to 1968, the Lord Chamberlain had the power to decide which plays would be granted a licence for performance; this meant that he had the capacity to censor theatre at his pleasure.
UK : https://en.wikipedia.org/wiki/Video_nasty
> To assist local authorities in identifying obscene films, the Director of Public Prosecutions released a list of 72 films the office believed to violate the Obscene Publications Act 1959.
as the highs of (repressive christian moralism in the mid 20th C.)
and not
as the highs of (repressive christian moralism) in the mid 20th C.
Deleted Comment
- freedom of thought, conscience and religion;
- freedom of expression;
- freedom of assembly and association;
- the right to bodily integrity;
- the right to respect for private and family life; or
- freedom from discrimination.
Either way, the point of that document in aggregate seems to be that "illegal content" is content that falls afoul of existing criminal law already: (possession and distribution of) terrorist training material is already illegal and so it is illegal content. But saying that you committed an offence is not, in and of itself, an offence, so saying you took drugs at university doesn't seem to me like it could be illegal content. Encouraging people to do so might be, but it already is.
Maybe I missed the bit where it says discussing things is illegal, so correct me if I am wrong.
Not your lawyer not legal advice etc etc
There's nothing illegal about hosting a forum. The problem is that you as the site operator are legally required to take down certain kinds of content if and when it appears. Small sites with no money or staff don't have the resources to pay for a full-time moderator. That cost scales with the number of users. And who knows what's in those 2.6M historical posts.
From TFA:
> The act will require a vast amount of work to be done on behalf of the Forums and there is no-one left with the availability to do it
Maybe an LLM can carry some of the load here, so free forums like this can keep operating?
It can't give you any guarantees, and it can't be held liable for those mistakes.
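For what it's worth, even a crude pre-filter could shrink the queue a human (or an LLM) has to look at. This is only a sketch: the pattern list and the `flag_for_review`/`triage` names are hypothetical, loosely inspired by the "priority offence" categories, and nothing here decides legality or satisfies the Act.

```python
import re

# Hypothetical patterns, loosely inspired by the "priority offence" categories.
# A real deployment would need far more care, and likely an LLM second pass.
PRIORITY_PATTERNS = [
    re.compile(r"\b(?:sell|supply|buy)\b.*\bdrugs?\b", re.IGNORECASE),
    re.compile(r"\bhow to\b.*\b(?:weapon|explosive)s?\b", re.IGNORECASE),
]

def flag_for_review(post: str) -> bool:
    """Return True if the post should be queued for human/LLM review.

    This is a cheap first pass based on pattern matches only. It does NOT
    determine legality; it just shrinks the moderation queue.
    """
    return any(p.search(post) for p in PRIORITY_PATTERNS)

def triage(posts: list[str]) -> list[str]:
    """Return the subset of posts that need a second look."""
    return [p for p in posts if flag_for_review(p)]
```

Of course, as the reply below notes, no automated pass gives you guarantees, and the false-positive/false-negative trade-off is exactly where the legal risk lives.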
a) Duties to take or use proportionate measures relating to the design or operation of the service to prevent individuals from encountering priority illegal content and minimising the length of time that such content is present on the service;
b) Duties to take or use proportionate measures relating to the design or operation of the service to design and operate systems in a way which mitigates and manages the risks identified in the service provider’s risk assessment;
c) A duty to operate the service using proportionate systems and processes designed to swiftly take down (priority or non-priority) illegal content when they become aware of it (the ‘takedown duty’); and
d) A duty to take or use proportionate measures relating to the design and operation of the service to mitigate and manage the risk of the service being used for the commission or facilitation of a priority offence
---
That's a bit more than "have a think"
- Duties to take or use proportionate measures relating to the design or operation of the service to prevent individuals from encountering priority illegal content and minimising the length of time that such content is present on the service;
- Duties to take or use proportionate measures relating to the design or operation of the service to design and operate systems in a way which mitigates and manages the risks identified in the service provider’s risk assessment;
- A duty to operate the service using proportionate systems and processes designed to swiftly take down (priority or non-priority) illegal content when they become aware of it
- A duty to take or use proportionate measures relating to the design and operation of the service to mitigate and manage the risk of the service being used for the commission or facilitation of a priority offence.
- The safety duty also requires providers to include provisions in their terms of service specifying how individuals are to be protected from illegal content, and to apply these provisions consistently.
Even if the language of this law were specific, it requires so many invasive and difficult steps that no hobbyist, or even small company, could reasonably meet them. But it's anything but specific - it's full of vague, subjective language like "reasonable" and "proportionate" that would be ruinous to argue in court for anyone but billion-dollar companies. And even for them, the end result will be that they are forced to accede to whatever demands some government-sanctioned online safety NGO sets, establishing a neverending treadmill of keeping up with what will become "industry standard" censorship. Because it's either that, or open yourself to huge legal risk: in rejecting "industry standard" and "broadly recognized" censorship guidance to try to uphold some semblance of free discussion, you have failed to be "reasonable" and "proportionate" - you will be found to have "disregarded best practices and recognized experts in the field".
But, short of such an obvious breach, the rules regarding what can and can't be said, broadcast, forwarded, analysed are thought to be kept deliberately vague. In this way, everyone is on their toes and the authorities can shut down what they like at any time without having to give a reason. [2]
[1] Good luck arguing over what is "reasonable" in court if the government ever wants to shut you down.
[2] https://www.bbc.com/news/world-asia-china-41523073
That is a very tricky one to manage on an online forum. If an American expresses an opinion about UK policy, that is, literally, foreign interference. There isn't a technical way to tell propagandists from opinionated people. And the most effective propaganda, by far, is that which uses the truth to make reasonable and persuasive points - if it is possible to make a point that way then that is how it will be done.
The only way this works is to have a list of banned talking points from a government agency. I'd predict that effective criticism of [insert current government] is discovered to be driven mainly by foreign interference campaigns trying to promote division in the UK.
This runs into the same problem as all disinformation-suppression campaigns: governments have no interest in removing the stuff everyone agrees is untrue - what would be the point? The flat-earthers are never going to gain traction, and it doesn't matter if they do. The only topics worth suppressing are things that are plausible and persuasive - the topics most likely to turn out to be true in hindsight.
How so? The "obvious" solution to me, from the perspective of a politician, would be to 1. require online identity verification for signup to any forum hosted in your country, and then 2. use that information to allow only citizens of your country to register.
(You know, like in China.)
Foreign interference would probably be interpreted as an organized campaign of interference being launched by a foreign power.
>This runs into the same problem as all disinformation suppression campaigns - governments have no interest in removing the stuff everyone agrees is untrue
at one time everyone agreed anti-vax claims were untrue, and now they're American government policy but still just as untrue.
Given what has happened to the US as a result of unbridled free broadcast of misinformation and disinformation, we definitely need more "draconian, censorious, illiberal, repressive" rules around the propagation of such media.
Moral panic is EXACTLY what's called for!
You have captains of industry and thought leaders of the governing party throwing fucking nazi salutes, and this is broadcast to the masses! Insanity to defend free speech after the country is circling a drain as a result of said free speech.
Dead Comment
Dead Comment
> Lfgss shutting down 16th March 2025 (day before Online Safety Act is enforced)
> [...] I run just over 300 forums, for a monthly audience of 275k active users. most of this is on Linode instances and Hetzner instances, a couple of the larger fora go via Cloudflare, but the rest just hits the server.
> and it's all being shut down [...]
For the same reasons.
https://news.ycombinator.com/item?id=42433044
[*] Unless you are a multibillion $ company with an army of moderators, compliance people, lawyers.
I don't see our moderators needing to do any more work than they're already doing, and have been doing for years, to be honest.
So we'll see how the dice land.
For my friends, everything; for my enemies, the law.
The plan for April is to remove the need for police to obtain a warrant to search people's homes - it's that bad.
I'd say "there will be blood on the streets", but there already is...
This video pretty much sums up what the UK is now. https://m.youtube.com/watch?v=zzstEpSeuwU
This is analogous to s18 PACE post-arrest powers, grafted onto s17 PACE.
The alternative is that we continue to require police to try and get a fast-time warrant while plotted up outside a premises; this is not a quick process, I've done it and it took nearly two hours.
>there will be blood on the streets
Oh, dry up.
This seems to be limited to stolen geo-tagged items: https://www.theguardian.com/uk-news/2025/feb/25/police-new-p...
I would agree that this law is a slippery slope, but at the same time we should not omit important facts.
How small was your school year?! What does Elizabeth (presumably the 2nd) dying have to do with anything?
What do you think she was doing?
Dead Comment
A small website operator can keep child pornography off their platform with ease. Facebook have a mountain to climb — regardless of their resources.
Facebook can actually train AI to detect CSAM, and is probably already doing so in cooperation with NCMEC and similar organisations/authorities across the world.
Your average small website? No chance. Obtaining training material actively is seriously illegal everywhere, and keeping material that others upload is just as bad in most jurisdictions.
The big guys get the toys, the small guys have to worry all the goddamn time if some pedos are going to use their forum or whatnot.
Stripping all usernames out of a forum certainly makes it safer, but I don't think anyone can say there still won't be a few pissed off users who wrote things they now regret on there, and can be tracked back to individuals based on context/writing style alone.
I'm going to guess this forum is UK-based just from all the blimeys. Also the forum seems to have been locked to new users for some time, so it was already in its sunset era.
The admin could just make it read only except to users who manually reach out somehow to verify their age, but at the same time, what an oppressive law for small UK forums. Maybe that's the point.
> any websites that let users interact with other users has to police illegal content on its site and must implement strong age verification checks.
But I believe you only need age verification if pornography is posted. There's also a bunch of caveats about the size of user base - Ofcom have strongly hinted that this is primarily aimed at services with millions of users but haven't (yet) actually clarified whether it applies to / will be policed for, e.g., single-user self-hosted Fediverse instances or small forums.
I don't blame people for not wanting to take the risk. Personally I'm just putting up a page with answers to their self-assessment risk questionnaire for each of my hosted services (I have a surprising number that could technically come under OSA) and hoping that is good enough.
But if you let users interact with other users, you're not in control of whether pornographic material is posted, so it's safer to comply beforehand.
I commend you for keeping your site up and hoping for the best. I don't envy your position.
This has echoes of the Snooper's Charter and Apple's decision to withdraw ADP from all of UK.
It is not enough for regulators to say they don't anticipate enforcing the law against smaller operators. As long as the law is on the books, it can (and will) be applied to a suitable target regardless of their size.
I saw this same bullshit play out in Finland. "No, you are all wrong, we will never apply this to anything outside of this narrow band" -- only to come down with the large hammer less than two years later because the target was politically inconvenient.
Deleted Comment
Deleted Comment
That's quite sizeable. How many sites can you name have 7 million monthly active UK users? That's over one-in-ten of every man, woman and child in the UK every month using your site.
It just seems like an overly formalized way of saying "All forums should have a "report" button that actually goes somewhere", I'd expect that to be already there on pretty much every forum that ever existed. Even 4chan has moderators.
Idea being the US-based owner migrates the DB with posts and user logins to servers hosted on US soil, then if the UK government comes knocking the former owners in the UK can say "Sorry it doesn't belong to us anymore, we sold it, here's the Paypal receipt." (Ideally they'd sell the domain too, but as long as you still have the DB you could always host the forum at a different domain.)
Any forum admins here willing to add another forum to their portfolio?
https://wiki.archiveteam.org/
https://archive.org/details/archiveteam
It's clear this law terribly affects bona fide grassroots online communities. I hope HN doesn't start geoblocking the UK away!
But then online hate and radicalization really is a thing. What do you do about it? Facebook seems overflowing with it, and their moderators can't keep up with the flow, nor can their mental health keep up. So it's real and it's going to surface somewhere.
At some level, I think it's reasonable that online spaces take some responsibility for staying clear of eg hate speech. But I'm not sure how you match that with the fundamental freedom of the Internet.
Yes, racism exists and people say hateful things.
Hate speech is in the interpretation. The US has it right with the first amendment - you have to be egregiously over the line for speech to be illegal, and in all sorts of cases there are exceptions and it's almost always a case-by-case determination.
Hateful things said by people being hateful is a culture problem, not a government problem. Locking people up because other people are offended by memes or shitposts is draconian, authoritarian, dystopian nonsense and make a mockery of any claims about democracy or freedom. Europe and the UK seem hellbent for leather to silence the people they should be talking with and to. The inevitable eventual blowback will only get worse if stifling, suppressing, and prosecuting is your answer to frustrations and legitimate issues felt deeply but badly articulated.
If you don't want it tolerated online, which I don't, you need some kind of legal statement saying so. Like a law that says, you can't do it, and websites can't just shrug their shoulders and say it's not their problem.
I don't like this legislation as it seems to be excessive, but I disagree that the root issue it tries to address is a made-up problem.
EDIT it just struck me that in speech and otherwise, the US has a far higher tolerance for violence - and yes I do mean violence. Free speech is taken much further in the US, almost to the point of inciting violence. Liberal gun laws mean lots of people have them, logically leading to more people being shot. School shootings are so much more common, and it appears there is no widespread conclusion to restrict gun ownership as a result.
Maybe that's a core difference. Europeans genuinely value lower violence environments. We believe all reasonable things can be said without it. That doesn't make this legislation good. But at least it makes sense in my head why some people glorify extreme free speech (bit of a tired expression in this age).
Point is, we must draw the line somewhere. It's never "everything goes". Tolerating intolerance always ends up reducing freedom of expression.
Look at the US, the government is doing everything it can to shove trans people back in the closet, their voices are silenced and government websites are rewritten to remove the T in LGBT. By the very same people who abused "the first amendment" to push their hateful rhetoric further and further until it's become basically fine to do nazi salutes on live TV.
"Free speech absolutism" is a mirage, only useful to hateful people who don't even believe in it.
Dead Comment
I'm not trying to be edgy, but genuinely why do you care if someone says or believes something you feel is hateful? Personally I'm not convinced this is even a problem. I'd argue this is something that the government has been radicalising people in the UK to believe is a problem by constantly telling us how bad people hating things is. Hate doesn't cause any real world harm – violence does. And if you're concerned about violence then there's better ways to address that than cracking down on online communities.
In regards to radicalisation, this is a problem imo. I think it's clear there is some link between terrorism and online radicalisation, but again, I'd question how big a problem this is and whether this is even right way to combat these issues... If you're concerned about things like terrorism or people with sexist views, then presumably you'd be more concerned about the tens of thousands of unvetted people coming into the country from extremist places like Afghanistan every year? It's not like online radicalisation is causing white Brits to commit terror attacks against Brits... This is obviously far more an issue of culture than online radicalisation.
So I guess what I'm asking is what radicalisation are you concerned with exactly and what do you believe the real world consequences of this radicalisation are? Do you believe the best way to stop Islamic terrorism in the UK is to crack down on content on the internet? Do we actually think this will make any difference? I don't really see the logic in it personally even if I do agree that some people do hold strange views these days because of the internet.
At least you can limit the speed of radicalization. Every village used to have its village loon; he was known, and ignored or ridiculed. But now all the loons talk to each other and constantly reinforce their bullshit, and on top of that they begin to draw in the normies.
I see this as an internet analogy.
Dead Comment
People have always had opinions. Some people think other people's opinions are poor. Talking online was already covered by the law (eg laws re slander).
Creating the new category of 'hate speech' is more about ensuring legal control of messages on a more open platform (the internet) in a way that wasn't required when newspapers and TV could be managed covertly. It is about ensuring that the existing control structures are able to keep broad control of the messaging.
Dead Comment
I mean we had the holocaust, Rwandan genocide and the transatlantic slave trade without the internet.
The discovery, by the governing classes, that people are often less-than-moral is just as absurd as it sounds. More malign and insidious is that these governors think it is their job to manage and reform the people -- that people, oppressed in their thinking and association enough -- will be easy to govern.
A riot, from time to time -- a mob -- a bully -- are far less dangerous than a government which thinks it can perfect its people and eliminate these.
It is hard to say that this has ever ended well. It is certainly a very stupid thing in a democracy, when all the people you're censoring will unite, vote you out, and take revenge.
I read a number of stories about school children being cyber-bullied on some kind of semi-closed forum. Some of these ended in suicide. Hell, it used to happen a lot on Facebook in the early days.
I totally understand a desire to make it illegal, past a certain threshold. I can see how you start off legislating with this in mind, then 20 committees later you end up with some kind of death star legislation requiring every online participant to have a public key and court-attested age certificate, renewed annually. Clearly that's nonsense, but I do understand the underlying desire.
Because without it, you have no recourse if you find something like this online. For action to be even available, there has to be a law that says it's illegal.
Dead Comment
Dead Comment
Dead Comment
Dead Comment