Readit News
deepcyanide · 3 years ago
I am no fan of kiwifarms; I have my own reasons to hate that site, but I also strongly support free speech, so I simply don't visit. That changed over the past month, because keffals decided to open another Pandora's box.

Anyone who isn't getting their info solely from keffals and her supporters can see that this was a coordinated attack: an anonymous call from someone claiming to be a KF member swatting a politician prone to jumping the gun, and a dormant account posting a threat that keffals' supporters caught immediately and paraded around Twitter, which caused Matthew Prince to cut off KF soon after.

Not only is this a bad precedent for anyone who hosts content that rejects keffals' lifestyle choices, but it gives many pro-censorship politicians ammunition to push for more moderation of the internet, while the progressive types continue to do exactly what they claimed kiwifarms was doing.

danielmalmer · 3 years ago
"At least three suicides have been tied to harassment stemming from the Kiwi Farms community, and many on the forum consider their goal to drive their targets to suicide. Members of the LGBTQ community and women are frequent targets."

https://www.washingtonpost.com/technology/2022/09/03/cloudfl...

BrainVirus · 3 years ago
A lengthy article about the ban supposedly written on the day of the ban by Taylor Lorenz? Taylor Lorenz who doxxed people[1], lied about companies and people, was actively involved in several deplatformings and is currently being sued for defamation[2]? If anything, this makes me even more suspicious of the narrative about Kiwi Farms.

This looks more and more like a coordinated campaign. A basic search reveals a huge number of recent articles that clearly aim to sic people on Cloudflare[3][4][5][etc].

By amazing coincidence all these articles got published within one week of each other and within one week of Cloudflare dropping the website? Sure.

[1] https://unherd.com/thepost/why-taylor-lorenz-can-dox-whoever...

[2] https://nypost.com/2021/08/16/ny-times-reporter-taylor-loren...

[3] https://thenewstack.io/cloudflares-kiwi-farms-support-may-so...

[4] https://www.nydailynews.com/news/national/ny-cloudflare-pres...

[5] https://www.protocol.com/newsletters/protocol-enterprise/clo...

ThrowawayTestr · 3 years ago
Every word of that is an outright lie.

Deleted Comment

uselpa · 3 years ago
This is a very naïve position on the matter. This is not about lifestyle choices but about coordinated harassment. Free speech ends, at the very latest, when you endanger other people's lives.
stillbourne · 3 years ago
Moderation is not censorship; moderation is free speech. Hate speech is not free speech. Criticizing the termination of services against kiwifarms isn't an endorsement of free speech, it's an endorsement of universal toleration: the idea that I'm not allowed to decide who I provide goods and services to, regardless of the message of those to whom I'm providing them. If I host a website that is a forum for Lego builds, am I not allowed to kick Nazis spreading hate from my forum? My platform, my rules: I'm entitled to determine what messages I want to convey, because the message provided on my platform is my right to free speech; therefore moderation is free speech. Remember the Colorado wedding cake? So which is it? Do you support free speech or don't you? Mic drop.
googlryas · 3 years ago
Where do you draw the line on free speech vs harassment? Is harassment free speech?
convery · 3 years ago
No, but people's definitions of what counts as harassment vary wildly, as can be seen in this very thread: Keffals doxing people is not harassment, Keffals calling for their Ukrainian supporters to spread the claim that KF's admin is pro-Putin is not harassment, "pressuring" KF's admin's mother is not harassment, journalists doxing and lying about LibsOfTiktok is not harassment; but KF archiving Keffals' tweets where they admit to illegal activities is, KF posting the dox of someone sharing DDoS tools is, KF laughing at someone being silly is.

Dead Comment

Dead Comment

thechao · 3 years ago
What does the internet have to do with free speech? I understand that the internet lets you dramatically amplify speech; but, at least in the US, the "internet" is a regulated private space — not a public space. The companies that own that space can (up to regulatory rules) moderate their spaces any way they choose. If KF wants proper, constitutional free speech they can do so in any of the public spaces they want. Hell, they could rent downtown Santa Fe, Texas — like the KKK did in the early 00's — and have a parade.

EDIT: I can see some people may not like what I'm saying; but I actively help organize real marches in Texas for orgs like NAACP, AAIA, PPC, etc. We have a stable of lawyers (TCRP, SCSJ, ...); and, I can assure you this is how free speech works in reality.

kelseyfrog · 3 years ago
Early internet builders and users had a strong libertarian streak running through them. The early internet was awash with the optimism that increasing access to communication and routing around censorship is unequivocally good. The culture that existed then, when internet users were themselves a social subgroup, echoes in the digital debate today.

Is there anything metaphysically inherent to the internet as free speech? That's up for debate. Would the early internet have been successful if this type hadn't been the ones to build it? I'm on the fence.

high_derivative · 3 years ago
Let's just be honest about CloudFlare. They write these hand-wringing posts occasionally to try to justify their 'policies', but their policy is quite literally 'block someone if enough people demand it on twitter'.
hackerlight · 3 years ago
So they are lying when they say "targeted threats have escalated over the last 48 hours to the point that we believe there is an unprecedented emergency and immediate threat to human life"?

You lack the imagination to think of alternative possibilities other than they're lying?

fsociety999 · 3 years ago
Is there any evidence that it is true other than the word of one person? The Cloudflare post didn’t provide any examples, and it seemed to be news to the Kiwi Farms people as well.

Just a few days ago Cloudflare took a pretty strong stance that they would not take action so for them to flip-flop like this in such a short period of time they must either have received:

- Strong proof that there has been an escalation, and there is an immediate threat to human life.

- Pressure from investors who are worried about the stock price and company’s image

- Their own set of threats against Cloudflare employees for refusing to take action

- Word that a large company who uses their platform was threatening to remove all traffic ($$) from Cloudflare unless they took immediate action

Extraordinary claims require extraordinary proof so if they can’t provide the proof, it seems far more likely they caved to social/investor pressure.

I hope you see the irony of taking a big tech CEO at their word and criticizing someone else for lacking an imagination when they suggest an alternative.

robobro · 3 years ago
I've had people post phony bomb threats on my services in the past in an attempt to make my life more difficult. Do you think that I shouldn't be allowed to host services because I'm targeted by bad actors?

Maybe we should shut down Twitter, Facebook, Gmail, Yahoo Mail et al. because they allow pseudonymous entities to say whatever they want until moderation addresses problematic posting (as Kiwifarms did with the bomb threat -- they deleted the post and banned the account of the user who made it).

Not trying to defend Kiwifarms here but looking at this an abstracted issue. The real reason why KF was kicked off of Cloudflare is because a lot of people told Cloudflare to stop hosting it, not simply because a bad actor made a malicious post.

yanderekko · 3 years ago
>So they are lying when they say "targeted threats have escalated over the last 48 hours to the point that we believe there is an unprecedented emergency and immediate threat to human life"?

In a word, yes.

In several words, if the CEO wants to go on about due process, then he should provide it. Establish that there's an "unprecedented threat", either publicly or to Moon in private correspondence.

Also I really dislike that perception of KF required to be so negative that people are severely discounting the probability that this is a false flag attack, especially given the obvious incentives of activists to do this sort of stuff. If KF started producing material indicating that TRAs were plotting to murder Joshua Moon, would anyone take it seriously?

cft · 3 years ago
I am not a Kiwifarms user, but since it's now available at the new domain (and, according to CF, KF is not cooperating), the threats should still be there. I cannot find them. Can you please link those threats? Otherwise this leads me to believe that the basis of Matthew Prince's sudden reversal on his "free speech" ideals was something else, not the KF content.
zppln · 3 years ago
Out of curiosity, what caused targeted threats to escalate over the last 48 hours? I'm not familiar with the site, but from what I understand they've been around for over a decade doing basically the same thing, so what caused this escalation?
nailer · 3 years ago
> "targeted threats have escalated over the last 48 hours to the point that we believe there is an unprecedented emergency and immediate threat to human life"

The post directly addressed this matter.

gsich · 3 years ago
They lied before, so yes it's possible.
JohnHaugeland · 3 years ago
We've been on the internet for more than six weeks, rather

Don't be an apologist for these guys

empathy_m · 3 years ago
Bottom line is that somebody (foreign governments, weev) will always take whatever a site's ToS are and repeatedly ram up right against the edges of the policy until they have extracted as many lulz as possible. This is tough to deal with if you have a conscience, or need money.

What they tried at first, which is very "90s internet", is to become a bit of a troll yourself:

"We are deeply committed to the principles of free speech and will never deny a customer service to our critical infrastructure based on the content of their messages. You are banned. Now, go away, lolcow."

Reddit did something similar -- not with the /r/thefappening, but later on with /r/the_donald. Recall when /u/spez randomly edited comments left by contributors there, sending a clear message "we have no rules, and we will break you if we feel like it". The public apology ( https://www.reddit.com/r/announcements/comments/5frg1n/ ) conveys the subtext that it will happen again, there are no firm rules, and you are not welcome here.

I dunno quite what the plan is here going forward. weev and null are kind of an amazing force for chaos and if you can get them defensive and off kilter, that's certainly interesting.

Overall, though, I have found kiwifarms a helpful resource for understanding online harassment. I appreciate that members do their coordination in the open; by reading forum activity you can understand roughly where the next massacre will happen. The community targets lulz as weakness, and many of their most persistent attackers have gotten sucked in and doxxed themselves, notably in the Chris-Chan maelstrom.

The community has successfully killed people and the fact that they have a centralized repository with documentation will be helpful for the next Michelle Carter style prosecution.

mgdlbp · 3 years ago
Nah, Reddit's Kiwi Farms was r/jailbait.

Ironically, it's SomethingAwful that is credited* with rousing the press attention required for Reddit to rescind its free speech ideal (making SA, chaotically, the progenitor of both 4chan and r/ShitRedditSays). There, too, was suspicion of raiding, with questionable activity from those who wanted the ban.

*I'm not sure; take anything about this internet drama with ample salt. Some primary sources are the graph of pages reachable from the references at https://en.wikipedia.org/wiki/Controversial_Reddit_communiti...

About this, however:

> We understand that this might make some of you worried about the slippery slope from banning one specific type of content to banning other types of content. We're concerned about that too, and do not make this policy change lightly or without careful deliberation. We will tirelessly defend the right to freely share information on reddit in any way we can, even if it is offensive or discusses something that may be illegal. (from Reddit's statement, https://www.reddit.com/r/blog/comments/pmj7f/a_necessary_cha...)

yea...

As for generalization, a 2012 writeup on Reddit's situation cites LiveJournal's incident in 2007 and further LambdaMOO's, from 1992: https://news.ycombinator.com/item?id=3585997

shitlord · 3 years ago
> by reading forum activity you can understand roughly where the next massacre will happen

What? Citation needed. Has there been a single massacre committed by any of its members?

cluedos · 3 years ago
I feel sorry for Cloudflare's management team in having to deal with all this nonsense.

What they really want to focus on is growing their business, developing new cloud-scale technologies, and serving their customers the best they can.

Instead they've ended up stuck in this ridiculous online spat between two internet mobs headed by two unsavoury individuals.

If I was CEO of Cloudflare, and I and my employees were being harassed, doxxed, threatened by an online mob, I'd have done the same. Never mind taking a principled stand, it's not worth being involved in the first place.

malepoon · 3 years ago
> between two internet mobs headed by two unsavoury individuals

This is a bit too much "both sides are bad" for me. Fact is that Kiwi Farms is a horrible website where people harass others, share information about them, and drive them to suicide. That's pretty bad on its own.

tomjen3 · 3 years ago
The problem is that if you give in once, or are perceived to give in, then not only will both sides hate you for it, but you will be under even more pressure to give in next time.

To run something like Cloudflare you probably have to have the rule that you will not block services for anyone under any circumstances unless ordered to by a court, or unless they advocate for a blocking campaign against you or host content that does. In this case they would have let KF off, but blocked Twitter.

2Gkashmiri · 3 years ago
I don't get it. Were the trans people, women, and all the other alleged victims "part of that community", enduring years of torment and belittling there that led to suicide, as is being alleged? Or do these KF people do their doxing of sorts on other sites and use KF as a central place to coordinate attacks?

I am not on 8chan. I can never be harassed by anyone in /b/ or whatever the bad forums are.

If I am an active part of a community, I should not claim peer pressure and influence... or should I?

Deleted Comment

eternalban · 3 years ago
> If I was CEO of Cloudflare, and I and my employees were being harassed, doxxed, threatened by an online mob, I'd have done the same. Never mind taking a principled stand, it's not worth being involved in the first place.

Fair enough. But this CEO was blowing clouds in our face that they took this action because our legal system is not up to the task!

It's so sad that Mr. Prince apparently can't afford a competent legal team to explain to him the concepts of "Rule of Law", "Due Process", "Courts", "Judges", "Juries", "Evidence", and all those other [quaint!] aspects of our (broken!) "traditional legal system".

matrix_overload · 3 years ago
They did it because of the cesspit environment inside every American corporation. If the CEO doesn't cave in under sufficient pressure, his internal political rivals will extend the guilt of association to him personally, and he will lose his position.

This is a modern-day equivalent of lynch mobs that has nothing to do with reason, good intentions or due process.

femiagbabiaka · 3 years ago
Lynch mobs would kill people you know. They’d hang them from a tree or worse. This isn’t equivalent to that at all.
ThrustVectoring · 3 years ago
To be fair to CloudFlare, their entire business model is dependent on the US regulatory environment and section 230 protections. If there's a credible threat from Congress along the lines of "if the internet won't self-regulate, we'll have to do it for them", it's existential to their business.
nkurz · 3 years ago
How do you differentiate between "if enough people demand it on Twitter" and "if our consciences demand that we act"? I agree that they don't seem to be adhering to the policy that they published just last week, but a company that didn't care passionately about free speech wouldn't have published what they did. It was clearly going to offend a lot of people on Twitter.

On the other hand, they clearly don't want their company to be used to kill anyone. Even if they are wrong that there is an "emergency", it seems likely they believe that there is one. Isn't a more plausible explanation that they truly believe free speech is important, but that it's not the only deciding factor?

ilammy · 3 years ago
> How do you differentiate between "if enough people demand it on Twitter" and "if our consciences demand that we act"?

Easy: for-profit organizations do not have conscience, in principle.

res0nat0r · 3 years ago
wodenokoto · 3 years ago
There’s protecting free speech and there’s enabling harassment.

Come on. Shouting “fire!” in a crowded room is not free speech and no company in their right mind should support that.

criley2 · 3 years ago
>Come on. Shouting “fire!” in a crowded room is not free speech and no company in their right mind should support that

Actually, shouting "fire!" in a crowded theater IS explicitly free speech in America; in fact, the idea that it isn't is a layman's mistake (or an old wives' tale, if you prefer) that is corrected in basically freshman legal classes.

In modern legal discussion, invoking "fire in a theater!" is basically an immediate signal that you are not educated in this part of the law at all and your opinion is low quality.

Not trying to insult you here; I too have used this very analogy on the internet in the past. Just laying out what exactly others see when they read what you wrote.

_dain_ · 3 years ago
>Shouting “fire!” in a crowded room is not free speech

stop using this dumb analogy. it was invented to justify prosecuting people for speaking out against the draft, and it was later denounced by the very person who coined it.

in this case it's more like: a random person who had never been to the theatre before shouted fire, and was summarily booted from the premises and banned by the theatre staff. then later on the police came by and shut down the theatre because it hosted fire-shouters.

andrew_ · 3 years ago
It's incredible how this perception persists completely devoid of nuance or understanding.

Dead Comment

boastful_inaba · 3 years ago
Josh Moon (Null), the owner and administrator of Kiwi Farms, responds to their removal from CloudFlare.

Their core claim is that the post which got attention (a shooting/bomb threat) was made by a sleeper account that had only ever posted once before the controversy started. Null also says the post was removed a little over half an hour after its creation, having been reported seven times by other community members.

EDIT - RE: The submission title. I quoted the title exactly as per HN submission rules - but I guess the system changed the capitalisation and removed the punctuation present in the title?

rvz · 3 years ago
Yeah, even if the post was removed, the outrage mob on Twitter won’t care, and it is always one-sided. They got what they wanted even though it won’t solve the problem. It is now the internet’s problem.

They will just flag this post and rejoice that they got a billion-dollar internet company to bend the knee, only to scream a false victory, despite KF still being up and running on another domain with Tor and DDoS protection from another provider. Nothing has changed.

When it comes to a Twitter mob, there is no “Sorry” or “Please I can explain”, or “It’s not what you think it is” or “But the post was removed”. With the Twitter mob, there is just no redemption.

So on to the next villain of the month.

concinds · 3 years ago
The primary goal is to increase the left's institutional power.

They know that KF will still exist and will move on from Cloudflare. The point isn't to hurt KF, but to hurt Cloudflare for daring to nominally support free speech. That's why they're botting hashtags, putting pressure on the stock price, and encouraging tech decision-makers to move their startups away from Cloudflare. The point is to say: if you do not pander to us, we will come for you.

It's an effective way to cement power over institutions from the bottom-up, without actually controlling those institutions.

It's a similar kind of social movement to the film censorship movement in early 20th century America, which was largely grassroots and yet ended up controlling the US's entire cultural output for most of a century.

viraptor · 3 years ago
> They got what they wanted even though it won’t solve the problem.

There's actually quite a bit of follow-up action chasing both the new providers and hosts. Deplatforming has been shown to work before. Just adding the speed bump of requiring Tor access will kill a lot of interest in continuing the abuse.

Even if you look at the hashtags and the website, you'll see CF wasn't the only target. It's a pretty well-planned action, but it's very clearly aimed at KF, and it's not stopping.

threeseed · 3 years ago
> Nothing has changed

Exactly. Proving that this isn't really a problem at all.

Cloudflare has exercised their right to drop this user. And KiwiFarms has moved to someone else.

cloutchaser · 3 years ago
Gab CEO made the same claim when gab was banned everywhere. He said there was a suspicious increase in really bad content a few days before.

Obviously it’s completely unverifiable, but given the timeframes we are talking about, if 30 minutes is too long to remove bad content, then it’s super easy for any bad actor to take down any forum on the internet.

The precedents set here are terrible.

hackerlight · 3 years ago
> it’s super easy for any bad actor to take down any forum on the internet.

No it isn't. There's a bigger context to the decision. If you find a Justin Bieber fan club forum and post incitement to violence, I guarantee that nobody will want to take down the site, because everyone understands that the forum itself had nothing to do with the post.

googlryas · 3 years ago
I think that really depends. What is the rest of the forum's content? If it's benign stuff and the post is completely out of left field, then the forum is probably safe. But if a lot of the content skirts the line of illegal activity (while still legal), and then some illegal stuff gets posted, there probably isn't going to be any sympathy for the forum.
soulofmischief · 3 years ago
If you read the Cloudflare announcement, they stress that this is not general policy and to not see this decision as precedent.
shadowgovt · 3 years ago
Gab got business relationships terminated because a mass murderer became radicalized via interactions with Gab users (and people had the receipts). This was after a pattern of buildup of political rhetoric culminating in violence on their site that had repeated for years.

Deleted Comment

pseudo0 · 3 years ago
If this is what Matthew Prince referred to as:

> However, the rhetoric on the Kiwifarms site and specific, targeted threats have escalated over the last 48 hours to the point that we believe there is an unprecedented emergency and immediate threat to human life unlike we have previously seen from Kiwifarms or any other customer before.

Seriously? I helped moderate a small community at one point and we would see stuff like this on a weekly basis. Or just look at a YouTube or Facebook comment section on a controversial topic. This looks like an excuse to cave to the Twitter mob.

stuaxo · 3 years ago
They were harassing people to the point of suicide and "swatting" - attempting to murder via police.
boastful_inaba · 3 years ago
It should be noted for the record that Kiwi Farms has had a longstanding 'no touch' rule, usually displayed prominently at the top of every page. (It appears to be currently replaced by the admin notifications whilst the site deals with the current state of events.)

Much like on the old site Portal of Evil, another best-of-the-worst aggregator from the early internet, making on-site trolling/harassment plans is a great way to get your account banned very quickly ... or to become a thread topic yourself.

klibertp · 3 years ago
> They were harassing people to the point of suicide

a random person on the Internet said. Meanwhile, another random person on the Internet[1] said KF has a ToS that bans anything illegal from the site.

I was unable to independently verify either side of the report.

[1] https://news.ycombinator.com/item?id=32712037

heartbeats · 3 years ago
How do you know it was them? It wouldn't make much sense for them to SWAT Marjorie Taylor Greene, now would it?
_dain_ · 3 years ago
Wrong, and wrong.

Deleted Comment

Dead Comment

danaris · 3 years ago
On the one hand, it probably is at least partly an excuse, given that Kiwifarms is baaasically set up to create exactly those conditions, and I'm skeptical that the most recent incidents are genuinely worse than anything they've done before.

On the other hand, about bloody time they got rid of the site, and if they need to post a public excuse to feel justified in it, sure, whatever.

desindol · 3 years ago
You are a company; if you get death threats against your own workers, you have to act. These threats are not anonymous. There are legal implications if you do nothing and something happens.
pseudo0 · 3 years ago
Matthew Prince did not provide any information on the nature of the threats he claimed, but the two that were widely discussed on Twitter are addressed in the submitted post. Neither of them involve Cloudflare employees, so I'm not sure what point you are trying to make.
ajsnigrutin · 3 years ago
Sure, if you get one or two...

If you get 5000 from a Twitter mob, from people living all around the world, you can still report them, but there's pretty much zero chance that anything will happen to any of those 5000 people, because the threats are considered "not real" and nobody wants to deal with 5000 cases (paperwork, warrants, the judicial process afterwards, etc.).

TechBro8615 · 3 years ago
Here’s what I don’t get. Cloudflare wasn’t providing hosting services to KiwiFarms; they were just proxying traffic to their origin. So now they’ve terminated service; what next? The origin is still there, and Kiwifarms is still available one way or another. All that’s been accomplished is that Cloudflare is no longer proxying traffic to Kiwifarms, and it’s now easier to find their origin IP to continue harassment campaigns.
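The mechanics behind this point can be sketched: a reverse proxy like Cloudflare sits in front of the origin server, so public DNS resolves the site's hostname to the proxy's edge addresses rather than to the origin itself. Once the proxy drops the site, the DNS record has to point straight at the origin, and the same lookup that previously returned only proxy IPs now exposes the origin's real address. A minimal, self-contained illustration of the lookup step (using "localhost" purely as a stand-in hostname, since resolving any real site here would be an assumption):

```python
import socket

def resolve(hostname: str) -> str:
    """Return the IPv4 address a hostname currently resolves to."""
    return socket.gethostbyname(hostname)

# While a site is behind a reverse proxy, its DNS A record points at the
# proxy's edge network, so this lookup reveals only proxy IPs. Without the
# proxy, the record must point at the origin, exposing its real address.
print(resolve("localhost"))
```

This is only to illustrate the "security guard steps aside" framing above: the proxy never hosted the content, it merely stood between resolvers and the origin.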

This rubs me the wrong way because it doesn’t feel like any sort of justice. It feels like you paid the security guard to step aside so you could read the address on the door, call your friends and mob the store. You didn’t call the cops to shutdown the store. You just cleared a path for more vigilante action.

It’s for this reason I see this as a failing of the legal system. If a website is so abhorrent as to be illegal, then you should be able to open an investigation, and investigators should be able to get warrants to seize the site hosting structure and take it down. If that doesn’t happen, why should it fall to vigilantes to harass and dox the site owners? That’s not justice, and Cloudflare is certainly not facilitating it by abdicating their position. In fact it’s the classic definition of cowardice.

convery · 3 years ago
> It’s for this reason I see this as a failing of the legal system. If a website is so abhorrent as to be illegal, then you should be able to open an investigation, and investigators should be able to get warrants to seize the site hosting structure and take it down.

They have been investigated a lot over the years; the operator works with law enforcement to provide the information they request, and the moderators are pretty quick to remove any illegal content. One can argue that the police concluding there's nothing illegal going on is down to their incompetence, but it's more likely that the US holds the 1A sacred and has no legal basis to take the site down.

oivey · 3 years ago
Whether the content was strictly illegal or quickly taken down is irrelevant. It isn’t a good sign that supporters of Kiwi Farms repeatedly conflate the law with ethics.

Regardless of the law, Kiwi Farms is a place where a significant portion of the content, even when not veering into illegality, is based on harassment. There’s a history. Given the history and recent threats on people’s physical well being, Cloudflare decided they had enough. The same concept of freedom of speech that allows Kiwi Farms to exist is also what allows Cloudflare to decide to not do business with them.

googlryas · 3 years ago
Wikipedia says the operator refused to voluntarily work with law enforcement to give info on the Christchurch shooter.

Is that incorrect? Or are you just saying the operator responds only to warrants and subpoenas? I wouldn't call that "working with" law enforcement, I would call that staying within the bounds of the law in that particular matter.

pjc50 · 3 years ago
Background (1) : https://twitter.com/keffals/status/1566153033586810885

(2) : https://twitter.com/keffals/status/1566303971911909376

(3) : https://www.thetimes.co.uk/article/599a356e-2bc1-11ed-a4d5-a...

Keffals did something extremely brave in using herself as a live-fire target. She knew she was being doxxed, so she went to Northern Ireland and mentioned looking for a particular kind of restaurant. This inevitably resulted in bomb threats and fake calls to the police.

Bomb threats are taken seriously in Northern Ireland. It seems that this time the threat was given enough credibility for Cloudflare to take it seriously and cut them off.

(As a Brit, there's a deep irony in the idea of someone fleeing to Northern Ireland for their personal safety!)

Gigachad · 3 years ago
What is the possibility of someone wanting to have the site taken down posting these threats? The post was apparently deleted by mods within 14 minutes.
convery · 3 years ago
Actually, a mod checked the logs and found that the post was deleted by the poster 2-3 minutes after Keffals screenshotted and tweeted it. So it was up just long enough to serve its purpose.
nibbleshifter · 3 years ago
> As a Brit, there's a deep irony in the idea of someone fleeing to Northern Ireland for their personal safety!

Northern Ireland is probably one of the safest places in the entire UK to be if there's a threat to your life.

It's also, incidentally, the only place in the UK or Ireland where concealed carry permits are available.

Dead Comment

Dead Comment

schleck8 · 3 years ago
> Our website makes no money. All of our moderators are volunteers. It took Facebook (with a 24/7 staff of paid moderators) 29 minutes to remove the Christchurch shooting from Facebook Live.

You can read a hint of self-awareness between the lines here.

'yes, we have plenty of long-standing accounts who frequently use the hard n-word and ableist slurs just for the sake of it, or have swastikas and/or Hitler in their profile pictures, and one of our administrators uses an antisemitic one with a stereotypical Jew and money, but sadly we lack the resources to prevent those users from casually partaking in and moderating our site for years'

pseudo0 · 3 years ago
Isn't their point about the moderation response time when it comes to illegal material? It took Facebook 29 minutes to remove the Christchurch video, while this whole controversy appears to be focused on a post that was up for 14 minutes.

User-generated content is a really hard problem, and we need to set reasonable best practices and guidelines. A small volunteer-run forum won't have 24/7 staff looking for illegal content or handling reports. KiwiFarms isn't a very sympathetic test case, but this example will be considered the precedent going forward for what Cloudflare should remove. If major tech companies can't meet this bar, how can we expect the shrinking hobbyist segment of the internet to manage?

shadowgovt · 3 years ago
> how can we expect the shrinking hobbyist segment of the internet to manage?

By moderating early and often.

KF created an atmosphere where doxxing and harassment were tolerated, accepted, and lauded. The easiest way to survive as a hobbyist is "Don't do that." Set up clear policies that discourage harassment early and enforce them.

ethanbond · 3 years ago
Here’s how: Don’t allow user generated content unless you can effectively moderate it. That should go for FB as well. We don’t need to start from the assumption that these services must or even should exist. If you can’t moderate, don’t build something that allows strangers to put information in front of millions of other strangers!

Note that if you simply don’t want to moderate content (ie free speech absolutist), that’s a much different argument than you can’t do it effectively. If you want to and can’t do it, then don’t.

ZGDUwpqEWpUZ · 3 years ago
> ableist slurs

Are you suggesting that Cloudflare should drop sites that don't censor the word "retard"? How about "moron" or "stupid"?

There's a huge difference between insults and active shooters and it's strange to see them juxtaposed.

schleck8 · 3 years ago
No, I did not suggest that. Look at the big picture.
sweetheart · 3 years ago
Just because there is a huge difference doesn’t make using slurs innocent or harmless.

If a site is host to the culture that is normalizing genuine hate (and that does seem to be the case!) I say CF should absolutely drop them.

worldofmatthew · 3 years ago
That's false equivalence on your part. Downgrading a violent threat to compare it to shitposting is logically incorrect. The website points out that it took Facebook, a social media platform with enough money to do better, 29 minutes to ban a livestream of an active terrorist attack. A forum whose volunteer moderators took down a threat in around an hour is incredibly quick.
schleck8 · 3 years ago
> That's false equivalence on your part.

The livestream (which by the way was shared by the Kiwifarms owner) was spread on Facebook by many users.

Facebook deployed a fingerprinting solution to identify this video being reposted. Not just taking it down once, but taking down all submissions.

No one expects that from Kiwifarms because they don't need it anyway. No matter what they tell you, their comment stream isn't that big. It takes less than 30 seconds to find an account with Nazi symbolism in its username or profile picture, and such accounts persist for years. Tell me how that adds up, please.

junon · 3 years ago
There's a false equivalence you're missing.

Facebook has billions of users. Kiwi farms, presumably, has several orders of magnitude less.

j-krieger · 3 years ago
> yes, we have plenty of long-standing accounts who frequently use the hard n-word and ableist slurs just for the sake of it, have hooked crosses and/or hitler in their profile picture and one of our administrators uses an antisemitic one with a stereotypical jew and money

None of this is illegal. None of this warrants censorship. Twitter is not real. You can block traffic you don't like or log out. None of the above actively influences your life until you make the choice to let it.

tsukikage · 3 years ago
It's also perfectly legal for a provider to decide they don't want to host such content.
dale_glass · 3 years ago
If that's the case, why would keeping the content be important? If it's "not real", then there's no real loss if the content is removed.
