conesus · a year ago
I run NewsBlur[0] and I've been battling this issue of NewsBlur's fetchers getting 403s across the web for months now. My users are revolting and asking for refunds. I've tried emailing dozens of site owners and publishers, and only two of them have done the work of whitelisting their RSS feed. It's maddening and is having a real negative effect on NewsBlur.

NewsBlur is an open-source RSS news reader (full source available at [1]), something we should all agree is necessary to support the open web! But Cloudflare blocking all of my feed fetchers is bizarre behavior. And we've been on the verified bots list for years, but it hasn't made a difference.

Let me know what I can do. NewsBlur publishes a list of the IPs it uses for feed fetching, which I've shared with Cloudflare, but it hasn't made a difference.

I'm hoping Cloudflare uses the IP address list that I publish and adds them to their allowlist so NewsBlur can keep fetching (and archiving) millions of feeds.

[0]: https://newsblur.com

[1]: https://github.com/samuelclay/NewsBlur
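
For illustration, a rough sketch (not NewsBlur's or Cloudflare's actual tooling) of how a published fetcher IP list could be turned into a Cloudflare allowlist expression; the list URL below is a placeholder, not NewsBlur's real endpoint.

```python
# Hypothetical sketch: fetch a published list of fetcher IPs (one per line) and
# emit a Cloudflare Rules-language expression a site owner could paste into a
# WAF custom rule with a "Skip" action. The URL below is a placeholder.
import requests

IP_LIST_URL = "https://example.com/newsblur-fetcher-ips.txt"  # hypothetical location

def build_skip_expression(url: str) -> str:
    resp = requests.get(url, timeout=10)
    resp.raise_for_status()
    ips = [line.strip() for line in resp.text.splitlines() if line.strip()]
    # Cloudflare's rule language accepts a space-separated set: ip.src in {a b c}
    return "(ip.src in {%s})" % " ".join(ips)

if __name__ == "__main__":
    print(build_skip_expression(IP_LIST_URL))
```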

srik · a year ago
RSS is an essential component of modern web publishing, and it feels scary to see how one company's inconsideration might harm its already fragile future. One day Cloudflare will get big enough to be subject to antitrust regulation, and this instance will be a strong data point working against them.
immibis · a year ago
It's not one company - it's an individual decision by every blog operator to block their own readers by signing up for Cloudflare.
01HNNWZ0MV43FF · a year ago
It's not essential; I don't know anyone in real life who uses it.

I run an RSS feed on my blog out of principle and I don't bother reading other feeds I'm subscribed to

When I'm bored I come here, I go on Mastodon, and gods save me, I go on Reddit

AyyEye · a year ago
Three consenting parties trying to use their internet blocked by a single intermediary that's too big to care is just gross. It's the web we deserve.
eddythompson80 · a year ago
> Three consenting parties

Clearly they are not 100% consenting, or at best one of them (the content publisher) is misconfiguring/misunderstanding their setup. They enabled RSS on their service, then set up a rule to require human verification for accessing that RSS feed.

It's like a business advertising a singles-only area, then hiring a security company and telling them to only allow couples into the building.

p4bl0 · a year ago
I've been a paying NewsBlur user since the downfall of Google Reader and I'm very happy with it. Thank you for NewsBlur!
brightball · a year ago
I use Cloudflare and have home-built RSS feeds on my site. If you've run into any issues on mine, I'll be happy to look into them.

https://www.brightball.com/

miohtama · a year ago
Thank you for the hard work.

Newsblur was the first SaaS I could afford as a student. I have been a subscriber for something like 20 years now. And I will keep doing it to the grave. Best money ever spent.

p4bl0 · a year ago
> I have been subscriber for something like 20 years now.

NewsBlur is "only" 15 years old (and GReader was there up until 11 years ago).

renaissancec · a year ago
Can't recommend Newsblur enough. I have been a customer since Fastladder was shut down. I love that it integrates pinboard.in into the web interface for bookmarking articles. An essential part of my web productivity flow.
hedora · a year ago
Maybe pay for residential proxy network access?

I used to get my internet from a small local ISP, and IP blacklisting basically meant no one in our zip code could have reliable internet.

These days, the 10-20% of us with an unobstructed sky view switched to starlink and didn’t look back.

The thing is, both ISPs use CGNAT, but there's no way Cloudflare is going to block Musk like they do the mom-and-pop shop.

Anyway, apparently residential proxy networks work pretty well if you hit a spurious IP block. I've had good luck with Apple Private Relay too.

I’m hoping service providers realize how useless and damaging ip blocking is to their reputations, but I’m not holding my breath. Sometimes I think the endgame is just routing 100% of residential traffic through 8.8.8.8.

wooque · a year ago
You can just bypass it with a library like cloudscraper or hrequests.
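
For what it's worth, a minimal sketch with cloudscraper; whether it actually clears a given site's challenge depends on that site's Cloudflare settings.

```python
# Minimal sketch using cloudscraper (pip install cloudscraper). Treat this as a
# workaround of last resort; it may or may not clear a given Cloudflare challenge.
import cloudscraper

scraper = cloudscraper.create_scraper()  # behaves like a requests.Session
resp = scraper.get("https://example.com/feed.xml", timeout=30)
print(resp.status_code, resp.headers.get("Content-Type"))
```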
kevincox · a year ago
I dislike the advice of whitelisting specific readers by user agent. Not only is this endless manual work that will only solve the problem for a subset of users, but it is also easy to bypass by malicious actors. My recommendation would be to create a page rule that disables bot blocking for your feeds. This will fix the problem for all readers with no ongoing maintenance.

If you are worried about DoS attacks that may hammer on your feeds, you can use the same configuration rule to ignore the query string for cache keys (if your feed doesn't use query strings) and to override the caching settings if your server doesn't set the proper headers. This way Cloudflare will cache your feed and you can serve any number of visitors without putting load onto your origin.
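
A hedged sketch of both suggestions using the classic Page Rules API (newer Cloudflare accounts would use Cache Rules and Configuration Rules instead); the action IDs below follow the Page Rules docs as I understand them and should be double-checked before use.

```python
# Sketch only: create a page rule that relaxes security on the feed URL and caches
# it at the edge. Endpoint and action ids are from Cloudflare's classic Page Rules
# API as I understand it; verify against current docs before relying on this.
import os
import requests

ZONE_ID = os.environ["CF_ZONE_ID"]
API_TOKEN = os.environ["CF_API_TOKEN"]

rule = {
    "targets": [{
        "target": "url",
        "constraint": {"operator": "matches", "value": "example.com/feed*"},
    }],
    "actions": [
        {"id": "security_level", "value": "essentially_off"},  # stop challenging readers
        {"id": "cache_level", "value": "cache_everything"},    # serve the feed from cache
        {"id": "edge_cache_ttl", "value": 3600},               # keep it for an hour
    ],
    "status": "active",
}

resp = requests.post(
    f"https://api.cloudflare.com/client/v4/zones/{ZONE_ID}/pagerules",
    headers={"Authorization": f"Bearer {API_TOKEN}"},
    json=rule,
)
resp.raise_for_status()
print(resp.json().get("success"))
```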

As for Cloudflare fixing the defaults, it seems unlikely to happen. It has been broken for years; even Cloudflare's own blog is affected. They have been "actively working" on fixing it for at least 2 years according to their VP of product: https://news.ycombinator.com/item?id=33675847

benregenspan · a year ago
AI crawlers have changed the picture significantly and in my opinion are a much bigger threat to the open web than Cloudflare. The training arms race has drastically increased bot traffic, and the value proposition behind that bot traffic has inverted. Previously many site operators could rely on the average automated request being net-beneficial to the site and its users (outside of scattered, time-limited DDoS attacks) but now most of these requests represent value extraction. Combine this with a seemingly related increase in high-volume bots that don't respect robots.txt and don't set a useful User-Agent, and using a heavy-handed firewall becomes a much easier business decision, even if it may target some desirable traffic (like valid RSS requests).
vaylian · a year ago
I don't know if Cloudflare offers it, but whitelisting the URL of the RSS feed would be much more effective than filtering user agents.
derkades · a year ago
Yes it supports it, and I think that's what the parent comment was all about
jks · a year ago
Yes, you can do it with a "page rule", which the parent comment mentioned. The Cloudflare free tier has a budget of three page rules, which might mean you have to bundle all your RSS feeds in one folder so they share a path prefix.

a-french-anon · a year ago
And for those of us using sfeed, the default UA is Curl's.
wenbin · a year ago
At Listen Notes, we rely heavily on Cloudflare to manage and protect our services, which cater to both human users and scripts/bots.

One particularly effective strategy we've implemented is using separate subdomains for services designed for different types of traffic, allowing us to apply customized firewall and page rules to each subdomain.

For example:

- www.listennotes.com is dedicated to human users. E.g., https://www.listennotes.com/podcast-realtime/

- feeds.listennotes.com is tailored for bots, providing access to RSS feeds. E.g., https://feeds.listennotes.com/listen/wenbin-fangs-podcast-pl...

- audio.listennotes.com serves both humans and bots, handling audio URL proxies. E.g., https://audio.listennotes.com/e/p/1a0b2d081cae4d6d9889c49651...

This subdomain-based approach enables us to fine-tune security and performance settings for each type of traffic, ensuring optimal service delivery.
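
To make the split concrete, a sketch (my reading of the approach above, not Listen Notes' actual configuration) of per-hostname expressions in Cloudflare's Rules language, each of which would back a different firewall or cache rule.

```python
# Illustrative only: map each hostname to the kind of policy described above.
# The expressions use Cloudflare's Rules language; the settings are assumptions.
POLICIES = {
    'http.host eq "www.listennotes.com"':   "full bot protection + challenges",
    'http.host eq "feeds.listennotes.com"': "skip challenges, cache aggressively",
    'http.host eq "audio.listennotes.com"': "skip challenges, rate-limit instead",
}

for expression, policy in POLICIES.items():
    print(f"{expression:45s} -> {policy}")
```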

kevindamm · a year ago
Where do you put your sitemap (or its equivalent)? Looking at the site, I don't notice one in the metadata but I do see a "site index" on the www subdomain, though possibly that's intended for humans not bots? I think the usual recommendation is to have a sitemap per subdomain and not mix them, but clearly they're meant for bots not humans...
wenbin · a year ago
Great question.

We only need to provide the sitemap (with custom paths, not publicly available) in a few specific places, like Google Search Console. This means the rules for managing sitemaps are quite manageable. It’s not a perfect setup, but once we configure it, we can usually leave it untouched for a long time.

amatecha · a year ago
I get blocked from websites with some regularity, running Firefox with strict privacy settings, "resist fingerprinting" etc. on OpenBSD. They just give a 403 Forbidden with no explanation, but it's only ever on sites fronted by CloudFlare. Good times. Seems legit.
wakeupcall · a year ago
Also running FF with strict privacy settings and several blockers. The annoyances are constantly increasing. Cloudflare, captchas, "we think you're a bot", constantly recurring cookie popups, and absurd requirements are making me hate most of the websites and services I hit nowadays.

I tried for a long time to get around it, but now when I hit a website like this I just close the tab and don't bother anymore.

afh1 · a year ago
Same, but for VPN (either corporate or personal). Reddit blocks it completely and requires you to sign in, but even the sign-in page is "network restricted"; LinkedIn shows you a captcha but gives an error when submitting the result (several reports online); and overall a lot of 403s. All go magically away when turning off the VPN. Companies, especially adtechs like Reddit and LinkedIn, do NOT want you to browse privately, to the point they'd rather you not use their website at all unless it's without a condom.
anilakar · a year ago
Heck, I cannot even pass ReCAPTCHA nowadays. No amount of clicking buses, bicycles, motorcycles, traffic lights, stairs, crosswalks, bridges and fire hydrants will suffice. The audio transcript feature is the only way to get past a prompt.
Terr_ · a year ago
The worst part is that a lot of it is mysteriously capricious with no recourse.

Like, you visit Site A too often while blocking some javascript, and now Site B doesn't work for no apparent reason, and there's no resolution path. Worse, the bad information may become permanent if an owner uses it to taint your account, again with no clear reason or appeal.

I suspect Reddit effectively killed my 10+ year account (appeal granted, but somehow still shadowbanned) because I once used the "wrong" public wifi to access it.

lioeters · a year ago
Same here. I occasionally encounter websites that won't work with ad blockers, sometimes with Cloudflare involved, and I don't even bother with those sites anymore. Same with sites that display a cookie "consent" form without an option to not accept. I reject the entire site.

Site owners probably don't even see these bounced visits, and it's such a tiny percentage of visitors who do this that it won't make a difference. Meh, it's just another annoyance to be able to use the web on our own terms.

orbisvicis · a year ago
I have to solve captchas for Amazon while logged into my Amazon account.
doctor_radium · a year ago
Hey, same here! For better or worse, I use Opera Mini for much of my mobile browsing, and it fares far worse than Firefox with uBlock Origin and ResistFingerprinting. I complained about this roughly a year ago on a similar HN thread, on which a Cloudflare rep also participated. Since then something changed, but both sides being black boxes, I can't tell if Cloudflare is wising up or Mini has stepped up. I still get the same challenge pages, but Mini gets through them automatically now, more often than not.

But not always. My most recent stumbling block is https://www.napaonline.com. Guess I'm buying oxygen sensors somewhere else.

SoftTalker · a year ago
Same. If a site doesn't want me there, fine. There's no website that's so crucial to my life that I will go through those kinds of contortions to access it.
JohnFen · a year ago
> when I hit a website like this just close the tab and don't bother anymore.

Yeah, that's my solution as well. I take those annoyances as the website telling me that they don't want me there, so I grant them their wish.

amanda99 · a year ago
Yes and the most infuriating thing is the "we need to verify the security of your connection" text.
BiteCode_dev · a year ago
Cloudflare is a fantastic service with an unmatched value proposition, but it's unfortunately slowly killing web privacy, with thousands of paper cuts.

Another problem is that "resist fingerprinting" prevents some canvas processing, and many websites like Bluesky, LinkedIn, or Substack use canvas to handle image uploads, so your images appear as stripes of pixels.

Then you have mobile apps that just don't run if you don't have a google account, like chatgpt's native app.

I understand why people give up, trying to fight for your privacy is an uphill battle with no end in sight.

madeofpalk · a year ago
> Then you have mobile apps that just don't run if you don't have a google account, like chatgpt's native app.

Is that true? At least on iOS you can log into the ChatGPT app with the same email/password as the website.

I never use Google login for stuff and ChatGPT works fine for me.

pjc50 · a year ago
The privacy battle has to be at the legal layer. GDPR is far from perfect (bureaucratic and unclear with weak enforcement), but it's a step in the right direction.

In an adversarial environment, especially with both AI scrapers and AI posters, websites have to be able to identify and ban persistent abusers. Which unfortunately implies having some kind of identification of everybody.

KomoD · a year ago
> Then you have mobile apps that just don't run if you don't have a google account, like chatgpt's native app.

That's not true, I use ChatGPT's app on my phone without logging into a Google account.

You don't even need any kind of account at all to use it.

neilv · a year ago
Similar here. It's not unusual to be blocked from a site by CloudFlare when I'm running Firefox (either ESR or current release) on Linux.

I suspect that people operating Web sites have no idea how many legitimate users are blocked by CloudFlare.

And, based on the responses I got when I contacted two of the companies whose sites were chronically blocked by CloudFlare for months, it seemed like it wasn't worth any employee's time to try to diagnose.

Also, I'm frequently blocked by CloudFlare when running Tor Browser. Blocking by Tor exit node IP address (if that's what's happening) is much more understandable than blocking Firefox from a residential IP address, but still makes CloudFlare not a friend of people who want or need to use Tor.

jorams · a year ago
> I suspect that people operating Web sites have no idea how many legitimate users are blocked by CloudFlare.

I sometimes wonder if all Cloudflare employees are on some kind of whitelist that makes them not realize the ridiculous false positive rate of their bot detection.

pjc50 · a year ago
> CloudFlare not a friend of people who want or need to use Tor

The adversarial aspect of all this is a problem: P(malicious|Tor) is much higher than P(malicious|!Tor)

johnklos · a year ago
I've had several discussions that were literally along the lines of, "we don't see what you're talking about in our logs". Yes, you don't - traffic is blocked before it gets to your servers!
lovethevoid · a year ago
What are some examples? I've been running ff on linux for quite some time now and am rarely blocked. I just run it with ublock origin.
amatecha · a year ago
Yeah, I've contacted numerous owners of personal/small sites and they are usually surprised, and never have any idea why I was blocked (not sure if it's an aspect of CF not revealing the reason, or the owner not knowing how to find that information). One or two allowlisted my IP but that doesn't strike me as a solution.

I've contacted companies about this and they usually just tell me to use a different browser or computer, which is like "duh, really?", but also doesn't solve the problem for me or anyone else.

amatecha · a year ago
Nice, today I found I'm blocked from subway.com, that's cool. Good bot detection, my brand new Debian Linux install with Firefox must be really suspicious.
mzajc · a year ago
I randomize my User-Agent header and many websites outright block me, most often with no captcha and not even a useless error message.

The most egregious is Microsoft (just about every Microsoft service/page, really), where all you get is a "The request is blocked." and a few pointless identifiers listed at the bottom, purely because it thinks your browser is too old.

CF's captcha page isn't any better either, usually putting me in an endless loop if it doesn't like my User-Agent.

pushcx · a year ago
Rails is going to make this much worse for you. All new apps include naive agent sniffing and block anything “old” https://github.com/rails/rails/pull/50505
charrondev · a year ago
Are you sending an actual random string as your UA or sending one of a set of actual user agents?

You're best off just picking real ones. We got hit by a botnet sending 10k+ requests from 40 different ASNs with thousands of different IPs. The only way we were able to identify/block the traffic was by excluding user agents matching some regex (for whatever reason they weren't spoofing real user agents but weren't sending actual ones either).
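
A toy sketch of this kind of user-agent screening; the regex below is purely illustrative, not the pattern actually used.

```python
# Toy illustration of screening on user-agent shape: real browsers and well-known
# tools announce a product token, while the junk traffic described above did not.
import re

# Purely illustrative pattern, not the regex actually used.
SUSPECT_UA = re.compile(r"^(?!Mozilla/|curl/|Wget/|\w+[Bb]ot\b)")

def looks_suspect(user_agent: str) -> bool:
    return bool(SUSPECT_UA.match(user_agent or ""))

print(looks_suspect("Mozilla/5.0 (X11; Linux x86_64; rv:128.0) Gecko/20100101"))  # False
print(looks_suspect("xg7Kq-client 0.1"))                                           # True
```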

lovethevoid · a year ago
Not sure a random UA extension is giving you much privacy. Try your results on EFF's Cover Your Tracks and see. A random UA would provide a lot of identifying information despite being randomized.

From experience, a lot of the things people do in hopes of protecting their privacy only makes them far easier to profile.

pessimizer · a year ago
Also, Cloudflare won't let you in if you forge your referer (it's nobody's business what site I'm coming from.) For years, you could just send the root of the site you were visiting, then last year somebody at Cloudflare flipped a switch and took a bite out of everyone's privacy. Now it's just endless reloading captchas.
zamadatix · a year ago
Why go through that hassle instead of just removing the referer?
philsnow · a year ago
Ah, maybe this is what's happening to me. I use Firefox with uBlock Origin, Privacy Badger, multi-account containers, and temporary containers.

Whenever I click a link to another site, I get a new tab in either a pre-assigned container or else in a "tmpNNNN" container, and I think, either by default or because I have it configured that way, it omits Referer headers on those new-tab navigations.

DrillShopper · a year ago
Maybe after the courts break up Amazon the FTC can turn its eye to Cloudflare.
gjsman-1000 · a year ago
A. Do you think courts give a darn about the 0.1% of users that are still using RSS? We might as well care about the 0.1% of users who want the ability to set every website's background color to purple with neon green anchor tags. RSS never caught on as a standard to begin with, peaking at 6% adoption by 2005.

B. Cloudflare has healthy competition with AWS, Akamai, Fastly, Bunny.net, Mux, Google Cloud, Azure, you name it, there's a competitor. This isn't even an Apple vs Google situation.

anthk · a year ago
Or any Dillo user, with a PSP User Agent which is legit for small displays.
anal_reactor · a year ago
On my phone Opera Mobile won't be allowed into some websites behind CloudFlare, most importantly 4chan
dialup_sounds · a year ago
4chan's CF config is so janky at this point it's the only site I have to use a VPN for.
Jazgot · a year ago
My RSS reader was blocked on kvraudio.com by Cloudflare. This issue wasn't solved for months. I simply stopped reading anything on kvraudio. Thank you, Cloudflare!
KPGv2 · a year ago
Reddit seems to do this to me (sometimes) when I use Zen browser. If I switch over to Safari or Chrome, the site always works great.
kjkjadksj · a year ago
Reddit has been bad about it as of late too
viraptor · a year ago
I know it's not a solution for you specifically here, but if anyone has access to the CF enterprise plan, they can report specific traffic as non-bot and hopefully improve the situation. They need to have access to the "Bot Management" feature though. It's a shitty situation, but some of us here can push back a little bit - so do it if you can.

And yes, it's sad that "making the internet work again" is behind an expensive paywall.

meeb · a year ago
The issue here is that RSS readers are bots. Obviously perfectly sensible and useful bots, but they're not "real people using a browser". I doubt you could get RSS readers listed on Cloudflare's "good bots" list either, which would let them past the default bot protection feature, given they'll all be running off random residential IPs.
jasonlotito · a year ago
Cloudflare has always been a dumpster fire in usability. The number of times it would block me in that way was enough to make me seriously question the technical knowledge of anyone who used it. It's a dumpster fire. Friends don't let friends use Cloudflare. To me, it's like the Spirit Airlines of CDNs.

Sure, tech wise it might work great, but from your users perspective: it's trash.

immibis · a year ago
It's got the best vendor lock-in enshittification story - it's free - and that's all that matters.
jgrahamc · a year ago
My email is jgc@cloudflare.com. I'd like to hear from the owners of RSS readers directly on what they are experiencing. Going to ask team to take a closer look.
kalib_tweli · a year ago
There are email obfuscation and managed challenge script tags being injected into the RSS feed.

You simply shouldn't have any challenges whatsoever on an RSS feed. They're literally meant to be read by a machine.

kalib_tweli · a year ago
I confirmed that if you explicitly set the Content-Type response header to application/rss+xml it seems to work with Cloudflare Proxy enabled.

The issue here is that Cloudflare's content type check is naive. And the fact that CF is checking the content-type header directly needs to be made more explicit OR they need to do a file type check.
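
A minimal sketch of what that looks like on the origin side, here with Flask; the route, feed body, and cache header are placeholders, and the Cloudflare behavior is per the observation above rather than anything documented.

```python
# Minimal Flask sketch: serve the feed with an explicit application/rss+xml
# Content-Type rather than letting it default to text/html or text/xml.
from flask import Flask, Response

app = Flask(__name__)

def render_feed() -> str:
    # Placeholder; a real site would generate this from its posts.
    return '<?xml version="1.0"?><rss version="2.0"><channel><title>Example</title></channel></rss>'

@app.route("/rss/")
def rss():
    return Response(
        render_feed(),
        mimetype="application/rss+xml",                    # the explicit content type
        headers={"Cache-Control": "public, max-age=900"},  # also lets an edge cache it
    )
```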

o11c · a year ago
Even outside of RSS, the injected scripts often make internet security significantly worse.

Since the user-agent has no way to distinguish scripts injected by cloudflare from scripts originating from the actual website, in order to pass the challenge they are forced to execute arbitrary code from an untrusted party. And malicious Javascript is practically ubiquitous on the general internet.

badlibrarian · a year ago
Thank you for showing up here and being open to feedback. But I have to ask: shouldn't Cloudflare be running and reviewing reports to catch this before it became such a problem? It's three clicks in Tableau for anyone who cares, and clearly nobody does. And this isn't the first time something like this has slipped through the cracks.

I tried reaching out to Cloudflare with issues like this in the past. The response is dozens of employees hitting my LinkedIn page yet no responses to basic, reproducible technical issues.

You need to fix this internally as it's a reputational problem now. Less screwing around using Salesforce as your private Twitter, more leadership in triage. Your devs obviously aren't motivated to fix this stuff independently and for whatever reason they keep breaking the web.

015a · a year ago
The reality that HackerNews denizens need to accept, in this case and in a more general form, is: RSS feeds are not popular. They aren't just unpopular in the way that, say, Peacock is unpopular relative to Netflix; they're truly unpopular, used regularly by a number of people that could fit in an American football stadium. There are younger software engineers at Cloudflare who have never heard the term "RSS" before, and have no notion of what it is. It will probably be a dead technology in ten years.

I'm not saying this to say it's a good thing; it isn't.

Here's something to consider though: Why are we going after Cloudflare for this? Isn't the website operator far, far more at fault? They chose Cloudflare. They configure Cloudflare. They, in theory, publish an RSS feed, which is broken because of infrastructure decisions they made. You're going after Ryobi because you've got a leaky pipe. But beyond that: isn't this tool Cloudflare publishes doing exactly what the website operators intended it to do? It blocks non-human traffic. RSS clients are non-human traffic. Maybe the reason you don't want to go after the website operators is because you know you're in the wrong? Why can't these RSS clients detect when they encounter this situation, and prompt the user with a captive portal to get past it?

viraptor · a year ago
It's cool and all that you're making an exception here, but how about including a "no, really, I'm actually a human" link on the block page rather than giving the visitor a puzzle: how to report the issue to the page owner (hard on its own for normies) if you can't even load the page. This is just externalising issues that belong to the Cloudflare service.
jgrahamc · a year ago
I am not trying to "make an exception", I'm asking for information external to Cloudflare so I can look at what people are experiencing and compare with what our systems are doing and figure out what needs to improve.
doctor_radium · a year ago
I had a conversation with a web site owner about this once. There apparently is such a feature, a way for sites to configure a "Please contact us here if you're having trouble reaching our site" page...usage of which I assume Cloudflare could track and then gain better insight into these issues. The problem? It requires a Premium Plan.
methou · a year ago
Some clients are more like a bot/service; imagine Google Reader, which fetches and caches content for you. The client I'm currently using is Miniflux, and it also works in this way.

I understand that there are some more interactive RSS readers, but from personal experience it's more like "hey, I'm a good bot, let me in".

is_true · a year ago
Maybe when you detect URLs that return the RSS MIME type, notify the owner of the site/CF account that it might be a good idea to allow bots on those URLs.

Ideally you could make it a simple switch in the config, something like: "Allow automated access on RSS endpoints".

prmoustache · a year ago
It is not only RSS reader users that are affected. Any user with an extension to block trackers gets regularly denied access to websites or has to deal with tons of captchas.

kevincox · a year ago
I'll mail you as well but I think public discussion is helpful. Especially since I have seen similar responses to this over the years and it feels very disingenuous. The problem is very clear (Cloudflare serves 403 blocks to feed readers for no reason) and you have all of the logs. The solution is maybe not trivial, but I fail to see how the perspective of someone seeing a 403 block is going to help much. This just starts to sound like a way to seem responsive without actually doing anything.

From the feed reader perspective it is a 403 response. For example my reader has been trying to read https://blog.cloudflare.com/rss/ and the last successful response it got was on 2021-11-17. It has been backing off due to "errors" but it still is checking every 1-2 weeks and gets a 403 every time.

This obviously isn't limited to the Cloudflare blog; I see it on many sites "protected by" (or in this case broken by) Cloudflare. I could tell you what public cloud IPs my reader comes from or which user-agent it uses, but that is beside the point. This is a URL which is clearly intended for bots, so it shouldn't be bot-blocked by default.

When people reach out to customer support we tell them that this is a bug on the site's end and there isn't much we can do. They can try contacting the site owner, but this is most likely the default configuration of Cloudflare causing problems that the owner isn't aware of. I often recommend using a service like FeedBurner to proxy the request, as these services seem to be on the whitelist of Cloudflare and other scraping-prevention firewalls.

I think the main solution would be to detect intended-for-robots content and exclude it from scraping prevention by default (at least to a huge degree).

Another useful mechanism would be to allow these to be accessed when the target page is cacheable, as the cache will protect the origin from overload-type DoS attacks anyway. Some care needs to be taken to ensure that adding a ?bust={random} query parameter can't break through to the origin, but this would be a powerful tool for endpoints that need protection from overload but not against scraping (like RSS feeds). Unfortunately cache headers for feeds are far from universal, so this wouldn't fix all feeds on its own. (For example the Cloudflare blog's feed doesn't set any caching headers and is labeled as `cf-cache-status: DYNAMIC`.)
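
To make the cache-key point concrete, a small illustration (not Cloudflare's implementation) of keying the cache on the path alone, so ?bust={random} requests collapse onto one cached entry instead of each reaching the origin.

```python
# Illustration only: if the edge keys the cache on host + path and ignores the
# query string, every "?bust=..." variant maps to the same cached feed entry.
from urllib.parse import urlsplit

def feed_cache_key(url: str) -> str:
    parts = urlsplit(url)
    return f"{parts.netloc}{parts.path}"  # query string and fragment ignored

assert feed_cache_key("https://blog.example.com/rss/?bust=12345") == \
       feed_cache_key("https://blog.example.com/rss/")
print("cache-busting query ignored")
```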

quinncom · a year ago
Cloudflare-enabled websites have had this issue for years.[1] The problem is that website owners are not educated enough to understand that URLs meant for bots should not have Cloudflare's bot blocker enabled.

Perhaps a solution would be for Cloudflare to have default page rules that disable bot-blocking features for common RSS feed URLs? Or pop up a notice with instructions on how to create these page rules for users who appear to have RSS feeds on their website?

[1] Here is Overcast’s owner raising the issue in 2022: https://x.com/OvercastFM/status/1578755654587940865
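
If Cloudflare ever did ship such defaults, the path patterns might look something like the list below; this is a guess at common conventions, not anything Cloudflare publishes.

```python
# Hypothetical "probably a feed" path patterns Cloudflare could exempt by default.
# The list is a guess at common conventions, not anything Cloudflare publishes.
COMMON_FEED_PATHS = [
    "/feed", "/feed/", "/feed.xml",
    "/rss", "/rss/", "/rss.xml",
    "/atom.xml", "/index.xml",
    "/feeds/*",
]

for pattern in COMMON_FEED_PATHS:
    print(pattern)
```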

erikrothoff · a year ago
As the owner of an RSS reader I love that they are making this more public. 30% of our support requests are "my feed doesn't work". It sucks that the only thing we can say is "contact the site owner, it's their firewall". And to be fair, it's not only Cloudflare; so many different firewall setups cause issues. It's ironic that a public API endpoint meant for bots is blocked for being a bot.
belkinpower · a year ago
I maintain an RSS reader for work and Cloudflare is the bane of my existence. Tons of feeds will stop working at random and there’s nothing we can do about it except for individually contacting website owners and asking them to add an exception for their feed URL.
stanislavb · a year ago
I was recently contacted by one of my website users as their RSS reader was blocked by Cloudflare.
sammy2255 · a year ago
Unfortunately it's not really Cloudflare but webadmins who have configured it to block everything that's not a browser, whether unknowingly or not.
afandian · a year ago
If Cloudflare offers a product, for a particular purpose, that breaks the existing conventions of that purpose, then it's on Cloudflare.
nirvdrum · a year ago
I contend this wasn't an issue prior to Cloudflare making that an option. Sure, some IDS would block some users, and geo blocks have been around forever. But Cloudflare is so prolific and makes it so easy to block things inadvertently that I don't think they get a pass by blaming the downstream user.

It’s particularly frustrating that they give their own WARP service a pass. I’ve run into many sites that will block VPN traffic, including iCloud Privacy Relay, but WARP traffic goes through just fine.

elwebmaster · a year ago
Using Cloudflare on your website could be blocking Safari users, Chrome users, or just any users. It's totally broken. They have no way of measuring the false positives. Website owners are paying for it in lost revenue. And poor users lose access through no fault of their own. Until some C-level exec at a BigTech randomly gets blocked and makes noise. But even then, Cloudflare will probably just whitelist that specific domain/IP. It is very interesting how I have never been blocked when trying to access Cloudflare itself, only blocked on their customers' sites.