by cpncrunch 12 hours ago

I've noticed that in recent months, even apart from these outages, Cloudflare has been contributing to a general degradation and enshittification of the internet. I'm seeing a lot more "prove you're human" and "checking to make sure you're human" pages, and normally there's at the very least a delay of a few seconds before the site loads.

I don't think this is really helping the site owners. I suspect it's mainly about AI extortion:

https://blog.cloudflare.com/introducing-pay-per-crawl/

james2doyle 12 hours ago | [-23 more]

You call it extortion of the AI companies, but isn't stealing/crawling/hammering a site to scrape its content for resale just as nefarious? I would say Cloudflare is giving these site owners an option to protect their content and, as a byproduct, reduce the cost of subsidizing the thieves. They can choose to turn off the crawl protection; if they aren't turning it off, that tells you they want it, doesn't it?

cpncrunch 10 hours ago | [-22 more]

>You call it extortion of the AI companies, but isn’t stealing/crawling/hammering a site to scrape their content to resell just as nefarious?

You can easily block ChatGPT and most other AI scrapers if you want:

https://habeasdata.neocities.org/ai-bots
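
For example, a robots.txt along these lines opts out a few of the AI crawlers that publish their user agents (the linked page has a much longer list; this is only a sketch of the idea):

    # robots.txt at the site root
    # Each block opts one published AI crawler out of the whole site.
    User-agent: GPTBot
    Disallow: /

    User-agent: ClaudeBot
    Disallow: /

    User-agent: CCBot
    Disallow: /

    User-agent: PerplexityBot
    Disallow: /

    User-agent: Google-Extended
    Disallow: /

It only does anything for crawlers that actually read and honor the file, of course.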

james2doyle 10 hours ago | [-4 more]

This is just using robots.txt and asking "pretty please, don’t scrape me".

Here is an article (from TODAY) about the case where Perplexity is being accused of ignoring robots.txt: https://www.theverge.com/news/839006/new-york-times-perplexi...

If you think a robots.txt is the answer to stopping the billion-dollar AI machine from scraping you, I don’t know what to say.

Aeolun 3 hours ago | [-1 more]

If someone has a robots.txt and I want to request their page, but I want to do that in an automated way, should I open a browser to do it instead of issuing a curl request? What about if I ask Claude to fetch the page for me?

kentm 2 hours ago | [-0 more]

Respect the robots.txt and don’t do it?
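
Whether it's curl, a browser, or Claude doing the fetch doesn't really matter to the site; what matters is that the automation checks robots.txt first and backs off when the path is disallowed. As a minimal sketch using Python's standard library (the site URL and user-agent string here are just placeholders):

    # Minimal sketch: consult robots.txt before an automated fetch.
    # The site URL and user-agent string are placeholders.
    from urllib import robotparser, request

    SITE = "https://example.com"
    PAGE = SITE + "/some/page.html"
    USER_AGENT = "my-automation/0.1"

    rp = robotparser.RobotFileParser()
    rp.set_url(SITE + "/robots.txt")
    rp.read()  # fetch and parse the site's robots.txt

    if rp.can_fetch(USER_AGENT, PAGE):
        req = request.Request(PAGE, headers={"User-Agent": USER_AGENT})
        with request.urlopen(req) as resp:
            print(len(resp.read()), "bytes fetched")
    else:
        print("robots.txt disallows this path; skipping the fetch")

urllib.robotparser ships with the standard library, so a well-behaved bot has no real excuse to skip the check.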

cpncrunch 5 hours ago | [-1 more]

Yes, I was referring to legitimate companies, and Perplexity doesn't seem to be one of those.

albedoa 12 minutes ago | [-0 more]

Oh for sure. When he wrote of the AI companies that are "stealing/crawling/hammering", you thought he meant the legitimate ones that do honor robots.txt. That makes sense.

jacobgkau 9 hours ago | [-6 more]

I'm guessing you don't manage any production web servers?

robots.txt isn't even respected by all of the American companies. Chinese ones (which often also use what are essentially botnets in Latin America and the rest of the world to evade detection) certainly don't care about anything short of having their packets dropped.

cpncrunch 5 hours ago | [-4 more]

I have been managing production commercial web servers for 28 years.

Yes, there are various bots, and some of the large US companies such as Perplexity do indeed seem to be ignoring robots.txt.

Is that a problem? It's certainly not a problem for CPU or network bandwidth (the load is minimal). Yes, it may be an issue if you are concerned about scraping (which I'm not).

Cloudflare's "solution" is a much bigger problem that affects me multiple times daily (as a user of sites that use it), and those sites don't seem to need protection against scraping.

filleduchaos 4 hours ago | [-1 more]

It is rather disingenuous to backpedal from "you can easily block them" to "is that a problem? who even cares" when someone points out that you cannot in fact easily block them.

cpncrunch 4 hours ago | [-0 more]

I was referring to legitimate ones, which you can easily block. Obviously there are scammy ones as well, and yes, that is an issue, but for most sites I would say the Cloudflare cure is worse than the disease.

kvirani 4 hours ago | [-1 more]

Security almost always brings inconvenience (to everyone involved, including end users). That is part of its cost.

cpncrunch 4 hours ago | [-0 more]

What security issue is actually being solved here though?

chrneu 6 hours ago | [-0 more]

This is the equivalent of asking people not to speed on your street.

Sohcahtoa82 6 hours ago | [-1 more]

How are you this naive? Do you really think scrapers give a damn about your robots.txt?

cpncrunch 5 hours ago | [-0 more]

The legitimate ones do, which is what I was referring to. Obviously there are bastard ones as well.

mplewis 7 hours ago | [-4 more]

No you cannot! I blocked all of the user agents on a community wiki I run, and the traffic came back hours later masquerading as Firefox and Chrome. They just fucking lie to you and continue vacuuming your CPU.

cpncrunch 5 hours ago | [-3 more]

There shouldn't be any noticeable CPU hit from bots on a site like that. Are you sure it's not a DDoS?

Obviously it depends on the bot, and you can't block the scammy ones. I was really just referring to the major legitimate companies (which might not include Perplexity).

literalAardvark 4 hours ago | [-1 more]

There is a noticeable hit, there's also a noticeable cost, and it's not a DDoS.

Not all sites can have full caching, we've tried.

cpncrunch 4 hours ago | [-0 more]

I was referring to the community wiki.

literalAardvark 6 hours ago | [-1 more]

Tell me you don't run a site without telling me you don't run a site

cpncrunch 5 hours ago | [-0 more]

Tell me you make incorrect assumptions without telling me you make incorrect assumptions. (Yes, you're incorrect.)

gblargg 5 hours ago | [-0 more]

There are more and more sites I can't even visit because this "prove you're human" check isn't compatible with older web browsers, even though the website it's blocking is.

NooneAtAll3 12 hours ago | [-0 more]

it can't even spy on us silently, damn