

Are you talking about Anubis? Because you’re very clearly wrong.
And now that I think about it, regardless of which approach you were talking about, that’s some impressive arrogance to assume that everyone involved other than you was a complete idiot.
ETA:
Ahh, looking at your post history, I see you misunderstand why scrapers use a common user agent, and are confused about what a general increase in cost-per-page means to people who do bulk scraping.
Bruh, when I said “you misunderstand why scrapers use a common user agent” I wasn’t asking for further proof.
Requests that follow an obvious bulk-scraper pattern, with user agents that almost certainly aren’t regular humans, are trivially easy to handle using decades-old techniques, which is why scrapers will not start using curl user agents.
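Just to spell out what I mean by decades-old techniques, here’s a toy sketch in Python (nobody’s actual production code; the UA list and thresholds are made up) of flagging non-browser user agents and per-IP request bursts:

    # Toy sketch: flag requests whose user agent looks like a non-browser
    # client, or whose source IP is hitting the site faster than a human
    # plausibly would. Thresholds are illustrative, not recommendations.
    import re
    import time
    from collections import defaultdict, deque

    NON_BROWSER_UA = re.compile(r"curl|wget|python-requests|go-http-client|libwww", re.I)
    WINDOW_SECONDS = 60
    MAX_REQUESTS_PER_WINDOW = 120  # made-up threshold, tune per site

    recent_hits = defaultdict(deque)  # ip -> timestamps of recent requests

    def should_block(ip, user_agent, now=None):
        now = time.time() if now is None else now
        hits = recent_hits[ip]
        hits.append(now)
        # drop timestamps that have fallen out of the window
        while hits and now - hits[0] > WINDOW_SECONDS:
            hits.popleft()
        if NON_BROWSER_UA.search(user_agent or ""):
            return True
        return len(hits) > MAX_REQUESTS_PER_WINDOW

Anything that trips a filter like that gets rate-limited or blocked outright, which is exactly why scrapers go out of their way to look like ordinary browsers.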
See, the thing with blocking AI scraping is that you can actually see it work by looking at the logs. I’m guessing you don’t run any sites that get much traffic, or you’d be able to see this too. Its efficacy is obvious.
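For example, a throwaway tally like this (a toy sketch; “access.log” is a hypothetical path and a combined-format log is assumed) is all it takes to watch scraper user agents fall off a cliff after a challenge goes up:

    # Toy sketch: count daily requests per user agent from a combined-format
    # access log. After deploying a challenge, the heavy scraper UAs' daily
    # counts visibly collapse while browser traffic stays flat.
    import re
    from collections import Counter

    LINE = re.compile(r'\[(\d{2}/\w{3}/\d{4}):[^\]]+\] "[^"]*" \d+ \S+ "[^"]*" "([^"]*)"')

    per_day_ua = Counter()
    with open("access.log", encoding="utf-8", errors="replace") as fh:
        for line in fh:
            m = LINE.search(line)
            if m:
                day, ua = m.groups()
                per_day_ua[(day, ua)] += 1

    for (day, ua), count in per_day_ua.most_common(20):
        print(f"{day}  {count:>8}  {ua[:60]}")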
Sure, scrapers could start keeping extra state or brute-forcing hashes, but at the scale they’re working at that becomes painfully expensive, and the effort required to raise the challenge difficulty is minimal if it becomes apparent that scrapers are getting through. Which will be very obvious if it happens.
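For the hash part, the general shape is something like this toy sketch (not Anubis’s exact protocol, and solve/verify are names I made up for illustration): the client has to find a nonce whose hash starts with a given number of zero hex digits, so expected work grows 16x per extra digit while the server still only does one hash to check it.

    # Toy sketch of a hash-based challenge: client burns CPU finding a nonce,
    # server verifies it with a single hash. Raising `difficulty` by one digit
    # multiplies the client's expected work by 16.
    import hashlib
    import itertools

    def solve(challenge, difficulty):
        target = "0" * difficulty
        for nonce in itertools.count():
            digest = hashlib.sha256(f"{challenge}{nonce}".encode()).hexdigest()
            if digest.startswith(target):
                return nonce

    def verify(challenge, nonce, difficulty):
        digest = hashlib.sha256(f"{challenge}{nonce}".encode()).hexdigest()
        return digest.startswith("0" * difficulty)

One page behind a challenge like that is nothing; millions of pages, refetched every time the content changes, is a real compute bill. And bumping the difficulty is a one-number config change for the site operator.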
Presumably you haven’t had much experience with AI scrapers. They’re not a “one run and done” type thing, especially for sites with frequently changing content, like this one.
I don’t want to seem rude, but you appear to be speaking from a position of considerable ignorance, dismissing the work of people who actually have skin in the game and have demonstrated effective techniques for dealing with the problem. Maybe a little more research on the issue would help.