Count them as ad visits, to make big tech pay for better hardware or a better line?
That opens you up to accusations of click fraud, as AdNauseam found out the hard way, but it's worth it if you can squeeze some cash out of them before that happens.
I mean, scraping bots would obviously obey robots.txt, so those scraping… bots, I mean users, can't be bots.
In my experience with bots, a portion of them obey robots.txt, but it’s tricky to find the user agent string that some bots react to.
So I recommend having a robots.txt that not only targets specific bots, but also tells all bots to avoid specific paths/queries.
Example for DokuWiki:

```
User-agent: *
Noindex: /lib/
Disallow: /_export/
Disallow: /user/
Disallow: /*?do=
Disallow: /*&do=
Disallow: /*?rev=
Disallow: /*&rev=
```
Would it be possible to detect the gptbot (or similar) in their user agent, and serve them different data? Can they detect that?
Yes, you can match on the user agent and then conditionally serve them other stuff (most webservers are fine with this). nepenthes and iocaine are the current preferred/recommended servers for serving them bot mazes.
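To make the idea concrete, here's a minimal sketch of user-agent matching in plain Python (a tiny WSGI app, since the thread doesn't say which webserver anyone uses). The bot name substrings are assumptions for illustration; in practice you'd do this in your nginx/Apache config or hand the request off to nepenthes/iocaine instead:

```python
# Minimal WSGI sketch: serve decoy content to requests whose user agent
# matches known AI-crawler markers. The markers below are assumptions,
# not an authoritative list.
from wsgiref.simple_server import make_server

BOT_MARKERS = ("GPTBot", "CCBot", "ClaudeBot", "Bytespider")

def app(environ, start_response):
    ua = environ.get("HTTP_USER_AGENT", "")
    if any(marker in ua for marker in BOT_MARKERS):
        # A real setup would proxy to a bot maze here instead.
        body = b"<html><body>endless decoy garbage</body></html>"
    else:
        body = b"<html><body>the real page</body></html>"
    start_response("200 OK", [("Content-Type", "text/html")])
    return [body]

if __name__ == "__main__":
    make_server("", 8080, app).serve_forever()
```

Of course, as the next comment points out, this only catches crawlers that identify themselves honestly.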
The thing is that the crawlers will also lie (OpenAI definitely doesn't publish all of its source IPs, I've verified this myself), and will attempt a number of workarounds (like using residential proxies too).
Can they detect that they’re being served different content though?
It’s a constant cat and mouse atm. Every week or so, we get another flood of scraping bots, which force us to triangulate which fucking DC IP range we need to start blocking now. If they ever start using residential proxies, we’re fucked.
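The "block that DC range" step can be done with the stdlib, for what it's worth. A sketch, with made-up ranges (the RFC 5737 documentation blocks), not real scraper networks:

```python
# Check an incoming IP against a blocklist of datacenter CIDR ranges.
# The ranges here are placeholder documentation ranges, not real data.
import ipaddress

BLOCKED_RANGES = [
    ipaddress.ip_network("203.0.113.0/24"),
    ipaddress.ip_network("198.51.100.0/24"),
]

def is_blocked(addr: str) -> bool:
    ip = ipaddress.ip_address(addr)
    return any(ip in net for net in BLOCKED_RANGES)
```

This whole approach is exactly what falls apart once they switch to residential proxies, since there's no clean CIDR range left to block.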
I have a tiny neocities website which gets thousands of views a day, there is no way that anyone is viewing it often enough for that to be organic.
quickly, add some ad revenue :P
From AI vendors. Let them pay you for scraping you lol
At least OpenAI, and probably others, currently use commercial residential proxying services, though reputedly only if you make it obvious you're blocking their scrapers, presumably as an attempt on their end to limit operating costs.
They have a botnet on residential devices?
the term of art is “residential proxy” and there’s a ton of them
For example: it's the flipside of Bright's free VPN service. Through Bright Data, they sell access proxied via those users' connections.
And companies like honey that pay you (a pittance) to proxy people’s requests to porn sites.
Oh, never heard of that. I've blocked their scrapers via user agents, but I haven't felt residential proxy pain.
here’s a mastodon post and linked blog post with some details on what currently sets it off
PS: Looks like that sync issue between our instances is resolved now?
Daym, I should set me up some iocaine as well, I think.