> And this specifically targets AI training web crawlers.
There’s no way to distinguish an AI training crawler from any other crawler. Per https://zadzmo.org/code/nepenthes/:
> “This is a tarpit intended to catch web crawlers. Specifically, it’s targetting crawlers that scrape data for LLM’s - but really, like the plants it is named after, *it’ll eat just about anything that finds it’s way inside*.”
Emphasis mine. Even the person who coded this thing knows that it can’t tell what a given crawler’s purpose is. They’re just willing to throw the baby out with the bathwater in this case, and mess with legitimate crawlers in order to bog down the ones gathering data for LLM training.
(In general, there is no way to tell for certain what is requesting a webpage. The User-Agent header that (usually) arrives with an HTTP(S) request isn’t regulated and can contain any arbitrary string. Crawlers habitually claim to be old versions of Firefox, and there isn’t much the server can do to identify what they actually are.)
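To make the point concrete, here's a minimal sketch (using Python's standard `urllib`; the URL and UA string are just placeholders) of how trivially a client sets an arbitrary User-Agent:

```python
import urllib.request

# The User-Agent header is an arbitrary, self-reported string chosen by
# the client. Here a hypothetical crawler claims to be an old Firefox.
SPOOFED_UA = "Mozilla/5.0 (Windows NT 6.1; rv:52.0) Gecko/20100101 Firefox/52.0"

# Placeholder URL; the request is only constructed, never actually sent.
req = urllib.request.Request(
    "https://example.com/",
    headers={"User-Agent": SPOOFED_UA},
)

# This self-reported string is all the server ever sees; nothing in HTTP
# verifies that the client really is Firefox 52.
print(req.get_header("User-agent"))
```

Nothing in the protocol authenticates that string, which is why server-side crawler detection ends up relying on heuristics like request rate or IP reputation instead.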
Well, yeah, but obeying robots.txt is only a courtesy in the first place, so you can’t guarantee it’ll catch only LLM-related crawlers and no others, although it may lower the false positive rate.
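The "courtesy" point can be shown with Python's standard `urllib.robotparser` (the rules and bot name here are hypothetical): a polite crawler checks robots.txt before fetching, but a rude one simply never performs this step.

```python
import urllib.robotparser

# Hypothetical robots.txt that fences off a tarpit path for compliant bots.
ROBOTS_TXT = """\
User-agent: *
Disallow: /nepenthes/
"""

rp = urllib.robotparser.RobotFileParser()
rp.parse(ROBOTS_TXT.splitlines())

# A well-behaved crawler asks first and stays out of the tarpit...
print(rp.can_fetch("PoliteBot/1.0", "https://example.com/nepenthes/page1"))
# ...and proceeds only where it is allowed.
print(rp.can_fetch("PoliteBot/1.0", "https://example.com/index.html"))
```

The `can_fetch` calls return `False` and `True` respectively, but only because the client volunteered to ask; a crawler that skips the check hits the tarpit regardless of what robots.txt says.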