Ah. I see the war has begun.
Soon: US Republicans introduce law to prohibit the use of AI tar pits; cite copyright law and freedom of speech.
They will cite the Magna Carta and Articles of Confederation.
“The Bible says…”
Post needs accessibility.
Previous posts on same topic
- Nepenthes: a dangerous tarpit to trap LLM crawlers – OSnews
- Nepenthes - A tarpit to catch AI web crawlers
- Developer Creates Infinite Maze That Traps AI Training Bots
- AI haters build tarpits to trap and trick AI scrapers that ignore robots.txt
- Finally setup Nepenthes and caught my first bot
That’s excluding all the posts like OP’s lacking searchable text due to inaccessibility.
It’s weird how this is written to make them sound more like animals or insects than a computer algorithm… “thrash around” lol
I understood it as “thrashing” in the computer-science sense. When RAM fills up and the kernel has to swap aggressively, that’s called thrashing.
It’s because the tool is named Nepenthes, after the pitcher plants, into which victims fall, cannot escape, and thrash around until they die and are digested.
Rage Against the Machine.
How can we help
Kyle Hill has an amazing video on how…
Let’s go!
Hell yeah, that rules
Cyberpunk as fuck.
Nowhere in the article or start of the readme did I find how this works. How does it differentiate between a human visitor and a scraper?
Probably invisible links. A human would stop clicking too once they see garbage.
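A toy sketch of the invisible-link idea (the `/trap` endpoint and markup here are hypothetical, not Nepenthes' actual implementation): links hidden by CSS never get clicked by humans, but a naive crawler that only parses the HTML will follow them ever deeper.

```python
# Sketch only: pages whose sole outgoing links are CSS-hidden, so a human sees
# a dead end while a naive HTML-parsing crawler follows them deeper forever.
def tarpit_page(depth: int) -> str:
    """Render a page whose only links are invisible trap links one level deeper."""
    hidden_links = "".join(
        f'<a href="/trap/{depth + 1}/{i}" style="display:none">.</a>'
        for i in range(5)
    )
    return f"<html><body><p>Nothing to see here.</p>{hidden_links}</body></html>"

page = tarpit_page(0)
print("/trap/1/0" in page)  # every link points one level deeper into the trap
```

Since each page links to five more, a crawler that follows everything faces exponential blowup, while a human never sees anything to click.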
Hell Yeah!
I am so confused by the linked article lol.
- “AI haters build tarpits to trap and trick AI (!)” - Oh my god, poor AI :<
- “…that ignore robots.txt!” - …oh, so illegal AI…?
- “Attackers explain-” - YEAH! THE EVIL AGGRESSORS
- “how anti-spam defense became an AI weapon” - …folk trying to defend themselves from spam…?
FFS, they try to paint people protecting themselves as evil, but they keep too many of the facts in, and it becomes an absolutely confusing mess xD
It’s not really that confusing.
The software equivalent of armed masked men are illegally breaking in to your personal property, stealing everything that isn’t nailed down and ripping all the nails out of everything that is, and then leaving with it in order to reuse it for personal profit. It is, in all ways, similar to a home invasion. These invaders are then telling you that you’re a bad person because you don’t want them invading your property and stealing all your shit.
It’s highly illegal, everyone involved with it knows for a fact that it’s highly illegal, so the best they can do is try and spin propaganda around it, because nobody has the balls to try and arrest Sam Altman et al. for it.
If you pick the lock on my front door and enter my home without permission I am going to put a 12 gauge slug through your solar plexus. If I could do the same to an AI crawler I would.
There is a way to stop the AI crawlers, but it involves using dynamite and a high risk of landing in prison.
This is a terrible analogy.
First off, robots.txt has no force of law. It’s just a courtesy. You are free to ignore it (except where prohibited by EULA or contract).
Secondly, this is more similar to a supermarket hanging a sign saying you can only access 3 of their 11 aisles.
What this does is: if you try to access the 8 aisles they asked you not to use, you have to solve a math problem or two.
Ai scrapers are obnoxious loud drunk people who take way more than their fair share.
If you truly have something private (like your house) you should not expose it publicly on the internet.
“Beware of dog” signs also have no force of law.
But apart from that, if your crawler ends up stuck in an endless loop, that’s poor coding on your part. Human beings won’t browse a static website endlessly, and neither should a crawler.
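The standard guard against endless loops is a visited set plus a depth cap; a rough sketch (not any real crawler's code, and `get_links` here is a stand-in for actual fetching and parsing):

```python
from collections import deque

def crawl(start_url, get_links, max_depth=3):
    """Breadth-first crawl that can't loop forever: a visited set skips
    URLs we've already seen, and a depth cap bounds how far we follow."""
    visited = set()
    queue = deque([(start_url, 0)])
    while queue:
        url, depth = queue.popleft()
        if url in visited or depth > max_depth:
            continue
        visited.add(url)
        for link in get_links(url):
            queue.append((link, depth + 1))
    return visited

# A two-page site linking in a cycle: the crawl terminates anyway.
site = {"/a": ["/b"], "/b": ["/a"]}
print(sorted(crawl("/a", lambda u: site[u])))  # ['/a', '/b']
```

Note the depth cap also bounds the damage from a tarpit that generates infinitely many *unique* URLs, which a visited set alone would not catch.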
Well, let’s turn this situation around then and see how it changes.
I hammer Meta’s backend services with 6.8m requests per second, ignoring all posted guidelines, absorbing all the data I can get my hands on from them and feeding it to my machine which is busy trying to build BaseFook based on Meta’s data that I’ve harvested from them.
Criminal DDOS? What’s that?
Copyright law? Surely this doesn’t apply to this.
Unauthorized access to backend systems? Nah, we’ll be fine, that’s definitely legal.
…
It is currently true that robots.txt has no legal teeth and relies on voluntary compliance, but there have been court cases involving it in the past, and in my opinion they should have resulted in established legal precedent. Check these out (courtesy of Wikipedia):
The robots.txt played a role in the 1999 legal case of eBay v. Bidder’s Edge,[12] where eBay attempted to block a bot that did not comply with robots.txt, and in May 2000 a court ordered the company operating the bot to stop crawling eBay’s servers using any automatic means, by legal injunction on the basis of trespassing.[13][14][12] Bidder’s Edge appealed the ruling, but agreed in March 2001 to drop the appeal, pay an undisclosed amount to eBay, and stop accessing eBay’s auction information.[15][16]
In 2007 Healthcare Advocates v. Harding, a company was sued for accessing protected web pages archived via The Wayback Machine, despite robots.txt rules excluding those pages from the archive. A Pennsylvania court ruled “in this situation, the robots.txt file qualifies as a technological measure” under the DMCA. Due to a malfunction at Internet Archive, Harding could temporarily access these pages from the archive, and thus the court found “the Harding firm did not circumvent the protective measure”.[17][18][19]
In 2013 Associated Press v. Meltwater U.S. Holdings, Inc. the Associated Press sued Meltwater for copyright infringement and misappropriation over copying of AP news items. Meltwater claimed that they did not require a license and that it was fair use, because the content was freely available and not protected by robots.txt. The court decided in March 2013 that “Meltwater’s copying is not protected by the fair use doctrine”, mentioning among several factors that “failure […] to employ the robots.txt protocol did not give Meltwater […] license to copy and publish AP content”.[20]
The critical difference that determines whether or not it’s illegal is how many lawyers the site owner has.
And also, it’s not stealing but unauthorized copying.
deleted by creator
More like clogging the entry to your exhibition for making copies of your licensed produce, no?

Hell yes! This is exactly what I have wanted to happen.
Seems like these traps would be trivially easy to defeat. I should get off my ass and run one, see how it goes.
deleted by creator
Agree. This is another revenge fantasy from people that think the idea is great, without understanding that the implementation part is where it’s gonna break down.
Yeah, much like the thorn, LLMs are more than capable of recognizing when they’re being fed Markov gibberish. Try it yourself. I asked one to summarize a bunch of keyboard auto complete junk.
The provided text appears to be incoherent, resembling a string of predictive text auto-complete suggestions or a corrupted speech-to-text transcription. Because it lacks a logical grammatical structure or a clear narrative, it cannot be summarized in the traditional sense.
I’ve tried the same with posts with the thorn in it and it’ll explain that the person writing the post is being cheeky - and still successfully summarizes the information. These aren’t real techniques for LLM poisoning.
this is for poisoning the training data, not the input into the generative model
An AI crawler is both. It extracts useful information from websites using LLMs in order to create higher quality data for training data. They’re also used for RAG.
Büt whāt æbœùt typīñg lîke thìß?
Appreciate you using the ß correctly instead of using it as a replacement for “B”
But this only has one s. :)
I know, but you can just read it as a long s as if they’re talking funny lol
I read this like static over a radio
You didn’t use the thorn!
I believe someone else on Lemmy has claimed that character as their own.
Doesn’t work either
The text you provided translates to:
“But what about typing like this?”. This style of writing involves replacing standard Latin letters with similar-looking characters from other alphabets or adding diacritical marks (accents, tildes, umlauts) available in the Unicode standard.🤔❓🧑💻⌨️👉📝❓
Even if the LLM doesn’t recognise it, the human ghost workers will train/translate it
You’re only hindering people who have trouble reading
Here is a demo for anyone interested. It’s deliberately slow to load.
It’s deliberately slow to load
That kinda defeats the goal of feeding AI as much garbage as possible. They will just fetch a page from a different site in that time, instead of spending cycles on this page. It’s not like the crawler works strictly serially.
The idea is to protect your own server from unnecessary load. You’re welcome to provide a faster AI tar pit; just mind that ultimately this is a waste of resources.
I’m guessing that Markov chains are pretty efficient computationally compared to AI training. Don’t have a site currently, but I’d love to see a bot rip through hundreds of pages a minute.
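Markov chains really are cheap compared to running a model: generating the next word is one dictionary lookup plus one random choice, no matrix math at all. A toy word-level sketch (the training text is illustrative, not what Nepenthes uses):

```python
import random
from collections import defaultdict

def build_chain(text):
    """Map each word to the list of words that ever follow it in the text."""
    words = text.split()
    chain = defaultdict(list)
    for a, b in zip(words, words[1:]):
        chain[a].append(b)
    return chain

def babble(chain, start, n=10, seed=0):
    """Walk the chain: each step is a dict lookup and a random.choice."""
    rng = random.Random(seed)
    out = [start]
    for _ in range(n):
        followers = chain.get(out[-1])
        if not followers:  # dead end: the word never had a successor
            break
        out.append(rng.choice(followers))
    return " ".join(out)

chain = build_chain("the cat sat on the mat and the cat ran")
print(babble(chain, "the"))
```

Fed a real corpus, this produces locally plausible but globally meaningless text at essentially the speed of string concatenation, which is why a tarpit can serve it by the page without breaking a sweat.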