On Saturday, Triplegangers CEO Oleksandr Tomchuk received a call that his company's e-commerce site was down. It looked like some kind of distributed denial-of-service attack.
He soon discovered that the culprit was an OpenAI bot that was relentlessly trying to scrape his entire massive site.
“We have over 65,000 products, and each product has a page,” Tomchuk told TechCrunch. “Each page contains at least three photos.”
OpenAI was sending “tens of thousands” of server requests trying to download all of it: hundreds of thousands of photos, along with their detailed descriptions.
“OpenAI used 600 IPs for data collection, and we're still analyzing last week's logs, so it's probably much more than that,” he said of the IP addresses the bot used.
“Their crawlers were crushing our site. It was basically a DDoS attack,” he said.
The Triplegangers website is its business. The seven-employee company has spent more than a decade building the web's largest database of “human digital doubles”: 3D image files scanned from real human models.
It sells those 3D object files, as well as photos, from hands to hair to skin to full bodies, to 3D artists, video game makers, and anyone else who needs to digitally recreate authentic human features.
Tomchuk's team is based in Ukraine, but the company is also licensed in the U.S. out of Tampa, Florida, and its site's terms of service prohibit bots from taking its images without permission. That alone did nothing: a website must use a properly configured robots.txt file with tags telling OpenAI's bot, GPTBot, to leave the site alone. (OpenAI also has a couple of other bots with their own tags, ChatGPT-User and OAI-SearchBot, according to its crawler information page.)
Robots.txt, also known as the Robots Exclusion Protocol, was created to tell search engines what not to crawl as they index the web. OpenAI says on its information page that it honors such files when they are configured with its own set of do-not-crawl tags, though it warns that its bots can take up to 24 hours to recognize an updated robots.txt file.
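For illustration, a minimal robots.txt along these lines might look like the sketch below. The User-agent tokens are the ones OpenAI names on its crawler page, and “Disallow: /” tells a compliant crawler to fetch nothing; whether a given bot honors it is, again, up to the bot.

```
# Ask all of OpenAI's documented crawlers to stay off the entire site.
User-agent: GPTBot
Disallow: /

User-agent: ChatGPT-User
Disallow: /

User-agent: OAI-SearchBot
Disallow: /
```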
If a site doesn't use robots.txt properly, as Tomchuk experienced, OpenAI and others take that to mean they can scrape to their heart's content. It is not an opt-in system.
To add insult to injury, not only was Triplegangers knocked offline by OpenAI's bot during U.S. business hours, but Tomchuk expects a steep AWS bill thanks to all of the bot's CPU usage and download activity.
Robots.txt isn't failsafe, either. Compliance by AI companies is voluntary. Another AI startup, Perplexity, was rather famously called out last summer by a Wired investigation, which found evidence suggesting Perplexity wasn't respecting it.
Each of these is a single product, and each product's page contains more photos. Image credit: Triplegangers (used with permission).
No way to know exactly what was taken
By Wednesday, a few days after OpenAI's bot returned, Triplegangers had a properly configured robots.txt file in place to block GPTBot, along with Barkrowler (an SEO crawler) and Bytespider (TikTok's crawler). Tomchuk is also hopeful that he has blocked crawlers from other AI model companies. The site did not crash Thursday morning, he said.
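Since robots.txt compliance is voluntary, a site can also refuse known crawler user agents at the server itself. Below is a minimal sketch as Python WSGI middleware, using the bot names from this story; it's illustrative, not Triplegangers' actual setup, and a crawler that spoofs its user agent will slip past it.

```python
# Refuse requests whose User-Agent header names a known crawler.
# Illustrative only: a bot can spoof its user agent, so this stops
# honest crawlers, not determined ones.
BLOCKED_AGENTS = ("GPTBot", "Barkrowler", "Bytespider")

def block_crawlers(app):
    def middleware(environ, start_response):
        ua = environ.get("HTTP_USER_AGENT", "")
        if any(bot in ua for bot in BLOCKED_AGENTS):
            start_response("403 Forbidden", [("Content-Type", "text/plain")])
            return [b"Crawling this site is not permitted.\n"]
        return app(environ, start_response)
    return middleware
```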
But Tomchuk still has no reasonable way to find out exactly what material OpenAI succeeded in taking, or to get that material removed. He hasn't found a way to contact OpenAI and ask, and OpenAI did not respond to TechCrunch's request for comment. And, as TechCrunch recently reported, OpenAI has so far failed to deliver its long-promised opt-out tools.
This is a particularly thorny problem for Triplegangers. “We're in a business where rights are a pretty serious issue, because we scan real people,” he said. Under laws like Europe's GDPR, “they can't just take a photo of anyone on the web and use it.”
The Triplegangers website was also an especially tasty find for AI crawlers. Multibillion-dollar-valued startups like Scale AI have been built on humans painstakingly tagging images to train AI, and the Triplegangers site features photos tagged in detail: ethnicity, age, tattoos and scars, all body types.
Ironically, it was the OpenAI bot's greed that alerted Triplegangers to how exposed it was. Had it scraped more gently, Tomchuk would never have known, he said.
“It's scary, because there seems to be a loophole these companies are using to crawl your data by saying ‘you can opt out by updating your robots.txt with our tag,’ but it puts the onus on the business owner to understand how to block them,” Tomchuk said.
Triplegangers' server logs showed how the OpenAI bot was relentlessly accessing the site from hundreds of IP addresses. Used with permission.
He wants other small online businesses to know that the only way to discover if an AI bot is taking a website's copyrighted material is to actively investigate. He is not alone in that fear: other website owners recently told Business Insider how OpenAI bots crashed their sites and ran up their AWS bills.
The problem only grew in 2024. A new study from digital advertising firm DoubleVerify found that AI crawlers and scrapers caused an 86% increase in “general invalid traffic” in 2024, that is, traffic that doesn't come from a real user.
Still, “most sites remain unaware that they have been scraped by these bots,” Tomchuk warns. “Going forward, log activity should be monitored daily to identify these bots.”
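As a starting point, the sketch below tallies requests per user agent and per client IP from a server log. It assumes the common Apache/nginx “combined” log format and a file named access.log, both assumptions for illustration; a crawler hammering a site from hundreds of IPs would surface quickly in output like this.

```python
# Count requests per user agent and per client IP in a "combined"-format
# access log, so unusually heavy crawlers stand out.
import re
from collections import Counter

# ip - user [date] "request" status size "referer" "user-agent"
LOG_LINE = re.compile(
    r'^(?P<ip>\S+) \S+ \S+ \[[^\]]+\] "[^"]*" \d{3} \S+ "[^"]*" "(?P<ua>[^"]*)"'
)

ips, agents = Counter(), Counter()
with open("access.log", encoding="utf-8", errors="replace") as f:
    for line in f:
        match = LOG_LINE.match(line)
        if match:
            ips[match.group("ip")] += 1
            agents[match.group("ua")] += 1

print("Top user agents:", agents.most_common(5))
print("Top client IPs:", ips.most_common(5))
```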
If you think about it, the whole model works a bit like a mafia shakedown: the AI bots will take what they want unless you protect yourself.
“They should be asking for permission, not just scraping data,” Tomchuk says.