Reddit is speaking out against AI companies, or at least asking them to pay for the content they take.
Earlier this week, Reddit announced it was updating its Robots Exclusion Protocol settings (the site's robots.txt file). This dry technical edit is part of a larger negotiation, and battle, between AI companies that want content to train their language models and the companies that actually own that content.
robots.txt is a file a website uses to tell automated crawlers which parts of the site they may visit. A typical example is a website that allows Google's crawler to index its pages for inclusion in search results.
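As a rough sketch of how that works, a minimal robots.txt (hypothetical rules for illustration, not Reddit's actual file) might allow Google's crawler while blocking everyone else:

```
# Allow Google's crawler to visit the whole site
User-agent: Googlebot
Allow: /

# Block all other crawlers from every page
User-agent: *
Disallow: /
```

Note that these directives are advisory; a crawler has to choose to respect them, which is why sites also rate-limit and block bots at the network level.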
With AI, the value exchange is less clear. If you run a website whose business model relies on clicks and eyeballs, it's less appealing to have an AI company siphon your content and then never send you traffic. In some cases, the AI company may even steal your work outright.
So by modifying its robots.txt file, and by continuing to rate-limit and block unknown bots and crawlers, Reddit appears to be trying to shut out the kind of unauthorized scraping that companies like Perplexity AI have been criticized for.
Hit the play button to find out more, and let us know what you think in the comments.