In the age of generative AI, where chatbots can provide detailed answers to questions based on content retrieved from the internet, the lines between fair use and plagiarism, between routine web scraping and unethical summarization, are very blurred.
Perplexity AI is a startup that pairs a search engine with a large language model to generate answers with detailed responses rather than just links. Unlike OpenAI's ChatGPT and Anthropic's Claude, Perplexity doesn't train its own underlying AI models; instead, it uses open or commercially available models to take the information it gathers from the internet and turn it into answers.
But a series of allegations in June suggests the startup's approach borders on the unethical: Forbes accused Perplexity of plagiarizing one of its news articles in the startup's beta “Perplexity Pages” feature, and Wired accused Perplexity of illicitly scraping its website, along with other sites.
Perplexity, which as of April was reportedly raising $250 million at a valuation of nearly $3 billion, denies it has done anything wrong. The Nvidia- and Jeff Bezos-backed company says it honors publishers' requests not to scrape their content and that it operates within the bounds of fair use copyright law.
The situation is complicated. At its heart are nuances surrounding two concepts. The first is the Robots Exclusion Protocol, a standard that websites can use to indicate they don't want their content accessed or used by web crawlers. The second is fair use in copyright law, which establishes the legal framework allowing copyrighted material to be used without permission or payment in certain circumstances.
Secretly scraping web content
Perplexity ignores the Robots Exclusion Protocol to surreptitiously scrape areas of websites that publishers don't want bots to access, according to a June 19 story in Wired. Wired reported that it observed machines tied to Perplexity doing this not only on its own news site, but also on other publications under its parent company, Condé Nast.
The report notes that developer Robb Knight conducted a similar experiment and came to the same conclusion.
Wired's reporter and Knight tested their suspicions by asking Perplexity to summarize a set of URLs, then watching the sites' server logs for an IP address associated with Perplexity visiting them. Perplexity then “summarized” the text of those URLs. The exception was a dummy website with limited content that Wired created for the purpose; in that case, the tool simply returned the page's text verbatim.
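The server-side half of that test boils down to scanning access logs for hits from a suspect IP address. Here is a minimal sketch of that kind of check; the log path and IP address are hypothetical placeholders, not Wired's actual data or tooling:

```python
# Minimal sketch of a server-side log check like the one Wired and Knight
# describe: scan a web server's access log for requests from an IP address
# suspected to belong to a crawler. The IP and log path are hypothetical.
SUSPECT_IPS = {"203.0.113.42"}  # placeholder, not a real Perplexity address

with open("/var/log/nginx/access.log") as log:
    for line in log:
        client_ip = line.split(" ", 1)[0]  # common log format begins with the client IP
        if client_ip in SUSPECT_IPS:
            print("Suspect visit:", line.strip())
```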
This is where the nuances of the Robots Exclusion Protocol become important.
Technically, web scraping is when automated software known as a crawler combs the web to index and collect information from websites. Search engines like Google do this so web pages can be included in their search results. Other companies and researchers use crawlers to gather data from the internet for market analysis, academic research and, as we've learned, training machine learning models.
Web scrapers that comply with the protocol first look for the “robots.txt” file at a site's root to see what is and isn't permitted. What's not permitted today is typically scraping publisher sites to build massive training datasets for AI. Search engines and AI companies, including Perplexity, have stated that they comply with the protocol, but they aren't legally obligated to do so.
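The lookup itself is simple enough to sketch with Python's standard-library parser. A compliant crawler runs a check along these lines before every fetch; the bot name and URLs below are placeholders:

```python
from urllib.robotparser import RobotFileParser

# Minimal sketch of a compliant robots.txt check using Python's standard
# library. "ExampleBot" and example.com are placeholders, not real entities.
parser = RobotFileParser()
parser.set_url("https://www.example.com/robots.txt")
parser.read()  # fetch and parse the site's robots.txt

url = "https://www.example.com/some-article"
if parser.can_fetch("ExampleBot", url):
    print("robots.txt allows ExampleBot to fetch", url)
else:
    print("robots.txt disallows", url, "for ExampleBot")
```

Nothing in HTTP enforces this check: a crawler that skips it will still receive the page, which is why compliance is a matter of convention rather than law.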
Perplexity's chief business officer, Dmitry Shevelenko, told TechCrunch that summarizing a URL isn't the same thing as crawling. “Crawling is just sucking up information and adding it to your index,” Shevelenko said. He noted that Perplexity's IPs show up as visitors to websites that disallow bots in robots.txt only when a user types the URL into a query, which “doesn't meet the definition of crawling.”
“We are simply responding to a direct and specific request from a user to access the URL,” Shevelenko said.
In other words, Perplexity says that when a user manually provides a URL, its AI isn't acting as a web crawler but rather as a tool to help the user retrieve and process the information they requested.
But to Wired and many other publishers, that's a distinction without a difference, because visiting a URL, pulling out its information and summarizing the text looks exactly like scraping when it's done thousands of times a day.
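Part of why the distinction is hard to police is that, from a server's point of view, a user-triggered fetch and a crawl are the same HTTP request. A minimal sketch, with a placeholder URL and user-agent string rather than Perplexity's actual behavior:

```python
import urllib.request

# Whether triggered by an indexing pipeline or by a single user's query,
# the fetch is the same HTTP GET; the server log records only an IP, a
# user-agent string and a path. URL and user agent here are placeholders.
req = urllib.request.Request(
    "https://www.example.com/some-article",
    headers={"User-Agent": "ExampleBot/1.0"},
)
with urllib.request.urlopen(req) as resp:
    html = resp.read().decode("utf-8", errors="replace")
print(f"Fetched {len(html)} characters")
```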
(Wired also reported that Amazon Web Services, one of Perplexity's cloud providers, is investigating the startup for ignoring the robots.txt protocol to scrape web pages that users cite in prompts. AWS told TechCrunch that Wired's report was inaccurate and that it was handling the outlet's inquiry as it would any other report alleging abuse of its services.)
Plagiarism or fair use?
Forbes accused Perplexity of plagiarizing a scoop about former Google CEO Eric Schmidt developing AI-powered combat drones. Image credit: Perplexity / Screenshot
Wired and Forbes have also accused Perplexity of plagiarism. Ironically, Wired says Perplexity plagiarized the very article in which it accused the startup of surreptitiously scraping its web content.
A Wired reporter said that Perplexity's chatbot “produced a six-paragraph, 287-word summary detailing the article's conclusion and the evidence used to reach that conclusion.” One sentence was an exact copy of a sentence from the original article, which Wired deemed plagiarism. The Poynter Institute's guidelines state that an author (or an AI) using seven consecutive words from an original work could be committing plagiarism.
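That seven-consecutive-word rule of thumb is mechanical enough to sketch in a few lines of Python. This is only an illustration of the heuristic, not a legal test, and the sample sentences are invented:

```python
# Illustration of the Poynter-style heuristic: flag any run of seven
# consecutive words that two texts share verbatim. Not a legal test.
def shared_runs(original: str, candidate: str, n: int = 7) -> set[str]:
    def ngrams(text: str) -> set[str]:
        words = text.lower().split()
        return {" ".join(words[i:i + n]) for i in range(len(words) - n + 1)}
    return ngrams(original) & ngrams(candidate)

# Invented example sentences sharing one seven-word run.
a = "The startup was testing AI-powered combat drones for military use"
b = "Sources say the startup was testing AI-powered combat drones last month"
print(shared_runs(a, b))  # {'the startup was testing ai-powered combat drones'}
```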
Forbes also accused Perplexity of plagiarism. In early June, the outlet published an investigative report claiming that former Google CEO Eric Schmidt's new venture is recruiting heavily and testing AI-powered drones for military use. The next day, Forbes editor John Paczkowski posted on X that Perplexity had republished the scoop as part of its beta feature, Perplexity Pages.
Perplexity Pages is a new tool currently available only to certain Perplexity subscribers, which Perplexity promises will help users turn their research findings into “visually compelling, comprehensive content.” Examples of such content on the site, created by the startup's employees, include articles like “A Beginner's Guide to Drums” and “Steve Jobs: A Visionary CEO.”
“They plagiarized most of our reporting,” Paczkowski wrote, “citing us and a few people who reblogged us as sources, in a way that is most easily ignored.”
Forbes reported that many of the posts curated by the Perplexity team were “strikingly similar to original articles from multiple publications, including Forbes, CNBC and Bloomberg.” The posts garnered tens of thousands of views, Forbes said, and none of the publications were named in the article text itself. Rather, Perplexity's articles cited them with “small, easily missed logos” that linked out to them.
Forbes also said the article about Schmidt contained “substantially identical language” to Forbes' scoop. The post even included an image created by Forbes' design team that appears to have been slightly altered by Perplexity.
Perplexity CEO Aravind Srinivas told Forbes at the time that the company would cite sources more prominently going forward, though the solution isn't foolproof, as citations themselves face technical challenges. ChatGPT and other models hallucinate links, and since Perplexity uses OpenAI models, it is likely susceptible to such hallucinations as well. Indeed, Wired reported that it observed Perplexity hallucinating entire stories.
Beyond acknowledging Perplexity's “rough edges,” Srinivas and company have largely asserted Perplexity's right to use such content for summaries.
This is where the nuances of fair use come into play: Plagiarism is bad, but it's not strictly illegal.
According to the U.S. Copyright Office, it is legal to use limited portions of a work, including quotes, for purposes such as commentary, criticism, news reporting and scholarly reports. AI companies like Perplexity argue that providing summaries of articles falls within the bounds of fair use.
“Nobody has a monopoly on the facts,” Shevelenko said. “Once the facts are published, they are available to everyone.”
Shevelenko likened Perplexity summaries to the way journalists often use information from other news sources to bolster their own reporting.
Mark McKenna, a law professor at the UCLA Institute for Technology, Law and Policy, told TechCrunch that the situation isn't an easy one to untangle. In a fair use case, courts would likely weigh whether the summary uses a lot of the original article's expression or just its ideas. They might also examine whether reading the summary serves as a substitute for reading the article.
“There's no clear line,” McKenna said. “Factually stating what an article or report is about is using the non-copyrightable parts of a work, just facts and ideas. But the more actual words and sentences a summary includes, the more it starts to look like a copy rather than just a summary.”
Unfortunately for publishers, that means that unless Perplexity lifts full passages (which it apparently has done in some cases), its summaries may not be found to exceed fair use.
How Perplexity protects itself
AI companies like OpenAI have signed media deals with various news publishers to access current and archived content to train their algorithms, and in return, OpenAI promises to surface news articles from those publishers in response to user queries on ChatGPT (though, as Nieman Lab reported last week, this too has some remaining problems to solve).
Perplexity has held off on announcing media deals of its own, presumably waiting for the criticism against it to die down, but the company says it is “full steam ahead” on advertising revenue-sharing deals with publishers.
Perplexity plans to start placing ads in its query responses, and publishers whose content is cited in an answer will get a cut of the corresponding ad revenue. Shevelenko said Perplexity is also exploring giving publishers access to its technology so they can build Q&A experiences and embed related questions natively within their sites or products.
But is this just a facade for organized IP theft? Perplexity isn't the only chatbot that summarizes content so thoroughly that readers no longer feel the need to click through to the original source material.
And as these AI scrapers continue to steal publishers' work and repurpose it for their own business, publishers will find it harder to earn advertising revenue, which ultimately means there will be less content to scrape. With no content to scrape, generative AI systems will turn to training on synthetic data, which can lead to a vicious cycle of producing biased and inaccurate content.