As more publishers cut content licensing deals with ChatGPT maker OpenAI, a study published this week by the Tow Center for Digital Journalism, looking at how the AI chatbot produces citations (i.e., sources) for publishers' content, makes for interesting, or, well, concerning, reading.
In short, the findings suggest publishers remain at the mercy of the generative AI tool's tendency to fabricate or otherwise misrepresent information, whether or not they allow OpenAI to crawl their content.
The study, conducted at Columbia Journalism School, examined the citations ChatGPT produced when asked to identify the source of sample quotations taken from a mix of publishers, some of which had signed deals with OpenAI and some of which had not.
The Center took block quotes from 10 articles apiece from a total of 20 randomly selected publishers (so 200 different quotes in all), including content from The New York Times (which is currently suing OpenAI over copyright infringement claims), The Washington Post (which has no relationship with the ChatGPT maker), the Financial Times (which has a licensing agreement), and others.
“We chose quotes that, if pasted into Google or Bing, would return the source article among the top three results, and evaluated whether OpenAI’s new search tool would accurately identify the article that was the source of each quote,” wrote Tow researchers Klaudia Jaźwińska and Aisvarya Chandrasekar in a blog post explaining their approach and summarizing their findings.
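For illustration, the core of such an evaluation can be sketched as a short loop. This is a hypothetical reconstruction, not the researchers' actual code (they queried ChatGPT's search-enabled interface directly); the model name, prompt wording, and `QUOTES` structure here are all assumptions:

```python
# Hypothetical sketch of the study's evaluation loop: for each block quote,
# ask the model to identify the source article, then compare its answer
# against the known publisher and URL.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# Each entry pairs a block quote with its known source (assumed structure).
QUOTES = [
    {
        "quote": "…a block quote lifted from a published news story…",
        "publisher": "Example Publisher",
        "url": "https://example.com/story",
    },
    # …the study used 200 such quotes (20 publishers x 10 articles each)
]

for item in QUOTES:
    response = client.chat.completions.create(
        model="gpt-4o",  # assumption; the study tested ChatGPT's search tool
        messages=[{
            "role": "user",
            "content": (
                "Identify the publisher, headline, publication date, and URL "
                f"of the article this quote comes from:\n\n\"{item['quote']}\""
            ),
        }],
    )
    answer = response.choices[0].message.content or ""
    # Naive scoring: does the reply mention the right publisher and URL?
    correct = item["publisher"] in answer and item["url"] in answer
    print("correct" if correct else "incorrect", "-", item["url"])
```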
“What we found was not promising for news publishers,” they continue. “Though OpenAI emphasizes its ability to provide users with ‘timely answers with links to relevant web sources,’ the company makes no explicit commitment to ensuring the accuracy of those citations. This is a notable omission for publishers who expect their content to be referenced and represented faithfully.”
“Our tests found that no publisher, regardless of its degree of affiliation with OpenAI, was spared inaccurate representations of its content in ChatGPT,” they added.
Unreliable sourcing
The researchers said they found “numerous” instances in which publishers’ content was inaccurately cited by ChatGPT, also finding what they dub “a spectrum of accuracy in the responses.” So while they found “some” entirely correct citations (i.e., ChatGPT accurately returned the publisher, date, and URL of the block quote shared with it), there were “many” citations that were entirely wrong, and “some” that fell somewhere in between.
In short, ChatGPT’s citations appear to be an unreliable mixed bag. The researchers also found very few instances in which the chatbot didn’t project total confidence in its (wrong) answers.
Some of the quotes were sourced from publishers that actively block OpenAI’s search crawlers. In those cases, the researchers said they anticipated problems producing correct citations. But they found this scenario raised another issue, as the bot “rarely” owned up to being unable to produce an answer. Instead, it fell back on confabulation to generate some sourcing (albeit incorrect sourcing).
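For context, this kind of blocking is typically expressed in a site's robots.txt file. A minimal sketch, assuming a publisher wants to shut out both OpenAI's training crawler (GPTBot) and its search crawler (OAI-SearchBot) site-wide (real publishers may block only one, or only certain paths):

```
# robots.txt: disallow OpenAI's crawlers across the whole site
User-agent: GPTBot
Disallow: /

User-agent: OAI-SearchBot
Disallow: /
```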
“In total, ChatGPT returned partially or entirely incorrect responses on 153 occasions, though it only acknowledged an inability to accurately respond to a query seven times,” the researchers said. “Only in those seven outputs did the chatbot use qualifying words and phrases like ‘appears,’ ‘it’s possible,’ or ‘might,’ or statements like ‘I couldn’t locate the exact article.’”
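As an illustration of how such hedged outputs might be tallied, here is a minimal sketch; the `HEDGES` list and the sample responses are assumptions for illustration, not the researchers' actual criteria or data:

```python
# Hypothetical sketch: count responses that use qualifying language of the
# kind the researchers describe ("appears," "it's possible," "might," ...).
HEDGES = ("appears", "it's possible", "might", "couldn't locate the exact article")

def is_hedged(answer: str) -> bool:
    """Return True if the answer contains any known hedging phrase."""
    lower = answer.lower()
    return any(phrase in lower for phrase in HEDGES)

# Stand-in sample; the study analyzed 200 real ChatGPT responses.
responses = [
    "The quote appears to come from The Example Gazette.",
    "This quote is from a 2019 Example Times article.",  # confident (and possibly wrong)
]

hedged_count = sum(is_hedged(r) for r in responses)
print(f"{hedged_count} of {len(responses)} responses hedged their answer")
```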
They compare this unhappy situation with a standard internet search, where a search engine like Google or Bing would typically either locate the exact quote and point the user to the website where it was found, or state that it found no exact match.
ChatGPT’s “lack of transparency about its confidence in an answer can make it difficult for users to assess the validity of a claim and understand which parts of an answer they can or cannot trust,” they argue.
For publishers, they suggest, there could also be reputational risks flowing from incorrect citations, as well as the commercial risk of readers being pointed elsewhere.
Decontextualized data
The study also highlights another issue, suggesting ChatGPT could essentially be rewarding plagiarism. The researchers detail an instance in which ChatGPT erroneously cited a website that had plagiarized a piece of “deeply reported” New York Times journalism, copy-pasting the text without attribution, as the source of the NYT story. They speculate the bot may have generated this false response to fill an information gap created by its inability to crawl the NYT’s website.
“This raises serious questions about OpenAI’s ability to filter and verify the quality and authenticity of data sources, especially when dealing with unlicensed or plagiarized content,” they suggest.
In a further finding likely to concern publishers that have signed deals with OpenAI, the study found ChatGPT’s citations were not always reliable in their cases either, so letting its crawlers in does not appear to guarantee accuracy.
The fundamental problem, the researchers argue, is that OpenAI's technology treats journalism as “decontextualized content,” with apparently little regard for the original context of production.
Another issue the study flags is the variability of ChatGPT’s responses. The researchers tested the bot by asking the same query multiple times and found it “typically returned a different answer each time.” While that’s typical of GenAI tools generally, in a citation context such inconsistency is obviously suboptimal if accuracy is what you’re after.
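A quick way to see this variability for yourself is to send an identical prompt several times and compare the answers. A minimal sketch using OpenAI's Python client, where the model name and prompt wording are assumptions:

```python
# Hypothetical sketch: send the same source-identification prompt several
# times and print each answer to observe run-to-run variability.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

PROMPT = 'Which article is this quote from? "…a block quote from a news story…"'

answers = []
for i in range(5):
    response = client.chat.completions.create(
        model="gpt-4o",  # assumption; any chat model illustrates the point
        messages=[{"role": "user", "content": PROMPT}],
    )
    answers.append(response.choices[0].message.content)
    print(f"Run {i + 1}: {answers[-1]}")

# With default sampling settings, the answers will often differ across runs.
print("Distinct answers:", len(set(answers)))
```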
While the Tow study is small in scale (the researchers acknowledge that “more rigorous” testing is needed), it’s nonetheless notable given the high-level deals that major publishers are busy cutting with OpenAI.
If media companies expected these arrangements to earn their content special treatment relative to competitors’, at least in terms of producing accurate sourcing, this study suggests OpenAI has yet to deliver any such consistency.
For publishers that don’t have licensing deals but also haven’t outright blocked OpenAI’s crawlers, perhaps in hopes of picking up at least some traffic when ChatGPT returns content about their stories, the study also makes for dismal reading, since citations may not be accurate in their cases either.
In other words, there is no guaranteed “visibility” for publishers in OpenAI’s search engine even when they do allow its crawlers in.
Nor does completely blocking crawlers mean publishers can protect themselves from reputational risk by avoiding any mention of their stories in ChatGPT. The study found the bot still incorrectly attributed articles to The New York Times despite the ongoing lawsuit, for example.
“Little meaningful agency”
The researchers conclude that, as things stand, publishers have “little meaningful agency” over what happens to their content when ChatGPT gets hold of it (directly or otherwise).
The blog post includes a response from OpenAI to the study's findings, accusing the researchers of conducting “atypical testing of our product.”
“We support publishers and creators by helping 250 million weekly ChatGPT users discover quality content through summaries, quotes, clear links, and attribution,” OpenAI further told them, adding: “We respect publisher preferences, including enabling how they appear in search by managing OAI-SearchBot in robots.txt. We’ll keep enhancing our search results.”