Hallucinations (basically, the lies generative AI models tell) are a huge problem for companies looking to integrate the technology into their operations.
Because models have no real intelligence and are simply predicting words, images, speech, music and other data according to a private schema, they sometimes get it wrong. Badly wrong. In a recent piece in The Wall Street Journal, a source recounts an instance in which Microsoft's generative AI invented meeting attendees and implied that calls were about subjects that weren't actually discussed on the call.
As I wrote a while ago, hallucinations may be a problem that can't be solved with today's transformer-based model architectures. But a number of generative AI vendors suggest that hallucinations can be done away with, more or less, through a technical approach called retrieval augmented generation (RAG).
One vendor, Squirro, pitches it this way:
At the core of the offering is the concept of Retrieval Augmented LLMs, or Retrieval Augmented Generation (RAG), embedded in the solution. [Our generative AI] is unique in its promise of zero hallucinations. Every piece of information it generates is traceable to a source, ensuring credibility.
Here's a similar pitch from SiftHub:
Using RAG technology and fine-tuned large language models with industry-specific knowledge training, SiftHub allows companies to generate personalized responses with zero hallucinations. This guarantees increased transparency and decreased risk and inspires absolute trust in using AI for all their needs.
RAG was pioneered by data scientist Patrick Lewis, a researcher at Meta and University College London and lead author of the 2020 paper that coined the term. Applied to a model, RAG uses what is essentially a keyword search to retrieve documents that may be relevant to a question (such as a Wikipedia page about the Super Bowl) and then asks the model to generate its answer given this additional context.
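To make that retrieve-then-generate loop concrete, here is a minimal sketch in Python. The document store, the crude keyword scorer and the call_model helper are all placeholders invented for illustration, not any vendor's actual system; a real deployment would use a proper search index and an LLM API.

```python
# Minimal RAG sketch: retrieve documents by keyword overlap, then prepend
# them to the prompt as extra context for the model. DOCUMENTS and
# call_model are illustrative placeholders, not a real API.

DOCUMENTS = {
    "super_bowl_lviii": "The Kansas City Chiefs won Super Bowl LVIII in February 2024.",
    "world_cup_2022": "Argentina won the 2022 FIFA World Cup in Qatar.",
}

def keyword_score(query: str, text: str) -> int:
    """Count how many query words appear in the document (a crude keyword search)."""
    return len(set(query.lower().split()) & set(text.lower().split()))

def retrieve(query: str, k: int = 1) -> list[str]:
    """Return the k documents with the highest keyword overlap with the query."""
    ranked = sorted(DOCUMENTS.values(), key=lambda d: keyword_score(query, d), reverse=True)
    return ranked[:k]

def call_model(prompt: str) -> str:
    """Placeholder for a call to a generative model of your choice."""
    raise NotImplementedError("Wire this up to a real LLM API.")

def answer(query: str) -> str:
    # Build a prompt that asks the model to answer using the retrieved context.
    context = "\n".join(retrieve(query))
    prompt = (
        "Answer the question using only the context below.\n\n"
        f"Context:\n{context}\n\nQuestion: {query}\nAnswer:"
    )
    return call_model(prompt)
```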
“When you're interacting with a generative AI model like ChatGPT or Llama and you ask a question, the default is for the model to answer from its ‘parametric memory,’ that is, from the knowledge stored in its parameters as a result of training on massive amounts of data from the web,” explained David Wadden, a research scientist at AI2, the AI-focused research division of the nonprofit Allen Institute. “But, just as you're likely to give more accurate answers if you have a reference [like a book or a file] in front of you, the same is true in some cases for models.”
RAG is undeniably useful. It allows you to attribute what a model generates to retrieved documents and verify their factuality (and, as an added benefit, avoid potentially copyright-infringing regurgitation). RAG also lets companies that don't want their documents used to train a model (for example, companies in highly regulated industries such as healthcare or law) allow models to draw on those documents in a more secure and temporary way.
But RAG certainly can't stop the model from hallucinating. And there are also limitations that many vendors ignore.
Wadden says RAG is most effective in “knowledge-intensive” scenarios where a user wants to use the model to address an “information need,” such as finding out who won last year's Super Bowl. In these scenarios, the document that answers the question is likely to contain many of the same keywords as the question (e.g., “Super Bowl,” “last year”), making it relatively easy to find via keyword search.
Things get trickier with “reasoning-intensive” tasks such as coding and math, where it's harder to specify in a keyword-based search query the concepts needed to answer a request, much less identify which documents might be relevant.
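A toy comparison illustrates the gap. Using the same kind of word-overlap scoring as the sketch above (a stand-in for keyword search, not any particular retriever), a factoid query shares plenty of terms with the document that answers it, while a reasoning-style query may share none with the document that would actually help. The example queries and documents below are made up for illustration.

```python
# Illustrative only: keyword overlap works for a factoid question but finds
# nothing to match on for a reasoning-style question.
import string

def words(text: str) -> set[str]:
    """Lowercase, strip punctuation, and split into a set of words."""
    return set(text.lower().translate(str.maketrans("", "", string.punctuation)).split())

def keyword_overlap(query: str, doc: str) -> set[str]:
    return words(query) & words(doc)

factoid_query = "Who won the Super Bowl last year?"
factoid_doc = "The Kansas City Chiefs won the Super Bowl last year."

reasoning_query = "Show that the sum of two even numbers is even."
helpful_doc = "A direct proof writes each quantity in a general algebraic form."

print(keyword_overlap(factoid_query, factoid_doc))    # six shared words: easy for keyword search
print(keyword_overlap(reasoning_query, helpful_doc))  # set(): nothing for keyword search to match
```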
Even with basic questions, models can get “distracted” by irrelevant content in documents, particularly in long documents where the answer isn't obvious. Or, for reasons as yet unknown, they can simply ignore the contents of retrieved documents and rely on their parametric memory instead.
RAG is also expensive in terms of the hardware needed to apply it at scale.
That's because retrieved documents, whether from the web, an internal database or somewhere else, have to be stored in memory, at least temporarily, so the model can refer back to them. Another expense is the compute for the expanded context a model has to process before generating its response. For a technology already notorious for the amount of compute and electricity it requires even for basic operations, this is a serious consideration.
That's not to suggest RAG can't be improved. Wadden noted a number of ongoing efforts to train models to make better use of the documents RAG retrieves.
Some of these efforts involve models that can “decide” when to make use of the documents, or that can choose not to perform retrieval in the first place if they deem it unnecessary. Others focus on ways to index massive document datasets more efficiently, and on improving search through better representations of documents, representations that go beyond keywords.
“We're good at retrieving documents based on keyword matches, but not so good at retrieving documents based on more abstract concepts, like a proof technique needed to solve a math problem,” Wadden said. “Research is needed to build document representations and search techniques that can identify the documents relevant to more abstract generation tasks. I think this is mostly an open question at this point.”
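One direction that goes “beyond keywords” is dense retrieval: embed queries and documents as vectors and rank by similarity, so conceptually related text can match even without shared words. The sketch below is a generic illustration of that idea, not a specific system Wadden described; the embed function is a hypothetical stand-in for a real sentence-embedding model.

```python
# Dense-retrieval sketch: rank documents by cosine similarity of embeddings
# rather than keyword overlap. `embed` is a placeholder for an actual
# embedding model and is deliberately left abstract here.
import numpy as np

def embed(text: str) -> np.ndarray:
    """Placeholder: return a vector representation of the text."""
    raise NotImplementedError("Replace with a real embedding model.")

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def dense_retrieve(query: str, documents: list[str], k: int = 3) -> list[str]:
    """Return the k documents whose embeddings are most similar to the query's."""
    query_vec = embed(query)
    scored = [(cosine_similarity(query_vec, embed(doc)), doc) for doc in documents]
    scored.sort(key=lambda pair: pair[0], reverse=True)
    return [doc for _, doc in scored[:k]]
```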
So while RAG can help reduce a model's hallucinations, it's not the answer to all of AI's hallucination problems. Be wary of any vendor that tries to claim otherwise.