AI is a notorious liar, and Microsoft now says it has a fix for that. Understandably, that claim will raise some eyebrows, but there's reason to be skeptical.
Microsoft today rolled out Correction, a service that attempts to automatically revise AI-generated text that is factually wrong. Correction first flags text that may be erroneous (for example, a summary of a company's quarterly earnings call that may contain misattributed quotes), then compares it against a source of truth (such as the call transcript) to verify the facts.
Correction, available as part of Microsoft's Azure AI Content Safety API, can be used with any text-generation AI model, including Meta's Llama and OpenAI's GPT-4o.
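To make the workflow concrete, here's a rough sketch of what calling the groundedness-detection-plus-correction capability could look like. The endpoint path, API version, and field names below are assumptions based on Azure AI Content Safety's preview documentation rather than a verified reference, so treat this as illustrative only.

```python
# Illustrative sketch only: the endpoint path, API version, and field names
# reflect Azure AI Content Safety's preview groundedness-detection API as
# best understood here and may differ from the shipping service.
import requests

ENDPOINT = "https://<your-resource>.cognitiveservices.azure.com"  # placeholder
API_KEY = "<your-content-safety-key>"                             # placeholder

payload = {
    "domain": "Generic",
    "task": "Summarization",
    # The AI-generated text to check, e.g. a summary of an earnings call.
    "text": "The company reported revenue of $12 billion in Q2.",
    # The "source of truth" the output is checked against, e.g. the transcript.
    "groundingSources": ["Full transcript of the Q2 earnings call goes here."],
    # Ask the service to rewrite ungrounded spans, not just flag them.
    "correction": True,
}

resp = requests.post(
    f"{ENDPOINT}/contentsafety/text:detectGroundedness",
    params={"api-version": "2024-09-15-preview"},  # assumed preview version
    headers={"Ocp-Apim-Subscription-Key": API_KEY},
    json=payload,
    timeout=30,
)
result = resp.json()
print(result.get("ungroundedDetected"))  # True if hallucinations were flagged
print(result.get("correctionText"))      # rewritten text, when correction is on
```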
“The corrections are made possible by a new process that leverages small and large language models to align the output to the underlying document,” a Microsoft spokesperson told TechCrunch. “We hope this new feature will support users and those building generative AI in fields like healthcare, where application developers find response accuracy highly important.”
Google introduced a similar feature to its AI development platform Vertex AI this summer, allowing customers to “ground” models using data from third-party providers, their own datasets, or Google searches.
But experts warn that such grounding approaches don't address the underlying causes of hallucinations.
“Trying to eliminate hallucinations from generative AI is like trying to eliminate hydrogen from water,” says Os Keyes, a doctoral student at the University of Washington who studies the ethical implications of emerging technologies. “It's an essential part of how the technology works.”
Text generation models hallucinate because they don't actually “know” anything: they are statistical systems that identify patterns in sequences of words and predict which word will come next based on the millions of examples they've been trained on.
This means that a model's responses aren't answers, but merely predictions of how a question would be answered were it present in the training set. As a result, models are prone to straying far from the truth: one study found that OpenAI's ChatGPT answered medical questions incorrectly half the time.
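To see why “prediction, not knowledge” matters, consider a toy next-word predictor (a crude stand-in, not how any production model is actually built): it counts which word most often followed another in its tiny training set and emits that, with no concept of whether the result is true.

```python
from collections import Counter, defaultdict

# A toy "training set": the model will only ever echo patterns found here.
corpus = (
    "the company reported strong earnings . "
    "the company reported weak earnings . "
    "the company reported strong growth ."
).split()

# Count which word follows which (a bigram table).
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def predict_next(word: str) -> str:
    """Return the statistically most likely next word, true or not."""
    candidates = following.get(word)
    return candidates.most_common(1)[0][0] if candidates else "<unknown>"

print(predict_next("reported"))  # "strong" -- frequency, not fact
```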
Microsoft's solution is a pair of cross-referencing, copy-editor-esque metamodels designed to highlight and rewrite hallucinations.
A classifier model looks for inaccurate, fabricated, or irrelevant snippets of AI-generated text (hallucinations). When it detects one, the classifier calls in a second model, a language model, that attempts to correct the hallucination in line with a specified “grounding document.”
Image credit: Microsoft
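In outline, then, the pipeline Microsoft describes is a two-stage detect-then-rewrite loop. The sketch below is a toy reconstruction of that structure; the function names and the crude flagging heuristic are invented here for illustration, not taken from Microsoft's system.

```python
# Toy reconstruction of the detect-then-rewrite structure described above.
# Both stages are crude stand-ins: the real service uses a trained classifier
# and a language model, not the heuristics below.
from dataclasses import dataclass

@dataclass
class Span:
    text: str
    is_hallucination: bool

def classify_spans(generated: str, grounding: str) -> list[Span]:
    """Stage 1 stand-in: flag sentences containing numbers absent from the source."""
    grounding_tokens = set(grounding.lower().split())
    spans = []
    for sentence in filter(None, (s.strip() for s in generated.split("."))):
        tokens = sentence.lower().split()
        unsupported = any(t.isdigit() and t not in grounding_tokens for t in tokens)
        spans.append(Span(sentence, unsupported))
    return spans

def rewrite_span(span_text: str, grounding: str) -> str:
    """Stage 2 stand-in: the real system asks a language model to rewrite
    the flagged span so that it is supported by the grounding document."""
    return f"[rewritten to match source: {grounding[:40]}...]"

def correct_output(generated: str, grounding: str) -> str:
    corrected = generated
    for span in classify_spans(generated, grounding):
        if span.is_hallucination:
            corrected = corrected.replace(span.text, rewrite_span(span.text, grounding))
    return corrected

summary = "Revenue was 12 billion dollars. Margins improved."
transcript = "On the call, the CFO said revenue was 9 billion dollars."
print(correct_output(summary, transcript))
```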
“The fixes will help application developers mitigate user frustration and potential reputational risk, and will significantly increase the reliability and trustworthiness of AI-generated content,” a Microsoft spokesperson said. “It's important to note that groundedness detection does not solve for 'accuracy,' but rather helps align generative AI output with grounding documents.”
Keyes is skeptical.
“This may mitigate some problems,” they say, “but it will also create new ones. After all, Correction's hallucination-detection library is presumably capable of hallucinating as well.”
Asked for background on the Correction models, a spokesperson pointed to a recent paper from a Microsoft research team describing the models' prototype architectures, though the paper leaves out important details, such as which data sets were used to train them.
Mike Cook, an AI researcher at Queen Mary University, argued that even if Correction worked as advertised, it risked exacerbating the trust and explainability problems surrounding AI: While the service might catch some errors, it could also lull users into a false sense of security, leading them to believe that models reflect the truth more often than they actually do.
“Microsoft, like OpenAI and Google, created the problem of their models being relied upon in scenarios where they make frequent mistakes,” he said. “What Microsoft is doing now is repeating the mistake at a higher level. Let's say this takes us from 90% safety to 99% safety: the issue was never really in that 9%. It's always going to be in the 1% of mistakes that haven't been detected yet.”
Cook added that there's also a cynical business angle to the way Microsoft is bundling Correction: while the feature itself is free, the “groundedness detection” required to flag hallucinations so Correction can fix them is only free for up to 5,000 “text records” per month. After that, it costs 38 cents per 1,000 text records.
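As a back-of-the-envelope illustration of that pricing (assuming the tiers are exactly as quoted: the first 5,000 text records per month free, then $0.38 per 1,000 records, prorated linearly), the monthly bill works out like this:

```python
def monthly_groundedness_cost(text_records: int) -> float:
    """Estimate the monthly groundedness-detection bill under the quoted tiers.

    Assumes the first 5,000 records are free and overage is billed at a flat
    $0.38 per 1,000 records; the actual billing granularity may differ.
    """
    billable = max(0, text_records - 5_000)
    return round(billable / 1_000 * 0.38, 2)

print(monthly_groundedness_cost(100_000))  # 36.1 -> roughly $36 for 100k records
```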
Microsoft is certainly under pressure to prove to customers — and shareholders — that its AI is worth investing in.
In the second quarter alone, the tech giant spent nearly $19 billion on capital expenditures and equipment, mostly related to AI. But the company has yet to realize significant revenue from AI, and Wall Street analysts downgraded the company's shares this week, citing doubts about the company's long-term AI strategy.
According to an article in The Information, many early adopters are pausing rollouts of Microsoft's flagship generative AI platform, Microsoft 365 Copilot, over performance and cost concerns. One customer using Copilot for Microsoft Teams meetings found the AI was inventing attendees and implying that calls were about subjects that were never actually discussed.
According to a KPMG survey, companies' biggest concerns when testing AI tools are accuracy and the possibility of hallucinations.
“If this were a normal product lifecycle, generative AI would still be in the academic research and development stage, being worked on to improve it and understand its strengths and weaknesses,” Cook said. “Instead, we're deploying it across a dozen industries. Microsoft and other companies have put everyone on an exciting new rocket ship and are assembling the landing gear and parachutes on the way to the destination.”