Google has come under fire for some of the inaccurate, ridiculous, and just plain weird answers it serves up through its AI summary for search.
AI Overviews, the AI-generated search results that Google began rolling out more broadly earlier this month, have had mixed results. One user asking how to make cheese stick to pizza was reportedly told to add glue (advice lifted from an old Reddit post), while another was told to eat one small rock a day (a recommendation sourced from The Onion).
Google is working to remove the inaccurate results, and a company spokesperson said in a statement that the company is “acting swiftly” and “using these cases to drive broader improvements to our systems.”
Staring at the sun is bad for your eyes regardless of skin color, despite Google AI's interpretation of WebMD https://t.co/O27O9Ex0iu
— Dr. Glaucomflecken (@DGlaucomflecken) May 24, 2024
“The majority of our AI Overviews provide high-quality information with links to dig deeper on the web,” the spokesperson said. “Many of the examples we saw were unusual queries, and we also saw examples that were doctored or couldn't be reproduced. We conducted extensive testing before launching this new experience, and as with other features we launch in Search, we welcome your feedback.”
So it's probably safe to assume that these results will improve over time, and that at least some of the screenshots circulating on social media were made for laughs.
But these AI search results got me wondering: what are they actually for? Even if everything worked perfectly, how would they be better than a regular web search?
Clearly, Google is trying to give users the answers they need without making them scroll through multiple web pages. In fact, the company noted in its AI Overviews announcement that early testing showed that “users are using search more and are happier with the results.”
But the idea of replacing the “10 blue links” is an old one, and although Google has already started to de-emphasize them, I think it's too early to remove those blue links completely.
Screenshot: Google
Let's try some very self-involved searches: “what is techcrunch” returns a mostly accurate summary, but it's weirdly padded, like a student trying to hit a minimum page count, and its traffic numbers seem to come from Yale's career site. Searching “how do I get published on techcrunch,” the summary cites an old article about submitting guest columns, which TechCrunch no longer accepts.
Screenshot: Google
The point is not to catch AI Overviews getting more things wrong, but to suggest that many of its mistakes will be mundane rather than flashy or interesting. And while, to Google's credit, the Overview does include links to the pages that provided its source material, it takes a lot of clicking to work out which answers came from which sources.
Google also says that the inaccurate results flagged on social media often come down to data gaps: subjects where there isn't much accurate information online. That's reasonable, but it highlights the fact that, like regular search, AI needs a healthy open web full of accurate information.
Unfortunately, AI could pose an existential threat to the open web: after all, there's much less incentive to write accurate how-to articles or do extensive investigative journalism if people are only going to read AI-generated summaries, which may or may not be accurate.
Google says that with AI summaries, “people visit a more diverse range of websites looking for answers to more complex questions,” and that “links in AI summaries receive more clicks than if the pages had appeared as traditional web listings for that query.” I hope that's true, but if not, no amount of technological improvement will make up for the vast swaths of the web that could disappear.