Google’s AI Overviews Under Fire for Inaccurate Guidance and Reduced Publisher Clicks

June 7, 2025

Google’s AI Overviews, the feature built to deliver quick answers to search queries, has come under growing scrutiny for offering misleading advice, with some critics warning that certain suggestions are potentially dangerous.

For instance, one output recommended adding glue to pizza sauce to help the cheese stick, while another explained the non-existent idiom “You can’t lick a badger twice” as if it were genuine. Experts call these kinds of mistakes ‘hallucinations’, the industry term for answers a model invents and presents as fact.

The implications go beyond quirky errors. By summarising search results rather than directing users to original websites, Google’s approach is cutting click-through rates to publisher sites by 40%–60%, according to Laurence O’Toole of the analytics firm Authoritas. That drop threatens the search visibility and referral traffic that many publishers depend on.

While Google’s head of Search, Liz Reid, has acknowledged these challenges as areas needing improvement, CEO Sundar Pichai remains confident. He argues that, despite the setbacks, the tool broadens user access to information by changing how content is delivered.

Concerns about accuracy persist. Although the AI itself claims a hallucination rate of just 0.7% to 1.3%, data on the Hugging Face platform suggests that the latest Gemini model could be closer to 1.8%. The tool has also been known to defend its own output, brushing aside concerns about how it handles art and other creative content.

It’s a reminder that hallucinations aren’t unique to Google. OpenAI has found that its most recent reasoning models, o3 and o4-mini, fabricate information more often than their predecessors, with error rates of 33% and 48% respectively when answering questions about real people. If you’ve ever hesitated before following an on-screen recommendation, this is one more reason to scrutinise AI-generated answers.