Tens of Millions of Errors Per Hour: Investigation Reveals the 'Accuracy Illusion' of Google AI Search

marsbit · Published on 2026-04-10 · Last updated on 2026-04-10

Summary

A New York Times investigation, in collaboration with AI startup Oumi, reveals significant accuracy and reliability issues with Google's AI Overviews search feature. Testing over 4,300 queries showed the accuracy rate improved from 85% (Gemini 2) to 91% (Gemini 3). However, given Google's scale of ~5 trillion annual searches, this 9% error rate translates to over 57 million incorrect answers generated hourly. A more critical issue is the prevalence of unsubstantiated citations. For correct answers, the rate of "unfounded citations"—where provided source links do not support the AI's claims—worsened, rising from 37% with Gemini 2 to 56% with Gemini 3. This makes it difficult for users to verify the information. The AI also heavily relies on low-quality sources, with Facebook and Reddit being its second and fourth most cited domains. Furthermore, the system is highly susceptible to manipulation. A BBC journalist successfully "poisoned" it by publishing a fake article; Google's AI began presenting the false information as fact within 24 hours. Google disputed the study's methodology, criticizing the use of the SimpleQA benchmark and an AI model (Oumi's HallOumi) to evaluate its own AI. The company maintains that its internal safeguards and ranking systems improve accuracy beyond the base model's performance.

Author: Claude, Deep Tide TechFlow

Deep Tide Introduction: The latest test by The New York Times in collaboration with AI startup Oumi shows that the accuracy rate of Google Search's AI Overviews feature is about 91%. However, given Google's scale of processing 5 trillion searches annually, this translates to tens of millions of incorrect answers generated every hour. More troublingly, even when the answers are correct, over half of the cited links fail to support their conclusions.

Google is delivering misinformation to users on an unprecedented scale, and most people are completely unaware.

According to The New York Times, AI startup Oumi, commissioned by the publication, used the industry-standard test SimpleQA developed by OpenAI to evaluate the accuracy of Google's AI Overviews feature. The test covered 4,326 search queries, conducting one round in October last year (powered by Gemini 2) and another in February this year (upgraded to Gemini 3). The results showed that Gemini 2's accuracy was about 85%, which improved to 91% with Gemini 3.

91% sounds good, but it's a different story at Google's scale. Google processes approximately 5 trillion search queries annually. At a 9% error rate, AI Overviews generates over 57 million inaccurate answers per hour, nearly 1 million per minute.
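As a back-of-envelope check, the scale claim can be reproduced from the two figures the article cites (assuming the 5-trillion-query volume and 9% error rate are spread evenly across the year):

```python
# Back-of-envelope check of the article's scale claim.
# Assumes 5 trillion queries/year and a 9% error rate, spread evenly.
ANNUAL_QUERIES = 5_000_000_000_000  # ~5 trillion searches per year
ERROR_RATE = 0.09                   # 9% of AI Overview answers inaccurate

errors_per_year = ANNUAL_QUERIES * ERROR_RATE
errors_per_hour = errors_per_year / (365 * 24)
errors_per_minute = errors_per_year / (365 * 24 * 60)

print(f"{errors_per_hour:,.0f} errors per hour")
print(f"{errors_per_minute:,.0f} errors per minute")
```

This even-spread assumption yields roughly 51 million errors per hour and about 856,000 per minute — the same order of magnitude as the article's "over 57 million" figure (which may reflect a different base volume or rounding), and consistent with its "nearly 1 million per minute" claim.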

Correct Answers, Wrong Sources

More alarming than the raw accuracy rate is the problem of unsupported citations.

Oumi's data shows that in the Gemini 2 era, 37% of correct answers had "unsupported citations," meaning the links attached to the AI summaries did not support the information provided. After upgrading to Gemini 3, this proportion increased instead of decreasing, jumping to 56%. In other words, while the model gives correct answers, it's increasingly failing to "show its work."
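Taken together, the two reported rates imply that only a minority of answers are both correct and verifiable from their citations. This is illustrative arithmetic combining the article's figures, not a number from the Oumi study itself:

```python
# Illustrative arithmetic (not a figure from the Oumi study): the share
# of Gemini 3 answers that are both correct AND backed by citations
# that actually support the claim.
accuracy = 0.91          # Gemini 3 accuracy on the SimpleQA queries
unsupported_rate = 0.56  # correct answers whose citations don't support them

correct_and_supported = accuracy * (1 - unsupported_rate)
print(f"{correct_and_supported:.0%}")  # prints "40%"
```

By this rough measure, a user can confirm only about 40% of answers against the sources the AI itself provides — which is the substance of Koukoumidis's objection below.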

Oumi CEO Manos Koukoumidis pointedly questioned: "Even if the answer is correct, how do you know it's correct? How do you verify it?"

The problem is exacerbated by AI Overviews' heavy reliance on low-quality sources. Oumi found that Facebook and Reddit are the second and fourth most cited sources for AI Overviews, respectively. In inaccurate answers, Facebook was cited 7% of the time, higher than the 5% in accurate answers.

BBC Journalist's Fake Article "Poisoned" Results Within 24 Hours

Another serious flaw of AI Overviews is its susceptibility to manipulation.

A BBC journalist tested the system with a deliberately fabricated false article. In less than 24 hours, Google's AI Overview presented the false information from the article as fact to users.

This means anyone who understands how the system works could potentially "poison" AI search results by publishing false content and boosting its traffic. Google spokesperson Ned Adriance responded by saying the search AI feature is built on the same ranking and security mechanisms that block spam, and claimed that "most examples in the test are unrealistic queries that people wouldn't actually search for."

Google's Rebuttal: The Test Itself Is Flawed

Google raised several objections to Oumi's research. A Google spokesperson called the study "seriously flawed," citing reasons including: the SimpleQA benchmark itself contains inaccurate information; Oumi used its own AI model HallOumi to judge another AI's performance, potentially introducing additional errors; and the test content doesn't reflect real user search behavior.

Google's internal tests also showed that when Gemini 3 operates independently outside the Google Search framework, it produces false outputs at a rate as high as 28%. But Google emphasized that AI Overviews leverages the search ranking system to improve accuracy, performing better than the model itself.

However, as PCMag's commentary pointed out, there is a logical paradox here: if your defense is that "the report pointing out our AI's inaccuracies itself uses potentially inaccurate AI," that probably doesn't enhance users' confidence in your product's accuracy.

Related Questions

Q: What is the accuracy rate of Google's AI Overviews feature according to the Oumi study?

A: The accuracy rate of Google's AI Overviews was found to be approximately 91% when powered by Gemini 3, an improvement from about 85% with Gemini 2.

Q: How many inaccurate answers does the article estimate Google's AI Overviews produces per hour?

A: Based on Google's annual volume of 5 trillion searches and a 9% error rate, the AI Overviews feature is estimated to produce over 57 million inaccurate answers per hour.

Q: What is the "unsubstantiated citation" problem identified in the report?

A: The "unsubstantiated citation" problem refers to instances where AI Overviews provides a correct answer, but the attached source links do not actually support the information given. This issue increased from 37% with Gemini 2 to 56% with Gemini 3.

Q: Which low-quality websites are frequently used as sources by AI Overviews, according to the Oumi data?

A: According to Oumi's data, Facebook and Reddit are the second and fourth most cited sources by AI Overviews, with Facebook being cited more frequently in inaccurate answers.

Q: How did Google respond to the findings of the Oumi study?

A: Google criticized the study, calling it "seriously flawed." Its spokesperson argued that the SimpleQA benchmark itself contains inaccuracies, that using an AI (HallOumi) to judge another AI introduces errors, and that the test queries do not reflect real user search behavior.
