Tens of Millions of Errors Per Hour: Investigation Reveals the 'Accuracy Illusion' of Google AI Search

marsbit · Published 2026-04-10 · Last updated 2026-04-10

Summary

A New York Times investigation, in collaboration with AI startup Oumi, reveals significant accuracy and reliability issues with Google's AI Overviews search feature. Testing over 4,300 queries showed the accuracy rate improved from 85% (Gemini 2) to 91% (Gemini 3). However, given Google's scale of ~5 trillion annual searches, this 9% error rate translates to over 57 million incorrect answers generated hourly. A more critical issue is the prevalence of unsubstantiated citations. For correct answers, the rate of "unfounded citations"—where provided source links do not support the AI's claims—worsened, rising from 37% with Gemini 2 to 56% with Gemini 3. This makes it difficult for users to verify the information. The AI also heavily relies on low-quality sources, with Facebook and Reddit being its second and fourth most cited domains. Furthermore, the system is highly susceptible to manipulation. A BBC journalist successfully "poisoned" it by publishing a fake article; Google's AI began presenting the false information as fact within 24 hours. Google disputed the study's methodology, criticizing the use of the SimpleQA benchmark and an AI model (Oumi's HallOumi) to evaluate its own AI. The company maintains that its internal safeguards and ranking systems improve accuracy beyond the base model's performance.

Author: Claude, Deep Tide TechFlow

Deep Tide Introduction: The latest test by The New York Times in collaboration with AI startup Oumi shows that the accuracy rate of Google Search's AI Overviews feature is about 91%. However, given Google's scale of processing 5 trillion searches annually, this translates to tens of millions of incorrect answers generated every hour. More troublingly, even when the answers are correct, over half of the cited links fail to support their conclusions.

Google is delivering misinformation to users on an unprecedented scale, and most people are completely unaware.

According to The New York Times, AI startup Oumi, commissioned by the publication, used the industry-standard test SimpleQA developed by OpenAI to evaluate the accuracy of Google's AI Overviews feature. The test covered 4,326 search queries, conducting one round in October last year (powered by Gemini 2) and another in February this year (upgraded to Gemini 3). The results showed that Gemini 2's accuracy was about 85%, which improved to 91% with Gemini 3.

An accuracy rate of 91% sounds good, but it's a different story at Google's scale. Google processes approximately 5 trillion search queries annually; at a 9% error rate, AI Overviews generates over 57 million inaccurate answers per hour, nearly 1 million per minute.
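As a rough sanity check, the back-of-envelope arithmetic can be sketched as below. This is an illustration, not the Times' exact methodology: it assumes every search triggers an AI Overview, and the article's published "over 57 million" figure evidently rests on slightly different assumptions or rounding than a straight 9%-of-5-trillion calculation.

```python
# Back-of-envelope estimate of hourly AI Overviews errors.
# Assumption (for illustration only): every one of Google's
# ~5 trillion annual searches produces an AI Overview answer.

ANNUAL_SEARCHES = 5_000_000_000_000   # ~5 trillion queries per year
ERROR_RATE = 0.09                     # Gemini 3: 91% accurate -> 9% wrong
HOURS_PER_YEAR = 365 * 24             # 8,760 hours

errors_per_hour = ANNUAL_SEARCHES * ERROR_RATE / HOURS_PER_YEAR
errors_per_minute = errors_per_hour / 60

print(f"{errors_per_hour:,.0f} errors per hour")     # ~51 million
print(f"{errors_per_minute:,.0f} errors per minute")  # ~856 thousand
```

Even under these simplified assumptions, the order of magnitude (tens of millions of wrong answers per hour) matches the article's headline claim.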

Correct Answers, Wrong Sources

More alarming than the raw accuracy rate is the problem of "unsupported" citation sources.

Oumi's data shows that in the Gemini 2 era, 37% of correct answers had "unsupported citations," meaning the links attached to the AI summaries did not support the information provided. After upgrading to Gemini 3, this proportion increased instead of decreasing, jumping to 56%. In other words, while the model gives correct answers, it's increasingly failing to "show its work."

Oumi CEO Manos Koukoumidis asked pointedly: "Even if the answer is correct, how do you know it's correct? How do you verify it?"
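Combining the two reported figures gives a sense of how rarely a user gets an answer that is both correct and verifiable. The calculation below treats Oumi's numbers as independent point estimates, which is a simplification for illustration:

```python
# Oumi's reported Gemini 3 figures: 91% of answers correct, and
# 56% of correct answers carried citations that did NOT support
# the claim. Combining them (an illustrative simplification):

accuracy = 0.91                    # fraction of answers that are correct
unsupported_given_correct = 0.56   # of correct answers, citations unsupported

correct_and_supported = accuracy * (1 - unsupported_given_correct)
print(f"{correct_and_supported:.0%}")  # ~40% of answers are both
                                       # correct and verifiably sourced
```

On these numbers, only about 40% of AI Overviews answers would be both correct and backed by citations a user could actually check.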

The problem is exacerbated by AI Overviews' heavy reliance on low-quality sources. Oumi found that Facebook and Reddit are the second and fourth most cited sources for AI Overviews, respectively. In inaccurate answers, Facebook was cited 7% of the time, higher than the 5% in accurate answers.

BBC Journalist's Fake Article "Poisoned" Results Within 24 Hours

Another serious flaw of AI Overviews is its susceptibility to manipulation.

A BBC journalist tested the system with a deliberately fabricated false article. In less than 24 hours, Google's AI Overview presented the false information from the article as fact to users.

This means anyone who understands how the system works could potentially "poison" AI search results by publishing false content and boosting its traffic. Google spokesperson Ned Adriance responded by saying the search AI feature is built on the same ranking and security mechanisms that block spam, and claimed that "most examples in the test are unrealistic queries that people wouldn't actually search for."

Google's Rebuttal: The Test Itself Is Flawed

Google raised several objections to Oumi's research. A Google spokesperson called the study "seriously flawed," citing reasons including: the SimpleQA benchmark itself contains inaccurate information; Oumi used its own AI model HallOumi to judge another AI's performance, potentially introducing additional errors; and the test content doesn't reflect real user search behavior.

Google's internal tests also showed that when Gemini 3 operates independently outside the Google Search framework, it produces false outputs at a rate as high as 28%. But Google emphasized that AI Overviews leverages the search ranking system to improve accuracy, performing better than the model itself.

However, PCMag's commentary pointed out the logical paradox: if your defense is that "the report pointing out our AI's inaccuracies itself relies on potentially inaccurate AI," that probably doesn't enhance users' confidence in your product's accuracy.

Related Questions

Q: What is the accuracy rate of Google's AI Overviews feature according to the Oumi study?

A: The accuracy rate of Google's AI Overviews was found to be approximately 91% when powered by Gemini 3, an improvement from about 85% with Gemini 2.

Q: How many inaccurate answers does the article estimate Google's AI Overviews produces per hour?

A: Based on Google's annual volume of 5 trillion searches and a 9% error rate, the AI Overviews feature is estimated to produce over 57 million inaccurate answers per hour.

Q: What is the 'unsubstantiated citation' problem identified in the report?

A: The 'unsubstantiated citation' problem refers to instances where AI Overviews provides a correct answer, but the attached source links do not actually support the information given. This issue increased from 37% with Gemini 2 to 56% with Gemini 3.

Q: Which low-quality websites are frequently used as sources by AI Overviews, according to the Oumi data?

A: According to Oumi's data, Facebook and Reddit are the second and fourth most cited sources by AI Overviews, with Facebook being cited more frequently in inaccurate answers.

Q: How did Google respond to the findings of the Oumi study?

A: Google criticized the study, calling it 'seriously flawed.' Its spokesperson argued that the SimpleQA benchmark itself contains inaccuracies, that using an AI (HallOumi) to judge another AI introduces errors, and that the test queries do not reflect real user search behavior.
