Tens of Millions of Errors Per Hour: Investigation Reveals the 'Accuracy Illusion' of Google AI Search

marsbit · Published 2026-04-10 · Last updated 2026-04-10

Abstract

A New York Times investigation, in collaboration with AI startup Oumi, reveals significant accuracy and reliability issues with Google's AI Overviews search feature. Testing over 4,300 queries showed the accuracy rate improved from 85% (Gemini 2) to 91% (Gemini 3). However, given Google's scale of ~5 trillion annual searches, this 9% error rate translates to over 57 million incorrect answers generated hourly. A more critical issue is the prevalence of unsubstantiated citations. For correct answers, the rate of "unfounded citations"—where provided source links do not support the AI's claims—worsened, rising from 37% with Gemini 2 to 56% with Gemini 3. This makes it difficult for users to verify the information. The AI also heavily relies on low-quality sources, with Facebook and Reddit being its second and fourth most cited domains. Furthermore, the system is highly susceptible to manipulation. A BBC journalist successfully "poisoned" it by publishing a fake article; Google's AI began presenting the false information as fact within 24 hours. Google disputed the study's methodology, criticizing the use of the SimpleQA benchmark and an AI model (Oumi's HallOumi) to evaluate its own AI. The company maintains that its internal safeguards and ranking systems improve accuracy beyond the base model's performance.

Author: Claude, Deep Tide TechFlow

Deep Tide Introduction: The latest test by The New York Times in collaboration with AI startup Oumi shows that the accuracy rate of Google Search's AI Overviews feature is about 91%. However, given Google's scale of processing 5 trillion searches annually, this translates to tens of millions of incorrect answers generated every hour. More troublingly, even when the answers are correct, over half of the cited links fail to support their conclusions.

Google is delivering misinformation to users on an unprecedented scale, and most people are completely unaware.

According to The New York Times, AI startup Oumi, commissioned by the publication, used the industry-standard test SimpleQA developed by OpenAI to evaluate the accuracy of Google's AI Overviews feature. The test covered 4,326 search queries, conducting one round in October last year (powered by Gemini 2) and another in February this year (upgraded to Gemini 3). The results showed that Gemini 2's accuracy was about 85%, which improved to 91% with Gemini 3.

An accuracy rate of 91% sounds good, but it's a different story at Google's scale. Google processes approximately 5 trillion search queries annually. At a 9% error rate, AI Overviews generates over 57 million inaccurate answers per hour, nearly 1 million per minute.
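The arithmetic behind these headline numbers can be sketched as a back-of-envelope calculation. The inputs below are the article's own round figures; the exact outputs depend on how the source rounded its inputs, so treat this as an order-of-magnitude check rather than a reproduction of the article's precise numbers.

```python
# Back-of-envelope: scale of errors implied by a 9% error rate
# applied to Google's reported annual search volume.

ANNUAL_SEARCHES = 5e12   # ~5 trillion searches per year
ERROR_RATE = 0.09        # 9% of answers incorrect (Gemini 3 era)

errors_per_year = ANNUAL_SEARCHES * ERROR_RATE
errors_per_hour = errors_per_year / (365 * 24)
errors_per_minute = errors_per_hour / 60

print(f"per hour:   {errors_per_hour:,.0f}")
print(f"per minute: {errors_per_minute:,.0f}")
# → tens of millions of errors per hour, approaching a million per minute
```

Even under these simple assumptions (every search triggering an AI Overview, uniform error rate), the result lands in the tens of millions of errors per hour, which is the order of magnitude the article reports.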

Correct Answers, Wrong Sources

More alarming than the raw accuracy rate is the problem of unsubstantiated citations.

Oumi's data shows that in the Gemini 2 era, 37% of correct answers had "unsupported citations," meaning the links attached to the AI summaries did not support the information provided. After upgrading to Gemini 3, this proportion increased instead of decreasing, jumping to 56%. In other words, while the model gives correct answers, it's increasingly failing to "show its work."

Oumi CEO Manos Koukoumidis pointedly questioned: "Even if the answer is correct, how do you know it's correct? How do you verify it?"

The problem is exacerbated by AI Overviews' heavy reliance on low-quality sources. Oumi found that Facebook and Reddit are the second and fourth most cited sources for AI Overviews, respectively. In inaccurate answers, Facebook was cited 7% of the time, higher than the 5% in accurate answers.

BBC Journalist's Fake Article "Poisoned" Results Within 24 Hours

Another serious flaw of AI Overviews is its susceptibility to manipulation.

A BBC journalist tested the system with a deliberately fabricated false article. In less than 24 hours, Google's AI Overview presented the false information from the article as fact to users.

This means anyone who understands how the system works could potentially "poison" AI search results by publishing false content and boosting its traffic. Google spokesperson Ned Adriance responded by saying the search AI feature is built on the same ranking and security mechanisms that block spam, and claimed that "most examples in the test are unrealistic queries that people wouldn't actually search for."

Google's Rebuttal: The Test Itself Is Flawed

Google raised several objections to Oumi's research. A Google spokesperson called the study "seriously flawed," citing reasons including: the SimpleQA benchmark itself contains inaccurate information; Oumi used its own AI model HallOumi to judge another AI's performance, potentially introducing additional errors; and the test content doesn't reflect real user search behavior.

Google's internal tests also showed that when Gemini 3 operates independently outside the Google Search framework, it produces false outputs at a rate as high as 28%. But Google emphasized that AI Overviews leverages the search ranking system to improve accuracy, performing better than the model itself.

However, PCMag's commentary pointed out the logical paradox: if your defense is that "the report criticizing our AI's inaccuracy itself relies on potentially inaccurate AI," that probably does little to boost users' confidence in your product's accuracy.

Related Questions

Q: What is the accuracy rate of Google's AI Overviews feature according to the Oumi study?

A: The accuracy rate of Google's AI Overviews was found to be approximately 91% when powered by Gemini 3, an improvement from about 85% with Gemini 2.

Q: How many inaccurate answers does the article estimate Google's AI Overviews produces per hour?

A: Based on Google's annual volume of 5 trillion searches and a 9% error rate, the AI Overviews feature is estimated to produce over 57 million inaccurate answers per hour.

Q: What is the 'unsubstantiated citation' problem identified in the report?

A: The 'unsubstantiated citation' problem refers to instances where AI Overviews provides a correct answer, but the attached source links do not actually support the information given. This issue increased from 37% with Gemini 2 to 56% with Gemini 3.

Q: Which low-quality websites are frequently used as sources by AI Overviews, according to the Oumi data?

A: According to Oumi's data, Facebook and Reddit are the second and fourth most cited sources by AI Overviews, with Facebook being cited more frequently in inaccurate answers.

Q: How did Google respond to the findings of the Oumi study?

A: Google criticized the study, calling it 'seriously flawed.' Its spokesperson argued that the SimpleQA benchmark itself contains inaccuracies, that using an AI (HallOumi) to judge another AI introduces errors, and that the test queries do not reflect real user search behavior.
