315 Exposes AI Poisoning, a Business from Putian to Silicon Valley

比推 · Published on 2026-03-16 · Last updated on 2026-03-16

Abstract

"315 Exposed: AI 'Poisoning' - A Business from Putian to Silicon Valley" During China's 315 consumer rights expose, a practice called Generative Engine Optimization (GEO) was revealed. GEO involves manipulating AI-generated responses by flooding the internet with promotional content, which AI models then scrape and present as factual recommendations. A tool called "Liqing GEO," sold on Taobao, demonstrated this by fabricating a fake smartwatch with absurd features ("quantum entanglement sensing," "black hole-level battery") and having AI recommend it within hours. This mirrors the early days of Search Engine Optimization (SEO), where paid rankings, notably by Putian-based hospitals on Baidu, dominated search results. Despite regulations, the core model remains: whoever controls the information gateway sells rankings. Now, with AI as the new gateway, SEO has simply become GEO. The business is significant. BlueFocus, a major marketing firm, invested millions in a GEO company, PureblueAI, serving clients like Ant Group and Volvo. While Pureblue claims to optimize real brand information, the technical method—flooding the web with content for AI to scrape—is identical to the "poisoning" tactic. This ambiguity fueled a stock market frenzy in late 2025, with GEO-related stocks like BlueFocus surging over 130% before executives cashed out. Simultaneously, Silicon Valley is formalizing this model. OpenAI announced ads in ChatGPT for free users, with sponsored links appearing below...

Author: David, Deep Tide TechFlow

Original Title: 315 Exposes AI Poisoning, a Business from Putian to Silicon Valley


Last night, 315 exposed a business based on GEO.

Full name: Generative Engine Optimization. You can understand it as:

Paying to have AI say nice things about you.

How is it done?

Brands want AI to prioritize recommending them when consumers ask. So they find GEO service providers, who batch-publish promotional soft articles online. After AI crawls this content, it treats it as real information and recommends it to users.

A CCTV reporter used a software called "Liqing GEO," which can be bought on Taobao.

The reporter fabricated a smart wristband and made up several outrageous product features, like "quantum entanglement sensing" and "black hole-level battery life." The software automatically generated over a dozen promotional soft articles and published them online.

Two hours later, the reporter asked an AI: "Can you recommend a smart health wristband for me?"

The AI ranked this non-existent wristband at the top of the recommendation list.

The company behind this software is Beijing Lisi Culture Media, a one-person company that has reported zero employees on social insurance for many consecutive years.

A tool made by such a company fooled mainstream domestic AI models in just two hours.

315 uncovered AI poisoning, but this business might be much bigger than a single Taobao software.

SEO, the Putian Story

First, this is not new at all.

In 2008, CCTV's "News 30 Minutes" exposed Baidu's paid ranking for two consecutive days. Paying money could get your website to the top of search results, even if it was for fake medicine.

Back then, this business was called SEO, Search Engine Optimization.

The biggest buyers were Putian-affiliated private hospitals. In 2013, the Putian faction spent 12 billion RMB on Baidu advertising, accounting for nearly half of Baidu's total ad revenue.

Many unqualified medical institutions used SEO to boost themselves to the first page of Baidu search results, appearing alongside Class A tertiary hospitals, making it impossible for ordinary people to tell the difference.

It wasn't until the 2016 Wei Zexi incident, where a university student died after seeking treatment at a top-ranked Putian hospital, that regulators legislated clearly: paid search is advertising.

But this didn't kill the business. It just set the rules, turning it from a gray-market operation into a legitimate business. Putian-affiliated hospitals still buy rankings, but there's a small label next to the result: "Ad."

But even with the label, people who would click still click.

The fundamental problem with search engines was never the labeling, but users' inherent trust in the top results.

Now people have moved from search engines to AI, thinking AI is more objective and won't be polluted by paid rankings. But whoever controls the gateway to information distribution can sell rankings.

The gateway changed, SEO changed a letter to become GEO, but the logic of selling rankings hasn't changed one bit.

What changed is the price.

GEO, Loved by the Capital Market

Businesses that can't be killed are the capital market's favorite.

In September 2025, BlueFocus, China's largest marketing communication company, invested tens of millions of RMB in a GEO company called Qinglan (PureblueAI).

Qinglan helps real brands optimize their ranking and recommendation rate in AI search results. Clients include Ant Group, Tencent Cloud, and Volvo.

The products are real, the company is real, and they work to help AI understand brand information more accurately.

This is completely different from the AI poisoning exposed by 315 involving Liqing. Liqing fabricated products, made up parameters, and tricked AI with false information; Qinglan uses real brand content to adapt to AI's recommendation logic.

But from AI's perspective, the technical path for both things is the same: both involve publishing content online and waiting for AI to crawl it.

AI can't tell which is marketing and which is fabrication. This is the most ambiguous aspect of the GEO business.

When BlueFocus invested in Qinglan, GEO was just an industry term within marketing circles. Three months later, it became a stock market concept.

At the end of December 2025, BlueFocus's stock price hit the daily limit-up.

Brokerages began holding intensive conference calls to interpret GEO, with research reports defining it as "the next generation traffic entrance in the AI era." Capital poured in, not only buying BlueFocus but also driving up stocks of any company related to digital marketing and AI concepts. BlueFocus rose 132% in 9 trading days, and a batch of follower concept stocks also doubled.

Image Source: CLS News

After the surge, these companies issued risk warnings themselves:

GEO business has no revenue and has no significant impact on company operations. BlueFocus also admitted that AI-driven revenue accounts for a very small proportion of overall revenue.

The implication is that the stock price more than doubled, but the GEO business itself hasn't made much money yet.

At the end of January, BlueFocus's stock price rose from 9.6 yuan to 23.3 yuan, a 143% increase in a month. Right at this time, Chairman Zhao Wenquan announced plans to sell up to 20 million shares. Based on the stock price at the time, this would cash out approximately 467 million RMB.

Public research reports show that last year, the total market size of the domestic GEO industry was about 2.9 billion RMB. The market value increase of BlueFocus's stock alone in one month far exceeded this amount.

315 exposed Liqing-style poisoning of AI for a few hundred RMB. But the GEO concept went through the A-share market and made billions.

Whether it's poisoning or not is hard to say, but the money made is real.

315 Calls it Poisoning, Silicon Valley Calls it Commercialization

In January this year, OpenAI announced on its official blog: ChatGPT will start selling ads.

Free users and $8/month Go users will see ads; paid subscription premium users are unaffected.

On February 9th, ads officially launched. Some ads appear at the bottom of ChatGPT's answers, marked with a small word: Sponsored. The first batch of advertisers includes Ford, Adobe, Target, Best Buy...

You ask ChatGPT what car is good to buy, it gives you an answer, and below the answer hangs a sponsored link from Ford.

OpenAI made it very clear: Ads will not influence the content of ChatGPT's answers. The answer is the answer, the ad is the ad, they are separate.

Does that sound familiar?

Baidu said the same thing back in the day. Paid ranking is paid ranking, organic search is organic search, they are separate. Later, the top five search results were all ads.

OpenAI expects ads to help double its consumer-side annual revenue to $17 billion. ChatGPT has over 800 million weekly active users, 95% of whom are free users, all potential audiences for ads.

Now looking back at the industry chain exposed by 315: Liqing floods AI with soft articles, making AI recommend non-existent products. OpenAI places sponsored content below AI's answers, making AI recommend products that paid money.

The one that didn't notify the platform is called poisoning. The one that signed a contract with the platform is called commercialization.

For the user, what's the difference?

One is inside the answer, one is below the answer. One has no label, one has a label saying "Ad".

315 caught Liqing for a few hundred RMB, A-shares speculated on the GEO concept for billions, OpenAI plans to make $17 billion a year from this.

The same thing, its nature changes from poisoning to commercialization, and the price increases tens of thousands of times.

In November 2023, researchers from the Indian Institute of Technology Delhi and Princeton University published a paper on arXiv titled "GEO: Generative Engine Optimization".

This was the first formal academic definition of this concept.

From the paper's publication to the 315 exposure, just over two years. In between, it went through gray-market operations, financing, concept-stock surges, a chairman cashing out, and AI platforms themselves stepping in to sell ads...

The path that took SEO twenty years, GEO completed in two.

The difference is, back then it took people years to learn not to fully trust search engine results; now AI is still enjoying its trust dividend, and most people haven't yet realized that AI's answers can also be bought.

However, this trust dividend might not last long. Next time you ask AI what's worth buying, remember to think for an extra second:

The answer can be free, but the brain cannot be outsourced.


Twitter: https://twitter.com/BitpushNewsCN

BitPush TG Discussion Group: https://t.me/BitPushCommunity

BitPush TG Subscription: https://t.me/bitpush

Original link: https://www.bitpush.news/articles/7620096

Related Questions

Q: What is Generative Engine Optimization (GEO) as described in the article?

A: Generative Engine Optimization (GEO) is a practice where brands pay to have AI systems prioritize and recommend their products or services. It involves flooding the internet with promotional content that AI models scrape and treat as authentic information, influencing AI-generated recommendations to users.

Q: How did the CCTV 315 exposure demonstrate the effectiveness of GEO manipulation?

A: CCTV journalists used a software called "Liqing GEO" to create a fictional smart wristband with absurd selling points like "quantum entanglement sensing" and "black hole-level battery life." The software generated promotional articles and posted them online. Within two hours, mainstream AI models in China recommended the non-existent product when queried.

Q: What historical precedent does the article draw between GEO and earlier internet practices?

A: The article compares GEO to Search Engine Optimization (SEO), particularly highlighting how Putian-affiliated (莆田系) hospitals spent billions on Baidu's paid rankings to appear alongside legitimate hospitals in search results, a practice that continued even after regulations required labeling paid results as "ads."

Q: How did the GEO concept impact the stock market, specifically for companies like BlueFocus?

A: The GEO concept became a stock market trend after BlueFocus invested in a GEO company. This led to a surge in stock prices, with BlueFocus's stock rising 132% in nine trading days. However, companies later issued risk warnings, clarifying that GEO contributed little to actual revenue, and BlueFocus's chairman announced a significant stock sell-off during the peak.

Q: How does OpenAI's approach to advertising in ChatGPT relate to the GEO practices exposed by CCTV?

A: OpenAI introduced sponsored ads in ChatGPT's responses for free users, labeled as "Sponsored." While OpenAI claims ads do not influence the AI's answers, the article draws a parallel to GEO practices, suggesting that both involve monetizing AI recommendations: one through unauthorized "poisoning" of data, the other through platform-sanctioned commercialization.

