Author: Nancy, PANews
Today's social media platforms may seem as lively as ever, but the sense of "human presence" is gradually fading. As a flood of AI-generated spam (AI slop) inundates major mainstream platforms, content filled with deception and clickbait runs rampant. More and more real users are losing their desire to share and are even beginning to flee.
In the face of this deluge of AI spam, algorithmic moderation alone has proven insufficient. Recently, top venture capital firm a16z proposed the concept of Staked Media, using real financial stakes to filter out AI noise, which has attracted significant market attention.
As AI Begins to Replicate Itself, the Internet Is Being Flooded with "Pre-Made Content"
"AI is starting to imitate AI."
Recently, Reddit moderators have been driven to despair battling a massive influx of AI-generated content. The moderators of r/AmItheAsshole, a subreddit with 24 million users, complain that over half of its content is now generated by AI.
In the first half of 2025 alone, Reddit removed over 40 million pieces of spam and fake content. This phenomenon has spread like a virus to platforms like Facebook, Instagram, X, YouTube, Xiaohongshu, and TikTok.
In an era where information seems to explode while genuine voices grow scarcer, AI-generated garbage content permeates the entire internet, quietly eroding people's minds. With the proliferation of generative tools like ChatGPT and Gemini, handcrafted content creation is giving way to an AI "assembly line factory."
According to a recent study by the search engine optimization company Graphite, the proportion of AI-generated articles has surged since ChatGPT's public release in late 2022, rising from about 10% that year to over 40% in 2024. As of May this year, this figure has climbed to 52%.
However, most of this AI-generated content resembles "pre-made meals": produced with fixed formulas and standardized processes, but lacking soul and dull to read. Moreover, today's AI is no longer clumsy; it can not only mimic human tones but even replicate emotions. From travel guides to emotional disputes, and even deliberately stoking social divisions for clicks, AI handles it all with ease.
More critically, when AI hallucinates, it confidently spouts nonsense, creating not only information garbage but also triggering a crisis of trust.
In the Age of AI Proliferation, Building Media Credibility with Real Money
Faced with the rampant spread of AI-generated garbage content across the internet, major platforms have struggled to govern it effectively, even with updated moderation mechanisms and AI assistance. In a16z crypto's flagship annual report, Robert Hackett proposed the concept of "Staked Media." (Related reading: a16z: 17 Exciting New Crypto Directions for 2026)
The report points out that traditional media models tout objectivity, but their drawbacks have long been apparent. The internet has given everyone a voice, and now more and more practitioners, experts, and builders are directly conveying their views to the public. Their perspectives reflect their own interests and stakes in the world. Ironically, the audience often respects them not "despite their having a stake," but "precisely because they have a stake."
The new development in this trend is not the rise of social media, but the "emergence of crypto tools" that allow people to make publicly verifiable commitments. As AI drastically reduces the cost and ease of generating vast amounts of content (able to generate content from any perspective, any identity, true or false), relying solely on human (or bot) statements is no longer sufficient to be convincing. Tokenized assets, programmable staking, prediction markets, and on-chain history provide a more solid foundation for trust: commentators can prove they practice what they preach (backing their views with capital); podcast hosts can lock tokens to prove they won't opportunistically change their stance or engage in pump-and-dump schemes; analysts can bind their predictions to publicly settled markets, creating an auditable record.
This is the early form of what is called "Staked Media": a form of media that not only embraces the concept of having a stake but also provides tangible proof. In this model, credibility comes not from pretending to be neutral, nor from baseless claims, but from publicly transparent, verifiable commitments of interest. Staked Media will not replace other media forms but will complement the current media ecosystem. It sends a new signal: no longer "trust me, I'm neutral," but "this is the risk I'm willing to take, this is how you can verify I'm telling the truth."
Robert Hackett predicts that this field will continue to grow, much like how 20th-century mass media adapted to the technology and incentives of the time (attracting mass audiences and advertisers) by superficially pursuing "objectivity" and "neutrality." Today, AI makes creating or forging any content effortless, while what is truly scarce is evidence. Creators who can make verifiable commitments and genuinely back up their claims will have the advantage.
Using Staking Mechanisms to Raise the Cost of Faking, Proposing a Dual Content Verification Mechanism
This innovative idea has also resonated with crypto practitioners, who have offered their own refinements.
Crypto analyst Chen Jian noted that from major outlets to self-media, fake news emerges endlessly, with stories frequently reversed after publication. The root cause is that faking is cheap and the payoff is high. If each information disseminator is viewed as a node, why not apply the economic game theory of blockchain Proof of Stake (PoS) to the problem? He suggests, for example, requiring each node to stake funds before expressing an opinion: the more staked, the higher the trust level. Others can gather evidence to challenge a claim; if the challenge succeeds, the system slashes the staked funds and rewards the challenger. This process, of course, raises privacy and efficiency issues. Current solutions such as Swarm Network combine ZK and AI, protecting participant privacy while using multi-model data analysis to assist verification, similar to Grok's fact-checking function on Twitter.
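Chen Jian's stake-and-challenge game can be sketched in a few lines of Python. Everything below (the class names, the stake-equals-trust rule, the 50/50 reward split) is a hypothetical illustration of the idea, not an implementation of any real protocol:

```python
from dataclasses import dataclass

@dataclass
class Claim:
    author: str
    text: str
    stake: float          # funds the author locks behind the claim
    challenged: bool = False

class StakeRegistry:
    """Toy model of a PoS-style trust game for information nodes."""

    def __init__(self, challenger_reward_ratio: float = 0.5):
        self.claims: list[Claim] = []
        self.balances: dict[str, float] = {}
        self.reward_ratio = challenger_reward_ratio

    def publish(self, author: str, text: str, stake: float) -> Claim:
        claim = Claim(author, text, stake)
        self.claims.append(claim)
        return claim

    def trust_score(self, claim: Claim) -> float:
        # "The more staked, the higher the trust level" -- here the
        # score is simply the surviving stake itself.
        return 0.0 if claim.challenged else claim.stake

    def challenge(self, claim: Claim, challenger: str, evidence_valid: bool) -> None:
        # If the evidence holds up, slash the stake and pay the challenger.
        if evidence_valid:
            claim.challenged = True
            reward = claim.stake * self.reward_ratio
            self.balances[challenger] = self.balances.get(challenger, 0.0) + reward
            claim.stake = 0.0
```

A successful challenge both destroys the claim's trust score and transfers part of the slashed stake to the challenger, which is the incentive that makes evidence-gathering worthwhile in this model.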
Crypto KOL Lanhu also believes that through cryptographic technologies like zero-knowledge proofs (zk), media or individuals can prove their credibility online, akin to leaving a "signed pledge" on the chain that cannot be tampered with. But a pledge alone is not enough; it also requires "staking" certain assets as collateral, such as ETH, USDC, or other crypto tokens.
The logic of the staking mechanism is straightforward: if published content is proven fake, the staked assets are slashed; if the content is true and reliable, the stake is returned after a set period, possibly with additional rewards (such as tokens issued by the staked media outlet or a share of funds slashed from fakers). This mechanism creates an environment that rewards truth-telling. For media, staking does raise capital costs, but what it buys is genuine audience trust, which is especially valuable in an era of rampant fake news.
For example, a YouTuber releasing a video recommending a product would leave a "pledge" on the Ethereum chain and stake ETH or USDC. If the video proves inaccurate, the staked funds are confiscated, so viewers can trust its authenticity. A blogger recommending a phone might stake $100 worth of ETH and declare: "If this phone's beauty-filter effect does not meet expectations, I will compensate." Seeing the staked funds, viewers naturally find the claim more reliable. If the content turns out to be AI-forged, the blogger loses the stake.
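The publish-stake-settle lifecycle described above can be written as a minimal sketch. The lock period and reward rate are illustrative assumptions; in the article's model the reward could come from media tokens or from funds slashed from fakers:

```python
class StakedPost:
    """Hypothetical lifecycle of one piece of staked content."""

    LOCK_PERIOD_DAYS = 30   # assumed holding period before settlement
    REWARD_RATE = 0.05      # assumed bonus when content is verified true

    def __init__(self, author: str, stake_usd: float):
        self.author = author
        self.stake = stake_usd
        self.settled = False
        self.payout = 0.0

    def settle(self, verified_true: bool) -> float:
        """Compute the author's payout once the lock period elapses."""
        if self.settled:
            raise RuntimeError("post already settled")
        self.settled = True
        if verified_true:
            # Stake returned plus a reward.
            self.payout = self.stake * (1 + self.REWARD_RATE)
        else:
            # Proven fake: the entire stake is slashed.
            self.payout = 0.0
        return self.payout
```

In the blogger example, a truthful $100 stake would come back as $105 under these assumed numbers, while an AI-forged video would forfeit the full $100.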
Regarding how to judge content authenticity, Lanhu suggests a dual verification mechanism of "community + algorithm." On the community side, users with voting rights (which require staking crypto assets) vote on-chain; if the vote share exceeds a certain threshold (e.g., 60%), the content is deemed fake. Algorithmic assistance: data analysis helps verify the voting results. Arbitration mechanism: if the content publisher disputes the ruling, they can appeal to an expert committee; voters found to be maliciously manipulating the outcome have their assets slashed, while voting participants and committee members receive rewards sourced from slashed funds and media tokens. Additionally, content creators can use zero-knowledge proof technology to generate proof of content authenticity at the source, such as the genuine origin of a video.
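The community half of this dual mechanism can be sketched as a stake-weighted tally. The 60% threshold is from the text; weighting votes by staked amount is an assumption (the article only says voting rights require staked assets):

```python
def tally(votes: dict[str, tuple[float, bool]], threshold: float = 0.60) -> str:
    """Stake-weighted on-chain vote on whether content is fake.

    `votes` maps voter -> (staked_amount, voted_fake).
    Returns the community verdict; a dissatisfied publisher could still
    escalate to the expert-committee arbitration described in the text.
    """
    total = sum(stake for stake, _ in votes.values())
    if total == 0:
        return "no quorum"
    fake_weight = sum(stake for stake, voted_fake in votes.values() if voted_fake)
    return "fake" if fake_weight / total >= threshold else "not proven fake"
```

Weighting by stake means a voter risks more of their own assets to swing the result, which is what makes the later "slash malicious voters" rule bite.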
For well-funded actors attempting to game the staking mechanism, Lanhu suggests raising the long-term cost of faking, not just financially but also in terms of time, historical record, reputation, and legal liability. For example, slashed accounts are flagged and must stake more to publish future content; accounts slashed repeatedly see the credibility of their content plummet; severe cases may even face legal action.






