From "Manual Rules" to "AI Mind Reading": X's New Algorithm Reshapes the Information Flow, More Accurate and More Dangerous

Bitpush · Published 2026-01-20 · Updated 2026-01-20

Summary

Elon Musk's X (formerly Twitter) has transitioned from a recommendation system based on "manually stacked rules and heuristic algorithms" to one that relies entirely on a large AI model to predict user preferences. The new algorithm's "For You" timeline mixes content from accounts a user follows with posts from across the platform that the AI believes the user will like. The process begins by building a user profile based on historical interactions (likes, retweets, dwell time) and user features (following list, preferences). The system then gathers candidate posts from two sources: the user's direct network ("Thunder") and a broader network of potentially interesting content from strangers ("Phoenix"). After data hydration and an initial filtering step to remove duplicates, old posts, or content from blacklisted authors, the core scoring process begins. A Transformer model (Phoenix Grok) predicts the probability of a user taking various positive actions (like, retweet, reply, click) or negative ones (block, mute, report) on each post. A final score is calculated by weighting these probabilities. An Author Diversity Scorer is then applied to reduce the visibility of multiple posts from the same author in a single batch. The highest-scoring posts undergo a final filter to remove policy-violating content and remove duplicates from the same thread before being sorted into the user's feed. The shift represents a move from "telling the machine what to do" to "letting the machine learn how to do it itself."

Written by: KarenZ, Foresight News

Original title: Plain Language Breakdown of X's New Recommendation Algorithm: From "Data Fishing" to "Scoring"


Has Musk changed X's recommendation system from "manually stacked rules and mostly heuristic algorithms" to "relying purely on large AI models to guess what you like"?

On January 20, X (formerly Twitter) officially disclosed its new recommendation algorithm, the logic behind the "For You" timeline on the home page.

Simply put, the current algorithm mixes "content posted by people you follow" with "content from across the platform that might suit your taste," then ranks it by its predicted appeal to you, based on your previous actions on X such as likes and replies. After two rounds of filtering, the result becomes the recommended feed you see.

Below is the core logic translated into plain language:

Building a Profile

The system first collects the user's contextual information to build a "profile" for subsequent recommendations:

  • User behavior sequence: Historical interaction records (likes, retweets, dwell time, etc.).

  • User features: Follow list, personal preference settings, etc.
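X has not published its internal schema, but as a rough mental model, the two kinds of signal above could be held in something like the following (all class and field names are hypothetical):

```python
from dataclasses import dataclass, field

@dataclass
class UserProfile:
    """Toy user profile; field names are illustrative, not X's real schema."""
    behavior_sequence: list = field(default_factory=list)  # (action, post_id, dwell_s)
    following: set = field(default_factory=set)            # accounts the user follows
    preferences: dict = field(default_factory=dict)        # explicit user settings

    def record(self, action, post_id, dwell_s=0.0):
        """Append one interaction to the behavior history."""
        self.behavior_sequence.append((action, post_id, dwell_s))

profile = UserProfile(following={"alice", "bob"})
profile.record("like", "post-1")
profile.record("retweet", "post-2", dwell_s=4.2)
```

The behavior sequence is what the Transformer model later consumes; the static features (follows, preferences) condition the candidate sourcing described next.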

Where does the content come from?

Every time you refresh the "For You" timeline, the algorithm fetches content from the following two sources:

  • Inner Circle (Thunder): Tweets from people you follow.

  • Outer Circle (Phoenix): Posts from authors you don't follow, which the AI fishes out of the platform-wide pool because, based on your taste, it predicts you might be interested in them.

These two piles of content are mixed together to form the candidate tweets.
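The merging step can be sketched in a few lines. This is a toy version under my own assumptions; the real pipeline fetches both sources in parallel and at far larger scale:

```python
def gather_candidates(thunder_posts, phoenix_posts):
    """Merge in-network (Thunder) and out-of-network (Phoenix) candidates,
    dropping posts that appear in both sources. Illustrative only."""
    seen, merged = set(), []
    for post in thunder_posts + phoenix_posts:
        if post["id"] not in seen:
            seen.add(post["id"])
            merged.append(post)
    return merged

thunder = [{"id": "t1", "author": "alice"}, {"id": "t2", "author": "bob"}]
phoenix = [{"id": "t2", "author": "bob"}, {"id": "p1", "author": "carol"}]
candidates = gather_candidates(thunder, phoenix)  # t1, t2, p1
```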

Data Completion and Preliminary Filtering

After fishing up thousands of posts, the system pulls each post's complete metadata (author information, media files, core text), a process called hydration. It then performs a quick cleaning round, eliminating duplicate content, stale posts, the user's own posts, content from blocked authors, and content containing muted keywords.

This step is to save computing resources and prevent invalid content from entering the core scoring phase.
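A minimal sketch of that cleaning pass, assuming a simple dict-per-post representation (the real filter also handles post age and other checks not shown here):

```python
def preliminary_filter(posts, user_id, blocked_authors, muted_keywords):
    """Drop the user's own posts, blocked authors, muted keywords,
    and duplicate post IDs. Illustrative only."""
    seen, kept = set(), []
    for post in posts:
        if post["id"] in seen:
            continue  # duplicate
        if post["author"] == user_id or post["author"] in blocked_authors:
            continue  # own post or blocked author
        if any(kw in post["text"].lower() for kw in muted_keywords):
            continue  # muted keyword
        seen.add(post["id"])
        kept.append(post)
    return kept

posts = [
    {"id": "1", "author": "me", "text": "my own post"},
    {"id": "2", "author": "spammer", "text": "hot deal"},
    {"id": "3", "author": "alice", "text": "nice sunset"},
    {"id": "3", "author": "alice", "text": "nice sunset"},  # duplicate
]
kept = preliminary_filter(posts, "me", {"spammer"}, {"crypto"})
```

Everything that survives this cheap pass moves on to the expensive model-scoring stage.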

How is scoring done?

This is the most crucial part. The Transformer model (Phoenix Grok) scrutinizes each candidate post that survives the preliminary filter and calculates the probability of you performing various actions on it. It is a game of adding and subtracting points:

Plus points (Positive feedback): The AI thinks you are likely to like, retweet, reply, click on the image, or click to view the profile.

Minus points (Negative feedback): The AI thinks you are likely to block or mute the author, or report the post.

Final Score = (Like probability × weight) + (Reply probability × weight) – (Block probability × weight)...
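The formula above is just a weighted sum over predicted probabilities. A sketch, with entirely made-up weight values (X has not published the real ones):

```python
def final_score(probs, weights):
    """Weighted sum of predicted action probabilities. Positive actions
    carry positive weights; block/mute/report carry negative weights.
    All numbers below are illustrative."""
    return sum(weights.get(action, 0.0) * p for action, p in probs.items())

weights = {"like": 1.0, "retweet": 1.5, "reply": 2.0, "block": -10.0, "report": -15.0}
probs = {"like": 0.30, "retweet": 0.10, "reply": 0.05, "block": 0.01}
score = final_score(probs, weights)
# 0.30*1.0 + 0.10*1.5 + 0.05*2.0 - 0.01*10.0 = 0.45
```

Note how heavily a small predicted block probability can drag a post down: a 1% block chance cancels a 10% like chance at these (hypothetical) weights.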

It is worth noting that in the new recommendation algorithm, the Author Diversity Scorer usually intervenes after the AI calculates the final score. When it detects multiple pieces of content from the same author in a batch of candidate posts, this tool automatically "downgrades" the score of that author's subsequent posts, making the authors you see more diverse.
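One way to implement such a diversity pass is to decay each repeat appearance of an author by a constant factor. This is a guess at the mechanism, with an arbitrary decay value; X has not disclosed the actual formula:

```python
def apply_author_diversity(scored_posts, decay=0.5):
    """Down-weight repeat appearances of the same author within a batch.

    scored_posts: list of (score, post) pairs sorted high-to-low.
    The decay factor is illustrative, not X's real value."""
    seen, adjusted = {}, []
    for score, post in scored_posts:
        n = seen.get(post["author"], 0)          # earlier posts by this author
        adjusted.append((score * decay ** n, post))
        seen[post["author"]] = n + 1
    return sorted(adjusted, key=lambda sp: sp[0], reverse=True)

batch = [(0.9, {"id": "a1", "author": "alice"}),
         (0.8, {"id": "a2", "author": "alice"}),
         (0.5, {"id": "b1", "author": "bob"})]
reranked = apply_author_diversity(batch)
# alice's second post drops from 0.8 to 0.4, falling below bob's 0.5
```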

Finally, sort by score and pick the batch of posts with the highest scores.

Secondary Filtering

The system re-checks the top-scoring posts, filters out violations (such as spam, violent content), deduplicates multiple branches of the same thread, and finally arranges them in order from highest to lowest score, becoming the information flow you see.
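The final pass described above can be sketched like this, with `is_violation` standing in for X's real trust-and-safety checks (which are not public):

```python
def secondary_filter(scored_posts, is_violation):
    """Drop policy-violating posts, keep only the highest-scoring post
    per thread, and return the rest sorted high-to-low. Illustrative."""
    best = {}
    for score, post in scored_posts:
        if is_violation(post):
            continue  # spam, violent content, etc.
        tid = post.get("thread_id", post["id"])   # standalone posts are their own thread
        if tid not in best or score > best[tid][0]:
            best[tid] = (score, post)
    return sorted(best.values(), key=lambda sp: sp[0], reverse=True)

ranked = secondary_filter(
    [(0.9, {"id": "1", "thread_id": "t", "spam": False}),
     (0.7, {"id": "2", "thread_id": "t", "spam": False}),   # same thread, lower score
     (0.8, {"id": "3", "spam": True})],                     # policy violation
    is_violation=lambda p: p["spam"],
)
# only post "1" survives
```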

Summary

X has removed all manually designed features and most heuristic algorithms from the recommendation system. The core advancement of the new algorithm lies in "letting the AI autonomously learn user preferences," achieving a leap from "telling the machine what to do" to "letting the machine learn how to do it itself."

First, recommendations are more accurate, and "multi-dimensional prediction" fits real needs better. The new algorithm relies on the Grok large model to predict various user behaviors—not only calculating "whether you will like/retweet" but also calculating "whether you will click the link to view," "how long you will stay," "whether you will follow the author," and even predicting "whether you will report/block." This refined judgment allows the recommended content to fit users' subconscious needs with unprecedented precision.

Second, the algorithm mechanism is relatively fairer and can, to some extent, break the curse of "big-account monopoly," giving new and small accounts more opportunities. The old heuristic algorithm had a fatal problem: big accounts, riding on historically high interaction volumes, could get high exposure no matter what they posted, while new accounts with high-quality content were buried for "lack of data accumulation." The candidate-isolation mechanism scores each post independently, regardless of whether other content in the same batch is a hit. At the same time, the Author Diversity Scorer curbs spamming by down-ranking subsequent posts from the same author within a batch.

For X the company: This is a cost-reducing and efficiency-increasing measure, using computing power to replace manpower, and using AI to improve retention. For users, we are dealing with a "super brain" that constantly tries to read our minds. The more it understands us, the more we rely on it. But precisely because it understands us too well, we will sink deeper into the "information cocoon" woven by the algorithm and become more easily targeted by emotionally charged content.


Twitter: https://twitter.com/BitpushNewsCN

Bitpush TG Discussion Group: https://t.me/BitPushCommunity

Bitpush TG Subscription: https://t.me/bitpush

Original link: https://www.bitpush.news/articles/7604412

Related Questions

Q: What is the core change in X's new recommendation algorithm compared to the old system?

A: The core change is shifting from "manually designed rules and mostly heuristic algorithms" to a system that "relies purely on large AI models to guess user preferences," allowing the AI to autonomously learn user preferences.

Q: From which two sources does the new algorithm gather candidate content for a user's "For You" timeline?

A: It gathers content from the "Thunder" circle (posts from people the user follows) and the "Phoenix" circle (posts from accounts the user doesn't follow but that the AI predicts they might be interested in).

Q: What is the purpose of the Author Diversity Scorer in the new algorithm?

A: The Author Diversity Scorer detects when multiple posts from the same author are in a batch of candidate content and automatically lowers the score of that author's subsequent posts to ensure the user sees a more diverse range of authors.

Q: How does the AI model determine the final score for a piece of content?

A: A Transformer model calculates the probability of the user performing various actions on the content. It adds points for predicted positive feedback (like, retweet, reply) and subtracts points for predicted negative feedback (block, mute, report), with each action weighted. The final score is the sum of these weighted probabilities.

Q: What are two main potential consequences for users mentioned in the article regarding the new algorithm?

A: The consequences are: 1) More accurate and personalized content that better fits the user's subconscious needs. 2) A deeper entrapment in an "information cocoon" and a higher likelihood of being precisely targeted by emotional content because the algorithm understands them so well.

Related Reading

Google and Amazon Simultaneously Invest Heavily in a Competitor: The Most Absurd Business Logic of the AI Era Is Becoming Reality

In a span of four days, Amazon announced an additional $25 billion investment, and Google pledged up to $40 billion—both direct competitors pouring over $65 billion into the same AI startup, Anthropic. Rather than a typical venture capital move, this signals the latest escalation in the cloud wars. The core of the deal is not equity but compute pre-orders: Anthropic must spend the majority of these funds on AWS and Google Cloud services and chips, effectively locking in massive future compute consumption. This reflects a shift in cloud market dynamics—enterprises now choose cloud providers based on which hosts the best AI models, not just price or stability. With OpenAI deeply tied to Microsoft, Anthropic’s Claude has become the only viable strategic asset for Google and Amazon to remain competitive. Anthropic’s annualized revenue has surged to $30 billion, and it is expanding into verticals like biotech, positioning itself as a cross-industry AI infrastructure layer. However, this funding comes with constraints: Anthropic’s independence is challenged as it balances two rival investors, its safety-first narrative faces pressure from regulatory scrutiny, and its path to IPO introduces new financial pressures. Globally, this accelerates a "tri-polar" closed-loop structure in AI infrastructure, with Microsoft-OpenAI, Google-Anthropic, and Amazon-Anthropic forming exclusive model-cloud alliances. In contrast, China’s landscape differs—investments like Alibaba and Tencent backing open-source model firm DeepSeek reflect a more decoupled approach, though closed-source models from major cloud providers still dominate. The $65 billion bet is ultimately about securing a seat at the table in an AI-defined future—where missing the model layer means losing the cloud war.

marsbit · 3 hours ago


Computing Power Constrained, Why Did DeepSeek-V4 Open Source?

DeepSeek-V4 has been released as a preview open-source model, featuring 1 million tokens of context length as a baseline capability—previously a premium feature locked behind enterprise paywalls by major overseas AI firms. The official announcement, however, openly acknowledges computational constraints, particularly limited service throughput for the high-end DeepSeek-V4-Pro version due to restricted high-end computing power. Rather than competing on pure scale, DeepSeek adopts a pragmatic approach that balances algorithmic innovation with hardware realities in China’s AI ecosystem. The V4-Pro model uses a highly sparse architecture with 1.6T total parameters but only activates 49B during inference. It performs strongly in agentic coding, knowledge-intensive tasks, and STEM reasoning, competing closely with top-tier closed models like Gemini Pro 3.1 and Claude Opus 4.6 in certain scenarios. A key strategic product is the Flash edition, with 284B total parameters but only 13B activated—making it cost-effective and accessible for mid- and low-tier hardware, including domestic AI chips from Huawei (Ascend), Cambricon, and Hygon. This design supports broader adoption across developers and SMEs while stimulating China's domestic semiconductor ecosystem. Despite facing talent outflow and intense competition in user traffic—with rivals like Doubao and Qianwen leading in monthly active users—DeepSeek has maintained technical momentum. The release also comes amid reports of a new funding round targeting a valuation exceeding $10 billion, potentially setting a new record in China’s LLM sector. Ultimately, DeepSeek-V4 represents a shift toward open yet realistic infrastructure development in the constrained compute landscape of Chinese AI, emphasizing engineering efficiency and domestic hardware compatibility over pure model scale.

marsbit · 3 hours ago

