World Models Shift from Prediction to Planning: HWM and the Challenge of Long-Horizon Control

marsbit · Published on 2026-04-17 · Last updated on 2026-04-17

Abstract

World models have evolved from focusing on representation learning and future prediction to addressing long-horizon planning challenges. While models like V-JEPA 2 demonstrate strong predictive capabilities using large-scale video pre-training, they struggle with multi-stage control tasks due to error accumulation and exponential growth of the action search space. HWM (Hierarchical World Model) introduces a two-level planning structure: a high-level planner outlines coarse subgoals over longer time horizons, while a low-level executor handles short-term actions. This decomposition reduces planning complexity and error propagation. In experiments, HWM achieved a 70% success rate on a real-world robotic task where a flat model failed entirely. Alongside V-JEPA 2 (focused on representation) and WAV (World Action Verifier, focused on self-correction), HWM's hierarchical planning marks a shift from pure world modeling to integrated systems capable of prediction, planning, and verification, a prerequisite for deploying world models in real-world agents and long-horizon tasks.

Over the past year, world model research has centered on representation learning and future prediction: models first understand the world, then internally simulate future states. This approach has already produced a number of representative results. V-JEPA 2 (Video Joint Embedding Predictive Architecture 2, a video world model suite released by Meta in 2025) was pre-trained on over 1 million hours of internet video, combined with a small amount of robot interaction data, and demonstrated the potential of world models for understanding, prediction, and zero-shot robot planning.

However, a model's ability to predict does not equate to its ability to handle long-horizon tasks. When faced with multi-stage control, systems typically encounter two challenges. One is that prediction errors accumulate over long rollouts (multi-step simulations), causing the entire path to increasingly deviate from the goal. The other is that the action search space expands rapidly as the planning horizon increases, leading to continuously rising planning costs. HWM does not rewrite the underlying learning approach of world models but instead adds a hierarchical planning structure on top of existing action-conditioned world models, enabling the system to first organize stage paths and then handle local actions.
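The second challenge, search-space growth, can be made concrete with a toy calculation. The numbers below are illustrative only (a discretized action space with branching factor 4 over a 12-step horizon, not figures from the paper); they show why splitting one long horizon into short segments shrinks the search so sharply.

```python
def flat_search_size(branching: int, horizon: int) -> int:
    """Number of action sequences a flat planner must consider:
    one choice among `branching` actions at each of `horizon` steps."""
    return branching ** horizon

def hierarchical_search_size(branching: int, horizon: int, segments: int) -> int:
    """Rough cost after splitting the horizon into `segments` subgoals:
    the high level picks one coarse choice per segment, and the low
    level only searches within each short segment."""
    seg_len = horizon // segments
    high_level = branching ** segments            # coarse subgoal choices
    low_level = segments * branching ** seg_len   # short local searches
    return high_level + low_level

# With 4 candidate actions over a 12-step task:
flat = flat_search_size(4, 12)             # 4**12 = 16,777,216 sequences
hier = hierarchical_search_size(4, 12, 3)  # 4**3 + 3 * 4**4 = 832
```

The additive structure replaces a multiplicative one: each layer searches only its own short horizon, which is the core of the complexity reduction described above.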

From a technical perspective, V-JEPA 2 (https://ai.meta.com/research/vjepa/) leans more toward world representation and basic prediction, HWM leans more toward long-horizon planning, and WAV (World Action Verifier: Self-Improving World Models via Forward-Inverse Asymmetry, https://arxiv.org/abs/2604.01985) leans more toward the model's ability to identify and correct its own prediction distortions. These three lines of research are gradually converging. The focus of world model research has shifted from merely predicting the future to transforming predictive capabilities into executable, correctable, and verifiable system capabilities.

I. Why Long-Horizon Control Remains a Bottleneck for World Models

The difficulties of long-horizon control become clearer when applied to robotic tasks. Take robotic arm manipulation as an example: picking up a cup and placing it in a drawer is not a single action but a sequence of continuous steps. The system must approach the object, adjust its posture, complete the grasp, move to the target location, and then handle the drawer and placement. As the chain lengthens, two problems arise simultaneously. One is that prediction errors accumulate along the rollout, and the other is that the action search space expands rapidly.

What the system often lacks is not local predictive ability but the capacity to organize distant goals into stage paths. Many actions may appear to deviate from the goal locally but are actually intermediate steps required to achieve it. For example, raising the arm before grasping or moving back slightly and adjusting the angle before opening a drawer.

In demonstration tasks, world models can already produce coherent predictions. Once they enter real control scenarios, however, performance degrades. The pressure comes not only from the representation itself but also from the immaturity of the planning layer.

II. How HWM Restructures the Planning Process

HWM splits the originally single-layer planning process into two layers. The upper layer is responsible for stage direction at a longer time scale, while the lower layer handles local execution at a shorter time scale. The model plans at two different temporal rhythms simultaneously, rather than at a single pace.

When handling long tasks, single-layer methods typically need to search the entire action chain directly in the underlying action space. The longer the task, the higher the search cost, and the more likely prediction errors are to diffuse along multi-step rollouts. After HWM's decomposition, the high layer only handles route selection at a longer time scale, and the low layer only handles the execution of the current segment. The entire long task is broken into multiple shorter segments, reducing planning complexity.
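The control loop this decomposition implies can be sketched in a few lines. The setup below (a 1-D line world, the function names, and the fixed segment length) is my own minimal illustration, not HWM's API: the high level proposes the next subgoal a segment ahead, and the low level only ever executes toward the current subgoal.

```python
def high_level_plan(state: int, goal: int, segment_len: int) -> int:
    """Coarse planner: propose a subgoal one segment ahead,
    clipped so it never overshoots the final goal."""
    if goal > state:
        return min(state + segment_len, goal)
    return max(state - segment_len, goal)

def low_level_execute(state: int, subgoal: int):
    """Local executor: unit steps toward the current subgoal only."""
    trajectory = []
    while state != subgoal:
        state += 1 if subgoal > state else -1
        trajectory.append(state)
    return state, trajectory

def hierarchical_control(start: int, goal: int, segment_len: int = 3):
    """Alternate coarse subgoal selection with short local execution."""
    state, path = start, [start]
    while state != goal:
        subgoal = high_level_plan(state, goal, segment_len)
        state, segment = low_level_execute(state, subgoal)
        path.extend(segment)
    return path

# A 10-step task becomes four short segments of at most 3 steps each.
path = hierarchical_control(0, 10)
```

Note that the low level never sees the final goal, only the current subgoal; that locality is what keeps each sub-search short.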

Another key design is that high-level actions are not simply the difference between two states but use an encoder to compress a sequence of low-level actions into a higher-level action representation. For long tasks, the key is not just the difference between the start and end points but also how the intermediate steps are organized. If the high layer only looks at displacement differences, it may lose path information in the action chain.
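The point about preserving path information can be illustrated with a toy encoder. The recurrent fold and random weights below are placeholders, not HWM's actual architecture; they only show the interface, a variable-length sequence of low-level actions mapped to one fixed-size high-level action code, and why such a code can distinguish orderings that a net displacement cannot.

```python
import numpy as np

rng = np.random.default_rng(0)
W_in = rng.normal(size=(8, 2))   # low-level action dim (2) -> hidden (8)
W_h = rng.normal(size=(8, 8))    # hidden-to-hidden recurrence

def encode_action_sequence(actions: np.ndarray) -> np.ndarray:
    """Fold a (T, 2) sequence of low-level actions into one 8-dim
    code; the recurrence makes the code depend on step ordering."""
    h = np.zeros(8)
    for a in actions:
        h = np.tanh(W_in @ a + W_h @ h)
    return h

seq = rng.normal(size=(5, 2))                 # five 2-D low-level actions
code = encode_action_sequence(seq)            # one high-level action code
reversed_code = encode_action_sequence(seq[::-1])
# The two codes differ even though the summed displacement is identical:
# order matters, which a start/end state difference would discard.
```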

HWM embodies a hierarchical task organization approach. Faced with a multi-stage task, the system no longer unfolds all actions at once but first forms a coarse stage path and then executes and corrects it segment by segment. Once this hierarchical relationship is incorporated into the world model, predictive capabilities begin to transform more stably into planning capabilities.

III. From 0% to 70%: What the Experimental Results Indicate

In the real-world grasp-and-place task set up in the paper, the system was given only the final goal condition without manually decomposed intermediate goals. Under these conditions, HWM achieved a success rate of 70%, while the single-layer world model had a 0% success rate. A long task that was nearly impossible to complete originally became highly achievable after introducing hierarchical planning.

The paper also tested simulation tasks such as object pushing and maze navigation. The results showed that hierarchical planning not only improved success rates but also reduced the computational cost of the planning phase. In some environments, the computational cost of the planning phase could be reduced to about a quarter of the original while maintaining higher or comparable success rates.

IV. From V-JEPA to HWM to WAV

V-JEPA 2 represents the world representation approach. V-JEPA 2 used over 1 million hours of internet video for pre-training, combined with less than 62 hours of robot video for post-training (targeted training after pre-training), resulting in a latent action-conditioned world model (a world model that predicts in an abstract representation space incorporating action information) usable for understanding, predicting, and planning in the physical world. It demonstrates that models can acquire world representations through large-scale observation and transfer these representations to robot planning.
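The planning interface such a latent action-conditioned model exposes can be sketched as rollout-and-score. Everything below is a toy stand-in: the random matrices play the role of a learned latent predictor, and the sampling-based scoring is a generic shooting method, not V-JEPA 2's actual planner.

```python
import numpy as np

rng = np.random.default_rng(1)
W_z = rng.normal(size=(16, 16)) * 0.1   # placeholder latent dynamics
W_a = rng.normal(size=(16, 4)) * 0.1    # placeholder action conditioning

def predict(z: np.ndarray, a: np.ndarray) -> np.ndarray:
    """One latent step: next embedding from current embedding + action."""
    return np.tanh(W_z @ z + W_a @ a)

def score_plan(z0: np.ndarray, actions: np.ndarray, z_goal: np.ndarray) -> float:
    """Roll a candidate action sequence forward in latent space and
    score it by (negative) final distance to the goal embedding."""
    z = z0
    for a in actions:
        z = predict(z, a)
    return -float(np.linalg.norm(z - z_goal))

# Planning without task-specific training: sample candidate action
# sequences, roll each forward in the latent space, keep the best.
z0, z_goal = rng.normal(size=16), rng.normal(size=16)
candidates = rng.normal(size=(64, 5, 4))   # 64 plans, 5 steps, 4-D actions
best = max(candidates, key=lambda seq: score_plan(z0, seq, z_goal))
```

This is also where the long-horizon weakness shows up: every candidate must be rolled forward step by step, so errors compound and the candidate pool needed grows with the horizon, which is exactly the gap HWM targets.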

HWM is the next step. The model already possesses world representation and basic predictive capabilities, but once multi-stage control is involved, the problems of error accumulation and search space expansion erupt. HWM does not change the underlying representation learning approach but adds a multi-timescale planning structure on top of existing action-conditioned world models. It addresses how the model organizes distant goals into a set of intermediate steps and then advances segment by segment.

WAV further focuses on verification capabilities. For world models to enter policy optimization and deployment scenarios, they cannot just predict; they must also identify areas where they are prone to distortion and make corrections accordingly. It focuses on how the model checks itself.
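One generic way to operationalize such self-checking, sketched here purely as an assumption since the article does not detail WAV's method, is a forward-inverse consistency probe: run the forward model, recover the action with an inverse model, and flag states where the round trip disagrees with the action actually taken.

```python
import numpy as np

def consistency_gap(forward, inverse, state, action) -> float:
    """Hypothetical self-check: large gaps between the action taken
    and the action recovered from the forward prediction mark regions
    where the world model's predictions are likely distorted."""
    predicted_next = forward(state, action)
    recovered_action = inverse(state, predicted_next)
    return float(np.linalg.norm(recovered_action - action))

# Toy linear models that agree perfectly, so the gap is zero.
forward = lambda s, a: s + np.eye(3) @ a
inverse = lambda s, s_next: s_next - s

gap = consistency_gap(forward, inverse, np.zeros(3), np.ones(3))
# gap == 0.0 for these consistent toy models; in practice the gap
# would be thresholded to decide where correction is needed.
```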

V-JEPA 2 leans toward world representation, HWM toward task planning, and WAV toward result verification. Although their focuses differ, their overall direction is consistent. The next phase of world models is no longer just internal prediction but the gradual integration of prediction, planning, and verification into a system capability.

V. From Internal Prediction to Executable Systems

Many past world model efforts were closer to improving the continuity of future state predictions or the stability of internal world representations. However, the current research focus is beginning to change. Systems must not only form judgments about the environment but also translate those judgments into actions and continue to adjust the next steps after results are obtained. To get closer to real deployment, it is necessary to control error propagation in long-horizon tasks, compress the search space, and reduce inference costs.

Such changes will also affect AI agents. Many agent systems can already handle short-chain tasks, such as calling tools, reading files, and executing multi-step commands. However, once tasks become long-chain, multi-stage, and require mid-course re-planning, performance declines. This is not fundamentally different from the difficulties in robotic control; both stem from insufficient high-level path organization capabilities, leading to a disconnect between local execution and overall goals.

The hierarchical approach provided by HWM—where the high level is responsible for paths and stage goals, the low level handles local actions and feedback processing, and result verification is layered on top—will continue to appear in more systems in the future. The next phase of world models will focus not only on predicting the future but on organizing prediction, execution, and correction into a viable path.

Related Questions

Q: What are the main challenges in long-horizon control for world models, as discussed in the article?

A: The main challenges are the accumulation of prediction errors during long rollout sequences, which causes the system to increasingly deviate from the goal, and the exponential expansion of the action search space as the planning horizon grows, leading to rising computational costs.

Q: How does HWM (Hierarchical World Model) address the problem of long-term task planning?

A: HWM restructures the planning process into two layers: a high-level layer that plans stage directions over longer time scales, and a low-level layer that handles local execution over shorter time scales. This hierarchical approach breaks long tasks into shorter segments, reducing planning complexity and error propagation.

Q: What key improvement did HWM demonstrate in experimental results for real-world tasks?

A: In a real-world grasping and placement task where only the final goal was provided without intermediate goals, HWM achieved a 70% success rate, compared to a 0% success rate for a single-layer world model. In some environments, it also reduced the computational cost of the planning phase to about a quarter of the original while maintaining comparable or higher success rates.

Q: What are the distinct focuses of V-JEPA 2, HWM, and WAV in world model research?

A: V-JEPA 2 focuses on world representation and foundational prediction using large-scale video pre-training. HWM emphasizes hierarchical task planning for long-horizon control. WAV (World Action Verifier) concentrates on self-verification, identifying and correcting prediction distortions to improve model reliability.

Q: How is the research focus of world models evolving beyond mere prediction?

A: The research is shifting from internal future prediction to building executable systems that integrate prediction, planning, and verification. This involves controlling error propagation, compressing search spaces, reducing inference costs, and organizing hierarchical structures for reliable long-term task execution.
