OpenAI Post-Training Engineer Weng Jiayi Proposes a New Paradigm Hypothesis for Agentic AI

marsbit · Published 2026-05-11 · Updated 2026-05-11

Summary

OpenAI engineer Weng Jiayi's "Heuristic Learning" experiments propose a new paradigm for Agentic AI, suggesting that intelligent agents can improve not just by training neural networks, but also by autonomously writing and refining code based on environmental feedback. In the experiment, a coding agent (powered by Codex) was tasked with developing and maintaining a programmatic strategy for the Atari game Breakout. Starting from a basic prompt, the agent iteratively wrote code, ran the game, analyzed logs and video replays to identify failures, and then modified the code. Through this engineering loop of "code-run-debug-update," it evolved a pure Python heuristic strategy that achieved a perfect score of 864 in Breakout and performed competitively with deep reinforcement learning (RL) algorithms in MuJoCo control tasks like Ant and HalfCheetah. This approach, termed Heuristic Learning (HL), contrasts with Deep RL. In HL, experience is captured in readable, modifiable code, tests, logs, and configurations—a software system—rather than being encoded solely into opaque neural network weights. This offers potential advantages in explainability, auditability for safety-critical applications, easier integration of regression tests to combat catastrophic forgetting, and more efficient sample use in early learning stages, as demonstrated in broader tests on 57 Atari games. However, the blog acknowledges clear limitations: programmatic strategies struggle with tasks requiring long-horizon planning, strict action sequences, and recovery from intermediate states, as the Montezuma's Revenge counterexample shows.

Over the past decade, the advancement of AI has primarily relied on one path: feeding more data and computing power into larger models, allowing experience to accumulate within neural network parameters. This path has led to the leap in large models after ChatGPT, but it has also left behind a persistent challenge: as models become increasingly powerful, the reasons behind their successes and failures often remain difficult to explain and correct.

Recent experiments by OpenAI engineer Weng Jiayi suggest another possibility: within a clear objective, a runnable environment, and a feedback loop, AI can improve not only by training models but also by "autonomously modifying code."

On May 8, 2026, Weng Jiayi systematically documented this set of experiments in his personal blog "Learning Beyond Gradients" and simultaneously made the code repository, CSV experiment logs, and video replays public. He has long focused on reinforcement learning and post-training infrastructure, participated in the initial launch of ChatGPT, and contributed to projects like GPT-4, GPT-4 Turbo, GPT-4o, o-series, and GPT-5. Before joining OpenAI, he earned his bachelor's degree from the Department of Computer Science at Tsinghua University and his master's degree from Carnegie Mellon University. He is also a main author of the open-source reinforcement learning library Tianshou and the high-performance parallel environment engine EnvPool.

Image generated by AI

He had Codex repeatedly write policy code, run environments, read logs, review replays, locate failures, then modify code, add tests, and continue evaluation. After multiple iterations, Codex "cultivated" a set of pure Python programmatic strategies: it achieved a theoretical perfect score of 864 points in Atari Breakout and also produced results in robot control simulation environments like MuJoCo Ant and HalfCheetah that were close to those of common deep reinforcement learning algorithms.

The truly significant aspect of these experiments lies in a core question: When the coding agent is sufficiently capable, must learning necessarily occur within neural network weights?

In this experimental setup, experience is written into code, tests, logs, and replays, becoming a software system that can be read, modified, reviewed, and audited. If this direction continues to hold, the next step for Agentic AI might not only be training larger models but also enabling models to participate in maintaining a continuously evolving engineering system.

01

From 387 Points to a Perfect Score: An Engineering Loop

Weng Jiayi wrote in his blog that the starting point for this experiment was actually an engineering need. While maintaining EnvPool in his spare time, he required a cheaper method than "running a neural network every time" to test whether the game environment was functioning correctly, as placing neural networks in CI was too expensive. The original question was: Could he write cheap, reproducible, heuristic rules that were clearly better than a random policy, to drive the environment to information-rich states?

He used Codex (base model gpt-5.4) to attempt writing a completely rule-based version. The initial prompt was very direct: "Write a strategy that can solve Breakout." The result was unsatisfactory. A low score itself provided no information—the action semantics could be wrong, state detection could be wrong, the evaluation process could be wrong, or the policy structure itself could be too weak.

Subsequently, Weng Jiayi changed the task format. He no longer asked Codex to simply deliver a policy.py file; instead, he required it to maintain a complete loop: probe actions and observations, write state detectors, write the policy, run complete episodes, record trials.jsonl and summary.csv, generate videos or curves, inspect failure modes, modify the policy, simplify code, and run regressions.
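The record-keeping half of that loop can be sketched in a few lines of Python. The file names `trials.jsonl` and `summary.csv` come from the blog's description; everything else here, including the toy state and the function names, is illustrative rather than code from the repository:

```python
import json
from pathlib import Path

def run_episode(policy, env_step, init_state, max_steps=1000):
    """Run one episode with a memoryless policy; return the total reward."""
    state, total, done, t = init_state, 0.0, False, 0
    while not done and t < max_steps:
        action = policy(state)
        state, reward, done = env_step(state, action)
        total += reward
        t += 1
    return total

def evaluate(policy, env_step, init_state, episodes, log_dir):
    """Record-keeping half of the loop: append per-episode results to
    trials.jsonl and rewrite a one-line summary.csv after each batch,
    so the agent can inspect its own history between iterations."""
    log_dir = Path(log_dir)
    log_dir.mkdir(parents=True, exist_ok=True)
    scores = []
    with (log_dir / "trials.jsonl").open("a") as f:
        for i in range(episodes):
            score = run_episode(policy, env_step, init_state)
            scores.append(score)
            f.write(json.dumps({"episode": i, "score": score}) + "\n")
    (log_dir / "summary.csv").write_text(
        "episodes,mean_score\n{},{:.2f}\n".format(len(scores), sum(scores) / len(scores))
    )
    return scores
```

The point of the structure is that failures leave artifacts: every episode lands in `trials.jsonl`, so "inspect failure modes" becomes reading a file rather than rerunning the world.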

The experimental log for Breakout clearly recorded this process. In the first round, Codex confirmed the action space and observation shape, identified the colors of the ball, paddle, and bricks from the RGB frames, and then used image labels to scan the 128-byte Atari RAM. The initial baseline scored only 99 points. After adding tunnel offset logic, the score increased to 387 points.

387 points was a deceptively high local optimum. The strategy could stably hit the ball, but the ball path was trapped in a periodic loop: no lives were lost, but no new bricks could be broken, and the score was stuck. If a human were writing the code, they might continue fine-tuning the "accuracy of hitting the ball." Codex watched the video and the last few dozen steps of the trajectory, and identified the problem as a lack of disturbance in the ball's path.

Image: Atari Breakout gameplay. The player controls the bottom paddle to bounce a ball, breaking layers of colored bricks above. Codex achieved the theoretical perfect score of 864 points in this game.

Codex then added a mechanism to "break the cycle": if no reward was received for a long time, periodically add an offset to the landing point prediction to knock the ball out of the local loop. The score jumped from 387 to 507. During further iterations, a new problem emerged: for fast low balls, conventional interception would cause the paddle to "over-lead" and drift away. Codex added a `fast_low_ball_lead_steps=3` parameter, and the score jumped from 507 to 839. The final improvement from 839 to 864 resembled maintaining an already complex system: trying deadband, serve offset, stuck offset, brick balance bias, lookahead steps; many directions were ineffective. The final useful change was a late-stage condition: "After the first wall of bricks is cleared, enable the stuck offset only when the ball is far from the paddle, and gradually release it when the ball is close."
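The cycle-breaking idea can be sketched as one small pure function. The threshold, period, and offset constants below are illustrative guesses, not the repository's actual values:

```python
def intercept_offset(steps_since_reward, base_target,
                     stuck_threshold=200, stuck_offset=4, period=50):
    """Cycle-breaking heuristic in the spirit of the blog's description:
    if no brick has been broken for `stuck_threshold` steps, periodically
    nudge the predicted landing point so the ball leaves its periodic
    orbit. Names and constants are illustrative."""
    if steps_since_reward < stuck_threshold:
        return base_target
    # Alternate the nudge direction on a fixed period to perturb the loop.
    phase = (steps_since_reward // period) % 2
    return base_target + (stuck_offset if phase == 0 else -stuck_offset)
```

Note what makes this an "engineering" fix: the mechanism is a named, testable branch that a later iteration can condition further, which is exactly what the final 839-to-864 change did.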

The final RAM default configuration stably output 864 / 864 / 864 points across three episodes, reaching the theoretical limit of Breakout. Codex then migrated the same geometric controller to a pure vision input version—without reading RAM, relying solely on RGB segmentation to identify the paddle, ball, and brick balance. The vision version initially scored 310 points, then 428 points, and reached 864 points after the seventh local episode, corresponding to 14,504 local policy environment steps.

Image: Sample efficiency curve of Codex on Breakout. The blue line is the version that reads game memory (RAM), and the red line is the vision-only version (Vision). The RAM version experienced several jumps: 99 → 387 → 507 → 839 → 864, finally reaching the perfect score for the first time at episode 81, with a cumulative 1.5 million environment steps; the Vision version, migrating the mature structure from the RAM version, reached 864 points with only 7 episodes and approximately 14,500 environment steps.

Weng Jiayi specifically noted that this should not be understood as "the vision input started from scratch and reached a perfect score using only 14.5K steps." The actual process was that Codex first discovered the geometric controller, cycle-breaking mechanism, and late-stage offset release in the RAM version. Once the structure was stable, the state reading layer was switched from RAM to RGB. The 14.5K steps represent the migration budget for the vision version.

02

Defining Heuristic Learning

Finding a name for this evolving "software policy" was more difficult than writing the first version of the policy. Weng Jiayi ultimately named this process Heuristic Learning (HL) and termed the object it maintains as a Heuristic System (HS).

According to his blog definition, HL is composed of program code. Like today's common deep reinforcement learning, it has a loop of state, action, feedback, and update. The difference is that the object being updated is the software structure, not neural network parameters; its feedback, digested by the coding agent, can come from environmental rewards, test cases, logs, videos, replays, or human feedback; its update does not use backpropagation, but rather the coding agent directly edits the policy, state detectors, tests, configurations, or memories.

It should be added that the concept of "using programs rather than neural networks as policies" is not Weng Jiayi's original creation. Academic discussions on Programmatic RL have been ongoing for years: the PROPEL framework proposed by Rice University and Caltech in 2019 researched reinforcement learning methods representing policies as short programs in a symbolic language; the 2021 LEAPS work further learned program embedding spaces, combining differentiable program policies with RL training; the HPRL (Hierarchical Programmatic Reinforcement Learning) presented at ICML 2023 allows a meta-policy to combine multiple programs; the LLM-GS framework from National Taiwan University and Microsoft in 2024 uses LLM's programming ability and commonsense reasoning to guide the search for programmatic RL policies.

The consensus from this research is that, compared to neural policies, programmatic policies possess better interpretability, formal verifiability, and generalization ability to unseen scenarios.

Weng Jiayi's substantive contribution this time lies in treating the coding agent as the engineering channel for maintaining the heuristic system. In the past, doing programmatic RL either relied on manually designed domain-specific languages or search algorithms within restricted program spaces; Weng Jiayi, however, uses Codex to integrate code, logs, tests, video replays, and parameter adjustments into the same agent workflow, drastically reducing the iteration cost of program policies at once. In other words, he is arguing for a new engineering path: when the coding agent is sufficiently capable, those heuristic strategies once deemed "too expensive to maintain" might become cost-effective again.

Weng Jiayi provided a comparison table in his blog to clearly illustrate the differences between HL and Deep RL: in terms of policy form, the former consists of rules, state machines, controllers, model predictive control (MPC), and macro actions composed into code, while the latter consists of neural network parameters; in terms of state form, the former uses explicit variables, detectors, and caches, while the latter uses network-readable observation vectors; in terms of feedback form, the former treats tests, logs, and replays as valid signals, while the latter primarily relies on fixed reward functions; in terms of memory form, the former can explicitly store trials, summaries, failure reasons, and version diffs, while the latter has essentially none in on-policy algorithms and relies on replay buffers in off-policy algorithms.

This comparison demonstrates that HL possesses some engineering attributes: the policy is interpretable and can be translated into natural language; sample efficiency is measured in units of "one effective code change," not slow gradient updates; old capabilities can become regression tests, fixed-seed replays, or golden cases; overfitting to training seeds or test loopholes can be constrained through simplification, regression checks, and multi-seed evaluation; old capabilities don't have to reside solely in weights but can also reside in rule sets and tests, which partly addresses the catastrophic forgetting problem that neural networks have long struggled to solve.

03

Bulk Validation on Atari57: Boundaries and Shortcomings

If focusing only on Breakout, the story could easily be simplified to "AI wrote a perfect strategy." But Weng Jiayi didn't stop there; he scaled the same Codex workflow in bulk to the full Atari57 suite.

The experimental design was quite rigorous. Each game was tested with two input methods: one directly reading game memory, and the other viewing only the screen. Each method was independently repeated three times. This produced a total of 342 "unattended" experimental trajectories: each Codex agent received the same prompt template, explored actions on its own, wrote code on its own, ran experiments on its own, and recorded results on its own, with no human providing hints. Constraints were strictly enforced: no training neural networks, no reading game source code, no exploiting any hidden information. All steps used for debugging and trial-and-error had to be counted in the total cost. This was to prevent Codex from cheating in any "peeking at the answers" way.

When measuring results, a metric called HNS (Human-Normalized Score) is commonly used—simply put, it standardizes the score of each game relative to "average human player performance = 1" for easy cross-game comparison.
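In code, the normalization is one line; this is the standard definition from the Atari benchmarking literature, with the per-game random and human reference scores supplied by the caller:

```python
def hns(score, random_score, human_score):
    """Human-Normalized Score: 0.0 means random play, 1.0 means the
    average human reference for that game."""
    return (score - random_score) / (human_score - random_score)
```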

Image: Sample efficiency comparison on the full Atari57 suite. The x-axis is environment steps (log scale), and the y-axis is HNS (Human-Normalized Score, where 1.0 indicates reaching average human player level). Codex's vision input version (red line) significantly outperforms the PPO baseline (blue/gray dashed lines) in early-stage efficiency, reaching 0.81 at 9.7 million steps, comparable to PPO's level around 10 million steps; Codex's memory input version (purple line) converges at 0.59.

Measured by this standard, Codex's early-stage efficiency appears quite impressive. With only 1 million environment steps consumed, Codex's median HNS for vision input had already reached 0.32, and for memory input, 0.26, significantly higher than that of classical reinforcement learning algorithms like PPO at the same stage. By 9.7 million steps, Codex's vision version reached 0.81, already close to PPO's level of approximately 0.88 to 0.92 at 10 million steps. If allowed to aggregate by selecting the better-performing input method for each game, Codex's median HNS was 0.83, OpenAI Baselines PPO2 was 0.80, and CleanRL EnvPool PPO was 0.98—essentially a tie.

However, Weng Jiayi himself calmly drew a boundary: this is only a comparison of environment interaction efficiency, without accounting for the costs of Codex reading logs, writing code, and watching videos. "Running fast" does not equal "low total cost," and the latter remains a black box for now.

More noteworthy is that Codex's performance across the 57 games was not uniform. In games with clear geometric structures like Breakout, Boxing, and Krull, both heuristic strategies and deep reinforcement learning could significantly surpass human levels; in games with clear rules like Asterix, Jamesbond, and Tennis, heuristic strategies were even stronger; but in fast-paced, complex-pattern games like Atlantis, VideoPinball, RoadRunner, and StarGunner, PPO still dominated.

The most cautionary counterexample is Montezuma's Revenge. This is a notorious "hard nut to crack" in reinforcement learning, where the protagonist needs to find keys, avoid enemies, and open doors in a complex underground labyrinth, with extremely sparse reward signals—a classic "long-term planning + failure recovery" challenge. Codex did score 400 points in this game, but examining the policy file it generated reveals that it's not a true "strategy" but a hardcoded sequence of 86 actions corresponding to 1,769 environment steps: more like memorizing a fixed route than learning to navigate a maze. Weng Jiayi specifically noted: "This is a boundary case and should not be understood as a generic Montezuma strategy."

Montezuma exposes the expressive limits of Heuristic Learning. Ordinary programmatic strategies are essentially reactive logic of "do this action when you see that state," struggling with tasks requiring strict action sequences, resuming plans from intermediate states, and long-horizon planning. Such tasks require not just more if-else statements but program structures closer to "macro-action combination + recoverable search state + long-term memory." This tells us one thing: even if the coding agent becomes very powerful, some problems cannot be contained by ordinary code.
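The difference between the Montezuma script and genuinely reactive logic is easy to see side by side. Both functions are illustrative toys, not code from the experiment:

```python
def make_scripted_policy(action_sequence):
    """Open-loop policy like the Montezuma case: replays a fixed list of
    actions and ignores the state entirely. Any deviation from the
    memorized route (a lost life, a moved enemy) leaves it helpless."""
    it = iter(action_sequence)
    return lambda state: next(it, 0)  # fall back to no-op when exhausted

def reactive_policy(state):
    """Closed-loop 'see this state, do that action' logic: no plan and
    no memory, which is why long-horizon tasks defeat this program shape."""
    return 1 if state.get("enemy_near") else 0
```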

04

If the Paradigm Holds, What Are the Industrial Implications?

Zooming out to an industrial perspective: if the Heuristic Learning path truly holds—meaning coding agents can stably maintain programmatic strategies that surpass handcrafted rules and approach RL baselines—where does its practical significance lie?

The first application point is robot control, especially in structurally stable scenarios. The framework Weng Jiayi outlined in his blog involves hierarchical division of labor: joint-level HL, limb-level HL, full-body balance HL, and task-level HL. Lower levels handle safety and low-latency control, middle levels handle gait and contact, and higher levels handle tasks and long-term memory; the coding agent doesn't need to "understand walking"—it's more like an update channel inserted into the system, sending failure videos, sensor streams, and simulation results back to the system, and rewriting feedback into code, parameters, protection rules, and memories.

Scenarios like warehouse AGVs, inspection robots, factory robotic arms, and standardized sorting, where the environment structure is relatively fixed and safety boundaries are clear—if core control strategies can be solidified into lightweight code, robots wouldn't need to run a large policy network for every action step. Deployment-end reliance on high-power GPU inference cards would decrease, with more load handled by traditional controllers and local program logic.

This doesn't mean robots don't need GPUs; perception, localization, mapping, and semantic understanding still rely on neural networks. What changes is the role of the GPU, shifting from "burning compute for end-to-end action decisions every second" to "playing a periodic role in perception, offline simulation, policy generation, and anomaly analysis."

The second application point is the auditability of safety-critical scenarios. The most troublesome engineering problem with neural policies is the inability to locate the cause after a failure. When a robotic arm suddenly fails at a certain angle, a vehicle misjudges in an edge case, or a medical robot acts abnormally in a rare posture, engineers cannot answer "which weight caused this error." Ultimately, they can only add data, retrain, run regression tests, and bet that the new model hasn't introduced new problems.

If the policy exists in code form, state variables, conditional branches, failure logs, and regression tests are all visible; a dangerous action can be hardcoded to be prohibited, a corner case can be written as a test, and an erroneous state transition can be individually patched. This doesn't make the system inherently safer, but it allows safety issues to enter normal software engineering workflows for the first time—they can be code-reviewed, intercepted by CI, and responded to by SRE on-call. In fields requiring regulation and liability division, like autonomous driving, industrial robotic arms, and medical robots, this auditability itself is of commercial value.
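Such a hardcoded prohibition is just a wrapper around the policy. A minimal sketch, where the function names and the safe-action convention are assumptions:

```python
def guard(policy, is_forbidden, safe_action=0):
    """Hardcoded safety wrapper (illustrative): any forbidden action is
    replaced by a safe default, and the override lives in reviewable
    code rather than in network weights."""
    def guarded(state):
        action = policy(state)
        return safe_action if is_forbidden(state, action) else action
    return guarded
```

Because the guard is an ordinary function, it can be code-reviewed, covered by a CI test, and diffed across versions, which is exactly the auditability argument above.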

The third application point is the engineering of continual learning and online learning. Weng Jiayi presented this as the main argument of the entire blog post. Catastrophic forgetting in neural networks is a structural problem: learning new things washes away old capabilities. HL also experiences forgetting, but in a more engineering form: a new rule fixes one failure mode but breaks an old scenario; a new memory repeatedly leads the agent in the wrong direction; a test range is too narrow, and the policy learns to exploit it; a patch modifies a shared interface, and old calling paths silently fail.

These problems don't disappear automatically, but they are issues that software engineering has dealt with for decades, with existing toolchains—regression testing, version diffs, fixed-seed replays, golden traces, and explicitly recorded failure directions.

A healthy HS must perform two operations simultaneously: absorbing new feedback and compressing historical patches. An HS that only grows without reduction will eventually become a "code ball of mud" no one dares to touch. In other words, HL transforms the mathematical problem of "how to update parameters" into the engineering problem of "how to maintain a software system that continuously absorbs feedback."

The latter is not necessarily easier, but it is closer to the existing boundaries of human capability.

The fourth application point is capability accumulation in Agent products. What current Agent products lack most are stable tool invocation, reliable execution chains, reusable failure experiences, and auditable task records. If HL's logic holds, an Agent's memories during execution would precipitate into code assets that can be reused across sessions, users, and tasks. It can directly interface with existing DevOps processes and also means that Agents from different companies and teams can share heuristics without needing to share models—something the neural network approach cannot achieve.

However, it must be emphasized: all four application points depend on further validation of the HL path on more complex tasks. Breakout and Ant are relatively clean environments. Real robots face changes in ground friction, lighting variations, actuator delays, and sensor noise—none of which have been systematically evaluated in public materials. The Montezuma counterexample has already shown that long-horizon tasks require program forms beyond ordinary if-else. How far this vision can go depends on the next phase of experiments.

05

Technical Debt Shifts from Weights to Code

Weng Jiayi's assessment in his blog is measured. He wrote that HL cannot accomplish everything neural networks can do; it is limited by what code can express, especially in complex perception and long-horizon generalization. With today's understanding, he cannot imagine an agent using pure Python code without any neural networks to solve ImageNet. The truly worthwhile question is how to combine neural networks with HL to jointly address Online Learning and Continual Learning.

The division of labor he proposes borrows from the System 1 / System 2 framework: specialized shallow neural networks take on part of System 1, responsible for fast perception, classification, and object state estimation; HL also takes on part of System 1, responsible for processing fresh data, rules, tests, replays, memories, safety boundaries, and local recovery; the LLM agent acts as System 2, providing feedback and improvement data to HL, and periodically extracting information from data generated by HL to update itself.

If deep learning over the past decade has proven that "experience can be compressed into weights," then the hypothesis Weng Jiayi proposes this time is another proposition: in the era of coding agents, experience might once again become readable, modifiable, testable software.

This article is from the WeChat public account "Tencent Technology," author: Xiao Jing, editor: Xu Qingyang

Related Questions

Q: What is Heuristic Learning (HL) as proposed by Weng Jiayi, and how does it differ from traditional deep reinforcement learning?

A: Heuristic Learning (HL) is a paradigm proposed by OpenAI engineer Weng Jiayi where an AI agent, like Codex, learns by iteratively writing, running, and modifying programmatic code for a policy within an explicit goal, runnable environment, and feedback loop. The core difference from traditional deep reinforcement learning (Deep RL) is that in HL, experience and improvements are encoded directly into readable, editable software components like code, tests, logs, and configurations. This is updated via direct code edits by the AI, rather than being learned through gradient descent and embedded as inscrutable weights in a neural network. This approach aims for better interpretability, auditability, and sample efficiency in terms of 'effective code changes'.

Q: What was the specific process and final result of the Atari Breakout experiment conducted by Weng Jiayi?

A: In the Atari Breakout experiment, Weng Jiayi tasked Codex not just with writing a policy, but with maintaining a full engineering loop: probing the environment, writing state detectors, running episodes, recording logs and videos, analyzing failures, and modifying the code. Starting from a baseline score of 99, the agent iteratively improved its strategy. Key breakthroughs included adding a mechanism to break cyclical ball patterns (raising the score from 387 to 507) and adjusting parameters for intercepting fast low balls (jumping from 507 to 839). The final policy, refined with late-stage conditionals, achieved a perfect score of 864 across three episodes, which is the theoretical maximum for the game. This was achieved using both game RAM data and later migrated to a vision-only (RGB) input version.

Q: What were the key findings and limitations when scaling the Heuristic Learning approach to the full Atari57 benchmark?

A: When scaled to the Atari57 benchmark, the Heuristic Learning approach showed promising early sample efficiency, with a median Human-Normalized Score (HNS) of 0.32 at 1 million steps for the vision-based version, outperforming PPO baselines at that stage. By 9.7 million steps, its performance (HNS ~0.81) was competitive with PPO. However, performance was uneven. It excelled in geometrically clear games (e.g., Breakout, Boxing) but struggled in fast-paced, complex games (e.g., Atlantis, RoadRunner). A key limitation was exposed in Montezuma's Revenge, where the agent merely memorized a fixed action sequence instead of learning a generalizable policy, highlighting the expressivity limits of simple programmatic strategies for tasks requiring long-term planning and state recovery.

Q: What are the potential industry applications and implications if the Heuristic Learning paradigm proves viable?

A: If proven viable, Heuristic Learning could have significant industry implications: 1) **Robotics Control**: For structured environments (e.g., warehouse AGVs, industrial arms), core control logic could be lightweight, auditable code, reducing reliance on heavy GPU inference for every action. 2) **Safety-Critical Systems**: The interpretability of code allows for auditing, debugging, and embedding safety rules directly, which is crucial for autonomous vehicles and medical robots. 3) **Continual Learning Engineering**: It transforms the 'catastrophic forgetting' problem into a software maintenance challenge, solvable with regression tests and version control. 4) **Agentic AI Products**: Agent experiences could be codified into reusable, sharable software assets integrated into DevOps workflows, enabling capability accumulation without sharing model weights.

Q: According to Weng Jiayi, how might Neural Networks and Heuristic Learning be combined in a future AI system architecture?

A: Weng Jiayi suggests a potential hybrid architecture borrowing from the System 1 / System 2 framework. Specialized, shallow neural networks would act as part of System 1, handling fast perception and state estimation. The Heuristic Learning system would also be part of System 1, responsible for rules, tests, memory, safety boundaries, and local recovery based on fresh data. A large language model (LLM) agent would serve as System 2, providing high-level feedback and improvement directions to the HL system, and periodically updating itself from the structured data and experiences generated by the HL process. This combines the perceptual strength of neural networks with the interpretability and engineering manageability of programmatic systems.

Lecturas Relacionadas

380,000 Apps Exposed, 2,000+ Apps Leaked Secrets: AI Programming Turns 'Intranet' into Public Internet

Israeli cybersecurity firm RedAccess uncovered a severe data exposure trend linked to "vibe coding" or AI-powered software development tools. Their research found approximately 38,000 publicly accessible web applications built with platforms like Lovable, Base44, Netlify, and Replit. Of these, an estimated 2,000 apps exposed sensitive corporate and personal data, including medical records, financial information, internal strategic documents, and customer chat logs. In some cases, access even granted administrative privileges. The core issue stems from default privacy settings that make applications public by default, combined with a lack of built-in security controls (like authentication) in the AI-generated code. This allows employees without security expertise—"citizen developers"—to easily create and deploy applications that bypass standard corporate security reviews. The exposed apps, often indexed by search engines, are trivially discoverable. While some platform providers (Replit, Lovable, Wix/Base44) argue that security configuration is the user's responsibility and question the validity of some findings, security researchers confirm the widespread reality of such exposures. This pattern, also noted in prior studies, highlights a critical security gap as AI democratizes app creation, potentially leading to massive, unintentional data leaks.

marsbitHace 38 min(s)

380,000 Apps Exposed, 2,000+ Apps Leaked Secrets: AI Programming Turns 'Intranet' into Public Internet

marsbitHace 38 min(s)

Attracting Global Capital, Asia's New 'Super Cycle' Is Unfolding

Investors are turning to Asia as the next frontier for global equity growth, with a new "super cycle" unfolding across the region. Driven by the AI revolution, Asian markets, particularly South Korea, have seen significant rallies. According to Morgan Stanley analysis, the underlying drivers of Asia's industrial cycle are shifting from traditional sectors like real estate and manufacturing to massive investments in AI infrastructure, energy security and transition, and supply chain resilience. Fixed asset investment in Asia is projected to grow from around $11 trillion in 2025 to $16 trillion by 2030, with a 7% annual growth rate from 2026-2030. The AI wave is a primary catalyst, driving immense capital expenditure for chips, servers, data centers, and power systems. Asia is central to this hardware supply chain. In China, AI investment is focused on building a full-system domestic capability, with the local AI chip market potentially reaching $86 billion by 2030. Beyond AI, China's export story is expanding from EVs and batteries to robotics. The country already captures about half of new global industrial robot demand and over 90% of humanoid robot shipments. This growth phase mirrors the early stages of China's EV export boom. Simultaneously, energy security investments, spurred by AI's massive power needs, are rising, with China benefiting from its leadership in solar, batteries, and EVs. Regional defense spending is also increasing structurally, supporting demand for advanced manufacturing. The main beneficiaries are China, South Korea, and Japan, positioned in core supply chain areas. However, risks remain, including potential overcapacity, profit margin pressures from competition, persistent technological restrictions, geopolitical friction, and workforce displacement due to AI-driven automation. Market volatility is also expected to increase as investor expectations diverge on the realization of these capital investment and export themes.


Funding Weekly Report | 14 Public Funding Events, Kalshi Completes $1B New Funding Round at $22B Valuation Led by Coatue Management

Weekly Funding Roundup: 14 Deals and $10.49B+ in Total Funding, Led by Kalshi's $1B Round Last week (5.4-5.10) saw 14 notable funding events in the global blockchain ecosystem, raising over $10.49 billion in total. Key highlights include Kalshi, a prediction market platform, securing a $1 billion round led by Coatue Management, reaching a $22 billion valuation. The platform now boasts ~2 million MAUs and $178B in annualized trading volume. In DeFi, regulated on-chain reinsurer OnRe raised $5 million in Series A funding, and Bitcoin-backed credit protocol Saturn Credit completed a $2 million seed round. For Infrastructure & Tools, OpenTrade raised $17 million to expand its stablecoin yield infrastructure, and RWA platform Balcony secured $12.7 million to deploy its property settlement service in the US. Centralized Finance saw one deal: AI-driven trading platform Stockcoin.ai completed a seed round led by Amber Group. In the prediction market sector alongside Kalshi, AI-powered platform Elastics raised $2 million. Other notable deals include SC Ventures' strategic investment in crypto market maker GSR and Centrifuge securing a "seven-figure" investment from Coinbase to become a core RWA partner for Base. On the investor side, Haun Ventures raised a new $1 billion fund targeting crypto and AI, and Multi Investment raised ~$616 million to focus on blockchain and Web3 investments.



Featured Articles

What Is GROK AI

Grok AI: Revolutionizing Conversational Technology in the Web3 Era

Introduction

In the rapidly evolving landscape of artificial intelligence, Grok AI stands out as a notable project bridging advanced technology and user interaction. Developed by xAI, a company led by Elon Musk, Grok AI seeks to redefine how we interact with artificial intelligence. As the Web3 movement continues to grow, Grok AI aims to harness conversational AI to answer complex queries, giving users an experience that is not only informative but also entertaining.

What Is Grok AI?

Grok AI is a sophisticated conversational chatbot designed to engage dynamically with users. Unlike many traditional AI systems, it accepts a broader range of queries, including some that would normally fall outside standard responses. The project's core objectives include:

Reliable reasoning: an emphasis on common-sense reasoning to deliver logical answers grounded in contextual understanding.
Scalable oversight: tool-assisted monitoring to keep user interactions optimized for quality.
Formal verification: formal verification methods to improve the reliability of its outputs.
Long-context understanding: retaining and recalling extensive conversation history to support meaningful, contextualized discussion.
Adversarial robustness: strengthened defenses against manipulated or malicious inputs to preserve the integrity of user interactions.

In essence, Grok AI is not just an information-retrieval tool; it is an immersive conversational companion built for dynamic dialogue.

Creator of Grok AI

The mind behind Grok AI is Elon Musk, a figure synonymous with innovation across automotive, spaceflight, and technology. Under the umbrella of xAI, a company focused on advancing AI technology in beneficial directions, Musk's vision aims to reshape how AI interactions are understood. The project's leadership and founding ethos are deeply shaped by his commitment to pushing technological boundaries.

Investors in Grok AI

Specific details about investors backing Grok AI are limited, but xAI, the project's incubator, is publicly known to be founded and primarily backed by Musk himself. His prior ventures and holdings provide robust backing, reinforcing Grok AI's credibility and growth potential. Beyond that, information about additional investment foundations or supporting organizations is not readily available, marking an area for future exploration.

How Does Grok AI Work?

Grok AI's operational mechanics are as innovative as its conceptual framework, combining several technologies:

Robust infrastructure: Grok AI is built with Kubernetes for container orchestration, Rust for performance and safety, and JAX for high-performance numerical computing. This stack lets the chatbot run efficiently, scale effectively, and serve users promptly.
Real-time knowledge access: a distinguishing feature of Grok AI is its ability to pull real-time data from the X platform (formerly Twitter), giving it access to current information that other AI models might miss.
Two interaction modes: users can choose between "Fun Mode," which allows a more playful, humorous interaction style, and "Regular Mode," which focuses on precise, accurate answers. This versatility tailors the experience to different user preferences.

In short, Grok AI pairs performance with engagement, creating an experience that is both enriching and entertaining.

Grok AI Timeline

Initial development: the foundational phase took roughly two months, covering initial training and model tuning.
Grok-2 beta launch: the Grok-2 beta introduced two versions of the chatbot, Grok-2 and Grok-2 mini, each capable of chat, coding, and reasoning.
Public access: following the beta, Grok AI became available to X platform users. Accounts verified with a phone number and active for at least seven days can access a limited version, opening the technology to a wider audience.

Key Features of Grok AI

Real-time knowledge integration: access to current, relevant information sets Grok AI apart from many static models, enabling an engaging and accurate user experience.
Versatile interaction styles: distinct interaction modes invite creativity and personalization in AI conversation.
Advanced technical infrastructure: Kubernetes, Rust, and JAX give the project a solid foundation for reliability and performance.
Ethical considerations: the inclusion of an image-generation feature showcases the project's innovative spirit, but it also raises ongoing questions in the AI community about copyright and the respectful depiction of recognizable figures.

Conclusion

As a pioneering entity in conversational AI, Grok AI captures the potential for transformative user experiences in the digital era. Developed by xAI and guided by Musk's visionary approach, it integrates real-time knowledge with advanced interaction capabilities while maintaining a focus on ethical considerations and user safety. Grok AI embodies not only technological progress but a new conversational paradigm in the Web3 landscape, promising to engage users with both capable knowledge and playful interaction. As the project evolves, it stands as a testament to what the intersection of technology, creativity, and human-like interaction can achieve.

356 total views · Published 2024.12.26 · Updated 2024.12.26


What Is ERC AI

Euruka Tech: An Overview of $erc ai and Its Web3 Ambitions

Introduction

In the rapidly evolving landscape of blockchain technology and decentralized applications, new projects emerge frequently, each with unique goals and methods. One such project is Euruka Tech, which operates in the broad domain of cryptocurrency and Web3. Euruka Tech's main focus, particularly its $erc ai token, is to deliver solutions designed to leverage the growing capabilities of decentralized technology. This article provides a comprehensive overview of Euruka Tech: its goals, functionality, the identity of its creator, potential investors, and its significance within the wider Web3 context.

What Is Euruka Tech ($erc ai)?

Euruka Tech is positioned as a project that builds on the tools and functionality of the Web3 environment, with a focus on integrating artificial intelligence into its operations. Although specific details about the project's framework remain somewhat elusive, it is intended to improve user engagement and automate processes in the crypto space. The project aims to create a decentralized ecosystem that not only facilitates transactions but also incorporates AI-driven predictive functionality — hence the token's name, $erc ai. The goal is an intuitive platform that enables smarter interactions and efficient transaction processing within the growing Web3 sphere.

Who Created Euruka Tech?

At present, information about the creator or founding team behind Euruka Tech remains unspecified and somewhat opaque. This absence of data raises concerns, since knowledge of a team's background is often essential for establishing credibility in the blockchain sector. We therefore classify this information as unknown until concrete details enter the public domain.

Who Are the Investors?

Similarly, available research does not readily identify investors or backing organizations for the Euruka Tech project. For prospective stakeholders or users, the assurance that comes from established financial partnerships or backing by reputable investment firms is crucial; without disclosed investment affiliations, it is difficult to draw firm conclusions about the project's financial security or longevity. Based on the information found, this section also remains unknown.

How Does Euruka Tech Work?

Despite the lack of detailed technical specifications, the project's stated ambition is worth considering. Euruka Tech seeks to harness the computational power of artificial intelligence to automate and improve the user experience within the cryptocurrency environment. By integrating AI with blockchain technology, it aims to offer features such as automated operations, risk assessments, and personalized user interfaces. Its innovative core lies in creating a seamless connection between users and the possibilities of decentralized networks: through machine-learning algorithms and AI, it seeks to lower the barrier for first-time users and streamline transactional experiences within the Web3 framework. This symbiosis between AI and blockchain underpins the role of the $erc ai token as a bridge between traditional user interfaces and the advanced capabilities of decentralized technology.

Euruka Tech Timeline

Unfortunately, given the limited information available about Euruka Tech, no detailed timeline of major developments or milestones can be presented. Such a timeline, typically invaluable for charting a project's evolution and growth trajectory, is not currently available. As notable events, partnerships, or feature additions become evident, updates should improve Euruka Tech's visibility in the crypto sphere.

A Note on Other "Eureka" Projects

Several projects and companies share similar "Eureka" naming. Research has identified initiatives such as an NVIDIA Research AI agent focused on teaching robots complex tasks using generative methods, as well as Eureka Labs and Eureka AI, which work on education and customer-service analytics respectively. These projects are distinct from Euruka Tech and should not be confused with its goals or functionality.

Conclusion

Euruka Tech and its $erc ai token represent a promising but currently obscure player in the Web3 landscape. While details about its creator and investors remain undisclosed, its core ambition of combining artificial intelligence with blockchain technology is the main point of interest. The project's approach to fostering user engagement through advanced automation could set it apart as the Web3 ecosystem matures. As the crypto market continues to evolve, interested parties should watch for documented innovations, partnerships, or a defined roadmap, any of which could signal meaningful opportunities in the near future. For now, we await more substantial information on Euruka Tech's potential and its position in the competitive crypto landscape.

297 total views · Published 2025.01.02 · Updated 2025.01.02


What Is DUOLINGO AI

DUOLINGO AI: Merging Language Learning with Web3 and AI Innovation

In an era where technology is redefining education, the integration of artificial intelligence and blockchain networks marks a new frontier for language learning. Enter DUOLINGO AI and its associated cryptocurrency, $DUOLINGO AI. The project aspires to combine the educational reach of major language-learning platforms with the benefits of decentralized Web3 technology. This article covers the project's goals, technical framework, historical development, and future potential, while keeping a clear distinction between the original educational platform and this independent cryptocurrency initiative.

Overview of DUOLINGO AI

At its core, DUOLINGO AI seeks to build a decentralized environment where learners earn crypto rewards for reaching milestones in language proficiency. By applying smart contracts, the project aims to automate skill verification and token allocation, in line with Web3 principles of transparency and user ownership. The model departs from traditional approaches to language acquisition by leaning heavily on community-driven governance, letting token holders propose improvements to course content and reward distributions. Notable goals include:

Gamified learning: blockchain achievements and non-fungible tokens (NFTs) represent language-proficiency levels, sustaining motivation through engaging digital rewards.
Decentralized content creation: educators and language enthusiasts can contribute their own courses under a revenue-sharing model that benefits all contributors.
AI-driven personalization: advanced machine-learning models tailor lessons to individual learning progress, similar to the adaptive features of established platforms.

Project Creators and Governance

As of April 2025, the team behind $DUOLINGO AI remains pseudonymous, a common practice in the decentralized crypto landscape. The anonymity is meant to emphasize collective growth and stakeholder participation over individual developers. The smart contract deployed on the Solana blockchain records the developer wallet address, signaling a commitment to transactional transparency despite the creators' unknown identities. According to its roadmap, DUOLINGO AI aims to evolve into a Decentralized Autonomous Organization (DAO), a governance structure in which token holders vote on critical matters such as feature rollouts and treasury allocations — a model aligned with the community-empowerment ethos of decentralized applications.

Investors and Strategic Partnerships

There are currently no publicly identifiable institutional investors or venture capitalists tied to $DUOLINGO AI. Instead, the project's liquidity comes primarily from decentralized exchanges (DEXs), a sharp contrast with the funding strategies of traditional ed-tech companies. This grassroots model reflects a community-driven approach and the project's commitment to decentralization. The whitepaper mentions collaborations with unspecified "blockchain education platforms" intended to enrich course offerings; while no specific partnerships have been disclosed, the strategy suggests an effort to merge blockchain innovation with educational initiatives and broaden user access.

Technical Architecture

AI integration rests on two main components:

Adaptive learning engine: learns from user interactions, similar to the proprietary models of major educational platforms, and dynamically adjusts lesson difficulty to address individual challenges, reinforcing weak areas through targeted exercises.
Conversational agents: GPT-4-powered chatbots let users practice simulated conversations, fostering a more interactive, hands-on language-learning experience.

The blockchain infrastructure, built on Solana, includes:

Skill-verification smart contracts: automatically grant tokens to users who pass proficiency tests, reinforcing the incentive structure for genuine learning outcomes.
NFT badges: digital tokens marking milestones such as completing a course section or mastering specific skills, which learners can trade or display.
DAO governance: token-holding community members vote on key proposals, fostering a participatory culture that encourages innovation in course offerings and platform features.

Historical Timeline

2022–2023, conceptualization: a whitepaper lays out the synergy between AI advances in language learning and blockchain's decentralized potential.
2024, beta launch: a limited beta introduces popular languages, rewarding early users with token incentives as part of the project's community-engagement strategy.
2025, DAO transition: a full mainnet launch in April brings the token into circulation, prompting community discussion of expansion into Asian languages and further course development.

Challenges and Future Directions

Despite its ambitious goals, DUOLINGO AI faces significant challenges. Scalability remains a constant concern, particularly in balancing AI processing costs against a responsive decentralized network, and ensuring quality content creation and moderation in a decentralized setting adds further complexity to maintaining educational standards. Looking ahead, the project could pursue micro-certification partnerships with academic institutions for blockchain-verified validation of language skills, and cross-chain expansion could broaden its user base, ecosystems, and interoperability.

Conclusion

DUOLINGO AI represents an innovative fusion of artificial intelligence and blockchain technology, presenting a community-centered alternative to traditional language-learning systems. Its pseudonymous development and nascent economic model carry real risks, but its commitment to gamified learning, personalized education, and decentralized governance sketches a path for educational technology in Web3. As AI advances and the blockchain ecosystem evolves, initiatives like DUOLINGO AI could reshape how users engage with language education, empowering communities and rewarding participation through novel learning mechanisms.
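The skill-verification flow the whitepaper describes — pass a proficiency test, receive a token award recorded against your balance — can be sketched in plain Python. This is a purely illustrative simulation: the threshold, milestone names, and reward amounts are invented for this sketch, and it is not the project's Solana contract, whose code has not been published.

```python
# Hypothetical simulation of milestone-based token awards.
# All names, thresholds, and reward amounts are invented for illustration.
PASS_THRESHOLD = 0.8            # assumed minimum score to pass a proficiency test
MILESTONE_REWARDS = {           # assumed token payout per milestone type
    "section_complete": 10,
    "course_complete": 50,
}

def verify_and_award(score, milestone, balances, user):
    """Grant tokens if the test score clears the threshold; return tokens awarded."""
    if score < PASS_THRESHOLD or milestone not in MILESTONE_REWARDS:
        return 0
    reward = MILESTONE_REWARDS[milestone]
    balances[user] = balances.get(user, 0) + reward  # stand-in for an on-chain transfer
    return reward

balances = {}
print(verify_and_award(0.9, "section_complete", balances, "learner1"))  # 10
print(verify_and_award(0.5, "course_complete", balances, "learner1"))   # 0 (failed test)
print(balances["learner1"])  # 10
```

The on-chain version would replace the `balances` dict with token-program transfer instructions and put the score check inside the contract, but the incentive logic is the same.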

348 total views · Published 2025.04.11 · Updated 2025.04.11


Discussions

Welcome to the HTX community. Here you can stay informed about the latest platform developments and access professional market analysis. Below are user opinions on the price of AI (AI).
