Distributed AI lab Gradient today released Echo-2, a distributed reinforcement learning framework (arxiv.org/pdf/2602.02192), aimed at breaking the efficiency barrier in AI research training. By fully decoupling the Learner and Actor at the architectural level, Echo-2 cuts the post-training cost of a 30B model from $4,500 to $425 and, at the same budget, delivers more than a 10x increase in research throughput.
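The release does not include Echo-2's API, but the decoupling it describes follows a familiar asynchronous-RL pattern: actors do sampling only, so they can run on cheap, preemptible hardware, while the learner trains on stable hardware and a queue keeps the two from blocking each other. A minimal sketch in Python, with every name (rollout_queue, actor_loop, learner_loop, and the toy one-parameter policy) assumed for illustration rather than taken from Echo-2:

```python
import queue
import random
import threading

# Hypothetical sketch of learner/actor decoupling in asynchronous RL.
# Actors hold no optimizer state, so losing one costs only in-flight samples.
rollout_queue: queue.Queue = queue.Queue(maxsize=256)
policy_lock = threading.Lock()
policy = {"version": 0, "weight": 0.0}  # toy one-parameter "policy"
TARGET = 1.0  # toy objective: move the weight toward a fixed target

def actor_loop(stop: threading.Event) -> None:
    """Sampling only: no gradients, safe to run on unstable instances."""
    while not stop.is_set():
        with policy_lock:
            version, weight = policy["version"], policy["weight"]
        # Toy "environment": noisy observations centred on the current weight.
        trajectory = [weight + random.gauss(0.0, 1.0) for _ in range(8)]
        try:
            rollout_queue.put((version, trajectory), timeout=0.1)
        except queue.Full:
            continue  # learner is behind; drop and sample fresher data

def learner_loop(stop: threading.Event, steps: int) -> None:
    """Training only: consumes rollouts, never blocks on any one actor."""
    for _ in range(steps):
        version, trajectory = rollout_queue.get()  # version: hook for a staleness check
        grad = sum(TARGET - x for x in trajectory) / len(trajectory)
        with policy_lock:
            policy["weight"] += 0.1 * grad  # toy update toward TARGET
            policy["version"] += 1  # actors pick up the new version lazily
    stop.set()

if __name__ == "__main__":
    stop = threading.Event()
    actors = [threading.Thread(target=actor_loop, args=(stop,)) for _ in range(4)]
    learner = threading.Thread(target=learner_loop, args=(stop, 100))
    for t in actors + [learner]:
        t.start()
    for t in actors + [learner]:
        t.join()
    print(f"policy version {policy['version']}, weight {policy['weight']:.3f}")
```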
The framework uses compute-storage separation to run training asynchronously (Async RL), offloading the heavy sampling workload to unstable (preemptible) GPU instances and heterogeneous GPUs built on Parallax. Combined with bounded staleness, fault-tolerant instance scheduling, and the proprietary Lattica communication protocol, this design substantially improves training efficiency while preserving model accuracy. Alongside the framework release, Gradient will also launch Logits, an RLaaS platform, to move AI research from a "capital-intensive" paradigm to one of "efficient iteration." Logits is now open for booking to students and researchers worldwide (logits.dev).
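How Echo-2 enforces bounded staleness is not detailed in the announcement; the usual mechanism in asynchronous RL is a cap on how many policy versions behind a rollout may be before the learner will train on it. A minimal sketch under that assumption, with MAX_STALENESS and filter_stale as hypothetical names:

```python
# Hypothetical bound, not an Echo-2 constant: accept rollouts that are
# at most this many policy versions behind the learner.
MAX_STALENESS = 4

def filter_stale(batch: list[tuple[int, list[float]]],
                 current_version: int) -> list[tuple[int, list[float]]]:
    """Keep async throughput high while capping off-policy drift.

    Actors never block on the learner, so some samples arrive stale;
    discarding those past the version-gap bound keeps gradient
    estimates close enough to on-policy to preserve accuracy.
    """
    return [(version, trajectory) for (version, trajectory) in batch
            if current_version - version <= MAX_STALENESS]
```

In practice an asynchronous learner would apply a filter like this (or importance-weight the stale samples) before each update, trading a small amount of discarded data for gradients that stay close to on-policy.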
About Gradient
Gradient is an AI lab dedicated to building distributed infrastructure, focusing on the distributed training, serving, and deployment of cutting-edge large models. Backed by top-tier investment institutions, Gradient is building an open and efficient future for the intelligent era.