Models Can Also "Nest"? MiniMax Releases M2.7: The First Domestic Large Model Deeply Involved in Self-Iteration

marsbit · Published 2026-03-18 · Last updated 2026-03-18

Abstract

Artificial intelligence is moving from monthly updates to self-evolution. On March 18, MiniMax released MiniMax M2.7, its first model version deeply involved in its own iteration, marking a new stage in which large models are no longer trained solely by human engineers but have begun to "train themselves." M2.7's core breakthrough is its strong autonomous construction capability: it can independently build complex Agent Harnesses (agent testing frameworks) and, drawing on underlying capabilities such as Agent Teams, complex Skills, and Tool Search, complete highly complex productivity tasks on its own. In short, M2.7 is not just a smarter conversational agent but a "digital engineer" capable of self-diagnosis and self-optimization, and this "self-participatory iteration" should significantly improve logical reasoning and tool-invocation accuracy on unfamiliar complex tasks. The model is now fully available on the MiniMax Agent platform and open platform. As large models begin to participate deeply in their own "growth," the ceiling of AI may be raised once again.

The pace of artificial intelligence is transitioning from "monthly updates" to "self-evolution." On March 18, MiniMax officially released its first model version deeply involved in its own iteration, MiniMax M2.7. This marks a new stage in model development: large models are no longer trained solely by human programmers but have begun to learn to "guide themselves."

According to reports, the core breakthrough of MiniMax M2.7 lies in its powerful autonomous construction capability. It can independently build complex Agent Harnesses (intelligent agent testing frameworks) and, relying on underlying capabilities such as Agent Teams (intelligent agent collaboration), complex Skills, and Tool Search, complete highly complex productivity tasks on its own.

Simply put, M2.7 is not just a smarter conversationalist but also a "digital engineer" capable of self-diagnosis and self-optimization. This "self-participatory iteration" should raise the model's logical-reasoning ceiling and improve its tool-invocation accuracy when facing unknown complex tasks.

Currently, this MiniMax M2.7 model, with self-evolution built into its design, has been fully launched on the MiniMax Agent platform and open platform. As large models begin to deeply participate in their own "growth" process, the ceiling of AI may be raised once again.

Meanwhile, the AI computing power and application market is also seeing frequent developments. LuChen Technology announced the completion of a Series B financing round worth hundreds of millions of yuan, with its overseas revenue share soaring to 79%; at the same time, a surge in call volumes has reportedly driven up prices on some of Alibaba Cloud's AI computing power products. Amid the interplay of technological iteration and market fluctuations, the AI race in 2026 is growing ever more urgent and unpredictable.

Related Questions

Q: What is the name of the new model released by MiniMax that is capable of deep self-iteration?

A: The new model is called MiniMax M2.7.

Q: What is the core breakthrough of the MiniMax M2.7 model according to the article?

A: Its core breakthrough is its strong autonomous construction capability, allowing it to build complex Agent Harnesses and complete highly complex productivity tasks independently.

Q: What specific abilities does the M2.7 model use to complete complex tasks?

A: It uses capabilities such as Agent Teams (agent collaboration), complex Skills, and Tool Search to complete tasks.

Q: On which platforms has the MiniMax M2.7 model been fully launched?

A: It has been fully launched on the MiniMax Agent platform and the open platform.

Q: Besides the MiniMax announcement, what other AI market dynamics are mentioned in the article?

A: The article mentions that LuChen Technology completed a Series B financing of hundreds of millions of yuan, and that Alibaba Cloud raised prices on some AI computing products due to a surge in usage.

Related Articles

Understanding Hash in One Article: The "Browser Miner" on Ethereum

Hash is an Ethereum-based ERC-20 token described as a "browser-minable post-quantum token." Its key features include enabling browser-based GPU mining without specialized hardware, a fixed supply cap of 21 million tokens, immutable and permissionless smart contracts with no team allocation or pre-mining, and an emphasis on post-quantum security using Keccak256 hashing. The mining mechanism is a simplified on-chain proof-of-work where miners solve unique challenges tied to their wallet address. Key design elements prevent answer theft, with epochs resetting every 100 blocks (~20 minutes) and a per-block minting limit. Emission follows a Bitcoin-like halving schedule every 100,000 mints, starting at 100 tokens per mint. Projections suggest all tokens could be mined within approximately 294 days if a target rate of one mint per minute is sustained. Hash emphasizes "post-quantum" security by leveraging hash-based primitives like Keccak256, which are considered more resistant to quantum attacks compared to elliptic-curve cryptography. While not a fully post-quantum asset, it aligns with Ethereum's broader post-quantum research narrative. The project completed its Genesis sale at $0.03 and began trading on Uniswap, with its price reaching around $0.19. The initial circulating supply is small, with 5% sold in Genesis and 5% allocated to liquidity. The majority (47.6% of total supply) is allocated to early-stage mining, leading to a front-loaded emission schedule. This structure, combined with low initial liquidity, makes Hash a high-volatility, high-risk project dependent on sustained miner participation and market demand to absorb new supply.
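The Bitcoin-like halving schedule described above (100 tokens per mint, halving every 100,000 mints) can be sketched numerically. The constants come from the article; the function names and the assumption of an exact geometric schedule with no rounding are illustrative, not taken from the project's contracts:

```python
# Illustrative sketch of the halving emission schedule described for Hash:
# the per-mint reward starts at 100 tokens and halves every 100,000 mints.
# Constants are from the article; the functions are a hypothetical model.

START_REWARD = 100.0
HALVING_INTERVAL = 100_000  # mints per halving epoch


def reward_at(mint_index: int) -> float:
    """Reward paid for the mint with zero-based index `mint_index`."""
    return START_REWARD / (2 ** (mint_index // HALVING_INTERVAL))


def total_emitted(n_mints: int) -> float:
    """Total tokens emitted after `n_mints` mints under this schedule."""
    full_epochs, remainder = divmod(n_mints, HALVING_INTERVAL)
    # Sum the completed halving epochs, then the partial current epoch.
    total = sum(START_REWARD / 2**e * HALVING_INTERVAL for e in range(full_epochs))
    total += remainder * (START_REWARD / 2**full_epochs)
    return total


# At one mint per minute, the article's ~294-day horizon corresponds to
# 294 * 24 * 60 = 423,360 mints.
print(reward_at(0), reward_at(100_000))   # reward before and after first halving
print(total_emitted(294 * 24 * 60))        # cumulative emission at ~294 days
```

Under this model the asymptotic mining emission is 100,000 × 100 × 2 = 20 million tokens, and the bulk of it is released within the first few epochs, which illustrates the front-loaded supply structure the article warns about.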

marsbit · 6 min ago


OpenAI's Largest Internal Wealth Creation: 600 People Cash Out a Total of $6.6 Billion, 75 Take Home the Maximum $30 Million Each

A Wall Street Journal report reveals OpenAI's unprecedented pre-IPO wealth creation. In a single employee stock sale last October, over 600 current and former employees sold shares, collectively cashing out approximately $6.6 billion. Due to high investor demand, the company tripled the individual sale cap to $30 million, with about 75 employees selling the maximum amount. This event represents the largest such transaction in tech industry history for a private company. OpenAI's valuation was $500 billion for this tender offer. Employees with over two years of tenure were eligible, allowing many post-ChatGPT hires their first liquidity event. The company's stock has reportedly grown over 100-fold in seven years. Following a restructuring, employees collectively hold about 26% of OpenAI. The scale of executive wealth is also staggering. In court testimony related to Elon Musk's lawsuit, President and co-founder Greg Brockman confirmed his OpenAI stake is worth around $30 billion. Analysis indicates about 165 current and former employees hold a combined ~$164.9 billion in equity, averaging nearly $1 billion per person in paper wealth. OpenAI's per-employee stock-based compensation is estimated to be 34 times the average of major tech firms before their IPOs. OpenAI continues its rapid ascent, closing a $122 billion funding round at an $852 billion valuation in March. With monthly revenue hitting $2 billion, over 900 million weekly ChatGPT users, and plans for a potential trillion-dollar IPO in late 2026, this wealth-creation engine shows no signs of stopping.

链捕手 · 29 min ago

