Ministry of Industry and Information Technology Seeks Opinions on 121 Industry Standards Including 'Artificial Intelligence Model Context Protocol'

marsbit · Published 2026-03-26 · Last updated 2026-03-26

Abstract

The Ministry of Industry and Information Technology (MIIT) has issued a notice soliciting public opinions on 121 industry standard proposals, including the "Artificial Intelligence Security Governance—Model Context Protocol Application Security Requirements." This move represents a significant step in China's efforts to establish standardized AI underlying protocols and enhance safety regulation frameworks. The core focus of the consultation is on the application security of the Model Context Protocol, aiming to address protocol compatibility and data security risks during multimodal interactions, long-text processing, and cross-platform invocations of large models through standardized technical specifications.

The Ministry of Industry and Information Technology has officially issued a notice soliciting public opinions on **121 industry standard draft projects, including *Artificial Intelligence Security Governance - Model Context Protocol Application Security Requirements***. The move marks a critical step in China's standardization of underlying AI protocols and the construction of a security supervision system. The core focus of this consultation is the application security of the **Model Context Protocol (MCP)**: using standardized technical specifications to address the protocol compatibility and data security risks that arise during large models' multimodal interactions, long-text processing, and cross-platform invocations.
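For context on what such a standard would govern: in current open implementations, the Model Context Protocol exchanges JSON-RPC 2.0 messages between a client (the model host) and tool/data servers. The sketch below is an illustration only, not taken from the draft standard; the field names follow the open MCP specification, and the `protocolVersion` value and client name are example assumptions.

```python
import json

# Minimal sketch of an MCP-style JSON-RPC 2.0 "initialize" handshake
# request -- the kind of cross-platform message whose compatibility and
# security properties a standard like this would specify.
initialize_request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "initialize",
    "params": {
        # Example version string; real clients negotiate this value.
        "protocolVersion": "2024-11-05",
        "capabilities": {"tools": {}},
        "clientInfo": {"name": "example-client", "version": "0.1.0"},
    },
}

# Serialized form as it would travel over the transport.
wire_message = json.dumps(initialize_request)
```

A security standard for such a protocol would, among other things, constrain how these messages are validated and what a server may do with the capabilities and data it receives.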

Related Questions

Q: What is the primary focus of the 121 industry standard projects released by the Ministry of Industry and Information Technology (MIIT) for public comment?

A: The primary focus is on the application security requirements of the Model Context Protocol, aiming to standardize technical specifications to address protocol compatibility and data security risks in multimodal interactions, long-text processing, and cross-platform calls of large models.

Q: Which specific protocol's application security is emphasized in MIIT's newly released industry standard drafts open for public comment?

A: The application security of the Model Context Protocol (MCP) is specifically emphasized.

Q: What is the main goal of establishing the standard for the Model Context Protocol according to the MIIT notice?

A: The main goal is to resolve protocol compatibility and data security risks during processes such as multimodal interaction, long-text processing, and cross-platform invocation of large models through standardized technical specifications.

Q: How many industry standard projects did MIIT release for public comment alongside the one for the Model Context Protocol?

A: MIIT released a total of 121 industry standard projects for public comment.

Q: What does this move by MIIT signify for China's AI development according to the article?

A: It signifies a key step forward in the standardization of AI underlying protocols and the construction of a safety supervision system in China.

Related Articles

Breaking: OpenAI Undergoes Major Reorganization, President Brockman Assumes Command

OpenAI has announced a major internal reorganization just months before its anticipated IPO. The company is merging its three flagship product lines—ChatGPT, Codex, and the API platform—into a single, unified product organization. The most significant leadership change involves co-founder and President Greg Brockman moving from a background technical role to take full, permanent control over all product strategy. This follows the indefinite medical leave of AGI Deployment CEO Fidji Simo. Additionally, ChatGPT's longtime lead, Nick Turley, has been reassigned to enterprise products, with former Instagram executive Ashley Alexander taking over consumer offerings. The consolidation, internally framed as a strategic move towards an "Agentic Future," aims to break down internal silos and create a cohesive "Super App." This planned desktop application would integrate ChatGPT's conversational abilities, Codex's coding power, and a rumored internal web browser named "Atlas" to autonomously perform complex user tasks. The reorganization occurs amid significant internal and external pressures. OpenAI has recently seen a wave of high-profile departures, including Sora co-lead Bill Peebles and other senior technical leaders, leading to concerns about a thinning executive bench. Externally, rival Anthropic recently secured funding at a staggering $900 billion valuation, surpassing OpenAI's own. Google's upcoming I/O developer conference also poses a competitive threat. Analysts suggest the dramatic restructure is a pre-IPO move to present a clearer, more focused narrative to Wall Street—streamlining operations and demonstrating decisive leadership under Brockman to counter internal turbulence and intense market competition.

marsbit · 3 hours ago


Two Survival Structures of Market Makers and Arbitrageurs

Market makers and arbitrageurs represent two distinct survival structures in high-frequency trading. Market makers primarily use limit orders (makers) to profit from the bid-ask spread, enjoying high capital efficiency (nominally 100%) but bearing inventory risk. This "inventory risk" arises from passive, fragmented, and discontinuous order fills in the limit order book (LOB). This risk, while a potential cost, can also contribute to excess profit if managed within control boundaries, allowing for mean reversion. Market makers essentially sell "time" (uncertainty over execution timing) to the market for price control and low fees. In contrast, cross-exchange arbitrageurs typically use market orders (takers) to exploit price differences or funding rates, resulting in lower nominal capital efficiency (requiring capital on both exchanges) and higher transaction costs. Their risk exposure stems from asymmetries in exchange rules (e.g., minimum order sizes), execution latency, and infrastructure risks (e.g., ADL, oracle drift). These exposures are active, exogenous gaps that primarily erode profits rather than contribute to them. Arbitrageurs essentially sell "space" (capital sunk across venues) for localized, immediate certainty. Both strategies engage in a trade-off between execution friction and residual risk. Optimal systems allow for temporary, controlled risk exposure rather than enforcing zero exposure at all costs. Their evolution converges towards hybrid models: arbitrageurs may use maker orders to reduce costs, while market makers may use taker orders or hedges for risk management. Ultimately, both use different forms of risk exposure—market makers exposing inventory, arbitrageurs immobilizing capital—to extract marginal, hard-won certainty from the market.
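The two "survival structures" above can be made concrete with a stylized single-round-trip comparison. This is an illustrative sketch, not from the article: the prices, quantities, and fee rates are invented, and real strategies must also model fill probability, latency, and inventory drift.

```python
# Stylized PnL of one round trip for each structure.

def maker_round_trip(mid: float, half_spread: float, qty: float,
                     maker_fee: float) -> float:
    """Market maker: passively buys at the bid and sells at the ask,
    capturing the spread, paying (typically low) maker fees, and
    bearing inventory risk while waiting for both fills."""
    buy = mid - half_spread
    sell = mid + half_spread
    fees = (buy + sell) * qty * maker_fee
    return (sell - buy) * qty - fees

def taker_arb_round_trip(price_a: float, price_b: float, qty: float,
                         taker_fee: float) -> float:
    """Cross-exchange arbitrageur: buys on the cheap venue and sells on
    the expensive one with market orders, paying taker fees on both
    legs and pre-funding capital on both exchanges."""
    gross = (price_b - price_a) * qty
    fees = (price_a + price_b) * qty * taker_fee
    return gross - fees

# Same notional on both, with invented numbers:
pnl_maker = maker_round_trip(mid=100.0, half_spread=0.05, qty=10,
                             maker_fee=0.0002)
pnl_arb = taker_arb_round_trip(price_a=100.0, price_b=100.12, qty=10,
                               taker_fee=0.0005)
```

Even this toy comparison shows the structural asymmetry the article describes: the maker's edge is the spread net of cheap fees but conditional on passive fills, while the arbitrageur's gross edge is visibly eroded by taker fees on both legs.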

链捕手 · 3 hours ago


Who Will Define the Rules of the AI Era? Anthropic Discusses the 2028 US-China AI Landscape

This article, based on Anthropic's analysis, outlines the intensifying systemic competition between the U.S./allies and China for AI leadership by 2028. It argues that access to advanced computing power ("compute") is the critical bottleneck, where the U.S. currently holds a significant advantage through chip export controls and allied innovation. However, China's AI labs remain competitive by exploiting policy loopholes—via chip smuggling, overseas data center access, and "model distillation" attacks to copy U.S. model capabilities—keeping them close to the frontier. The piece presents two contrasting scenarios for 2028. In the first, decisive U.S. action to tighten compute controls and curb distillation locks in a 12-24 month AI capability lead, cementing democratic influence over global AI norms, security, and economic infrastructure. In the second, policy inaction allows China to achieve near-parity through continued access to U.S. technology, enabling Beijing to promote its AI stack globally and integrate advanced AI into its military and governance systems, altering the strategic balance. Anthropic contends that maintaining a decisive U.S. lead is essential for shaping safe AI development and governance. The core recommendation is for U.S. policymakers to urgently close compute and model access loopholes while promoting global adoption of the U.S. AI technology stack to secure a lasting strategic advantage.

marsbit · 5 hours ago

