Ministry of Industry and Information Technology Seeks Opinions on 121 Industry Standards Including 'Artificial Intelligence Model Context Protocol'

marsbit · Published 2026-03-26 · Last updated 2026-03-26

Abstract

The Ministry of Industry and Information Technology (MIIT) has issued a notice soliciting public opinions on 121 industry standard proposals, including the "Artificial Intelligence Security Governance - Model Context Protocol Application Security Requirements." This move represents a significant step in China's efforts to standardize underlying AI protocols and strengthen its safety regulation framework. The core focus of the consultation is the application security of the Model Context Protocol, aiming to address protocol compatibility and data security risks in large models' multimodal interactions, long-text processing, and cross-platform invocations through standardized technical specifications.

The Ministry of Industry and Information Technology has officially issued a notice soliciting public opinions on **121 draft industry standard projects, including *Artificial Intelligence Security Governance - Model Context Protocol Application Security Requirements***. This move marks a critical step in China's standardization of underlying AI protocols and the construction of a security supervision system. The core focus of this consultation is the application security of the **Model Context Protocol**, aiming to address protocol compatibility and data security risks in large models' multimodal interactions, long-text processing, and cross-platform calls through standardized technical specifications.
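For readers unfamiliar with the protocol being standardized: publicly documented Model Context Protocol implementations frame messages as JSON-RPC 2.0, opening each session with an `initialize` handshake. The sketch below builds such a message and runs a basic conformance check of the kind a security requirement might mandate. This is an illustrative sketch only: the version string, client name, and the `validate_message` helper are assumptions, not part of the MIIT draft standard.

```python
import json

def build_initialize_request(request_id: int) -> str:
    """Serialize an MCP-style initialize request as a JSON-RPC 2.0 message.

    Field names follow public MCP conventions; the protocol version and
    client identity below are illustrative placeholders.
    """
    message = {
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "initialize",
        "params": {
            "protocolVersion": "2024-11-05",  # illustrative version string
            "capabilities": {},               # client advertises no extras
            "clientInfo": {"name": "example-client", "version": "0.1.0"},
        },
    }
    return json.dumps(message)

def validate_message(raw: str) -> bool:
    """Basic conformance check: JSON-RPC envelope plus required MCP fields.

    A standardized security profile would go beyond this sketch, e.g. with
    origin validation and capability allow-lists.
    """
    try:
        msg = json.loads(raw)
    except json.JSONDecodeError:
        return False
    if msg.get("jsonrpc") != "2.0" or "method" not in msg:
        return False
    if msg["method"] == "initialize":
        params = msg.get("params", {})
        required = ("protocolVersion", "capabilities", "clientInfo")
        return all(key in params for key in required)
    return True

if __name__ == "__main__":
    raw = build_initialize_request(1)
    print(validate_message(raw))  # prints True
```

Pinning down exactly this kind of envelope and field-level validation is what a cross-platform compatibility standard would specify, so that clients and servers from different vendors can reject malformed or unsafe messages consistently.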

Related Questions

Q: What is the primary focus of the 121 industry standard projects released by the Ministry of Industry and Information Technology (MIIT) for public comment?

A: The primary focus is on the application security requirements of the Model Context Protocol, aiming to standardize technical specifications to address protocol compatibility and data security risks in multimodal interactions, long-text processing, and cross-platform calls of large models.

Q: Which specific protocol's application security is emphasized in MIIT's newly released solicitation of comments on industry standards?

A: The application security of the Model Context Protocol (MCP) is specifically emphasized.

Q: What is the main goal of establishing the standard for the Model Context Protocol according to the MIIT notice?

A: The main goal is to resolve protocol compatibility and data security risks during processes such as multimodal interaction, long-text processing, and cross-platform invocation of large models through standardized technical specifications.

Q: How many industry standard projects did MIIT release for public comment alongside the one for the Model Context Protocol?

A: MIIT released a total of 121 industry standard projects for public comment.

Q: What does this move by MIIT signify for China's AI development according to the article?

A: It signifies a key step forward in the standardization of underlying AI protocols and the construction of a safety supervision system in China.

Related Articles

Why Pricing Social Interactions is Doomed to Fail?

Titled "Why Putting a Price on Social Interaction Is Doomed to Fail," this article critiques attempts to monetize social networks directly through SocialFi models, arguing their inevitable failure stems from a fundamental misunderstanding of media dynamics. Using Marshall McLuhan's theory of "hot" and "cold" media, the author posits that social networks are inherently "cold" media. Their value isn't contained in individual posts but is co-created through user participation, interpretation, and fragmented, ongoing interaction (e.g., replies, shares). This ambiguity and need for user involvement are core to their function.

The article asserts that SocialFi projects like Friend.tech failed because introducing real-time, tradable financial pricing (a definitive "hot" signal) into this "cold" environment doesn't add a layer—it replaces the medium's essence. The unambiguous price signal overshadows and nullifies the nuanced, participatory social signal. Users become traders, not participants, and when speculative profits vanish, the underlying social ecosystem—never genuinely cultivated—collapses entirely.

This principle extends beyond crypto. The author argues platforms like Twitter have gradually "heated up" through metrics (likes, retweet counts, algorithmically defined value), shifting users from participants to performers and eroding organic engagement. The solution isn't to abandon capital but to manage its entry point. Successful models like Substack, Patreon, or Bandcamp allow capital to "condense" at specific, isolated nodes (e.g., subscriptions, one-time payments) without permeating and "heating" every social interaction. They preserve the core "cold," participatory medium while enabling monetization at designated boundaries.

The NFT boom and bust serves as a stark parallel: the ancient "cold" medium of collecting (valued for story, community, gradual accumulation) was rapidly destroyed by platforms that introduced real-time floor prices, rarity scores, and trading dashboards, transforming collectors into speculators and vaporizing cultural value when prices fell. The core lesson: "Liquidity equals heat." Injecting high liquidity and definitive pricing into a "cold" participatory medium doesn't optimize it; it fundamentally alters and destroys its value-creating mechanism. The future lies not in pricing every social gesture but in finding precise, non-invasive points for capital to condense without overheating the entire ecosystem.

marsbit · 2 min ago


Jensen Huang's CMU Speech: In the AI Era, Don't Just Watch, Build

Jensen Huang, CEO of NVIDIA and a first-generation immigrant, delivered the commencement address to Carnegie Mellon University's class of 2026. He shared his personal journey from a humble background to founding NVIDIA, emphasizing resilience, learning from failure, and the responsibility that comes with leadership.

Huang framed the present moment as the dawn of the AI revolution, a shift he believes is more profound than previous computing waves. He described AI as fundamentally resetting computing—moving from human-written software to machines that understand, reason, and use tools. This will create a new industry for generating intelligence and transform every sector. While acknowledging AI's potential to automate tasks and displace some jobs, Huang distinguished between the *tasks* of a job and its core *purpose*. He argued AI will augment human capability, not replace humans. The real risk, he stated, is not AI itself, but people being left behind by those who effectively use AI.

He presented AI as a generational opportunity for massive infrastructure investment—in chip factories, data centers, energy grids, and advanced manufacturing—that could re-industrialize nations like the U.S. and bridge the digital divide by making computing and intelligent tools accessible to all. Huang called for a balanced approach: advancing AI safely and responsibly, establishing prudent policies, ensuring broad access, and encouraging universal participation. He urged the graduates not to fear the future but to engage with optimism and ambition, reminding them of CMU's motto, "My heart is in the work." His core message was clear: this is their moment to actively build and shape the AI-powered future, not merely observe it.

marsbit · 59 min ago


The Era Has Arrived Where Human Writers Must Prove They Are Not Machines

The article describes an era where AI-generated content is flooding the market, forcing human authors to prove they are not machines. It begins with the example of dozens of AI-written, error-ridden biographies of Henry Kissinger appearing on Amazon within hours of his death, a pattern repeated for other deceased celebrities and even living experts who find fraudulent books under their names. This spam content has exploded, with monthly new book releases on platforms like Amazon reaching 300,000 by late 2025.

The issue spans genres, from suspiciously high proportions of AI-written teen romance and self-help books to dangerous, AI-generated foraging guides containing lethal advice. The platforms' automated review systems, designed to catch plagiarism and banned words, are ill-equipped to detect AI-generated text that avoids these pitfalls while being nonsensical or fraudulent. The problem has infiltrated traditional publishing. A major publisher, Hachette, had to recall a bestselling horror novel after AI detection tools suggested 78% of its content was machine-generated. An acclaimed European philosophy book was later revealed to be entirely written by AI under a fake author persona.

In response, authors are fighting back. At the 2026 London Book Fair, 10,000 writers published a blank book titled "Don't Steal This Book" containing only their signatures—using emptiness as a protest weapon in an age of AI overproduction. Initiatives like the "Human Author Certification" program have emerged, ironically placing the burden on humans to prove their work is not machine-made. The article warns of a vicious cycle: AI-generated low-quality books pollute the data used to train future AI models, leading to "model collapse" and an ever-worsening flood of digital waste, eroding trust in publishing and devaluing human creativity.

marsbit · 1 hour ago

