OpenAI's Official Plugin Brings Deep Codex Integration to Claude Code

marsbit · Published 2026-03-31 · Last updated 2026-03-31

Abstract

OpenAI has officially released the "codex-plugin-cc" project on GitHub, enabling developers to integrate OpenAI's Codex model capabilities directly into Anthropic's command-line development tool, Claude Code. This cross-platform integration breaks down ecosystem barriers between major AI tools, allowing developers to leverage the strengths of both models without switching environments. With simple configuration, Claude Code becomes a versatile programming assistant combining the advantages of both companies. Key features include:

- Standard code review via the `/codex:review` command, providing expert improvement recommendations and double-checking for logic errors.
- Adversarial review using `/codex:adversarial-review`, which challenges design decisions to identify potential performance bottlenecks or security risks.
- Task delegation through `/codex:rescue`, allowing complex debugging or repair tasks to be handed off to a Codex sub-agent for collaborative problem-solving.

This integration enhances code quality and development efficiency through multi-model collaboration.

Recently, the AI developer community welcomed a major update: OpenAI officially released an open-source project named codex-plugin-cc on GitHub. This plugin allows developers to directly utilize the capabilities of OpenAI's Codex model within the command-line development tool Claude Code, introduced by Anthropic.

This "cross-brand" integration breaks down the ecosystem barriers that previously existed between major AI model tools, enabling developers to leverage the technical strengths of both industry giants without switching environments. With simple command configurations, Claude Code instantly transforms into an all-in-one programming assistant that combines the best of both.
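In practice, "simple command configuration" means registering the repository through Claude Code's plugin system from inside a session. The exact marketplace and plugin names below are assumptions inferred from the repository name, not confirmed by this article:

```
# Inside an interactive Claude Code session (names are illustrative):
/plugin marketplace add openai/codex-plugin-cc   # register the GitHub repo as a plugin marketplace
/plugin install codex@codex-plugin-cc            # install the plugin, exposing the /codex:* commands
```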

Empowered by this plugin, users can initiate a standard read-only code review with the /codex:review command and receive professional improvement suggestions from Codex. This dual-verification mechanism catches logic errors that a single model might miss, adding a second layer of assurance for code quality.
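A review round might look like the following session sketch; the finding shown is purely illustrative, and the article only guarantees that the review is read-only:

```
> /codex:review
  ⎿ Codex: payments.ts:42 — `amount` is compared to a string with loose equality;
    parse the value or use strict equality. (suggestion only — no files modified)
```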

More distinctively, the "adversarial review" feature lets developers use /codex:adversarial-review to ask Codex to actively challenge existing design decisions. This mode is designed to stress-test the soundness of the system architecture, uncovering potential performance bottlenecks or security risks from a deliberately fault-finding perspective.
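The difference in tone from a standard review might look like this hypothetical exchange (the specific objection is invented for illustration):

```
> /codex:adversarial-review
  ⎿ Codex (adversarial): the cache layer assumes a single writer; under concurrent
    deploys it can serve stale auth tokens. Justify this assumption or add locking.
```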

Additionally, the plugin introduces a task delegation mechanism: users can hand off complex debugging or repair tasks to a Codex sub-agent via /codex:rescue. This collaborative model distributes work automatically, letting the primary and auxiliary models each focus on their respective areas of expertise.
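A delegation round could look like the sketch below; the task wording and the hand-back flow are assumptions about how such a sub-agent hand-off would surface in a session:

```
> /codex:rescue fix the failing integration tests in tests/checkout
  ⎿ Delegating to Codex sub-agent… Codex iterates on the failing tests and
    proposes a patch; the primary Claude model reviews it before it is applied.
```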

GitHub: https://github.com/openai/codex-plugin-cc

Related Questions

Q: What is the name of the official OpenAI plugin that integrates with Claude Code?

A: The plugin is called codex-plugin-cc.

Q: What is the primary function of the /codex:review command in the new plugin?

A: The /codex:review command initiates a standard read-only code review to get professional improvement suggestions from the Codex model.

Q: What unique feature does the /codex:adversarial-review command provide?

A: The /codex:adversarial-review command enables an "adversarial review" function, which actively challenges existing design decisions to stress-test the system architecture and uncover potential performance bottlenecks or security risks.

Q: How does the /codex:rescue command facilitate task management?

A: The /codex:rescue command allows users to delegate complex debugging or repair tasks to a Codex sub-agent, enabling a collaborative model where the main and auxiliary models work in their respective areas of expertise.

Q: Where was the codex-plugin-cc project officially released?

A: The project was officially released by OpenAI on GitHub.

