Cursor vs. Anthropic and OpenAI: Thanks for Raising Me, Now I'm Here to Take the Market

marsbit · Published 2026-03-31 · Updated 2026-03-31

Article Summary

Cursor, a VS Code fork initially built on OpenAI's API, has transitioned from a dependent customer to a formidable competitor by launching its proprietary coding model, Composer 2. This model reportedly outperforms Claude Opus 4.6 on key benchmarks at one-tenth the cost. The case exemplifies a critical strategic dilemma in tech: when to open or close an API. The authors propose a framework: opening an API risks eroding a company's moat if competitors can use it to bootstrap their own products and aggregate demand, eventually enabling vertical integration. This is especially risky in AI, where API outputs can directly improve a rival's model training and product refinement—exactly what Cursor achieved by leveraging OpenAI and Anthropic models to gather user data and refine its own offering. Companies then face two choices: restrict API access (like Twitter, which closed its API to protect its social graph) or keep it open and find an alternative moat, such as network effects or the Lindy effect (like crypto protocols, e.g., Morpho). The authors predict that leading AI companies (like OpenAI and Anthropic) will likely restrict access to their most advanced models over time, as switching costs remain low, network effects are weak, and distillation techniques reduce training costs. This could stifle consumer AI innovation but create opportunities for open alternatives.

Author: Daniel Barabander

Compiled by: Deep Tide TechFlow

Deep Tide Introduction: Three years ago, Cursor was a VS Code fork running on the OpenAI API. Today, it has released its own self-developed model, outperforming Claude Opus 4.6 on key benchmarks at one-tenth the price.

This article uses this case to systematically answer the most important strategic question on the internet: When should you open your API, and when should you close it? The conclusion serves as a warning to all platform builders.

Full text as follows:

Co-authored with Elijah Fox (@PossibltyResult).

In early March, Cursor released Composer 2—a proprietary programming model built on an open-source base model that outperforms Claude Opus 4.6 on key benchmarks at one-tenth the price. Three years ago, Cursor was a VS Code fork running entirely on the OpenAI API.

Cursor's journey from a dependent customer to a genuine competitor epitomizes the most critical strategic question on the internet: When should a company open its capabilities via an API, and when should it keep them closed?

We developed a framework to answer this question, which depends on two things. First: Does opening the API erode your moat? If yes: Can you find a moat elsewhere?

Whenever a company opens its intellectual property to the outside world via an API, it risks eroding its moat through demand aggregation. Simply put: Competitors can use this intellectual property to bootstrap the early stages of their own products, and once they accumulate enough demand, they can vertically integrate and cut off the API. Netflix did exactly this: it first licensed film and TV content, and then, once it had a large enough user base to amortize the huge fixed costs, it produced "House of Cards" in-house.

But the truly dangerous scenario is when the API's output can directly serve as input, compounding the quality of the competing product. This is a double whammy because competitors can both use the API to bootstrap and aggregate demand *and* directly improve their own production process. This is precisely what is happening in the AI field. Although OpenAI and Anthropic explicitly prohibit companies accessing their APIs from using the output to train competing models, they cannot stop companies like Cursor from using cutting-edge models to bootstrap the workflows needed to collect proprietary product data and improve their own models over time.

This seems to be exactly what happened behind Composer 2. Cursor used foundational models like Claude and GPT to aggregate enough demand, reaching an annualized revenue of approximately $2 billion, and then built a cutting-edge programming model using the open-source base model Kimi K2.5, plus data from continuous pre-training and reinforcement learning from its IDE.

When this output/input dynamic exists, API providers have only two choices: either close the API to stem the bleeding, or keep it open and find complementary assets that leverage their moat.
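The two-question framework above can be sketched as a small decision function. This is an illustrative encoding, not anything from the original text: the function name, enum values, and boolean inputs are all hypothetical labels for the article's concepts.

```python
# A minimal sketch of the article's two-question API framework, expressed as
# a pure function. All names here are illustrative assumptions.
from enum import Enum


class ApiStrategy(Enum):
    KEEP_OPEN = "keep the API open"
    CLOSE = "restrict or close the API"


def choose_api_strategy(erodes_moat: bool,
                        output_feeds_rival_input: bool,
                        has_alternative_moat: bool) -> ApiStrategy:
    """Decide whether to open or close an API.

    Question 1: does opening the API erode your moat (demand aggregation,
    or the more dangerous output/input dynamic, where API output directly
    improves a rival's product)?
    Question 2: if yes, can you find a moat elsewhere (Lindy effect,
    network effects, economies of scale)?
    """
    if not (erodes_moat or output_feeds_rival_input):
        # No erosion risk: openness costs nothing, so stay open.
        return ApiStrategy.KEEP_OPEN
    if has_alternative_moat:
        # The Morpho path: stay open, rely on Lindy + liquidity effects.
        return ApiStrategy.KEEP_OPEN
    # The Twitter path: the API leaks the only moat, so close it.
    return ApiStrategy.CLOSE


# The article's prediction for frontier model labs: erosion risk is real
# and no alternative moat currently holds, so they restrict access.
print(choose_api_strategy(erodes_moat=True,
                          output_feeds_rival_input=True,
                          has_alternative_moat=False))
```

The article's three worked cases map directly onto the inputs: Twitter (erosion, no alternative moat) closes; Morpho (erosion, alternative moat) stays open; and the frontier labs, on the authors' reading of the evidence, land in the same cell as Twitter.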

Twitter is a classic case of taking the first path. It was initially known for its generous, freely accessible API—at its peak, developers could pull 500,000 tweets per month for free. But Twitter closed most of its interfaces because the API leaked its moat: the proprietary social graph. Today, the API is effectively closed: access is strictly rate-limited, expensive at any meaningful scale, and building a serious product on top of it requires a tightly controlled B2B integration.

The second path is to keep the API open and supplement it with another source of power. No industry understands this better than crypto—where APIs are forced open, and the only way to survive is to find a moat elsewhere.

The lending protocol Morpho provides a representative case. The protocol was born by accessing the open APIs of Aave and Compound and building optimizer products on top of them. It then used the output of these protocols—their aggregated liquidity—as input to bootstrap its own platform. Thus, Cursor and Morpho followed strikingly similar paths in leveraging APIs to build competing products.

However, the truly interesting dynamic is what Morpho did next. Since Morpho is itself an open API, it needed to find a moat to compensate for the lack of switching costs. So it decided to make the protocol as aggregatable as possible and to build its moat through other means instead: the Lindy effect and the network effects that arise from deep liquidity supplied by a diverse set of lenders and borrowers.

Applying this framework forward, we can make a prediction: Over time, foundational model companies will likely choose the first path, gradually restricting API access to their most cutting-edge models.

To believe in the second path, you must believe that models like Opus and GPT are powerful and trusted enough to remain open, allowing competing models to use their output as input, yet third parties still won't leave. This means the model companies are betting on other sources of power: the Lindy Effect (if they believe users won't want to build trust in a new model), developer network effects (if they believe users will build ecosystems tightly dependent on the openness of their API), or economies of scale (if they believe maximizing API calls allows them to amortize the fixed costs of training cutting-edge models).

But current evidence points in the opposite direction. The 'hottest model of the month' dynamic remains strong, and users migrate without hesitation to the best model available at the moment—we saw this again in the recent surge in Claude usage after the Opus 4.5 release. At the model level, developer network effects are also not yet evident—interoperability between APIs is increasing, not decreasing, and the surrounding tooling ecosystem is actively fighting lock-in, deliberately making it easy to switch suppliers. And currently, economies of scale in the training phase are insufficient as a moat because distillation techniques allow competitors to train models with comparable performance at a much lower cost. Without alternative sources of power, foundational AI companies will likely reserve limited access for enthusiasts and focus their efforts on B2B deployments with strict usage controls and monitoring. Increasingly, the winning choice will be to refuse to play this game.

This is a worrying outcome because the current explosion of consumer AI products is built on top of these model providers. It also opens the door for counter-positioning: if the leading labs increasingly restrict access, there is value to be captured by choosing a competitor with a weaker moat but a strong commitment to remaining open.

Thanks to @systematicls (@openforage) and @AlexanderLong (@Pluralis) for their thoughtful feedback on this article.

Related Q&A

Q: What is the main argument of the article regarding when a company should open or close its API?

A: The article argues that a company should open its API if it can find a moat elsewhere to compensate, but should close it if the API erodes its core competitive advantage, especially when the API's output can be used as input to improve competing products.

Q: How did Cursor transition from a dependent customer to a competitor of Anthropic and OpenAI?

A: Cursor initially relied on OpenAI's API as a VS Code fork, used it to aggregate demand and gather proprietary product data, and then built its own advanced programming model (Composer 2) using an open-source base model and data from its IDE, achieving comparable performance at a lower cost.

Q: What is the 'output/input dynamic' mentioned in the article, and why is it dangerous for API providers?

A: The 'output/input dynamic' refers to the situation where a competitor can use the API's output directly as input to improve its own product quality. This is dangerous because it allows competitors to bootstrap demand and enhance their production process simultaneously, accelerating their ability to become direct competitors.

Q: What prediction does the article make about the future of API access for leading foundation model companies like OpenAI and Anthropic?

A: The article predicts that leading foundation model companies will likely restrict API access to their most advanced models over time, opting for stricter B2B deployments with controlled usage, as they lack alternative moats like strong network effects or Lindy effects to justify open access.

Q: How does the article use the example of Twitter to illustrate the strategy of closing an API?

A: Twitter initially had a generous, free API but eventually closed most access because the API leaked its moat—the proprietary social graph. It now imposes strict rate limits and high costs, making large-scale product development dependent on controlled B2B integrations.
