Author: Daniel Barabander
Compiled by: Deep Tide TechFlow
Deep Tide Introduction: Three years ago, Cursor was a VS Code fork running on the OpenAI API. Today, it has released its own self-developed model, outperforming Claude Opus 4.6 on key benchmarks at one-tenth the price.
This article uses this case to systematically answer the most important strategic question on the internet: When should you open your API, and when should you close it? The conclusion serves as a warning to all platform builders.
Full text as follows:
Co-authored with Elijah Fox (@PossibltyResult).
In early March, Cursor released Composer 2—a proprietary programming model built on an open-source base model that outperforms Claude Opus 4.6 on key benchmarks at one-tenth the price. Three years ago, Cursor was a VS Code fork running entirely on the OpenAI API.
Cursor's journey from a dependent customer to a genuine competitor epitomizes the most critical strategic question on the internet: When should a company open its capabilities via an API, and when should it keep them closed?
We developed a framework to answer this question. It turns on two things. First: does opening the API erode your moat? And if so: can you find a moat elsewhere?
Whenever a company opens its intellectual property to the outside world via an API, it risks eroding its moat through demand aggregation. Simply put: Competitors can use this intellectual property to bootstrap the early stages of their own products, and once they accumulate enough demand, they can vertically integrate and cut off the API. Netflix did exactly this: it first licensed film and TV content, and then, once it had a large enough user base to amortize the huge fixed costs, it produced "House of Cards" in-house.
But the truly dangerous scenario is when the API's output can directly serve as input, compounding the quality of the competing product. This is a double whammy because competitors can both use the API to bootstrap and aggregate demand *and* directly improve their own production process. This is precisely what is happening in the AI field. Although OpenAI and Anthropic explicitly prohibit companies accessing their APIs from using the output to train competing models, they cannot stop companies like Cursor from using cutting-edge models to bootstrap the workflows needed to collect proprietary product data and improve their own models over time.
This seems to be exactly what happened with Composer 2. Cursor used frontier models like Claude and GPT to aggregate enough demand, reaching annualized revenue of approximately $2 billion, and then built a cutting-edge coding model on top of the open-source base model Kimi K2.5, applying continued pretraining and reinforcement learning on data collected from its IDE.
When this output/input dynamic exists, API providers have only two choices: either close the API to stem the bleeding, or keep it open and find complementary assets that leverage their moat.
Twitter is a classic case of taking the first path. It was initially known for its generous, freely accessible API—at its peak, developers could pull 500,000 tweets per month for free. But Twitter closed most of its interfaces because the API leaked its moat: the proprietary social graph. Today, the API is effectively closed: access is strictly rate-limited, prohibitively expensive at any meaningful scale, and building a serious product on it requires a tightly controlled B2B integration.
The second path is to keep the API open and supplement it with another source of power. No industry understands this better than crypto—where APIs are forced open, and the only way to survive is to find a moat elsewhere.
The lending protocol Morpho provides a representative case. The protocol got its start by building optimizer products on top of the open APIs of Aave and Compound. It then used the output of those protocols—their aggregated liquidity—as the input to bootstrap its own platform. In this sense, Cursor and Morpho followed strikingly similar paths in leveraging APIs to build competing products.
However, the truly interesting dynamic is what Morpho did next. Since Morpho itself is also an open API, it needed to find a moat to compensate for the lack of switching costs. So it decided to make the protocol as aggregatable as possible, instead building its moat through other means—such as the Lindy Effect and the network effects arising from deep liquidity from diverse lenders and borrowers.
Applying this framework forward, we can make a prediction: Over time, foundational model companies will likely choose the first path, gradually restricting API access to their most cutting-edge models.
To believe in the second path, you must believe that models like Opus and GPT are powerful and trusted enough to remain open—allowing competing model builders to use their output as input—and that third parties still won't leave. This means the model companies are betting on other sources of power: the Lindy Effect (if they believe users won't want to build trust in a new model from scratch), developer network effects (if they believe users will build ecosystems tightly dependent on the openness of their API), or economies of scale (if they believe maximizing API calls allows them to amortize the fixed costs of training cutting-edge models).
But current evidence points in the opposite direction. The "hottest model of the month" dynamic remains strong, and users migrate without hesitation to whichever model is best at the moment—we saw this again in the surge in Claude usage after the Opus 4.5 release. At the model level, developer network effects are also not yet evident: interoperability between APIs is increasing, not decreasing, and the surrounding tooling ecosystem is actively fighting lock-in, deliberately making it easy to switch suppliers. And economies of scale in training are currently insufficient as a moat, because distillation lets competitors train models with comparable performance at far lower cost. Without alternative sources of power, foundational AI companies will likely reserve only limited access for enthusiasts and focus their efforts on B2B deployments with strict usage controls and monitoring. Increasingly, the winning move will be to refuse to play this game.
This is a worrying outcome, because the current explosion of consumer AI products is built on top of these model providers. It also opens the door to counter-positioning: if the leading labs increasingly restrict access, there is value to be captured by a competitor with a weaker moat but a strong commitment to remaining open.
Thanks to @systematicls (@openforage) and @AlexanderLong (@Pluralis) for their thoughtful feedback on this article.