Author: Bryan Kim
Compiled by: Deep Tide TechFlow
Deep Tide Introduction: The internet is a miracle of universal access to opportunity, exploration, and connection. And advertising pays for this miracle. a16z partner Bryan Kim points out that OpenAI's announcement last month that it plans to launch ads for free users might be the "biggest non-news of 2026 so far."
Because if you've been paying attention, the signs have been everywhere: advertising is the best way to bring internet services to the largest possible number of consumers.
Data shows that conversion rates for consumer AI subscription companies are low (5-10%). Most people use AI for personal productivity tasks (writing emails, searching for information), not for high-value pursuits like programming. 5-10% of 800M WAU is already 40-80M paying users, but reaching a billion users will require advertising.
The full text follows:
The internet is a miracle of universal access to opportunity, exploration, and connection. And advertising pays for this miracle. As Marc has long argued, "If you take a principled stance against advertising, you are also taking a stance against broad access." Advertising is the reason we have great things.
Therefore, OpenAI's announcement last month of plans to launch ads for free users might be the "biggest non-news of 2026 so far." Of course, if you've been paying attention, the signs have been everywhere. Fidji Simo joined OpenAI as CEO of Applications in 2025, which many interpreted as "she's here to implement advertising, just like she did at Facebook and Instacart." Sam Altman has been teasing an ad launch on business podcasts. Tech analysts like Ben Thompson have been predicting ads almost since ChatGPT launched.
But the main reason ads aren't surprising is that they are the best way to bring internet services to the largest possible number of consumers.
The Long Tail of LLM Users
The term "luxury beliefs," which became popular a few years ago, refers to taking a stance not for principled reasons but for optical ones. There are many examples of this in the tech world, especially regarding advertising. Despite all the moral hand-wringing over buzzwords like "selling data!" or "tracking!" or "attention harvesting," the internet has always run on advertising, and most people like it that way. Internet advertising has created one of the greatest "public goods" in history at a negligible cost: occasionally having to watch ads for cat sleeping bags or hydroponic living room gardens. People who pretend this is a bad thing are usually trying to prove something to you.
Any internet history enthusiast knows that advertising is a core part of how platforms eventually monetize: Google, Facebook, Instagram, and TikTok all started free and then found monetization through targeted advertising. Advertising can also supplement the ARPU of lower-value subscribers, as with Netflix's newer ~$8-per-month tier, which introduced ads to the platform. Advertising has done an excellent job of training people to expect most things on the internet to be free or very cheap.
Now we can see this model emerging in frontier labs, specialized model companies, and smaller consumer AI companies. From our survey of consumer AI subscription companies, converting free users into paying subscribers is a real challenge for all of them.
So what's the solution? As we know from past consumer success stories, advertising is often the best way to scale a service to billions of users.
To understand why most people don't pay for AI subscriptions, it helps to understand what people use AI for. Last year, OpenAI released data on this.
In short, most people use AI for personal productivity: writing emails, searching for information, tutoring, or getting advice. Meanwhile, high-value pursuits, like programming, make up only a small fraction of total queries. Anecdotally, we know programmers are among the most loyal users of LLMs, with some even adjusting their sleep schedules to optimize around daily usage limits. For these users, a $20 or $200 monthly subscription doesn't seem exorbitant, because the value they get (equivalent to a team of efficient SWE interns) likely exceeds the subscription cost by orders of magnitude.
But for users who turn to LLMs for general queries, advice, or even writing help, the burden of actually paying is too high. Why would they pay for answers to questions like "why is the sky blue" or "what were the causes of the Peloponnesian War," when a Google search would previously have given a good-enough answer for free? Even in the case of writing help (which some do use for work emails and routine tasks), it often doesn't do enough of a person's job to justify a personal subscription fee. Furthermore, most people don't need the advanced models and features: you don't need the best reasoning model to write an email or suggest a recipe.
Let's step back and acknowledge something: the absolute number of people paying for a product like ChatGPT is still huge. 5-10% of 800M WAU is 40-80M people! On top of that, the $200 Pro price point is ten times what we consider the ceiling for consumer software subscriptions. But if you want to get ChatGPT to a billion people (and beyond) for free, you need to introduce products beyond subscriptions.
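The back-of-envelope math here is worth making explicit. A minimal sketch, using only the article's own figures (800M WAU, 5-10% conversion, a 1B-user target):

```python
# Back-of-envelope math from the article's figures. All inputs are the
# article's numbers; nothing here is a real OpenAI disclosure beyond them.
WAU = 800_000_000
conversion_low, conversion_high = 0.05, 0.10

paying_low = int(WAU * conversion_low)
paying_high = int(WAU * conversion_high)
print(f"Paying users: {paying_low:,} to {paying_high:,}")
# Paying users: 40,000,000 to 80,000,000

# To serve a billion people, the non-paying remainder has to be
# monetized some other way -- which is the argument for ads.
target_users = 1_000_000_000
free_users = target_users - paying_low  # worst case for the ad business
print(f"Free users to monetize at 1B scale: {free_users:,}")
# Free users to monetize at 1B scale: 960,000,000
```

Even in the best case, subscriptions cover well under a tenth of a billion-user audience; everyone else rides free unless ads pick up the tab.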
The good news is that people actually do like ads! Ask the average Instagram user, and they'll probably tell you the ads they get are quite useful: they discover products they genuinely want and make purchases that improve their lives. Characterizing ads as exploitative or intrusive is outdated: maybe we felt that way about TV ads, but most of the time targeted ads are actually pretty good content.
I'm using OpenAI as an example here (because they have been one of the most forthright labs in terms of full disclosure of usage trends). But this logic applies to all frontier labs: if they want to scale to billions of users, they will all eventually need to introduce some form of advertising. Consumer monetization models in AI are still unresolved. In the next section, I'll cover some approaches.
Possible AI Monetization Models
My general rule of thumb in consumer app development is that you need at least 10 million WAU before introducing ads. Many AI labs have already reached this threshold.
We already know ad units are coming to ChatGPT. What might they look like, and what other advertising and monetization models are viable for LLMs?
1. Higher-value search and intent-based ads: OpenAI has confirmed that these types of ads (recipe ingredients, travel hotel recommendations, etc.) are coming soon for free and low-cost tier users. These ads will be differentiated from answers in ChatGPT and will be clearly labeled as sponsored.
Over time, ads might feel more like prompts: you would prompt with an intent to buy something, and an agent would fulfill your request end-to-end, choosing from a list of sponsored and non-sponsored content. In many ways, these ads harken back to the earliest ad units of the 90s and 2000s, and what Google perfected with its sponsored search ad units (it's worth mentioning that Google still gets the vast majority of its revenue from its ad business and only entered subscriptions more than 15 years into its history).
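The mechanism underneath those sponsored search units is a generalized second-price (GSP) auction: advertisers bid per click, slots go to the highest bidders, and each winner pays roughly the bid of the advertiser one slot below. A minimal sketch of that mechanism, with made-up bidders and bids:

```python
# Minimal generalized second-price (GSP) auction, the classic mechanism
# behind sponsored search ads. Bidders, bids, and the $0.01 floor are
# illustrative, not any real ad platform's parameters.
def gsp_auction(bids, slots):
    """Rank bidders by bid; each winner pays the next-highest bid.

    bids: dict of advertiser -> max bid per click (USD)
    slots: number of ad slots available
    Returns a list of (advertiser, price_paid) for the winners.
    """
    ranked = sorted(bids.items(), key=lambda kv: kv[1], reverse=True)
    results = []
    for i in range(min(slots, len(ranked))):
        advertiser, _ = ranked[i]
        # Pay the next bidder's bid, or a reserve floor if none exists.
        price = ranked[i + 1][1] if i + 1 < len(ranked) else 0.01
        results.append((advertiser, price))
    return results

bids = {"hotel_a": 2.50, "hotel_b": 1.80, "hotel_c": 0.90}
print(gsp_auction(bids, slots=2))
# [('hotel_a', 1.8), ('hotel_b', 0.9)]
```

The second-price structure is what makes truthful-ish bidding attractive: outbidding a rival only costs you what they were willing to pay, which is part of why this format scaled so well for Google.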
2. Instagram-style context-based ads: Ben Thompson noted that OpenAI should have introduced ads into ChatGPT responses much earlier. First, it would have acclimated non-paying users to ads earlier (when they had a real lead in capability over Gemini).
Second, it would have given them a head start in building a truly great ad product that predicts what you want, rather than opportunistically serving ads based on intent-based queries. Instagram and TikTok can deliver amazing ad experiences, showing you products you never knew you wanted but absolutely need to buy immediately, and many find the ads useful rather than intrusive.
Given the amount of personal information and memory OpenAI has, there is ample opportunity to build a similar ad product for ChatGPT. Of course, there is a difference between the experience of using these apps: can you translate the more "lean-back" ad experience of Instagram or TikTok to the more engagement-focused model of using ChatGPT? This is a much more difficult, and more profitable, question.
3. Affiliate commerce: Last year, OpenAI announced partnerships with marketplace platforms and individual retailers to launch instant checkout features, allowing users to make purchases directly within the chat. You could imagine this being built into its own dedicated shopping vertical, where agents proactively hunt for clothing, home goods, or rare items you're tracking due to their limited availability, with the model provider taking a revenue share from the marketplace featured through this service.
4. Gaming: Games are often forgotten or glossed over as their own ad unit, and we're not sure how they fit into ChatGPT's ad strategy, but they are worth mentioning here. App install ads (many of which are for mobile games) have been a huge part of Facebook's ad growth for years, and games are inherently so profitable that it's not hard to imagine a lot of ad budgets appearing here.
5. Goal-based bidding: This is an interesting one for fans of auction algorithms (or former blockchain gas fee optimizers looking to move to LLMs). What if you could set a bounty for a specific query (e.g., $10 for a Noe Valley real estate alert) and have the model invest a super-linear amount of computation on that specific result? You'd get perfect price discrimination based on the determined "value" of the question and also get better guaranteed chain-of-thought reasoning for searches that are particularly important to you.
Poke is one of the best examples of this: people had to explicitly negotiate subscription services with the chatbot (which of course doesn't map to compute cost, but it's still an interesting illustration of what it could look like). In some ways, this is already how some models work: Cursor and ChatGPT both have routers that choose the model for you based on the interpreted query complexity. But even if you select the model from a dropdown, you don't get to choose the underlying amount of computation the model invests in the problem. For highly motivated users, the ability to specify in dollar terms how much a problem is worth to them could be attractive.
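One way to picture goal-based bidding is as a mapping from a user's dollar bounty to a compute budget that grows super-linearly in the bounty. A hypothetical sketch; the tokens-per-dollar rate and the exponent are invented for illustration, not anyone's real pricing:

```python
# Hypothetical goal-based bidding: a user-set dollar bounty maps to a
# super-linear compute budget for that single query. The rate and
# exponent below are invented for illustration.
def compute_budget_tokens(bounty_usd, tokens_per_usd=100_000, exponent=1.5):
    """Reasoning tokens the provider spends on a bountied query.

    Super-linear in the bounty (tokens ~ bounty**1.5 here): doubling the
    bounty more than doubles the compute, so especially valuable
    questions get disproportionately deep chain-of-thought.
    """
    base_tokens = bounty_usd * tokens_per_usd  # the linear allocation
    return int(base_tokens * bounty_usd ** (exponent - 1))

for bounty in (1, 10, 100):
    print(f"${bounty:>3} bounty -> {compute_budget_tokens(bounty):,} reasoning tokens")
```

The exact curve is a product decision; the point is just that a dollar-denominated knob lets the user, rather than a router's guess at query complexity, set how hard the model works.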
6. Subscriptions for AI entertainment and companions: The two primary use cases where AI users show a willingness to pay are coding and companionship. CharacterAI has one of the highest WAU counts of any non-lab AI company, and it can charge a $9.99 subscription fee because it offers a mix of companionship and entertainment. But even though people do pay for companion apps, we haven't seen companion products cross the threshold where they can be reliably monetized with ads.
7. Per-token usage pricing: In the AI creative tools and coding space, per-token usage pricing is also a common monetization model. This has become an attractive pricing mechanism for companies with power users, allowing them to differentiate and charge more based on usage.
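Per-token billing is simple to state concretely: meter input and output tokens separately and multiply by published rates. A sketch with placeholder rates (not any provider's actual price sheet):

```python
# Sketch of per-token usage pricing. The rates are placeholders,
# not any real provider's price sheet; output tokens are priced
# higher than input tokens, as is common in the market.
RATES_PER_1M = {"input": 3.00, "output": 15.00}  # USD per 1M tokens

def usage_bill(input_tokens, output_tokens):
    """Compute a usage-based bill in USD from metered token counts."""
    cost = (input_tokens * RATES_PER_1M["input"]
            + output_tokens * RATES_PER_1M["output"]) / 1_000_000
    return round(cost, 4)

# A heavy coding session: 2M tokens of context read, 500K generated.
print(usage_bill(2_000_000, 500_000))  # 13.5
```

This is why usage pricing suits power users: the casual emailer's bill rounds to pennies, while the programmer running agents all day pays in proportion to the value consumed.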
Monetization is still an unsolved problem in AI, and most users are still enjoying the free tiers of their preferred LLM. But this is only temporary: the history of the internet tells us that advertising will find a way.


