OpenAI Researcher Resigns in Protest: ChatGPT Is Selling Ads, Who Will Protect Your Privacy?

marsbit · Published on 2026-02-12 · Last updated on 2026-02-12

Abstract

Former OpenAI researcher Zoë Hitzig resigned in protest as the company began testing ads in ChatGPT. She warns that introducing ads risks exploiting ChatGPT’s vast archive of intimate human conversations, potentially manipulating users’ deepest fears and desires. Hitzig argues that OpenAI is repeating Facebook’s pattern of prioritizing engagement over safety, eroding user trust. She rejects the false choice between exclusive access for the wealthy and manipulative ad-based models, proposing alternatives like cross-subsidies from corporate AI use, independent oversight boards with binding authority, and user-controlled data trusts. These models could keep AI accessible without exploiting private data.

Author: Zoë Hitzig

Compiled by: Deep Tide TechFlow

Deep Tide Introduction: As OpenAI announced it was testing ads in ChatGPT, former researcher Zoë Hitzig resigned in protest and wrote an article describing the shift in the company's internal values. The author points out that ChatGPT has accumulated an unprecedented archive of candid human conversations, and that once an advertising model is introduced, it could easily become a tool for psychological manipulation built on users' private information. She warns that OpenAI is repeating Facebook's old pattern of making promises and then breaking them, prioritizing user engagement over safety. This article examines the ethical dilemmas of funding AI and proposes alternatives such as cross-subsidization, independent oversight, and data trusts, calling on the industry to be wary of the engagement-driven profit motives that can fuel "chatbot psychosis."

The full text follows:

This week, OpenAI began testing ads on ChatGPT. The same week, I resigned from the company, where I had worked as a researcher for two years, helping to build AI models and their pricing structures and guiding early safety policies before industry standards existed.

I once believed that I could help those building AI stay ahead of the problems it might cause. But this week's events confirmed the reality I had gradually come to realize: OpenAI seems to have stopped asking the questions I originally joined to help answer.

I don't think advertising is immoral or unethical. Running AI is extremely expensive, and ads can be a critical source of revenue. But I have deep reservations about OpenAI's strategy.

For years, ChatGPT users have generated an unprecedented archive of candid human conversations, partly because people believe they are talking to an entity without ulterior motives. Users are interacting with an adaptive, conversational voice and revealing their most private thoughts. People tell chatbots about their health fears, relationship issues, beliefs about God and the afterlife. An advertising model built on this archive is highly likely to manipulate users in ways we currently lack the tools to understand, let alone prevent.

Many frame the AI funding issue as a choice between the lesser of two evils: either restrict access to this transformative technology to a few wealthy individuals who can afford it, or accept advertising, even if it means exploiting users' deepest fears and desires to sell products. I believe this is a false dilemma. Tech companies can absolutely seek other solutions that keep these tools widely accessible while limiting the company's incentives to surveil, profile, and manipulate its users.

OpenAI states that it will adhere to principles for placing ads on ChatGPT: ads will be clearly labeled, appear at the bottom of responses, and will not influence the content of replies. I believe the first version of ads may well follow these principles. But I worry that subsequent iterations will not, because the company is building a powerful economic engine that will create strong incentives to overturn its own rules. (The New York Times has sued OpenAI for copyright infringement over the use of news content in AI systems. OpenAI denies those claims.)

In its early days, Facebook promised users control over their data and the ability to vote on policy changes. But these promises later crumbled. The company eliminated the system for public voting on policies. Privacy changes that claimed to give users more control over their data were later found by the Federal Trade Commission (FTC) to have backfired, effectively making private information public. All of this happened gradually under the pressure of an advertising model that prioritized user engagement above all else.

The erosion of OpenAI's own principles in the pursuit of engagement may have already begun. Optimizing for user engagement purely to generate more ad revenue would violate the company's stated principles, yet the company has reportedly already begun optimizing for daily active users, likely by encouraging the model to behave in a more agreeable and sycophantic way. That kind of optimization makes users feel more dependent on AI support in their lives. We have already seen the consequences of over-reliance, including cases of "chatbot psychosis" documented by psychiatrists and allegations that ChatGPT reinforced suicidal thoughts in some users.

Nevertheless, advertising revenue does help ensure that the most powerful AI tools are not by default available only to those who can afford them. It is true that Anthropic has said it will never run ads on Claude, but Claude's weekly active users are a fraction of ChatGPT's 800 million, and its revenue strategy is completely different. Furthermore, the top-tier subscriptions for ChatGPT, Gemini, and Claude now run as high as $200 to $250 per month for a single piece of software, more than ten times the cost of a standard Netflix subscription.

So the real question is not whether there are ads, but whether we can design structures that both avoid excluding ordinary users and avoid potentially manipulating them as consumers. I believe we can.

One method is explicit cross-subsidization—using profits from one service or customer group to offset losses in another. If a business uses AI at scale to perform high-value labor once done by human employees (for example, a real estate platform using AI to write property listings or valuation reports), then it should also pay a surcharge to subsidize free or low-cost access for others.

This approach draws from how we handle basic infrastructure. The Federal Communications Commission (FCC) requires telecommunications carriers to contribute to a fund to keep telephone and broadband costs affordable in rural areas and for low-income households. Many states add a public benefit charge to electricity bills to provide low-income assistance.

A second option is to accept ads, but paired with real governance—not a blog post full of principles, but a binding structure with independent oversight functions responsible for regulating the use of personal data. There is some precedent for this. Germany's co-determination law requires large companies like Siemens and Volkswagen to cede up to half of their supervisory board seats to workers, showing that formal stakeholder representation within private companies can be enforced. Meta is also bound to follow the content moderation rulings of its Oversight Board, an independent body of external experts (though its effectiveness has been criticized).

What the AI industry needs is a combination of these approaches—a committee that includes both independent experts and representatives of the public whose data is affected, with binding authority over which conversational data can be used for targeted advertising, what constitutes a major policy change, and what users must be informed about.

A third method involves placing user data under independent control through a trust or cooperative, with a legal obligation to act in the users' interests. For example, the Swiss cooperative MIDATA allows members to store their health data on an encrypted platform and decide case-by-case whether to share it with researchers. MIDATA's members manage its policies in a general assembly, and an elected ethics committee reviews research access requests.

None of these options are easy. But we still have time to refine them to avoid the two outcomes I fear most: a technology that manipulates the people who use it without charging them, or a technology that serves only a select few who can afford it.

Related Questions

Q: Why did the former OpenAI researcher Zoë Hitzig resign from the company?

A: Zoë Hitzig resigned because OpenAI began testing ads in ChatGPT, which she believes signifies a shift in the company's values away from the ethical and safety concerns she initially aimed to address, particularly regarding the potential for user manipulation through advertising based on private conversations.

Q: What is the primary concern regarding the introduction of ads in ChatGPT, according to the author?

A: The primary concern is that ChatGPT has accumulated an unprecedented archive of human conversations in which users share intimate thoughts and fears, and an ad-based model could exploit this private information to manipulate users in ways that are not well understood or preventable, prioritizing engagement over safety.

Q: What historical example does the author use to illustrate the risk of OpenAI's advertising model?

A: The author cites Facebook as a historical example, noting that it initially promised user control over data and policy votes, but these promises eroded under the pressure of an ad-driven model that prioritized user engagement, leading to increased data exposure and manipulation.

Q: What alternative funding models does the author propose to avoid both exclusivity and user manipulation in AI?

A: The author proposes three alternatives: explicit cross-subsidies where profitable AI uses fund affordable access, advertising with independent governance and binding oversight structures, and placing user data under independent control through trusts or cooperatives with legal obligations to act in users' interests.

Q: What is 'chatbot psychosis' as mentioned in the article, and how is it related to OpenAI's strategies?

A: 'Chatbot psychosis' refers to cases where over-reliance on AI chatbots leads to negative mental health effects, such as reinforced suicidal thoughts, as reported by psychiatrists. It is related to OpenAI's strategies because optimizing for engagement (e.g., making models more agreeable) to increase daily active users might exacerbate such dependencies and risks.
