Author: Salad Sauce
Food and sex are human nature. Most great business models rise from these desires, and AIGC is no exception.
a16z, a top-tier Silicon Valley VC, released a report on AI consumer trends. A report that should have been a serious discussion of AI productivity contained a laughable line graph: last year, American users spent more money on OnlyFans than on OpenAI and The New York Times combined.
A16Z Report Chart
Ironic, yet true—productivity can't compete with sexual appeal.
So, how much money can you make with AI-generated NSFW material?
Image Source: Giphy
Productivity Can't Compete with Sexual Appeal
The first wave of people creating AI virtual models knows this best.
Around the end of 2022, when tools like Midjourney and Stable Diffusion became stable enough to generate images consistently, some realized: this technology can create photorealistic human faces, produce in bulk, and cost almost nothing. They used AI to generate virtual female images, paired with a name, a backstory, and a few carefully designed "daily life" posts, operating on Instagram and TikTok as real people. Intimate replies in private messages were handled by ChatGPT, providing a so-called "girlfriend experience." The entire process was almost fully automated, and the operators behind it didn't even need to show their faces.
Image Source: Giphy
This model worked best on Fanvue, a competitor to OnlyFans. Fanvue has a more relaxed attitude toward AI content. According to its official disclosure, by November 2023, AI virtual models already contributed 15% of the platform's total revenue. By 2024, top AI virtual models were commonly earning over $20,000 per month, with some well-run accounts exceeding $200,000 annually. In 2025, this number continued to rise. According to Fanvue CEO Will Monange in a 2025 interview, the overall income of AI creators on the platform increased by over 60% year-on-year compared to 2024, making virtual models the fastest-growing content category on the platform.
OnlyFans officially prohibits AI content, but people keep finding loopholes. On Reddit, there are frequent discussions on how to use AI for NSFW content to make money on OnlyFans. A common method is to find a real woman to complete the platform's facial verification, then use her photos to train an AI model for mass content production.
Image Source: Giphy
No matter how strict the platform is, technology keeps advancing. Now, AI-generated images are so realistic that even seasoned viewers can't easily tell the difference. A few days ago, I saw a video on Xiaohongshu of a handsome guy posing suggestively in a car. If I hadn't seen the pinned comment saying "This AI has great taste," I wouldn't have realized it was an AI-generated guy.
Beyond adult content, another group of people made money with AI in a completely different direction: children's picture books.
Zhao Lei (pseudonym) was one of the earliest to enter this field. At the end of 2022, he had just been laid off from a product position at a major tech company and was researching new opportunities at home. Midjourney had just started producing stable images. Looking at the generated watercolor-style animals, an idea struck him: isn't this perfect for picture book illustrations? He spent two weeks researching Amazon KDP (Kindle Direct Publishing). The logic was extremely simple: ChatGPT writes the story, Midjourney generates the images, you handle layout and upload, then wait for the money. "It was really easy to make money back then," he said. "With a few books stacked up, you could have over ten thousand yuan in passive income per month."
But the window didn't stay open long. In the second half of 2023, AI picture books on KDP exploded. Nearly ninety thousand tutorials of the same type appeared on TikTok, all with titles like: EASY AI Money, Make $100K Monthly with Children's Books.
Everyone rushed into the same track, quickly diluting sales. Quality issues also surfaced: AI books featured dinosaurs with oversized front legs and children with the wrong number of fingers. Major platforms began requiring disclosure of AI use at upload, essentially ending this niche. "It's already very difficult to make money with AI picture books now," Zhao Lei said.
Then, he and those doing AI NSFW content converged on the same endpoint: selling courses (in this regard, the recently popular "Lobster" has taken it to the extreme).
Image Source: Giphy
Zhao Lei sells "AI Picture Books: From Zero to Publishing Full Process." Those doing NSFW sell "AI Virtual Model Setup Tutorials." The buyers are the next batch of people who just heard about it and think the window is still open.
Two tracks, two types of content, different packaging, selling the same thing: the illusion that "I too can be the pig that flies."
Aesthetics and "Old Skills" Hinder Many
What are the barriers to these seemingly easy, trend-riding money-making opportunities?
An internet UX designer friend gave me one answer: regional internet restrictions and membership fees. When Midjourney first came out, she wrote an operation guide, sold it for 99 yuan, and it's still generating income on Xiaohongshu. From a tool usage perspective, she was spot on—the barriers are indeed falling fast.
But as someone whose drawing skills stop at stick figures and who frequently produces ugly images with various AIGC tools, I must add something she didn't mention: another barrier called aesthetics.
Image Source: Giphy
People used to joke that AI can't replace designers because clients don't know what they want. I thought it was just a joke until I used these tools myself and found the joke applied to me word for word.
Last year, I started a media account and wanted to use the physics concept of "Integrable Island" for the logo. An Integrable Island can be understood as those things worth preserving amidst the chaotic flow of information. I found reference images for this concept, opened the tool, dropped the images in, wrote a bunch of descriptive prompts, and started generating. The results were a mess. After seven or eight revisions, each version was a different kind of mess. I knew I wanted a certain feeling, but I had no idea how to translate that feeling into instructions. Finally, I asked a designer friend for help. She spent twenty minutes, and the version she produced was on a completely different level than my two-hour struggle.
Top image: before modification; Bottom image: after modification
The problem wasn't the tool; it was me. More precisely, my inability to turn the vague aesthetic feeling in my head into precise language.
This dilemma isn't mine alone.
A friend in content operations started using Seedance for short videos last year. She learned the tool itself quickly, but what really stumped her was writing the storyboard. "I know I want a textured shot, but putting 'textured' into the prompt does nothing," she said. "I don't know what that texture specifically means in terms of lighting, shot type, or camera movement." The final product, she described, was "somewhat similar but wrong everywhere."
Another friend used Marble, a tool that generates 3D scenes from text and images, to create content assets. After repeatedly generating and rejecting images, she fiddled around for ages before realizing she had no reference point. She didn't know what "good" looked like, so she couldn't judge whether the generated content was what she wanted.
Marble Generated 3D Image Panorama
In stark contrast was a friend with photography experience. Using the same tool, his output quality was significantly higher. He said he didn't spend much time studying prompt techniques. "I just know what composition and lighting I want. Articulate that clearly, and the tool delivers accurately."
The tools' capabilities are rapidly improving, but the gap between users isn't narrowing; in some ways, it's widening. Before, everyone struggled to produce good work. Now, those with accumulated aesthetic judgment can produce excellent work, while those without remain stuck between "usable" and "good."
Tools are also responding to this reality. The popularity of one-click template tools like NotebookLM stems from a simple logic: they bypass the prerequisite that "you need to know what you want first." The template makes the aesthetic decisions for you; you just fill in the content. But that is also the template's ceiling: it solves "usable," not "beautiful."
This is equally clear in the text direction. A friend in marketing planning was recently transferred to handle PR and needed to produce a large volume of written copy. Her manager suggested using AI, but that only confused her more. She came to me for an AI writing manual I had written before. The crux: she had no feel for what makes a good PR piece. Without a standard of quality, she couldn't judge in which direction to push the AI-generated content.
Image Source: Giphy
I, on the other hand, find using AI for writing much smoother. Not because I understand the tools better, but because years as a print journalist have given me judgment about expression. I know what makes a sentence good or awkward; I know where the AI's output falls short and which direction to push it. Here, aesthetics becomes a very practical skill: it tells you the destination, rather than letting the AI run aimlessly again and again.
When tool capability is not the issue, aesthetics and "old skills" become the biggest barriers—those who use them poorly might even be worse off than those who don't use them at all.
I Want NSFW; Does the Distinction Between AI and Real Matter?
The first to taste the crab, as the saying goes, enjoy the sweetness but also attract controversy. A bizarre phenomenon is emerging in the AIGC circle: whether AI was used seems to matter more than whether the work is good.
Fang Yuan (pseudonym) is a brand designer. He took on a brand visual project and used AI tools to compress a process that used to take two weeks into three days. He felt the result was even better than before. He sent it off and waited for the client's response.
The client's first reply wasn't an evaluation of the work but, "So fast, did you use AI?" Before Fang Yuan could respond, another message followed: "We don't accept design work involving AI." He's still not sure if they even opened the attachment. He's frustrated; being too efficient became a crime.
Image Source: Giphy
He's not alone in this situation. AI has quietly become a coordinate for moral judgment in many people's evaluation systems. This is different from Photoshop or Excel. No one receives a retouched photo and asks, "Did you use editing software?" No one gets a financial report and probes, "Did you calculate this in Excel?"
AI triggers a different kind of suspicion, one closer to questioning "Did you actually *do* this?"
Creative work has always had an implicit contract: good work implies that someone has invested time, effort, and refinement into it. The emergence of AI disrupts this assumed causal link between "input" and "output."
If you produce something in three days using AI and someone else produces something of equal quality manually in two weeks, the former feels somehow "off," even if the quality is the same. This "off-ness" can be summarized as "unfairness."
The University of Arizona conducted a study showing that when designers proactively disclose AI assistance, even while explaining it was only an auxiliary step, client trust in the designer still drops by an average of 20%.
As AIGC technology matures, this issue is gradually escalating from a client-designer trust problem to a platform-wide problem.
Starting in 2023, Chinese national regulations requiring the labeling of AI-generated content were rolled out in succession: first, the "Internet Information Service Deep Synthesis Management Regulations" in January, primarily governing deep synthesis technologies like AI face swaps and synthetic voices; then, in August of the same year, the "Interim Measures for the Management of Generative Artificial Intelligence Services" officially took effect, bringing ChatGPT-like generative services under regulation. By March 2025, regulation escalated again: the Cyberspace Administration of China (CAC), jointly with multiple departments, issued the "Measures for the Identification of Artificial Intelligence Generated Synthetic Content." This time, the regulations covered all content forms: text, images, audio, and video.
But what regulations cannot clearly define are the boundaries.
A platform can identify a video 100% generated by AI, but it struggles with the gray areas: Does a selfie adjusted for color and composition using AI count as AI-generated content? A video where the footage is self-shot but editing and music are handled by AI—should it be labeled? An article where AI produced the first draft and a human revised 70% of it—who does the label belong to?...
Image Source: Giphy
Behind the difficulty of drawing boundaries lies the issue of accountability. Without clear definitions, responsibility has no anchor point. If a song's melody is written by AI and the lyrics are revised by a human, who is responsible in a copyright dispute? If a product review is generated by AI, the blogger only tweaked the tone, and the recommended product turns out to be subpar, who answers for it? When we ask "Was this made by AI?", we are actually asking a more fundamental question: Is there genuinely a person behind this work who takes it seriously and takes responsibility? Was someone thinking about your problem? Does someone care about the outcome?
The hardest thing to delineate is not the boundary, but the responsibility.