AI Wealth Tutorial: Start with NSFW, Then Sell Courses

marsbit · Published 2026-03-23 · Last updated 2026-03-23

Abstract

The article "AI致富教程:先搞色色,再去卖课" (AI Money-Making Guide: Start with Adult Content, Then Sell Courses) explores how AI-generated content (AIGC) is being monetized, particularly through adult entertainment and low-barrier creative work, before ultimately shifting to selling instructional courses. A16Z's report highlights a striking trend: in the U.S., user spending on OnlyFans surpassed combined spending on OpenAI and The New York Times, reflecting a broader pattern where "sexual appeal outperforms productivity." Early adopters used tools like Midjourney and Stable Diffusion to create AI-generated virtual models, offering "girlfriend experiences" on platforms like Fanvue, where AI models now contribute significantly to revenue. Others turned to AI-generated children's books, though market saturation and quality issues quickly eroded profitability. Both paths often lead to the same endpoint: selling courses that package the "get-rich-quick" illusion to newcomers. The real barrier, however, isn't technical proficiency but aesthetic judgment: the ability to translate vague ideas into precise prompts. Those with design, photography, or writing backgrounds excel because they know what "good" looks like; others struggle even with advanced tools. The rise of AI also brings ethical and trust issues: clients often reject AI-assisted work on principle, perceiving it as "unfair" or lacking human effort, and while regulations now require AI-generated content to be labeled, the boundaries remain unclear.

Author: Salad Sauce

Food and sex are human nature. Most great business models rise from these desires, and AIGC is no exception.

A16Z, a top-tier Silicon Valley venture firm, released a report on AI consumer trends. The report, which was meant to be a serious look at AI productivity, contained one laughable line graph: last year, American users spent more money on OnlyFans than on OpenAI and The New York Times combined.

A16Z Report Chart

Ironic, yet true—productivity can't compete with sexual appeal.

So, how much money can you make with AI-generated NSFW material?

Image Source: Giphy

Productivity Can't Compete with Sexual Appeal

The first wave of people creating AI virtual models knows this best.

Around the end of 2022, when tools like Midjourney and Stable Diffusion became stable enough to generate images consistently, some realized: this technology can create photorealistic human faces, produce in bulk, and cost almost nothing. They used AI to generate virtual female images, paired with a name, a backstory, and a few carefully designed "daily life" posts, operating on Instagram and TikTok as real people. Intimate replies in private messages were handled by ChatGPT, providing a so-called "girlfriend experience." The entire process was almost fully automated, and the operators behind it didn't even need to show their faces.

Image Source: Giphy

This model worked best on Fanvue, a competitor to OnlyFans with a more relaxed attitude toward AI content. According to its official disclosure, by November 2023 AI virtual models already contributed 15% of the platform's total revenue. By 2024, top AI virtual models were routinely earning over $20,000 per month, with some well-run accounts exceeding $200,000 annually. In 2025, the numbers kept climbing: according to Fanvue CEO Will Monange in a 2025 interview, the overall income of AI creators on the platform increased by over 60% year on year, making virtual models the fastest-growing content category on the platform.

OnlyFans officially prohibits AI content, but people keep finding loopholes. On Reddit, there are frequent discussions on how to use AI for NSFW content to make money on OnlyFans. A common method is to find a real woman to complete the platform's facial verification, then use her photos to train an AI model for mass content production.

Image Source: Giphy

No matter how strict the platform is, technology keeps advancing. Now, AI-generated images are so realistic that even seasoned viewers can't easily tell the difference. A few days ago, I saw a video on Xiaohongshu of a handsome guy posing suggestively in a car. If I hadn't seen the pinned comment saying "This AI has great taste," I wouldn't have realized it was an AI-generated guy.

Beyond adult content, another group of people made money with AI in a completely different direction: children's picture books.

Zhao Lei (pseudonym) was one of the earliest to enter this field. At the end of 2022, he had just been laid off from a product position at a major tech company and was researching new opportunities at home. Midjourney had just started producing stable images. Looking at the generated watercolor-style animals, an idea struck him: isn't this perfect for picture book illustrations? He spent two weeks researching Amazon KDP (Kindle Direct Publishing). The logic was extremely simple: ChatGPT writes the story, Midjourney generates the images, you handle layout and upload, then wait for the money. "It was really easy to make money back then," he said. "With a few books stacked up, you could have over ten thousand yuan in passive income per month."
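The pipeline Zhao Lei describes can be sketched in a few lines. This is a minimal illustration, not his actual setup: `write_story` and `illustrate` are hypothetical stand-ins for calls to a text model and an image model, and the real work (quality review, KDP formatting, upload) is omitted.

```python
# Hedged sketch of the "story -> images -> layout" picture-book pipeline.
# Both helpers are hypothetical placeholders, not real service calls.

def write_story(theme: str, pages: int) -> list[str]:
    # Stand-in for a text-generation call (e.g., a ChatGPT prompt per page).
    return [f"Page {i + 1}: a short scene about {theme}." for i in range(pages)]

def illustrate(page_text: str) -> str:
    # Stand-in for an image-generation call; returns a made-up file path.
    return f"art/{hash(page_text) & 0xFFFF:04x}.png"

def layout_book(theme: str, pages: int = 8) -> list[dict]:
    # Pair each page's text with its illustration, ready for manual layout.
    story = write_story(theme, pages)
    return [{"text": text, "image": illustrate(text)} for text in story]

book = layout_book("a watercolor fox who learns to share")
print(len(book))  # 8 pages assembled for review before any upload
```

Even in this toy form, the structure makes the article's point: the automation is trivial, so the differentiator is the human judgment applied when reviewing each generated page.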

But the window didn't stay open long. In the second half of 2023, AI picture books on KDP exploded. Nearly ninety thousand tutorials of the same type appeared on TikTok, all with titles like: EASY AI Money, Make $100K Monthly with Children's Books.

Everyone rushed into the same track, quickly diluting sales. Quality issues also surfaced, with AI books featuring dinosaurs with oversized front legs and children with the wrong number of fingers. Major platforms began requiring disclosure of AI use at upload, essentially ending the track. "It's already very difficult to make money with AI picture books now," Zhao Lei said.

Then, he and those doing AI NSFW content converged on the same endpoint: selling courses (in this regard, the recently popular "Lobster" has taken it to the extreme).

Image Source: Giphy

Zhao Lei sells "AI Picture Books: From Zero to Publishing Full Process." Those doing NSFW sell "AI Virtual Model Setup Tutorials." The buyers are the next batch of people who just heard about it and think the window is still open.

Two tracks, two types of content, different packaging, selling the same thing: the illusion that "even a pig can fly if it stands in the right wind."

Aesthetics and "Old Skills" Hinder Many

What are the real barriers to these seemingly easy money-making opportunities riding the hype?

An internet UX designer friend gave me one answer: regional internet restrictions and membership fees. When Midjourney first came out, she wrote an operation guide, sold it for 99 yuan, and it's still generating income on Xiaohongshu. From a tool usage perspective, she was spot on—the barriers are indeed falling fast.

But as someone whose drawing skills stop at stick figures and who frequently produces ugly images with various AIGC tools, I must add something she didn't mention: another barrier called aesthetics.

Image Source: Giphy

People used to joke that AI can't replace designers because clients don't know what they want. I thought it was just a joke until I used these tools myself and found the joke applied to me word for word.

Last year, I started a media account and wanted to use the physics concept of "Integrable Island" for the logo. An Integrable Island can be understood as those things worth preserving amidst the chaotic flow of information. I found reference images for this concept, opened the tool, dropped the images in, wrote a bunch of descriptive prompts, and started generating. The results were a mess. After seven or eight revisions, each version was a different kind of mess. I knew I wanted a certain feeling, but I had no idea how to translate that feeling into instructions. Finally, I asked a designer friend for help. She spent twenty minutes, and the version she produced was on a completely different level than my two-hour struggle.

Top image: before modification; Bottom image: after modification

The problem wasn't the tool; it was me. More precisely, my inability to turn the vague aesthetic feeling in my head into precise language.

This dilemma isn't mine alone.

A friend in content operations started using Seedance for short videos last year. She learned the tool itself quickly, but what really stumped her was writing the storyboard. "I know I want a textured shot, but putting 'textured' into the prompt does nothing," she said. "I don't know what that texture specifically means in terms of lighting, shot type, or camera movement." The final product, she described, was "somewhat similar but wrong everywhere."

Another friend used Marble, a tool that generates 3D scenes from text and images, to create content assets. After repeatedly generating and discarding images, she fumbled for a long time before realizing she had no reference point: she didn't know what "good" looked like, so she couldn't judge whether the generated content was what she wanted.

Marble Generated 3D Image Panorama

In stark contrast was a friend with photography experience. Using the same tool, his output quality was significantly higher. He said he didn't spend much time studying prompt techniques: "I just know what composition and lighting I want. Articulate that clearly, and the tool delivers accurately."

The tools' capabilities are rapidly improving, but the gap between users isn't narrowing; in some ways, it's widening. Before, everyone struggled to produce good work. Now, those with an accumulated aesthetic sense can produce excellent work, while those without remain stuck between "usable" and "good."

Tools are also responding to this reality. The popularity of one-click template tools like NotebookLM stems from a simple logic: they bypass the prerequisite of "you need to know what you want first." The template makes the aesthetic decisions for you; you just fill in the content. But the template's ceiling is also here: it solves "usable," not "beautiful."

This is equally clear on the text side. A friend in marketing planning was recently transferred to handle PR and needed to produce a large volume of written copy. Her manager suggested using AI, but that only confused her more, and she came to me for an AI writing manual I'd written before. The crux was that she had no feel for what makes a good PR piece. Without a standard of quality, she couldn't judge which direction to push the AI-generated content.

Image Source: Giphy

I, on the other hand, find using AI for writing much smoother. Not because I understand the tool better, but because years as a print journalist have given me judgment about expression. I know what makes a sentence good or awkward, where the AI's output falls short, and which direction to push it. Here, aesthetics becomes a very practical skill: it tells you the destination, instead of letting the AI run aimlessly again and again.

When tool capability is not the issue, aesthetics and "old skills" become the biggest barriers—those who use them poorly might even be worse off than those who don't use them at all.

I Just Want NSFW; Does It Matter Whether It's AI or Real?

The earliest movers not only taste the sweetness but also attract controversy. A bizarre phenomenon is emerging in the AIGC circle: whether AI was used seems to matter more than whether the work is good.

Fang Yuan (pseudonym) is a brand designer. He took on a brand visual project and used AI tools to compress a process that used to take two weeks into three days. He felt the result was even better than before. He sent it off and waited for the client's response.

The client's first reply wasn't an evaluation of the work but, "So fast, did you use AI?" Before Fang Yuan could respond, another message followed: "We don't accept design work involving AI." He's still not sure if they even opened the attachment. He's frustrated; being too efficient became a crime.

Image Source: Giphy

He's not alone in this situation. AI has quietly become a coordinate for moral judgment in many people's evaluation systems. This is different from Photoshop or Excel. No one receives a retouched photo and asks, "Did you use editing software?" No one gets a financial report and probes, "Did you calculate this in Excel?"

AI triggers a different kind of suspicion, one closer to questioning "Did you actually *do* this?"

Creative work has always had an implicit contract: good work implies that someone has invested time, effort, and refinement into it. The emergence of AI disrupts this assumed causal link between "input" and "output."

If you produce something in three days using AI and someone else produces something of equal quality manually in two weeks, the former feels somehow "off," even if the quality is the same. This "off-ness" can be summarized as "unfairness."

A University of Arizona study found that when designers proactively disclose using AI assistance, even while explaining that it was only an assistive step, client trust in the designer still drops by an average of 20%.

As AIGC technology matures, this issue is gradually escalating from a client-designer trust problem to a platform-wide problem.

Starting in 2023, Chinese national regulations requiring the labeling of AI-generated content were rolled out in succession: first, the "Internet Information Service Deep Synthesis Management Regulations" in January, primarily governing deep synthesis technologies such as AI face swaps and synthetic voices; then, in August of the same year, the "Interim Measures for the Management of Generative Artificial Intelligence Services" officially took effect, bringing ChatGPT-style generative services under regulation. By March 2025, oversight escalated again: the Cyberspace Administration of China (CAC), jointly with multiple departments, issued the "Measures for the Identification of Artificial Intelligence Generated Synthetic Content," which this time covered all content forms: text, images, audio, and video.

But what regulations cannot clearly define are the boundaries.

A platform can identify a video that is 100% AI-generated, but it struggles with the gray areas: Does a selfie whose color and composition were adjusted with AI count as AI-generated content? A video whose footage is self-shot but whose editing and music were handled by AI, should it be labeled? An article where AI produced the first draft and a human revised 70% of it, whose label is it?

Image Source: Giphy

Behind the difficulty of drawing boundaries lies the issue of accountability. Without clear definitions, responsibility has no anchor point. If a song's melody is written by AI and a human changes the lyrics, who is responsible in a copyright dispute? If a product review is generated by AI, the blogger only adjusted the tone, and the recommended product turns out to be subpar, who answers for it? When we ask "Was this made by AI?", we are really asking a more fundamental question: Is there actually a person behind this work who takes it seriously? Was someone thinking about your problem? Does someone care about the outcome?

The hardest thing to delineate is not the boundary, but the responsibility.

Related Questions

Q: According to the A16Z report mentioned in the article, how did consumer spending on OnlyFans compare to spending on OpenAI and The New York Times last year?

A: According to the A16Z report, last year American users spent more money on OnlyFans than on OpenAI and The New York Times combined.

Q: What is the primary business model that the article suggests many early AIGC adopters ultimately transition to after their initial success?

A: The article suggests that many early adopters, whether in AI-generated adult content or children's picture books, ultimately transitioned to selling courses (e.g., "AI Picture Books: From Zero to Publishing Full Process" or "AI Virtual Model Setup Tutorials") to newcomers who believed the opportunity window was still open.

Q: Beyond technical access, what key personal skill does the article identify as a major barrier to creating high-quality AI-generated content?

A: The article identifies aesthetic sense, or "taste," as the major barrier. It argues that the ability to translate a vague aesthetic feeling into precise prompts is crucial; the tools themselves do not close this gap, which separates those who can create "good" content from those who can only create "usable" content.

Q: What was one of the main reasons cited for the AI-generated children's picture book market on Amazon KDP becoming difficult to profit from?

A: The market became difficult due to an explosion of competition (nearly 90,000 similar tutorials on TikTok) that diluted sales, coupled with quality issues in the AI-generated books (e.g., dinosaurs with oversized front legs, children with the wrong number of fingers). Platforms then began requiring disclosure of AI use, effectively ending the easy-money era.

Q: What underlying concern does the article suggest is at the heart of the question "Did you use AI?" in creative professional contexts?

A: The article suggests the underlying concern is responsibility and accountability. The question reflects doubt about whether genuine human effort, care, and polish went into the work, and who is ultimately responsible for the quality and consequences of the final product if something goes wrong.
