OpenAI Exposes 'Polaris' Project, the '2028 Great Unemployment' May Really Be Coming

marsbit2026-03-24 tarihinde yayınlandı2026-03-24 tarihinde güncellendi

Özet

OpenAI has disclosed its "North Star" project, aiming to build a fully automated multi-agent AI research system by 2028. This goal, announced by Chief Scientist Jakub Pachocki, includes launching an "autonomous AI research intern" capable of handling specific research tasks by September 2024. The project represents a major strategic shift, consolidating resources and products like ChatGPT and Codex into a unified super-app. The initiative reflects a broader industry trend, with competitors like Anthropic also advancing AI agent integration into workflows, such as through Claude Code on platforms like Discord. However, Pachocki acknowledges significant challenges in safety and controllability, noting that full control over LLMs remains unresolved. Economically, the project could be highly lucrative, with projections suggesting AI agents—including a $20,000/month "research agent"—generating $29 billion annually by 2029. The core implication is profound: if AI can autonomously conduct research, the pace of scientific and AI advancement could accelerate beyond human limitations, potentially reshaping labor markets and research paradigms.

Not long ago, an article titled '2028 Prediction' went viral online. The article pointed out that due to the advancement of AI, there will be a major wave of unemployment in 2028, and many people's jobs will be replaced by AI.

The article's release, combined with the Middle East situation, severely hit the US stock market that day. The incident was quite surreal, as the article was clearly written by AI, but it seemed to fit people's fear of 'AI causing massive unemployment,' thus causing such a significant impact.

Recently, a piece of news exposed by OpenAI has made people realize that the '2028 Great Unemployment' may not be groundless.

Recently, OpenAI's Chief Scientist, Jakub Pachocki, said something spine-chilling in an exclusive interview with MIT Technology Review—their 'Polaris' is to build a fully automated multi-agent research system by 2028.

The first phase goal will be achieved by September this year:

An 'autonomous AI research intern' capable of independently handling specific research problems.

This is not a placeholder in the product roadmap, nor is it a casual boast by Altman on X. This is OpenAI betting the entire company's resources on one direction.

The Meaning of 'Polaris'

When tech companies say 'Polaris,' it usually means two things: first, other matters must give way to it, and second, there is internal consensus within the company.

Judging from OpenAI's actions over the past two weeks, this assessment is largely correct.

On March 19, OpenAI announced the acquisition of the developer tools company Astral, with the team merged into the Codex department; at the same time, the company announced the integration of ChatGPT, Codex, and the browser into a unified desktop 'super app,' led by Head of Applications Fidji Simo, with Greg Brockman assisting in promoting organizational reforms.

The era of fragmented products has come to an end. OpenAI is pushing all its chips in one direction.

And this direction points to 'letting AI do research itself.'

Pachocki's logic is actually quite clear: the three technical routes of reasoning models, agents, and interpretability, which were previously fought separately within OpenAI, are now to be integrated under one goal—to create an AI researcher that can run autonomously in data centers for long periods. He said that once this is achieved, 'This is what we truly rely on.'

Former OpenAI researcher Andrej Karpathy's view is more direct—'All large language model frontier labs will do this; this is the final BOSS battle.' He added a sentence worth pondering: 'Scaling will of course be more complex, but doing this is just an engineering problem; it will succeed.'

Note his wording: not 'if,' but 'when.'

Anthropic in Action

On the same day OpenAI announced 'Polaris,' Anthropic quietly launched Claude Code Channels—a feature that allows developers to interact directly with a running Claude Code session via Telegram and Discord.

This matter seems small when viewed alone, but when placed in the overall trend, it becomes very important.

Anthropic's logic is: rather than telling developers what AI can do in the future, let it embed into developers' real workflows now. Telegram and Discord are not academic papers; they are places where programmers work every day. Having Claude Code live here means it changes from a 'tool' to a 'colleague.'

The reaction in the community confirms this judgment.

One user directly said: 'Claude killed OpenClaw with this update; you no longer need to buy a Mac Mini.' The meaning behind this sentence is that Anthropic's infrastructure improvements have already made open-source alternatives lose their cost advantage.

Looking at a broader timeline, Anthropic's iteration speed on Claude Code is indeed astonishing. In just a few weeks, it has integrated text processing, thousands of MCP skill integrations, and autonomous bug-fixing capabilities. While OpenAI is strengthening Codex by acquiring Astral, Anthropic has already sent Claude Code directly into developers' chat windows.

Both companies are rushing towards the same finish line, but their routes are completely different—OpenAI is working on the 'fully automated researcher of 2028,' while Anthropic is working on 'agent tools that can be used today.'

The Real Challenge

However, there is one detail that cannot be skipped.

Pachocki did something very rare in the exclusive interview—he proactively talked about the challenges of safety and controllability, and he was quite candid about it.

He said their idea is to use other large language models to 'monitor the AI researcher's scratchpad,' catching bad behavior before it becomes a problem. But then he admitted: 'The understanding of large language models is not sufficient for us to fully control them; it will take a long time to truly say 'this problem is solved.''

The chief scientist of a company says 'we don't have full control,' while simultaneously announcing the delivery of a fully automated AI research system by 2028. These two things placed together are worth everyone thinking about carefully.

This is not pessimism, but rather understanding the true difficulty of this matter. The fact that Pachocki can say this sentence itself indicates that OpenAI has a clear awareness of the hardships on this path internally.

On the technical level, a 'Karpathy Cycle' summarized by researchers is worth referencing—a successful automated AI research framework requires three elements: an agent with the authority to modify individual files, a single metric that can be objectively tested, and fixed experimental time limits.

This framework has already begun to produce results in practical environments. Shopify CEO Tobias Lütke publicly shared a case: he let an autoresearch agent run overnight, and the next morning, the agent had run 37 experiments and improved model performance by 19%.

From concept to implementation, this road is shorter than imagined.

The Future of a $20,000 Subscription Fee

The 'Polaris' project is not only a technical advantage but also a key to commercial victory.

A set of numbers from Paul Roetzer makes one want to look twice: he cited internal predictions from OpenAI that by 2029, the agent business alone could generate $29 billion in annual revenue, including a 'knowledge agent' with a monthly fee of $2,000 and a 'research agent' with a monthly fee of $20,000.

This set of numbers shows that 'AI researcher' was never just a technical goal; it is a revenue roadmap.

A 'research agent' with a monthly fee of $20,000, converted, is a fraction of a senior researcher's annual salary, but it can work 24/7 and run 37 experiments simultaneously. This is not about replacing a specific person, but redefining what 'research productivity' itself is.

This reminds me of Karpathy's words—'This is the final BOSS battle.' The BOSS he refers to is not a competitor, but the ceiling of AI capability itself.

Once AI can autonomously advance scientific research, the speed of AI progress will no longer be limited by the number and working hours of human researchers.

Pachocki also said the same thing, just expressed more restrainedly—'Once the system can run autonomously in the data center for a long time, this is what we truly rely on.'

The AI research intern of September 2026 is not the end goal, but an important starting point.

İlgili Sorular

QWhat is the core objective of OpenAI's 'North Star' project as mentioned in the article?

AThe core objective of OpenAI's 'North Star' project is to build a fully automated, multi-agent research system by 2028, capable of autonomously conducting scientific research.

QWhat specific milestone is OpenAI aiming to achieve by September 2024, according to the article?

ABy September 2024, OpenAI aims to launch the first phase: an 'autonomous AI research intern' that can independently handle specific research problems.

QHow does the article contrast the approaches of OpenAI and Anthropic in developing AI research capabilities?

AOpenAI is focused on building a long-term, fully automated research system for 2028, while Anthropic is developing immediate, practical AI agent tools integrated into developers' current workflows, such as Claude Code in Telegram and Discord.

QWhat significant challenge did OpenAI's Chief Scientist, Jakub Pachocki, acknowledge regarding the development of autonomous AI researchers?

AJakub Pachocki acknowledged that the understanding of large language models is not sufficient to fully control them, and ensuring safety and controllability remains a significant challenge that will take a long time to resolve.

QWhat potential commercial impact does the article suggest autonomous AI researchers could have by 2029?

AThe article suggests that by 2029, AI agent services, including a $20,000 per month 'research agent', could generate $29 billion in annual revenue for OpenAI by redefining research productivity and operating continuously at a fraction of the cost of human researchers.

İlgili Okumalar

20 Billion Valuation, Alibaba and Tencent Competing to Invest, Whose Money Will Liang Wenfeng Take?

DeepSeek, an AI startup founded by Liang Wenfeng, is reportedly in talks with Alibaba and Tencent for an external funding round that could value the company at over $20 billion. This marks a significant shift, as DeepSeek had previously relied solely on funding from its parent company,幻方量化 (Huanfang Quantitative), and had resisted external investment. The potential valuation would place DeepSeek among the top-tier AI model companies in China, comparable to competitors like MoonDark (valued at ~$18 billion) and ahead of recently listed firms like MiniMax and Zhipu. The funding—which could range from $600 million (for a 3% stake) to $2 billion (for 10%)—is seen as a move to secure resources for model development, retain talent, and support infrastructure needs, particularly as competition in inference models and AI agents intensifies. Both Alibaba and Tencent are eager to invest, not only for financial returns but also to integrate DeepSeek into their broader AI ecosystems. However, DeepSeek’s leadership is cautious about maintaining independence and may prefer financial investors over strategic ones to avoid being locked into a specific tech ecosystem. Alternative options, such as state-backed funds, offer longer-term capital and policy support but may come with slower decision-making and potential constraints on global expansion. With competing AI firms accelerating their IPO plans, DeepSeek’s window for securing optimal terms may be narrowing. The final decision will reflect a trade-off between capital, resources, and strategic independence.

marsbit42 dk önce

20 Billion Valuation, Alibaba and Tencent Competing to Invest, Whose Money Will Liang Wenfeng Take?

marsbit42 dk önce

After Losing 97% of Its Market Value, iQiyi Attempts to Use AI to Forcefully Extend Its Lifespan

After losing 97% of its market value since its 2018 peak, iQiyi is aggressively pivoting to AI in a desperate attempt to survive. At its 2026 World Conference, CEO Gong Yu announced an "AI Artist Library" with over 100 virtual performers and a new AIGC platform, "NaDou Pro," promising faster production and lower costs. This shift comes as the company faces severe financial distress: its market cap sits near delisting thresholds at $1.36 billion, with significant losses, declining membership revenue, and depleted cash flow. The AI strategy has sparked controversy. Top actors have issued legal threats against unauthorized digital replicas, while in Hengdian, over 134,000 background actors are seeing their already scarce job opportunities vanish as AI replaces them for background roles. iQiyi's move represents a fundamental shift from being a high-cost content buyer to a landlord" to becoming a "platform capitalist" that transfers production risk to creators. This contrasts with competitors like Douyin (TikTok's Chinese counterpart), which is investing heavily in *real* actor-led short dramas, betting that authentic human connection retains users better than AI-generated content. The article draws a parallel to the 1920s transition to "talkies," which made cinema musicians obsolete but ultimately enriched the art form. In contrast, iQiyi's AI drive is framed not as an artistic evolution but as a cost-cutting measure that could degrade storytelling, replacing genuine human emotion with algorithmically calculated stimulation and potentially numbing audiences' capacity for empathy. The core question remains: can a company focused solely on financial survival preserve the art of storytelling?

marsbit46 dk önce

After Losing 97% of Its Market Value, iQiyi Attempts to Use AI to Forcefully Extend Its Lifespan

marsbit46 dk önce

Only a 50% Chance of Passing This Year, Can the CLARITY Bill Succeed Before the Midterm Elections?

The CLARITY Act, which passed the House in July 2025 with strong bipartisan support (294-134), faces a critical juncture in the Senate. The Senate Banking Committee is expected to hold a markup soon, but key issues remain unresolved, including stablecoin yield provisions, DeFi regulations, and securing full Republican committee support. Other contentious points involve the Blockchain Regulatory Certainty Act (BRCA), ethics amendments for government officials, and SEC-related matters. The legislative calendar is tight, with limited time before the midterm elections. If the committee markup is delayed beyond mid-May, the chances of passage in 2026 drop significantly. Senator Cynthia Lummis has warned that failure this year could delay comprehensive crypto market structure legislation until 2030 or later. Galaxy estimates the probability of the CLARITY Act becoming law in 2026 is only about 50%. The bill provides crucial regulatory clarity by defining jurisdictional boundaries between the SEC and CFTC, establishing a path for decentralization, and bringing digital commodity intermediaries under federal regulation. Its passage is seen as vital before potential power shifts in the next Congress, which could bring less favorable leadership to key committees. The timeline is compressed, and the bill must compete for floor time with other priorities like Iran authorization and DHS appropriations. Key hurdles include finalizing the stablecoin yield compromise text, addressing law enforcement concerns about BRCA, and navigating political dynamics around SEC nominations. The outcome of the Banking Committee markup and the level of bipartisan support will be critical indicators of its future success.

marsbit1 saat önce

Only a 50% Chance of Passing This Year, Can the CLARITY Bill Succeed Before the Midterm Elections?

marsbit1 saat önce

İşlemler

Spot
Futures
活动图片