Not long ago, an article titled '2028 Prediction' went viral online. It argued that advances in AI would trigger a major wave of unemployment in 2028, with many people's jobs replaced by AI.
Its publication, coinciding with tensions in the Middle East, hit the US stock market hard that day. The episode was surreal: the article had clearly been written by AI, yet it tapped so precisely into the fear of 'AI causing mass unemployment' that it had real market impact.
Recently, news from OpenAI has made people realize that a '2028 great unemployment' may not be groundless.
OpenAI's chief scientist, Jakub Pachocki, said something spine-chilling in an exclusive interview with MIT Technology Review: the company's 'Polaris' goal is to build a fully automated multi-agent research system by 2028.
The first-phase goal is due this September:
an 'autonomous AI research intern' capable of independently handling specific research problems.
This is not a placeholder in the product roadmap, nor is it a casual boast by Altman on X. This is OpenAI betting the entire company's resources on one direction.
The Meaning of 'Polaris'
When a tech company names something its 'Polaris,' it usually means two things: everything else must give way to it, and there is internal consensus behind it.
Judging from OpenAI's actions over the past two weeks, this assessment is largely correct.
On March 19, OpenAI announced the acquisition of the developer-tools company Astral, merging the team into the Codex department. At the same time, it announced that ChatGPT, Codex, and its browser would be unified into a single desktop 'super app,' led by Head of Applications Fidji Simo, with Greg Brockman helping drive the organizational overhaul.
The era of fragmented products has come to an end. OpenAI is pushing all its chips in one direction.
And this direction points to 'letting AI do research itself.'
Pachocki's logic is clear: reasoning models, agents, and interpretability, the three technical tracks OpenAI previously pursued separately, are now being unified under a single goal, an AI researcher that can run autonomously in data centers for long stretches. Once that is achieved, he said, 'this is what we truly rely on.'
Former OpenAI researcher Andrej Karpathy's take is blunter: 'All large language model frontier labs will do this; this is the final boss battle.' He added a remark worth pondering: 'Scaling will of course be more complex, but this is just an engineering problem; it will succeed.'
Note his wording: not 'if,' but 'when.'
Anthropic in Action
On the same day OpenAI announced 'Polaris,' Anthropic quietly launched Claude Code Channels—a feature that allows developers to interact directly with a running Claude Code session via Telegram and Discord.
On its own this seems minor, but set against the broader trend it matters a great deal.
Anthropic's logic: rather than telling developers what AI will be able to do someday, embed it into their real workflows now. Telegram and Discord are not academic papers; they are where programmers spend every working day. Living there turns Claude Code from a 'tool' into a 'colleague.'
The reaction in the community confirms this judgment.
One user put it bluntly: 'Claude killed OpenClaw with this update; you no longer need to buy a Mac Mini.' The implication: Anthropic's infrastructure improvements have erased the cost advantage of open-source alternatives.
On a broader timeline, Anthropic's iteration speed on Claude Code is striking. In just a few weeks it has shipped text processing, thousands of MCP skill integrations, and autonomous bug-fixing. While OpenAI strengthens Codex by acquiring Astral, Anthropic has already put Claude Code directly into developers' chat windows.
Both companies are rushing towards the same finish line, but their routes are completely different—OpenAI is working on the 'fully automated researcher of 2028,' while Anthropic is working on 'agent tools that can be used today.'
The Real Challenge
However, there is one detail that cannot be skipped.
Pachocki did something rare in the interview: he proactively raised the challenges of safety and controllability, and he was quite candid about them.
He said the plan is to have other large language models 'monitor the AI researcher's scratchpad,' catching bad behavior before it becomes a problem. But then he admitted: 'The understanding of large language models is not sufficient for us to fully control them; it will take a long time to truly say this problem is solved.'
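The monitoring idea can be illustrated with a minimal sketch. Nothing here is OpenAI's actual implementation: the monitor is a keyword stub standing in for a second language model, and every name in it (`monitor`, `run_step`, `SUSPECT_PATTERNS`) is invented for illustration.

```python
# Hypothetical sketch of the "LLM monitors LLM" idea: a second model
# reviews each scratchpad entry from the research agent before the
# corresponding action runs. A keyword check stands in for the real
# monitoring model.

SUSPECT_PATTERNS = ("disable logging", "hide results", "bypass safety")

def monitor(scratchpad_entry: str) -> bool:
    """Return True if the scratchpad entry looks safe to act on."""
    text = scratchpad_entry.lower()
    return not any(pattern in text for pattern in SUSPECT_PATTERNS)

def run_step(scratchpad_entry: str, action):
    # Gate every agent action behind the monitor's verdict, so bad
    # behavior is caught before it becomes a problem.
    if not monitor(scratchpad_entry):
        return "blocked"
    return action()

print(run_step("plan: bypass safety checks to finish faster",
               lambda: "executed"))  # blocked
```

The design point is that the gate sits between reasoning and action: the agent can think whatever it likes in the scratchpad, but nothing executes until the overseer approves.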
A company's chief scientist says 'we don't have full control' while simultaneously committing to deliver a fully automated AI research system by 2028. Put side by side, those two statements deserve careful thought.
This is not pessimism; it is an accurate reading of how hard the problem is. That Pachocki can say it at all suggests OpenAI is clear-eyed internally about the hardships along this path.
On the technical level, a 'Karpathy Cycle' distilled by researchers is worth referencing. A successful automated AI research framework needs three elements: an agent with the authority to modify individual files, a single objectively testable metric, and a fixed experiment time limit.
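The three-element loop can be sketched in a few lines of Python. This is a minimal illustration under loud assumptions: the 'agent' is a random stub rather than a model, the metric is a toy quadratic, and every name is hypothetical; only the structure (one editable target, one objective metric, a fixed time budget) mirrors the framework above.

```python
import random
import time

# Toy sketch of the three-element loop: (1) an agent allowed to edit one
# target, (2) a single objective metric, (3) a fixed time budget. The
# "agent" randomly perturbs a parameter in place of a real LLM call.

TIME_BUDGET_S = 0.5               # element 3: fixed experiment time limit

def objective(params):
    # Element 2: one objectively testable metric (lower is better).
    return (params["lr"] - 0.03) ** 2

def propose_edit(params):
    # Element 1 stand-in: the agent may modify only this one parameter.
    return {"lr": max(1e-5, params["lr"] * random.uniform(0.5, 1.5))}

def research_loop(params):
    best, best_score = params, objective(params)
    runs = 0
    deadline = time.monotonic() + TIME_BUDGET_S
    while time.monotonic() < deadline:
        candidate = propose_edit(best)
        score = objective(candidate)
        runs += 1
        if score < best_score:        # keep only measured improvements
            best, best_score = candidate, score
    return best, best_score, runs

best, score, runs = research_loop({"lr": 0.1})
print(f"ran {runs} experiments, best lr={best['lr']:.4f}, score={score:.6f}")
```

The fixed deadline is what makes the loop comparable across runs: progress is measured in experiments per unit of wall-clock time, which is exactly the axis an overnight autonomous run competes on.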
This framework is already producing results in real settings. Shopify CEO Tobias Lütke shared one case publicly: he let an autonomous research agent run overnight, and by morning it had run 37 experiments and improved model performance by 19%.
From concept to implementation, this road is shorter than imagined.
The Future of a $20,000 Subscription Fee
The 'Polaris' project is not just a technical bet; it is also the key to commercial victory.
A set of numbers from Paul Roetzer makes you look twice: citing OpenAI's internal projections, he said that by 2029 the agent business alone could generate $29 billion in annual revenue, including a 'knowledge agent' at $2,000 a month and a 'research agent' at $20,000 a month.
This set of numbers shows that 'AI researcher' was never just a technical goal; it is a revenue roadmap.
A 'research agent' at $20,000 a month works out to $240,000 a year, a fraction of a senior researcher's total compensation, yet it works around the clock and can run dozens of experiments in parallel. This is not about replacing any particular person; it is about redefining what 'research productivity' itself means.
This brings Karpathy's words back to mind: 'the final boss battle.' The boss he means is not a competitor but the ceiling of AI capability itself.
Once AI can autonomously advance scientific research, the speed of AI progress will no longer be limited by the number and working hours of human researchers.
Pachocki said much the same thing, only more restrained: 'Once the system can run autonomously in the data center for a long time, this is what we truly rely on.'
The AI research intern of September 2026 is not the end goal, but an important starting point.