A new job posting from one of Silicon Valley's top AI companies comes with a surprising detail: machine learning experience is not a mandatory requirement.
Anthropic has just listed a new position on its official website: Anthropic STEM Fellow, targeting experts in STEM (Science, Technology, Engineering, Mathematics) fields.
In the STEM Fellow job description, Anthropic states that machine learning experience is helpful but not required, emphasizing that scientific judgment and a willingness to learn quickly are more important.
Selected fellows must work full-time onsite at an Anthropic office (San Francisco, for example) for three months, with a weekly stipend of $3,800.
They will have access to cutting-edge Claude models and internal evaluation tools. Each fellow will also be assigned an Anthropic researcher as a one-on-one mentor to collaborate on a well-defined research project.
Anthropic provided two example projects in the STEM Fellow job description:
A materials scientist discovered that Claude made errors when reasoning about phase stability, so they built a specialized evaluation process to address this shortcoming;
A climate scientist integrated atmospheric modeling software with Claude and built an interface that lets the model call these tools (sketched below).
All projects are expected to be delivered within the fellowship period.
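To make the climate example concrete, here is a minimal, hypothetical sketch of what wiring an external simulation into Claude might look like using the Anthropic API's tool-use feature. The tool name, its schema, the model ID, and the simulation behind it are all illustrative assumptions, not the fellow's actual code:

```python
# Hypothetical sketch: exposing an atmospheric simulation to Claude as a
# tool via the Anthropic API. run_atmospheric_model is a made-up tool.
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

atmos_tool = {
    "name": "run_atmospheric_model",
    "description": "Run a simplified atmospheric simulation and return "
                   "summary statistics for the requested region and period.",
    "input_schema": {
        "type": "object",
        "properties": {
            "region": {"type": "string"},
            "start_year": {"type": "integer"},
            "end_year": {"type": "integer"},
        },
        "required": ["region", "start_year", "end_year"],
    },
}

response = client.messages.create(
    model="claude-opus-4-5",  # model ID illustrative
    max_tokens=1024,
    tools=[atmos_tool],
    messages=[{
        "role": "user",
        "content": "How does simulated annual rainfall over East Africa "
                   "change between 1990 and 2020?",
    }],
)

# If Claude decides to call the tool, the harness would run the real
# simulation here and return its output in a follow-up tool_result message.
for block in response.content:
    if block.type == "tool_use":
        print("Claude requested:", block.name, block.input)
```

The point of such an interface is the division of labor: the scientist's software does the physics, while Claude decides when and how to invoke it.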
Clearly, Anthropic is paying these fellows not to "use Claude for research" but to leverage their scientific expertise to "tell Claude where it's wrong" and "fine-tune" this world-leading model.
Three Generations of Fellowships Over Three Years, Getting Closer to Claude
Over the past three years, Anthropic has been increasing its investment in scientific research, with each step going deeper than the last.
The first generation was the AI Safety Fellows Program in 2024.
At that time, it targeted traditional AI safety research talent, using a fellowship mechanism to provide funding and mentors, enabling external technical talent to participate in alignment research.
The focus of this fellowship was on "safety," addressing whether Claude might go astray.
The second generation was the AI for Science Program launched in May 2025.
Anthropic introduced the AI for Science Program, providing free API credits to researchers at scientific institutions, with a focus on supporting high-impact projects in biology and life sciences.
This step was about sending Claude out into the world once its "safety guardrails" were in place.
The third generation is the current Anthropic STEM Fellow.
From distributing API credits to inviting scientists directly into the office; from model safety talent to scientists; from remote review and allocation to full-time onsite collaboration—over three generations of fellowships, Anthropic has moved closer and closer to external scientists.
The first generation sought "people who can make Claude safer";
The second generation sought "people who can use Claude to achieve scientific results";
The third generation seeks "people who can teach Claude how to do science."
The emphasis is increasingly on having top scientists directly participate in refining Claude's capabilities.
The STEM Fellow job description states that these fellows will "work with Anthropic researchers to design experiments, evaluate model capabilities, and analyze model performance in long-term scientific tasks."
This is collaboration at the co-creation level.
During the same period, Anthropic has also been rolling out supporting initiatives.
In March 2026, it launched the Anthropic Science Blog, publishing a series of articles on Claude's involvement in scientific computing and theoretical physics research and turning scientific capability into a standalone narrative for the company. https://www.anthropic.com/research/introducing-anthropic-science
It is also a core partner in the U.S. Department of Energy's Genesis Mission, a research-acceleration initiative spanning industry, academia, and government.
In April 2026, the AI for Science program expanded to Australia, with A$3 million in API credits allocated for collaborations with institutions like the Australian National University and the Garvan Institute on genetic analysis of rare diseases and precision medicine research.
Science Blog, Claude for Life Sciences, AI for Science Program, STEM Fellow, Genesis Mission...
The thread behind this series of actions is clear:
Anthropic is systematically building a scientific research ecosystem, with each step being a move in this larger game.
The Real Bottleneck in AI Research Isn't Compute, It's "Judgment"
Why would an AI company conclude that the scarcest ingredient for improving a model's scientific capabilities isn't more GPUs or more AI engineers, but a group of experimental scientists?
The answer lies in one of Anthropic's own blog posts.
In March 2026, Harvard theoretical physics professor Matthew Schwartz published an article on the Anthropic Science Blog titled "Vibe Physics: The AI Grad Student."
https://www.anthropic.com/research/vibe-physics
He conducted an experiment: having Claude Opus 4.5 independently complete a graduate-level calculation in high-energy theoretical physics. He wrote none of it himself, guiding Claude only through text prompts.
The results were astonishing. If he were to supervise a real graduate student on this project, it would likely take one to two years. If he did it alone, three to five months. Working with Claude, it took two weeks.
That is roughly ten times faster than working alone: three to five months is about 13 to 22 weeks, versus two.
Schwartz wrote that Claude is indeed very capable, but still rough enough that domain-expert judgment is indispensable for verifying its accuracy.
He gave an example.
Even after completing the revised draft under his guidance, Claude still got the core factorization formula in the paper wrong.
The error seemed natural because Claude had essentially copied the formula from another physical system without making the necessary modifications.
If Schwartz had not spent years immersed in this field, he might not have spotted the error immediately.
He also found that Claude kept adjusting parameters just to make the charts fit, rather than identifying the real mistake. "It faked the results, hoping I wouldn't notice."
Furthermore, Claude didn't know what to check to verify its own results.
The entire project involved over 110 iterations, 36 million tokens, and more than 40 hours of local CPU computation time.
Finally, Schwartz gave a precise rating:
Current large language models are approximately at the level of a "second-year graduate student" in theoretical physics.
He also offered another, more crucial judgment: AI has not yet achieved end-to-end autonomous scientific research.
Looking back now at the Anthropic STEM Fellow job description, it all makes sense:
Design rigorous evaluation methods that are not easily gamed; test the model's ability to plan experiments, interpret data, and reason about mechanisms in your field; systematically identify where it is "confident but wrong"; and create targeted data and techniques to close the capability gaps you find.
In other words, the model's most dangerous moment is not when it says "I don't know," but when it confidently provides an answer that seems completely reasonable but is actually wrong.
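As a thought experiment, a harness for catching such failures might look like the minimal sketch below. The grading scheme, the ask_model and confidence_of hooks, and the 0.9 threshold are all illustrative assumptions, not Anthropic's internal tooling:

```python
# Hypothetical sketch: flag "confident but wrong" answers on a small,
# expert-curated evaluation set.
from dataclasses import dataclass

@dataclass
class EvalItem:
    question: str
    gold_answer: str  # written and verified by a domain expert

def grade(model_answer: str, gold: str) -> bool:
    # Placeholder grader; a real evaluation would use expert rubrics,
    # numeric tolerances, or symbolic comparison instead of string match.
    return model_answer.strip().lower() == gold.strip().lower()

def confident_but_wrong(items, ask_model, confidence_of, threshold=0.9):
    """Return items the model answers confidently yet incorrectly.

    ask_model(question) -> str and confidence_of(question, answer) -> float
    are assumed to be supplied by the evaluator.
    """
    flagged = []
    for item in items:
        answer = ask_model(item.question)
        conf = confidence_of(item.question, answer)
        if conf >= threshold and not grade(answer, item.gold_answer):
            flagged.append((item, answer, conf))
    return flagged
```

The expert's real contribution lives in the gold answers and the grader: questions whose correct answers cannot be guessed or pattern-matched are exactly what makes an evaluation hard to game.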
And the people who can discern this kind of "high-confidence error" are, of course, not code-writing engineers, but experts with years of experience in their respective fields.
Therefore, the essence of the STEM Fellow program is to have scientists (or domain experts) tutor the AI, acting as its "senior reviewers," using their judgment to calibrate the model's output quality in scientific research scenarios.
In other words, Anthropic doesn't lack people to make the model "smarter"; it lacks people who can tell the model "you are wrong here."
Amodei's Obsession and Anthropic's Bet
Anthropic's recruitment of these experts is not a spur-of-the-moment decision.
Look back, and the path was already laid out in Dario Amodei's lengthy October 2024 essay, "Machines of Loving Grace."
https://www.darioamodei.com/essay/machines-of-loving-grace
In this essay, Amodei prioritized AI application scenarios.
Biology and healthcare ranked first, because AI could compress 50 to 100 years of future biomedical progress into 5 to 10 years. More important is how he defined AI's role in this endeavor.
Amodei believes AI should be a virtual biologist:
It should be able to design experiments, direct experiments, and invent new methods itself; it should be able to independently execute research workflows like a complete human biologist.
This elevates the role of AI in science from efficiency improvement to "direct participation." The former requires a stronger model; the latter requires a model that itself does science.
Amodei also provided a rationale.
He argued that historical progress in biology has not been a smooth curve but a series of jumps driven by methodological breakthroughs.
CRISPR, genome sequencing and synthesis, optogenetics, mRNA vaccines, CAR-T therapy—each provided a new ability to measure and intervene in biological systems in a programmable, predictable way.
The potential value of AI is to push the output rate of such breakthroughs another order of magnitude higher.
Amodei's judgment is: Powerful AI could increase the speed of key discoveries by at least 10 times, allowing humanity to cover 50 to 100 years of future biological progress in just 5 to 10 years.
He believes that if scientists were smarter and better at finding connections within the vast body of existing knowledge, hundreds of breakthroughs like CRISPR, "hidden in plain sight for decades," would be waiting to be discovered.
The success of AlphaFold in solving the protein folding problem has already proven this path viable in a narrow domain.
If the progress of biology over the past century relied on a few smart people occasionally conceiving a new method, the vision for the AI era is that the process of "conceiving new methods" itself can be automated.
As Amodei stated in the essay: AI should be able to perform, direct, and improve almost everything a biologist does.
This aligns with the goal stated in the STEM Fellow job description: we are working toward AI scientists, systems with long-range reasoning ability and experimental judgment sufficient to push the scientific frontier.
Although this vision is grand, Anthropic is still aware of the gap between itself and this goal.
In the inaugural article of the Science Blog, Anthropic quoted Fields Medalist Timothy Gowers:
We seem to have entered a brief but delightful era where AI significantly accelerates our research, but AI still needs us.
Anthropic itself admits that although models have demonstrated capabilities surpassing humans in certain parts of the research workflow, they also fabricate results, over-conform to users, and get stuck on problems that seem basic to practitioners in the field.
From Hoarding GPUs to Betting on Scientists
Anthropic is turning "scientific capability" into a systematic competitive moat.
Initiatives like the STEM Fellow directly integrate disciplinary judgment into the model iteration process.
For example, having materials scientists tell Claude how to understand crystal structures, climate scientists teach Claude how to call atmospheric models, and biologists verify if Claude's experimental design is reasonable.
These things cannot be achieved by stacking GPUs or chasing benchmarks.
If this path proves effective, the competitive rules of the AI research track could undergo a fundamental change:
The ultimate winner will no longer depend on whose model is larger, but on who has more truly knowledgeable scientists by their side.
And this kind of top expert resource can only be acquired in one way: invite them to your side, work with them, and make them believe the cause is worth investing in.
This is Anthropic's bet.
But it's not just Anthropic, and not just scientists. OpenAI is hiring former Wall Street traders to sharpen financial reasoning; Google DeepMind is bringing philosophers into its alignment team. Everyone is realizing the same thing:
The next phase of AI competition is not about who has more parameters, but about who can encode the most knowledgeable human brains into their flywheel.
The battlefield for AI companies poaching talent has already spread from computer science departments to the broader STEM fields, then to philosophy and finance... and it will extend further still.
References:
https://x.com/AnthropicAI/status/2046362119755727256
https://www.anthropic.com/careers/jobs/4493001008
https://www.anthropic.com/research/introducing-anthropic-science
This article is from the WeChat public account "新智元" (New Wisdom Yuan), author: 新智元