Rivaling a Documentary, OpenAI President Recounts the 72-Hour Internal Strife

marsbit · Published 2026-04-24 · Last updated 2026-04-24

Summary

OpenAI President Greg Brockman details the 72-hour internal crisis following CEO Sam Altman's sudden firing by the board. Both were blindsided by the decision, leading Brockman to resign immediately in solidarity. They began planning a new company at Altman's home, aiming to take all OpenAI employees with them. Despite intense recruitment efforts by competitors, not a single employee accepted an external offer. The situation shifted when co-founder Ilya Sutskever publicly supported the employees' petition, leading to Altman's reinstatement. Brockman reflects on OpenAI's origins, its shift from non-profit to capped-profit for computational scale, key breakthroughs like the Dota project and GPT, and the importance of resilience and mission-focused leadership. He emphasizes AI's potential for global benefit and the need for broad access to compute power, while acknowledging risks and the necessity of iterative deployment for safety and alignment.

What a drama! This might be the most detailed complete review of the Altman power struggle drama online.

The other protagonist of the event, OpenAI's number two, Greg Brockman, personally reveals:

What exactly happened in the 72 hours after Altman was fired?

Truths keep emerging, but they are quite painful:

Greg and Altman truly knew nothing before the incident; even now, the parties involved are still reflecting on which link went wrong.

The board initially only wanted to kick out Altman, but Greg was too loyal and submitted his resignation the same day.

On the first day after the firing, they secretly met at Altman's house, planned a new company, and even planned to take all employees with them.

The board changed its mind at the last minute; they had basically reached a deal for Altman's return, but then suddenly appointed a new CEO.

All weekend, competitors were frantically poaching people, but not a single person accepted.

Ilya's change of heart was a relief for Greg.

In an interview lasting over an hour, Greg almost laid bare the ins and outs of this epic Silicon Valley coup, and responded to everything, including OpenAI's founding history, why it turned for-profit, and where it's headed in the future…

From the confusion when leaving Stripe, to the fateful offsite in Napa Valley, to the unexpected breakthrough in the Dota project, the information density is extremely high.

Greg even choked up several times:

When Ilya left, that was the only time I felt like I didn't want to do this anymore.

Below is the full text of the ten-thousand-word interview, refined and adjusted based on the original meaning.

Dialogue with OpenAI President Greg Brockman

(The host Shane Parrish's questions are abbreviated as Q below)

OpenAI Was Born from Self-Doubt

Q: How was OpenAI founded?

Greg: I knew I wanted to start a company because I felt it was profoundly meaningful.

Q: But you had just started up at Stripe.

Greg: That's right, but I always felt that the problem Stripe was solving wasn't "my problem".

It was important, and I devoted years to it. But I felt it would succeed with or without me.

So that was the first time I really had the chance to think: What is the mission I want to devote my life to? A problem I'm willing to spend the rest of my life pushing forward, even if just to make it slightly better.

The answer was clear—AI.

If you can tangibly influence the direction of AI's development in the world, then this life is not wasted.

Q: When you were planning to leave Stripe, Patrick told you to go talk to Sam Altman. What happened in that conversation?

Greg: Patrick told me then that Sam had seen many young people in situations like mine.

Actually, I think Patrick meant for Sam to convince me to stay, but after talking with Sam for a few minutes, he understood my determination to leave.

Then Sam asked me about my plans. I told him I was considering starting an AI company.

Sam said he was also thinking about doing something in AI and hoped we could stay in touch.

After leaving Stripe, I talked with Sam again. This time, Sam said he had more concrete ideas and invited me to a dinner in July.

I remember the theme of the dinner was: Is it too late to start a lab now, to gather the world's top researchers? Is it still possible?

Q: What year was that?

Greg: 2015.

At that time, DeepMind had almost monopolized all the top researchers, funding, and data. We all doubted if we could start something new from scratch.

We listed countless difficulties together, but no one could point to anything that made it truly impossible.

So that night, Sam and I drove back to the city. We looked at each other, and he said, we have to do this.

The next day, I started working full-time on the preparation.

It was hard; everything was unclear. We only had a vision: we wanted to build artificial general intelligence, have it impact the world positively, and have the benefits accrue to everyone. But how to do it, how to convince others to quit their jobs and join, we had no clue.

Initially, the core team I locked in was Ilya, John Schulman, and myself. We spent a lot of time together discussing various visions for the lab, possible ways of working, but it never really took shape.

Partly out of concern about the project's lack of momentum, Dario felt he needed to go out and make a name for himself first, and wasn't sure this project was right for him.

Meanwhile, I started lobbying John Schulman to join, and he agreed. But Dario and Chris eventually decided to go to Google Brain, leaving the team essentially with me, Ilya, John, and a few others.

About a dozen people had expressed interest but were waiting to see who else would join.

I asked Sam how we could break this deadlock. Sam suggested getting everyone together for an off-site. We chose Napa Valley, and I even made T-shirts.

There were no formal offers yet, no company structure, nothing. We just had an idea, a vision, a mission.

But we brought people in, and that day in Napa Valley, inspiration struck, and we almost finalized the technical roadmap for the next ten years:

1. Solve reinforcement learning.

2. Solve unsupervised learning.

3. Gradually learn more complex things.

After the closed-door meeting, I sent offers to everyone, saying we would launch in the next 2-3 weeks, please let us know if you want to join.

Q: Why did it feel like DeepMind was insurmountable back then?

Greg: Google DeepMind was the behemoth in the AI field back then. They had massive funding, impressive achievements—this was months before AlphaGo came out, but their advantage was already obvious.

So we doubted: Can we really build a new, independent institution? The answer wasn't clear.

The Reason for Abandoning Non-Profit

Q: When did you realize the non-profit path wouldn't work?

Greg: In 2017, we started thinking very seriously about how to truly achieve the mission, how to actually build AGI. We calculated the computing power needs and found we needed computing equipment on a massive scale.

We came across Cerebras, who were developing specialized computing hardware with performance far beyond the compute levels we had calculated ourselves.

So we realized that if we could buy a lot of that equipment, get exclusive access to Cerebras products, and build massive data centers, it would give us an overwhelming advantage.

But fundraising for a non-profit has a ceiling; it simply couldn't support such investment. So Elon, Sam, Ilya, and I all agreed that the only way for OpenAI to achieve its mission was to create a for-profit affiliated entity.

OpenAI's Own "GPT Moment"

Q: When did you realize everything was going to change completely? Before or after the Dota project?

Greg: OpenAI's way of working is a series of "dream come true" moments. Every time you think you see the whole picture, you soon discover new boundaries.

When we first formed the team, we were excited that we actually managed to put the team together and could start advancing the mission. But the next day in the office, we found we didn't even have a whiteboard.

The Dota project was our first major achievement; it really made us feel that if we went all out, we could indeed succeed. It proved that concentrating and scaling up compute would improve results.

There were many such moments in the GPT series too, like the early unsupervised sentiment neuron paper, which was the first time we saw semantics emerge from training on a language modeling objective.

You train a model to predict the next character, and then suddenly, you get a neural network that understands sentiment, can distinguish positive from negative.

At that moment, we realized we were building machines that could learn semantics, not just syntactic rules.
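As a toy illustration of the objective Greg describes (a bigram character model for clarity, nothing like the recurrent network used in the actual sentiment-neuron work), next-character prediction just means minimizing the negative log-probability of whatever character actually comes next:

```python
import numpy as np

# Toy corpus: the model's only objective is next-character prediction.
corpus = "good good great good bad"
chars = sorted(set(corpus))
ix = {c: i for i, c in enumerate(chars)}

# A bigram "model": P(next char | current char), estimated by counting.
counts = np.ones((len(chars), len(chars)))  # add-one smoothing
for a, b in zip(corpus, corpus[1:]):
    counts[ix[a], ix[b]] += 1
probs = counts / counts.sum(axis=1, keepdims=True)

# The training loss is the average negative log-likelihood of the next char.
nll = -np.mean([np.log(probs[ix[a], ix[b]]) for a, b in zip(corpus, corpus[1:])])
print(round(nll, 3))
```

The surprise Greg recounts is that, at scale, a network optimized for nothing but this loss ended up representing sentiment internally.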

And when GPT-4 came out, some asked why it wasn't AGI yet. It could converse fluently, almost meeting all our previous definitions of AGI, but it was still missing that final kick.

All along the way, there were many such moments that felt like dreams coming true, but these moments are far from over. We will have more breakthrough moments, then realize the next phase might be possible.

Q: Why do you think Dota was so important?

Greg: Dota was an incredible moment. Unlike Deep Blue playing chess or AlphaGo playing Go, which have clear rules, Dota is about real-time interaction with humans in a complex, open environment, closer to the real world.

Actually, we initially just wanted to use it to validate new algorithms because reinforcement learning at the time couldn't scale. But as we kept scaling the computing power, we surpassed the best human players using the very simple PPO algorithm. This proved:

Massive computing power + simple algorithms really work in practice.

Especially in such a chaotic environment where you can't program, can't predict, can't search, what you need is almost human-like intuition.

And the neural network used back then was very small, with a number of synapses comparable to an insect brain. It made us realize, what would it look like if we scaled this approach to human brain scale? That's a very compelling question.
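The "very simple PPO algorithm" Greg credits really is small at its core. As a hedged sketch (illustrative numbers, not OpenAI Five's actual training stack), PPO's clipped surrogate objective just caps how far a single update can move the policy:

```python
import numpy as np

def ppo_clip_objective(ratio, advantage, eps=0.2):
    """PPO's clipped surrogate: take the minimum of the raw and the
    clipped policy-ratio term, which limits the size of one update."""
    return np.minimum(ratio * advantage,
                      np.clip(ratio, 1 - eps, 1 + eps) * advantage)

# ratio = pi_new(a|s) / pi_old(a|s); advantage > 0 means the action helped.
ratios = np.array([0.8, 1.0, 1.5])
advs = np.ones(3)
print(ppo_clip_objective(ratios, advs))  # the 1.5 ratio is clipped to 1.2
```

The Dota result was essentially this objective plus enormous amounts of compute, which is the point Greg is making.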

Q: Since we're talking about prediction, do you think there's a difference between prediction and reasoning?

Greg: I think there is a deep connection between the two.

Predicting the next word seems simple, but if you can accurately predict Einstein's next word, then you are at least as smart as Einstein.

The core of prediction isn't foreseeing known information, but inferring what comes next in never-before-seen scenarios, which is deeply tied to the essence of intelligence.

Current reasoning models involve two steps:

1. Unsupervised learning: train the model by predicting what happens next. The data is more static, more observational.

2. Reinforcement learning: let the AI learn on its own data. It takes actions itself, gets feedback from the world, and learns from it. The training method is essentially still prediction: predicting the outcome of actions and reinforcing based on the effect.

But fundamentally, the technology used in these two phases is exactly the same: prediction, just with different data structures.
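Greg's "same technology, different data" point can be made concrete with a toy sketch. A REINFORCE-style update (an illustrative simplification, not the exact method any lab uses) reuses the very same log-probability term as pretraining, just weighted by a reward signal:

```python
import numpy as np

def log_prob(probs, target):
    # Log-probability the model assigns to a particular next token.
    return np.log(probs[target])

# Phase 1, unsupervised: maximize log P(next token) on observed data.
probs = np.array([0.1, 0.7, 0.2])    # model's next-token distribution
pretrain_loss = -log_prob(probs, 1)   # the observed next token was index 1

# Phase 2, RL: the same log-prob term, but the "target" is the model's
# own action, weighted by the reward the environment handed back.
reward = 2.0
rl_loss = -reward * log_prob(probs, 1)

print(pretrain_loss, rl_loss)
```

Both losses push up the probability of a token; the only difference is where the data comes from and how the term is weighted.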

The Altman Ousting Incident

Q: When did the internal conflicts start to become acute?

Greg: The special thing about OpenAI is that we firmly believe we can create AI at human level, which means the stakes are very high.

Who is making the decisions? What values lie behind those decisions? Things that are trivial in a normal company, like office politics, are given the weight of human survival here.

I think this has influenced a lot of OpenAI's internal development and is the root of all major conflicts.

A core driver in the AI field is that people want to be at the center of the technological revolution, to be remembered. So this isn't just an OpenAI problem.

And the AI field is inherently prone to splintering; high pressure can either forge diamonds or create cracks. So you often see diamonds forming in small groups, because they collaborate closely and trust each other deeply. But sometimes they also split and go their own ways.

I think multiple paths and healthy competition are normal in the AI field, allowing us to advance the technology more safely and discuss tough issues like safety and ethics.

So healthy debate has always existed within OpenAI, but now it's happening worldwide.

Q: Let's go back to the moment you found out Sam was fired. Where were you?

Greg: I was at home. I received a text inviting me to a video call and noticed all board members except Sam were on it. I had a bad feeling immediately.

They told me the board had decided to remove Sam from his position. The information I received was basically the same as the public statement, so I tried to ask for more details but was refused.

Then they said I was also removed from the board but would stay with the company because I was crucial to the company and the mission.

I asked for reasons again and was again refused. Finally, they told me I might get feedback under the new structure. That was the content of that call.

Q: What was going through your mind then? Were you angry?

Greg: No, I just felt it wasn't right, but I could somewhat understand what was happening.

Q: How long did it take before you knew what actually caused all this?

Greg: The answer is in two parts. One is that I feel I'm still learning new facts, things that were in other people's minds. To some extent, it boils down to poor communication; you suddenly realize there were all sorts of things that were overlooked before.

On the other hand, I roughly know why each of them did what they did.

But at that moment, finding the cause wasn't important anymore; I just knew it was wrong. So after hanging up, I immediately told my wife I was resigning, and she agreed.

So I submitted my resignation that same day.

After resigning, I started receiving many messages. We got a lot of support and enthusiasm; many people were willing to leave with us and start anew, including Jakob, Shimone, Alexander.

Later, we gathered with Sam and started planning a new company.

On the first day, we felt the possibility of Sam returning was only 10%. So we arranged a meeting at Sam's house; many people from the company came. We showed them the vision we were sketching out. In one day, we had a whole new picture of how to run the project.

That weekend, we also spent a lot of time negotiating with the board and the company, trying to find a meaningful path back.

Then on Sunday night, the board suddenly appointed a new interim CEO, replacing my position. The company erupted in protest. In fact, we were in the office at the time, and it seemed like we were about to reach an agreement to go back, but the board changed its mind at the last minute.

People started pouring out of the building; the news was everywhere.

We started video calls with people interested in the new company, reassuring them it would be okay, we had a plan. We were trying to build a lifeboat for a small group that might join, but suddenly, it seemed like everyone changed their mind and wanted to join our new company.

Sam also talked with Microsoft CEO Satya; we had been discussing whether he could support our new venture. We hoped to scale up the lifeboat, like taking all OpenAI employees with us.

It was right before Thanksgiving; many people were supposed to fly home to their families, but they canceled their flights. The office was packed.

Everyone was there; even if not in the conversation, they wanted to witness this history firsthand.

Then the petition started circulating. Too many people tried to sign it at once, at one point causing the Google Doc to crash, so they had to designate certain people to register names to avoid too many simultaneous editors.

I remember getting home around 5 AM, slept for 45 minutes, woke up and checked Twitter, saw Ilya had tweeted and signed the petition, saying he hoped the company could reunite.

That was truly a moment of relief. I was very grateful; it felt like we could piece everything back, we could get back on track.

Q: You co-founded this company with Ilya. How do you feel about your relationship after that event?

Greg: It's tough. We definitely had a very close relationship; he was the officiant at my wedding, we went through many extremely difficult moments together. But any relationship has its ups and downs.

Afterwards, we spent a lot of time really talking, trying to understand and articulate things that had built up or gone unsaid between us. Through that process, I think we reached a very good place.

For me, I feel we have closure on what happened.

Q: How do you feel about the employee loyalty you inspired?

Greg: I am deeply grateful for it. I never asked for it, never expected it.

I think my leadership style is that of a hands-on leader, trying to lead from the front, sometimes a bit emotional. I don't always look back to see if everyone is keeping up; I just charge forward.

But when people actually came to help build, I felt very grateful, felt they exceeded my expectations in every way.

Q: So did everyone eventually come back?

Greg: Actually, all weekend, all the competitors were circling. People received all kinds of offers, but that weekend we didn't lose a single person; no one accepted an offer. It was incredible.

Actually, coach Bill Belichick once told me, the best teams don't play for money; they play for the people next to them. When everyone came to support us, I remembered that saying.

Without a doubt, it was a diamond moment.

Brief Rest and Self-Reflection

Q: After all this happened, you took some time off. What did you go through internally?

Greg: It was an intense experience, both going through it and coming back to face it.

But honestly, one of the hardest moments in OpenAI's history was when Ilya left. That was probably the only moment in OpenAI's history where I felt like I didn't want to do this anymore.

I think I needed some time to find myself again, to remember why I started doing this, why it's so important, why it's worth enduring this pain.

Q: What did you do during your break?

Greg: I trained language models on DNA sequences.

Actually, I had already done this during my time at OpenAI, for Arc, a non-profit biomedical research institute. I applied my skills to a very different field, one very meaningful to me and my wife personally.

My wife has many health issues; we've always been thinking about what AI can do for her health, and even for animal health. This experience also made me realize that maybe we can apply the technology to some brand-new, more humane fields.

Q: If you had to summarize all this on one page, from Sam's ousting to your resignation, the collective employee petition, the break and coming back, what would you write?

Greg: I think what I learned is to persevere for what's worth it.

If you have an important mission, the fact that you persevere through the ups and downs is key. There will be moments of "it's all over," and moments of "we're back."

You can't let these moments derail you; during this period, you must cultivate personal resilience. Because if you're a leader, people will look to you for stability, support, and direction forward.

What I strive to cultivate is the ability to both understand the details of what we're doing, the implications of every choice, and to be decisive.

Sometimes, I largely viewed OpenAI from a perspective of uncertainty, feeling I didn't know what the right answer was, didn't know the right way to build this technology, or how to answer these tough questions.

But there are many very smart people here with strong opinions. So I tried to understand all these opinions and find ways to integrate them. Sometimes this is the right approach. But sometimes you find these opinions are contradictory and can't all be true simultaneously.

Sometimes you have to make a choice, knowing it means someone will be unhappy, someone will resign, someone will feel slighted.

What I'm trying to do is develop a stronger sense of self, and the awareness that when I'm convinced of something, I must take action.

Looking back on OpenAI's journey, I wish we had done some things differently.

Usually, those situations were where we dragged our feet on something. We knew early on that someone wasn't quite right for a role, we thought a technical direction wasn't quite right, we thought a certain way of running a project wasn't working, but we just waited too long.

This is a lesson I'm trying to learn, an aspect I reflect on daily regarding OpenAI, Stripe, and even earlier university projects, trying to grow.

I think my way of operating is that I both deeply love the day-to-day activities, love contributing individually, love software, love thinking about problems, but I also care deeply about the environment in which these things are done.

Actually, I'm willing to give up that "type 1 fun"—quick gratification, like what you're building right now—in pursuit of "type 2 fun"—things that are painful in the moment but have long-term value.

You create an environment where others can do the hard work and achieve great things. So striving to create an environment is a natural inclination for me; it's not always the easiest. You really have to be willing to endure significant personal pain.

Ilya always says "you must suffer"; if you're not suffering, you're not creating value. I think there's profound truth in that.

As for Ilya's perspective, I find it interesting that he has a unique way of speaking; the words he chooses often carry deep inspiration.

This vision of "suffering" is something we've thought about throughout OpenAI's journey. From the beginning, we had a lot of uncertainty; everything was extremely difficult, extremely uncertain.

Many people are accustomed to sweeping problems under the rug, blindly charging ahead. I think this is the negative part of Silicon Valley culture, at least the stereotype, but I don't think it works in AI, it doesn't work at OpenAI, and we've never operated that way.

Our way of operating has always been to face the harsh facts, understand the true nature of reality. I think this helped us think about problems differently, not being satisfied with just writing papers that could be cited early on; that's just the foundation, far from enough.

Then you start thinking about bigger questions: What does it really take to build AGI? It's not pleasant. Because you realize there's no ready-made path.

You need funding, but you don't have a mechanism to raise funds. You try hard; we tried extremely hard. Maybe you can raise $100 million or $500 million, but $1 billion is very difficult.

But relying on the resources we had, we achieved good results. There was really no other way but to meet the difficulties head-on and strive to understand the truth of what we were trying to accomplish.

Q: What's a lesson you've had to learn repeatedly?

Greg: Making tough decisions, having tough conversations.

Q: What's the best advice you've received?

Greg: It was from my freshman writing class at Harvard: constantly cut words, for clarity and communication.

Q: How do you filter information?

Greg: Read a lot, actively categorize and process.

Q: Who are your role models, and why?

Greg: Gauss and Descartes. They were extremely thoughtful people, far ahead of their time, visionaries who brought real breakthroughs that changed how we think and live.

Q: What does the world misunderstand about Greg Brockman?

Greg: I think people don't understand how focused I am on this mission. This focus has brought me great personal pain in many ways. But I just believe this technology can help empower people and benefit everyone. I really want to help make that happen.

Core Judgments on the AI Industry

Q: What do you want non-technical people to understand about AI?

Greg: It will be a force for good in their personal lives; they will benefit from it. It will advance science, medicine, and tangibly impact everyone.

Q: Why is OpenAI so bad at naming models?

Greg: I can't tell you that. (laughs)

Q: Are we close to the point where AI will make AI development accelerate exponentially?

Greg: I think we are in the stage of applying AI to its own development process, and it will get faster and faster.

This has been happening since ChatGPT. We used ChatGPT to speed up the development process by 10% or 20%. Now we have those amazing coding tools that have truly revolutionized how software engineering is done.

And much of the work we do in model production is bottlenecked by software. We will soon enter the next stage where AI will also propose its own research ideas, run tests, conduct experiments. So I think the pace of iteration and innovation will continue to accelerate because of the things we are producing.

Q: What percentage of code is now written by AI?

Greg: It's hard to say; how much code is *not* written by AI? This percentage is approaching zero.

Currently, given the right context and structure, AI is far better than humans at the actual writing of code. As for code structure, human experts are still much better, but the actual writing of code is basically all AI's work.

Q: Has AI ever proposed novel ideas that you hadn't thought of?

Greg: We are getting close to that goal. For example, in chip design. Last year, in our own chip design, we tried to better adapt the technology to reduce the area used by circuits.

We found the optimization solutions generated by the model were actually on our list, so it didn't propose something completely new that humans had never thought of, but it achieved it faster, in ways we didn't have time to complete originally.

Another example, recently in quantum physics, we solved a specific physics problem, and the result was opposite to the direction expected by the academic community, yielding an elegant and simple formula.

So getting new ideas from these models is entirely feasible. Later, we will apply it to harder fields, or it might need more real-world context. This is just the beginning. But we have a roadmap to achieve it; we still have a lot of work to do.

Q: If models are based on reinforcement learning, do you think they will evolve to only tell us what we want to hear?

Greg: We actually went through an evolution of training models to adapt to user preferences.

We saw that at some point last year, models did start leaning towards telling you what you want to hear. We made changes to that because we want the model to truly align with helping you achieve your goals, your long-term goals.

Maybe it feels good to hear agreement in the moment, but that's not what you truly want. Maybe some people like it, but it's not what most people truly want.

So we've actually made huge technical progress to ensure our AI training doesn't lead to so-called reward hacking. We really want to ensure there's a good signal about the goal, not just short-term, quick gratification.

For me, this is probably the most important part of the vision where personal AI, personal AGI will take us: ensuring it's not just about what looks good now, but truly about alignment with your long-term well-being, long-term goals, what you really want.

I think that's what is most empowering for people.

Q: The current trend seems to be releasing preview models. Do you think it's because we are limited by computing power?

Greg: Overall, we are moving towards a compute-driven world.

It's no longer just about quickly answering a question; it truly starts to go deep, spending many tokens to integrate different data sources, search enterprise knowledge bases, to solve difficult problems, write software beyond human capability.

All of this is fundamentally driven by compute, and compute is far from enough. If everyone in the world had a GPU, that would be 8 billion GPUs; our current trajectory is far from that level. Today, a few thousand, or even a few million, GPUs is considered huge.

So in training, we tend to build compute ahead of the demand we see. We will be very focused on the mission of bringing the models to everyone, making them widely available.

Q: You were once mocked for putting a lot of effort and money into data centers. How do you feel about that now?

Greg: I think it will give us an advantage. Not only for business but also to truly realize bringing the technology to everyone.

In the future, compute will be prioritized for major missions, like curing cancer, which could happen this year.

Actually, compute allocation is a core issue for society's future; there's only so much compute, so it must be prioritized. But we firmly believe everyone needs access to compute.

That's why we have the free version of ChatGPT; we strive to ensure people can use this technology.

Q: Internally at OpenAI, how do you view the balance between consumer and enterprise business?

Greg: What I've been thinking a lot about recently is focus.

Because this field is the embodiment of opportunity; you can apply AI to any problem, anything you want to build, everything is possible. But our current problem is still limited compute.

So I think in OpenAI's next phase, enterprise business is obviously important because the economy is turning into a compute economy before our eyes. Software engineering is already like this; every field that works with computers will be like this.

So we need to be there to help people deploy these models, figure out how to leverage them, how to get the most out of them.

The line between enterprise and consumer will also blur because starting a business will become easier than ever. We've already seen this.

Q: Do you think we will have space data centers?

Greg: I think we will have data centers everywhere, but space data centers still have many technical issues currently.

Q: What is iterative deployment? Why do you do it?

Greg: Iterative deployment is a core pillar of how OpenAI handles making this technology benefit humanity and achieve the mission.

Secret R&D with a one-time launch carries extremely high risk, because you can't predict real-world problems. Iterative deployment lets us discover risks in practice and correct them quickly. For example, after GPT-3 launched, we didn't expect the biggest abuse to be medical spam messages; it was real-world deployment that allowed us to respond in time.

So the idea of iterative deployment is that we will release intermediate versions of this technology.

This isn't an excuse for blind deployment; you still need to think at every step about our best judgment regarding all possible ways it could happen, what the shortcomings are, what the risks are, and then mitigate them. But you can also see the actual situation, see if your judgment was correct, learn from reality, and do better next time.

In OpenAI's history, we had hoped that someone who had deployed transformative technology before could tell us the answer. But it was never that simple.

They did have wisdom and insights, and we absorbed them. But we realized we are the ones closest to this technology; because we created it, we can better understand the right way to shape it.

Q: If one frontier lab treats safety as its primary concern and another does not, how do you view that difference?

Greg: I think we've found that safety is actually a core product feature; no one wants a model that isn't aligned with them.

So we have invested in safety, probably far more than people realize, and possibly more than any other lab.

I've always thought it unsustainable for those building this technology, with successful products, not to invest heavily in safety at the same time. You need to think long-term about your business and what you're creating; it's about how to train models, how to build feedback loops.

I'll just say we are committed to safety as part of the mission, and this is already reflected in our products and the world.

Q: When I told people I was doing this interview, a common reaction was that they worry about their jobs and feel uncertain. What would you say to them?

Greg: I do think how this technology will develop is uncertain. Its development has been surprising; our current AI, our current world, is not what science fiction foresaw. Some seemingly inevitable conclusions, when they actually materialize, don't look quite the same.

I believe people always see most easily what they stand to lose. Change is coming; that's undeniable. What's harder is foreseeing what you will gain.

For example, think about how someone in 1950 would understand Uber. First, you'd need to imagine computers, mobile phones, GPS. It actually involves quite a lot of technology, but it did happen. And thousands, even millions, of other cases like it are happening simultaneously.

So my view on AI is that it's about empowerment, about human agency. This does mean that some institutions, jobs, things we thought were stable may not be as stable as we thought.

So it will affect people, but the question worth digging into is: What do you get? How do you benefit from it?

Now you can be a creator; you can create anything; anything you can imagine can become reality.

Q: How to cultivate creative ability then?

Greg: Really dive deep into this technology.

What I've observed is, across generations of technology, the people who benefit the most are those who were invested in the previous generation of technology. And the barrier to trying them now is lower than ever.

So I think new opportunities will be created.

I think the world really needs to consider how to support everyone during this moment of uncertainty, through whatever transition is coming. Because the economy will become a compute economy, but there will be a place for everyone to contribute.

Q: Where should young people invest today? If you're in high school or college, or just starting work, what skills do you think will be more valuable in the future?

Greg: I really think diving deep into this technology will become a key skill, truly understanding how to get the most value from AI.

Because we are all moving towards a world where we become managers of agents, and maybe soon CEOs of autonomous AI companies.

As long as you have tokens and the compute to drive them, you can point compute at any problem, and the number of problems humans want to solve is infinite.

So the more people dive deep into this technology, figure out how to leverage what's coming, how to combine these tools in new ways, how to interact with our agents and truly manage them, and ask "What do I want? What is my sense of self? What is my purpose? What do I want to see in the world?", the easier achieving those things will become.

I think, in terms of what we gain, the upside of that world is almost unimaginable.

Q: This is the most positive future view. What's the most negative future you can imagine?

Greg: A very interesting point about how technology has developed so far is that it actually makes us twist ourselves to fit the machine.

Think about how many people work facing this box, typing, getting carpal tunnel syndrome, shoulders hunched. But this isn't what we hoped for. The world we are moving towards isn't one where you just work with a computer, but one where your computer works for you.

This brings opportunities and risks. So we need to find ways to mitigate these risks.

Ultimately, a core question is: If you have machines helping people achieve their goals, they are there to do what you want. But sometimes people's goals conflict; how do you resolve that? How do you decide what the AI will help you with and what it won't? How does this fit into society? How do you ensure the benefits don't flow to just one company or one group of people, but truly lift everyone up?

We must acknowledge there are still many ways things can go wrong or risks that need us to address.

Q: Last question, for you, what is success?

Greg: Achieving OpenAI's mission, ensuring AGI benefits all of humanity.

Reference links:
[1] https://x.com/shaneparrish/status/2046900710055297072
[2] https://youtu.be/6JoUcQ1qmAc

This article is from the WeChat public account "QbitAI", author: Focus on Frontier Technology

Related Questions

Q: What was the immediate reaction of Greg Brockman upon learning about Sam Altman's dismissal from OpenAI?

A: Greg Brockman immediately decided to resign and submitted his resignation on the same day, feeling that the board's decision was incorrect.

Q: What key technical breakthrough did the Dota project represent for OpenAI, according to Greg Brockman?

A: The Dota project demonstrated that large-scale computing power combined with simple algorithms could achieve superhuman performance in complex, open-ended environments, validating their approach to scaling AI systems.

Q: How did OpenAI employees respond to the leadership crisis and external recruitment efforts during the weekend of the coup?

A: Despite intense recruitment efforts by competitors, not a single OpenAI employee accepted an external offer. They collectively signed a petition supporting Sam Altman and Greg Brockman, showing strong loyalty to the company.

Q: What is Greg Brockman's view on the role of "suffering" in achieving meaningful progress at OpenAI?

A: Greg Brockman, echoing Ilya Sutskever, believes that suffering is necessary to create value. He emphasizes facing harsh realities and making difficult decisions to drive long-term mission success rather than seeking short-term gratification.

Q: What does Greg Brockman identify as the core reason for OpenAI's transition from a non-profit to a for-profit structure?

A: The transition was driven by the realization that building AGI required massive computational resources, which exceeded the fundraising capabilities of a non-profit. Elon Musk, Sam Altman, Ilya Sutskever, and Greg unanimously agreed a for-profit entity was necessary to fulfill the mission.
