What did Jensen Huang actually say at the Davos Forum?
On the surface, he was promoting robotics; in reality, he was launching a bold 'self-revolution.' With one speech he declared the old era of 'stacking GPUs' over, and in doing so may have handed the Crypto sector a once-in-a-lifetime opportunity.
Yesterday, at the Davos Forum, Huang pointed out that the application layer of AI is exploding, and the demand for computing power will shift entirely from the 'training side' to the 'inference side' and the 'Physical AI side.'
This is very interesting.
As the biggest winner of the 'computing arms race' in the AI 1.0 era, NVIDIA is now actively advocating a shift toward 'inference' and 'Physical AI,' sending a very clear signal: the era of 'brute-force miracles,' where stacking GPUs to train ever-larger models was enough, is over. From now on, AI competition will revolve around real-world implementation under an 'application-first' principle.
In other words, Physical AI is the second half of Generative AI.
LLMs have already read virtually all the data humans have accumulated on the internet over decades, yet they still don't know how to twist open a bottle cap the way a human does. Physical AI aims to go beyond AI's intellectual capabilities and solve the problem of 'unity of knowledge and action.'
One reason this shift is unavoidable is simple: Physical AI cannot rely on the 'long reflex arc' of remote cloud servers. If ChatGPT takes an extra second to generate text, you just feel a lag. But if a bipedal robot reacts a second late because of network latency, it might fall down the stairs.
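The latency argument can be put in rough numbers. A toy sketch below uses assumed, illustrative timings (the control period and round-trip figures are my own placeholders, not measured values): a robot's balance controller makes a decision every few milliseconds, so a cloud round trip blows through many control deadlines while on-device inference fits inside one.

```python
# All timing numbers below are illustrative assumptions, not measurements.
CONTROL_PERIOD_MS = 10      # assumed balance-control loop period (~100 Hz)
CLOUD_ROUND_TRIP_MS = 150   # assumed WAN round trip to a remote GPU server
EDGE_INFERENCE_MS = 3       # assumed on-device inference time

def missed_deadlines(latency_ms: float, period_ms: float) -> int:
    """How many control ticks pass while the robot waits for one answer."""
    return int(latency_ms // period_ms)

cloud_misses = missed_deadlines(CLOUD_ROUND_TRIP_MS, CONTROL_PERIOD_MS)
edge_misses = missed_deadlines(EDGE_INFERENCE_MS, CONTROL_PERIOD_MS)
print(cloud_misses, edge_misses)  # 15 0
```

Fifteen missed control ticks is the difference between catching your balance and falling down the stairs, which is why inference has to move to the edge.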
However, while Physical AI seems like a continuation of generative AI, it actually faces three entirely new challenges:
1) Spatial Intelligence: Enabling AI to understand the three-dimensional world.
Professor Fei-Fei Li once proposed that spatial intelligence is the next North Star for AI evolution. For robots to move, they must first 'see' the environment. This isn’t just about recognizing 'this is a chair,' but understanding 'the chair’s position in 3D space, its structure, and how much force I should use to move it.'
This requires massive, real-time, 3D environmental data covering every corner, both indoors and outdoors;
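To make the "not just recognizing a chair" point concrete, here is a minimal sketch of what one record in such a spatial dataset might contain. Every field and the friction model are illustrative assumptions of mine, not any real dataset's schema: the point is that manipulation needs geometry and physics, not just a label.

```python
from dataclasses import dataclass

# Hypothetical schema: a spatial-intelligence record needs more than a label.
@dataclass
class SpatialRecord:
    label: str                              # semantic class, e.g. "chair"
    position_m: tuple[float, float, float]  # x, y, z in meters, world frame
    size_m: tuple[float, float, float]      # bounding-box extents in meters
    mass_kg: float                          # needed to plan applied force

def push_force_newtons(rec: SpatialRecord, friction_coeff: float = 0.4) -> float:
    """Minimum horizontal force to slide the object (Coulomb friction model)."""
    g = 9.81  # gravitational acceleration, m/s^2
    return friction_coeff * rec.mass_kg * g

chair = SpatialRecord("chair", (1.2, 0.0, 3.4), (0.5, 0.9, 0.5), mass_kg=6.0)
print(round(push_force_newtons(chair), 1))  # 23.5
```

A 2D photo labeled "chair" can never answer the question this function answers; that gap is exactly what spatial data has to fill.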
2) Virtual Training Grounds: Allowing AI to train through trial and error in simulated worlds.
The Omniverse mentioned by Jensen Huang is essentially a 'virtual training ground.' Before entering the real physical world, robots need to train 'falling ten thousand times' in a virtual environment to learn how to walk. This process is called Sim-to-Real, or simulation to reality. If robots were to trial-and-error directly in the real world, the hardware wear-and-tear costs would be astronomically high.
This process demands an exponential increase in the throughput requirements for physics engine simulation and rendering computing power;
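The Sim-to-Real idea can be sketched in a few lines. This is a deliberately toy version (one-dimensional "physics," random search instead of reinforcement learning, all numbers invented): the robot "falls ten thousand times" in simulation, and randomizing the simulated friction, a standard trick called domain randomization, keeps the learned behavior from overfitting to any single simulated world so it transfers to the real one.

```python
import random

def simulate_push(force: float, friction: float) -> float:
    """Toy physics: distance a block slides given applied force and friction."""
    return max(0.0, force - friction)

def train_in_sim(target_dist: float, trials: int = 10_000) -> float:
    """Random-search a force that works across many randomized sim worlds."""
    random.seed(0)
    best_force, best_err = 0.0, float("inf")
    for _ in range(trials):  # the "falling ten thousand times" loop
        force = random.uniform(0.0, 10.0)
        # Domain randomization: score each candidate on 20 random frictions.
        errs = [abs(simulate_push(force, random.uniform(0.5, 1.5)) - target_dist)
                for _ in range(20)]
        err = sum(errs) / len(errs)
        if err < best_err:
            best_force, best_err = force, err
    return best_force

force = train_in_sim(target_dist=3.0)
# The "real world" has friction 1.0; the sim-trained force should land close.
real_dist = simulate_push(force, friction=1.0)
print(round(real_dist, 2))
```

Even this toy loop runs 200,000 physics evaluations; a real physics engine rendering contact, lighting, and deformation for a humanoid is why the computing-power demand grows exponentially.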
3) Electronic Skin: 'Tactile Data'—A Gold Mine Waiting to Be Mined.
For Physical AI to have a 'sense of touch,' it needs electronic skin that perceives temperature, pressure, and texture. This 'tactile data' is a brand-new type of asset that has never been collected at scale before, and gathering it may require large-scale sensor deployment. At CES, one company demonstrated 'mass-produced skin': a single hand densely packed with 1,956 sensors, enabling a robot to perform delicate feats like peeling an egg.
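To give a feel for what this new data asset looks like, here is a hypothetical sketch of one tactile frame from a hand with 1,956 pressure points (the per-hand count cited above; the units, thresholds, and feature names are my own illustrative choices), reduced to features a model or a data marketplace could consume.

```python
import random

NUM_TAXELS = 1_956  # pressure points on one hand, per the CES demo above

def tactile_features(frame: list[float]) -> dict:
    """Summarize one pressure frame (illustrative units and threshold)."""
    active = [p for p in frame if p > 0.1]  # taxels actually in contact
    return {
        "contact_area": len(active) / len(frame),  # fraction of skin touching
        "peak_pressure": max(frame),
        "mean_pressure": sum(active) / len(active) if active else 0.0,
    }

# Simulate a gentle grip: a small patch of taxels lightly loaded, the kind
# of low-peak-pressure signal needed to hold an egg without crushing it.
random.seed(1)
frame = [0.0] * NUM_TAXELS
for i in random.sample(range(NUM_TAXELS), 200):
    frame[i] = random.uniform(0.2, 1.0)

feats = tactile_features(frame)
print(feats["contact_area"], round(feats["peak_pressure"], 2))
```

Nearly two thousand readings per hand, per frame, at control-loop rates: that data volume is why tactile data is a gold mine that has never been systematically collected.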
After reading this, you might feel that the rise of the Physical AI narrative gives wearable devices, humanoid robots, and other hardware a real chance to shine. Keep in mind, these were largely dismissed as 'big toys' just a few years ago.
Actually, I want to say that in the new landscape of Physical AI, the Crypto sector also has an excellent opportunity to fill ecological gaps. Let me give a few examples:
1. AI giants can deploy street-view cars to scan every main road in the world, but they can't collect data from the nooks and crannies of side streets, residential complexes, and basements. Token incentives from DePIN networks can mobilize global users to fill these gaps with data gathered by their personal devices;
2. As mentioned earlier, robots cannot rely solely on cloud computing power; they need edge computing and distributed rendering at scale in the short term, especially for Sim-to-Real data processing. Distributed computing networks that pool and schedule idle consumer-grade hardware can put that capacity to good use;
3. 'Tactile data,' beyond requiring large-scale sensor deployment, is by its nature extremely privacy-sensitive. How do you incentivize the public to share such data with AI giants? One feasible path is to grant data providers ownership of their data and a share of the profits it generates.
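The incentive loop running through all three examples can be sketched in a few lines. Everything here is hypothetical (the class, the payout rate, and the in-memory ledger are invented for illustration; a real DePIN would do this on-chain): contributors upload data, the network credits tokens in proportion to the contribution, and an ownership record stays attached to each sample so profit-sharing remains possible later.

```python
from collections import defaultdict

REWARD_PER_SAMPLE = 0.01  # assumed token payout per accepted data sample

class DataIncentiveLedger:
    """Toy in-memory stand-in for a DePIN-style on-chain incentive ledger."""
    def __init__(self):
        self.balances = defaultdict(float)  # contributor -> token balance
        self.ownership = []                 # (sample_id, contributor) records

    def contribute(self, contributor: str, sample_ids: list[str]) -> float:
        """Record ownership and credit tokens for a batch of data samples."""
        for sid in sample_ids:
            self.ownership.append((sid, contributor))
        reward = REWARD_PER_SAMPLE * len(sample_ids)
        self.balances[contributor] += reward
        return reward

ledger = DataIncentiveLedger()
ledger.contribute("alice", [f"tactile-{i}" for i in range(100)])
ledger.contribute("bob", [f"scan-{i}" for i in range(50)])
print(ledger.balances["alice"], ledger.balances["bob"])  # 1.0 0.5
```

The key design point is that ownership records and token balances are separate: tokens pay for the contribution up front, while the ownership ledger is what makes later profit-sharing on the data enforceable.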
To sum up:
Physical AI is the second half of the Web2 AI race that Huang is calling for. For Web3's AI + Crypto sectors, such as DePIN, DeAI, and DeData, isn't it the same? What do you think?