Nvidia CEO Jensen Huang says demand for the company’s next-generation AI chips could reach $1 trillion in orders by 2027 as the industry rapidly expands its computing needs.
Key Takeaways
- Nvidia expects up to $1 trillion in orders for its Blackwell and Vera Rubin AI chips by 2027.
- The projection doubles the company’s earlier $500 billion opportunity estimate announced last year.
- Rising AI inference demand and global cloud adoption are fueling massive compute requirements.
- Nvidia also unveiled new hardware and software tools at the GTC developer conference aimed at accelerating AI infrastructure.
What Happened?
During the company’s annual GTC developer conference in San Jose, Nvidia CEO Jensen Huang said the company could see $1 trillion in purchase orders for its Blackwell and Vera Rubin systems by 2027. The forecast reflects surging demand for AI computing as companies shift toward more advanced applications and autonomous AI agents.
The announcement doubles Nvidia’s earlier projection of a $500 billion revenue opportunity, signaling stronger confidence in long-term AI infrastructure spending.
AI Compute Demand Reaches a New Phase
According to Huang, the AI industry has reached a major turning point as the focus shifts from training models to running AI systems at scale, a process known as inference. This change is dramatically increasing the amount of computing power required across the industry.
Huang said the scale of computing demand has grown at an extraordinary pace.
- Compute requirements have increased up to one million times in just two years.
- AI labs are requesting far larger infrastructure deployments.
- Older GPUs are even seeing higher spot pricing due to limited supply.
This surge in demand reflects the growing number of AI systems generating tokens and running complex tasks in real time. As businesses move beyond simple chatbots toward autonomous AI agents that can perform actions independently, computing needs continue to climb.
Huang told the audience at the conference that companies could generate significantly more revenue if they had access to more computing capacity.
Blackwell and Vera Rubin Systems Drive Growth
Much of Nvidia’s expected growth is tied to its next-generation AI platforms, particularly the Blackwell and Vera Rubin systems.
The upcoming Vera Rubin architecture, scheduled to launch later this year, is designed to deliver ten times more performance per watt than its predecessor, Grace Blackwell. Energy efficiency has become a critical issue as large AI data centers consume enormous amounts of electricity.
The Vera Rubin system itself contains around 1.3 million individual components, highlighting the scale and complexity of modern AI infrastructure.
Looking further ahead, Nvidia also revealed an early prototype of Kyber, its next rack-scale architecture expected to arrive after Rubin. The system will integrate 144 GPUs in vertical compute trays, a design intended to improve density and reduce latency in large-scale AI deployments.
Nvidia Expands AI Infrastructure Ecosystem
Beyond hardware, Nvidia used the conference to introduce new tools and partnerships designed to accelerate AI adoption across industries.
One of the major announcements was the Groq 3 Language Processing Unit, a new chip based on technology from startup Groq, whose assets Nvidia largely acquired through a $20 billion purchase in December.
The Groq 3 LPX rack will contain 256 LPUs and will operate alongside the Vera Rubin systems. Huang said the combination could increase tokens-per-watt performance by up to 35 times.
The company also introduced NemoClaw, a developer toolkit designed to support the rapidly growing OpenClaw ecosystem. OpenClaw gained attention earlier this year for enabling AI agents capable of autonomously completing tasks without constant human guidance.
In addition, Nvidia announced an expansion of its autonomous driving platform. Ride-hailing company Uber plans to deploy vehicles powered by Nvidia Drive AV software across 28 cities on four continents by 2028, starting with Los Angeles and San Francisco next year.
Several automakers, including Nissan, BYD, Geely, Isuzu, and Hyundai, are also developing Level 4 autonomous vehicles using Nvidia’s Drive Hyperion system.
CoinLaw’s Takeaway
From my perspective, this projection shows just how massive the AI infrastructure race has become. In my experience covering technology companies, it is rare to see a single hardware platform drive an entire industry the way Nvidia’s GPUs are doing today.
What stood out to me is that the demand is no longer limited to big tech firms. Governments, startups, cloud providers, and AI labs are all building massive infrastructure at the same time. If the current pace continues, the $1 trillion figure may not be as surprising as it first sounds.
The real story here is that AI computing has become the backbone of the global technology economy, and Nvidia is positioning itself at the center of that shift.