At Nvidia GTC 2026, Jensen Huang stated that many AI-native companies have emerged because Nvidia “redefined computing.” He said we are at the beginning of a new platform revolution, comparable to the personal computer revolution, and that over the past two years, with the advent of ChatGPT, the era of generative AI has truly begun.
Inference Inflection: Global Data Center Market Approaching $1 Trillion
In his keynote, Huang showcased a critical slide revealing that the global AI compute market is entering a phase of explosive growth. The slide indicates that the total addressable market (TAM) of global data centers has surged from about $500 billion in 2025 to over $1 trillion within a year, and it continues to grow.
The key concept on the slide is the “Inference Inflection.” Historically, AI development focused on training models with large datasets. But as large models mature, AI is increasingly deployed in products such as search, customer service, image generation, and software development. This shift signals that the market focus is moving from training to inference.
When AI is used by billions of people simultaneously, each query, image, or video generated requires compute behind it. This high-frequency, low-latency demand will cause inference compute needs to grow exponentially, which Nvidia sees as the main driver behind the trillion-dollar AI data center market.
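As a rough back-of-envelope illustration of why per-query inference adds up at this scale, consider the sketch below; every figure in it is an assumption chosen for illustration, not a number from the keynote.

```python
# Back-of-envelope inference demand. All figures are illustrative
# assumptions, not numbers from the GTC keynote.
users = 1_000_000_000     # assumed daily active users
queries_per_user = 10     # assumed queries per user per day
flops_per_query = 2e12    # assumed ~2 TFLOPs to generate one response

daily_flops = users * queries_per_user * flops_per_query
print(f"{daily_flops:.1e} FLOPs/day")  # -> 2.0e+22 FLOPs/day
```

Paying that cost continuously across a user base of billions, rather than as a one-time training run, is the dynamic behind the “Inference Inflection.”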
The market structure on the right side of the slide shows that current AI compute demand mainly comes from two major customer groups. About 60% of the demand is from hyperscalers and AI-native companies, including:
Amazon Web Services
Google Cloud
Microsoft
And AI model developers:
OpenAI
Anthropic
xAI
The remaining 40% comes from Nvidia’s recent focus areas, including Sovereign AI, industrial, and enterprise applications. Sovereign AI refers to governments building AI infrastructure tailored to their languages, cultures, and data sovereignty, such as:
Establishing national AI supercomputers
Training local language models
Securing national data sovereignty
Traditional industries are also beginning to adopt AI at scale, including:
Automotive and autonomous driving systems
Manufacturing and smart factories
Medical imaging analysis
Financial risk modeling
The slide’s center lists major AI model ecosystems, including ChatGPT, Gemini, Grok, and various open-source models. Notably, Anthropic and Meta Superintelligence Labs are marked as emerging forces after 2025, indicating rapid expansion in AI model competition.
Huang at GTC 2026: Nvidia is Essentially an “Algorithm Company”
Huang spent considerable time discussing Nvidia’s software stack applications across industries, from healthcare, manufacturing, and finance to cloud computing. He emphasized that all of these capabilities ultimately rest on Nvidia’s CUDA-X library ecosystem. “We are an algorithm company,” he said. He described CUDA-X as Nvidia’s “crown jewel,” emphasizing that the true value of GPUs comes from the software platform, not just the hardware.
One of the most critical components is cuDNN, the CUDA Deep Neural Network library, which provides highly tuned GPU implementations of core deep learning primitives. It is widely adopted by mainstream AI frameworks and is a fundamental part of modern deep learning infrastructure.
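As a minimal sketch of that adoption, the snippet below shows how PyTorch, one such mainstream framework, routes CUDA convolutions through cuDNN; it requires a machine with an Nvidia GPU, and the tensor shapes are arbitrary examples.

```python
import torch

# PyTorch dispatches CUDA convolutions to cuDNN kernels when available.
torch.backends.cudnn.enabled = True
torch.backends.cudnn.benchmark = True  # let cuDNN autotune the fastest algorithm

conv = torch.nn.Conv2d(3, 64, kernel_size=3, padding=1).cuda()
x = torch.randn(8, 3, 224, 224, device="cuda")  # a batch of RGB images
y = conv(x)  # executed by a cuDNN convolution kernel
```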
Huang reiterated the importance of software in the AI ecosystem, stating that cuDNN is one of the company’s most crucial libraries, even calling it the catalyst for the modern AI wave. Nvidia showcased a short video about its CUDA-X software ecosystem, including a nearly photorealistic AI-generated simulation, highlighting breakthroughs in visual computing enabled by GPU acceleration and deep learning frameworks.
Huang: AI Needs “Industry-Specific Libraries”
Huang pointed out that AI deployment is not solely about generative AI. “Throwing GenAI at the wall to see if it succeeds is not a strategy.” He believes that because different industries face vastly different problems, Nvidia must develop domain-specific libraries to optimize solutions for each vertical.
This is why the CUDA-X ecosystem continues to expand, now covering dozens of fields, including:
Scientific computing
Medical imaging
Autonomous driving
Financial analysis
Data engineering
These libraries enable GPUs to deliver maximum performance across various industry scenarios.
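As one concrete illustration, RAPIDS cuML, part of the CUDA-X data-science stack, mirrors the scikit-learn estimator API on the GPU; the sketch below uses synthetic data and assumes a RAPIDS installation with an Nvidia GPU.

```python
import cupy as cp
from cuml.cluster import KMeans  # RAPIDS cuML, a CUDA-X data-science library

# Synthetic feature matrix living in GPU memory.
X = cp.random.random((10_000, 16)).astype(cp.float32)

# Same estimator interface as scikit-learn, executed on the GPU.
model = KMeans(n_clusters=8).fit(X)
labels = model.predict(X)
```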
Vertical Integration, Horizontal Openness in Nvidia’s AI Stack
Huang described Nvidia’s strategy as “vertically integrated but horizontally open.” This means Nvidia offers a complete stack, from chips and systems to software and application platforms, while letting enterprises and developers of all kinds build applications on top of it. In the face of exploding AI compute demand, Nvidia believes this model is the only way to drive accelerated computing forward.
AI’s Key Battleground: Unstructured Data
Huang also mentioned another critical AI task: handling unstructured data. He noted that about 90% of the world’s data is unstructured, such as images, videos, audio, and natural language text, and was previously considered nearly useless because it is difficult to search and analyze. As AI and GPU acceleration mature, this data is gradually being transformed into analyzable assets.
For example, IBM is leveraging Nvidia’s cuDF, a GPU-accelerated dataframe library, to improve the efficiency of its WatsonX data platform, enabling rapid analysis and utilization of large volumes of unstructured data.
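cuDF exposes a pandas-style API that runs on the GPU. The snippet below is a minimal sketch of that pattern; the file and column names are hypothetical, and it does not reflect the specifics of IBM’s WatsonX integration.

```python
import cudf  # RAPIDS cuDF, a GPU-accelerated dataframe library

# Hypothetical export of raw support tickets (file and columns are made up).
df = cudf.read_csv("support_tickets.csv")

df["text_length"] = df["ticket_text"].str.len()  # simple feature over raw text
counts = df.groupby("category").size()           # aggregate on the GPU
print(counts.to_pandas())                        # move the small result to host
```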
OpenAI to Use AWS to Alleviate Compute Pressure
Regarding AI infrastructure, Huang mentioned OpenAI’s compute needs. He stated that OpenAI is “completely constrained by compute” and that, this year, the company will adopt Amazon Web Services infrastructure to help meet its massive computational demands.
This article originally appeared on Chain News ABMedia: Nvidia GTC 2026 | Jensen Huang: Redefining Computing, Data Center Scale Approaching Trillion-Dollar Market