5-Minute Quick Review of Jensen Huang's GTC Speech: Trillion-Dollar Revenue, LPU, Space Chips, One-Click "Shrimp Farming"
In the early hours of Tuesday, Beijing time, NVIDIA CEO Jensen Huang delivered a two-and-a-half-hour speech that "carpet-bombed" the audience with AI industry concepts, spanning both hardware and software.
For the capital markets, it was also a fruitful day: most of the anticipated hype concepts materialized, and Huang unexpectedly offered an explosive new financial outlook for compute-chip revenue.
Full coverage report: A complete review of Jensen Huang’s “Full-Stack AI” speech: Launching a Trillion-Dollar New Blueprint
Key buzzword: 1 trillion dollars
Huang confirmed during the speech that NVIDIA’s flagship chips will help the company generate $1 trillion in revenue by 2027.
The significance of this statement depends on each investor’s interpretation. He previously stated that data center equipment would generate $500 billion in sales by the end of 2026. The latest forecast extends this outlook by a year, doubling the cumulative amount.
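Taken at face value, the two forecasts imply the size of the extra year's contribution. A minimal sketch of that arithmetic, assuming both figures are cumulative revenue totals (an interpretation, not stated explicitly in the speech):

```python
# Earlier forecast: $500B in data-center sales by the end of 2026.
# GTC forecast: $1 trillion in revenue by 2027.
prior_cumulative = 500e9   # dollars, cumulative through end of 2026
new_cumulative = 1_000e9   # dollars, cumulative through 2027

# If both are cumulative totals, the implied incremental 2027 revenue
# is simply the difference between them.
implied_2027 = new_cumulative - prior_cumulative
print(f"Implied incremental 2027 revenue: ${implied_2027 / 1e9:.0f}B")
```

On this reading, the extra year alone would match the entire prior cumulative forecast, which is why the doubling was the headline-grabbing moment of the speech.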
This statement was also the most exciting moment for investors during the entire speech. NVIDIA’s stock price surged over 4% intraday, ultimately closing up 1.6%.
GPU × AI Factory Platform
NVIDIA emphasized that Vera Rubin is not a single chip but a complete AI supercomputing platform composed of 7 types of chips and 5 rack systems.
In addition to the familiar Rubin GPU and Vera CPU combination (the Vera Rubin NVL72 GPU rack), the biggest surprises at this launch were two new processor products.
The first, the Vera CPU rack, integrates 256 Vera CPUs per rack, offering twice the compute efficiency of traditional CPUs and 50% faster operation.
The second, the Groq 3 LPX rack, is equipped with 256 LPU processors, providing 128GB of on-chip SRAM and 640TB/s of expansion bandwidth. Combined with the Vera Rubin platform, inference throughput per watt improves by 35 times. Huang said the LPU chips will be manufactured by Samsung, with shipments expected to begin in the second half of this year.
All three racks use liquid cooling architecture.
The highly anticipated Spectrum-6 SPX rack is expected to adopt Co-Packaged Optical (CPO) technology, delivering five times higher optical power efficiency and ten times higher network reliability.
For future products, Rubin Ultra in the Kyber rack will use vertical insertion arrangements, allowing 144 GPUs to be connected within a single NVLink domain. The next-generation Feynman architecture GPUs will feature stacked chips and custom HBM technology.
Space Data Chips
NVIDIA also launched the Space-1 Vera Rubin module, deploying data center-level AI computing power to satellites and orbital data centers (ODC), emphasizing its focus on on-orbit inference, real-time geospatial intelligence, and autonomous space missions.
The company also highlighted its product portfolio—Jetson Orin, IGX Thor, RTX PRO 6000 Blackwell GPU, and the upcoming Space-1 module—forming a complete computing architecture from orbital edge computing → ground AI data centers → cloud analysis.
One-click “Shrimp Farming”
By entering the “lobster industry,” NVIDIA is turning AI agent infrastructure into a new growth track.
NemoClaw, positioned as the infrastructure layer of the OpenClaw agent platform, enables deployment of AI agents with a single command. It integrates Nemotron models and the OpenShell runtime environment, and addresses security, privacy, and sandboxing. The goal is not merely minimal deployment but "safe shrimp farming."
NVIDIA emphasized that NemoClaw can run on RTX PCs, RTX PRO workstations, as well as DGX Station, DGX Spark, and other devices, supporting the need for dedicated computing hardware for “always-on AI assistants.”
NVIDIA also announced further expansion of its “Open Model Ecosystem,” covering three major AI directions: agent AI, physical AI, and medical AI, broadening its open foundational model family.
DLSS 5: The GPT Moment in Graphics Technology
At GTC, NVIDIA also released DLSS 5, claiming it to be the most significant breakthrough in computer graphics since the introduction of real-time ray tracing in 2018.
Huang stated: “25 years after NVIDIA invented programmable shaders, we are redefining computer graphics again. DLSS 5 is the ‘GPT moment’ in graphics.”
The new DLSS 5 system combines traditional 3D graphics data with generative AI models, which can predict and fill in parts of images, allowing NVIDIA’s GPUs to generate detailed scenes and highly realistic characters without rendering every element from scratch.
(Source: Cailian Press)