NVIDIA Releases DLSS 5, Jensen Huang Hails a "GPT Moment" for Graphics
On Monday, March 16th (Eastern Time), NVIDIA officially announced DLSS 5 at its annual GTC developer conference, calling it the most significant breakthrough in computer graphics since real-time ray tracing in 2018. DLSS 5 uses real-time neural rendering models to inject pixels with "cinematic" lighting and material detail, aiming to deliver near-Hollywood visual quality in interactive games.
NVIDIA founder and CEO Jensen Huang described DLSS 5 at GTC as a "GPT moment for graphics," emphasizing the balance it strikes between generative AI's visual expressiveness and artistic controllability.
According to NVIDIA, DLSS 5 will be available in mainstream games this fall and has already received support from major companies including Bethesda, CAPCOM, NetEase, Tencent, and Ubisoft.
DLSS 5 runs in real time at 4K. It understands scene semantics (such as characters, hair, skin subsurface scattering, and fabric glossiness) and injects a sense of physical realism into each frame, while giving developers fine-grained control (intensity, grading, masks, etc.). Game visuals are thus no longer purely rule-based approximations: they are enhanced in real time by trained models while remaining deterministic.
What DLSS 5 Brings
Real-time Neural Rendering: DLSS 5 uses an end-to-end trained AI model that, based on each frame’s color and motion vectors, generates pixel results with lighting and material interactions, aiming to approach offline cinematic rendering quality in real-time interactive scenarios.
Industrial-grade Controllability: Unlike general-purpose video generation models, the system emphasizes "controllability and high determinism," giving game artists parameters such as intensity, color grading, and local masks to ensure visual changes stay within artistic boundaries.
Wide Industry Support and Launch Titles: NVIDIA listed several supported or planned titles, such as “Starfield,” “Resident Evil Requiem,” “Assassin’s Creed Shadows,” “Hogwarts Legacy,” and more, stating that partners including major AAA studios have participated or tested integration.
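The "artist controls" idea in the second point can be illustrated with a toy sketch: a neurally enhanced frame is blended back into the rasterized frame under a global intensity setting and a per-pixel mask, so the model's output equals the original frame when the effect is dialed to zero. This is purely an illustrative assumption about how such controls might compose, not NVIDIA's actual API; the function name and parameters are hypothetical.

```python
def apply_controlled_enhancement(raster, enhanced, intensity, mask):
    """Blend `enhanced` over `raster` pixel by pixel.

    raster, enhanced : per-pixel values in 0.0-1.0, same length
    intensity        : global effect strength, 0.0 (off) to 1.0 (full)
    mask             : per-pixel weights; 0.0 excludes a pixel entirely
    """
    out = []
    for base, ai, m in zip(raster, enhanced, mask):
        weight = intensity * m  # combined artist control for this pixel
        out.append(base * (1.0 - weight) + ai * weight)
    return out

# Pixel 0 is half-blended; pixel 1 is masked out and stays untouched.
frame = apply_controlled_enhancement([0.2, 0.8], [0.6, 0.4],
                                     intensity=0.5, mask=[1.0, 0.0])
```

The determinism the article stresses falls out of this structure: for a fixed model and fixed controls, the same inputs always produce the same pixels.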
Technical Details and Evolution Path
DLSS has evolved from initial super-sampling/AI upscaling to frame generation, and now to incorporating “material and lighting” into AI learning targets.
NVIDIA notes that DLSS 4.5 can already generate a large share of displayed pixels and produce multiple frames per rendered frame (Dynamic Multi Frame Generation). DLSS 5 goes further, training neural networks to understand scene semantics and complex light-material interactions, producing pixels with fine effects such as subsurface scattering and fiber reflections while maintaining visual coherence. For players, this means improved detail and realism at the same or similar rendering cost; for developers, it provides new tools for balancing art and performance.
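The frame-generation step in that evolution can be sketched in one dimension: pixels from a rendered frame are scattered along their motion vectors, scaled by an interpolation factor, to synthesize an in-between frame. This toy forward warp is only the general idea behind motion-vector-based frame generation, not NVIDIA's implementation; real systems also fill the holes this naive version leaves behind.

```python
def generate_intermediate_frame(prev_frame, motion, t=0.5):
    """Forward-warp a 1-D `prev_frame` by per-pixel `motion`
    (pixels moved per frame), scaled by factor t in [0, 1]
    (t=0 reproduces prev_frame's positions, t=1 the next frame's)."""
    out = [0.0] * len(prev_frame)
    for i, value in enumerate(prev_frame):
        j = round(i + motion[i] * t)  # where this pixel lands at time t
        if 0 <= j < len(out):
            out[j] = value
    return out

# A bright pixel at index 1 moving +2 px/frame sits at index 2 at t=0.5.
mid = generate_intermediate_frame([0.0, 1.0, 0.0, 0.0], [2, 2, 2, 2], 0.5)
```

Note the vacated position stays at the background value, a disocclusion "hole" that production frame generation must inpaint, which is one reason these systems lean on trained networks rather than pure warping.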
NVIDIA emphasizes that DLSS 5 will be integrated into the existing Streamline framework, sharing features with DLSS, NVIDIA Reflex, and other components, which lowers integration costs for developers and allows it to run on current RTX platforms. High-quality results, however, will still depend heavily on the compute power and bandwidth of high-end GPUs.