NVIDIA GTC 2026: everything Jensen Huang announced at the keynote
On March 16, 2026, Jensen Huang took the stage at the SAP Center in San Jose, California, and spent nearly three hours making clear why NVIDIA is worth over $4.5 trillion. GTC 2026 wasn’t just a product showcase — it was a statement of intent about the future of computing, robotics, and artificial intelligence.
Here’s the full rundown of everything announced.
Vera Rubin: the complete next generation
The headline announcement was Vera Rubin, NVIDIA’s new full-stack computing platform. It’s not just a chip — it’s an entire system that includes:
- 7 chips designed in tandem
- 5 rack-scale systems
- 1 supercomputer optimized for agentic AI
- The new Vera CPU and BlueField-4 STX storage architecture
The promise: 10x more performance per watt than its predecessor, Grace Blackwell. At a time when AI data center energy consumption is one of the industry’s biggest concerns, that’s a big deal.
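To make that efficiency claim concrete, here is a back-of-the-envelope sketch. At a fixed facility power budget, a 10x performance-per-watt gain means either ten times the work for the same energy, or the same work for a tenth of the energy. All numbers below are hypothetical illustrations, not NVIDIA figures.

```python
# Back-of-the-envelope: what a 10x performance-per-watt gain means
# for a data center with a fixed power envelope. Every number here
# is a hypothetical illustration, not an NVIDIA figure.

POWER_BUDGET_MW = 100        # fixed facility power envelope (hypothetical)
OLD_PERF_PER_WATT = 1.0      # normalized baseline (Grace Blackwell = 1)
NEW_PERF_PER_WATT = 10.0     # the claimed Vera Rubin improvement

old_throughput = POWER_BUDGET_MW * OLD_PERF_PER_WATT
new_throughput = POWER_BUDGET_MW * NEW_PERF_PER_WATT

# Same power envelope -> 10x the work:
print(new_throughput / old_throughput)   # 10.0

# Same workload -> a tenth of the energy:
energy_ratio = OLD_PERF_PER_WATT / NEW_PERF_PER_WATT
print(energy_ratio)                      # 0.1
```

Either way you read it, the claim targets the same bottleneck: grid power, not chip count, is increasingly what limits AI buildouts.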
“When we think Vera Rubin, we think the entire system — vertically integrated, optimized as one giant system,” Huang said.
NVIDIA projects $1 trillion in orders between Blackwell and Vera Rubin through 2027.
Groq 3 LPU: the first chip from the $20B acquisition
NVIDIA unveiled the Groq 3 Language Processing Unit (LPU), the first chip born from the $20 billion acquisition of Groq in December 2025.
What makes Groq 3 special? It’s optimized for low-latency inference, complementing high-throughput GPUs. The combined result:
- 35x more tokens per watt when paired with Rubin GPUs
- A full rack (Groq 3 LPX) with 256 LPUs designed to work alongside the Vera Rubin system
- Shipping expected in Q3 2026
“We united, unified two processors of extreme differences — one for high throughput, one for low latency,” Huang explained.
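The throughput/latency split Huang describes can be pictured as a simple request router: interactive, latency-sensitive requests go to the low-latency unit, while bulk batch jobs go to the high-throughput unit. The sketch below is purely illustrative; the device names, the routing rule, and the request fields are our assumptions, not NVIDIA’s actual scheduler.

```python
# Illustrative sketch of routing inference requests between a
# low-latency processor ("lpu") and a high-throughput processor
# ("gpu"). The routing rule and field names are invented for
# illustration; NVIDIA's real scheduling is not public.

from dataclasses import dataclass

@dataclass
class InferenceRequest:
    prompt_tokens: int
    batch_size: int
    interactive: bool  # e.g. a live chat turn vs. an offline batch job

def route(req: InferenceRequest) -> str:
    """Send latency-sensitive work to the LPU, bulk work to the GPU."""
    if req.interactive and req.batch_size == 1:
        return "lpu"   # optimize time-to-first-token
    return "gpu"       # optimize aggregate tokens per second

chat_turn = InferenceRequest(prompt_tokens=200, batch_size=1, interactive=True)
bulk_eval = InferenceRequest(prompt_tokens=4000, batch_size=64, interactive=False)

print(route(chat_turn))  # lpu
print(route(bulk_eval))  # gpu
```

The design intuition is the one from the quote: a single chip struggles to be best at both regimes, so the system pairs two specialized processors behind one dispatcher.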
Kyber and Feynman: looking to 2027 and beyond
NVIDIA held nothing back on its future roadmap:
Kyber
A prototype of the next rack architecture after Rubin. It integrates 144 GPUs in compute trays mounted vertically (instead of horizontally) to increase density and reduce latency. Coming with Vera Rubin Ultra in 2027.
Feynman
The generation after Vera Rubin will include:
- Rosa, a new CPU named after Rosalind Franklin
- LP40, the next-gen LPU
- BlueField-5 and CX10 for networking
- Support for co-packaged optics
NemoClaw: OpenClaw ready for the enterprise
Huang dedicated a significant portion of the keynote to OpenClaw, calling it “the most popular open source project in the history of humanity.”
“Every company in the world today needs to have an OpenClaw strategy. Just as we all needed a Linux strategy, an HTTP/HTML strategy, a Kubernetes strategy — we all need an agentic systems strategy.”
The concrete announcement: NemoClaw, an enterprise stack built on OpenClaw that includes:
- OpenShell: a secure runtime for agent execution
- Policy engine: granular control over what each agent can do
- Network guardrails: infrastructure-level security
- Privacy router: keeping sensitive data within the enterprise perimeter
NemoClaw was developed in collaboration with Peter Steinberger, OpenClaw’s creator (now at OpenAI). It’s hardware-agnostic — it doesn’t require NVIDIA GPUs — but it’s optimized for their ecosystem.
It’s currently in early alpha. NVIDIA warns: “Expect rough edges.”
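As a thought experiment, the kind of granular agent permissions NemoClaw’s policy engine promises could be expressed as declarative, deny-by-default rules. Everything below — the rule schema, the agent and tool names, the check function — is our own invention for illustration; NVIDIA has not published NemoClaw’s actual configuration format.

```python
# Hypothetical sketch of per-agent permission rules, in the spirit of
# the policy engine NVIDIA describes. The schema and all names are
# invented for illustration; NemoClaw's real format is unpublished.

POLICY = {
    "support-agent": {
        "allowed_tools": {"search_docs", "draft_reply"},
        "network": "internal-only",   # a privacy-router-style restriction
        "max_actions_per_task": 20,
    },
    "deploy-agent": {
        "allowed_tools": {"run_tests", "open_pr"},
        "network": "none",
        "max_actions_per_task": 5,
    },
}

def is_allowed(agent: str, tool: str) -> bool:
    """Deny by default: unknown agents and unlisted tools are blocked."""
    rules = POLICY.get(agent)
    return rules is not None and tool in rules["allowed_tools"]

print(is_allowed("support-agent", "search_docs"))  # True
print(is_allowed("support-agent", "open_pr"))      # False
print(is_allowed("rogue-agent", "anything"))       # False
```

Deny-by-default is the key property here: an agent that isn’t explicitly granted a capability simply doesn’t have it, which is what makes agentic systems auditable at enterprise scale.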
For the full context on OpenClaw, check our deep dive: From vibe coding to agentic engineering: how OpenClaw changed the rules.
Nemotron and the open model coalition
NVIDIA expanded its open model ecosystem with the Nemotron Coalition, grouping six frontier model families:
- Nemotron — language and reasoning (Nemotron 3 already ranks in the global top three; Nemotron 4 is in development)
- Cosmos — world and vision models
- Isaac GR00T — general-purpose robotics
- Alpamayo — autonomous driving
- BioNeMo — biology and chemistry
- Earth-2 — weather and climate
The goal: enabling sovereign AI — letting countries and industries create specialized models without depending on generic third-party models. Nemotron 3 Ultra will be the world’s best base model for fine-tuning, according to NVIDIA.
DLSS 5: real-time neural rendering
DLSS 5 represents a generational leap in game graphics. It combines:
- Structured 3D data (controllable geometry) with probabilistic generative AI
- Real-time neural rendering at 4K on local hardware
- The ability to remaster older games almost automatically
It’s one of the most polarizing announcements — some gamers call it “AI slop,” but the potential for scaling visual content is enormous.
Physical AI: robots, cars, and Disney
Autonomous vehicles
- Uber will launch a robotaxi fleet using NVIDIA Drive AV software across 28 cities on 4 continents by 2028, starting with Los Angeles and San Francisco
- BYD, Hyundai, Nissan, Geely, and Isuzu will build level 4 autonomous vehicles on NVIDIA Drive Hyperion
- NVIDIA’s autonomous vehicles now integrate narrative reasoning — they can explain in real time why they make each driving decision
Industrial robotics
- Partnerships with ABB, Universal Robots, and KUKA to integrate physical AI into manufacturing lines
- T-Mobile will integrate physical AI into its base stations, turning them into edge computing platforms
Developer tools
- Isaac Lab — open source platform for robot training and evaluation
- Newton — GPU-accelerated physics simulation
- Cosmos — world models for neural simulation
- GR00T — foundational robotics models for action generation
The Disney moment
Huang closed the keynote with a memorable moment: Olaf, the snowman from Disney’s Frozen, walked onto the stage powered by NVIDIA’s physical AI stack, the Newton physics engine, and NVIDIA Omniverse simulation.
“Ladies and gentlemen, Olaf,” Huang said as the character moved autonomously across the stage.
Vera Rubin Space-1: data centers in space
In one of the most futuristic announcements, Huang revealed that NVIDIA is designing orbital data centers. The Vera Rubin Space-1 program will extend accelerated computing from Earth to space, pushing AI infrastructure beyond our planet.
What this means for the industry
GTC 2026 confirms several trends we’ve been tracking at My Tech Plan:
- Inference is the new frontier — it’s not just about training models, it’s about running them efficiently. AI agents need constant inference, and that drives massive compute demand
- OpenClaw as de facto standard — when NVIDIA says “every company needs an OpenClaw strategy,” we’re witnessing institutional legitimization of the framework
- Physical AI is no longer science fiction — robotaxis in 28 cities, Disney robots, edge simulation. It’s happening now
- Energy efficiency as differentiator — 10x performance/watt with Vera Rubin isn’t marketing, it’s a real industry need
At My Tech Plan we’re preparing a hands-on technical review of NemoClaw — we’ll test it firsthand and share the results. In the meantime, if you want to understand the OpenClaw phenomenon from scratch, start with our article on agentic engineering.