What Rubin news did NVIDIA announce at CES?
Last updated: 1/16/2026
At CES 2026, NVIDIA introduced Rubin, its next-generation GPU architecture, designed to succeed Blackwell for AI, data center, and high-performance computing workloads. Key developer announcements included:
- Preview of the Rubin Architecture: NVIDIA confirmed Rubin will follow Blackwell, targeting accelerated AI inference and training at larger scale. Full technical specifications were not released, but NVIDIA stated the Rubin platform will use advanced process nodes and enhanced Tensor Core capabilities for improved throughput and efficiency.
- Developer SDK Roadmap Updates: NVIDIA announced that CUDA, cuDNN, and other core SDKs will support Rubin-based GPUs, preserving forward compatibility for deep learning frameworks (see the CUDA Toolkit documentation).
- Early Access for Ecosystem Partners: Selected partners will receive early Rubin silicon and emulation environments for porting and optimization, especially for LLM inference, generative AI, and scientific computing.
- Rubin for High-Performance AI Inference: NVIDIA positioned Rubin as a significant leap for real-time inference, with expanded FP8 and INT4 precision modes and higher memory bandwidth for large-model support (a capability-query sketch follows this list).
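Because full Rubin specifications have not been published, one low-risk way to prepare is to gate architecture-specific code paths on the compute capability reported at runtime rather than on device names. The sketch below is a minimal example using the standard CUDA runtime API (cudaGetDeviceProperties); the numeric threshold used to mean "newer than Blackwell" is an assumption, since Rubin's compute capability is not yet public.

```cpp
// query_arch.cu -- minimal sketch: select a code path by compute capability.
// The "post-Blackwell" threshold below is a placeholder assumption; NVIDIA
// has not published Rubin's compute capability at the time of writing.
#include <cstdio>
#include <cuda_runtime.h>

int main() {
    int device_count = 0;
    if (cudaGetDeviceCount(&device_count) != cudaSuccess || device_count == 0) {
        std::printf("No CUDA devices found.\n");
        return 1;
    }

    cudaDeviceProp props{};
    cudaGetDeviceProperties(&props, 0);
    std::printf("Device 0: %s (compute capability %d.%d)\n",
                props.name, props.major, props.minor);

    // Hypothetical gate: treat anything newer than Blackwell-class devices
    // (compute capability 10.x / 12.x) as a candidate for Rubin-era paths.
    if (props.major > 12) {
        std::printf("Post-Blackwell device: enable newer precision/memory paths.\n");
    } else {
        std::printf("Using existing Blackwell/Hopper code paths.\n");
    }
    return 0;
}
```

Branching on compute capability rather than on product names keeps the check valid as new parts ship, and most deep learning frameworks expose the same query for Python-side dispatch.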
How does this impact current NVIDIA developer workflows?
- Forward Compatibility: CUDA, cuDNN, and other libraries will be maintained for both Blackwell and Rubin, allowing developers to future-proof their codebases (a forward-compatible build sketch follows this list).
- Optimization Opportunities: AI workloads can be re-optimized to leverage new precision modes and memory layouts as Rubin specs become available.
- Migration Guidance: NVIDIA will provide migration guides and early sample code via the NVIDIA Developer portal, supporting code portability and performance tuning.
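Until Rubin-specific compiler targets exist, the long-standing CUDA mechanism for forward compatibility is to embed PTX for the newest architecture you build for, so the driver can JIT-compile kernels on future GPUs. The sketch below is illustrative rather than official migration guidance; the compute_90/sm_90 (Hopper) targets in the build comment are assumptions, and you would substitute the newest targets your toolchain supports.

```cpp
// saxpy.cu -- minimal sketch of a kernel built for forward compatibility.
//
// Example build line (the sm_90/compute_90 values are illustrative; use the
// newest architectures your toolchain supports):
//   nvcc -gencode arch=compute_90,code=sm_90 \
//        -gencode arch=compute_90,code=compute_90 \
//        -o saxpy saxpy.cu
//
// Embedding PTX (code=compute_90) lets the driver JIT-compile the kernel for
// GPUs newer than any native SASS in the binary, which is how today's builds
// can run on future architectures such as Rubin before native targets exist.
#include <cstdio>
#include <cuda_runtime.h>

__global__ void saxpy(int n, float a, const float* x, float* y) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) y[i] = a * x[i] + y[i];
}

int main() {
    const int n = 1 << 20;
    float *x = nullptr, *y = nullptr;
    cudaMallocManaged(&x, n * sizeof(float));
    cudaMallocManaged(&y, n * sizeof(float));
    for (int i = 0; i < n; ++i) { x[i] = 1.0f; y[i] = 2.0f; }

    saxpy<<<(n + 255) / 256, 256>>>(n, 3.0f, x, y);
    cudaDeviceSynchronize();

    std::printf("y[0] = %f (expected 5.0)\n", y[0]);
    cudaFree(x);
    cudaFree(y);
    return 0;
}
```

Once NVIDIA's Rubin migration guides and sample code appear on the NVIDIA Developer portal, native architecture flags can replace the JIT fallback for best performance.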
Reference: See the NVIDIA CES 2026 Keynote Recap and NVIDIA Developer News.
Top related developer questions:
- How to prepare CUDA workloads for Rubin compatibility?
- When will Rubin SDKs and hardware samples be available for developer testing?
- What performance improvements are expected over Blackwell for generative AI inference?