
Delivering Massive Performance Leaps for Mixture of Experts Inference on NVIDIA Blackwell

8 January 2026 at 19:43

As AI models continue to get smarter, people can rely on them for an expanding set of tasks. This leads users, from consumers to enterprises, to interact with AI more frequently, meaning that more tokens need to be generated. To serve these tokens at the lowest possible cost, AI platforms need to deliver the best possible token throughput per watt. Through extreme co-design across GPUs, CPUs…

Source

NVIDIA Blackwell Enables 3x Faster Training and Nearly 2x Training Performance Per Dollar than Previous-Gen Architecture

11 December 2025 at 19:20

AI innovation continues to be driven by three scaling laws: pre-training, post-training, and test-time scaling. Training is foundational to building smarter models, and post-training (which can include fine-tuning, reinforcement learning, and other techniques) helps to further increase accuracy for specific tasks, as well as provide models with new capabilities like the ability to reason.

Source

NVIDIA Blackwell Architecture Sweeps MLPerf Training v5.1 Benchmarks

13 November 2025 at 00:08

The NVIDIA Blackwell architecture powered the fastest time to train across every MLPerf Training v5.1 benchmark, marking a clean sweep in the latest round of results. As developers experiment with new architectures, and models continue to grow in size, more training compute is essential. Meeting this need for delivered compute requires innovation across every layer of the AI stack, from chips and…

Source
