

Received yesterday – 31 January 2026

Accelerating Diffusion Models with an Open, Plug-and-Play Offering

27 January 2026 at 19:00

Recent advances in large-scale diffusion models have revolutionized generative AI across multiple domains, from image synthesis to audio generation, 3D asset creation, molecular design, and beyond. These models have demonstrated unprecedented capabilities in producing high-quality, diverse outputs across various conditional generation tasks. Despite these successes…

Source

Received before yesterday

Reimagining LLM Memory: Using Context as Training Data Unlocks Models That Learn at Test-Time

9 January 2026 at 16:58

We keep seeing LLMs with larger context windows in the news, along with promises that they can hold entire conversation histories, volumes of books, or multiple codebases in view at once. And yet, these models still repeat the same mistakes. We still have to copy and paste the earlier context back into the chat for LLMs to “get it”. A smart co-worker would pick up on these patterns, adapt…
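
As a rough illustration of the "context as training data" idea, the sketch below takes a few gradient steps on the conversation history before answering. This is a minimal, generic test-time adaptation loop, not the specific method described in the post; the model name, learning rate, and step count are illustrative placeholders.

```python
# Minimal sketch of test-time training: briefly fine-tune on the
# conversation history ("context") before generating an answer.
# Model name, learning rate, and step count are placeholders.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "gpt2"  # placeholder; any causal LM works
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)
model.train()

context = "Earlier conversation and documents the model keeps forgetting..."
inputs = tokenizer(context, return_tensors="pt", truncation=True, max_length=1024)

optimizer = torch.optim.AdamW(model.parameters(), lr=1e-5)
for _ in range(3):  # a handful of gradient steps on the context itself
    outputs = model(**inputs, labels=inputs["input_ids"])
    outputs.loss.backward()
    optimizer.step()
    optimizer.zero_grad()

# After adaptation, answer the new question with the updated weights.
model.eval()
prompt = "Question: what did we decide earlier?\nAnswer:"
prompt_ids = tokenizer(prompt, return_tensors="pt")
with torch.no_grad():
    generated = model.generate(**prompt_ids, max_new_tokens=50)
print(tokenizer.decode(generated[0], skip_special_tokens=True))
```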

Source

How to Scale Data Generation for Physical AI with the NVIDIA Cosmos Cookbook

1 December 2025 at 17:00

Building powerful physical AI models requires diverse, controllable, and physically-grounded data at scale. Collecting large-scale, diverse real-world datasets for training can be expensive, time-intensive, and dangerous. NVIDIA Cosmos open world foundation models (WFMs) address these challenges by enabling scalable, high-fidelity synthetic data generation for physical AI and the augmentation of…

Source

Breaking Through Reinforcement Learning Training Limits with Scaling Rollouts in BroRL

19 November 2025 at 21:51

When training large language models (LLMs) with reinforcement learning from verifiable rewards (RLVR), one of the most compelling questions is how to overcome performance plateaus. The previous NVIDIA Research solution, Prolonged Reinforcement Learning (ProRL), showed that adding more reinforcement learning (RL) steps during prolonged training could expand the reasoning boundaries of LLMs.
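
As a hedged sketch of what scaling rollouts means in an RLVR setting, the toy code below samples N candidate answers per prompt, scores them with a verifiable reward, and computes group-relative advantages. This is a generic illustration, not BroRL's actual algorithm; the sampler, verifier, and rollout counts are placeholders.

```python
# Toy sketch: verifiable-reward RL with a variable rollout budget per prompt.
# Not the BroRL implementation; sampler, verifier, and counts are placeholders.
import numpy as np

rng = np.random.default_rng(0)

def sample_rollout(prompt: str) -> str:
    """Placeholder policy: returns a candidate answer string."""
    return str(rng.integers(0, 10))

def verifiable_reward(prompt: str, answer: str) -> float:
    """Placeholder verifier: 1.0 if the answer matches the known solution."""
    return 1.0 if answer == "7" else 0.0

def group_relative_advantages(prompt: str, num_rollouts: int) -> np.ndarray:
    """Sample N rollouts and score each against the group mean reward."""
    rewards = np.array(
        [verifiable_reward(prompt, sample_rollout(prompt)) for _ in range(num_rollouts)]
    )
    std = rewards.std() + 1e-8
    return (rewards - rewards.mean()) / std  # advantages weight the policy update

# Scaling the rollout count from a handful to hundreds per prompt increases
# the chance that rare correct solutions receive a positive advantage.
for n in (8, 64, 512):
    adv = group_relative_advantages("What is 3 + 4?", n)
    print(f"rollouts={n:4d}  positive-advantage samples={(adv > 0).sum()}")
```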

Source
