Reimagining LLM Memory: Using Context as Training Data Unlocks Models That Learn at Test-Time
9 January 2026 at 16:58
We keep seeing LLMs with larger context windows in the news, along with promises that they can hold entire conversation histories, volumes of books, or multiple codebases in view at once. And yet, these models still repeat the same mistakes. We still have to copy and paste the earlier context back into the chat for LLMs to "get it". A smart co-worker would pick up on these patterns, adapt…