MIT's recent ""Recursive Language Models"" paper demonstrated that LLMs can process inputs 100x beyond their context windows — not by expanding the window, but by treating prompts as external environments the model explores programmatically. The results are striking: GPT-5-mini outperformed GPT-5 on long-context tasks while using comparable compute. This talk demonstrates how to build the same architecture in TypeScript using LangGraph and Node.js.
We'll implement an RLM system in which a root agent orchestrates recursive sub-agents, each operating on a focused context slice without suffering "context rot." We'll see how to leverage LangGraph's cyclic graph execution to spawn child agents, aggregate their findings into a shared state, and let the orchestrator synthesize results, all while keeping individual context windows small and fresh.
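As a taste of the pattern, here is a minimal sketch of the fan-out step, assuming LangGraph.js's `Annotation` state and `Send` map-reduce APIs. `callModel`, `CHUNK_SIZE`, and the node names are illustrative stand-ins rather than the talk's actual implementation, and it shows a single level of decomposition; the fully recursive version would let a sub-agent fan out again.

```ts
// Minimal sketch, assuming @langchain/langgraph's Annotation and Send APIs.
// callModel, CHUNK_SIZE, and node names are illustrative stand-ins.
import { StateGraph, Annotation, Send, START, END } from "@langchain/langgraph";

// Hypothetical stand-in for a real LLM call (e.g. ChatOpenAI from @langchain/openai).
async function callModel(prompt: string): Promise<string> {
  return `finding for: ${prompt.slice(0, 40)}...`;
}

// Shared state: sub-agent findings accumulate through the reducer,
// so parallel workers append rather than overwrite.
const RlmState = Annotation.Root({
  document: Annotation<string>,
  findings: Annotation<string[]>({
    reducer: (a, b) => a.concat(b),
    default: () => [],
  }),
  answer: Annotation<string>,
});

const CHUNK_SIZE = 4000; // characters per context slice; tune per model

// Root node: plans the decomposition but does no heavy LLM work itself.
async function root(_state: typeof RlmState.State) {
  return {};
}

// Fan-out: one Send per slice spawns a sub-agent with a small, fresh context.
function fanOut(state: typeof RlmState.State) {
  const sends: Send[] = [];
  for (let i = 0; i < state.document.length; i += CHUNK_SIZE) {
    sends.push(new Send("subAgent", { slice: state.document.slice(i, i + CHUNK_SIZE) }));
  }
  return sends;
}

// Sub-agent: sees only its slice, never the whole prompt.
async function subAgent(state: { slice: string }) {
  return { findings: [await callModel(state.slice)] };
}

// Synthesizer: the root model reasons over compact findings, not raw text.
async function synthesize(state: typeof RlmState.State) {
  return { answer: await callModel(state.findings.join("\n")) };
}

export const app = new StateGraph(RlmState)
  .addNode("root", root)
  .addNode("subAgent", subAgent)
  .addNode("synthesize", synthesize)
  .addEdge(START, "root")
  .addConditionalEdges("root", fanOut)
  .addEdge("subAgent", "synthesize")
  .addEdge("synthesize", END)
  .compile();

// Usage: const { answer } = await app.invoke({ document: hugeDocument });
```

The reducer on `findings` is what makes the parallel Sends safe: each sub-agent appends to shared state instead of racing to overwrite it.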
By the end, you'll have a working pattern for processing massive documents, codebases, or datasets that would choke a single LLM call, using tools you can deploy today.
Key takeaways:
- Why bigger context windows don't solve context rot
- Architecting recursive agent graphs in LangGraph (see the cyclic-routing sketch after this list)
- Managing state and tool execution across agent hierarchies
- Cost and latency tradeoffs in production
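The cyclic half of the architecture is what the second takeaway refers to: the orchestrator loops until the input is covered, then exits to synthesis. Below is a minimal sketch of that routing, again assuming @langchain/langgraph; `MAX_DEPTH` and the node bodies are hypothetical placeholders, not the talk's implementation.

```ts
// Minimal sketch of cyclic routing in LangGraph.js; MAX_DEPTH and the
// node bodies are hypothetical placeholders.
import { StateGraph, Annotation, START, END } from "@langchain/langgraph";

const LoopState = Annotation.Root({
  findings: Annotation<string[]>({
    reducer: (a, b) => a.concat(b),
    default: () => [],
  }),
  depth: Annotation<number>({
    reducer: (_prev, next) => next,
    default: () => 0,
  }),
});

const MAX_DEPTH = 3; // hypothetical cap so the cycle always terminates

// One orchestration pass: spawn sub-agents for uncovered slices (elided)
// and record that we've gone one level deeper.
async function orchestrate(state: typeof LoopState.State) {
  return { depth: state.depth + 1, findings: [`pass ${state.depth} findings`] };
}

async function synthesize(state: typeof LoopState.State) {
  return {}; // combine state.findings into the final answer here
}

// Cyclic edge: loop back for another pass, or exit to synthesis.
function routeAfterPass(state: typeof LoopState.State) {
  return state.depth < MAX_DEPTH ? "orchestrate" : "synthesize";
}

export const loop = new StateGraph(LoopState)
  .addNode("orchestrate", orchestrate)
  .addNode("synthesize", synthesize)
  .addEdge(START, "orchestrate")
  .addConditionalEdges("orchestrate", routeAfterPass)
  .addEdge("synthesize", END)
  .compile();
```

An explicit depth cap is worth having even though LangGraph enforces its own configurable recursion limit per invocation (25 steps by default): the cap encodes your cost budget, while the recursion limit is just a backstop against a routing predicate that never exits.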
This talk was presented at Node Congress 2026.