March 26 - 27, 2026
Node Congress
Online

Node Congress 2026

Master Fullstack: JS Backends, DevOps, Architecture

Full remote ticket included with Multipass.

Master Fullstack: JS Backends, DevOps, Architecture, and more! The conference on all things Node.js, DevOps, edge-native workers (Cloudflare and others), serverless, Deno, and other JavaScript backend runtimes, gathering back-end and full-stack engineers from across the globe.

Why Node.js Needs an Application Server
Upcoming
You've been deploying Node.js wrong. For years, the community has treated Node.js as a simple runtime—start a process, put it behind a reverse proxy, scale horizontally. But this approach ignores fundamental architectural problems that become painfully obvious in production: the single-threaded event loop bottleneck, inefficient resource utilization, fragmented tooling, and the operational complexity of managing multiple services.

In this talk, I'll make the case for why Node.js needs a proper application server—and why we built Watt to solve these problems. We'll go deep into the architecture: how SO_REUSEPORT enables kernel-level load distribution without IPC overhead, how multiple workers within a single deployment unit can achieve near-linear scaling, and how a unified runtime can orchestrate frontend frameworks like Next.js alongside backend microservices.

You'll see real benchmark data: 93% faster median latency compared to PM2 clusters, 99.8% reliability under sustained load, and dramatic reductions in infrastructure costs. More importantly, you'll understand why these improvements happen at the architectural level.

Whether you're running Next.js, Fastify, or any CPU-bound Node.js workload, you'll leave with a fundamentally different perspective on how Node.js applications should be built, deployed, and scaled.

No magic. No hype. Just better architecture.
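The scaling claims above hinge on running many workers behind a single port. Watt's actual mechanism (SO_REUSEPORT) lives in the kernel and is not shown here; as a rough stdlib approximation of the same one-deployment-unit, many-workers shape, a sketch using Node's built-in cluster module (the port number and restart policy are illustrative, not Watt's):

```javascript
// Sketch: one deployment unit, many workers, one shared port.
// Node's built-in cluster module approximates the shape an application
// server provides; with true SO_REUSEPORT the kernel distributes
// connections, whereas cluster's primary process hands sockets to workers.
import cluster from 'node:cluster';
import http from 'node:http';
import { availableParallelism } from 'node:os';

// Leave one core for the primary; always run at least one worker.
function workerCount() {
  return Math.max(1, availableParallelism() - 1);
}

function startClustered(port = 3000) {
  if (cluster.isPrimary) {
    for (let i = 0; i < workerCount(); i++) cluster.fork();
    cluster.on('exit', () => cluster.fork()); // restart crashed workers
  } else {
    // Every worker handles requests; the primary distributes connections.
    http
      .createServer((req, res) => res.end(`hello from ${process.pid}\n`))
      .listen(port);
  }
}
```

Calling `startClustered()` from your entry point is enough to fan requests out across cores; an application server additionally owns logging, health checks, and worker lifecycle for you.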
No REST for Cap'n Web
Upcoming
Cap'n Web is a new open source library that lets you expose JavaScript and TypeScript APIs across the Web, including over HTTP or WebSocket. Unlike almost all other RPC systems before it, Cap'n Web supports passing functions and objects over RPC "by reference". This simple-sounding feature has deep implications that completely change how APIs are designed. In this talk I will present a series of examples showing how to use Cap'n Web's first-class functions and objects to implement a variety of common API design patterns, such as authorization, pub-sub, and streaming. I will show how Cap'n Web can integrate nicely with reactive UI frameworks, and even touch briefly on how it can be a powerful tool for sandboxing AI agents.
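This is not Cap'n Web's actual API, but the design consequence the abstract describes can be sketched library-free: once a function can cross the wire "by reference", patterns like pub-sub stop needing polling endpoints and fall out of plain function passing.

```javascript
// Library-free sketch of the pattern Cap'n Web enables: the client hands
// the server a function, and the server calls back into the client.
// Over real RPC the callback would be a stub forwarding invocations
// across the connection; locally it is just a function reference.
class Channel {
  #subscribers = new Set();

  subscribe(callback) {
    this.#subscribers.add(callback);
    // Returning an object with a method mirrors passing objects by reference.
    return { unsubscribe: () => this.#subscribers.delete(callback) };
  }

  publish(message) {
    for (const cb of this.#subscribers) cb(message);
  }
}

const channel = new Channel();
const received = [];

const sub = channel.subscribe((msg) => received.push(msg));
channel.publish('hello');   // delivered to the subscriber
sub.unsubscribe();
channel.publish('ignored'); // no longer delivered
```

The interesting part is what is absent: no subscription IDs, no token-based auth endpoint, no unsubscribe route; holding the reference *is* the capability.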
The State of Node.js Security
Upcoming
I will provide an in-depth analysis of the initiatives led by the Node.js Security Team, exploring their significance and the benefits they bring to end users. Since 2022, the team has completed a great deal of work, and the goal of this talk is to showcase the concluded initiatives, highlighting the advancements made in fortifying the security of Node.js applications. Furthermore, I'll unveil what you can expect from upcoming releases, offering a glimpse into the future of Node.js security. From vulnerability management to secure coding practices and beyond, this talk will equip you with valuable insights into the measures taken to enhance protection and ensure a more secure Node.js environment.
Every API is a Tool for Agents with Code Mode
Upcoming
At Cloudflare we have a lot of products. Our REST OpenAPI spec is over 2.3 million tokens. When teams wanted to let AI agents access their services, they did what everyone does: cherry-picked important endpoints for their product, wrote some tool definitions, and shipped an MCP Server that covered a small fraction of their API.

I think we got it all wrong.

The context limit is not an MCP problem. It's an Agent problem. Tools should probably be discovered on demand. CLIs get this for free: self-discoverable and documented by design. APIs just need a little help.

This talk will cover how Code Mode works, why Dynamic Worker Loaders are super cool, and how efficient sandboxes will be the great unlock for Agents.
One Config File To Rule Them All
Upcoming
Node.js introduces --experimental-config-file, a new experimental flag that enables loading a node.config.json file at startup. While it adds yet another config file, this powerful addition finally allows developers to customize Node.js execution in ways that were previously impossible. This talk explores the capabilities unlocked by this feature, including fine-tuned runtime behavior, improved portability, and potential future extensions.
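Assuming the schema described in recent Node.js documentation—a top-level `nodeOptions` object whose keys mirror CLI flags without the leading `--`—a minimal `node.config.json` might look like this (the specific flags are illustrative; check the docs for your Node version):

```json
{
  "nodeOptions": {
    "max-http-header-size": 32768,
    "watch": true
  }
}
```

It is then loaded at startup with `node --experimental-config-file node.config.json app.js`, keeping runtime flags in the repository instead of in shell scripts or `NODE_OPTIONS` environment variables.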
Building Model Context Protocol (MCP) Tools for AI Agents with Cloudflare Workers
Upcoming
Workshop
Public
Confidence Okoghenun
In this hands-on workshop, participants will learn how to create a production-ready Model Context Protocol (MCP) server on Cloudflare Workers. The session covers defining tool endpoints, integrating external APIs, persisting state with KV storage, and globally deploying the server so AI assistants can invoke custom tools in real time. By the end of the workshop, every attendee will have a live MCP server they can extend and secure for their own AI-agent projects.
Register
Unlocking the Power of the Dependency Graph
Upcoming
Node.js applications are increasingly defined by their dependency graph, yet most tooling still treats it as an opaque side effect. node_modules hides structure, workspaces are layered on, and understanding how dependencies relate (or even why they exist) remains surprisingly difficult.

This talk introduces the vlt client and shows what becomes possible when the dependency graph is treated as a first-class JavaScript artifact. By exposing graph construction, resolution, traversal, and querying as reusable JavaScript primitives, vlt enables both powerful CLI workflows and entirely new classes of programmatic tooling. Building the client in JavaScript ensures that advances in package management feed directly back into the ecosystem, strengthening the shared library corpus and raising the ceiling for Node.js tooling.
Garbage Collection Between V8, cppgc (Oilpan), and Native Runtimes
Upcoming
Modern JavaScript runtimes don’t only manage JavaScript objects — they also need to safely and efficiently garbage collect complex native objects written in C++ and Rust. This talk explores how V8’s cppgc (Oilpan) is used in Node.js, Deno, and Cloudflare Workers, comparing shim-based approaches with direct cppgc integration and explaining the performance and lifecycle trade-offs behind each design.
Breaking the Context Ceiling: Implementing Recursive Language Models with LangGraph and TypeScript
Upcoming
MIT's recent "Recursive Language Models" paper demonstrated that LLMs can process inputs 100x beyond their context windows — not by expanding the window, but by treating prompts as external environments the model explores programmatically. The results are striking: GPT-5-mini outperformed GPT-5 on long-context tasks while using comparable compute. This talk demonstrates how to build the same architecture in TypeScript using LangGraph and Node.js.

We'll implement an RLM system in which a root agent orchestrates recursive sub-agents, each operating on a focused context slice without suffering "context rot." We'll see how to leverage LangGraph's cyclic graph execution to spawn child agents, aggregate their findings into a shared state, and let the orchestrator synthesize results — all while keeping individual context windows small and fresh.

By the end, you'll have a working pattern for processing massive documents, codebases, or datasets that would choke a single LLM call, using tools you can deploy today.

Key takeaways:
- Why bigger context windows don't solve context rot
- Architecting recursive agent graphs in LangGraph
- Managing state and tool execution across agent hierarchies
- Cost and latency tradeoffs in production
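LangGraph itself is not shown here, but the recursive decomposition at the heart of the approach can be sketched library-free. In this sketch, `model` is a stand-in for an LLM call, and the split point and budget are arbitrary illustrations rather than anything from the paper:

```javascript
// Library-free sketch of the recursive pattern: a root "agent" splits
// input that exceeds its context budget into slices, recurses on each
// slice with a fresh context, then synthesizes the child results.
// `model` stands in for an LLM call (it just receives text, returns text).
function recursiveProcess(text, budget, model) {
  // Base case: the slice fits in the context budget, so call the model.
  if (text.length <= budget) return model(text);

  // Recursive case: split, process each half with a fresh context...
  const mid = Math.floor(text.length / 2);
  const left = recursiveProcess(text.slice(0, mid), budget, model);
  const right = recursiveProcess(text.slice(mid), budget, model);

  // ...then synthesize the child summaries with one more model call.
  return model(`${left}\n${right}`);
}
```

In a LangGraph implementation, the recursive calls become child nodes in a cyclic graph and the synthesis step writes into shared state, but the control flow is the same.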
DevOps for JavaScript Developers: From Code to Production
Upcoming
Workshop
Kristiyan Velkov
Mentorship available
This workshop closes the old and very real gap between Node.js developers and production.

Many Node.js developers can build APIs, services, and backends that work locally—but things fall apart when it's time to containerize, deploy, scale, and operate them in real environments. This workshop fixes that.

This is where Node.js stops being "just code" and becomes a reliable production system.

What you'll learn:
- Dockerize Node.js applications for development and production using battle-tested patterns
- Design clean, fast, and secure Docker images for Node.js
- Build scalable CI/CD pipelines with GitHub Actions
- Optimize Node.js apps for performance, stability, and observability
Register
We Deserve a Better Streams API for the Web
Upcoming
While the Web Streams API provides a uniform approach to streaming across all runtimes, it comes with steep performance costs and significant implementation complexity. We can, and should, do better.
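For context, a minimal Web Streams pipeline as it runs in Node.js today (the API has been available as globals since Node 18); every chunk crosses a promise-driven controller boundary, which is one source of the per-chunk overhead the talk addresses:

```javascript
// A minimal WHATWG streams pipeline using Node's built-in globals.
// Each chunk passes through a TransformStream controller, and the
// consumer pulls chunks via the stream's async iterator.
const upper = new TransformStream({
  transform(chunk, controller) {
    controller.enqueue(String(chunk).toUpperCase());
  },
});

const source = new ReadableStream({
  start(controller) {
    for (const word of ['node', 'congress']) controller.enqueue(word);
    controller.close();
  },
});

// Drain a stream into an array using for await...of.
async function collect(stream) {
  const out = [];
  for await (const chunk of stream) out.push(chunk);
  return out;
}

// Resolves to ['NODE', 'CONGRESS'].
const resultPromise = collect(source.pipeThrough(upper));
```

The uniformity is real: the same code runs in browsers, Deno, and Workers. The cost is the machinery behind each `enqueue` and each awaited read, which Node's own stream implementations avoid.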
Stop Paying for AI APIs: npm Install Your Way to In-Process Inference
Upcoming
Every Node.js developer adding AI to their apps faces the same choice: pay for external APIs or wrestle with a local inference server like Ollama (which still means making API calls). But there's a third option nobody's talking about: running ML inference *inside* your Node.js process with Transformers.js. In this talk, I'll show you how to generate embeddings, classify text, and run LLMs with nothing more than `npm install`. No API keys, no network latency, no separate processes. Just JavaScript doing machine learning the way it should be: simple, fast, and fully under your control.
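A sketch of what in-process embeddings look like with Transformers.js. The package name and model ID are the commonly documented ones (`@huggingface/transformers`, `Xenova/all-MiniLM-L6-v2`) and may differ in the talk; the first call downloads and caches the model locally, after which everything runs in-process:

```javascript
// In-process embeddings with Transformers.js: no API key, no server.
// The import is dynamic so the helper can be defined without the
// package installed; calling embed() requires `npm install @huggingface/transformers`.
async function embed(text) {
  const { pipeline } = await import('@huggingface/transformers');
  const extractor = await pipeline('feature-extraction', 'Xenova/all-MiniLM-L6-v2');
  const output = await extractor(text, { pooling: 'mean', normalize: true });
  return Array.from(output.data); // plain number[] embedding
}

// Cosine similarity between two embeddings -- pure JavaScript, no service.
function cosine(a, b) {
  let dot = 0, na = 0, nb = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    na += a[i] * a[i];
    nb += b[i] * b[i];
  }
  return dot / Math.sqrt(na * nb);
}
```

Typical usage would be `cosine(await embed('a query'), await embed('a document'))` to rank documents, with the model loaded once and reused across calls.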
Node's Concurrency With the Strength of a Bull With BullMQ
Upcoming
Workshop
Edy Silva
Douglas Marques
2 authors
Node's concurrent nature is already powerful, but we often need to push work out of the main server process for a variety of reasons. In this workshop, we will explore a few scenarios in which work is cleverly handed off to another Node.js process to resolve.
Once we use a queue to distribute workloads, we need to identify the nature of the work to be done: I/O-intensive or CPU-intensive. The first is already well covered by a single Node.js process; for the second, we will need to tweak the worker setup to match the available resources and throughput.
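BullMQ itself needs a Redis instance, so it is not shown live here; the underlying principle—keeping CPU-bound work off the event loop—can be sketched with Node's built-in worker_threads. The fibonacci job is a stand-in for any CPU-intensive task you would hand to a BullMQ worker:

```javascript
// BullMQ distributes jobs across processes via Redis; the core idea it
// serves -- don't block the event loop with CPU-bound work -- can be
// shown with the stdlib worker_threads module. The job code runs in a
// separate thread and posts its result back to the main thread.
import { Worker } from 'node:worker_threads';

function runJob(n) {
  // Eval'd worker code runs as CommonJS, hence the require() inside.
  const jobCode = `
    const { parentPort, workerData } = require('node:worker_threads');
    const fib = (k) => (k < 2 ? k : fib(k - 1) + fib(k - 2));
    parentPort.postMessage(fib(workerData));
  `;
  return new Promise((resolve, reject) => {
    const worker = new Worker(jobCode, { eval: true, workerData: n });
    worker.once('message', resolve);
    worker.once('error', reject);
  });
}

// The event loop stays free while the worker thread computes.
runJob(20).then((result) => console.log('fib(20) =', result)); // prints: fib(20) = 6765
```

With BullMQ the same shape holds, except the queue sits in Redis, the worker is a separate process (or machine), and you get retries, rate limiting, and persistence for free.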
Register