March 26 - 27, 2026
Node Congress
Online

Node Congress 2026

Master Fullstack: JS Backends, DevOps, Architecture

Full remote ticket included with Multipass.

Master Fullstack: JS Backends, DevOps, Architecture and more! The conference on all things Node.js, DevOps, Edge-native workers (Cloudflare & others), Serverless, Deno & other JavaScript backend runtimes, gathering Back-end and Full-stack engineers across the globe.

Why Node.js Needs an Application Server
Upcoming
You've been deploying Node.js wrong. For years, the community has treated Node.js as a simple runtime—start a process, put it behind a reverse proxy, scale horizontally. But this approach ignores fundamental architectural problems that become painfully obvious in production: the single-threaded event loop bottleneck, inefficient resource utilization, fragmented tooling, and the operational complexity of managing multiple services.

In this talk, I'll make the case for why Node.js needs a proper application server—and why we built Watt to solve these problems. We'll go deep into the architecture: how SO_REUSEPORT enables kernel-level load distribution without IPC overhead, how multiple workers within a single deployment unit can achieve near-linear scaling, and how a unified runtime can orchestrate frontend frameworks like Next.js alongside backend microservices.

You'll see real benchmark data: 93% faster median latency compared to PM2 clusters, 99.8% reliability under sustained load, and dramatic reductions in infrastructure costs. More importantly, you'll understand why these improvements happen at the architectural level.

Whether you're running Next.js, Fastify, or any CPU-bound Node.js workload, you'll leave with a fundamentally different perspective on how Node.js applications should be built, deployed, and scaled.

No magic. No hype. Just better architecture.
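The kernel-level distribution the abstract refers to can be tried without Watt: recent Node.js releases expose SO_REUSEPORT through the `reusePort` listen option (Linux; the option was added in Node 22.12/23.1). A minimal sketch, with port and responses chosen for illustration:

```shell
# Two independent Node processes bind the same port; the kernel distributes
# incoming connections between them -- no reverse proxy, no IPC (Linux only).
node -e 'require("node:http").createServer((req, res) => res.end("worker A\n")).listen({ port: 3000, reusePort: true })' &
node -e 'require("node:http").createServer((req, res) => res.end("worker B\n")).listen({ port: 3000, reusePort: true })' &
curl -s localhost:3000   # answered by whichever worker the kernel picks
```

Because the kernel does the balancing at accept time, neither process needs to know the other exists.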
Every API is a Tool for Agents with Code Mode
Upcoming
At Cloudflare we have a lot of products. Our REST OpenAPI spec is over 2.3 million tokens. When teams wanted to let AI agents access their services, they did what everyone does: cherry-picked important endpoints for their product, wrote some tool definitions, and shipped an MCP server that covered a small fraction of their API.

I think we got it all wrong.

The context limit is not an MCP problem. It's an agent problem. Tools should probably be discovered on demand. CLIs get this for free: self-discoverable and documented by design. APIs just need a little help.

This talk will cover how Code Mode works, why Dynamic Worker Loaders are super cool, and how efficient sandboxes will be the great unlock for agents.
TypeScript Is SO SLOW... Or Is It?
Upcoming
Building apps with TypeScript is the norm these days, but there's one major pain point: speed. At first, everything runs smoothly, but as your project grows, build times drag, your editor lags, and even basic code completion slows to a crawl. The good news? A solution has been hiding in TypeScript's documentation for years, overlooked but incredibly effective. In this talk, we'll uncover this hidden gem, optimize your setup, and keep TypeScript fast as your app scales.
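The abstract doesn't name the feature, but the long-documented tools for this problem are incremental builds and project references. A hypothetical root `tsconfig.json` for a monorepo (paths are placeholders; each referenced package also sets `"composite": true`):

```json
{
  "compilerOptions": {
    "incremental": true,
    "skipLibCheck": true
  },
  "references": [
    { "path": "./packages/core" },
    { "path": "./packages/ui" }
  ]
}
```

With this layout, `tsc --build` compiles only the projects whose inputs changed, which is what keeps large codebases fast.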
No REST for Cap'n Web
Upcoming
Cap'n Web is a new open source library that lets you expose JavaScript and TypeScript APIs across the Web, including over HTTP or WebSocket. Unlike almost all other RPC systems before it, Cap'n Web supports passing functions and objects over RPC "by reference". This simple-sounding feature has deep implications that completely change how APIs are designed. In this talk I will present a series of examples showing how to use Cap'n Web's first-class functions and objects to implement a variety of common API design patterns, such as authorization, pub-sub, and streaming. I will show how Cap'n Web can integrate nicely with reactive UI frameworks, and even touch briefly on how it can be a powerful tool for sandboxing AI agents.
The State of Node.js Security
Upcoming
I will provide an in-depth analysis of the initiatives led by the Node.js Security Team, exploring their significance and the benefits they bring to end-users. Since 2022, we've accomplished a lot of tasks and the goal of this talk is to showcase the concluded initiatives, highlighting the advancements made in fortifying the security of Node.js applications. Furthermore, I'll unveil what you can expect from upcoming releases, offering a glimpse into the future of Node.js security. From vulnerability management to secure coding practices and beyond, this talk will equip you with valuable insights into the measures taken to enhance protection and ensure a more secure Node.js environment.
npm install && pray
Upcoming
We all know the ritual: add a dependency, trust it implicitly, ship it to production. For years, that worked well enough. But now the attacks have started getting smarter.

Supply chain attacks targeting the npm ecosystem aren't theoretical anymore. Malicious packages that steal credentials, hijack environment variables or silently exfiltrate data over HTTP are showing up in minor version bumps. The JavaScript ecosystem's greatest strength, its openness, has become its greatest liability.

And then we handed our keyboards to AI.

AI assistants are, of course, useful. But they introduce a new category of risk: code you didn't write, don't fully understand, and may never review closely enough. Models can leak API keys into generated output, AI-written code can accidentally delete files or make unintended network calls, or worse. And if someone has poisoned the training data, your AI might do all of this on purpose.

In this talk, we'll walk through real examples of threat vectors, demonstrate how Deno's permission system can stop a supply chain attack in its tracks, and explore how sandboxing your code execution can give you a genuinely safe environment to run AI-generated code without the overhead of standing up Docker infrastructure.

You'll leave with a clearer picture of the threat landscape and practical tools to execute code you might not trust.
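Deno's permission system mentioned in the abstract is default-deny: a script gets no network, file, or environment access unless flags grant it. Illustrative commands (the host and directory are placeholders):

```shell
# No flags: a dependency that tries to exfiltrate data over HTTP, or read
# environment variables, is stopped with PermissionDenied.
deno run main.ts

# Grant only what the app actually needs, scoped to a host and a directory.
deno run --allow-net=api.example.com --allow-read=./data main.ts
```

The scoping is the point: even a compromised transitive dependency can only reach the hosts and paths you listed.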
Building Model Context Protocol (MCP) Tools for AI Agents with Cloudflare Workers
Mar 23, 15:00
Workshop (Public)
Confidence Okoghenun
In this hands-on workshop, participants will learn how to create a production-ready Model Context Protocol (MCP) server on Cloudflare Workers. The session covers defining tool endpoints, integrating external APIs, persisting state with KV storage, and globally deploying the server so AI assistants can invoke custom tools in real time. By the end of the workshop, every attendee will have a live MCP server they can extend and secure for their own AI-agent projects.
Register
One Config File To Rule Them All
Upcoming
Node.js introduces --experimental-config-file, a new experimental flag that enables loading a node.config.json file at startup. While it adds yet another config file, this powerful addition finally allows developers to customize Node.js execution in ways that were previously impossible. This talk explores the capabilities unlocked by this feature, including fine-tuned runtime behavior, improved portability, and potential future extensions.
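In the early experimental releases the file maps flag names under a `nodeOptions` key; a sketch of a possible `node.config.json` (the exact schema may change while the flag is experimental):

```json
{
  "nodeOptions": {
    "max-old-space-size": 4096
  }
}
```

Loaded at startup with `node --experimental-config-file=node.config.json app.js`, which is what lets a project pin runtime behavior without wrapper scripts or NODE_OPTIONS.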
Unlocking the Power of the Dependency Graph
Upcoming
Node.js applications are increasingly defined by their dependency graph, yet most tooling still treats it as an opaque side effect. node_modules hides structure, workspaces are layered on, and understanding how dependencies relate (or even why they exist) remains surprisingly difficult.

This talk introduces the vlt client and shows what becomes possible when the dependency graph is treated as a first-class JavaScript artifact. By exposing graph construction, resolution, traversal, and querying as reusable JavaScript primitives, vlt enables both powerful CLI workflows and entirely new classes of programmatic tooling. Building the client in JavaScript ensures that advances in package management feed directly back into the ecosystem, strengthening the shared library corpus and raising the ceiling for Node.js tooling.
Garbage Collection Between V8, cppgc (Oilpan), and Native Runtimes
Upcoming
Modern JavaScript runtimes don’t only manage JavaScript objects — they also need to safely and efficiently garbage collect complex native objects written in C++ and Rust. This talk explores how V8’s cppgc (Oilpan) is used in Node.js, Deno, and Cloudflare Workers, comparing shim-based approaches with direct cppgc integration and explaining the performance and lifecycle trade-offs behind each design.
Breaking the Context Ceiling: Implementing Recursive Language Models with LangGraph and TypeScript
Upcoming
MIT's recent "Recursive Language Models" paper demonstrated that LLMs can process inputs 100x beyond their context windows — not by expanding the window, but by treating prompts as external environments the model explores programmatically. The results are striking: GPT-5-mini outperformed GPT-5 on long-context tasks while using comparable compute. This talk demonstrates how to build the same architecture in TypeScript using LangGraph and Node.js.

We'll implement an RLM system in which a root agent orchestrates recursive sub-agents, each operating on a focused context slice without suffering "context rot." We'll see how to leverage LangGraph's cyclic graph execution to spawn child agents, aggregate their findings into a shared state, and let the orchestrator synthesize results — all while keeping individual context windows small and fresh.

By the end, you'll have a working pattern for processing massive documents, codebases, or datasets that would choke a single LLM call, using tools you can deploy today.

Key takeaways:
- Why bigger context windows don't solve context rot
- Architecting recursive agent graphs in LangGraph
- Managing state and tool execution across agent hierarchies
- Cost and latency tradeoffs in production
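Stripped of LangGraph and real model calls, the recursive pattern the abstract describes fits in a few lines; `summarize` is a hypothetical stub standing in for an LLM call, and the split strategy is ours for illustration:

```javascript
// Hypothetical stub for an LLM call: returns a bounded "summary" of its input.
const summarize = (text) => text.slice(0, 40);

// Root agent: if the input fits the context budget, handle it directly;
// otherwise split it, recurse into child agents, and synthesize their results.
function recursiveLM(text, contextBudget) {
  if (text.length <= contextBudget) return summarize(text); // leaf: one call
  const mid = Math.ceil(text.length / 2);
  const left = recursiveLM(text.slice(0, mid), contextBudget);
  const right = recursiveLM(text.slice(mid), contextBudget);
  return summarize(left + right); // aggregation sees only two short summaries
}

console.log(recursiveLM("x".repeat(10000), 500).length); // prints 40
```

No single call ever sees more than the budget plus two summaries, which is exactly why the pattern scales to inputs far beyond one context window.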
Building the Node-API Conformance Test Suite
Upcoming
I've been contributing a Conformance Test Suite to the Node.js project, for Node-API implementors to run across different JavaScript engines and runtimes such as Bun, Deno, and React Native.
I want to share the motivation for the project, the architectural decisions, the progress we've made so far, and the key techniques used to keep the implementation runtime-agnostic.
Production-like Testing in CI/CD with Testcontainers and Node.js
Upcoming
You've been there: Node.js tests pass both locally and in CI. You deploy with confidence. Then staging reveals the truth—bugs that only appear with real Postgres 16 collations, actual Redis connection limits, or Kafka partition behavior your in-memory mocks never captured.

The solution is Testcontainers.

Testcontainers is a testing library that provides easy and lightweight APIs for bootstrapping integration tests with real services wrapped in Docker containers. Using Testcontainers, you can write tests talking to the same type of services you use in production, without mocks or in-memory stand-ins. Spin them up, run migrations, execute your Node.js service against them, assert results, auto-cleanup.

In this talk, we'll show you how to use Testcontainers with AWS CDK and AWS Lambda Node.js APIs to ship code faster, with stronger tests and a more reliable CI/CD pipeline.
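A minimal sketch of the library's core API (requires `npm install testcontainers` and a running Docker daemon; the Redis image is our choice for illustration, not the talk's):

```javascript
// Requires: npm install testcontainers, plus a local Docker daemon.
const { GenericContainer } = require("testcontainers");

async function withRedis(run) {
  // Pull (if needed) and start a throwaway Redis, mapped to a random host port.
  const container = await new GenericContainer("redis:7")
    .withExposedPorts(6379)
    .start();
  try {
    // Hand the real host/port to the test body instead of a mock.
    await run(container.getHost(), container.getMappedPort(6379));
  } finally {
    await container.stop(); // auto-cleanup: nothing leaks between test runs
  }
}
```

The same shape works for Postgres, Kafka, or anything else with an image, which is how the tests stay production-like.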
DevOps for JavaScript Developers: From Code to Production
Mar 24, 15:00
Workshop
Kristiyan Velkov
Mentorship available
This workshop closes the old and very real gap between Node.js developers and production. Many Node.js developers can build APIs, services, and backends that work locally—but things fall apart when it's time to containerize, deploy, scale, and operate them in real environments. This workshop fixes that.

This is where Node.js stops being "just code" and becomes a reliable production system.

What you'll learn:
- Dockerize Node.js applications for development and production using battle-tested patterns
- Design clean, fast, and secure Docker images for Node.js
- Build scalable CI/CD pipelines with GitHub Actions
- Optimize Node.js apps for performance, stability, and observability
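One battle-tested pattern for clean, small Node.js images is a multi-stage build; a sketch (the build output path and start script are placeholders for your project's own):

```dockerfile
# Stage 1: install all deps and build (dev tooling stays in this stage only).
FROM node:22-alpine AS build
WORKDIR /app
COPY package*.json ./
RUN npm ci
COPY . .
RUN npm run build

# Stage 2: production image with only runtime deps and the built output.
FROM node:22-alpine
WORKDIR /app
ENV NODE_ENV=production
COPY package*.json ./
RUN npm ci --omit=dev
COPY --from=build /app/dist ./dist
USER node
CMD ["node", "dist/server.js"]
```

Dropping dev dependencies and running as the non-root `node` user are two of the quickest wins for both image size and security.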
Register
Beyond Benchmarks: Node.js, Deno, and Bun in Real Production
Upcoming
Benchmarks are everywhere, but production rarely looks like a benchmark.

This talk compares Node.js, Deno, and Bun from a pragmatic, production-first perspective: debugging, observability, deployment, cold starts, and the kind of edge cases that only appear when real users and real systems are involved.
We Deserve a Better Streams API for the Web
Upcoming
While the Web streams API provides a uniform approach to streaming across all runtimes, it comes with steep performance costs and significant implementation complexity. We can, and should, do better.
Node's Concurrency With the Strength of a Bull With BullMQ
Mar 25, 15:00
Workshop
Edy Silva
Douglas Marques
Node's concurrent nature is powerful already, but often we need to push work out of the main server for several reasons. In this workshop, we will explore a few scenarios in which work is cleverly pushed to another Node process to resolve.
Once we use a queue to distribute workloads, we need to identify the nature of the work: I/O-bound work is already well covered by a single Node.js process, while CPU-intensive work requires tweaking the worker setup to match the available resources and throughput.
Register
Stop Paying for AI APIs: npm Install Your Way to In-Process Inference
Upcoming
Every Node.js developer adding AI to their apps faces the same choice: pay for external APIs or wrestle with a local inference server like Ollama (which still requires API calls). But there's a third option nobody's talking about: running ML inference *inside* your Node.js process with Transformers.js. In this talk, I'll show you how to generate embeddings, classify text, and run LLMs with nothing more than `npm install`. No API keys, no network latency, no separate processes. Just JavaScript doing machine learning the way it should be: simple, fast, and fully under your control.
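A sketch of what in-process embeddings look like with Transformers.js (run as an ES module; the model name is one common choice, not necessarily the speaker's, and the first run downloads the model weights):

```javascript
// Requires: npm install @xenova/transformers
import { pipeline } from "@xenova/transformers";

// Build a feature-extraction pipeline once; inference then runs in-process,
// with no API key and no network round-trip per request.
const embed = await pipeline("feature-extraction", "Xenova/all-MiniLM-L6-v2");

const output = await embed("Node.js can run ML inference in-process.", {
  pooling: "mean",
  normalize: true,
});
console.log(output.data.length); // dimensionality of the sentence embedding
```

After the initial download the model is cached locally, so subsequent calls are plain function calls inside your process.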
Creating a Test Runner: What Happens Behind the Tests?
Upcoming
What's it like to create a test runner from scratch? More than that: how do you test a test runner?

In this talk, I break down Poku, a test runner that makes testing easy for Node.js, Bun, Deno, and you. You will learn about the power of subprocesses, isolation, and the advantages of working with concurrency, all using only native language features.

You will also understand how it is possible, in less than 200 kB, to create a complete test runner and run the same test suite across all backend JavaScript runtimes.