We all know the ritual: add a dependency, trust it implicitly, ship it to production. For years, that worked well enough. But now the attacks have started getting smarter.
Supply chain attacks targeting the npm ecosystem aren't theoretical anymore. Malicious packages that steal credentials, hijack environment variables, or silently exfiltrate data over HTTP are showing up in minor version bumps. The JavaScript ecosystem's greatest strength, its openness, has become its greatest liability.
And then we handed our keyboards to AI.
AI assistants are, of course, useful. But they introduce a new category of risk: code you didn't write, don't fully understand, and may never review closely enough. Models can leak API keys into generated output, and AI-written code can accidentally delete files, make unintended network calls, or worse. And if someone has poisoned the training data, your AI might do this on purpose.
In this talk, we'll walk through real examples of threat vectors, demonstrate how Deno's permission system can stop a supply chain attack in its tracks, and explore how sandboxing your code execution can give you a genuinely safe environment to run AI-generated code without the overhead of standing up Docker infrastructure.
You'll leave with a clearer picture of the threat landscape and practical tools to execute code you might not trust.
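To give a taste of the Deno approach covered in the talk: by default, Deno denies file, network, and environment access, and you opt in with permission flags. The sketch below uses hypothetical file and host names; the flags themselves are real Deno CLI options.

```shell
# Running with no flags: any attempt by the script (or a compromised
# dependency) to touch the filesystem, network, or env vars fails with
# a PermissionDenied error.
deno run server.ts

# Grant only what the script actually needs: network access to one host
# and read access to one directory (example names, not from the talk).
deno run --allow-net=api.example.com --allow-read=./config server.ts
```

A malicious transitive dependency that tries to read `process.env` equivalents or phone home to an attacker-controlled host is blocked at the runtime boundary, not by code review.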
This talk was presented at Node Congress 2026; check out the latest edition of this JavaScript conference.