AI-powered development tools are excellent at helping us ship code faster. However, based on my experience in test automation and identity management, I've noticed that these tools can also introduce subtle security issues that might even impress Skynet.
In this session, I will walk through real-world examples where AI assistants have inadvertently worked against developers, highlighting cases of data leaks, supply chain attacks, and prompt injection vulnerabilities. You will learn practical strategies for catching AI-generated security issues before they impact you. After all, if Skynet ever awakens, let's make sure it isn't an untested AI-generated function that sets it off.
This talk has been presented at React Summit 2025. Check out the latest edition of this React Conference.