Developers have gone full-steam from traditional programming to AI code assistants and vibe coding but where does that leave application security? What are the risks of LLMs introducing insecure code and AI applications manipulated through prompt engineering to bypass security controls?
In a series of real-world application hacking demos I’ll demonstrate how developers mistakenly trust LLMs in generative AI code assistants that introduce insecure code and result in vulnerable applications that attackers easily exploit against you. We don’t stop here. We’ll apply adversarial attacks on neural networks in the form of prompt injection payloads to compromise LLMs integrated into AI application workflows and weaponize them for SQL injection and other business logic bypass.
Join me in this session where you’ll learn how security vulnerabilities such as path traversal, prototype pollution, sql injection and other AI security risks are impacting LLMs and how to leverage AI security guardrails to secure GenAI code.
This talk has been presented at JSNation US 2025, check out the latest edition of this JavaScript Conference.