AI tools like Playwright MCP or Claude Code can generate tests for your code.
Cursor can offer testing suggestions for your codebase that sound logical, within seconds.
Testing has never felt this easy before.
Until the need arises for a full-scale, seamless, and effective testing strategy for your product.
How do you know whether these generated tests and suggestions, however good they look, really fit your standards and needs?
How do you, as the lead engineer, balance generated tests across the different testing quadrants, weigh the “what” and the “when” of each test automation pillar, and design a scalable architecture for your team and beyond?
How do you combine these AI tools to architect a scalable AI-driven testing workflow and still fully own your team’s code quality?
Join my talk and let’s find out.
This talk was presented at JSNation 2026. Check out the latest edition of this JavaScript conference.