Most developers using AI coding tools hit the same wall on real projects: the demos look magical, but production work falls apart. This workshop shows why — and what to do about it.
The core lesson: execution is fast and cheap only when preparation is deep.
What we'll build, live
From an empty directory to a deployed production app — a mobile-first Conference Companion App with the event schedule, speaker profiles, search, and favourites. Attendees will open it on their phones before the session ends.
The four stages of shipping with AI
You'll see a complete production workflow, deliberately weighted toward the work that actually determines quality:
- Research — Gathering everything the AI will need upfront: official docs, code patterns, real data. By execution time, nothing has to be searched for. This is where quality comes from.
- Planning — Breaking the app into clear tasks with acceptance criteria, written live. Skipping this is why most AI builds fail.
- Execution — Ten tasks running in parallel via git worktrees, orchestrated by Navigator (a Claude Code plugin) on a React/Next.js stack. Code and docs written together.
- Review & Ship — Quality gates (tests, lint, types, build), clean commits, merge, deploy to Vercel. Live URL shared with the room.
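The worktree-based parallelism in the Execution stage can be sketched in plain git. Everything here is a minimal, self-contained demo — the repo, branch names, and task names are illustrative assumptions, not the workshop's actual layout, and the quality-gate commands are shown as comments since they depend on the project's package.json scripts:

```shell
set -e
# Throwaway demo repo; in the workshop the repo already exists.
demo=$(mktemp -d) && cd "$demo"
git init -q app && cd app
git -c user.name=demo -c user.email=demo@example.com \
    commit -q --allow-empty -m "initial commit"

# One worktree per task: each task gets its own directory and branch,
# so parallel coding sessions never collide on the working tree.
git worktree add ../task-schedule -b task/schedule
git worktree add ../task-speakers -b task/speakers
git worktree list

# Quality gates would run inside each worktree before merging, e.g.:
#   (cd ../task-schedule && npm test && npm run lint && npm run build)

# Merge a finished task back and retire its worktree.
git merge -q task/schedule
git worktree remove ../task-schedule
```

The point of the pattern: ten tasks means ten independent checkouts of the same repository, each on its own branch, merged back only after the gates pass.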
Bonus: Human + Claude Code vs. fully autonomous agent
While the live build runs, Pilot — our autonomous coding agent — builds the same app from the same spec, on its own, in a separate repository. At the end, we open both side by side: two working apps, two pull request histories, two live URLs. One human-driven, one fully autonomous.
You'll leave with
A repeatable workflow for shipping real software with Claude Code, plus concrete techniques you can apply to your own projects on Monday morning.
This workshop was presented at JSNation 2026; check out the latest edition of this JavaScript conference.