July 6 - 7, 2026
AI Coding Summit
London, UK & Online

AI Coding Summit London

See how AI transforms software development

Full remote ticket included with Multipass.

Don’t get left behind: discover how AI is transforming software engineering at the AI Coding Summit. Dive into cutting-edge talks and hands-on workshops on AI-powered development, network with industry leaders, explore game-changing tools, and stay ahead in a fast-evolving field. Don't miss the biggest shift in software engineering.

The Last Software Engineer
Upcoming
I'm not here to tell you software engineering is ending soon. Nobody can put a reliable date on that, and pretending otherwise is a distraction. But we also have to admit something humbling: a year ago, most of us would not have predicted coding agents would be this good. That should make us less confident about predicting what they'll be able to do one year, or five years, from now.

So let's use "The Last Software Engineer" as a thought exercise. If AI keeps taking over more of the implementation work, what remains most human and valuable for us to do? In this talk, we'll take one step back from the hypothetical end and focus on the durable skill that has always separated great engineers from merely productive ones: judgment.

The future belongs not to people who only know how to build, but to people who know what should be built. We'll talk about product engineering, accountability, trade-offs, constraints, evaluation, and how to keep making software worth having in an AI era.
Learnings From 100+ Experiments Comparing LLMs for AI Coding
On my YouTube channel, AI Coding Daily, I've published 100+ videos comparing models for coding: Opus vs. GPT, Kimi vs. GLM, newer vs. older versions, medium vs. high reasoning effort, and more. From these experiments, I now see clear patterns for evaluating models and deciding which one to choose for specific tasks and projects.
Skill Design for LLM Agents
What makes an agent skill reliable, performant, and maintainable? We will explore a robust approach to skill design, starting with foundational best practices and moving into automated skill generation and validation. The second half of the talk focuses on the critical role of evaluation, demonstrating how tools like SkillGrade and benchmarks like SkillBench let developers catch regressions and ensure their agents behave predictably in complex environments.
Skills, Templates, and Components for Claude Code and AI Coding Workflows
This talk dives into how developers can build structured, repeatable coding workflows using Claude Code's ecosystem: Skills, Subagents, Settings, Hooks, and MCP servers. I'll walk through the architecture behind Claude Code Templates, an open-source project with 120K+ npm downloads and 23K+ GitHub stars, showing how these components work together to create composable, reusable patterns within real software development projects. From automating code reviews to orchestrating multi-agent tasks, we'll cover practical setups that teams can adopt immediately.
For Agents, By Agents: Building AI Tools That Maintain Themselves
Developer tools are no longer built only for humans at a terminal. They are also used, tested, broken, and improved by AI agents.

In this session, I will share how to build tools with agent-reported issues, automated reviews, refactors, and release workflows, to the point that the tools start to maintain themselves and a single maintainer can operate closer to a small team.
Automating Mobile QA with Cloud Agents
This talk shows how to design and operate QA agents that run against real iOS and Android devices hosted remotely. We’ll cover the architecture of a reliable agent, connecting Linux-based infrastructure to mobile devices running on macOS, and integrating outputs like screenshots, recordings, and logs directly into pull requests.
From Prompting to Orchestrating: Coding Is Now a System
We thought AI would help us write code faster. Instead, it's changing what coding actually is.
We started with prompts, then copilots, then agents. Each step felt like a leap forward — until you try to build something real at scale.
Because prompts don’t remember.
Agents don’t coordinate.
And models still hallucinate and miss context.
What’s emerging instead is a different approach: not writing code line by line, but designing systems that produce, validate, and evolve code.
Instead of a single assistant, we orchestrate multi-agent workflows — planning, implementing, reviewing, and testing — with shared context and feedback loops.

In this talk, we’ll cover:
- why prompt-based and single-agent approaches break down
- how multi-agent systems reshape development workflows
- practical patterns for planning, execution, validation, and control loops
- where things fail — and how to make systems reliable

We’ll show how structured orchestration makes agent-based systems actually work in practice — especially when moving beyond isolated, task-level automation.
The shift isn’t from coding to prompting — it’s from coding to designing systems that write code.
Streaming Systems, Hidden Risks, And AI-driven Consequences
Modern AI systems don’t just rely on static datasets—they depend on continuous streams of real-time data to train, update, and make decisions. But what happens when that data can’t be trusted?
In this talk, we explore how streaming data pipelines—often built on systems like Apache Kafka—are becoming a critical and undersecured attack vector for AI-driven applications.
Rather than targeting models directly, attackers can manipulate the data flowing into them. By injecting, modifying, or replaying events in real-time streams, adversaries can:
- Poison training data and degrade model accuracy over time
- Manipulate real-time features used in fraud detection or recommendation systems
- Trigger unintended behaviors in downstream AI systems
- Quietly influence decisions without ever touching the model itself
We’ll examine how these attacks work in practice, from subtle data drift manipulation to targeted event injection, and why they are difficult to detect using traditional security tools.
The talk will break down the weak points in modern data pipelines:
- Lack of validation and trust boundaries in event streams
- Over-reliance on infrastructure-level security (encryption, ACLs)
- Blind spots in monitoring data integrity and semantic correctness
We’ll also explore how these risks evolve in systems that continuously retrain or adapt, where corrupted data doesn’t just affect a single decision—but becomes embedded in the model itself.
Finally, we’ll discuss defensive strategies that go beyond securing infrastructure: treating data as an attack surface, implementing validation and anomaly detection at the data level, and designing pipelines that can detect and recover from adversarial inputs.
This talk offers a new perspective on AI security: not the models themselves, but the data pipelines that feed them, where some of the most impactful and least visible attacks can occur.
AI Reviews AI – Closing the Loop in Agentic Development
AI-generated code is becoming the norm, but who reviews the reviewer? In this session, we explore how to close the feedback loop by letting AI agents review AI-written code. We'll look at local agent setups as well as cloud-based services like GitHub Copilot code review or Greptile, and discuss when each approach makes sense. Walk away with a practical mental model for building a self-correcting AI development workflow, without losing control over your codebase.
What is Claude, Coding?
What can more than 20M public GitHub commits tell us about Claude Code's reach? In this talk, we move beyond vendor narratives to look at the real data: which developers are using Claude Code, what they're building, and crucially, what kinds of problems it's being applied to at the serious end of the stack.
Let’s Teach Neo Kung-Fu: Building an Agentic Skill with MCP Tools
Workshop
Misha Kazakov
AI coding assistants are evolving from simple autocomplete to autonomous agents that can interact with external systems. But how do you teach an agent to follow your workflows and use the right tools at the right time?

In this hands-on workshop, you'll discover:
- What MCP (Model Context Protocol) is and how it standardizes tool integration for LLMs
- How Agent Skills package domain-specific knowledge and workflows for AI agents
- The key differences between MCP servers and Skills, and when to use each
- How to create a custom Skill that orchestrates multiple MCP tools into a cohesive workflow

By the end of this session, you'll build your own AI Skill that uses MCP tools — a Matrix-themed Neo fighting skill.

Who should attend: This workshop is ideal for software developers who use AI coding assistants (Claude Code, Cursor, or similar) and want to extend their capabilities with custom integrations and workflows. No prior MCP or Skills experience required — just bring your curiosity and a laptop.