June 11, 2026
TechLead Conference
Amsterdam

TechLead Conf Amsterdam 2026: Adopting AI in Orgs Edition

Event about leadership and seniority

Full remote ticket included with Multipass.

The Conference for Tech Leads, Staff Engineers, and Technical Engineering Managers.

TechLead Conf 2026 tackles two critical challenges facing technical leaders today: navigating AI adoption in organizations and reducing system complexity. Through real-world case studies from startups to Big Tech, senior engineers and tech leads will share practical insights from the trenches.

Engage in discussion rooms, a hallway track with experts, hands-on practical workshops, and dozens of insightful talks.

GitNation Events Supporter

Catch, trace, and fix bugs across your entire stack.

Current Trends in AI for Technical Hiring
Upcoming
From video screenings and fake faces to technical interviews and take-home tasks, AI agents and assisted coding are changing the face of the hiring process in tech. In this talk, I'll cover both big and niche trends in evaluating engineering candidates in a world where AI assistance is assumed, and how you can keep developing your processes in a secure way that remains fair to candidates while helping you find the talent you need.
Why Engineers Must Become Multipliers in the AI Era
Upcoming
The role of engineers is evolving in the AI era. As development tools become more powerful and accessible, the expectations for engineers are shifting from simply writing code to creating meaningful impact across teams and organizations.
In this talk, Gregor will share the concept of the engineering multiplier: an engineer who amplifies the effectiveness of the people around them, takes ownership beyond implementation, and proactively drives the most impactful work.
Agentic AIrways: Orchestrating a Dev Team with Multi-Agent Systems
Jun 11, 8:00
Workshop
Mo Khazali
In this hands-on workshop, attendees split into teams (Project Management, Development, QA, and Marketing) to build an AI-enabled development team for a fictional airline, AgenticAIrways.

Each team builds an AI agent for its role in the delivery workflow. The PM team defines requirements and passes them to the Dev team, who build features with tool use and domain-specific logic. The Dev team hands off to PM and QA for review and stress-testing with adversarial scenarios and edge cases. Once the system passes muster, it's handed to Marketing to define how the product is communicated to end users.

Agents communicate and coordinate using MCP and Agent-to-Agent (A2A) protocols, mirroring how agentic software will increasingly be embedded in real development workflows.

Whether the airline launches successfully or descends into chaos, every team will walk away with practical experience designing, constraining, and orchestrating multi-agent systems — and a clear understanding of how AI agents can augment each role on a dev team.
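The PM-to-Dev-to-QA handoff the workshop describes can be sketched in plain Python. This is a minimal, library-free illustration of the pattern (no real MCP or A2A SDK); every name below is hypothetical.

```python
# Illustrative sketch of role agents coordinating through structured
# handoff messages: PM defines requirements, Dev implements, QA signs off.
from dataclasses import dataclass, field

@dataclass
class Handoff:
    sender: str
    payload: dict
    history: list = field(default_factory=list)

def pm_agent(msg: Handoff) -> Handoff:
    # PM defines requirements and passes them downstream.
    msg.history.append("pm")
    msg.payload["requirements"] = ["book flight", "cancel booking"]
    return Handoff("pm", msg.payload, msg.history)

def dev_agent(msg: Handoff) -> Handoff:
    # Dev "implements" each requirement (a stand-in for tool-using agents).
    msg.history.append("dev")
    msg.payload["features"] = [f"feature:{r}" for r in msg.payload["requirements"]]
    return Handoff("dev", msg.payload, msg.history)

def qa_agent(msg: Handoff) -> Handoff:
    # QA runs an adversarial check before sign-off.
    msg.history.append("qa")
    msg.payload["approved"] = all(
        f.startswith("feature:") for f in msg.payload["features"]
    )
    return Handoff("qa", msg.payload, msg.history)

def run_pipeline() -> Handoff:
    msg = Handoff("start", {})
    for agent in (pm_agent, dev_agent, qa_agent):
        msg = agent(msg)
    return msg

result = run_pipeline()
print(result.history, result.payload["approved"])
```

In a real multi-agent system each function would be an LLM-backed agent exchanging messages over a protocol such as MCP or A2A; the shared `Handoff` structure stands in for that message contract.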
Register
Effective Thinking in the Age of Augmented Tooling
Upcoming
AI tools are writing more code than ever, but the engineers and leaders getting the most from them aren't the ones with the best prompts. They're the ones who think most clearly.

Drawing on The 5 Elements of Effective Thinking by his former math professor Michael Starbird, Lindsey makes the case that a generalist's toolkit (deep understanding, productive failure, relentless questioning) is the real multiplier in an AI-augmented workflow. The same habits that make you a better thinker make you a better collaborator with machines.

This talk also draws on Vercel's own AI adoption journey: rolling out AI tooling company-wide, developing shared vocabulary for working with coding agents, and building systems that connect AI-generated code to real production impact. Lindsey shares practical lessons on what's worked, what hasn't, and why "learning how to think" may be the most underrated leadership skill in tech right now.
Lean Tech: How to Lead on Creating More Value With AI
Upcoming
The Hype vs. Reality
Trillions of dollars invested in AI are fueling massive excitement. Yet amid the buzz, it's tough to separate fact from fiction. Science paints a sobering picture: MIT reports that 95% of AI investments deliver no value, while METR finds AI cutting developer productivity by around 20%. Sound familiar? In your organization, vocal AI advocates push big claims, often sidelining skeptical senior engineers.

The Problem in Your Teams
This dynamic risks wasting resources and alienating expertise. How do you cut through the noise and pinpoint where AI truly creates value?

Lean Tech: The Scientific Antidote
Enter Lean Tech, an adaptation of Lean Thinking for tech, rooted in Toyota's methods and Deming's scientific approach to work. It's the ideal framework for navigating AI's revolution, focusing ruthlessly on value creation.

Key Lean Tech Principles in Action (from Theodo's Experience):
- Value for the Customer: Spot AI-solvable problems and measure improvements rigorously.
- Tech-Enabled Network of Teams: Empower autonomous teams to experiment with AI tools.
- Right-First-Time: Analyze every issue, no matter how small, to extract lessons.
- Just-in-Time: Track lead-time gains for productivity, not just cycle time.
- Building a Learning Organization: Use 6-step Kaizens, standards, skills matrices, and dojos to scale knowledge.

Proven Outcomes
3x faster legacy modernizations and 2x acceleration on projects where product decisions aren't the bottleneck.

Attendees will walk away with a step-by-step playbook to inject science into AI adoption: identify high-impact opportunities, measure real ROI, foster autonomous experimentation, and build lasting learning systems, ensuring your teams deliver tangible wins without the hype.
Training Engineers for AI Without Turning Them into Prompt Monkeys
Upcoming
AI is reshaping how engineers work, but many organizations are training teams in the wrong direction—optimizing for prompts instead of thinking. This leads to fast output, shallow understanding, and fragile systems. This talk focuses on how tech leaders and senior engineers can adopt AI while preserving engineering judgment, ownership, and long-term system quality. You’ll learn how to train engineers to use AI as a tool—not a crutch—without sacrificing craft or increasing complexity.
Beyond the Hype Cycle: Driving Real ROI with AI in Your Organization
Upcoming
88% of organizations now report using AI, yet only 39% capture meaningful enterprise value from it (McKinsey State of AI Survey, Nov 2025). The gap between "we use AI" and "AI transformed our business" has never been wider. In this talk, I'll dissect why most AI adoption metrics are vanity metrics dressed in executive clothing, drawing on strategic AI research and my own experience rolling out AI-powered tools across IKEA's global supply chain. You'll walk away with a practical framework for measuring what actually matters: workflow redesign depth, decision-quality uplift, and compounding capability gains, not chatbot logins per month or tokens burned by your teams.
Scaling AI Adoption: The Real Challenges of Transforming 300 Engineers
Upcoming
Most companies talk about becoming "AI-native". Very few actually do it.

In this talk, I’ll share how we’re approaching the upskilling of ~300 engineers to move beyond experimentation and into real, repeatable AI-native development.

We’ll cover how we’re introducing new paradigms like AI-Native Engineering (AINE) and Spec-Driven Development, how we’re structuring the individual contributor journey, and how we’re driving adoption across teams with very different levels of maturity.

More importantly, we’ll dive into what doesn’t work: resistance patterns, false starts, over-reliance on tools, and the gap between perceived and actual productivity gains.

This talk will give you a concrete blueprint along with the trade-offs and lessons learned along the way.
AI-Powered Code Review
Jun 2, 14:00
Workshop
Serhii Yakovenko
Every engineering organisation is experimenting with AI coding assistants, but few have built production-grade LLM integrations into their core developer infrastructure. Drawing on that experience, I will share real patterns from deploying an AI-powered code review system across a 400+ person engineering organisation (~200 developers): a competitive evaluation of 4 tools across 18 dimensions, building a webhook-based review architecture with slash commands and auto-review, evolving context enrichment from static rules to AI-powered document selection, managing a 4-model fallback chain on Vertex AI, and measuring impact through a feedback dashboard. Attendees will leave with a battle-tested playbook for integrating LLMs into their own engineering workflows, not as toys but as production infrastructure.

Workshop Structure
1. The Code Review Bottleneck at Scale
2. Tool Evaluation — 4 Candidates, 18 Dimensions
3. Architecture — Webhook Server & Auto-Review
4. Context Enrichment — From Path Rules to AI-Powered Document Selection
5. Model Strategy — Migration & Fallback Chain
6. Measuring Impact — Feedback Dashboard
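The "4-model fallback chain" in the workshop outline follows a common pattern: try each model in priority order and fall back on failure. A rough sketch under stated assumptions: the model names and `call_model` function below are hypothetical stand-ins, not the Vertex AI API.

```python
# Hypothetical sketch of a multi-model fallback chain for code review calls.
class ModelError(Exception):
    pass

def call_model(name: str, prompt: str, broken: set) -> str:
    # Stand-in for a real model API call; raises for "unavailable" models.
    if name in broken:
        raise ModelError(f"{name} unavailable")
    return f"{name} reviewed: {prompt}"

def review_with_fallback(prompt: str, chain: list, broken: set) -> str:
    errors = []
    for model in chain:
        try:
            return call_model(model, prompt, broken)
        except ModelError as exc:
            errors.append(str(exc))  # record the failure, try the next model
    raise RuntimeError("all models failed: " + "; ".join(errors))

chain = ["model-a", "model-b", "model-c", "model-d"]
print(review_with_fallback("diff of the pull request", chain, broken={"model-a"}))
```

In production the chain would also need per-model timeouts, quota-aware routing, and logging of which model actually served each review so the feedback dashboard can attribute quality to the right model.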
Register
Building Blocks of an Agentic Engineering Platform: What SRE Taught Us About Running Agents
Upcoming
Agents are the next distributed system: non-deterministic, autonomous, and tool-connected. Some patterns we rely on to run distributed systems reliably transfer directly; others break down and need rethinking.

This talk applies lessons from DevOps, platform engineering, and SRE to the agent era: enablement structures, hybrid pipelines mixing deterministic and probabilistic steps, SLOs and error budgets for agent reliability, context engineering as the new dependency management, golden paths for non-deterministic actors, and where human judgment belongs in agent pipelines.

Building blocks, patterns, tactics, and hard learnings from running agents in production on enterprise client engagements.
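The SLO and error-budget idea the talk borrows from SRE translates directly to agent runs: with a 99% success objective, 1% of runs are the budget you are allowed to spend on failures. A minimal sketch of that arithmetic (the function name is illustrative):

```python
# SRE-style error budget applied to agent runs: the budget is the fraction
# of runs the SLO permits to fail; spending it gates further rollout.
def error_budget_remaining(slo: float, total_runs: int, failed_runs: int) -> float:
    budget = (1.0 - slo) * total_runs        # failures the SLO allows
    if budget == 0:
        return 0.0
    return (budget - failed_runs) / budget   # fraction of budget left

# A 99% SLO over 1000 agent runs allows 10 failures; 4 failures spends 40%
# of the budget, leaving 60% before the team should slow agent autonomy.
print(error_budget_remaining(0.99, 1000, 4))
```

The interesting design question for agents, as for services, is what counts as a "failure" for a non-deterministic actor: a hard error, a rejected output, or a human override can each consume budget differently.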
The Factory Model for AI Agents: WIP Limits, Flow, and 10x Throughput
Jun 4, 14:00
Workshop
Denis Ermakov
AI agents are becoming part of the software development process, but most teams treat them like isolated tools rather than participants in a structured workflow. Without coordination, agent-driven development quickly turns into chaos: duplicated work, endless retries, and unpredictable delivery.

I will introduce a practical approach to organizing AI agents using principles from lean manufacturing and Kanban flow systems. By applying concepts such as pull-based work, WIP limits, and bottleneck management, engineering teams can orchestrate multiple AI agents—system analyst, developer, and tester—into a predictable software delivery pipeline.

Through a live demonstration using GitHub Projects and modern AI coding tools, I will show how agents autonomously pull tasks, move work across pipeline stages, and escalate to humans only when necessary. The result is a development workflow that reduces coordination overhead while dramatically improving throughput and visibility.
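The pull-based flow with WIP limits described above can be sketched without any real tooling. This is a toy simulation, not the GitHub Projects integration the workshop demonstrates; stage names and the `pull_pipeline` helper are hypothetical.

```python
# Toy Kanban-style pipeline: each stage PULLS the next task only while its
# in-progress count is under the WIP limit, rather than having work pushed in.
from collections import deque

def pull_pipeline(tasks, stages, wip_limit=1):
    """Move tasks through stages in order, respecting a per-stage WIP limit.

    Returns a log of "stage:task" events in processing order.
    """
    log = []
    upstream = deque(tasks)
    for stage in stages:
        downstream = deque()
        in_progress = deque()
        while upstream or in_progress:
            # Pull from upstream only while under the WIP limit.
            while upstream and len(in_progress) < wip_limit:
                in_progress.append(upstream.popleft())
            task = in_progress.popleft()
            log.append(f"{stage}:{task}")      # the stage "works" the task
            downstream.append(task)            # finished work feeds the next stage
        upstream = downstream
    return log

log = pull_pipeline(["t1", "t2"], ["analyst", "developer", "tester"])
print(log)
```

The WIP limit is what makes bottlenecks visible: if the tester stage falls behind, upstream agents stop pulling new work instead of piling up half-finished tasks, which is exactly the escalation point where a human should step in.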
Register
Tech Talking Money: How Engineering Leaders Win Budget by Speaking the Language of Risk
Jun 3, 14:00
Workshop
Viktor Didenchuk
Every engineering leader has been told "We don't have budget" - whether for tackling technical debt, modernising legacy systems, or adopting AI tooling. The problem is rarely the idea itself. It is how we present it. We frame platform health as an engineering preference when it should be positioned as business risk.

In this interactive workshop, Viktor Didenchuk shares a battle-tested framework from leading cloud platform delivery at JPMorganChase that translates any technical investment - from incident tooling to AI adoption - into the three languages executives actually speak: Revenue Risk, Regulatory Exposure, and Operational Resilience. Through three real-world scenarios with live audience polling, attendees will practise reframing technical asks into compelling, quantified business cases that survive quarterly financial reviews.

Walk away with a repeatable playbook you can apply on Monday morning to secure budget for the initiatives your organisation needs - including AI.
Register
Ensuring Quality with AI
Upcoming
While most of the conversation around AI in software engineering is about using it to pump out new features at a rate we haven't seen before, one of the most interesting use cases for AI is ensuring the quality of your product. From PR reviews to bug fixes to code cleanup, AI can help engineering teams focus on what they enjoy working on, while helping them create a better product. 
Organic Leadership in the Age of AI: Why Human Touch Becomes More Valuable Than Ever
Upcoming
In this talk, Kseniia Korostelova shares the results of a real experiment where she attempted to delegate parts of her engineering leadership work to AI. From architecture decisions and feature planning to design reviews and performance feedback, some tasks worked surprisingly well while others failed completely.
These experiences led to the concept of Organic Leadership: a leadership approach that uses AI to remove noise while preserving the human judgment, trust, and context that teams rely on.
You will learn where AI can genuinely augment engineering leadership today, where it cannot, and why the rise of AI may actually increase the value of truly human leadership.
Friends Don’t Let Friends Agent Alone
Upcoming
This talk is about what gets unlocked when engineers pair with each other and an agent, instead of disappearing into silos with one human and one machine. We never paired to type. We paired to stay aligned, challenge assumptions, and make better decisions together. When code gets cheap to produce, the human collaboration layer matters more, not less. We'll explore what effective human+agent collaboration actually looks like in practice and you'll leave with a framework for keeping teams aligned without slowing them down. 

Building for Agent Experience
Upcoming
Every cloud platform was designed for developers: humans who read docs, click dashboards, and push to git. But AI agents are already using these platforms, and they experience them very differently. At Render, we've watched agents parse our marketing pages, struggle with our APIs, and surface (or not surface) our platform in LLM recommendations. Building our MCP server, CLI, and agent skills meant designing for two users at once, and rethinking what "developer experience" even means when the developer isn't human. This talk distills what we learned: where our assumptions failed, what we changed in response, and the concrete principles engineering leaders can apply to build tools, docs, and APIs that serve both humans and AI agents.
Interviewing in the Post-LLM World
Upcoming
As LLMs become everyday tools for developers, the way we interview engineers must evolve.
We will learn strategies to adapt technical interviews, embracing AI as a tool while still assessing judgment, critical thinking, and collaboration.
Your Platforms Matter More Than Ever With AI
Upcoming
Manual coding as the primary activity in software development is ending.

Developers will not spend most of their time writing code. They will review, steer and constrain what AI systems produce. The bottleneck shifts from typing syntax to defining context, boundaries and intent.

When code can be generated instantly, the leverage moves to the system around it. Your internal developer platform, your golden paths, your compliance controls, your deployment standards and your observability become the real foundation of product development.

Without a strong platform, AI accelerates fragmentation. Different tools, inconsistent environments and ad hoc processes will compound into chaos at machine speed. With standardized workflows and paved roads, AI becomes a multiplier for quality and consistency instead of risk.

At the same time, each developer will likely be responsible for more software than ever before. As output increases, cognitive load must decrease. That means less variability, fewer bespoke setups and stronger defaults. The complexity of the systems must go down even if the volume of code goes up.

Internal developer platforms are no longer a productivity initiative. They are a prerequisite for succeeding in the AI era of software development.
Why Building Eval Platforms Is Hard
Upcoming
An eval platform is not just a test runner. You are building shared definitions of “good,” reliable data pipelines, labelling workflows, versioning, and trust in results across many teams and model changes. This session breaks down the hidden complexity, the common failure modes, and the design principles that make evals credible and usable in day-to-day engineering.
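The point that an eval platform is "shared definitions of good plus versioning", not just a test runner, can be made concrete in a few lines. A minimal sketch under stated assumptions: `EvalCase`, `exact_match`, and `run_eval` are illustrative names, not any particular framework's API.

```python
# Toy eval runner: the scoring rule is the team's shared, versioned
# definition of "good", so pass rates stay comparable across model changes.
from dataclasses import dataclass

@dataclass(frozen=True)
class EvalCase:
    prompt: str
    expected: str

def exact_match(output: str, case: EvalCase) -> bool:
    # One shared scoring rule; changing it should bump the eval version.
    return output.strip().lower() == case.expected.lower()

def run_eval(model_fn, cases, scorer, eval_version: str) -> dict:
    scores = [scorer(model_fn(c.prompt), c) for c in cases]
    return {
        "eval_version": eval_version,   # version the eval itself, not just the model
        "pass_rate": sum(scores) / len(scores),
        "n": len(cases),
    }

cases = [EvalCase("2+2", "4"), EvalCase("capital of France", "paris")]
report = run_eval(lambda p: {"2+2": "4", "capital of France": "Paris"}[p],
                  cases, exact_match, eval_version="v1")
print(report)
```

The hidden complexity the session describes starts exactly here: real platforms must also version the case set, handle subjective scorers (human labels, LLM judges), and keep historical results interpretable when either the cases or the scoring rule change.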