June 11, 2026
TechLead Conference
Amsterdam

TechLead Conf Amsterdam 2026: Adopting AI in Orgs Edition

Event about leadership and seniority

Full remote ticket included with Multipass.

The Conference for Tech Leads, Staff Engineers, and Technical Eng Managers.

TechLead Conf 2026 tackles two critical challenges facing technical leaders today: navigating AI adoption in organizations and reducing system complexity. Through real-world case studies from startups to Big Tech, senior engineers and tech leads will share practical insights from the trenches.

Engage in discussion rooms, a hallway track with experts, hands-on practical workshops, and dozens of insightful talks.

GitNation Events Supporter

Catch, trace, and fix bugs across your entire stack.

Current Trends in AI for Technical Hiring
Upcoming
From video screenings and fake faces to technical interviews and take-home tasks, AI agents and assisted coding are changing the face of the hiring process in tech. In this talk, I'll cover both big and niche trends in how to evaluate engineering candidates in a world where AI assistance is assumed, and how you can continue to develop your processes in a secure way that remains fair to candidates while helping you find the talent you need.
Why Engineers Must Become Multipliers in the AI-Era
Upcoming
The role of engineers is evolving in the AI era. As development tools become more powerful and accessible, the expectations for engineers are shifting from simply writing code to creating meaningful impact across teams and organizations.
In this talk, Gregor will share the concept of the engineering multiplier: an engineer who amplifies the effectiveness of the people around them, takes ownership beyond implementation, and proactively drives the most impactful work.
Lean Tech: How to Lead on Creating More Value With AI
Upcoming
The Hype vs. Reality
Trillions of dollars invested in AI are fueling massive excitement. Yet amid the buzz, it's tough to separate fact from fiction. Science paints a sobering picture: MIT reports 95% of AI investments deliver no value, while METR finds AI cutting productivity by 20%. Sound familiar? In your organization, vocal AI advocates push big claims, often sidelining skeptical senior engineers.

The Problem in Your Teams
This dynamic risks wasting resources and alienating expertise. How do you cut through the noise and pinpoint where AI truly creates value?

Lean Tech: The Scientific Antidote
Enter Lean Tech, an adaptation of Lean Thinking for tech, rooted in Toyota's methods and Deming's scientific approach to work. It's the ideal framework for navigating AI's revolution, focusing ruthlessly on value creation.

Key Lean Tech Principles in Action (from Theodo's Experience):
- Value for the Customer: Spot AI-solvable problems and measure improvements rigorously.
- Tech-Enabled Network of Teams: Empower autonomous teams to experiment with AI tools.
- Right-First-Time: Analyze every issue, no matter how small, to extract lessons.
- Just-in-Time: Track lead-time gains for productivity, not just cycle time.
- Building a Learning Organization: Use 6-step Kaizens, standards, skills matrices, and dojos to scale knowledge.

Proven Outcomes: 3x faster legacy modernizations and 2x acceleration on projects where product decisions aren't the bottleneck.

Attendees will walk away with a step-by-step playbook to inject science into AI adoption: identify high-impact opportunities, measure real ROI, foster autonomous experimentation, and build lasting learning systems, ensuring your teams deliver tangible wins without the hype.
Training Engineers for AI Without Turning Them into Prompt Monkeys
Upcoming
AI is reshaping how engineers work, but many organizations are training teams in the wrong direction—optimizing for prompts instead of thinking. This leads to fast output, shallow understanding, and fragile systems. This talk focuses on how tech leaders and senior engineers can adopt AI while preserving engineering judgment, ownership, and long-term system quality. You’ll learn how to train engineers to use AI as a tool—not a crutch—without sacrificing craft or increasing complexity.
Building Blocks of an Agentic Engineering Platform: What SRE Taught Us About Running Agents
Upcoming
Agents are the next distributed system: non-deterministic, autonomous, and tool-connected. Some of the patterns we rely on to run distributed systems reliably will transfer directly; others break down and need rethinking.

This talk applies lessons from DevOps, platform engineering, and SRE to the agent era: enablement structures, hybrid pipelines mixing deterministic and probabilistic steps, SLOs and error budgets for agent reliability, context engineering as the new dependency management, golden paths for non-deterministic actors, and where human judgment belongs in agent pipelines.

Building blocks, patterns, tactics, and hard learnings from running agents in production on enterprise client engagements.
Ensuring Quality with AI
Upcoming
While most of the conversation around AI in software engineering is about using it to pump out new features at a rate we haven't seen before, one of the most interesting use cases for AI is ensuring the quality of your product. From PR reviews to bug fixes to code cleanup, AI can help engineering teams focus on what they enjoy working on, while helping them create a better product. 
AI-Powered Code Review
Upcoming
Workshop
Serhii Yakovenko
Every engineering organisation is experimenting with AI coding assistants, but few have built production-grade LLM integrations into their core developer infrastructure. Drawing on first-hand experience, I will share real patterns from deploying an AI-powered code review system across a 400+ person engineering organisation (~200 developers): a competitive evaluation of 4 tools across 18 dimensions, a webhook-based review architecture with slash commands and auto-review, context enrichment that evolved from static rules to AI-powered document selection, a 4-model fallback chain on Vertex AI, and impact measurement through a feedback dashboard. Attendees will leave with a battle-tested playbook for integrating LLMs into their own engineering workflows, not as toys but as production infrastructure.

Workshop Structure
1. The Code Review Bottleneck at Scale
2. Tool Evaluation — 4 Candidates, 18 Dimensions
3. Architecture — Webhook Server & Auto-Review
4. Context Enrichment — From Path Rules to AI-Powered Document Selection
5. Model Strategy — Migration & Fallback Chain
6. Measuring Impact — Feedback Dashboard
Register
The Factory Model for AI Agents: WIP Limits, Flow, and 10x Throughput
Jun 4, 14:00
Workshop
Denis Ermakov
AI agents are becoming part of the software development process, but most teams treat them like isolated tools rather than participants in a structured workflow. Without coordination, agent-driven development quickly turns into chaos: duplicated work, endless retries, and unpredictable delivery.

I will introduce a practical approach to organizing AI agents using principles from lean manufacturing and Kanban flow systems. By applying concepts such as pull-based work, WIP limits, and bottleneck management, engineering teams can orchestrate multiple AI agents—system analyst, developer, and tester—into a predictable software delivery pipeline.

Through a live demonstration using GitHub Projects and modern AI coding tools, I will show how agents autonomously pull tasks, move work across pipeline stages, and escalate to humans only when necessary. The result is a development workflow that reduces coordination overhead while dramatically improving throughput and visibility.
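As an illustration only (not taken from the workshop materials), the pull-based, WIP-limited flow described above can be sketched in a few lines of Python. The stage names and limits here are hypothetical:

```python
from collections import deque

# Hypothetical pipeline stages with per-stage WIP limits (Kanban-style).
WIP_LIMITS = {"analysis": 2, "development": 3, "testing": 2}

class Pipeline:
    def __init__(self):
        self.backlog = deque()
        self.stages = {name: [] for name in WIP_LIMITS}

    def add_task(self, task):
        self.backlog.append(task)

    def pull(self, stage):
        """An agent pulls work into a stage only if the WIP limit allows it.

        Returns the pulled task, or None if the stage is at capacity or
        nothing is available upstream — the pull-based inverse of pushing
        work onto agents."""
        if len(self.stages[stage]) >= WIP_LIMITS[stage]:
            return None  # stage is the bottleneck; don't start new work
        upstream = self._upstream_of(stage)
        source = self.backlog if upstream is None else self.stages[upstream]
        if not source:
            return None
        task = source.popleft() if isinstance(source, deque) else source.pop(0)
        self.stages[stage].append(task)
        return task

    def _upstream_of(self, stage):
        order = list(WIP_LIMITS)  # insertion order defines the pipeline
        i = order.index(stage)
        return order[i - 1] if i > 0 else None
```

A downstream pull frees a slot upstream, so bottlenecks surface naturally: when `pull` returns None, the stage at capacity is where attention (human or agent) is needed.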
Register
Tech Talking Money: How Engineering Leaders Win Budget by Speaking the Language of Risk
Jun 3, 14:00
Workshop
Viktor Didenchuk
Every engineering leader has been told "We don't have budget" - whether for tackling technical debt, modernising legacy systems, or adopting AI tooling. The problem is rarely the idea itself. It is how we present it. We frame platform health as an engineering preference when it should be positioned as business risk.

In this interactive workshop, Viktor Didenchuk shares a battle-tested framework from leading cloud platform delivery at JPMorganChase that translates any technical investment - from incident tooling to AI adoption - into the three languages executives actually speak: Revenue Risk, Regulatory Exposure, and Operational Resilience. Through three real-world scenarios with live audience polling, attendees will practise reframing technical asks into compelling, quantified business cases that survive quarterly financial reviews.

Walk away with a repeatable playbook you can apply on Monday morning to secure budget for the initiatives your organisation needs - including AI.
Register
Friends Don’t Let Friends Agent Alone
Upcoming
This talk is about what gets unlocked when engineers pair with each other and an agent, instead of disappearing into silos with one human and one machine. We never paired to type. We paired to stay aligned, challenge assumptions, and make better decisions together. When code gets cheap to produce, the human collaboration layer matters more, not less. We'll explore what effective human+agent collaboration actually looks like in practice and you'll leave with a framework for keeping teams aligned without slowing them down. 

Building for Agent Experience
Upcoming
Every cloud platform was designed for developers: humans who read docs, click dashboards, and push to git. But AI agents are already using these platforms, and they experience them very differently. At Render, we've watched agents parse our marketing pages, struggle with our APIs, and surface (or not surface) our platform in LLM recommendations. Building our MCP server, CLI, and agent skills meant designing for two users at once, and rethinking what "developer experience" even means when the developer isn't human. This talk distills what we learned: where our assumptions failed, what we changed in response, and the concrete principles engineering leaders can apply to build tools, docs, and APIs that serve both humans and AI agents.