Build Your Team's Automation Layer: The AI Glue Agent Pattern


Learn an architectural pattern to automate "glue work" and build self-executing workflows. This talk will demonstrate how to use "collaboration planes" with lightweight, locally-hosted AI agents as an invisible automation layer to eliminate manual tasks like exporting data and bridging tools. Through an example of a self-hosted AI stock analyst and insights from my previous projects for CI/CD automation, you'll learn how to shift from being the executor of repetitive tasks to the architect of autonomous systems.

This talk was presented at AI Coding Summit 2026.

FAQ

The 'glue person' is the individual who becomes the human API between tools, often manually executing tasks like running scripts or copying updates between systems, which interrupts deep focus and productivity.

Both cognitive science and engineering practice highlight that deep focus is fragile, with interruptions shattering the state and taking 20-30 minutes to regain full immersion in complex tasks.

Shubh Ravishankar Gawhade is a specialist in bridging technical architecture to build software for global organizations, focusing on transforming complex challenges into efficient, user-centric systems.

Organizations can identify 'glue people' by finding collaboration planes where intent is manually pushed into systems and then inserting AI glue agents to automate and optimize these processes.

A collaboration plane is a human-readable source of truth where intent is declared, and an AI glue agent is an automated orchestrator that executes predictable work, turning intent into action.

Shubh created a Google Sheet as a collaboration plane, capturing narrative units, branching, and localization data that decoupled narrative from code, reducing chaos and improving workflow.

Automation in orchestration aims to replace manual tasks with deterministic, testable processes, allowing humans to focus on strategic contributions rather than repetitive tasks.

AI agents can flag inconsistent speaker names, missing variables, or tone issues, and suggest improvements in narrative text, enhancing the quality and coherence of multilingual content.
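
Before an AI agent reasons about tone, many of these checks can be run deterministically. As a minimal sketch (the `NarrativeRow` shape and `{placeholder}` convention are assumptions for illustration, not the talk's actual schema), a glue agent could flag localized lines whose variables have drifted from the source language:

```typescript
// Hypothetical pre-check an AI glue agent could run before suggesting edits:
// flag localized lines whose {placeholders} don't match the English source.
interface NarrativeRow {
  dialogueId: string;
  speaker: string;
  text: Record<string, string>; // language code -> localized line
}

function placeholders(s: string): Set<string> {
  return new Set(s.match(/\{[a-zA-Z_]+\}/g) ?? []);
}

function flagPlaceholderDrift(row: NarrativeRow, sourceLang = "en"): string[] {
  const issues: string[] = [];
  const expected = placeholders(row.text[sourceLang] ?? "");
  for (const [lang, line] of Object.entries(row.text)) {
    if (lang === sourceLang) continue;
    const got = placeholders(line);
    for (const p of expected) {
      if (!got.has(p)) issues.push(`${row.dialogueId}/${lang}: missing ${p}`);
    }
  }
  return issues;
}
```

Checks like this give the agent a reliable, testable foundation; the fuzzier judgments (tone, phrasing) can then be layered on top as suggestions rather than silent edits.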

Running AI models locally ensures data privacy, control over compliance, cost predictability, and protection from third-party training on internal data.

The shift involves moving from simple if-this-then-that rules to systems where AI agents can understand, reason, and adapt within defined guardrails, enhancing strategic capability.
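
One common shape for "reasoning within guardrails" is to let the model propose any action but have deterministic code decide what actually executes. A minimal sketch, with illustrative action names not taken from the talk:

```typescript
// Sketch of a guardrail: the agent may propose anything, but only
// allow-listed action kinds are executed; everything else is rejected.
type Action = { kind: string; payload?: unknown };

const ALLOWED_ACTIONS = new Set(["export_csv", "run_validation", "trigger_build"]);

function executeWithGuardrails(proposed: Action, run: (a: Action) => void): boolean {
  if (!ALLOWED_ACTIONS.has(proposed.kind)) return false; // out of policy
  run(proposed);
  return true;
}
```

The point of the pattern is that adaptivity lives in the proposal step, while the execution boundary stays as simple and auditable as an if-this-then-that rule.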

Shubh
18 min
26 Feb, 2026

Comments

  • Cassia Almeida
    I'd like to share the same feedback as Pascal's. This was an audiobook, rather than a proper presentation. It was a shame as the topic is interesting.
  • Shubh
    @Pascal Klötzer Thanks for the honest feedback. You're right that the format leaned heavily on the content rather than the delivery. I was focused on packing in the technical patterns. But I hear you, presenting it needs more energy and pacing. Appreciate you taking the time to comment.
  • Pascal Klötzer
    If you are just reading a text out loud and not presenting the topic, make it an AI-created video for entertainment.
Video Summary and Transcription
Speaker specializing in technical architecture discusses the impact of interruptions on deep focus, productivity, and software development challenges in global organizations. Content adaptation to multiple languages required manual glyph corrections and a structured collaboration plane. Transitioning to automated systems raises the question of why orchestration remains manual, with AI as a glue agent. Strategic stock analysis involves N8n workflows, specialized agents, and structured contracts for consistency. Workflow architecture includes JSON contracts, specialized agents, and structured HTML emails for stock analysis. Workflow reliability focuses on independence, idempotence, and robust frameworks for adaptability. Automation and AI integration emphasize process streamlining and human intervention in decision-making. Intelligent orchestration systems involve encoding intent by humans and AI turning it into action, focusing on clear workflows and architecting for control.

1. Challenges in Global Software Development

Short description:

Speaker specializing in technical architecture discusses the impact of interruptions on deep focus, productivity, and software development challenges in global organizations.

Hello, thank you for joining my talk. My name is Shubh Ravishankar Gawhade, and I specialize in bridging technical architecture to build software for global organizations. My work focuses on transforming complex challenges into efficient, user-centric systems by architecting pipelines that turn human intent into scalable products.

In almost every organization, there is still a hidden role: the glue person. This isn't a job title; it's the person who becomes the human API between your tools, the one who's pinged to quickly run a script, manually export that report, or copy updates between Jira, Sheets, and a status doc. It looks like productivity, but it's actually a tax on your most expensive resource: deep, uninterrupted focus. Cognitive science and engineering practice both show just how fragile that focus is. Once an engineer is in deep flow, a single interruption can shatter that state.

Studies consistently find that regaining full immersion in a complex task takes around 20 to 30 minutes. Add attention residue, where a part of your mind is still stuck on the previous task. This effect is further amplified by daily meetings and the time spent in preparing for them. Each quick ping quietly burns a significant chunk of your day, resulting in fewer hard problems being solved, more bugs, and a culture that rewards interrupt-driven responsiveness over strategic contributions.

I learned this the hard way. In my Honors narrative project, our story lived in Twine. Audio assets lived elsewhere, and our game code was littered with functions triggered by dialogue IDs. Each change turned into a forensic investigation. Search the code, cross-reference with Twine, DM the narrative lead to confirm the update, rebuild, and hope nothing else broke. The system was optimized for the first version, not the second. It was built for creation, not iteration. That pain compounded when I started working on multilingual educational narrative projects for global organizations and audiences.

2. Addressing Multilingual Content Challenges

Short description:

Content adaptation to multiple languages required manual glyph corrections and a structured collaboration plane. The system aimed to eliminate fragmentation and automate processes through a shared data source.

Suddenly, our content had to work in English, Hindi, Thai, Burmese, and more. For some engines, we couldn't paste text directly. Non-Latin scripts required legacy font encodings like Kruti Dev for Hindi and Zawgyi for Burmese, and Thai vowels would render incorrectly without manual glyph correction. The challenge was no longer just story logic. It was the entire pipeline, from narrative to encoded, rendered content.

So I built a collaboration plane, a Google Sheet, as the single source of truth for production. Each row became a narrative unit: a tutorial step, a line of dialogue, a branching choice node. Among the columns captured were a dialogue ID to decouple narrative from code; UI type and speaker to drive UI, animation, and speaker info; HasChoice and LinkedDialogueID to define branching as data, not code; localized text columns for English, Hindi, Burmese, and more, so translators worked in context; and encoding columns for engines that needed specific fonts or glyph mappings.
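
To make the column contract concrete, here is one way a consumer could model a row; the field names are assumptions inferred from the columns described above, not the project's actual schema:

```typescript
// Illustrative model of the sheet's row contract. Branching is data:
// the engine follows LinkedDialogueID lookups instead of hard-coded IDs.
interface SheetRow {
  dialogueId: string;               // decouples narrative from code
  uiType: "dialogue" | "tutorial" | "choice";
  speaker: string;                  // drives UI, animation, speaker info
  hasChoice: boolean;
  linkedDialogueId?: string;        // branching as data, not code
  text: Record<string, string>;     // "en", "hi", "my", ... localized columns
  encoded?: Record<string, string>; // engine-specific font/glyph mappings
}

function nextRow(rows: SheetRow[], current: SheetRow): SheetRow | undefined {
  if (!current.hasChoice || !current.linkedDialogueId) return undefined;
  return rows.find(r => r.dialogueId === current.linkedDialogueId);
}
```

Because the graph is plain data, writers and designers can rewire branches in the sheet without touching engine code.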

This sheet became a contract between disciplines. Writers saw a script with structure, designers saw a tunable graph, translators worked in context, and developers consumed structured data instead of chasing hard-coded IDs. The chaos of "what and which version is live" disappeared, but a new problem emerged. Every time a translator finished a column or a writer fixed a typo, the message came: "The sheet is updated, can you please run the build?" I had become the glue person for my own system, exporting CSVs, running encoding and validation scripts, triggering builds, and sanity-checking outputs.
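
That manual loop (export, validate, build) is exactly what a glue agent can take over. A minimal sketch of the idea, under the assumption that each step is a pure function over raw sheet rows (the step names here are illustrative, not the project's actual scripts):

```typescript
// Hypothetical glue-agent pipeline: each stage is a pure, testable function,
// so the whole chain can run from a webhook instead of a human being pinged.
type Row = string[];
type Step = (rows: Row[]) => Row[];

const pipeline = (...steps: Step[]): Step => rows =>
  steps.reduce((acc, step) => step(acc), rows);

// Example stages (assumptions for illustration):
const trimCells: Step = rows => rows.map(r => r.map(c => c.trim()));
const dropEmpty: Step = rows => rows.filter(r => r.some(c => c !== ""));

const prepareExport = pipeline(trimCells, dropEmpty);
```

Because each stage is deterministic and independently testable, the same chain behaves identically whether a human triggers it or a sheet-change webhook does.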
