Vibe Coding at Enterprise Scale: What Happens When AI Joins Your Dev Team


What if you let AI code alongside your devs—would it speed things up or slow them down? At Modus Create, we ran a 3-month internal experiment to find out. Two squads were tasked with building the same app: one squad had full access to AI coding agents like GitHub Copilot and Cursor, while the other worked with zero AI assistance. The results? Surprising, nuanced, and extremely relevant for any enterprise thinking about bringing AI tools into their software delivery process. In this session, we'll share productivity metrics, insights on code quality, cultural impact, and what we learned about pairing developers with AI. You'll walk away with a real-world perspective and a practical guide on how to scale AI-powered development across your engineering org—without the hype.

This talk was presented at AI Coding Summit.

FAQ

Wesley Fuchter is a senior principal engineer at Modus Create with 14 years of experience in software engineering, specializing in AWS, Java, JavaScript, and serverless applications.

The experiment aimed to understand how AI, specifically coding agents like GitHub Copilot, can enhance software development productivity and quality.

The experiment found a 45% productivity gain using AI, with the AI-assisted team completing tasks 40% faster than the non-AI team, despite having fewer members.

AI enabled developers to work faster and more efficiently, even allowing a Python engineer to be productive in a new tech stack, but still required human oversight for decision making and code review.

The experiment used AI tools like GitHub Copilot and Cursor, and development technologies such as Node.js, React, Ionic, Capacitor, and Amazon Web Services.

Wesley Fuchter believes AI can significantly change software development, serving as a productivity boost rather than a replacement for human expertise.

Effective AI use involves clear communication with AI, breaking tasks into atomic steps, and maintaining a validation loop with human oversight for quality assurance.

Developers should provide clear and concise prompts with rich context focused on the task at hand to achieve the best results with AI.

Human oversight is crucial to ensure architectural clarity, make informed decisions, and validate AI-generated code, maintaining high software quality.

Vibe coding relies solely on AI output without code review, while AI-assisted coding keeps a human in the loop for decision making and for reviewing AI-generated code.

Wesley Fuchter
22 min
23 Oct, 2025

Video Summary and Transcription
Wesley discusses the potential benefits of integrating AI tools in software development, demonstrating a 45% productivity improvement with AI usage. The importance of engineers providing clear direction for AI tasks and overseeing AI-generated code is highlighted. Engineers are reminded that AI is a complement, not a replacement, for human expertise, and that effective AI usage involves clear communication and human oversight. Contextual guidance and effective task management strategies for AI interaction are emphasized.

1. Insights on AI Tools in Software Development

Short description:

Wesley shares insights on using AI tools in software development. He discusses the balance between opposing opinions on vibe coding, emphasizing the need to understand how AI tools can aid software development. Wesley presents an internal experiment conducted at Modus Create, comparing the performance of teams using and not using AI coding tools. The experiment assessed the impact of AI on development speed and team size, showcasing the potential benefits of integrating AI into the software development process.

Hello everyone, thanks for having me here today. I'm Wesley, and I'm going to introduce myself real quick, but I want to talk about vibe coding at enterprise scale. We ran a pretty good experiment at Modus Create a couple of months ago, and I would like to share it with you. Alright? So, introducing myself: I am Wesley Fuchter, I am from Brazil, and I work at Modus Create as a senior principal engineer. I have been there for almost 7 years now, and I have 14 years working as a software engineer. I have done many different things as a software engineer: front-end, back-end, DevOps, things like that as well. My expertise is around Amazon Web Services, Java, JavaScript, and serverless applications. Lately I have been very focused on understanding how these new AI tools we are getting, coding agents, are going to change the way we build software; I have been very focused on that for the last 9 months now. While talking to the community, having conversations with folks on Reddit, Twitter/X, those different platforms, I have seen a lot of people making different comments, holding different opinions about vibe coding, about using coding agents for software development, right? And the pattern I have seen is basically two groups in opposite directions: people who think that software development as a profession is now gone, or soon to be gone, and on the other side, people who think that vibe coding, or using agents for coding, is something we should not be doing at all. So I am here today to try to bring some balance to this conversation by sharing some facts, some useful learnings we got over the last few months. I do believe that with AI, the way we build software is changing a lot. I don't think that in the long run, in the next few years, we are going to be coding the way we have been coding for the last 50 years, or at least for the last 25 years.
I think that AI and coding agents, GitHub Copilot, Cursor, Windsurf, those tools, are going to change things quite a bit. But I don't have a strong opinion about it being something bad or good. I think there is a balance, and we need to think about exactly how those tools can help us. For that, at Modus Create, we decided to vibe-code an application internally, an internal experiment to really understand how AI can help us build better software and better applications for our clients.

So, I'm going to talk about that: I'm going to present data from our experiment and the learnings we got. I don't want to talk about hype, about trends, about what people are saying out there; I'm going to focus on the learnings we got from that experiment and what it taught us. We had 10 people working on a three-month experiment between March and May, so it was a big effort for us, and we learned a lot that I want to share. The structure of the experiment: we decided to build a simple mobile application that helps users translate medical jargon into plain language, in a couple of languages. Essentially, you grab your phone, you take a picture of a doctor's report or a lab exam report, and we translate the diagnosis it contains into plain language, of course using AI in the background as well. We built that application twice, with two teams. One team, what we called the DIY, do-it-yourself, team, was not allowed to use any AI tools for coding. They could use other tools, Stack Overflow, the traditional software development workflow we are used to, right? But they could not use any AI code-generation tools. The other team, the AI team, was allowed to use Cursor and GitHub Copilot to generate the code, and they were specifically asked not to write code at all unless they really needed to; when they did need to code by hand, they had to document it so we would understand why. The AI team was only two full-stack engineers, and the no-AI team was three developers, so the smaller team was the one working with AI. Both teams were asked to write plenty of documentation and to keep time sheets during development.
That way we could track, down to the individual task, how many hours they took on a given feature. We used modern technologies to develop the application: Amazon Bedrock as the LLM backend on AWS, AWS Fargate, and Node.js. The application itself was built with React, Ionic, and Capacitor, and deployed with GitHub Actions to both the Apple App Store and Google Play, internally only, of course. The most important part of the experiment was that at the very beginning we set up hypotheses about our expectations. We thought that the use of AI, in this case Cursor and GitHub Copilot, would make us at least 50% faster with a team that was 30% smaller. So once again, the AI team had one engineer fewer than the three engineers of the non-AI team, meaning roughly 30% reduced capacity.
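As a hedged illustration of the stack described above, here is a minimal TypeScript sketch of how a Node.js backend might build the request body for an Anthropic model on Amazon Bedrock to translate a medical report. The prompt wording, token limit, and helper names are assumptions for illustration, not what the Modus Create teams actually wrote.

```typescript
// Illustrative sketch only: building an Anthropic-style request body for
// Amazon Bedrock's InvokeModel API. The prompt text and max_tokens value
// are assumptions, not taken from the experiment's code base.

interface BedrockMessage {
  role: "user" | "assistant";
  content: string;
}

interface BedrockRequestBody {
  anthropic_version: string;
  max_tokens: number;
  messages: BedrockMessage[];
}

function buildTranslationRequest(
  reportText: string,
  language = "plain English"
): BedrockRequestBody {
  return {
    // Version string required by Anthropic models on Bedrock.
    anthropic_version: "bedrock-2023-05-31",
    max_tokens: 1024,
    messages: [
      {
        role: "user",
        content:
          `Rewrite the following medical report in ${language}, ` +
          `for a reader with no medical background:\n\n${reportText}`,
      },
    ],
  };
}
```

The JSON-serialized body would then be passed to the Bedrock runtime's `InvokeModel` call (for example via `@aws-sdk/client-bedrock-runtime`), with the model's reply returned to the mobile client.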

2. Impact of AI Expertise in Software Development

Short description:

Using AI improved productivity by 45%, enabling a team of two to develop an application in half the time of a three-person team without AI. The experiment demonstrated that AI tools can help engineers be productive in new languages and tech stacks. Quality assessment showed no major differences between AI-assisted and non-AI code bases, emphasizing the importance of engineers' clarity in directing AI tasks.

We also hypothesized that, with AI, deep technical expertise in a given area of the software development lifecycle would not be required. To give an example, working on this project as a full-stack engineer we had someone who had been primarily a backend Python engineer for his entire career. He was asked to work full stack, writing React code with TypeScript on both the front-end and the backend. So it was a new language and a new tech stack for him. We also expected the quality of the AI team's code base to be at least the same as the non-AI team's. We ran Sonar static analysis on both code bases and compared the results to understand whether the AI-assisted code base was at least as good as the one built without AI. And of course, we expected to learn a lot about what we should do and what we should avoid while using AI for coding.

The results of the experiment were pretty good. We saw a 45% productivity gain from using AI, meaning the AI team delivered the application 40% faster than the team working without AI, at that 30% reduced capacity. A team of two people developed the application in almost half the time of the three-person team that was not using AI. The Python engineer I mentioned was able to be productive and deliver features in TypeScript. Part of that is that the languages are fairly similar: TypeScript and Python differ in syntax, but the basics of logic and software design are the same. The developer was not required to hand-write TypeScript during the project; he was required to think about the features and how to implement the application. So AI helped him be productive in a language he had never worked with before.

The quality delivered in both code bases was good and similar. Both had a couple of issues here, a couple of bugs there, normal stuff, nothing complicated to fix. Neither code base had major security hotspots. So when we evaluated both code bases, they were pretty much the same in terms of quality. But of course, nothing is perfect. We also noticed that the same Python engineer, whom we asked during development to act as a DevOps person on Amazon Web Services, especially around the application load balancer setup in the architecture, struggled a bit at that point. Not because AI could not help, but because he was not clear on what he wanted to build at that moment; he was not aware that he should use an application load balancer versus an API gateway, for example. Not knowing exactly what to ask the AI for reduced his performance. He was still able to deliver the feature; it just took longer than it took the DevOps person on the other team, who was not using AI. This shows that AI can do the job, but you as an engineer must know exactly what you are asking the AI to do. You cannot just say, hey, build this for me.
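The validation loop with human oversight described in this talk can be sketched in TypeScript. This is an illustrative skeleton, not a real coding-agent API: the `Agent` and `Validator` interfaces are hypothetical stand-ins for a tool like Cursor or Copilot on one side and your lint and test suite on the other.

```typescript
// Illustrative sketch of an AI-assisted validation loop: generate a change,
// validate it, feed failures back into the next prompt, and escalate to a
// human reviewer after a bounded number of attempts. The interfaces below
// are hypothetical; wire them to a real agent and a real test runner.

interface Agent {
  // e.g. a call out to a coding agent, with prior failure feedback in the prompt
  generate(task: string, feedback?: string): string;
}

interface Validator {
  // e.g. run lint + unit tests against the generated change
  check(code: string): { ok: boolean; feedback?: string };
}

function validationLoop(
  task: string,
  agent: Agent,
  validator: Validator,
  maxAttempts = 3
): { code: string; passed: boolean; attempts: number } {
  let feedback: string | undefined;
  let code = "";
  for (let i = 1; i <= maxAttempts; i++) {
    code = agent.generate(task, feedback);
    const result = validator.check(code);
    if (result.ok) return { code, passed: true, attempts: i };
    feedback = result.feedback; // validator output becomes context for the retry
  }
  // After maxAttempts, stop looping and hand off to a human reviewer.
  return { code, passed: false, attempts: maxAttempts };
}
```

The key design point is the bounded retry: the loop never ships unvalidated output, and when the agent cannot satisfy the validator, a human makes the call rather than the loop running forever.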

Let’s demystify the secrets of modern JavaScript tooling. In this workshop, you will learn how linters, compilers, and code transforms work. You will take your first steps building your own developer tools, writing an ESLint rule, a compiler plugin, and a code transformation. Learn these skills to help your team and deepen your understanding of computer languages.Workshop Plan:A preview of how ASTs can be usedA brief walkthrough of how compilers workAn intro to working with an ASTThe secret to making thousands of changes in secondsHow to apply this skill