AI-Powered Frontend Development: Building Better UIs Faster


AI tools are changing how we design, build, and test user interfaces. This talk looks at current AI technologies that are improving the frontend development process and how to use them, including:

- Using Visual Copilot to convert Figma designs into production-ready code;

- Implementing AI agents for automated code writing and testing;

- Real-world examples showing how AI tools can improve development speed and joy.

This talk was presented at Productivity Conf for Devs and Tech Leaders. Check out the latest edition of this tech conference.

FAQ

Large language models (LLMs) are advanced predictive text systems powered by large neural networks, designed to predict the next word or text based on input. When the networks are sufficiently large, they can exhibit something like computational thinking and behave in surprisingly brain-like ways.

LLMs can enhance front-end development by automating code generation, iterating on code based on specific prompts, fixing errors, and providing design solutions that align with brand aesthetics.

Cursor IDE is a development environment that integrates with LLMs to automate and assist in coding tasks. It features agent mode for contextual code generation and error resolution without manual specifications.

Builder.io's Figma plugin uses AI to generate design elements that match brand styling. It allows users to create components like contact forms in Figma with AI-generated styling and functionality.

Agent mode in Cursor IDE is a feature that allows the IDE to automatically determine context, generate relevant code, and fix errors without the user needing to manually specify every detail.

Using tests with LLMs allows developers to write tests first, then code, letting the LLM iterate until the tests pass. This approach provides stronger guarantees of code correctness.

Builder.io's CLI can convert Figma designs to code directly in the IDE. It allows designs to be updated or modified in Figma and then automatically reflected in the codebase through simple CLI commands.

The author suggests using AI as a collaborative tool, similar to working with a junior developer. This involves providing feedback, refining processes, and using LLMs to automate repetitive tasks, enhancing productivity and creativity.

Coding is the primary use case for large language models. They are particularly effective in assisting with coding tasks, especially in front-end development.

Parallel development involves starting with draft designs and implementing functionality while allowing design changes to be integrated later. AI facilitates this by updating code to reflect design changes without requiring a complete overhaul.

Steve Sewell
19 min
27 Mar, 2025

Video Summary and Transcription
Today's Talk introduces the use of large language models (LLMs) to enhance front-end development. LLMs can act like our brains by maximizing the good parts and minimizing the bad parts. A demo in Cursor, an IDE, showcases how LLMs can be used with the builder.io Figma plugin. The Talk emphasizes the automation of tasks, such as adding a settings button and resolving errors, with the AI agent. Feedback and manual verification are crucial to ensure desired results. Tests and continuous iteration are recommended for stronger guarantees of correctness. Monitoring and guiding the AI agents is important to stay on track. Connecting to other tools like Figma and using AI prompting can further enhance code generation. The CLI enables code base integration and parallel development. Visual prototyping and seamless updates are possible with the Builder tool. Overall, the Talk highlights how LLMs can revolutionize front-end development by automating tasks, improving efficiency, and facilitating collaboration.

1. Introduction

Short description:

Today I want to talk to you about how to build better front-ends faster by using LLMs. Large language models are just very fancy predictive text systems. They can start acting a lot like our brains think. The whole goal is to figure out how to maximize the good parts and minimize the bad parts. We'll focus on front-end development because LLMs are particularly strong there. I'm going to start demoing in Cursor, an IDE, and show you a basic use case with the builder.io Figma plugin.

What's up, everybody? Today I want to talk to you about how to build better front-ends faster by using LLMs in ways you probably don't already use them today. I guarantee you'll pick up at least one new trick here. So who am I? I'm the founder and CEO of Builder.io. I've also made some cool open-source projects. We'll talk a little bit more about some of this stuff in a bit.

So to step back: large language models, it's very easy to forget, are just very fancy predictive text systems. You give them a chunk of text and they predict what text would come next. That's it. All the amazing things really are just powered by that one basic technology. Now under the hood, these are very large neural networks, designed after how our brains work. And it turns out, shockingly, that if you have a sufficiently large neural network, it can actually start acting a lot like our brains think. It's almost like computational thinking, thinking on demand, which makes them surprisingly smart sometimes. The thing we tend to find with large language models is that the whole goal is to figure out how to maximize the good parts and minimize the bad parts, because there's lots of both. And interestingly, across LLM use cases, coding is the number one use case by most studies, especially when you look at what people are actually using and adopting most. So we have the privilege of being able to use this technology to help us with coding, as it's surprisingly good at that compared to other tasks. And so today we want to dive a bit more into how you can best use them. We'll focus on front-end development in particular because LLMs are particularly strong there.

And from here forward, I will be live demoing, and I'm going to start over here in Cursor. Let's go into this repo. Cursor is the IDE I'm going to be demoing on. Cursor is not the only great IDE that has these features; I would say Windsurf is another great option, you cannot go wrong with it. But let's go into a basic use case here, and let me explain first what I'm going to do. I'm working on this Figma plugin, the builder.io Figma plugin, and I added a cool new feature where we can design things with AI. So maybe I'll choose, say, a contact form. You can select something in Figma and it'll use AI to generate something based on what you ask for, in the style of what you've selected. That's a common problem with AI: I don't want generic Tailwind-looking AI slop, I want something that looks like my actual brand or the way I actually design. Here we can see it's building out my contact form, step by step.

2. Adding a Settings Button

Short description:

I want to add a settings button that launches a pop over with a use auto layouts checkbox in my code base, without manually adding anything to context. The agent automatically figures out the relevant files and context.

And it comes out with similar styling, similar branding to what you saw. Then when I'm ready, I can import it into Figma and, ba-bam, we've got it. Great. But I want to add a new feature to this, and the feature involves a UI: a button that pops over a few options. I want to add a settings button next to this button so that I can change the style of import. Right now we just do a generic import, but I want to make it so that we can also import using nested frames and auto layout. And so whereas you'd normally go and start building out the code for all this, let me show you a much easier way that I use pretty much every day.

Jumping back over to my IDE, I'm going to go over to Cursor and use their agent mode. This is their new UI, by the way; I think most people have the new UI, and agent mode, I believe, is the default now. You can also get to it with Command+I to pop it in and out. So here I'm just going to type what I want. Now, this is a real, full-size code base. It has a lot of files, a lot of stuff in it, a lot of code. And I'm going to not add anything manually to context; I'm going to let the AI figure things out. So I'm just going to say: next to the import to Figma button, add a settings button that launches a popover that has a use auto layouts checkbox. When that box is checked, set use frames to true in HTML2Figma. And let's go. I'm getting pretty specific with my prompt there, but I'm only telling it, hey, I want to add a settings button; I didn't describe how I want it to look. Generally these agentic features are pretty good at figuring that out: if I use a certain icon library, it's going to use that icon library, et cetera. I told it that I want to launch a popover and what the behavior should be. And the agent automatically starts figuring out what files are relevant here. So it did a few lookups; I don't know why it's hitting some random errors, but whatever. It's able to figure out the context automatically. I didn't have to specify it myself.
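For reference, the kind of result that prompt is asking for looks roughly like the sketch below. The component and option names here are hypothetical (the talk never shows the plugin's actual source or the exact HTML2Figma signature); it's only meant to make the prompt concrete.

```tsx
import { useState } from "react";
// Hypothetical import: a stand-in for the plugin's real conversion function.
import { htmlToFigma } from "./html-to-figma";

export function ImportControls({ element }: { element: HTMLElement }) {
  const [settingsOpen, setSettingsOpen] = useState(false);
  const [useAutoLayouts, setUseAutoLayouts] = useState(false);

  return (
    <div style={{ position: "relative", display: "inline-flex", gap: 8 }}>
      <button onClick={() => htmlToFigma(element, { useFrames: useAutoLayouts })}>
        Import to Figma
      </button>

      {/* The new settings button next to the import button */}
      <button aria-label="Import settings" onClick={() => setSettingsOpen((o) => !o)}>
        ⚙
      </button>

      {settingsOpen && (
        <div role="dialog" style={{ position: "absolute", top: "100%", right: 0 }}>
          <label>
            <input
              type="checkbox"
              checked={useAutoLayouts}
              onChange={(e) => setUseAutoLayouts(e.target.checked)}
            />
            Use auto layouts
          </label>
        </div>
      )}
    </div>
  );
}
```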

3. Implementing UI and Resolving Errors

Short description:

It started adding the UI and resolving linter errors automatically. I checked in Figma and the settings button and use auto layouts functionality are working fine. Although agents can automate code creation, it's important to supervise them and provide feedback. Always double check for linter errors and provide continuous feedback to ensure desired results.

It started adding the UI, and importantly, what it's doing here too is noticing whenever it gets what it calls linter errors. If the type check or the lint or anything else is not up to standard, it'll automatically try to fix and resolve those things.

So here it says it's already added everything I asked for. Let me just go over to Figma, go to my button, and generate a design. Let's do something else this time, like button variants, and see what we get. Okay, so we've got our settings button. It's not bad; it looks similar, on par with the rest of the UI. And we've got our use auto layouts checkbox in the popover. The fun part is it's generating all kinds of different button variants down here. But did it hook that option up correctly?

So if I look at the call to HTML2Figma, it did not pass our option, but that's okay. Keep in mind that even though agents can automatically create code and iterate on it, you still need to babysit them. I'll show another cool example in a second where I can be even more hands-off in some ways, but you still always need to look over their shoulder. Let's double check here, and we can see: cool, there are no linter errors. There are no problems with this file; the file is green and it already corrected anything it needed to. The big thing to remember with AI is don't try to one-shot everything. Give it feedback: I don't see you passing the correct option to HTML2Figma. Please look again. So again, the main thing to always do is continuously give feedback. And it looks like it's double checking its own work. That is great. It may have missed this update when it was applying the patches. There we go. I wish I had committed earlier so we could see a cleaner diff, but this is a pretty clean diff right here. That's exactly what we wanted to see. Remember, everybody: always give your AI feedback.

4. Providing Feedback and Writing Tests

Short description:

We can give feedback on the UI and iterate to generate different UIs quickly. An alternative approach for stronger guarantees about correctness is to write tests first, then write code and iterate until tests pass. The AI can generate tests and help with writing code, but it's important to manually verify the correctness.

And again, we can give feedback on the UI. I would say most of the time when it generates this UI, it just gives me a nice, clean little cog icon; this time it gave me more of a cog-plus-settings-text button, which I didn't like as much. But that's kind of the cool thing: you can continue to iterate and have it generate different UIs for you all the time. This, in my opinion, is a really cool way to build these basic front-end features faster. If you don't have a design for it and you just want to put something together and try it, this can work really well.

Now let me show you one other technique that I think is really interesting and useful when it comes to getting stronger guarantees about correctness. Because as you saw there, it passed lint, it passed every automatic check the AI was doing, but it still did not do the correct thing. It said it implemented the behavior, but the behavior wasn't there for some reason. Let me show you an alternative approach that can be really useful for getting more guarantees, so it doesn't feel like you're working for the AI, checking whether it did things right; instead, it checks itself automatically. So let's close this and go over to this VApp that I have, where I want to show you another technique that I think is really powerful and useful.

So in this fresh application, I'm going to give it a prompt. I'm just going to say: write me a function that converts Markdown to HTML. Most LLMs cannot one-shot this. So I'm going to show you a trick I use all the time that works really well for situations like this. Let me get some of these other things out of the way for a second, and I'm going to say: write tests first, then write your code, and then iterate on the code until the tests pass. In order to do this, you need to turn YOLO mode on in Cursor; I'll show you that in a second. But basically, it's now writing tests for us. These look good. If these tests pass, that means we have something that does basically what we're looking for. We can always tell the LLM to add tests, manually add our own tests, or use this approach on a test suite that already exists. Here it wrote the Markdown conversion function. It's figuring out my test suite; it figured out my test library automatically. And right now, three tests failed, two passed. Very common. Very common that LLMs can't one-shot this.
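To make the tests-first idea concrete, the generated test file might look something like this. This is a minimal sketch, assuming Vitest and a hypothetical markdownToHtml module; it is not the actual code from the demo.

```ts
import { describe, expect, it } from "vitest";
// Hypothetical module under test; the agent edits this file until the suite passes.
import { markdownToHtml } from "./markdown-to-html";

describe("markdownToHtml", () => {
  it("converts headings", () => {
    expect(markdownToHtml("# Hello")).toBe("<h1>Hello</h1>");
  });

  it("converts bold text", () => {
    expect(markdownToHtml("**bold**")).toBe("<p><strong>bold</strong></p>");
  });

  it("converts links", () => {
    expect(markdownToHtml("[site](https://example.com)")).toBe(
      '<p><a href="https://example.com">site</a></p>'
    );
  });
});
```

The agent then runs the suite, reads the failures, edits only the implementation, and repeats until everything is green.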

5. Tweaking Code and Building on Test Suite

Short description:

The AI iterates on the code until the tests pass, providing guarantees of correct behavior. Using the right model, like Claude 3.5 Sonnet, is important for optimal results. Building on the test suite allows adding and verifying new code.

But what you see happening now is that it's tweaking the code until it can get the tests to pass. So now it did worse in that run, and that's not uncommon. Sometimes the LLM will rethink its approach to better handle the tests, and sometimes more tests break before they get better. I think that's pretty common for humans as well. But it'll continue to run the tests.

Okay, now we're back to three failed, two passed. And again, no hands. I am doing nothing right now. The AI is doing all the work and iterating until those tests pass. And this time when the tests pass, we have certain guarantees that the behavior works as expected or as desired. I don't have to manually test it each time. So now we're down to only two failures. It's still working. And this is, in my opinion, a true agentic flow.

You'll find a lot of people these days who say, oh, we have an agent for this, check out our new agent feature. A true agent is when an LLM is on a loop: it does something, it evaluates its work, it refines and continues. And by the way, in my experience, it is ideal to use Claude 3.5 Sonnet for this. Interesting, it looks like Cursor has a "default" model selected; I don't even know what default means, I hope it's Claude 3.5 Sonnet. Claude 3.7 Sonnet is sometimes too clever and will start updating tests to pass even though they really shouldn't; it'll change the test code so they pass anyway, which is not my favorite. Here I must have written particularly hard tests, because usually I find this passes after just a couple of iterations with Cursor and Claude 3.5 Sonnet. And a cool thing you can do too is build on the test suite forever: you can continue to add code, add tests, and make sure any new code passes all the tests. Let's give it another minute.
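That "LLM on a loop" shape can be sketched in a few lines. Everything below is a rough illustration under stated assumptions: askModel and applyPatch are made-up placeholders, not Cursor's internals, and the test command is assumed to be Vitest.

```ts
import { execSync } from "node:child_process";

// Placeholder stubs: wire these to your model provider and your editing
// mechanism of choice. They are hypothetical, not a real API.
async function askModel(prompt: string): Promise<string> {
  throw new Error("Connect this to an LLM API");
}
function applyPatch(patch: string): void {
  throw new Error("Apply the model's proposed edit to the working tree");
}

export async function agentLoop(maxIterations = 10): Promise<void> {
  for (let i = 0; i < maxIterations; i++) {
    try {
      // Evaluate: run the test suite against the current code.
      execSync("npx vitest run", { stdio: "pipe" });
      console.log("All tests pass. Done.");
      return;
    } catch (err: any) {
      // Refine: feed the failures back to the model and apply its fix,
      // explicitly telling it not to edit the tests themselves.
      const failures = String(err.stdout ?? err.message);
      const patch = await askModel(
        `These tests are failing:\n${failures}\nUpdate the implementation (not the tests) so they pass.`
      );
      applyPatch(patch);
    }
  }
  console.log("Max iterations reached; time for a human to step in.");
}
```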

6. Monitoring and Guiding Agents

Short description:

Agents can continue working until they are certain of completion. However, it's important to monitor and intervene if they go off track. Treating AI tools like coworkers and providing guidance can lead to better results.

As you can see, though, agents can just continue to work until they are certain that things have finished and passed, and then it's done. So a lot of times I'll set this running, go off and live my life, do my thing, and come back and check on it, or more likely answer emails and such. But keep one thing in mind: you do still have to babysit these LLMs. Sometimes they are truly off track and you need to intervene, stop them, and give them feedback. It's not at all uncommon that I have to stop and say, hold on, you're going down the wrong path.

Working with an agent, in my experience, is a lot like working with a junior developer or a colleague, especially a new hire. There are times when you need to just give them guidance. And so while we didn't see a perfect run this time, this is the expectation you should have. You'll always want to keep using your development skills to look over what went wrong and point out, hey, here's a different approach you might want to take. Generally with AI tools, I find that if you can treat them like an employee, like someone else you'd be working with, a coworker, and, especially where needed, give them a detailed, thought-out plan (here's how I'd suggest implementing this), it can go a very long way. I would not give up if your first try isn't a perfect result. I gave it quite a vague prompt; just try again anytime you want.

7. Connecting to Other Tools and AI Prompting

Short description:

You can connect to other tools like Figma and use the Builder.io plugin to convert designs into code. The Builder.io visual editor allows for code editing and the AI can be prompted to create tables. Visual editing with code is possible by passing in additional context.

Now the next thing I want to show is what I promised a moment ago. This is all great, and you can do a lot of cool stuff with text in, text out. But what if you want to connect to another tool that's not pure text, like coming back to Figma? Figma is not pure text, and Claude and other tools are not usually so great at handling raw JSON data. If I just gave it the raw Figma JSON data, it's not going to convert that to code very effectively. So let me show you what you can do instead.

So here I'll actually use the production version this time so we can launch into the Builder.io plugin. And here's a design that I have. Actually, I'm going to use a slightly different design for my demo. I want to use this one because it shows a design that is actually incomplete; we'll get back to that in a second. From the Builder plugin I'm just going to hit this export to code button. Under the hood, we use a combination of state-of-the-art models and self-trained models to convert this into code. When I'm ready, I'll launch into Builder and then we can take this one step further. Once we're in code land, which is the language LLMs speak, this is where we can start doing some really cool things. So what you see now, which should feel like a Figma-style interface, is the Builder.io visual editor, but it is actually editing code.

So I'm going to go in and add a little padding here. I think it looks better. I'll hide this on mobile. There we go. Not bad. But as you saw, we don't have a table here. So instead of having to create that manually, what we can do is prompt the AI to do that. So I'm going to hop over here and say, replace this with a table of our users. Let me make sure I'm in the correct space. I am. Good. Okay, this should work as expected. So under the hood, because this is all code, now we can visually edit this with code. You can pass in additional context.

8. Generating Code and Customizing Functionality

Short description:

The AI can add missing functionality and generate code based on design elements. Visual edits can be made, such as sorting options and making buttons work. It allows for rapid prototyping and testing, even by non-developers. Code generation is available for different frameworks with customizable options.

What I did is give it just a really simple example of how our APIs work, not even full documentation or anything. I just said, hey, we've got this users API, it takes these params, here's the output, and it can figure out how to stitch it all together. Because this is in code, where my components live, where my design tokens live, et cetera, and it came from Figma, when it generates something or adds functionality to something, it's going to do it in a way that is correct for my own code base and the style in which we do these things.
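The context can be as small as a snippet like this. The endpoint, parameters, and response shape below are hypothetical; they just show the kind of example you would paste in.

```ts
// Example usage of our users API, pasted in as context for the AI.
// GET /api/users?sort=name&order=asc&page=1&limit=20  (hypothetical endpoint)
export type User = { id: string; name: string; email: string; role: string };

export async function fetchUsers(
  sort: "name" | "email" | "role" = "name",
  page = 1
): Promise<User[]> {
  const res = await fetch(`/api/users?sort=${sort}&order=asc&page=${page}&limit=20`);
  if (!res.ok) throw new Error(`Users API failed with status ${res.status}`);
  const body = (await res.json()) as { users: User[] };
  return body.users;
}
```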

What's cool is that while I can fill in things that are missing and add things based on whatever I ask, it can also add full-on functionality. There we go, there's a table. I paused it for a second because it was taking a moment to generate, but here I can go in and continue to make visual edits. What's cool is I can click on those different column headers to sort by different things; here I'm going around and clicking to sort by the different options. But I notice we also have these sort-by options up here that aren't doing anything. So let's go over here and type in my favorite prompt: make these buttons work. Oops, let's make sure we have both of them selected. Bam. And now it just goes in and adds the functionality. We can take our design to a working prototype, or a working basic version, in just a couple of seconds, and let the AI do the basic work. We can get prototyping, we can have people who are not even developers on the team start testing to make sure these interactions feel right, or the functionality feels good, and get it to a point where we actually feel like it has the basics the way we need them. And here we can test it out. Cool. That looks beautiful. We can keep doing this to fill in our different pagination buttons, and we can continue to visually edit; maybe I want to go in here and add a little bit of space, do whatever you want. The world is your oyster here. Then, when we're ready, all we've got to do is go over to the develop tab and we can get code. We can get code across a ton of different front-end frameworks, plus styling options and languages, and we could have it automatically generate tests, et cetera. Here it's thinking through a plan and starting to implement. Beautiful. The beautiful thing about the code it generates is that it'll use my choice of libraries. I've got some custom instructions that say, hey, I like TanStack Query, so use that for all data fetching. It's creating hooks, it's creating the different components.
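For a flavor of what that generated data layer might look like, here's a rough sketch assuming TanStack Query and the hypothetical fetchUsers helper from the earlier snippet; the tool's real output depends on your components and instructions.

```tsx
import { useState } from "react";
import { useQuery } from "@tanstack/react-query";
// Hypothetical module from the sketch above.
import { fetchUsers, type User } from "./users-api";

type SortKey = "name" | "email" | "role";

export function UsersTable() {
  const [sort, setSort] = useState<SortKey>("name");
  // TanStack Query handles caching and refetching when the sort key changes.
  const { data: users = [], isLoading } = useQuery({
    queryKey: ["users", sort],
    queryFn: () => fetchUsers(sort),
  });

  if (isLoading) return <p>Loading users...</p>;

  return (
    <table>
      <thead>
        <tr>
          {/* Clicking a column header re-sorts by that column */}
          {(["name", "email", "role"] as const).map((col) => (
            <th key={col} onClick={() => setSort(col)}>
              {col}
            </th>
          ))}
        </tr>
      </thead>
      <tbody>
        {users.map((u: User) => (
          <tr key={u.id}>
            <td>{u.name}</td>
            <td>{u.email}</td>
            <td>{u.role}</td>
          </tr>
        ))}
      </tbody>
    </table>
  );
}
```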

9. Working with CLI and Code Base Integration

Short description:

It's reusing components, importing them from the design system and library. Generating code in a CLI and pasting it directly into the code base. The CLI command can automatically figure out the frameworks, libraries, and add the required files.

It's reusing our components when relevant as well, importing components from our design system and library. And then when it's all done generating code, we can just grab a CLI command, paste it, and run it in our code base. This could save you a bunch of time, but I want to bring up one other useful trick that I think will help a lot of folks. Because one thing you might notice is that while here we generated something net new, a lot of the time you actually want to work in updates.

And so let me show one last really cool thing before I run out of time here. So going back over to Figma, let's close out of this stuff and let me show you another really cool workflow. So back in Figma, let's grab this, and I'm going to take this page. I've got a local website that is related. Here's my beautiful localhost 3000 blog, and here is my new contact page I want to add. Let's jump in and let's run the builder plugin, and I'm going to hit export to code just like you saw before. But instead of jumping into the builder UI, I'm going to actually grab this command.

So here's a CLI command. It's a lot like copying from Figma. Then I can just go into my IDE, here it's Cursor, maybe for you it's VS Code or whatever, and run the CLI command: copy from Figma, paste over here. And this time it lands directly inside my code base. It can automatically figure out what frameworks I use; I'm using Next.js, Sass, et cetera. Give it an instruction, like add this contact page, and let it rip. I'm just going to skip doing anything fancy here, like adding context. In a similar way to Cursor, it can agentically figure out the context, add the correct files, plan it out, and build what we're looking for, the way we want it.

So it knows we're using React and Tailwind here. It knows we're using the Next.js App Router. It actually knows what libraries we use as well; in fact, you'll be able to see in this code that it automatically adds validation for me based on the validation libraries I'm using. And then once it's done generating, we can just pop in and see the end result right here inside our IDE. If you looked quickly, you saw I was using Zod right there, which is great. I'm using Zod and React Hook Form, or something like that; I forget which library, the AI knows what libraries we're using better than I do sometimes. And cool.
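For flavor, validation wired up with those two libraries typically looks something like this. It's a sketch with a made-up field list, assuming Zod plus React Hook Form; it is not the page generated in the demo.

```tsx
import { useForm } from "react-hook-form";
import { zodResolver } from "@hookform/resolvers/zod";
import { z } from "zod";

// Hypothetical contact-form schema; the real generated page's fields differ.
const contactSchema = z.object({
  name: z.string().min(1, "Name is required"),
  email: z.string().email("Enter a valid email"),
  message: z.string().min(10, "Tell us a little more"),
});

type ContactForm = z.infer<typeof contactSchema>;

export function ContactPage() {
  const {
    register,
    handleSubmit,
    formState: { errors },
  } = useForm<ContactForm>({ resolver: zodResolver(contactSchema) });

  return (
    <form onSubmit={handleSubmit((data) => console.log("submit", data))}>
      <input placeholder="Name" {...register("name")} />
      {errors.name && <p>{errors.name.message}</p>}

      <input placeholder="Email" {...register("email")} />
      {errors.email && <p>{errors.email.message}</p>}

      <textarea placeholder="Message" {...register("message")} />
      {errors.message && <p>{errors.message.message}</p>}

      <button type="submit">Send</button>
    </form>
  );
}
```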

10. Updating Designs and Parallel Development

Short description:

After generating the code, the CLI command can be used to update the contact page with the new design changes automatically, leaving the rest of the code untouched. This allows for parallel development and easy integration of design updates. The process involves having a draft design, implementing it, and then incorporating any subsequent changes or user feedback. The Builder tool can be used to sync designs from code, allowing for visual prototyping with the team and seamless updates as designs change.

When it's all done, I can just pop over. I can see I've got a new contact link in the header, so it figured that out automatically. And boom, there's a new contact page. It's beautiful. Getting a brand new design in is great, but what if the design gets updated? Let me show you what we can do for that.

I'm going to exit out of here. I'm just going to save, commit it, and let's pretend our design updated. Let's say we're going to remove a couple of these fields, and I'm going to select a couple things and we're just going to use some blue highlights now. Maybe this and this. We're going to change these selection colors to primary blue. Okay. A little more fun blue accents.

If I want to bring those design changes in, all I've got to do is hit Export to Code again. When it's done breaking the design down, I'm just going to grab the CLI command. Same thing again, it's a lot like copy and paste from Figma. I run the CLI, and this time I just say: update the contact page with the updated styles, and then let it rock and roll.

The cool part is it'll figure out automatically what should change. It'll leave the rest untouched. And this opens up some interesting opportunities that we call parallel development. If we want to move faster as a team, one thing that could be great is just have a draft of a design. It doesn't have to be perfect and final. We're not trying to do waterfall development here. And then go and start implementing.

And if the design changes, that's fine. We implemented the original design in code and added the functionality. If the design moved on, or we got user feedback, et cetera, we can just work that in any time. And so here we can take a look at our diff. Great, we've got our updated colors. Let's go back over to the index. You can see we're updating the colors that we wanted, including dark mode styles here too, so it's able to infer that automatically without changing anything else, as well as removing our form fields, et cetera, to match the design updates that we had. So here we can just commit the diff and move on. It doesn't have to be a net new thing every time. Anyway, there are a million more things we could show here, but I hope this was a helpful overview of interesting tricks I'd recommend trying: Cursor and Cursor agents, not just for adding UIs based off of specifications you have, but turning on YOLO mode, giving it specific allowances, like only being allowed to run tests automatically, and having it iterate until tests pass; then using tools like Builder to sync designs to code, not just once. You can prototype visually with your whole team as well, and then use a CLI that connects and pulls updates down every time the design changes. Try these out in your next workflow and let me know how it goes.
