How Windsurf Breaks Through the Ceiling for Retrieval


FAQ

Windsurf is a newly released AI-native code editor designed to enhance productivity by integrating AI agents into software development workflows.

The main goal of Windsurf is to keep developers in the flow of their work, allowing the AI to handle repetitive tasks and grunt work, thereby unlocking developers' limitless potential.

Windsurf improves productivity by using an agent called Cascade, which can generate and refactor code, run terminal commands, search the web for documentation, and automate many other tasks.

Key features of Windsurf include code generation and refactoring, automatic terminal command execution, live data search, and the ability to use edit history and context to improve coding efficiency.

The 'Cascade' agent in Windsurf is an AI tool that assists developers by generating code, running commands, and automating tasks to enhance productivity in the coding environment.

Windsurf uses 'trajectories' to understand user actions and predict future needs, allowing the agent to assist in tasks like continuing work, running commands, and managing files automatically.

'Meta-learning' in Windsurf refers to the AI's ability to remember user preferences and codebase specifics over time, allowing it to adapt and provide personalized assistance.

Windsurf is built to scale with advancements in LLM technology, ensuring that its AI agent remains effective and can leverage the latest improvements to enhance user productivity.

Users can experience Windsurf by visiting windsurf.ai, where a generous free tier is available for them to try the product and explore its features.

By the end of 2025, Windsurf envisions eliminating the need for explicit rules files, with AI inferring user preferences, and aims for the AI to handle even more complex tasks like generating full PRs and implementing design files.

Kevin Hou
22 min
27 Mar, 2025

Video Summary and Transcription
Hello, Productivity Conference. We are Windsurf, a brand new AI-native code editor. Today, I'm going to show you how the product works, the guiding principles behind its development, and some tips and tricks on using AI in development workflows. Our agent is a powerful tool that abstracts away grunt work, letting developers focus on building and shipping great products. It performs background research, predicts next steps, and automates decision-making. Windsurf integrates deeply into the application, understanding what you're doing and providing tools to achieve your goals. The agent can remember instructions and behave like an extension of yourself. Building for the future of intelligence, Windsurf aims to improve productivity and revolutionize coding with AI agents.

1. Introduction to Windsurf

Short description:

Hello, Productivity Conference. We are Windsurf, a brand new AI-native code editor. Today, I'm going to show you how the product works, the guiding principles behind its development, and some tips and tricks on using AI in development workflows. Agents are the future of software development. We built Codeium, an autocomplete extension, and quickly garnered over a million downloads. We believe that 2025 is the year of agents, and we released Windsurf, an agentic LLM system, to generate and refactor code for you. Here is a screenshot of Windsurf, showcasing its capabilities.

Hello, Productivity Conference. We are Windsurf, a brand new AI-native code editor released a few months ago. Today, I'm going to be showing you how the product works, some of the guiding principles behind how we built it, and most importantly, what you are all here for: some tips and tricks on how you can use AI in your development workflows to become more productive.

So let's dive right into it. For the next 15 minutes, I'm going to talk a little bit about how we built the best coding agent on the market. My name is Kevin Hou. I lead our product engineering team here at Windsurf, and we are out here in the beautiful Bay Area delivering this presentation remotely. I just want to say thank you to the Productivity Conference team for inviting us to this conference. We're really excited to be a part of this journey, and I'm going to spend some time talking about some of the principles that our team lives and breathes in order to build a product like the Windsurf editor.

So we believe that agents are the future of software development. We are very bullish on the state of AI, and as you all can probably see and relate to, every day the news is inundated with new models, new practices, and new tools to accelerate all sorts of professions, but notably software developers. So to start, I'm going to take you on a trip down memory lane to 2022. This is when Copilot was just released, and it was kind of the state of the art of what technology and AI could do for coders. We started to see ghost text; we started to see Copilot completing your functions and cleaning up your utils. It was one of the first times that developers got to experience the magic of AI while they were coding.

We, like many others, were really excited about this technology, and we decided to build our very first product, called Codeium. This was an autocomplete extension that lived on VS Code, JetBrains, Vim, and 30 other IDEs, and we were one of the first to market to bring the autocomplete experience to the masses. We quickly garnered over a million downloads, and it was a very successful time for the company, but we always knew that we were going to be building more and more. We predicted that intelligence was going to get faster and cheaper, models were going to get bigger, and paradigms were going to change, and as we've seen in the news, things like DeepSeek and reasoning models like o1 have been contributing to the rapid pace of software development in the last couple of months. We always want to give our users the best possible experience, and what this meant at the time was building the best, fastest autocomplete product, but over time we knew that we could do more and more work for the user, and we had our eye on agents. We predicted that copy-pasting from ChatGPT would go away. We also hypothesized that these agents, these autonomous software development machines, would be able to do more and more for our devs and for our users. So we believe that 2025 is the year of agents, and we released Windsurf at the end of 2024, capitalizing on this brand new technology: an agentic LLM system. Windsurf has been pushing the bleeding edge of what is possible with LLMs, specifically to create an agent that can work on your behalf. It can generate code for you, it can refactor code for you, and we're going to get into what exactly that means. Here is a screenshot of Windsurf.
You can see here that there is an agent on the right side, we call this Cascade, and it's actually building a Python web scraper, and you can see that with just a couple prompts we're able to generate a full file. It'll install dependencies, run your virtual environment, run commands inside of your terminal, and then actually write you the code and allow you to accept or reject that code, and finally actually run that code on a sample website. It gives us some stats about the website in question.
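The generated scraper itself isn't shown in the transcript. As a rough, self-contained sketch of the kind of script described, here is a toy scraper using only the standard library, with a hard-coded HTML string standing in for a live fetch (a real version would download the page first); the "stats" it reports are illustrative:

```python
# Minimal sketch of the kind of Python web scraper described in the demo.
# The HTML below is a stand-in for a fetched page; the stats mirror the
# sort of summary the agent printed at the end of the demo.
from html.parser import HTMLParser

class LinkAndTitleParser(HTMLParser):
    """Collects the page title and all outbound link targets."""
    def __init__(self):
        super().__init__()
        self.title = ""
        self.links = []
        self._in_title = False

    def handle_starttag(self, tag, attrs):
        if tag == "title":
            self._in_title = True
        elif tag == "a":
            href = dict(attrs).get("href")
            if href:
                self.links.append(href)

    def handle_endtag(self, tag):
        if tag == "title":
            self._in_title = False

    def handle_data(self, data):
        if self._in_title:
            self.title += data

def page_stats(html: str) -> dict:
    """Return simple stats about one page of HTML."""
    parser = LinkAndTitleParser()
    parser.feed(html)
    return {"title": parser.title, "link_count": len(parser.links)}

sample = ('<html><head><title>Example</title></head>'
          '<body><a href="/a">a</a><a href="/b">b</a></body></html>')
stats = page_stats(sample)
```

In the demo, of course, the point is that Cascade writes, runs, and iterates on a script like this itself, inside your own virtual environment.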

2. The Power of the Agent

Short description:

Our agent can control the editor, look up documentation, search the web, grab code, run terminal commands, and more. It's a powerful tool that abstracts away grunt work, making developers focus on building and shipping great products. We aim to minimize user interaction with the agent while still providing a sense of control. The agent performs background research, predicts next steps, and automates decision-making. Windsurf has already written billions of lines of code and is constantly improving. We've received thousands of messages and the agent is actively used by many developers.

But you can see there are a number of things here that our agent is doing on behalf of a user like yourself. It's able to control the editor in the ways that you would control it, in order to make you as efficient and as productive as possible. Some other ways that you can use this tool: it can look up documentation, and it has the ability to search the web for you automatically. So if you add a query that uses live data, or you want to look up a specific version of documentation, it's able to actually go to the internet, look up that live data, and use it as context. It can do things like grep through the code base, run things in your terminal, and open up files and use your edit history as context. The list goes on and on. Overall, it's an incredibly powerful and flexible tool that makes developers so much more productive. And this might seem like a bunch of random features, but I'm going to walk you through generally how we think about what we are going to build. Windsurf is an incredibly powerful tool, but we could be building a lot of different things; we could go in a lot of different directions. So we want to make sure that as a team, we're always going back to our mission, which is to keep you in the flow so that you can unlock your limitless potential. What this means is we want the AI to handle grunt work for you. Things like debugging your stack traces, looking up your original source code, and pulling correct versions of documentation should all be abstracted away from you. The LLM and the agent should make you more productive and let you focus on the things that you are uniquely good at: building and shipping great products.

So with our goal in mind, how do we know that what we're building is actually working? To us, it's a game of input and output: how much explicit input does the user, you or I, have to type into the prompt to yield the most correct, production-ready code as the output? We do this by reducing the amount of human in the loop. We want you to interact with the agent as minimally as possible, but just enough so that you feel like you're in control and can course-correct it if necessary. This means that the agent does a lot of things like background research. It can predict your next step, or your next couple of steps. And it can also automate some of the decision-making: if you're running a terminal command that we have high confidence in, we should just run that for you automatically so that you don't have to. This might seem like a far-off future, but it is actually happening today. Windsurf was released a couple of months ago, and we've already written 4.5 billion lines of code with Cascade, which is the agent inside of Windsurf. That's an astonishing amount of code that we've written for our users. Even since I started this presentation, we've had thousands of messages sent; to be precise, 670 Cascade messages are sent every single minute. And a fun stat that I like to share, because we're all engineers here: we've been woken up 16 times by PagerDuty telling us that we've hit our capacity. These are problems of success.

3. Principles and Trajectories

Short description:

We have been one of the largest consumers of LLMs and have gained control over the service's capacity. We integrate our agent deeply into the application, understanding what you're doing and giving you tools to achieve your goals. The agent can run your terminal commands and provide an elegant user experience for accepting or rejecting changes. Trajectories are a key principle, where the agent constantly works in the background to understand and contribute to your actions. This shared timeline allows seamless collaboration between you and the agent. An example of trajectories in action is shown with the addition of a new React form handler.

We have been one of the largest consumers of LLMs, and unfortunately that sometimes means there's not enough capacity to run the service. We've gotten this more and more under control as time has gone by, but thank you to all of our users who helped us become kind of an overnight success in the IDE space.

So I'm going to talk through now the meat of the presentation, which is the principles. How do we actually approach building an agent like this? With this mission and metric in mind, let's talk a little bit about some of the principles that the team uses. The first principle is called trajectories. This, in short, means we try to read your mind: Windsurf tries to read your mind. Unlike the other editors on the market (there are a handful of other AI editors, like Cursor or VS Code with Copilot), we integrate our agent incredibly deeply into the application. What this means is that on one half, we are understanding what you are doing, and on the other half, we're giving the agent tools and the ability to actually do the things that you would do.

This means understanding what files you've most recently looked at and what your most recent edits are, all the way to actually editing the files and looking things up inside the search bar. This has led to really powerful features, for example, the ability to simply type "continue my work" and have the agent use the context from what it's learned about your most recent actions to go out and finish what you started. This also means that we can do things like run your terminal for you automatically. For example, if it needs a dependency, it can just run npm install for you, and sometimes you don't even need to explicitly allow it to do this: it will infer that this is a safe command to run automatically. We also surface a very elegant user experience for you to accept and reject changes, so that you can stay in the loop but review what the agent is doing incredibly quickly. This is how a trajectory more or less works: the agent is constantly doing work in the background to understand what you are doing and then also contribute to that.

So in this animation, you're seeing that if the user first navigates, the AI will use that to start doing research, and it might make edits on your behalf. We are storing things in a kind of centralized timeline, and we call this the trajectory. So as you're making changes and edits, the agent also lives on that same timeline and has complete awareness and context over what you have most recently done. You can imagine this shared timeline is how we are able to flow back and forth between things that you contribute and things that the agent contributes, so that context is cohesive and unified across those two fronts. Here is an example of trajectories in action. The agentic experience in Windsurf is shown on the right. On the left side, the user just added a new function; in this case, we're adding a new React form handler.

4. Trajectories and Meta-Learning

Short description:

The agent can pick up recent changes and refactor files based on user input. Terminal commands can be recognized and used by the agent to implement dependencies. The future of trajectories involves seamless integration between the terminal and the chat. Windsurf aims to keep the agent in harmony with your working environment. The meta-learning principle allows Windsurf to adapt and remember things about your codebase and preferences.

And you can see that this got added to the trajectory. The agent was able to pick up the most recent changes. And simply by saying, continue my work, we can now refactor the rest of that file and neighboring files to then use the code that the user just wrote. And so in this way, we're handing off back and forth this notion of a unified timeline and a trajectory. We're handing back and forth between what the agent can do for you and what the user does.
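The shared-timeline idea above might be modeled, very loosely, like this. This is an illustrative sketch, not Windsurf's actual internals; the event names and schema are invented for the example:

```python
# Sketch of a "shared trajectory": user actions and agent actions live on
# one ordered timeline, so either side can read the other's recent context
# before acting. Event names here are illustrative, not a real schema.
from dataclasses import dataclass, field
from itertools import count

@dataclass
class Trajectory:
    events: list = field(default_factory=list)
    _clock: count = field(default_factory=count)

    def record(self, actor: str, action: str, detail: str):
        """Append an event from either the user or the agent."""
        self.events.append((next(self._clock), actor, action, detail))

    def recent_context(self, n: int = 3):
        """What the agent reads before acting: the last n events, any actor."""
        return self.events[-n:]

traj = Trajectory()
traj.record("user", "open_file", "form.tsx")
traj.record("user", "edit", "add handleSubmit()")
traj.record("agent", "research", "find callers of handleSubmit")
traj.record("agent", "edit", "wire handleSubmit into neighboring files")
```

Because both actors append to the same ordered log, a prompt like "continue my work" can be answered from `recent_context()` alone, without the user restating what they just did.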

Another example of this is in the terminal. If you were to run a series of commands in the terminal, for example if you pip installed or npm installed a dependency, we'd be able to pick that up, and the agent would be able to say: OK, you probably want to implement that dependency inside of your project. And you can see here, after you run a set of terminal commands, we have context over that, and we're able to use it when the agent goes off and does its work. We believe there is a future where your terminal, your context, and your edits are all part of this unified trajectory, where you don't need to copy paste between the terminal and the chat, and you never need to copy paste between a file and the chat. We believe that the future is going to be automatic like this, and we certainly don't believe that the future is things like @terminal mentions.
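Picking up installs from terminal history, as described above, could look something like the following sketch. The parsing rules are deliberately simplified (real shells, flags, and package managers are much messier), and the function name is invented for illustration:

```python
# Sketch of inferring dependency context from terminal history: after the
# user runs "npm install X" or "pip install X", the agent can treat X as
# context for its next edits. Simplified parsing; flags are ignored.
import re

INSTALL_RE = re.compile(r"^(?:npm install|pip install)\s+(?!-)(\S+)")

def installed_packages(history: list[str]) -> list[str]:
    """Extract package names from install commands in shell history."""
    found = []
    for line in history:
        m = INSTALL_RE.match(line.strip())
        if m:
            found.append(m.group(1))
    return found

history = ["ls src", "npm install zod", "git status", "pip install requests"]
```

With this, a later agent step can see that `zod` and `requests` were just installed and wire them into the project without the user copy-pasting anything from the terminal into the chat.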

Here's another example of a trajectory where we are trying to abstract away decisions for you. In this case, we are running Python commands on your behalf inside of your terminal. What we do is actually run the command inside your native terminal, so that we can decrease the distance between the environment you work in and the environment the agent works in. In this way, we can leverage your virtual environment, your dependency caches, and things of that nature. The whole goal of this is to keep the agent as close in harmony with the way that you do your work as possible. We also believe that developers are here to stay. Windsurf will be able to think more and more into the future. We believe that the future of trajectories is that Windsurf will be reading your mind not just one or two steps into the future, but 10 to 20 steps. It'll be doing things like writing unit tests before you've even finished the original function, or performing codebase-wide refactors after just a couple of lines of changes inside the editor. So trajectories is a very powerful concept, and this is what allows users to feel like they're in seamless harmony with Windsurf.

The second principle is meta-learning. So even if Windsurf understands what you're doing in the moment, there is kind of an inferred understanding of your code base, of your preferences, and some of your organizational guidelines that, say, a senior engineer on your team would be able to pick up over time. We call this concept meta-learnings. Windsurf can adapt and remember things about you just with usage. So if you think about a Frontier LLM, the analogy that we like to use at the company is Frontier LLMs are the smartest engineers you know, but they have no exposure to how you or your company writes code.

5. Building Features and Auto-generated Memories

Short description:

The agent can remember specific details about how you work, such as preferred dependencies and terminal commands. Auto-generated memories and custom tools via MCP protocol enhance the agent's capabilities. The goal is for the agent to seamlessly remember instructions and behave like an extension of yourself. The future of meta-learning involves inferring tendencies from code and usage, creating personalized experiences for each user.

They're able to write an immense amount of code very, very quickly. But how do you get that to actually integrate well into your code base? So this means we need to build up features and infrastructure to allow the agent to sort of remember how you specifically work.

We built things like auto-generated memories. For example, if you tell the agent, "remember that I use Tailwind version 4," it will be able to remember that into the future. We implemented custom tools via the MCP protocol. For those of you that don't know, this is a way for you to link, say, your Figma account to the agent, and all of a sudden the agent can read your Figma files and then make changes on your behalf. We also allow you to whitelist or blacklist certain terminal commands.

So if you always want the agent to be able to install dependencies on your behalf, you can whitelist those commands, or you can add rules like "never rm -rf my code base." If you think about what makes a developer very effective, they are able to remember these lessons and guides after you've told them just once. So Windsurf also needs to model this behavior if we ever hope for AI agents and tools in general to write and maintain full projects for us.
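The whitelist/blacklist idea can be sketched as a tiny policy function. The lists below are examples only, not Windsurf's defaults, and the three-way outcome (auto-run, block, ask the user) is an assumption about how such a policy would plug into the accept/reject flow described earlier:

```python
# Sketch of a command policy: whitelisted commands run automatically,
# blacklisted ones are always blocked, everything else asks the user.
# The specific lists are illustrative examples, not real defaults.
ALLOW_PREFIXES = ("npm install", "pip install", "pytest")
DENY_SUBSTRINGS = ("rm -rf",)

def decide(command: str) -> str:
    """Classify a terminal command the agent wants to run."""
    if any(bad in command for bad in DENY_SUBSTRINGS):
        return "block"          # never run, even if the agent asks
    if any(command.startswith(ok) for ok in ALLOW_PREFIXES):
        return "auto-run"       # safe enough to run without a prompt
    return "ask-user"           # surface it for explicit approval
```

The deny check runs first on purpose: a destructive command should be blocked even if it happens to share a prefix with an allowed one.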

So in the short term, this means that you shouldn't need to prompt an agent multiple times to remember the same thing. You should just be able to tell it once and have it remember. The agent should feel like a seamless extension of yourself. And we have a saying at the company, ideas are cheap.

Here is an example of the auto-generated memories I've been talking about, sort of in action. Here you can see that we're investigating this code base; it looks like it has a bunch of endpoints. Windsurf is going out and analyzing those endpoints and creating almost like a memory of the technical architecture, so that the next time you ask a question about the code base, it doesn't have to go through and do this research again. It can remember, and maybe answer your question in one shot. So this is just one example of memories in action.
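At its simplest, a memory bank like the one described is a persistent key-value store of facts learned once (from a prompt or from codebase analysis) and replayed into future context. This is a loose sketch under that assumption; the class and topics are invented for illustration:

```python
# Sketch of auto-generated memories: facts learned once are stored and
# recalled later, so the agent doesn't redo the same research. The schema
# here (topic -> fact string) is illustrative only.
class MemoryBank:
    def __init__(self):
        self._memories: dict[str, str] = {}

    def remember(self, topic: str, fact: str):
        """Store a fact, e.g. from 'remember that I use Tailwind v4'."""
        self._memories[topic] = fact

    def recall(self, topic: str):
        """Return the stored fact, or None if nothing was learned yet."""
        return self._memories.get(topic)

    def as_context(self) -> str:
        """Render all memories as a block to prepend to a prompt."""
        return "\n".join(f"- {t}: {f}" for t, f in self._memories.items())

bank = MemoryBank()
bank.remember("css", "project uses Tailwind v4")
bank.remember("api", "endpoints live under src/routes, REST + JSON")
```

The `as_context` step is where the payoff happens: instead of re-deriving the architecture, the agent can answer a question about the endpoints in one shot.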

What is the future of meta-learning? We believe in a future where all of your tendencies are inferred from the code base and from your usage. Auto-generated memories are a step in that direction, but ultimately, having things explicitly defined in, say, a rules file is a crutch. In the long term, we believe that agents will continue to build up this notion of understanding, this notion of memory banks, and be able to effectively utilize them, so that every single downloaded instance of Windsurf will feel very personalized to the user that is using it. And we're going to continue investing towards that future.

6. Building Scalable and Intelligent Agents

Short description:

Agents will continue to build up an understanding and memory banks to provide personalized experiences. The third principle is scale with intelligence, ensuring tools keep pace with the trend of intelligence. Building a resilient product that can inherit learnings from LLMs is crucial. In 2022, ChatGPT was limited by its intelligence.

And finally, the third principle that we use to build the Windsurf tool is the idea of scale with intelligence. So Windsurf, as we've now covered, understands what you're doing in the moment (trajectories) and is able to improve over time (meta-learning). But how do we build an agent that can scale with the pace of LLMs? You're at this conference because you want to know how to enhance your workflows and be more productive using AI, and we want to make sure that the tools we're building are scaling with that trend of intelligence.

So we might have given you a great tool today, but how do we know that if the LLMs were to improve by 50%, by 100%, by 3x, the product is actually going to be resilient to that, and that in a year we'll be able to inherit the learnings from those LLMs? So to take you back, we're going to go back to 2022, when ChatGPT first came out. Our imaginations were really running wild. You were thinking about all the different workflows that you could improve or accelerate. But at the time, the models were, quite frankly, a little bit too dumb to be able to accomplish a lot of these things.

7. Building for the Future of Intelligence

Short description:

Building infrastructure to compensate for weaknesses. Deleted chat and replaced it with an agent. Agent uses tools to find information and take action. Building for the future of intelligence. Architected to inherit gains from improving models.

And so what did we do? We ended up building up a lot of infrastructure to be able to compensate for some of those weaknesses. An example of a weakness is retrieval. We decided to build things like embedding indexes, search repositories, retrieval heuristics, and output validating systems, all in an effort to crutch the model into being able to retrieve the best context for your query. This was really helpful for squeezing the performance out of these models, that last little bit. But ultimately, all this development was for a fixed notion of intelligence.
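The retrieval "crutches" mentioned above (embedding indexes plus ranking heuristics) boil down to scoring code chunks against a query and returning the best matches. As a toy sketch, here is that shape with a bag-of-words embedding and cosine similarity; real systems use learned embeddings, and nothing here is Windsurf's actual implementation:

```python
# Toy sketch of embedding-index retrieval: embed the query and each code
# chunk, rank by cosine similarity, return the top k. The bag-of-words
# "embedding" stands in for a real learned embedding model.
import math
from collections import Counter

def embed(text: str) -> Counter:
    """Bag-of-words vector: token -> count."""
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query: str, chunks: list[str], k: int = 1) -> list[str]:
    """Return the k chunks most similar to the query."""
    q = embed(query)
    ranked = sorted(chunks, key=lambda c: cosine(q, embed(c)), reverse=True)
    return ranked[:k]

chunks = [
    "def parse_config(path): read yaml settings",
    "class UserSession: login and logout handling",
    "def render_chart(data): draw the dashboard chart",
]
```

The talk's point is that this whole layer was built for a fixed level of model intelligence; as models got better at finding context themselves, pipelines like this became less load-bearing.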

Now we've seen very clearly that models have been improving very rapidly, and every week there seems to be a new breakthrough. We want to make sure that our product will inherit the learnings from these LLMs, so that you can always be sure you're using the best tool available on the market. A great way that we demonstrated this is that we deleted chat. A chatbot, in a very naive sense, is kind of a legacy paradigm, so we completely replaced chat with an agent. Those are two separate things, and this is really what drives why Windsurf is unique.

So here we deleted chat, and some users don't even know that we deleted chat in favor of an agent; they just know that they're getting a better experience. So what does this mean? An agent is able to use the tools at its disposal to find information and to take actions. For example, if you asked it a question about a file, it would first look at that file, then respond accordingly. An example of a crutch that chatbots used to handle this case was @-mentions. In chat, you could type @ and a file name, actually search for that file, and make sure that the file's context was included in your prompt. Today, Windsurf can automatically infer the relationships between files, and 90% of the time you do not need to type @ for it to understand that you want to pull in that piece of context, search the web, or look up a certain piece of documentation. These are all things that the models have gotten so good at recently that we don't need the crutch of building things like @chat, @file, and @web. The LLM just kind of knows.
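The contrast above (user-supplied @-mentions versus an agent inferring which tool to call) can be sketched as a tiny routing loop. The tools, routing rule, and return strings below are all invented for illustration; real agents route via the LLM itself, not a keyword match:

```python
# Sketch of agent-style tool use: instead of requiring an explicit
# @file/@web tag, infer which tool to call from the question itself.
# Tools and routing here are toy stand-ins for an LLM-driven decision.
def read_file(name: str) -> str:
    return f"<contents of {name}>"

def web_search(query: str) -> str:
    return f"<results for {query}>"

TOOLS = {"file": read_file, "web": web_search}

def agent_step(question: str) -> str:
    """Pick and call a tool: file lookup if a filename appears, else web."""
    for word in question.split():
        if word.endswith((".py", ".ts", ".tsx")):
            return TOOLS["file"](word)
    return TOOLS["web"](question)
```

With an @-mention chatbot, the user performs this routing by hand (`@file utils.py ...`); the agent version moves that decision inside the loop, which is exactly what makes it able to absorb smarter models without UI changes.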

And as a small engineering team, we want to make sure that we're always scaling with intelligence and building for the future, not crutching on the models of today. When you build things like @file and @-mentions, you end up building these crutches, these hard-coded rules for what the model should do, instead of relying on the LLM to be smart enough to understand what to do. In this way, we want to build a product where the user can be as productive and free-flowing as possible. We want to automatically decide what pieces of context to retrieve, whether or not to use tools, and whether or not to edit the file; we don't want the user to have to explicitly specify these things. So how do we build for the future of intelligence? We believe that models will continue to get better and faster, and we want to make sure that our agent can do things like unsupervised work. In the future, it will be able to generate full PRs, consume even the most complex documentation, and do crazy things like implement Figma designs inside of a React app. Windsurf is architected from the ground up to inherit these gains from improving models. And if it doesn't, rest assured, we'll know, and we'll build the next product that can leverage this intelligence most effectively.

8. Improving Productivity with Windsurf

Short description:

Recap of trajectories and meta-learning. Improving productivity with Windsurf. The future of coding with AI agents.

So to recap, we have trajectories, the idea that Windsurf can read your mind, meta-learning, that Windsurf gets personalized to you over time, and three, scale with intelligence. We are agentic to our core. And that is what enables us to improve with model intelligence.

So where are we going with this? Windsurf is already today able to do a number of things that can improve your productivity. It can generate PRs end to end and even type up the PR descriptions for you. So you're going to start typing. You're going to see a suggestion for not just the rest of your file or your function, but for your entire pull request. We're going to be able to start fixing bugs before you've even completed functions and start writing and executing unit tests in a sandbox behind the scenes.

We also believe that things like a rules file are going to be completely dead by the end of 2025, and that 99% of your preferences will be inferred or learned from your usage. We're already seeing users do staggering things to improve their productivity with Windsurf, and it's only going to get better and better. Windsurf is the most advanced editor on the planet; it's already writing 90% of our users' code. And users are really eager to switch over to Windsurf to experience the magic and the productivity improvements that come when you embrace AI agents inside of your software development workflow. We're right in the heart of this AI revolution, and I want to make sure that every single developer is armed with the best tools available on the market. You can experience the magic for yourself at windsurf.ai. We have a generous free tier, and we'd love for you to try it. Thank you for listening, and feel free to send me feedback on the product on Twitter.

Conquering Complexity: Refactoring JavaScript Projects
JSNation 2024JSNation 2024
21 min
Conquering Complexity: Refactoring JavaScript Projects
Today's Talk explores the complexity in code and its impact. It discusses different methods of measuring complexity, such as cyclomatic complexity and cognitive complexity. The importance of understanding and conquering complexity is emphasized, with a demo showcasing complexity in a codebase. The Talk also delves into the need for change and the role of refactoring in dealing with complexity. Tips and techniques for refactoring are shared, including the use of language features and tools to simplify code. Overall, the Talk provides insights into managing and reducing complexity in software development.

Workshops on related topic

Solve 100% Of Your Errors: How to Root Cause Issues Faster With Session Replay
JSNation 2023JSNation 2023
44 min
Solve 100% Of Your Errors: How to Root Cause Issues Faster With Session Replay
WorkshopFree
Ryan Albrecht
Ryan Albrecht
You know that annoying bug? The one that doesn’t show up locally? And no matter how many times you try to recreate the environment you can’t reproduce it? You’ve gone through the breadcrumbs, read through the stack trace, and are now playing detective to piece together support tickets to make sure it’s real.
Join Sentry developer Ryan Albrecht in this talk to learn how developers can use Session Replay - a tool that provides video-like reproductions of user interactions - to identify, reproduce, and resolve errors and performance issues faster (without rolling your head on your keyboard).
How to Create a Web Application in an (Almost) Autonomous Way Using Clean Coder
Productivity Conf for Devs and Tech LeadersProductivity Conf for Devs and Tech Leaders
95 min
How to Create a Web Application in an (Almost) Autonomous Way Using Clean Coder
Workshop
Grigorij Dudnik
Grigorij Dudnik
Imagine replacing yourself with a multi-agent AI programmer to develop your production web application. That's exactly what we did at my startup takzyli.pl. To achieve this, we designed and used the Clean Coder - AI agent framework for autonomous code writing (https://github.com/GregorD1A1/Clean-Coder-AI), which is hopefully open-source project. If it worked for us, why shouldn't it work for you?In this workshop, I'll show you how to create an entire web application in an (almost) autonomous way and drastically reduce the time you or your employees spend on writing code.
Test, Code, Repeat: Mastering AI-Assisted Development
Productivity Conf for Devs and Tech LeadersProductivity Conf for Devs and Tech Leaders
53 min
Test, Code, Repeat: Mastering AI-Assisted Development
Workshop
Marco Pierobon
Marco Pierobon
"Test, Code, Repeat: Master AI-Assisted Development" introduces developers to a transformative way of coding with AI as a collaborative partner. This workshop focuses on how iterative workflows, such as the ping pong pairing technique, enable an enhanced interaction between human creativity and AI efficiency.