ChatGPT Unleashed: Revolutionizing Web Development from Idea to Deployment


In this engaging talk, we will delve into the journey of building, testing, and deploying a web app while examining the capabilities and limitations of ChatGPT in web development. Through the lens of this comprehensive process, we will explore how AI-driven development techniques can contribute to a more efficient workflow and discuss the challenges that may arise. Attendees will gain a deeper understanding of how to effectively incorporate ChatGPT into their projects, focusing on striking the right balance between automation and manual efforts. Join us as we navigate the ever-evolving landscape of web development and investigate the role of AI in shaping our development strategies.

This talk was presented at Productivity Conf for Devs and Tech Leaders. Check out the latest edition of this tech conference.

FAQ

VolQAI is a SaaS startup founded by Nikolai, aiming to make automated software testing fun, effective, easy, and more reliable for software development processes.

Nikolai faced challenges with o3-mini-high in keeping up with rapidly changing technologies, handling API updates, and troubleshooting errors during the setup and deployment process.

The application was functional, though creating it involved a painful process with many errors. It was an improvement over previous attempts but still required significant troubleshooting.

Nikolai found that many automated tests failed despite the LLM having full context of the project. The LLM struggled to write passing tests, a common issue across different LLM versions.

AI has enhanced individual capabilities, reducing the need to hire additional engineers. This has led to fewer new hires rather than replacing existing positions, influencing tech industry hiring trends.

Nikolai emphasized simplicity, clean and intuitive interfaces, mobile responsiveness, accessibility, security, and scalability in his application design.

Nikolai believes AI enhances developer capabilities rather than replacing them, but acknowledges it may reduce the need for hiring new developers.

Nikolai sees AI as a tool that makes every engineer more capable, allowing them to do more with less and focus on complex tasks, thereby enhancing productivity in software engineering and testing.

The presentation focuses on the capabilities of o3-mini-high in building, testing, and deploying web applications.

Nikolai is a Russian Ukrainian Jew, originally born in Uzbekistan, who migrated to the US in 1996. He has been in the automated testing, software engineering, and DevOps industry since 2008, and is the CEO and founder of VolQAI.

Nikolay Advolodkin
22 min
27 Mar, 2025

Video Summary and Transcription
Welcome to ChatGPT Unleashed: Revolutionizing Web Development. I'll share my journey and lessons learned about o3-mini-high. TLDR: Good with slower-changing technologies, poor with rapidly moving ones. Let's test the application by building a vision board. User stories, requirements, acceptance criteria, and design principles were well-executed. Mobile responsiveness, accessibility, security, and scalability are important. The LLM struggled with error handling and installation, but we managed to troubleshoot. The app creation process was painful, but we created a better version. Some end-to-end tests failed. The automated tests didn't account for the challenges and project structure. Continuous integration failed during deployment, but manual deployment worked. The majority of tool installations worked. AI's impact on hiring: current roles remain, but new hires are affected. AI enhances my abilities, reducing the need for hiring. Thank you for your time.

1. Introduction to ChatGPT and My Journey

Short description:

Welcome to ChatGPT Unleashed: Revolutionizing Web Development. I'll share my journey and lessons learned about o3-mini-high. It's good at building, testing, and deploying web applications. I started this journey in 2022 and gave my first presentation in 2023. TLDR: Good with slower-changing technologies, poor with rapidly moving ones.

Hey, everyone. Welcome to ChatGPT Unleashed: Revolutionizing Web Development, From Idea to Deployment. The journey that I went on in this presentation, and that I want to take you on as well with my lessons learned, is: how good is o3-mini-high at building, testing, and deploying a web application? We all know that it has some coding capabilities. We all know that it has reasoning capabilities as well, but what can it actually do, and how well does it do it, when it comes to building some kind of a web application? That's what we're going to figure out today.

So let me tell you a little bit more about myself before we jump into the presentation. My name is Nikolai, originally born in Uzbekistan. I'm a Russian Ukrainian Jew who migrated to the US in 1996. I was raised in the Maryland and Virginia area until I got tired of the snow and the cold, and then I moved to Florida, where I've been residing for about six years now, and I absolutely love the blue skies and sunshine. I've been in the automated testing, software engineering, and DevOps industry since 2008. I love it. It's my passion, and it's what I've done pretty much every single day since I started. Recently, I've been working on my first SaaS startup as the CEO and founder of VolQAI, where we are building automated software testing that's actually fun, that actually works, that's actually easy, and that makes software development processes much better and much more reliable using our technologies. And in the picture, you'll see that I'm a dog dad. That's my little baby Mia. She is always here with me as I work from home, bossing me around.

So I actually started this journey back in 2022. I don't know if you all remember, but that's when ChatGPT made its first entry into the world and shocked the world. I think it was the fastest-growing application ever, getting millions of users within the first week or something. So it made a huge splash, and of course I jumped on and tried to see: what is this thing? How does it work? I think it was about February of 2023 when I gave my first presentation at a conference in California about its capabilities. I tried to build a web application with it then, and I'm still continuing to build applications with it now, including with other technologies. This is how it performed back then in different areas. Ultimately, if I were to give you a TLDR: it was good with technologies that are slower to change, where documentation is better, and it was very poor with rapidly moving technologies, where of course it was trained on data that couldn't keep up. So whenever APIs changed, it couldn't keep up with those API changes, and there were many challenges that broke different kinds of uses of those rapidly moving technologies.

2. Building a Vision Board

Short description:

Let's test the application by building a vision board. The LLM provided wireframes and met my requirements. User stories, requirements, acceptance criteria, and design principles were all well-executed.

All right, enough talking. Let's actually go for a test drive here and see what we can make happen. I wanted to start out with a concept and design for the application, and the application I wanted to build, the one I've had a dream about building, was a vision board. I have a vision to build a vision board. I've never built a good one. I will continue to try whenever I find free time, and maybe over the years, as different LLMs get smarter, my vision board will finally come to life as I continue to do these presentations.

But ultimately, starting with the concept and design: these are the wireframes that the LLM provided me. I gave it a prompt to build me a vision board, and I supplied a screenshot of the requirements that I had all the way back in 2022. This is all the information that it gave me afterwards. You can see here the different requirements, like the login and register pages, the vision board, goal editing, and the responsive nature, all of which I really like. And so I gave it a little happy face, saying that it did a good job.

Here are the user stories. Granted, these user stories came from my original vision, and this is exactly what I pasted into the prompt. Actually, it's a screenshot, which is super useful. As you can see, it even says here "from requirements." All of these actually look really good to me, so I gave it a happy face as well, because I think the LLM did a good job here too. Here are the requirements. There are a lot more, but this is just a screenshot of some of them. Take user registration: there are the requirements, and it provided some acceptance criteria, which I really like. There's user login, also with acceptance criteria, even getting into details like using JSON Web Tokens or cookie-based authentication, and then adding goals. And there are others. I think this is also really good, especially with the acceptance criteria, where it gives you a really good idea and more clarity about what is and is not acceptable. Then it provided some design principles. Simplicity, love that. For me, a clean, intuitive interface is always very important. Responsiveness for a vision board, that's extremely important, because most of the time I'm actually going to be using it on my mobile phone as I'm walking around, walking my dog and looking at my vision board, or maybe I'm going to Whole Foods and looking at my vision board, visualizing my goals coming true.
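To make that JSON Web Token versus cookie-based authentication criterion a bit more concrete, here is a minimal sketch of what a login endpoint along those lines could look like in a Next.js route handler. To be clear, this is my own illustration, not the code the LLM generated; the route path, the validateUser helper, and the JWT_SECRET environment variable are assumptions.

```ts
// app/api/login/route.ts: a minimal sketch, not the LLM-generated code.
// Assumes the `jsonwebtoken` package and a JWT_SECRET environment variable.
import { NextResponse } from 'next/server';
import jwt from 'jsonwebtoken';

// Hypothetical stand-in for a real credential check (e.g., a database lookup).
async function validateUser(email: string, password: string) {
  return email && password ? { id: 'demo-user' } : null;
}

export async function POST(request: Request) {
  const { email, password } = await request.json();

  const user = await validateUser(email, password);
  if (!user) {
    return NextResponse.json({ error: 'Invalid credentials' }, { status: 401 });
  }

  // The acceptance criterion in spirit: issue a signed JWT and keep it in an HTTP-only cookie.
  const token = jwt.sign({ sub: user.id }, process.env.JWT_SECRET!, { expiresIn: '1h' });
  const response = NextResponse.json({ ok: true });
  response.cookies.set('session', token, { httpOnly: true, secure: true, sameSite: 'lax' });
  return response;
}
```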

3. Error Handling and Installation

Short description:

Mobile responsiveness, accessibility, security, and scalability are important. The LLM struggled with error handling and installation, but we managed to troubleshoot and get it working. Although there were dependencies and version issues, we were able to achieve a solid start with the LLM-generated dashboard and user interface.

So mobile responsiveness is extremely important. Accessibility, it's nice to have. For us here in the US, accessibility is treated as less important; unfortunately, we don't have legal standards that we must meet like you do, but it is important regardless. And then there's security and scalability. That part, I think, is important as well.

In terms of error handling: once I started having the LLM take me through the process of actually setting up the software on my local machine in order to be able to deploy it, it didn't do a good job. I actually spent a good amount of time, probably an hour, troubleshooting different kinds of errors. The installation itself actually went pretty well, but when we started getting into installing Tailwind, that's where I began to run into a bunch of issues, and it took me some time to troubleshoot them. So I don't think it did a good job. Of course, there are so many dependencies; I think ultimately it came down to the version of Tailwind that was installed, which I needed to downgrade in order for it to work.

The version of Tailwind that I was using was an alpha version, or maybe a beta version, that probably wasn't ready. It was funny, because the LLM was the one that gave me the setup instructions, so the fact that it couldn't anticipate this may not bode really well for its reasoning capabilities. But ultimately, we were able to get something up and running, which I thought was actually impressive. Having started this in 2022, and it now being 2025, many versions later since ChatGPT 3.5, I think what we have here is pretty good. It's not perfect. It's not amazing. But it is a solid start for something that is LLM generated. You'll see here the dashboard. You'll see us being able to add a goal. You'll see a nice, clean user interface that's very simple. It works. The date picker is nice and works as expected. And it does add the goals to our vision board.

4. Code, App Creation Process, and Testing

Short description:

Here's the code that the LLM constructed. The app creation process was painful, but we managed to create a better version with more capabilities. The experience with o3-mini-high was worse than with other models like Anthropic's Claude. The LLM wrote end-to-end Playwright tests, and some of them failed.

So that's pretty cool. Here's some of the code, just to show you what that looks like. Everything here was LLM constructed; I made no updates to any code anywhere. Looking at the code, it looks pretty solid. Clean code, well written, using good Tailwind syntax and so on.
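Just as an illustration of that kind of Tailwind-styled component (this is my sketch, not the LLM's actual output; the component name, props, and class choices are hypothetical), an add-goal form with a simple date picker might look roughly like this:

```tsx
// AddGoalForm.tsx: an illustrative sketch, not the code the LLM produced.
'use client';
import { useState } from 'react';

type Goal = { title: string; targetDate: string };

export default function AddGoalForm({ onAdd }: { onAdd: (goal: Goal) => void }) {
  const [title, setTitle] = useState('');
  const [targetDate, setTargetDate] = useState('');

  return (
    <form
      className="flex flex-col gap-3 rounded-lg bg-white p-4 shadow"
      onSubmit={(e) => {
        e.preventDefault();
        onAdd({ title, targetDate }); // hand the new goal to the dashboard
        setTitle('');
        setTargetDate('');
      }}
    >
      <input
        className="rounded border p-2"
        placeholder="Goal title"
        value={title}
        onChange={(e) => setTitle(e.target.value)}
      />
      {/* The date picker mentioned in the talk; a native input keeps the sketch simple. */}
      <input
        type="date"
        className="rounded border p-2"
        value={targetDate}
        onChange={(e) => setTargetDate(e.target.value)}
      />
      <button className="rounded bg-blue-600 px-4 py-2 text-white" type="submit">
        Add goal
      </button>
    </form>
  );
}
```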

In general, if I were to rate the app creation process from start to finish (what did it take to install everything, add all the files, paste the code in, and troubleshoot?), it was actually a very painful process. When I first started this journey, I created a very simple version of the application. This one, I would say, was more complicated: it had more pages, more interactions, more capabilities. In my very first version, all you could really do was modify dummy data, such as the goals; I had a dummy JSON file that I could go and modify, and that would ultimately change the application. But in the current application, as you saw, you can actually add a goal and it updates the dashboard. So it is a better, stronger version of the app, but it was still a painful process.

I don't know if you all have used other models, like Anthropic's Claude, Claude Sonnet 3.7, for example. I'm really used to that one, and it is a fantastic coding experience. Not perfect either; it does run into issues, but to me that's a much smoother experience than this version of o3-mini-high, which is actually designed for coding. Literally, when you choose it, it says it's good for logic and coding tasks, but I thought it was significantly worse than the experience with something like Claude, for example. But if you do want to see the deployed version of the application, you can go ahead and grab it right there.

So I asked the LLM to code me some end-to-end Playwright tests, and this was the outcome. It wrote five tests. Let's see. Should navigate from home to login: that test failed. Should navigate from home to register: that test failed. Should add a goal and display it on the dashboard: that one also failed. Should log in and display the dashboard: that one passed. That's a very easy test to pass.
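For context, a navigation test in that style, if I were to sketch one by hand (the link text, routes, and headings here are assumptions about the app, not the LLM's actual test code), would look roughly like this:

```ts
// tests/navigation.spec.ts: a hand-written sketch of the kind of test described above.
import { test, expect } from '@playwright/test';

test('should navigate from home to login', async ({ page }) => {
  await page.goto('/'); // relies on a baseURL configured in playwright.config.ts

  // Assumed link text; the real app may label it differently.
  await page.getByRole('link', { name: 'Login' }).click();

  await expect(page).toHaveURL(/\/login/);
  await expect(page.getByRole('heading', { name: 'Login' })).toBeVisible();
});
```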

5. Automated Testing and Test Results

Short description:

I'm surprised that one test passed when we don't have the capability for it. Overall, three tests failed and two tests passed. It's disappointing that the automated tests didn't consider the challenges and project structure. Even after years, LLMs still struggle with writing automated tests. The execution got stuck due to a timeout, and it passed a test we don't have the capability for.

And should view a goal and delete it. Oh, that one is interesting. I'm actually surprised that it passed, because we don't have a delete-goals capability. So I don't know where that came from and how that test was performed. Maybe we can look at the code later. But overall, you see that three of the tests failed and two passed, one of them extremely easy and the other one I am now questioning the validity of. It's a poor outcome, in my experience and in my opinion.

Because if you think about it, the LLM had the context of the entire interaction that I had with it. It had me create the entire web application. It knew all of the challenges that I ran into. It knew everything about my project structure. So how did it not create passing automated tests? It's a very disappointing outcome, in my opinion. And surprising as well, because since the beginning these LLMs have struggled to write automated tests, and it's funny because, years later, they continue to struggle to write automated tests. To me it's funny because, I guess, just as we humans don't like testing, it seems the LLMs don't enjoy it either.

Here is an execution of the tests. Yes, it ran through a bunch, but then it got stuck on one of the tests, and we do have a 30-second timeout. And as you can see here, we don't have a delete-goal capability in our dashboard, so I don't know how it passed that test. That would be an interesting code analysis afterwards. Let's go back and see some of the other tests. I should have run them slower, one at a time. It did some things, which is cool. But yeah, then it gets stuck.
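That 30-second timeout is just Playwright's per-test limit; a minimal config along those lines (the baseURL is an assumption about the local setup) would look something like this:

```ts
// playwright.config.ts: a minimal sketch of the 30-second per-test timeout mentioned above.
import { defineConfig } from '@playwright/test';

export default defineConfig({
  timeout: 30_000, // each test fails if it runs longer than 30 seconds
  use: {
    baseURL: 'http://localhost:3000', // assumption: the local Next.js dev server
  },
});
```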

6. Continuous Integration and Deployment

Short description:

Continuous integration failed during deployment to Netlify, but manual deployment worked. ChatGPT showed more capability this time, with clear improvements and some areas still needing work. LLMs still struggle with testing and understanding rendered applications. GitHub Actions CI remained a challenge, while the Next.js application was eventually made to work.

And then ultimately it fails. And then there is the report that is produced afterwards. Cool.

So that was testing. What about some continuous integration? It also failed here. I tried to deploy to Netlify, and as you'll see here, the build step ultimately ended up failing. I'm not sure why it failed; I didn't actually troubleshoot it, because the goal of this project wasn't for me to troubleshoot every single error. But back in 2022, ChatGPT struggled with CI/CD as well, and it looks like it still continues to struggle. I was able to manually deploy the application by linking Netlify to my GitHub account, and then Netlify was able to successfully do the deployment. But the LLM, with all of the knowledge and the context that it had, was not able to successfully deploy the app.

So in summary, overall, I would say that ChatGPT was more capable this time than back in 2022. That's really nice, and certainly there have been a lot of improvements in the LLM over that period. If you saw my previous table, there were a number of red arrows pointing down, which means it did really terribly. In this case, there are a few arrows where it did well and a few arrows where there's still some improvement to do. Like the Playwright testing, which in general I see these LLMs have continued to struggle with; that's really interesting and really funny to me. But it is actually a very tough challenge to solve: how do you do testing well? Can you understand the rendered application well enough to be able to interact with it in the proper manner? Those are some of the challenges that, for example, we're dealing with at VolQAI, so I can understand why the LLMs really struggle here as well. GitHub Actions CI is still missing a number of details needed to make it work. The working Next.js application I gave a yellow arrow, because we were able to get an application going. As you saw, the application is not terrible; it's a working application. But it did take me some struggling through many prompts and many errors to get it resolved. I was able to get it resolved, though, and it was an improvement over ChatGPT 3.5, for example.

7. AI's Impact on Developers and Code Generation

Short description:

The majority of tool installations the LLM suggested worked, and the design improved. On whether AI will replace developers, the answer is mixed. AI enhances individual capabilities, making each engineer more capable. However, there is some replacement happening in areas such as startups using AI to generate most of their code.

Installation of tools: the majority of them worked, some of them did not. The easy stuff, like text-based and creative information, wireframes, user stories, requirements, has been easy for the LLM since the beginning, and it continues to do a good job there. And the design, in this case, I think was improved. Previously, the design that ChatGPT 3.5 produced was pretty ugly and terrible. The current version, I think, is usable. If I had an application that looked like that and I was able to use it, I would not be upset about it. It would be totally usable, and I would continue to use it on a regular basis.

One question that commonly comes up in these kinds of presentations, and I think will continue to come up for a really long time, is: will AI replace developers? It's a question that I personally also like to revisit as time goes on and as developments in AI continue to materialize, and we always have so many developments. I feel like AI keeps moving more and more rapidly, and there keep being more and more innovations throughout the industry. Previously, my answer was that I didn't really see how AI was going to replace developers. Today, I think it's a mixed answer, from what I'm seeing in the industry.

I'll give you an example from my own experience. This application that I built here is relatively simple, but it's a solid MVP, maybe, to try out a concept on. And I was able to create it with very little front-end development knowledge. I'm no front-end expert. I can get around, especially with an LLM; I can follow documentation and be pretty good, but I am no expert. I can't code something like this from scratch by myself without heavy assistance from docs, yet I was able to create something like this. So that's kind of an enhancement of my skill. Now that's one engineer who is more enabled, capable of doing more than I was ever able to do before. That expands my capabilities; it's not necessarily a replacement. And I think that's one area where AI will really help us shine: it will make every individual more capable. For example, in software engineering and in automated software testing as well, that's exactly what we are seeing: AI is making everybody more capable, able to do more with less. So maybe instead of supporting one web application, you will now be able to support multiple web applications, because you have more time in your day for the important, complicated tasks.

However, I do think there's another side to this story, where a replacement of engineers is happening. For example, building a startup, I'm learning a lot about that experience, and in one of the most recent interviews that I saw with Y Combinator, over 90 percent of the code written at the latest Y Combinator startups is generated by AI. And this code is generated with AI by really smart engineers. These engineers can create their own applications from scratch, and yet they are using AI to generate over 90 percent of the code for their apps. So what does that mean to me? Of course it makes you more efficient, but it also means there is less necessity to bring in other engineers to help you with the process, because code creation is now easier than ever.

8. AI's Impact on Hiring and Conclusion

Short description:

AI's impact on hiring: current roles remain, but new hires are affected. Difficulty in finding new positions in the industry. AI enhances my abilities, reducing the need for hiring. Follow me on LinkedIn and Instagram. Thank you for your time.

And so maybe people in their current roles are not getting fired because of AI, but certainly fewer new people are getting hired. That probably correlates with what we're seeing in the world today: we are seeing a lot of tech layoffs, and we are seeing that it's a lot harder to get new positions in the industry. Could that be AI-related? Yeah, maybe. Of course, there are many other factors, but I think it possibly is.

And for example, for me, for myself, building these applications, running a business, I can see how AI has enhanced me so much that I find myself not needing to hire for other roles because AI can handle it for me.

So with all of that said, I really hope that you enjoyed this presentation. Again, I've been Nikolay Advolodkin. You can follow me on LinkedIn, right there, where I'll post the link to this presentation. If you want to keep up with all of the updates going on in my life, that won't be on LinkedIn; that will be on my Instagram, where you can find me. Thank you so much for your time. It's been such a pleasure, and I hope you have a beautiful day. Take care.

Check out more articles and videos

We constantly think of articles and videos that might spark your interest, skill you up, or help you build a stellar career.

AI and Web Development: Hype or Reality
JSNation 2023
24 min
Top Content
This talk explores the use of AI in web development, including tools like GitHub Copilot and Fig for CLI commands. AI can generate boilerplate code, provide context-aware solutions, and generate dummy data. It can also assist with CSS selectors and regexes, and be integrated into applications. AI is used to enhance the podcast experience by transcribing episodes and providing JSON data. The talk also discusses formatting AI output, crafting requests, and analyzing embeddings for similarity.
Domain Driven Design with Vue Applications
Vue.js London 2023
14 min
Top Content
Welcome to this talk on domain-driven design in Vue.js application. Today we are going to look into domain-driven design, its benefits and how it works with Vue.js domain-driven design versus the MVVM model. Vue.js thrives in domain-driven design, a design approach that models software to match a domain. DDD emphasizes understanding business logic and creating a domain that reflects the language and concepts. Integrating DDD in Vue.js offers benefits such as effective modeling of complex business domains, structured code reflecting domain logic, and easier onboarding and ownership.
Forget Bad Code, Focus on the System
React Summit US 2023
27 min
Top Content, Premium
Setting up the system and separating concerns are important in software development. Modular construction and prefab units are a new trend that makes construction quicker and easier. Architectural complexity can lead to a drop in productivity and an increase in defects. Measuring architectural complexity can help identify natural modules in the code. Best practices for avoiding architectural complexity include organizing code by business domain and using prop drilling. Atomic design and organizing a monorepo are recommended approaches for managing architectural complexity.
Confessions from an Impostor
JSNation 2022
46 min
Top Content
The Talk discusses imposter syndrome and reframes it as being a professional imposter. It emphasizes the importance of sharing and starting, embracing imposterism, and building inclusively for the web. The speaker shares personal experiences of being an imposter in various technical disciplines and highlights the significance of accessibility. The Talk concludes with the idea of building a collective RPG game to remove excuses for not making things accessible.
Improving Developer Happiness with AI
React Summit 2023
29 min
GitHub Copilot is an auto-completion tool that provides suggestions based on context. Research has shown that developers using Copilot feel less frustrated, spend less time searching externally, and experience less mental effort on repetitive tasks. Copilot can generate code for various tasks, including adding modals, testing, and refactoring. It is a useful tool for improving productivity and saving time, especially for junior developers and those working in unfamiliar domains. Security concerns have been addressed with optional data sharing and different versions for individuals and businesses.
Maximize Productivity with AI Agents
Productivity Conf for Devs and Tech Leaders
25 min
I'm Tejas Kumar, a software developer with over 20 years of experience. AI agents are defined as entities that act on behalf of users or groups to produce specific effects. Agents consist of an orchestration layer, a language model, and tools represented as JSON functions. Langflow is an open-source tool that allows users to build their own AI agents by connecting language models and tools. Composio is a tool that enhances agent capabilities by offering integrations and apps, such as Google Calendar integration. MCP (Model Context Protocol) is a way to share context with models and extend their capabilities. It allows functions to be made available to models over standard input/output or an HTTP endpoint. MCP can be used with GitHub to perform various tasks like searching and fixing code issues. The Talk covered the basics of AI agents, building agents with Langflow and enhancing them with Composio, and using MCP with GitHub. The speaker encouraged audience questions and exploration of these concepts.

Workshops on related topic

Web Accessibility in JavaScript Apps
React Summit 2022
161 min
Workshop
Sandrina Pereira
Often we see JavaScript damaging the accessibility of a website. In this workshop, you’ll learn how to avoid common mistakes and how to use JS in your favor to actually enhance the accessibility of your web apps!
In this workshop we’ll explore multiple real-world examples with accessibility no-nos, and you'll learn how to make them work for people using a mouse or a keyboard. You’ll also learn how screen readers are used, and I'll show you that there's no reason to be afraid of using one!
Join me and let me show you how accessibility doesn't limit your solutions or skills. On the contrary, it will make them more inclusive!
By the end, you will:
- Understand WCAG principles and how they're organized
- Know common cases where JavaScript is essential to accessibility
- Create inclusive links, buttons and toggleable elements
- Use live regions for errors and loading states
- Integrate accessibility into your team workflow right away
- Realize that creating accessible websites isn't as hard as it sounds ;)
How to Create a Web Application in an (Almost) Autonomous Way Using Clean Coder
Productivity Conf for Devs and Tech Leaders
95 min
Workshop
Grigorij Dudnik
Imagine replacing yourself with a multi-agent AI programmer to develop your production web application. That's exactly what we did at my startup takzyli.pl. To achieve this, we designed and used Clean Coder, an AI agent framework for autonomous code writing (https://github.com/GregorD1A1/Clean-Coder-AI), which is an open-source project. If it worked for us, why shouldn't it work for you? In this workshop, I'll show you how to create an entire web application in an (almost) autonomous way and drastically reduce the time you or your employees spend on writing code.
Build a Universal Reactive Data Library with Starbeam
JSNation 2023
66 min
Workshop (Free)
Yehuda Katz
This session will focus on Starbeam's universal building blocks. We'll use Starbeam to build a data library that works in multiple frameworks. We'll write a library that caches and updates data, and supports relationships, sorting and filtering. Rather than fetching data directly, it will work with asynchronously fetched data, including data fetched after initial render. Data fetched and updated through web sockets will also work well. All of these features will be reactive, of course. Imagine you filter your data by its title, and then you update the title of a record to match the filter: any output relying on the filtered data will update to reflect the updated filter. In 90 minutes, you'll build an awesome reactive data library and learn a powerful new tool for building reactive systems. The best part: the library works in any framework, even though you don't think about (or depend on) any framework when you build it.
Table of contents:
- Storing a Fetched Record in a Cell
- Storing multiple records in a reactive Map
- Reactive iteration is normal iteration
- Reactive filtering is normal filtering
- Fetching more records and updating the Map
- Reactive sorting is normal sorting (is this getting a bit repetitive?)
- Modelling cache invalidation as data
- Bonus: reactive relationships
Test, Code, Repeat: Mastering AI-Assisted Development
Productivity Conf for Devs and Tech Leaders
53 min
Workshop
Marco Pierobon
"Test, Code, Repeat: Master AI-Assisted Development" introduces developers to a transformative way of coding with AI as a collaborative partner. This workshop focuses on how iterative workflows, such as the ping pong pairing technique, enable an enhanced interaction between human creativity and AI efficiency. 
Build Web3 apps with React
React Summit 2022
51 min
Workshop
Shain Dholakiya
The workshop is designed to help Web2 developers start building for Web3 using the Hyperverse. The Hyperverse is an open marketplace of community-built, audited, easy to discover smart modules. Our goal - to make it easy for React developers to build Web3 apps without writing a single line of smart contract code. Think “npm for smart contracts.”
Learn more about the Hyperverse here.
We will go over all the blockchain/crypto basics you need to know to start building on the Hyperverse, so you do not need to have any previous knowledge about the Web3 space. You just need to have React experience.