The Future is Today: Leveraging AI in Software Testing

In this talk, we will discuss leveraging Machine Learning practices in Software Testing with several practical examples and a case study that I used in my project to do Bug Triage. Let's embrace the future together!

This talk was presented at TestJS Summit 2023.

FAQ

Mesut Durukal is a software quality assurance engineer who talks about integrating machine learning into software testing activities.

Machine learning is important in software testing because it can improve efficiency, reduce manual tasks, and help in various stages of the software testing lifecycle.

The main challenges in software testing include handling complex applications, multiple interfaces, time constraints, and resource issues such as budget and cost.

Machine learning can help in analyzing requirements and designing test cases by observing and learning from data, generating test cases automatically, and providing accurate and efficient test scenarios.

Yes, machine learning can be used in test code implementation by generating test code based on natural language descriptions of test steps and automating the coding process.

In test execution, machine learning can collect metrics such as execution duration, identify outliers, and notify testers of unexpected situations, thereby improving the reliability and efficiency of the testing process.

Machine learning can assist in test maintenance by performing code reviews, identifying anti-patterns, suggesting fixes for failing test cases, and migrating test code between different frameworks.

Self-healing in test automation refers to the capability of machine learning algorithms to identify the root cause of test failures and provide automated solutions to prevent similar issues in the future.

Machine learning can be applied to bug management by classifying and clustering bugs based on their severity and priority, thereby improving the efficiency of bug triage and resolution processes.

While machine learning can significantly enhance testing processes, it is not expected to replace human testers. Instead, it is likely to complement human efforts by automating repetitive tasks and providing valuable insights.

Mesut Durukal
25 min
11 Dec, 2023

Video Summary and Transcription
This Talk discusses integrating machine learning into software testing, exploring its use in different stages of the testing lifecycle. It highlights the importance of training data and hidden patterns in machine learning. The Talk also covers generating relevant code for test automation using machine learning, as well as the observation and outlier detection capabilities of machine learning algorithms. It emphasizes the use of machine learning in maintenance, bug management, and classifying bugs based on severity levels. The Talk concludes with the results of classification and bug management, including the use of clustering.

1. Integrating Machine Learning into Software Testing

Short description:

In this session, I want to talk about integrating machine learning into our daily software testing activities. We will discuss integrating machine learning activities in different software testing stages. We will start from the first stage in the lifecycle, analyze the requirements, design test cases, implement test code, and discuss maintenance activities. We need to use machine learning to improve our activities due to the complexity of the systems we test, the need to cover different interfaces and integrations, and the time constraints and resource issues we face. Machines and robots can help us, as they already do in our daily lives.

Hello, everybody! My name is Mesut Durukal. I'm a software quality assurance engineer, and in this session I want to talk about integrating machine learning into our daily software testing activities.

Machine learning is a very hot topic nowadays; everyone is talking about it, and everyone is trying to take advantage of it to improve their efficiency. So, how about quality assurance activities? We can also improve our efficiency, reduce some manual tasks, and take advantage of machine learning at different stages of our activities.

So, this is what we will do. We will discuss integrating, or leveraging, machine learning in different software testing stages. We will start from the first stage of the lifecycle, which is analyzing the requirements and designing test cases, then we will discuss how we can implement test code as well, because test automation is very important, and eventually we will discuss maintenance activities with the help of machine learning. So, this is what we will go through.

First of all, let's look at the background and why we need machine learning to improve our activities. The second part of the presentation is probably the most important: we will go through all the stages of the software testing lifecycle. In the last part I will share a practical example where I used machine learning in a personal project; I will explain what I did and share some results. So let's get started with the first part: the introduction and the motivation for using machine learning in software testing.

We need machine learning to improve or support our activities because we face several challenges. Software testing is not easy anymore. The applications and systems we test are complex, with several interfaces and interactions: they communicate with other applications on different platforms, so we have to cover many interfaces and integrations. This means we have a comprehensive scope to test, and of course we have time constraints. Time is precious and very valuable, and if our test cases slow down the pipelines, after some time that won't be acceptable: developers and product managers will start complaining about slowly running test cases, because we want to fix failures and issues as soon as possible. Fast delivery is one of the key quality dimensions.

But to support fast delivery, we have to keep up. We have to adapt our solutions quickly and cover a lot of scope and integrations quickly. So we have time constraints, scope constraints, and resource constraints, meaning budget and cost issues. From lots of different dimensions we face several challenges, and it looks a bit difficult to cope with them all. But maybe we can get some help from someone. Could machines or robots be that someone? Actually, they can, because even in our daily routines they are already helping us in several situations. When we browse social media or read articles, we see them making recommendations to us.

2. Machine Learning in Software Testing

Short description:

In this part, we discuss how machine learning works and its working principle in software testing. We explore the use of machine learning tools, such as natural language processing algorithms, in software testing practices. We also highlight the importance of training data and how hidden patterns are revealed to generate a model for predicting future reactions or results. Additionally, we draw parallels between machine learning and biological learning, emphasizing the need for learning and observation in testing. Finally, we examine the software testing life cycle and the stages involved, from analyzing requirements to executing test cases and performing maintenance.

For example, when I read one article, I see that I might be interested in more, similar articles, and most of the time the recommendation is very accurate. How does it work? How do they know me, and the kind of content I might be interested in? Because they watch me, they observe me: they already know which articles I visited before and which topics I was interested in. The prediction, then, is which similar content I could be interested in next. There is a pattern underlying my actions, and once this hidden pattern is revealed through observation, which in machine learning is called learning, future actions can be predicted easily. This is the working principle of machine learning.

Similarly, it can work in our software testing activities. For example, even the query of which tools use machine learning to support software testing can be answered with the help of machine learning. Nowadays we all know there are several natural language processing algorithms we can communicate with, and after sending our queries we can get accurate answers. And it is not only NLP: there are many other algorithms we can use in our software testing practices, and we will see how fast and how reliably they work.

One more time, let's quickly remember how machine learning algorithms work. The training data is very important for predicting accurate results, because if the model does not learn from us very well, the results it generates might not be accurate or expected. Once we have enough accurate and consistent data, the hidden patterns are revealed by observing it and a model is generated, so whenever new input data arrives, the model can predict how the system will react to it. That prediction is our result. Once the model is complete, we can evaluate its performance; if we are satisfied with it, we can deploy it to production, and otherwise we can keep improving it by doing some fine-tuning and playing with the parameters of the model.

It looks very similar to our biological learning, and I tried to give some examples of this in the previous slides: first we observe and try to learn how the whole system works, and then for upcoming situations we try to predict the reactions or results. For example, as a human tester, if you ask me to test your system, my answer would most probably be: okay, if I have the bandwidth I can test your system, but first please let me learn it. Even if you don't teach me yourself, I will learn it through exploratory activities, by going through the documents and materials. I will observe, click the buttons, navigate to different pages, and see how the system reacts to my actions. Eventually, after I complete my learning, I can guess what the results will be. For example, once I see that the APIs work in a secure way, I can guess that if I send a request with unauthenticated tokens or credentials, I will most probably get a 403 response code. This is my prediction, because I already observed that the system works in a secure way.
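
As a small illustration, that last prediction can be written down as an automated negative check. This is only a minimal sketch, assuming Python's requests library and a hypothetical endpoint and auth scheme:

```python
import requests

# Hypothetical endpoint; the real URL and auth scheme depend on the system under test.
BASE_URL = "https://api.example.com/books"

def test_unauthenticated_request_is_rejected():
    # Send a request with an obviously invalid token.
    response = requests.get(BASE_URL, headers={"Authorization": "Bearer invalid-token"}, timeout=10)
    # The learned expectation: a secured API should answer 401/403, not 200.
    assert response.status_code in (401, 403)
```
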
This will work the same way with machines, so let's see how it fits into our software testing lifecycle and its stages. This is the usual lifecycle we follow. Of course, it no longer runs in a waterfall manner; it runs iteratively, in an agile way. But basically this is how it works, even across iterations. We start with analyzing the requirements and understanding the features, and then we design test cases to cover those requirements or features. Once we have designed our test cases, we have to execute them, either manually or in an automated way. If we are doing automated testing, we should implement the test code; this implementation is part of the environment setup stage. Test case execution comes next, after environment setup. And eventually, after execution, we close our testing activity by doing the maintenance.

3. Leveraging Machine Learning in Test Case Lifecycle

Short description:

If we have any issues in the test case itself, we can improve the test code. Otherwise, if we find any issues with the product, we can raise bugs or tickets. So, in each stage, we can somehow leverage machine learning. Let's start discussing each stage one by one, starting with analyzing the requirements and then generating the corresponding test cases. An ISBN number is a combination of digits in a specific format. After observing this, I can generate other test cases by injecting different values. But of course, there is a much more straightforward way to do that nowadays, by using NLP algorithms.

If we have any issues in the test case itself, we can improve the test code. Otherwise, if we find any issues with the product, then we can raise the bugs or the tickets. This is the last stage of one single test case life cycle.

So, in each stage, we can somehow leverage machine learning. Let's start discussing each stage one by one, starting with analyzing the requirements and then generating the corresponding test cases. Again, after observing, after doing our learning and our training, we can generate some test cases.

For example, let me immediately go through an example: an API that we will test, the Application Programming Interface of a library. We have a library, and inside we have several books. Each book entity has different attributes, like the ISBN number, the price, or the publication year of the book. Whenever I send some queries, I get the relevant responses, and from these responses I can see what kinds of values represent these attributes.

An ISBN number is a combination of digits in a specific format. First we have a digit, then a hyphen character, then three more digits and another hyphen character, and so on. This is the format designed to represent the ISBN number, and observing it is already a piece of training, a piece of learning. After I observe it, I can generate other test cases by injecting different values: not this exact value, but similar values, and even some intentionally wrong values. For example, what happens if I start with two digits? It would violate the standard defined for the ISBN number, so I am generating negative test cases as well, intentionally injecting unexpected values. These are all generations we can do after learning, after seeing and doing our observations.
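
A minimal sketch of this format-based generation, assuming a simplified ISBN-like pattern (the exact format and values here are purely illustrative):

```python
import random
import re

# Simplified ISBN-like format learned from observed responses:
# one digit, a hyphen, three digits, a hyphen, then more digits (illustrative only).
ISBN_PATTERN = re.compile(r"^\d-\d{3}-\d{5}-\d$")

def generate_valid_isbn():
    # Build a value that matches the observed format.
    return f"{random.randint(0, 9)}-{random.randint(100, 999)}-{random.randint(10000, 99999)}-{random.randint(0, 9)}"

def generate_negative_isbns():
    # Intentionally violate the format: two leading digits, missing hyphen, letters, empty string.
    return ["12-345-67890-1", "1345-67890-1", "a-bcd-efghi-j", ""]

if __name__ == "__main__":
    valid = generate_valid_isbn()
    assert ISBN_PATTERN.match(valid)
    for bad in generate_negative_isbns():
        assert not ISBN_PATTERN.match(bad)
```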

But of course, there is a much more straightforward way to do that nowadays: by using NLP algorithms. We can just send our query. For example, in the example I'm sharing on the slides, I simply explain my problem: I am a tester, I have to design some test cases, and my usage scenario is that users go to this web page and perform queries by typing keywords into the text fields on the page. That is my scenario; please define some test cases for me. I can see it already generates maybe six or seven test cases, including positive scenarios, corner cases, and lots of different coverage points. After we design our test cases, the next stage is generating the code, implementing the test code, which can again be done using NLP. In the example I'm sharing on this slide, I explain my problem, the test steps.

4. Generating Relevant Code for Test Automation

Short description:

In this part, we discuss generating relevant code for test automation using machine learning. We explore the use of different programming languages, libraries, and frameworks, such as Python with Selenium, Cypress, and Playwright. The generated code is similar to what we would implement manually. Machine learning helps automate the code generation process and saves time and effort.

Step number one: go to this page, navigate to this URL. Step number two: find and locate this element and perform the user interaction, like clicking the button or typing some keywords into the text fields. Then I can see the relevant code being generated. In this case it was Python with Selenium and some other libraries. And it's not only specific programming languages or libraries; I can also ask for the relevant code in particular test automation frameworks. In this example, I'm asking it to generate the code for a similar scenario, not the same but a similar one, in Cypress. But I can ask for different frameworks as well, like Playwright, Selenium, or any other. Again, I can see that it generates code that is very similar to what I would implement, because if I had not asked, I would most probably have implemented very similar code myself.
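
For a sense of what such generated code looks like, here is a minimal sketch of the scenario above, assuming Selenium WebDriver for Python and a hypothetical URL and locators:

```python
from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.common.keys import Keys

# Hypothetical page and locators; real values depend on the application under test.
driver = webdriver.Chrome()
try:
    # Step 1: navigate to the page.
    driver.get("https://example.com/search")
    # Step 2: locate the text field and type a keyword.
    search_box = driver.find_element(By.ID, "search-input")
    search_box.send_keys("machine learning" + Keys.ENTER)
    # Step 3: verify that results are shown.
    results = driver.find_elements(By.CSS_SELECTOR, ".result-item")
    assert len(results) > 0
finally:
    driver.quit()
```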

5. Code Generation and Test Execution

Short description:

Now, in terms of code generation, I'm following a different path. I start with a Postman query and convert it to different programming languages. Implementation of test cases in UI automation requires visual recognition of elements. Instead of using traditional attributes, we can visually recognize elements. We can train the code to match visually using similar icons. After implementation, we can execute the test case and collect metrics such as execution duration.

Now, in the third code generation example, I'm following a different path. I start with a Postman query and ask for it to be converted into different programming languages: not only Python, but JavaScript and others as well. So first I implement my Postman query and copy the curl value, and the rest is fully automated; I don't waste any time. Everything is fast, accurate, and reliable, and the code that is eventually generated is something I can directly drop into my test automation environment.
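
For example, a copied curl command such as `curl -H "Authorization: Bearer <token>" https://api.example.com/books` (a hypothetical endpoint) translates into a few lines of Python with the requests library:

```python
import requests

# Equivalent of the copied curl command (endpoint and token are placeholders).
response = requests.get(
    "https://api.example.com/books",
    headers={"Authorization": "Bearer <token>", "Accept": "application/json"},
    timeout=10,
)
print(response.status_code)
print(response.json())
```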

Moving on, and still talking about the implementation of test cases: when we are doing UI automation, visual recognition of elements is very important, because the code has to locate the elements on the web pages. Normally we use the traditional attributes of the elements, the paths, the classes, or the IDs, but we all know they are sometimes flaky. They might be changed by the development team; the layout of the page can change and the locators break. So what should we do? Instead of using those traditional attributes, paths, and classes, why don't we try to recognize elements visually, just as we do as human beings? When we scan a web page with our eyes, we see where the elements are: where the buttons are, in the left-most corner or in the middle of the page. In the same way, we can let the code try to match visually, if we train it with several similar icons. For example, on this slide I'm sharing an open repository; since it's open, I'm sharing it without any commercial concerns, and you can go and check it. It contains several icons, and in this case it was an online shopping cart. If you are testing an online shop, you can train it this way, and whenever a similar icon appears, not necessarily 100% identical but similar, it will be recognized as a shopping cart. Its location is recognized, the code can click on the recognized element, and we can continue with the rest of the test steps.
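
One simple form of this idea (classic template matching rather than a trained model) can be sketched with OpenCV; the screenshot and icon file names below are placeholders:

```python
import cv2

# Placeholders: a full-page screenshot and a reference icon (e.g. a shopping-cart image).
screenshot = cv2.imread("page_screenshot.png", cv2.IMREAD_GRAYSCALE)
icon = cv2.imread("cart_icon.png", cv2.IMREAD_GRAYSCALE)

# Slide the icon over the screenshot and measure similarity at every position.
result = cv2.matchTemplate(screenshot, icon, cv2.TM_CCOEFF_NORMED)
_, max_val, _, max_loc = cv2.minMaxLoc(result)

if max_val > 0.8:  # similarity threshold; the icon need not be pixel-identical
    x, y = max_loc
    h, w = icon.shape
    center = (x + w // 2, y + h // 2)
    print(f"Element recognized at {center}; a click can be dispatched there")
else:
    print("Element not found on the page")
```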

Next, after we complete our implementation, we can execute the test case. During test execution we can collect several metrics, for example the execution duration: how much time do we need to complete our test execution? Let's say it is 15 seconds per test case.
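
One simple way to record this metric is sketched below with plain Python timing; the wrapper and file name are illustrative, and in practice the test framework's own reporting would usually provide these numbers:

```python
import csv
import time

def run_with_timing(test_func, history_file="durations.csv"):
    """Run a test callable and append its execution duration to a CSV history."""
    start = time.perf_counter()
    test_func()
    duration = time.perf_counter() - start
    with open(history_file, "a", newline="") as f:
        csv.writer(f).writerow([test_func.__name__, round(duration, 2)])
    return duration
```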

6. Observation and Outlier Detection

Short description:

Machine learning algorithms can detect outliers in execution times, automatically notifying us of risky or unexpected situations. By analyzing observations and comparing them to previous results, we can identify potential issues and investigate further.

This is the observation, this is the training. And whenever we have a new execution, if it doesn't take 15 seconds but two minutes, for example, it is obviously an outlier. In this case, a machine learning algorithm can warn me, can notify me: the expected result was around 15 seconds, because that was the previous observation, but this time it took two minutes, so please go and check whether something is wrong. Some responses might have come back late from the system, or there might be something else. All these kinds of observations can be done automatically, and instead of going through all the executions one by one, we can be automatically notified about the risky executions, the ones with outliers or unexpected situations.
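
A minimal sketch of such an outlier check, using a plain statistical rule over a history of execution durations (the numbers are illustrative):

```python
from statistics import mean, stdev

# Illustrative history of execution durations (seconds) for one test case.
history = [14.2, 15.1, 14.8, 15.6, 14.9, 15.3, 15.0]

def is_outlier(new_duration, durations, threshold=3.0):
    """Flag a run whose duration is more than `threshold` standard deviations from the mean."""
    mu, sigma = mean(durations), stdev(durations)
    return abs(new_duration - mu) > threshold * sigma

# A run that suddenly takes two minutes should trigger a notification.
print(is_outlier(120.0, history))  # True
print(is_outlier(15.4, history))   # False
```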

7. Maintenance, Bug Management, and Machine Learning

Short description:

In the maintenance stage, we can teach machines to do code reviews and identify anti-patterns. Self-healing of failing test cases can help identify root causes and provide fix options. Migrating between frameworks can be achieved through model training. A case study on bug management will be shared, focusing on classifying bugs based on severity levels. Pre-processing and feature extraction are key steps in machine learning. Converting text into numbers allows computers to understand the data. Classification is the next stage, followed by collecting results.

And the final stage is the maintenance. Refactoring is one activity, one option, that we can do in the maintenance stage: reviewing the code or improving and refactoring it. If we can somehow teach machines to do code reviews, by teaching them good practices or the anti-patterns we have to avoid, then they can do the code reviews. For example, whenever I have a hard-coded magic number inside the code, and I have taught the machine that this is an anti-pattern I want to avoid, it can notify me whenever it detects one: I found this anti-pattern here, please go and check. This will speed up our processes, because otherwise we would just do peer reviews, and that takes time: I would send my code to a teammate, he or she would check it and provide feedback, I would check it again, and it would go back and forth. This way, the first review can already be done with the help of machines.

Self-healing is another option we can apply: whenever a test case fails, we can try to understand the root cause automatically, and the machine can provide some options for a fix to avoid the failure. And migration, which I discussed in one of the previous slides, like migrating from Cypress to Playwright or converting from one framework to another, can also be achieved by training our models.

Eventually, the last part is a case study where I share my personal experience. It is related to bug management, and I will explain how we can manage the bugs or tickets in our projects using machine learning. In the project where I applied machine learning, I had almost 900 tickets in our issue tracking system. First of all, I exported these tickets from the project management tool to an Excel spreadsheet, and I had three different severity levels: severity 1, 2, and 3, which are different criticality or priority levels. What I am trying to do is classify those bugs into the different severity levels.

What I did for this purpose is, first of all, the pre-processing, like cleaning the data. If there is any text in the descriptions inside the tickets that I don't need, I remove it. I also do some transformations, like removing punctuation or making all the words lowercase. Then I do the feature extraction. What is feature extraction? As we all know, in machine learning, a normal string, the English description I write to describe my problem, my bug, cannot be understood by machines, because that is our language as humans. If I want to talk not to another human but to a computer, I should use another language, not English: the machine language. How does that work? I have to convert these strings, these text values, into numbers, basically zeros and ones; this is what a computer understands. For this conversion I can follow different approaches, but what I did in this specific case was simply count the frequency of words in each sentence. For example, if a specific word occurs two or three times in a sentence, then that count, in its binary form, becomes my feature for this sample. After having the features, the next stage is doing the classification itself.
And eventually, we can collect the results from these models.
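
A minimal sketch of this pipeline with scikit-learn, assuming the ticket descriptions and severity labels have already been exported into Python lists (the tiny data set here is purely illustrative, not the real 900 tickets):

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.metrics import accuracy_score, confusion_matrix
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

# Illustrative data; in the case study ~900 tickets were exported from the tracker.
descriptions = [
    "App crashes on login with null pointer",
    "Typo in footer text",
    "Payment fails when card number contains spaces",
    "Button color slightly off on settings page",
    "Data loss after session timeout",
    "Tooltip overlaps label on small screens",
]
severities = [1, 3, 1, 3, 1, 2]  # 1 = most severe

X_train, X_test, y_train, y_test = train_test_split(
    descriptions, severities, test_size=0.33, random_state=42
)

# CountVectorizer lowercases the text, strips punctuation, and counts word
# frequencies, turning each description into a numeric feature vector.
model = make_pipeline(CountVectorizer(), MultinomialNB())
model.fit(X_train, y_train)

predictions = model.predict(X_test)
print(confusion_matrix(y_test, predictions))
print("accuracy:", accuracy_score(y_test, predictions))
```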

8. Results of Classification and Bug Management

Short description:

In the first cycle, I had a confusion matrix with around 73% accuracy. I merged the less critical classes to avoid bias, as class 2 was the important one to detect. I collected all other classes into one. The resulting accuracy was 82%, but still not sufficient for all activities. We can also use clustering to manage bugs.

So let me share some results. In the first cycle, this is the confusion matrix I had, with around 73% accuracy. Then I merged two of the classes (labeled 3 and 4 in the tracker) into one, just to avoid bias, because class 2 is the class most bugs are labeled with and the one that was important for me to detect: if I have a class 2 ticket, it is a release blocker, so deciding whether a bug is class 2 or not is an important decision. So I collected all the other classes into one. In this case I reached 82% accuracy, which is maybe still not sufficient to fully rely on, because it is a difficult problem. Even for us as humans, we sometimes argue a lot during bug triage: for me a bug is a severity 2, for someone else it is a 3 or a 4. On top of classification, we can also do clustering, and manage our bugs that way.
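
A small sketch of both ideas, the class merging (reduce the problem to "release blocker or not") and clustering as an alternative, again with purely illustrative data and scikit-learn:

```python
from sklearn.cluster import KMeans
from sklearn.feature_extraction.text import CountVectorizer

descriptions = [
    "Crash on startup when cache is corrupted",
    "Misaligned icon in the toolbar",
    "Crash when uploading large files",
    "Wrong label translation on help page",
]

# Merging classes: map every non-blocking severity to a single "other" label,
# so the model only has to decide "release blocker (class 2) or not".
original_labels = [2, 3, 2, 4]
binary_labels = [1 if severity == 2 else 0 for severity in original_labels]
print(binary_labels)  # [1, 0, 1, 0]

# Clustering: group similar bug descriptions without using any labels at all.
features = CountVectorizer().fit_transform(descriptions)
clusters = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(features)
print(clusters)
```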
