Revolutionizing JS Testing with AI: Unmasking the Future of Quality Assurance


"Revolutionizing JS Testing with AI: Unmasking the Future of Quality Assurance" is a forward-thinking talk that delves into the transformative power of AI in JavaScript testing. The presentation offers an enlightening exploration of AI Testing principles, practical applications, and future potential. By featuring AI-driven tools like Testim, ReTest, Datadog, and Applitools, this talk brings theory to life, demonstrating how AI can automate test case generation, optimize anomaly detection, and streamline visual regression testing. Attendees will also gain insights into the anticipated advancements in AI Testing for JavaScript. The talk concludes with a lively Q&A, inviting everyone to delve deeper into the world of AI and JavaScript testing. Be prepared to reimagine your QA process with AI!

This talk was presented at TestJS Summit 2023. Check out the latest edition of this JavaScript conference.

FAQ

How does generative AI revolutionize JS testing?
Generative AI revolutionizes JS testing by automating the creation of test cases and enhancing error detection. This allows for faster test generation and the ability to handle complex test scenarios more efficiently than manual methods.

How does AI improve software testing?
AI improves software testing by automating test generation, reducing human error, and providing templates for building better tests. It also assists in error detection based on historical data, improving test accuracy and reliability.

What are the key technologies behind AI-driven testing?
Key technologies in AI-driven testing include natural language processing, predictive analytics, and neural networks. These technologies enable enhanced processing of test instructions and contribute to smarter, data-driven testing strategies.

What are the benefits of AI testing?
AI testing offers multiple benefits, including increased speed and efficiency of test generation, enhanced test coverage, adaptability to code changes, and improved bug detection capabilities.

Can non-technical users create tests with AI?
Yes, the use of generative AI in testing opens up the possibility for non-technical users to participate in test creation. AI can generate test cases from simple instructions, making the process accessible to those without deep technical expertise.

Why is data quality important for AI testing?
High-quality data is crucial for effective AI testing. The performance of AI models in generating and evaluating test cases depends significantly on the quality and relevance of the data used to train them.

Which tools are popular for AI-driven JS testing?
Popular tools for AI-driven JS testing include Amazon Bedrock, Hedgey, Datadog, and Applitools. These platforms offer features such as anomaly detection, visual regression testing, and automated test case generation.

How does AI help find subtle bugs?
AI excels at identifying subtle discrepancies and non-obvious bugs through advanced pattern recognition and anomaly detection techniques. This capability allows for more thorough and accurate testing, especially in complex applications.

Renaldi Gondosubroto
20 min
11 Dec, 2023

Comments

  • Swati Gajjar
    Hello, at the end of your presentation you showed a demo of an AI-powered codeless testing tool. Can you provide the name of that tool?
Video Summary and Transcription
AI testing with generative AI is revolutionizing JS testing by automating test creation and improving software test processes. Key technologies like natural language processing and neural networks, as well as quality data, play a crucial role in AI testing. The benefits of AI testing include speed, efficiency, adaptability, bug detection, and limitless potential. Generated JavaScript tests can be tailored to different tools like Selenium, and there are popular tools available for automating test creation. AI tools like Datadog, RecheckWeb, and Applitools Eyes offer powerful capabilities for anomaly detection, visual regression testing, and codeless testing. The horizon for AI in testing continues to expand with evolving capabilities, and understanding AI's role in the testing revolution and in machine learning is crucial for practical application and continuous learning.

1. Revolutionizing JS Testing with AI

Short description:

Hello everyone! My name is Renaldi, and today I will be delivering a talk on revolutionizing JS testing with AI. AI, particularly generative AI, has been making big strides in changing the landscape of programming and testing. It has opened up opportunities for improvement within quality assurance. We will explore the growing trend of generative AI within testing, its potential to automate test creation, and how it revolutionizes the landscape. The role of machine learning in AI testing is to improve software test processes, prevent human error, and automate error detection based on history.

Hello everyone! My name is Renaldi, and today I will be delivering a talk on revolutionizing JS testing with AI: unmasking the future of quality assurance. As you probably already know by now, AI, particularly generative AI, has been making big strides in changing the landscape of many kinds of programming, and JavaScript is just one of them. And it's not just the programming aspect, but the testing element as well. That has opened up a lot of opportunities for improvement within quality assurance, and that's the topic we're going to delve into today.

So without further ado, let's get straight into it. I'd just like to briefly introduce myself. I'm a software engineer at Seek. I hold all 13 AWS certifications, and I'm a subject matter expert for the AWS Solutions Architect Professional and AWS Data Analytics Specialty certifications. I'm an international speaker at over 30 events and conferences, and I enjoy all things AWS, open source, testing, and virtual reality.

So diving into the topic directly, what's the core of what we want to get into today? Really, it's all about understanding the growing trend of generative AI within testing, because we've seen it used more and more in the testing realm. Nowadays you can not only generate new text or new stories with generative AI, you can actually generate code, and generate tests for your code. It has so much potential. As mentioned before, this leads to new areas such as codeless creation of test cases, which in turn opens test creation up to anyone. It's no longer only those who are well versed in test creation who can do this; regular devs and even non-technical people can start helping with the development of tests too. In general, it's revolutionizing the landscape in a really big way.

What is the role of machine learning within AI testing? Firstly, we're using AI to improve software test processes. It's becoming an assistant we can work with, one that creates a template for us to build on. Aside from that, it helps us ensure that what we're doing is right. One of the most common problems in test case creation is human error. Introducing AI into the mix can help prevent that, and redirect our focus toward making better, more error-proof tests. That is the power of generative AI. We also want to automate error detection based on history. We can create an automated process where error handling and error checking are routine, so the AI can check against the history, flag where potential errors might occur, and provide better suggestions based on that.

2. Key Technologies and Data in AI Testing

Short description:

Aside from redefining quality assurance, AI testing involves key technologies like natural language processing, predictive analytics, and neural networks. The role of data is crucial as feeding quality data determines AI's performance. Fine-tuning solutions requires sufficient data.

Aside from that, it is also redefining how we perform quality assurance. As mentioned before, we can integrate it as part of our pipeline and build on that, so the quality at each stage is assured by the checks the AI performs.

So what are the key technologies involved in this? To name a few: natural language processing, predictive analytics, and neural networks. Natural language processing, for example, is very important in this scenario because it determines how the text we put through is processed. That's why prompt engineering is such a big thing within AI: we want to make sure we're actually giving it the right instructions instead of being vague. We're going to cover that a bit later as well.
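As a concrete illustration of that point, compare a vague instruction with a specific one. The function name and test cases below are hypothetical, invented purely for illustration:

```text
Vague:    "Write tests for this code."

Specific: "Write Jest unit tests in JavaScript for the applyDiscount
           function below. Cover a standard order, an empty cart, and a
           negative quantity, and assert the exact error thrown for the
           invalid input."
```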

Aside from that, data plays a very big role here, because feeding the AI quality data really determines how well it's going to perform. We have seen providers such as ChatGPT or Amazon's Bedrock models perform well or badly depending on the parameters they have and the data used to train them. So it really affects this, and it's important to understand that. If you decide to look into fine-tuning your solutions, that's definitely a big consideration, because you want to make sure you are fine-tuning based on enough data and not just partial data.

3. Benefits of AI Testing and Using Generative AI

Short description:

AI testing brings speed, efficiency, adaptability, bug detection, and limitless potential. It can generate tests faster and refine them based on previous tests. AI helps developers account for edge cases and uncover subtle bugs. When using generative AI, specify the language, functionality, and include what you want to be tested. Be cautious when inserting code into generative AI and use APIs for safety.


So what are the benefits of AI testing? Firstly, speed, because tests can be generated and validated rapidly. There's practically limitless potential in the number of tests you can run. And you can generate them much faster than you could manually, because even if they aren't correct or exactly as you envisioned on the first pass, you can ask the AI to refine what it created before, or refine it directly yourself. That helps you develop much faster than coding tests one by one.

Aside from that, it also brings efficiency: doing more tests in less time reduces human effort. You're also able to get more coverage, and as we know, coverage is one of the most important aspects of JavaScript testing. As devs, one of the most important questions is: can we account for the edge cases, and how can we keep the code as clean of errors as possible? With AI, we can plan for that better and cover potential test cases we might not have thought of. That's why it's good to have it as an assistant.

Fourth, we have adaptability: we can respond quickly to changes in code and functionality. We're going to see some examples of that later on, too. Finally, we have bug detection, because these tools are really great at uncovering subtle or non-obvious bugs. One of the best examples is visual regression testing, where AI can compare subtle differences between images, or within code. Really, the potential is limitless.

This is an example of how to phrase a request to a generative AI, and it really depends on what LLM you're using. It could be ChatGPT; it could be Amazon Bedrock. This particular one, for example, I put into the Claude foundation model on Amazon Bedrock because I wanted to see how it performs, and it turned out it was able to do this well. So I'm providing this as a template to better understand how to write this kind of prompt to generate new test cases. We're essentially giving it instructions: we specify what language we're using, we specify what functionality we want, and we put through the test cases. You can include existing test cases or not; it depends. But the key point is that instead of generically saying "generate me a suite of test cases based on this code", you want to include what you want to be tested. For example, if you want to test whether a button on a webpage works, you'd say "webpage for button X", and in the description you'd say "I want to test button X to see that it actually works properly". One caution I'd like to add: be mindful of just inserting code into a generative AI. You should use an API rather than a public-facing interface. With something like the public ChatGPT interface, it's very risky and very ill advised to paste your code or any PII directly. If you have an in-house solution, or you use the model through an API, your data is more likely to be safe.
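To make that caution concrete, here is a minimal, hedged sketch of invoking a Claude model through the Amazon Bedrock runtime API with the AWS SDK for JavaScript, rather than pasting code into a public chat UI. The model ID, prompt wording, and request body shape are illustrative assumptions; check the Bedrock documentation for the model you actually use:

```javascript
// Sketch only: generating test cases via the Bedrock API instead of a public
// chat interface. Assumes @aws-sdk/client-bedrock-runtime is installed and
// AWS credentials are configured; model ID and body shape vary by model.
const {
  BedrockRuntimeClient,
  InvokeModelCommand,
} = require('@aws-sdk/client-bedrock-runtime');

const client = new BedrockRuntimeClient({ region: 'us-east-1' });

async function generateTests(codeUnderTest) {
  // Claude's legacy text-completion format on Bedrock uses a Human/Assistant prompt.
  const prompt = `\n\nHuman: Generate Jest test cases in JavaScript for the following function. Include edge cases.\n\n${codeUnderTest}\n\nAssistant:`;

  const response = await client.send(new InvokeModelCommand({
    modelId: 'anthropic.claude-v2', // illustrative model ID
    contentType: 'application/json',
    accept: 'application/json',
    body: JSON.stringify({ prompt, max_tokens_to_sample: 1024 }),
  }));

  // The response body is a byte payload containing JSON with a `completion` field.
  return JSON.parse(new TextDecoder().decode(response.body)).completion;
}
```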

4. Generating JavaScript Tests and Popular Tools

Short description:

When generating JavaScript tests, you can specify extra tokens like input, expected output, and special considerations. The tests can be tailored to different tools, such as Selenium. Popular tools like Amazon Bedrock and Hedgey allow you to generate cases by providing code and instructions. We will also discuss other ready-made tools that help automate your testing.

So that's just a quick caveat to consider. Aside from that, you're just adding extra tokens like input, expected output, and special considerations, as well as whatever constraints or conditions you want. From this, you're able to generate JavaScript tests in an easy way, and they can be tailored to different tools as well.

For example, if you want to test with Selenium, it can do that too; you just have to specify it. This is just a template to show you how to specify and provide the LLM with the proper considerations it needs, as sketched below.
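Here is a plausible reconstruction of the kind of prompt template being described. The field names and every field value are invented for illustration, not taken from the speaker's slides:

```text
Language: JavaScript
Target framework: Selenium WebDriver with Jest
Functionality: login form on the /login page
Input: valid username "demo" with an incorrect password
Expected output: an error banner reading "Invalid credentials"
Special considerations: the form renders asynchronously, so wait for it
before asserting

Generate a suite of test cases for the functionality above.
```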

A couple of popular tools include Amazon Bedrock and Hedgey, which I mentioned before. These are the ones I've been using a lot; I've been experimenting with their APIs and with how they perform. In general, this is one way of generating cases: you put in your code, you put in the instructions, and you generate the cases based on that. But of course, we'll also talk about other ready-made tools that help you automate your tests in an easier fashion. Here's one of them.

5. AI Tools for Testing and Monitoring

Short description:

So, Datadog is a powerful tool with AI capabilities for anomaly detection and continuous monitoring. Its Bits AI feature enables querying anomalies with plain text. RecheckWeb checks for small errors in code and performs visual regression testing. Applitools Eyes is a popular tool for visual regression testing. AI-powered codeless testing allows recording and comparing user interactions for testing purposes.

So, Datadog is one of the tools that has been used over time and has also developed AI capabilities. One of these is performing anomaly detection based on history. With that, you can directly obtain recommendations based on the graph you provide, and it will use previous anomalies to recommend indicators and patterns to watch out for based on that history. It's a really powerful tool to look into, and a great one to adopt as part of a continuous monitoring solution.
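For context, Datadog's query language exposes this history-based detection through an anomalies() function that wraps a metric query. A sketch of such a monitor query, with a made-up metric name:

```text
anomalies(avg:myapp.request.latency{env:production}, 'agile', 2)
```

Here 'agile' names the detection algorithm and 2 sets the width of the expected band in deviations; both values are assumptions you would tune for your own metric.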

One thing worth mentioning as well is their Bits AI feature, a new component that lets you query these kinds of anomalies or data with plain text. You don't need to navigate to a specific time period or a specific part of the graph; you can just ask, "has an anomaly happened in this particular segment?"

Aside from that, RecheckWeb is another big tool that's currently used a lot. One of the things it can do is check for small errors you might have made in your code that affect the rendering. Normally you'd have a problem visualizing broken code, but RecheckWeb points at the broken code while still rendering the page as if the code were intact. In the example here, in a button selector I accidentally typed 'cet' instead of 'cert', and it's able to detect that particular fault and visualize the page appropriately. There's also visual regression testing, and one of the biggest use cases right now is Applitools. They have Applitools Eyes, which performs visual regression to ensure that what you're currently testing is comparable with what it should be, and that helps a lot. You can integrate the Selenium WebDriver with it, and perform both functional and visual testing through the tool. This is an example test case I wrote to work with Applitools Eyes: I'm essentially just calling eyes.open, then eyes.check, and finally closing the session. It's a great way to integrate with Applitools Eyes and use it appropriately for this scenario.
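As a minimal sketch of the eyes.open / eyes.check flow just described, using the @applitools/eyes-selenium SDK; the app name, test name, and URL are placeholders, and an Applitools API key is assumed to be configured:

```javascript
// Minimal sketch of visual testing with Applitools Eyes and Selenium WebDriver.
// Assumes @applitools/eyes-selenium and selenium-webdriver are installed and
// APPLITOOLS_API_KEY is set; the URL and names below are illustrative.
const { Builder } = require('selenium-webdriver');
const { Eyes, Target } = require('@applitools/eyes-selenium');

(async () => {
  const driver = await new Builder().forBrowser('chrome').build();
  const eyes = new Eyes();
  try {
    // Start a visual test session: app name and test name label the baseline.
    await eyes.open(driver, 'My App', 'Home page visual test');
    await driver.get('https://example.com');
    // Capture the full window and compare it against the stored baseline.
    await eyes.check('Home page', Target.window());
    // Close the session; this throws if visual differences were found.
    await eyes.close();
  } finally {
    await driver.quit();
    await eyes.abortIfNotClosed();
  }
})();
```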

Aside from that, we also have AI-powered codeless testing. There's a really great tool I encountered recently, and this is an example of how I used it. I'm testing my website: I'm trying to click a button and navigate from the home page by clicking 'Track my CPEs'. The best thing is you can just record this on screen, and it follows you and records every step you take. You can see it tracks me clicking 'Track my CPEs', clicking 'Activities', clicking 'Add Activity'. I wanted to illustrate that add-activity functionality, and the tool can later use the recording as a test and compare it against the current behavior of your code to ensure it actually works properly. You can run the test on, for example, your staging environment: it spins up a Chrome driver, replays the steps to see whether they work, closes appropriately, and reports a pass if it passes. In this example you can see it set up the page and then run entirely by itself; this is not me driving, it's the tool replaying the recorded steps on its own.
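The recorded flow itself is codeless, but conceptually the tool is replaying something like the following Selenium steps. The URL, link texts, and final selector are hypothetical placeholders for the speaker's site, not the actual application:

```javascript
// Conceptual sketch of what the recorded codeless flow replays: navigate from
// the home page through "Track my CPEs" to adding an activity, then assert
// the end state. All identifiers here are hypothetical placeholders.
const { Builder, By, until } = require('selenium-webdriver');

(async () => {
  const driver = await new Builder().forBrowser('chrome').build();
  try {
    await driver.get('https://staging.example.com');
    await driver.findElement(By.linkText('Track my CPEs')).click();
    await driver.findElement(By.linkText('Activities')).click();
    await driver.findElement(By.linkText('Add Activity')).click();
    // The tool then checks the recorded end state, e.g. the add-activity form.
    await driver.wait(until.elementLocated(By.css('form.add-activity')), 5000);
  } finally {
    await driver.quit();
  }
})();
```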

6. Automating Processes and Expanding Horizon for AI

Short description:

This is by Testim. It's a brief case study where I was looking to automate processes, including anomaly detection and test case generation. Implementing AI tools like Datadog resulted in enhanced quality and reduced test maintenance. The horizon for AI continues to expand with evolving capabilities, addressing future needs in automation, test case generation, and security. Best practices include prioritizing quality data, balancing AI and human insight, implementing version control, fostering AI understanding, establishing instrumental integration, and staying up to date with AI's evolution. Understanding AI's role in the testing revolution and in machine learning is crucial, along with practical application and continuous learning.

And this is by Testim. They're another tool that is really great to look into, and one I've been using as well for this scenario. So here's a brief case study, more or less. I had a scenario where I was looking to automate processes, and I had the problem of automating anomaly detection and generating test cases better, because I wanted to find out how I could do what I was already doing more efficiently, and automate these kinds of processes to keep errors from popping up here and there.

One thing I'm sure a lot of the testers here are quite familiar with: if you leave your backlog of errors or security risks alone for a while, it will quickly grow if you're not keeping up with testing. So being able to automate is very important. As part of the implementation, I integrated AI tools such as Datadog for anomaly detection, and also Testim, into the workflow. What I got was enhanced quality and reduced time needed for test maintenance.

There's definitely going to be a continually expanding horizon for AI. Tomorrow will bring more potential growth areas with evolving capabilities, and it's going to keep developing as we see it right now. We're going to keep anticipating the needs of future JavaScript applications, because there will be a lot of them, and as AI develops it will keep proving its potential to help address future needs, including automation, test case generation, and security. There's just a lot it can do. And of course: automate, automate, automate, because if you can, why not? It makes your life easier as a tester and as a developer.

Some best practices and patterns: prioritize quality data, balance AI and human insight, implement version control, foster AI understanding, establish instrumental integration, keep abreast of AI's evolution, manage your test data, and monitor and analyze continuously. And some key takeaways: it's important to understand that AI is very much a testing revolution, and that machine learning plays a pivotal role. Understand how to integrate tools appropriately, as I mentioned before with the four examples I gave, not excluding ChatGPT and Amazon Bedrock, which you can definitely use as tools as well. Practical application is very important; understand the future outlook, and see this as empowerment through knowledge, because it's a continuous learning process. And that is all from me. Thank you again so much, everyone, for listening to this session, and I'm happy to take any questions afterwards.

Check out more articles and videos

We constantly publish articles and videos that might spark your interest, skill you up, or help you build a stellar career.

Network Requests with Cypress
TestJS Summit 2021
33 min
Top Content
Cecilia Martinez, a technical account manager at Cypress, discusses network requests in Cypress and demonstrates commands like cy.request and cy.intercept. She also explains dynamic matching and aliasing, network stubbing, and the pros and cons of using real server responses versus stubbing. The talk covers logging request responses, testing front-end and backend APIs, handling list length and DOM traversal, lazy loading, and provides resources for beginners to learn Cypress.
Testing Pyramid Makes Little Sense, What We Can Use Instead
TestJS Summit 2021
38 min
Top Content
Featured Video
Gleb Bahmutov
Roman Sandler
2 authors
The testing pyramid - the canonical shape of tests that defined what types of tests we need to write to make sure the app works - is ... obsolete. In this presentation, Roman Sandler and Gleb Bahmutov argue which testing shape works better for today's web applications.
Building a Voice-Enabled AI Assistant With Javascript
JSNation 2023
21 min
Top Content
This Talk discusses building a voice-activated AI assistant using web APIs and JavaScript. It covers using the Web Speech API for speech recognition and the speech synthesis API for text to speech. The speaker demonstrates how to communicate with the Open AI API and handle the response. The Talk also explores enabling speech recognition and addressing the user. The speaker concludes by mentioning the possibility of creating a product out of the project and using Tauri for native desktop-like experiences.
The AI-Assisted Developer Workflow: Build Faster and Smarter Today
JSNation US 2024
31 min
Top Content
AI is transforming software engineering by using agents to help with coding. Agents can autonomously complete tasks and make decisions based on data. Collaborative AI and automation are opening new possibilities in code generation. Bolt is a powerful tool for troubleshooting, bug fixing, and authentication. Code generation tools like Copilot and Cursor provide support for selecting models and codebase awareness. Cline is a useful extension for website inspection and testing. Guidelines for coding with agents include defining requirements, choosing the right model, and frequent testing. Clear and concise instructions are crucial in AI-generated code. Experienced engineers are still necessary in understanding architecture and problem-solving. Energy consumption insights and sustainability are discussed in the Talk.
Full-Circle Testing With Cypress
TestJS Summit 2022
27 min
Top Content
Cypress is a powerful tool for end-to-end testing and API testing. It provides instant feedback on test errors and allows tests to be run inside the browser. Cypress enables testing at both the application and network layers, making it easier to reach different edge cases. With features like AppActions and component testing, Cypress allows for comprehensive testing of individual components and the entire application. Join the workshops to learn more about full circle testing with Cypress.
Test Effective Development
TestJS Summit 2021
31 min
Top Content
This Talk introduces Test Effective Development, a new approach to testing that aims to make companies more cost-effective. The speaker shares their personal journey of improving code quality and reducing bugs through smarter testing strategies. They discuss the importance of finding a balance between testing confidence and efficiency and introduce the concepts of isolated and integrated testing. The speaker also suggests different testing strategies based on the size of the application and emphasizes the need to choose cost-effective testing approaches based on the specific project requirements.

Workshops on related topics

Designing Effective Tests With React Testing Library
React Summit 2023
151 min
Top Content
Featured Workshop
Josh Justice
React Testing Library is a great framework for React component tests because there are a lot of questions it answers for you, so you don’t need to worry about those questions. But that doesn’t mean testing is easy. There are still a lot of questions you have to figure out for yourself: How many component tests should you write vs end-to-end tests or lower-level unit tests? How can you test a certain line of code that is tricky to test? And what in the world are you supposed to do about that persistent act() warning?
In this three-hour workshop we’ll introduce React Testing Library along with a mental model for how to think about designing your component tests. This mental model will help you see how to test each bit of logic, whether or not to mock dependencies, and will help improve the design of your components. You’ll walk away with the tools, techniques, and principles you need to implement low-cost, high-value component tests.
Table of contents:
- The different kinds of React application tests, and where component tests fit in
- A mental model for thinking about the inputs and outputs of the components you test
- Options for selecting DOM elements to verify and interact with them
- The value of mocks and why they shouldn’t be avoided
- The challenges with asynchrony in RTL tests and how to handle them
Prerequisites:
- Familiarity with building applications with React
- Basic experience writing automated tests with Jest or another unit testing framework
- You do not need any experience with React Testing Library
- Machine setup: Node LTS, Yarn
AI on Demand: Serverless AI
DevOps.js Conf 2024
163 min
Top Content
Featured Workshop (Free)
Nathan Disidore
In this workshop, we discuss the merits of serverless architecture and how it can be applied to the AI space. We'll explore options around building serverless RAG applications for a more lambda-esque approach to AI. Next, we'll get hands on and build a sample CRUD app that allows you to store information and query it using an LLM with Workers AI, Vectorize, D1, and Cloudflare Workers.
AI for React Developers
React Advanced 2024
142 min
Top Content
Featured Workshop
Eve Porcello
Knowledge of AI tooling is critical for future-proofing the careers of React developers, and the Vercel suite of AI tools is an approachable on-ramp. In this course, we’ll take a closer look at the Vercel AI SDK and how this can help React developers build streaming interfaces with JavaScript and Next.js. We’ll also incorporate additional 3rd party APIs to build and deploy a music visualization app.
Topics:
- Creating a React Project with Next.js
- Choosing a LLM
- Customizing Streaming Interfaces
- Building Routes
- Creating and Generating Components
- Using Hooks (useChat, useCompletion, useActions, etc)
Vibe coding with Cline
JSNation 2025
64 min
Featured Workshop
Nik Pash
The way we write code is fundamentally changing. Instead of getting stuck in nested loops and implementation details, imagine focusing purely on architecture and creative problem-solving while your AI pair programmer handles the execution. In this hands-on workshop, I'll show you how to leverage Cline (an autonomous coding agent that recently hit 1M VS Code downloads) to dramatically accelerate your development workflow through a practice we call "vibe coding" - where humans focus on high-level thinking and AI handles the implementation.
You'll discover:
- The fundamental principles of "vibe coding" and how it differs from traditional development
- How to architect solutions at a high level and have AI implement them accurately
- Live demo: Building a production-grade caching system in Go that saved us $500/week
- Techniques for using AI to understand complex codebases in minutes instead of hours
- Best practices for prompting AI agents to get exactly the code you want
- Common pitfalls to avoid when working with AI coding assistants
- Strategies for using AI to accelerate learning and reduce dependency on senior engineers
- How to effectively combine human creativity with AI implementation capabilities
Whether you're a junior developer looking to accelerate your learning or a senior engineer wanting to optimize your workflow, you'll leave this workshop with practical experience in AI-assisted development that you can immediately apply to your projects. Through live coding demos and hands-on exercises, you'll learn how to leverage Cline to write better code faster while focusing on what matters - solving real problems.
Building Full Stack Apps With Cursor
JSNation 2025
46 min
Featured Workshop
Mike Mikula
In this workshop I’ll cover a repeatable process on how to spin up full stack apps in Cursor. Expect to understand techniques such as using GPT to create product requirements, database schemas, roadmaps and using those in notes to generate checklists to guide app development. We will dive further in on how to fix hallucinations/errors that occur, useful prompts to make your app look and feel modern, approaches to get every layer wired up and more! By the end expect to be able to run your own AI generated full stack app on your machine!
Detox 101: How to write stable end-to-end tests for your React Native application
React Summit 2022
117 min
Top Content
Workshop
Yevheniia Hlovatska
Compared to unit testing, end-to-end testing aims to interact with your application just like a real user. And as we all know it can be pretty challenging. Especially when we talk about Mobile applications.
Tests rely on many conditions and are considered to be slow and flaky. On the other hand - end-to-end tests can give the greatest confidence that your app is working. And if done right - can become an amazing tool for boosting developer velocity.
Detox is a gray-box end-to-end testing framework for mobile apps. Developed by Wix to solve the problem of slowness and flakiness and used by React Native itself as its E2E testing tool.
Join me on this workshop to learn how to make your mobile end-to-end tests with Detox rock.
Prerequisites:
- iOS/Android: MacOS Catalina or newer
- Android only: Linux
- Install before the workshop