Video Summary and Transcription
The speaker opens with a story from their time as a junior developer working with a client that had strict technical standards: the client required high performance for its web and mobile apps, but had no measurable way to assess it. To address this, the speaker built Flashlight, a Lighthouse-like tool that measures mobile app performance. Flashlight produces a performance score and can be attached to end-to-end tests that drive the app.

The speaker's dream is an easy way to assess an app's performance without writing extensive tests, and their proposal is to use AI to automate the tedious work of exploring an app. The repository for this exploration tool is open source and is shared later in the talk. The tool is driven by commands such as logging in or performing a search, which the AI issues as it navigates the app.

During exploration, the AI helps surface performance issues, suggests using React DevTools for debugging, and recommends replacing FlatList with FlashList in React Native. It can recover and continue after errors, and its progress toward a goal can be kept in check by putting a human, or another AI agent acting as a manager, into the loop.

AI in end-to-end testing is gaining popularity, with several companies launching AI-based testing tools. The goal is to automate app exploration with AI and run quality checks for performance, accessibility, and security, freeing QA engineers from repetitive tasks so they can focus on more important work. The cost of running one test case with the GPT-4o mini model is usually under one cent.

AI-driven testing with Flashlight is still experimental, but the non-AI part is reliable. Planned improvements include running on CI and making it easy to iterate across multiple projects.
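The summary does not name the exploration tool's API, so the following TypeScript sketch is purely illustrative of the idea: an LLM repeatedly picks a high-level command such as "log in" or "search", an e2e driver executes it, and the result is fed back to the model. The `Driver` interface, the `Command` type, and the loop itself are hypothetical stand-ins, not the speaker's actual code; the only real API used is the OpenAI chat completions endpoint.

```ts
// Illustrative only: a minimal "explore the app with an LLM" loop.
// Driver and Command are invented for this sketch; plug in your own e2e backend.

type Command =
  | { kind: "login"; username: string; password: string }
  | { kind: "search"; query: string }
  | { kind: "done"; summary: string };

interface Driver {
  // Executes a command against the running app (e.g. via an e2e framework)
  // and returns a textual description of the resulting screen.
  run(command: Command): Promise<string>;
}

// Asks the model for the next command, given everything seen so far.
// Assumes the model replies with JSON matching the Command type.
async function askModel(history: string[]): Promise<Command> {
  const response = await fetch("https://api.openai.com/v1/chat/completions", {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      Authorization: `Bearer ${process.env.OPENAI_API_KEY}`,
    },
    body: JSON.stringify({
      model: "gpt-4o-mini",
      messages: [
        {
          role: "system",
          content:
            "You explore a mobile app. Reply with JSON only: " +
            '{"kind":"login","username":"...","password":"..."} or ' +
            '{"kind":"search","query":"..."} or {"kind":"done","summary":"..."}',
        },
        { role: "user", content: history.join("\n") },
      ],
    }),
  });
  const data = await response.json();
  return JSON.parse(data.choices[0].message.content) as Command;
}

async function exploreApp(driver: Driver, maxSteps = 20): Promise<void> {
  const history: string[] = ["App launched on the home screen."];

  for (let step = 0; step < maxSteps; step++) {
    const command = await askModel(history);

    if (command.kind === "done") {
      console.log("Exploration finished:", command.summary);
      return;
    }

    try {
      const screen = await driver.run(command);
      history.push(`After ${command.kind}: ${screen}`);
    } catch (error) {
      // As in the talk: the agent can keep going after an error
      // instead of aborting the whole exploration.
      history.push(`Command ${command.kind} failed: ${String(error)}`);
    }
  }
}
```

A human or a second "manager" agent, as mentioned in the talk, would slot into this loop as an extra check on each proposed command or on the final summary before the run is accepted.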
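The FlatList-to-FlashList recommendation refers to Shopify's @shopify/flash-list package for React Native. A minimal before/after sketch (the component and data names are invented for illustration):

```tsx
import React from "react";
import { FlatList, Text } from "react-native";
import { FlashList } from "@shopify/flash-list";

type Item = { id: string; title: string };
const items: Item[] = [
  { id: "1", title: "Hello" },
  { id: "2", title: "World" },
];

// Before: React Native's built-in FlatList.
export function SlowList() {
  return (
    <FlatList
      data={items}
      keyExtractor={(item) => item.id}
      renderItem={({ item }) => <Text>{item.title}</Text>}
    />
  );
}

// After: FlashList recycles views and typically scrolls more smoothly on long lists.
// estimatedItemSize (an approximate row height in points) helps it lay items out up front.
export function FastList() {
  return (
    <FlashList
      data={items}
      keyExtractor={(item) => item.id}
      renderItem={({ item }) => <Text>{item.title}</Text>}
      estimatedItemSize={40}
    />
  );
}
```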
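The "less than one cent per test case" figure is easy to sanity-check. Assuming GPT-4o mini's launch pricing (roughly $0.15 per million input tokens and $0.60 per million output tokens, which may have changed since) and an invented budget of about 30k input and 2k output tokens for one exploration:

```ts
// Back-of-the-envelope estimate; pricing and token counts are assumptions, not measurements.
const inputPricePerToken = 0.15 / 1_000_000;  // USD per input token (GPT-4o mini launch pricing)
const outputPricePerToken = 0.60 / 1_000_000; // USD per output token (GPT-4o mini launch pricing)

const inputTokens = 30_000; // screen descriptions and instructions sent over one test case
const outputTokens = 2_000; // commands returned by the model

const costUsd = inputTokens * inputPricePerToken + outputTokens * outputPricePerToken;
console.log(`~$${costUsd.toFixed(4)} per test case`); // ~$0.0057, i.e. about half a cent
```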