JavaScript Beats Cancer


Skin cancer is a serious problem worldwide, but luckily, treatment at an early stage can lead to recovery. JavaScript, together with a machine learning model, can help medical doctors improve the accuracy of melanoma detection. During the presentation, we show how to use TensorFlow.js, Keras, and React Native to build a solution that can recognize skin moles and detect whether they are melanoma or benign. We also show issues that we faced during development. Finally, we summarize the pros and cons of using JavaScript for machine learning projects.

This talk was presented at JSNation 2022.

FAQ

Karol Przystalski is an expert with 15 years of experience in machine learning, particularly in medical imaging and dermatoscopy. He holds a PhD in artificial intelligence and has founded a company that provides data science and machine learning solutions.

Karol Przystalski has 15 years of experience in machine learning, specifically in applications related to medical imaging and dermatoscopy. He has also published research papers on the topic.

JavaScript is used in skin cancer analysis primarily for building mobile and web applications that utilize machine learning models trained in Python. TensorFlow.js is used to load, use, and retrain these models on mobile devices.

Karol Przystalski chose JavaScript for its robustness, ease of use in mobile app development, and the ability to leverage libraries like TensorFlow.js for deploying machine learning models on mobile devices.

The highest risk group for skin cancer includes individuals with phototype I, characterized by blond hair, blue eyes, and skin that turns red rather than brown when exposed to the sun.

Karol Przystalski used tools and libraries such as TensorFlow.js, React Native, Docker, JupyterLab, and various JavaScript libraries for data manipulation and visualization.

The app works by using a dermatoscope attached to a mobile phone to capture high-quality images of skin moles. These images are then analyzed using machine learning models to detect patterns indicative of skin cancer.

Data sets like the one available on ISICarchive.com, which contains around 26,000 images of skin moles, are available for skin cancer research.

Challenges include a lack of robust machine learning libraries compared to Python, limited support for machine learning DevOps, and a smaller community focused on machine learning topics.

Karol Przystalski's research papers can be found on Google Scholar, and his code samples are available on his GitHub repository.

Karol Przystalski
25 min
20 Jun, 2022

Video Summary and Transcription
This Talk discusses using JavaScript to combat skin cancer, with a focus on machine learning integration. The speaker has experience in medical imaging and has partnered with dermatoscopy companies to develop hardware. JavaScript libraries like TensorFlow.js and Pandas.js are used for model deployment and data analysis. The Talk also covers building neural networks, analyzing skin cancer using scoring methods and image processing techniques, and extracting asymmetry in skin images using Python and JavaScript.
Available in Español: JavaScript Vence al Cáncer

1. Introduction to JavaScript and Skin Cancer

Short description:

Hi, my name is Karol Przystalski and I will tell you more today about how to use JavaScript to beat skin cancer. I have 15 years of experience in machine learning and specifically in medical imaging. I decided to cover this topic and build solutions in this area because of the increasing importance of skin cancer, especially in countries like Germany, Scandinavia, the US, and Australia. I have also partnered with dermatoscopy companies to develop hardware, such as the dermatoscope, which is used by dermatologists. My solution combines the dermatoscope with special lenses and light to capture high-quality images of skin moles.

Hi, my name is Karol Przystalski, and today I will tell you more about how to use JavaScript to beat skin cancer. I have about 15 years of experience in machine learning. My background is in computer science: I did a PhD in artificial intelligence, on how to use it in medical imaging and dermatoscopy.

You can find some of my research papers on this topic on Google Scholar, for example. Here is one of the articles I have published, around five years ago, about analyzing skin cancer in multispectral images. In that case I used Python, but because JavaScript has become more and more popular in recent years, including for this kind of topic, I decided to prepare this presentation and also build an app for skin cancer analysis.

My background is not only scientific. In 2010, 12 years ago, I founded a service company that works for Fortune 500 companies, building data science and machine learning solutions, among other things. Before that, I also did other commercial work, for example at IBM. So, as I said, I have 15 years of experience in machine learning, specifically in applications in medical imaging.

So why did I decide to cover this topic and build solutions in this area? As you can see, I am not really in the risk group when it comes to skin cancer. The highest-risk group is people with phototype I: blond hair and blue eyes, with skin that does not turn brown when exposed to the sun but instead goes red. The risk of getting skin cancer in this group is high.

So the darker the skin is, and the less it reacts to the sun, the lower the probability of getting skin cancer. There are six phototypes of skin; I am more or less in the third group, because of my hair color, eye color, and so on. That is why the problem is biggest in countries like Germany, Scandinavia and the Nordic countries, the US, and especially Australia, where it is becoming more and more important.

In the meantime, I have also partnered with some dermatoscopy companies, that is, companies that develop the hardware. As you can see, here is one of the devices: a dermatoscope. It is a device used by dermatologists. In this case, I have attached an iPhone to the front, because this is an extension: a typical dermatoscope does not come with a mobile phone; it is a standalone device. Some dermatologists use this kind of extension case just to take pictures more easily. It is quite small, so you can carry it in your pocket and even visit a patient to look at a mole like this. This is how my solution is used: combined with special lenses and special light to get the best possible image of the skin mole.

When it comes to the data set: any machine learning model should be fed with data. When I started my research, I had 53 images or fewer. As you can imagine, that is not a big enough data set for any kind of research. So I met with almost every company, public or private, that does anything with dermatology in the city where I live, Krakow, in Poland. Most of them declined to collaborate on building models.

2. Machine Learning and JavaScript Integration

Short description:

Machine learning became a buzzword and the hype around AI has dramatically increased. Obtaining data sets for research has become easier, allowing the development of algorithms for various skin illnesses. Code samples and a Docker image with JavaScript libraries are available for download. The architecture involves combining machine learning with JavaScript to build a mobile app.

That was in 2007 or 2008, and the way people thought about machine learning then was totally different from now. Machine learning and AI became buzzwords, and everyone wanted to do AI. In the past, I mean 15 years ago, when I said 'AI', most people said: oh, no, thank you, I'm not interested. Now it is the total opposite: I need to explain to people why not to use machine learning, rather than why to use it. So it changed dramatically, and the COVID pandemic increased the hype around AI even more.

So when I reached out to the companies, I obtained a data set of around 5,000 images. Now you can easily download a data set of about 26,000 images of skin moles. It is available on the ISICarchive.com website, and you can use it for your research. So now it is even easier to develop algorithms for different kinds of skin illnesses, not only cancer, which is actually not the most common skin illness.

For all of you who want to use the code samples I have prepared, you can always download my Docker image, which contains JupyterLab and JupyterHub together with some kernels for JavaScript, as well as some JavaScript libraries. It is a bit old, because I have been doing this for many years already; I may update it soon, but it still works. You can easily use it with the notebooks I will show you next. So, the architecture: how did I combine machine learning with JavaScript? Because of this device, I decided to use one of the JavaScript solutions to build a mobile app, because mobile devices change every year.

3. JavaScript Integration and Model Deployment

Short description:

I decided to use JavaScript instead of building a native application. For the machine learning part, I used TensorFlow.js because it's the most robust library when it comes to JavaScript. The model is trained in Python, but it is used with JavaScript. JavaScript is used to load, use, retrain, and deploy the model on mobile phones. It is also used together with Kubeflow and TensorFlow servers. There is a web app that connects to the mobile app for retraining and finding similar models. A JavaScript library called Pandas.js is available, which is a fork of the original Pandas library from Python.

As you can see, this is an iPhone 6S, so quite old, and I will probably need to change it soon to a newer one. Still, I am able to use this phone to take good-quality pictures, because the quality is here, in the lens, not in the phone. That is also why I decided to use JavaScript instead of building a native application.

In the past, I had to use different kinds of solutions, starting with Cordova and PhoneGap; now I am working with React Native. For the machine learning part, I used TensorFlow.js. You might ask: why TensorFlow.js, and not, say, Keras or Torch? There is one reason: it is the most robust machine learning library for JavaScript. As for why I chose JavaScript in this case, it is not that I said: OK, let's do everything in JavaScript. That is not true.

The truth is that the model is trained in Python but used with JavaScript. I do not use TensorFlow.js for the training and, to be honest, I do not know anyone who does. Maybe that is because I have been in the data science field for a long time and know many people in this area: they mostly work in Python, or sometimes Scala, especially when it is related to big data. In this specific case, I use JavaScript exactly because TensorFlow.js makes it possible to load the model, use it, retrain it, and run it on a mobile phone. In production, it is used together with Kubeflow and TensorFlow servers. There are two models: the one you see on the left can be combined with the main app, and that is what I did here on the phone. In many cases there is also a service that tries to find similar moles, similar lesions, and that is a web app; the mobile app connects to it and also uses it for retraining.
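As a rough sketch of this deployment flow: in a real app the model would come from TensorFlow.js's `tf.loadLayersModel(url)`; here the loaded model is passed in as a parameter so the app-side classification step can be shown in isolation. The two-class output layout `[benign, melanoma]`, the 8-bit pixel scaling, and all names are illustrative assumptions, not the talk's actual code.

```javascript
// Sketch: run a Python-trained, TF.js-loaded model on one image (assumed API).
// In production: const model = await tf.loadLayersModel('<model-url>/model.json');
async function classifyMole(model, imagePixels) {
  // Scale 8-bit pixel intensities to [0, 1], matching the assumed training setup.
  const input = imagePixels.map((p) => p / 255);
  // The model is assumed to output two class scores: [benign, melanoma].
  const [benign, melanoma] = await model.predict(input);
  return melanoma > benign ? 'melanoma' : 'benign';
}

// Stand-in for a loaded TensorFlow.js model, for demonstration only.
const stubModel = { predict: async () => [0.2, 0.8] };
classifyMole(stubModel, [34, 120, 210]).then((label) => console.log(label)); // prints "melanoma"
```

Passing the model in also keeps the surrounding app code testable without a network or a real model file.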

What does it look like? Let me briefly move to some examples. Here we go. I prepared part of this in the past for another event, a hackathon for Ukraine, and I will use part of that notebook here. You can easily find it on my company's GitHub account, in a few repositories about machine learning, AI, and JavaScript. When you start doing any kind of research related to data in Python, probably the first library you think about is Pandas. There is a port of this library in JavaScript called, obviously, Pandas.js. To some degree it is very similar to the original Pandas from Python, but it is quite limited compared to the original.
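Since the notebook itself is not shown here, this is a plain-JavaScript sketch of the kind of Series/DataFrame operations these libraries provide: column selection, a simple statistic, and export to JSON. The data and field names are made up for illustration.

```javascript
// A "DataFrame" as an array of row objects (toy data, invented fields).
const rows = [
  { mole: 'a', diameterMm: 3.1, asymmetric: false },
  { mole: 'b', diameterMm: 6.4, asymmetric: true },
  { mole: 'c', diameterMm: 5.0, asymmetric: true },
];

// Select one "column" as a series, like df.get('diameterMm').
const diameters = rows.map((r) => r.diameterMm);

// A basic aggregate, like Series.mean() in Pandas.js.
const mean = diameters.reduce((a, b) => a + b, 0) / diameters.length;

// Export to JSON, like DataFrame.to_json().
const json = JSON.stringify(rows);

console.log(mean.toFixed(1)); // prints "4.8"
```

The library versions add indexing, joins, and more statistics on top of this, but the core manipulation is the same.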

4. JavaScript Libraries and TensorFlow

Short description:

Many advanced features from the original Pandas are not yet implemented in JavaScript. However, there are several other libraries available for data analysis and manipulation, such as DataFrame.js, Reclaim, and DataForge. JavaScript also offers better visualization libraries compared to Python, including MATLAB and Seaborn. When it comes to machine learning, Scikit-Learn is the popular choice in Python, but in JavaScript, there is no comprehensive library like Scikit-Learn. TensorFlow, on the other hand, provides easy implementation of linear regression and offers various APIs in different programming languages.

Still, many features are not yet implemented. So to some degree you can use it in JavaScript, but it lacks many of the advanced features of the original Pandas, especially the more statistical ones.

Other libraries you can use in JavaScript for data analysis and data manipulation include DataFrame.js, Reclaim, Data-Forge, and so on. There are plenty of such libraries; these are just a few examples of how to work with series and with DataFrames. That is everyday work for people who deal with data; for JavaScript engineers it is less typical, but the data is still easy to manipulate and easy to export, to JSON for example.

This is just a notebook showing a few examples of how to use that. When it comes to visualization, in my opinion, JavaScript does a better job than Python. Python has libraries like Matplotlib, Seaborn, and some others, but when you compare them with what is available in JavaScript, I think JavaScript has a huge advantage: there are very good libraries for visualization and for drawing charts, and in many cases they simply do a better job than the Python ones. So that is one point for JavaScript.

Then there is Scikit-Learn, the most popular Python library for machine learning with shallow methods. TensorFlow is for building neural networks, while Scikit-Learn is more about the shallow methods, and most of the cases you deal with in machine learning can, and should, be solved using shallow methods. Scikit-Learn is the first library you would reach for in Python, and it is easy to use. In JavaScript, you have JS.kit.learn and a fork of Scikit-Learn, but they have not been updated in recent years. It looks like someone started something and then dropped the maintenance, so there is no really good library to use here. There are plenty of libraries that implement one specific shallow method, like SVM, KNN, and so on, but if you want everything in one place, as it is in Python, then unfortunately no such library exists in JavaScript.
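To illustrate what a "shallow method" looks like, here is a minimal k-nearest-neighbours classifier in plain JavaScript. The toy points and labels are invented and unrelated to the real dermatoscopy data.

```javascript
// Minimal KNN: classify x by majority vote among its k closest training points.
function knnPredict(trainX, trainY, x, k = 3) {
  const dist = (a, b) =>
    Math.sqrt(a.reduce((s, ai, i) => s + (ai - b[i]) ** 2, 0));
  // Rank training points by Euclidean distance to x and keep the k nearest.
  const neighbours = trainX
    .map((xi, i) => ({ d: dist(xi, x), label: trainY[i] }))
    .sort((p, q) => p.d - q.d)
    .slice(0, k);
  // Majority vote among the neighbours' labels.
  const votes = {};
  for (const n of neighbours) votes[n.label] = (votes[n.label] || 0) + 1;
  return Object.keys(votes).reduce((a, b) => (votes[a] >= votes[b] ? a : b));
}

// Toy example: two well-separated clusters.
const X = [[0, 0], [0, 1], [1, 0], [5, 5], [5, 6], [6, 5]];
const y = ['benign', 'benign', 'benign', 'melanoma', 'melanoma', 'melanoma'];
console.log(knnPredict(X, y, [5.5, 5.5])); // prints "melanoma"
```

Libraries like ml-knn package exactly this kind of method; the point is that shallow methods are small enough to hand-roll when no Scikit-Learn equivalent exists.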

When it comes to TensorFlow, here is a very easy example of how you can build a linear regression: how to import it, how to use it, and how to implement a linear regression. Here is also an image from the TensorFlow.js documentation. There are many kinds of APIs: the most popular one is written in Python, while the backend is written in C++, and there are also other bindings like Java, Go, and JavaScript. So JavaScript is not special here; it is just another binding onto the core of TensorFlow, really.

5. Building Neural Networks and Analyzing Skin Cancer

Short description:

Here's an example of how to build a loss function, combine it, use it, import it, and train it. In Python, we typically use cameras to build neural networks. JavaScript has Keras JS, but it's not as developed as the Python version. When it comes to skin cancer, doctors use scoring methods like ABCD and check for patterns like asymmetry, border sharpness, and number of colors. Using different wavelengths of light, such as infrared and UV, can provide more detailed information. Image processing techniques like binarization and fractal analysis can also be used.

Here again is an example of how to build a loss function, combine everything together, use it, import it, and train it. When it comes to building neural networks, how do we actually do it in Python? Mostly with Keras, or PyTorch. You can easily compose the network, building it from blocks: you have layers, you connect the layers together, and you can quickly build a very big network. In JavaScript, you have Keras.js, but to be honest, it is far, far behind the one developed for Python, unfortunately. Okay, that is one thing.

Let's move back to the slides. When it comes to skin cancer, you can see three images here, and only two of them are actually cancers: the one on the left and the one on the right. On the left, you can see some patterns: the dots here and the stripes, some visible vessels, and all of it is not very symmetric, although the borders are quite smooth. On the right, you have a pattern called the blue-white veil: you can see whitish-blue colors here, which means the cancer is going deeper into the skin and trying to reach the vessels. That is bad for the patient, but it is also a pattern that tells us: this is really bad. In the middle, you have an example of a suspicious mole that was not confirmed to be cancer; all of these cases are confirmed by pathology to be cancer or not. So how do medical doctors do this? They use scoring methods like ABCD, the seven-point checklist and seven-point score, the three-point checklist, and so on. They use parameters like asymmetry, border sharpness, and the number of colors; in ABCD there are six colors that actually count, and they check how many of them are present. When you combine it all together and write down the different patterns one by one, you can easily count more than 30 patterns that medical doctors can find in the image, or directly through the loupe. What you can also do, and what I did in my research, is use different wavelengths of light to capture not only what you see with visible light but also what is deeper in the skin, using, for example, infrared light.
In total, I used four different wavelengths of light; here you can see infrared and UV, used to make the melanin and the vessels more visible. That is how you can do better research, because you have more details, more information. You can also do some image processing, like binarization in this case, to find out that the border here is not smooth; you can use, for example, fractal methods to analyze that.
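The binarization step mentioned here can be sketched in plain JavaScript as a simple threshold. The fixed threshold of 128 is an illustrative assumption; real pipelines often pick it adaptively, for example with Otsu's method.

```javascript
// Sketch: threshold a grayscale image into a binary lesion mask.
// gray: flat array of 8-bit intensities, row-major, length width * height.
function binarize(gray, width, height, threshold = 128) {
  const mask = new Uint8Array(width * height);
  for (let i = 0; i < gray.length; i++) {
    // Dark pixels (the lesion) become 1, bright skin becomes 0.
    mask[i] = gray[i] < threshold ? 1 : 0;
  }
  return mask;
}

const img = [200, 90, 210, 40]; // 2x2 toy "image"
console.log(Array.from(binarize(img, 2, 2))); // [0, 1, 0, 1]
```

The resulting mask is what border-smoothness and fractal analyses would then operate on.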

6. Extracting Asymmetry in Skin Images

Short description:

In this demo, I will show you an example of how to extract asymmetry in skin images using Python and JavaScript. By dividing the picture into regions and calculating the symmetry of opposite sides of the blocks, we can easily count the asymmetry. However, some patterns require more sophisticated models based on neural networks.

All right, another demo, because I would like to show you what kinds of methods you can use. Here is one of the examples; you can find more by going to my GitHub repository. Here is a way to extract the asymmetry: first of all, extract the region, so we can take this part, this region. I use the ISIC data set here, as you can see. The next step is to calculate the asymmetry; in this case, as you can see, it is Python, because I developed the model in Python and then exported it to be used in JavaScript. You divide the picture into regions and calculate the symmetry of the opposite sides of the blocks. That is how you can easily count the asymmetry, and that is how you can do it for many parameters: some basic image processing allows us to find some of the patterns. Some of them are more difficult, and you need to build sophisticated models based on neural networks. Please feel free to learn more about that. But what is important, because I mentioned neural networks as well as shallow models: to give you a better understanding, there are black-box and white-box models. The black boxes are the neural networks; in most cases, you can see here a very short network with three layers. It is easily trained, with an accuracy of 98 percent, very high. But if you actually print the weights of just one layer, it looks like this. If you try to explain each of these numbers, what it means, it is very hard, even impossible. There are explainability methods for interpreting the numbers in the weights of a neural network, but it is more complex.
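The region-based asymmetry idea can be sketched like this in plain JavaScript: split a binary lesion mask about its vertical axis and count pixels that have no mirrored counterpart. This mirror-count score is a simplified stand-in for the talk's actual method, which compares opposite sides of blocks.

```javascript
// Sketch: score horizontal asymmetry of a binary mask (0 = symmetric, 1 = fully asymmetric).
// mask: flat 0/1 array, row-major, length width * height.
function horizontalAsymmetry(mask, width, height) {
  let mismatched = 0;
  let lesion = 0;
  for (let y = 0; y < height; y++) {
    for (let x = 0; x < Math.floor(width / 2); x++) {
      const left = mask[y * width + x];
      const right = mask[y * width + (width - 1 - x)]; // mirrored pixel
      if (left || right) lesion++; // pair touches the lesion
      if (left !== right) mismatched++; // no mirrored counterpart
    }
  }
  return lesion === 0 ? 0 : mismatched / lesion;
}

// A 4x2 mask that is symmetric about its vertical axis scores 0.
console.log(horizontalAsymmetry([1, 1, 1, 1, 0, 1, 1, 0], 4, 2)); // 0
```

Repeating the same comparison for the horizontal axis, or per block, gives the multi-region score described above.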

Check out more articles and videos

We constantly curate articles and videos that might spark your interest, skill you up, or help you build a stellar career.

Building Brain-controlled Interfaces in JavaScript
JSNation Live 2021
27 min
Building Brain-controlled Interfaces in JavaScript
Top Content
Learn how to build brain-controlled interfaces using JavaScript and brain sensors. Understand the functions of different parts of the brain and how they relate to sensor placement. Explore examples of calm and focus detection, as well as the Kinesis API for mental commands. Discover the applications of brain-controlled interfaces, such as scrolling web pages and password-less authentication. Understand the limits and opportunities of brain control and the potential for using brain sensors in medical applications.
Crafting the Impossible: X86 Virtualization in the Browser with WebAssembly
JSNation 2022
21 min
Crafting the Impossible: X86 Virtualization in the Browser with WebAssembly
Top Content
CheerpX is a technology to securely run binary code in the browser, written in C++ and compiled to WebAssembly and JavaScript. It can run a fully virtualized system in the browser, including Bash and other languages like Python and JavaScript. CheerpX aims for scalability and the ability to work with large code bases, supporting multiprocessing and multithreading. It uses a two-tiered execution engine with an interpreter and a JIT engine. Future plans include running the full X.Org server in the browser and implementing Windows system calls. WebVM, the underlying technology, has a virtual file system backed by Cloudflare.
TensorFlow.js 101: ML in the Browser and Beyond
ML conf EU 2020
41 min
TensorFlow.js 101: ML in the Browser and Beyond
TensorFlow.js enables machine learning in the browser and beyond, with features like face mesh, body segmentation, and pose estimation. It offers JavaScript prototyping and transfer learning capabilities, as well as the ability to recognize custom objects using the Image Project feature. TensorFlow.js can be used with Cloud AutoML for training custom vision models and provides performance benefits in both JavaScript and Python development. It offers interactivity, reach, scale, and performance, and encourages community engagement and collaboration between the JavaScript and machine learning communities.
Raising the Bar: Our Journey Making React Native a Preferred Choice
React Advanced 2023
29 min
Raising the Bar: Our Journey Making React Native a Preferred Choice
Watch video: Raising the Bar: Our Journey Making React Native a Preferred Choice
This Talk discusses React Native at Microsoft and the efforts to improve code integration, developer experience, and leadership goals. The goal is to extend React Native to any app, utilize web code, and increase developer velocity. Implementing web APIs for React Native is being explored, as well as collaboration with Meta. The ultimate aim is to turn web code into universal code and enable developers to write code once and have it work on all platforms.
Opensource Documentation—Tales from React and React Native
React Finland 2021
27 min
Opensource Documentation—Tales from React and React Native
Documentation is often your community's first point of contact with your project and their daily companion at work. So why is documentation the last thing that gets done, and how can we do it better? This talk shares how important documentation is for React and React Native and how you can invest in or contribute to your favourite project's docs to build a thriving community.
Bringing React Server Components to React Native
React Day Berlin 2023
29 min
Bringing React Server Components to React Native
Top Content
Watch video: Bringing React Server Components to React Native
React Server Components (RSC) offer a more accessible approach within the React model, addressing challenges like big initial bundle size and unnecessary data over the network. RSC can benefit React Native development by adding a new server layer and enabling faster requests. They also allow for faster publishing of changes in mobile apps and can be integrated into federated super apps. However, implementing RSC in mobile apps requires careful consideration of offline-first apps, caching, and Apple's review process.

Workshops on related topic

Introducing FlashList: Let's build a performant React Native list all together
React Advanced 2022
81 min
Introducing FlashList: Let's build a performant React Native list all together
Top Content
Featured Workshop
David Cortés Fulla
Marek Fořt
Talha Naqvi
3 authors
In this workshop you’ll learn why we created FlashList at Shopify and how you can use it in your code today. We will show you how to take a list that is not performant in FlatList and make it performant using FlashList with minimum effort. We will use tools like Flipper, our own benchmarking code, and teach you how the FlashList API can cover more complex use cases and still keep a top-notch performance.
You will know:
- Quick presentation about what FlashList is and why we built it
- Migrating from FlatList to FlashList
- Teaching how to write a performant list
- Utilizing the tools provided by the FlashList library (mainly the useBenchmark hook)
- Using the Flipper plugins (flame graph, our lists profiler, UI & JS FPS profiler, etc.)
- Optimizing performance of FlashList by using more advanced props like `getType`
- 5-6 sample tasks where we’ll uncover and fix issues together
- Q&A with the Shopify team
Detox 101: How to write stable end-to-end tests for your React Native application
React Summit 2022
117 min
Detox 101: How to write stable end-to-end tests for your React Native application
Top Content
Workshop
Yevheniia Hlovatska
Compared to unit testing, end-to-end testing aims to interact with your application just like a real user. And as we all know it can be pretty challenging. Especially when we talk about Mobile applications.
Tests rely on many conditions and are considered to be slow and flaky. On the other hand - end-to-end tests can give the greatest confidence that your app is working. And if done right - can become an amazing tool for boosting developer velocity.
Detox is a gray-box end-to-end testing framework for mobile apps. Developed by Wix to solve the problem of slowness and flakiness and used by React Native itself as its E2E testing tool.
Join me on this workshop to learn how to make your mobile end-to-end tests with Detox rock.
Prerequisites:
- iOS/Android: macOS Catalina or newer
- Android only: Linux
- Install before the workshop
How to Build an Interactive “Wheel of Fortune” Animation with React Native
React Summit Remote Edition 2021
60 min
How to Build an Interactive “Wheel of Fortune” Animation with React Native
Top Content
Workshop
Oli Bates
- Intro: Cleo & our mission
- What we want to build, how it fits into our product & purpose, run through designs
- Getting started with environment set up & “hello world”
- Intro to React Native Animation
- Step 1: Spinning the wheel on a button press
- Step 2: Dragging the wheel to give it velocity
- Step 3: Adding friction to the wheel to slow it down
- Step 4 (stretch): Adding haptics for an immersive feel
Deploying React Native Apps in the Cloud
React Summit 2023
88 min
Deploying React Native Apps in the Cloud
Workshop (Free)
Cecelia Martinez
Deploying React Native apps manually on a local machine can be complex. The differences between Android and iOS require developers to use specific tools and processes for each platform, including hardware requirements for iOS. Manual deployments also make it difficult to manage signing credentials, environment configurations, track releases, and to collaborate as a team.
Appflow is the cloud mobile DevOps platform built by Ionic. Using a service like Appflow to build React Native apps not only provides access to powerful computing resources, it can simplify the deployment process by providing a centralized environment for managing and distributing your app to multiple platforms. This can save time and resources, enable collaboration, as well as improve the overall reliability and scalability of an app.
In this workshop, you’ll deploy a React Native application for delivery to Android and iOS test devices using Appflow. You’ll also learn the steps for publishing to Google Play and Apple App Stores. No previous experience with deploying native applications is required, and you’ll come away with a deeper understanding of the mobile deployment process and best practices for how to use a cloud mobile DevOps platform to ship quickly at scale.
Effective Detox Testing
React Advanced 2023
159 min
Effective Detox Testing
Workshop
Josh Justice
So you’ve gotten Detox set up to test your React Native application. Good work! But you aren’t done yet: there are still a lot of questions you need to answer. How many tests do you write? When and where do you run them? How do you ensure there is test data available? What do you do about parts of your app that use mobile APIs that are difficult to automate? You could sink a lot of effort into these things—is the payoff worth it?
In this three-hour workshop we’ll address these questions by discussing how to integrate Detox into your development workflow. You’ll walk away with the skills and information you need to make Detox testing a natural and productive part of day-to-day development.
Table of contents:
- Deciding what to test with Detox vs React Native Testing Library vs manual testing
- Setting up a fake API layer for testing
- Getting Detox running on CI on GitHub Actions for free
- Deciding how much of your app to test with Detox: a sliding scale
- Fitting Detox into your local development workflow
Prerequisites
- Familiarity with building applications with React Native
- Basic experience with Detox
- Machine setup: a working React Native CLI development environment including either Xcode or Android Studio
Building for Web & Mobile with Expo
React Day Berlin 2022
155 min
Building for Web & Mobile with Expo
Workshop
Josh Justice
We know that React is for the web and React Native is for Android and iOS. But have you heard of react-native-web—for writing an app for Android, iOS, and the web in one codebase? Just like React Native abstracts away the details of iOS and Android, React Native Web extracts away the details of the browser as well. This opens up the possibility of even more code sharing across platforms.
In this workshop you’ll walk through setting up the skeleton for a React Native Web app that works great and looks awesome. You can use the resulting codebase as a foundation to build whatever app you like on top of it, using the React paradigms and many JavaScript libraries you’re used to. You might be surprised how many types of app don’t really require a separate mobile and web codebase!
What's included:
1. Setting up drawer and stack navigators with React Navigation, including responsiveness
2. Configuring React Navigation with URLs
3. Setting up React Native Paper including styling the React Navigation drawer and headers
4. Setting up a custom color theme that supports dark mode
5. Configuring favicons/app icons and metadata
6. What to do when you can’t or don’t want to provide the same functionality on web and mobile
Prerequisites:
- Familiarity with building applications with either React or React Native. You do not need to know both.
- Machine setup: Node LTS, Yarn, be able to successfully create and run a new Expo app following the instructions on https://docs.expo.dev/get-started/create-a-new-app/