Perfect Pitch: Unveiling the Mathematical Symphony Behind a Guitar Tuner

Join me on a musical journey through the world of open source software as I share the story behind crafting an online guitar tuner from scratch with JavaScript.

In this talk, we'll explore the delightful intersection of mathematics and development, where a simple curiosity led to the creation of an open-source tuner using React and Next.js.

I'll share insights into the implementation of the tuner and its challenges, navigating the algorithmic landscape of autocorrelation and frequency calculation.

This talk was presented at React Summit 2024.

FAQ

Omar Diop is the Technical Lead at Learn, an Italian company focused on making digital competencies accessible to everyone. He is also passionate about music and enjoys playing the guitar.

The guitar tuner project aims to explore the mathematics involved in pitch estimation and detection. It was created out of pure curiosity to see if the developer could build a guitar tuner from scratch.

The guitar tuner project is built using React and Next.js. It also utilizes the Web Audio API for audio processing and the MediaStream API to access the user's microphone.

Pitch detection involves identifying the frequency of a note being played, which is measured in hertz (Hz). Notes are represented by frequencies, and moving up an octave doubles the frequency of a note.

Zero crossing is an algorithm that counts the number of times a signal crosses the zero line. It is straightforward on a clear signal but not effective on noisy signals like those from a guitar.

Autocorrelation is an algorithm that compares a signal with a shifted version of itself to identify periodic patterns. It is used to clear noisy signals and make them more readable for frequency calculation.

The project uses the MediaStream API to access user media like the microphone and the Web Audio API for decoding and processing audio signals.

The project uses the react-gauge-chart library for data visualization. A percentage derived from the detected frequency is passed to the library as a prop, which then displays an indicator that moves accordingly.

Potential improvements include resetting the indicator when no note is detected, improving noise cancellation and signal pre-processing, detecting chords instead of single notes, and studying other algorithms for pitch detection.

Fast Fourier Transform (FFT) is a method that converts a signal from its original time domain to the frequency domain. It helps identify the frequency of a signal but can be complicated when dealing with noisy signals.

Omar Diop
22 min
18 Jun, 2024

Video Summary and Transcription
This Talk provides an overview of an open-source guitar tuner project, covering topics such as pitch detection, web APIs for microphone access, implementation with React and Next.js, and data visualization. It explores various pitch detection algorithms, including zero crossing, fast Fourier transform, and autocorrelation. The project utilizes the MediaStream API and Web Audio API to access user media and process the signal. The implementation is done with React and Next.js, and includes a utility library for performing pitch estimation and detection. The Talk also discusses future improvements for the project, such as noise cancellation and chord detection.

1. Introduction to the Guitar Tuner Project

Short description:

I am Omar Diop, the Technical Lead at Learn, an Italian company aiming to make digital competencies accessible to everyone. Today, we will have an overview of an open-source guitar tuner project. We will explore pitch detection and estimation, web APIs for microphone access, implementation with React and Next.js, and data visualization, and discuss issues and improvements. This project was born out of curiosity and aims to explore new territory. It's open source, built with React and Next.js, and utilizes the Web Audio API.

Hi, everyone. It's a pleasure to be here, and I cannot wait to share this topic with you. So, let's start with the introductions. I am Omar Diop, and I'm the Technical Lead at Learn, an Italian company that aims to make digital competencies accessible to everyone. In my free time, I have a lot of passions, one of which is music, and I love to bother my neighbors playing guitar. And that's why we are here.

So, today we're going to see a lot of things. We will have an overview of the project, which is an open-source guitar tuner. We will see the basics of pitch detection and pitch estimation, so how to detect the frequency of a note that's played. Then we will see the web APIs involved in the project, so how to get access to a user's microphone, and the actual implementation with React and Next.js. After that, we will see the data visualization part, so how to display the information that we are retrieving. And finally, we will talk about issues and possible improvements.

I would like to start with a question: do we need another guitar tuner? Well, there are plenty of guitar tuners online. You can find a lot of applications and download them for free. Which leads to another question: why? Why did we come up with a guitar tuner again? Well, actually, I wanted to test myself, and I was curious: I wanted to see what was under the hood, all the mathematics involved in pitch estimation and detection, and whether I could build a guitar tuner from scratch. So, this is a project made out of pure curiosity. And, disclaimer, I'm not a mathematician. This is the result of my research and my curiosity. So, this is a project that aims to explore something I had never explored.

The project is open source. You can find it online, check the code, and I will share it with you after the talk so you can explore it. It's made with React and Next.js, and it uses the Web Audio API. The interface is pretty clear: it detects the note that is being played, and as you can see, the indicator moves when I play a note.

2. Basics of Pitch Detection and Algorithms

Short description:

The basics of pitch detection involve understanding that notes are represented by frequencies in hertz. A reference frequency, A440, is used for tuning. The guitar signal, represented by a wave, is often noisy, making analysis challenging. Various algorithms exist for pitch detection, including zero crossing.

It's very accurate. It can be improved, but for this stage, it's very, very good and it works very well. So, what are the basics of pitch detection?

Well, I want to get everyone aligned on the topic. I've noticed that a lot of programmers play instruments, but if you're not a musician, this topic can be tricky to understand. So, we start with the notes.

Notes are represented by a frequency, which is measured in hertz. As you can see in this table, every note is represented by a number, its frequency. On the top row, you can see the note names, which go from C to B, and on the left column, you can see numbers that go from 0 to 4. These numbers are octaves. An octave is the same note repeated at a different frequency, and each octave doubles the frequency of the note. In other words, if you look at the A column, at the highlighted A440, you can see that as you move down the rows of that column, the frequency doubles.

So, A440 is the reference frequency for tuning: a lot of tuners use this frequency to tune instruments, and it is A4, the A in octave 4. A guitar has six strings, and each open string sounds a note. The first string is an E in the fourth octave, while the sixth string is an E in the second octave, and as you'll notice, the frequencies are very, very different. Our goal is to detect the frequency being played and tell users which note they are playing, so they can tune their guitar.
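To make the table concrete, here is a small sketch (not code from the project) of the standard equal-temperament relationship behind it, using MIDI note numbers, where 69 corresponds to A4:

```ts
// Equal-temperament frequency of a note relative to A4 = 440 Hz.
// MIDI number 69 is A4; each semitone multiplies the frequency by 2^(1/12),
// so 12 semitones (one octave) double it.
function noteFrequency(midiNumber: number, a4 = 440): number {
  return a4 * Math.pow(2, (midiNumber - 69) / 12);
}

console.log(noteFrequency(69)); // 440     (A4, the tuning reference)
console.log(noteFrequency(57)); // 220     (A3, one octave down)
console.log(noteFrequency(40)); // ≈82.41  (E2, the low E string)
console.log(noteFrequency(64)); // ≈329.63 (E4, the high E string)
```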

The guitar signal can be represented with a plot: in sound, a signal is represented by a wave. On the left, we have a clean signal, an A4 at 440 Hz. As you can see, it is periodic and very clear. On the right, you can see the guitar signal, which is not so clean: it can be very noisy, and it's tricky to perform analysis on this kind of signal. This leads us to our algorithms.

We have a lot of algorithms to perform pitch estimation and pitch detection. There are three main algorithms that I studied, and I'm going to show you each one step by step, pointing out the pros, the cons, and which one I used to perform the actual pitch detection in the project. We will start with zero crossing. Zero crossing starts with this kind of plot, the clean signal.

3. Pitch Detection Algorithms and Web APIs

Short description:

Zero crossing is an algorithm that counts the number of times a signal crosses the zero line to determine frequency. Fast Fourier transform converts a signal from time domain to frequency domain, but can be challenging to interpret. Autocorrelation compares a signal with shifted versions to obtain a clearer plot. The project uses the MediaStream API and Audio API to access user media.

We are applying zero crossing to a clean signal. Zero crossing is an algorithm in which you count the number of times the signal crosses the zero line. Why do you do that? Because the frequency in Hz is the number of cycles per second, and every cycle of a clean periodic function crosses the zero line twice. So, if you take the number of crossings and divide it by two, you get the number of cycles, and if you divide the number of cycles by the elapsed seconds, you obtain the exact frequency in Hz. This method is pretty straightforward on a clean signal, but can it work on a guitar signal? Well, no. The answer is no because, as we've seen earlier, this kind of signal is very noisy and not always periodic, so it's difficult to apply this kind of method.
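A minimal sketch of the idea, assuming a buffer of samples and its sample rate (not the project's code):

```ts
// Zero crossing: count sign changes in the buffer. A clean periodic
// signal crosses zero twice per cycle, so crossings / 2 = cycles,
// and cycles / elapsed seconds = frequency in Hz.
function zeroCrossingFrequency(samples: Float32Array, sampleRate: number): number {
  let crossings = 0;
  for (let i = 1; i < samples.length; i++) {
    const crossedUp = samples[i - 1] < 0 && samples[i] >= 0;
    const crossedDown = samples[i - 1] >= 0 && samples[i] < 0;
    if (crossedUp || crossedDown) crossings++;
  }
  const seconds = samples.length / sampleRate;
  return crossings / 2 / seconds;
}
```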

Let's see if another method comes to the rescue: the fast Fourier transform. The fast Fourier transform is a method that applies the discrete Fourier transform to a signal, converting it from its original domain, time, to the frequency domain. What do I mean by that? Let's make it clear. On the left plot, the x-axis is time and the y-axis is amplitude, the measure of the intensity of the signal. On the right, the signal is converted into a plot in which the x-axis is frequency and the y-axis is still amplitude, and as you can see, there's a spike around 440. So, the A4 signal at 440 Hz is converted into a plot with a single clear spike, and you can easily read off the frequency. But can this work with a guitar signal? Well, it can, but it's very complicated to understand which spike to choose because, as you can see here, you have a lot of spikes and it's not very clear.
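For illustration, here is one hedged way to pick the strongest spike using the Web Audio API's AnalyserNode; on a real guitar signal the strongest spike is often a harmonic rather than the fundamental, which is exactly the ambiguity described above:

```ts
// Read the frequency-domain data and return the frequency of the
// loudest bin. Each bin covers sampleRate / fftSize Hz.
function peakFrequency(analyser: AnalyserNode): number {
  const bins = new Float32Array(analyser.frequencyBinCount);
  analyser.getFloatFrequencyData(bins); // magnitude in dB per bin
  let maxIndex = 0;
  for (let i = 1; i < bins.length; i++) {
    if (bins[i] > bins[maxIndex]) maxIndex = i;
  }
  return (maxIndex * analyser.context.sampleRate) / analyser.fftSize;
}
```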

Let's move to the last algorithm, the one that I used in the project: autocorrelation. Autocorrelation is an algorithm that compares the signal with a shifted version of itself. What do I mean by that? If you look at the top plot, you can see the signal being shifted and repeated across time, and on the bottom plot you can see the function that gets traced. When the shifted signal matches the original, the function has a value of 1; when it is completely opposite, it has a value of -1. So, as you keep shifting the signal forward, you obtain a plot that is much easier to understand and very clear. We can say this algorithm is used to clean up the signal and trace a plot that is more readable and more amenable to calculation.
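Conceptually, the value traced in the bottom plot is the correlation between the signal and its shifted copy at each lag. A rough sketch, normalized by the signal's energy so the values stay roughly between 1 and -1:

```ts
// Correlation between the signal and a copy of itself shifted by `lag`
// samples: near 1 when they line up, near -1 when they are opposite.
function correlationAtLag(samples: Float32Array, lag: number): number {
  let product = 0;
  let energy = 0;
  for (let i = 0; i + lag < samples.length; i++) {
    product += samples[i] * samples[i + lag];
  }
  for (let i = 0; i < samples.length; i++) {
    energy += samples[i] * samples[i];
  }
  return energy === 0 ? 0 : product / energy;
}
```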

Let's move on and see the web APIs involved. The main APIs that I used are the MediaStream API and the Web Audio API. Let's take a look at the MediaStream API first. It allows you to access user media, such as the user's microphone or camera.

4. Implementation with Audio API and File Structure

Short description:

The project uses the Web Audio API, which provides interfaces for decoding, processing, and analyzing audio signals. The implementation is done with React and Next.js. BrowserAudio.ts is used to access user media, while Tuner orchestrates the logic between components. PitchDetector is a utility library for performing frequency estimation, pitch detection, and autocorrelation.

There are a lot of settings to choose from, such as noise cancellation or auto gain control, but I used only three of them. The documentation lists plenty more; you can go check it out, but this is what I needed.

When you call this API, the user is prompted with a pop-up asking for permission to access the microphone. The user clicks yes, and you get access to it.
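The talk doesn't show which three constraints the project sets, but a typical request for a tuner might disable the browser's voice-oriented processing so the raw guitar signal comes through (the constraint choices below are an assumption):

```ts
// Ask the browser for microphone access; this triggers the permission pop-up.
const stream = await navigator.mediaDevices.getUserMedia({
  audio: {
    echoCancellation: false, // assumed: keep the raw signal untouched
    noiseSuppression: false,
    autoGainControl: false,
  },
});
```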

Then, we have the Web Audio API, which exposes a lot of interfaces, one of which is the AudioContext, an interface that allows you to decode and process the signal and to perform calculations. From the audio context, I used the createAnalyser method, which returns a node that lets you perform real-time time-domain and frequency-domain analysis: you get access to the signal, and you can analyze it.

As you can see in this image, you have the signal, and you insert this analyser node into the chain. You can perform calculations and analysis on it, but the signal passes through unchanged: the node does not alter the signal and is used only for analysis purposes, not for modification.
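Wiring-wise, this is roughly what inserting the analyser looks like (a sketch reusing the stream from the previous snippet, not the project's exact code):

```ts
// Tap the microphone stream with an AnalyserNode. The analyser exposes
// the data for analysis but passes the signal through unchanged.
const audioContext = new AudioContext();
const analyser = audioContext.createAnalyser();
const source = audioContext.createMediaStreamSource(stream);
source.connect(analyser); // analysis only; nothing is modified
```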

The actual implementation is made with React and Next.js. The file structure is the following: I created three main files. BrowserAudio.ts is used to access the user media, so all the stuff we've seen earlier. Tuner is the main component of the application, where all the logic between components is orchestrated. And PitchDetector is a utility library used to perform the calculations: frequency estimation, pitch detection, and autocorrelation.

Let's see BrowserAudio.ts. This class is very straightforward. It has two attributes, the AudioContext and the analyser, the interfaces we are going to use to get access to the user's microphone. Then, there is a method I called getMicStream: it uses the MediaDevices API to access the user's media, and that's it. The user gets prompted, and you get access to the signal.
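A minimal sketch of what BrowserAudio.ts might look like based on this description (the exact code lives in the repo; the details here are assumptions):

```ts
class BrowserAudio {
  // The two attributes described in the talk.
  audioContext = new AudioContext();
  analyser = this.audioContext.createAnalyser();

  // Prompt the user and return the microphone stream.
  async getMicStream(): Promise<MediaStream> {
    return navigator.mediaDevices.getUserMedia({ audio: true });
  }
}
```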

Then, in Tuner, you create an instance of BrowserAudio, and we have this buffer, a Float32Array, in which we store all the data related to the signal, because the AudioContext gives us the signal in this form, converted to numbers. After that, we instantiate the AudioContext and the analyser in order to get access to these interfaces. Then, we have a method called startTuner that requests the mic stream, the method we saw earlier; you get access to the mic stream and can perform all the calculations you want. After that, I set the source, a state variable, from the AudioContext, so I get access to the signal, and setListening is just a state variable used for display purposes. Then, this effect performs the continuous estimation: an interval runs every millisecond and calls source.connect, connecting the analyser we saw in the earlier image into the signal in order to perform estimation and time-domain frequency analysis.
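Put together, the Tuner flow described above might look roughly like this (state handling omitted; the names follow the talk, the rest is an assumption):

```ts
declare function getPitch(): void; // the estimation step, shown in the next section

const browserAudio = new BrowserAudio();
const buffer = new Float32Array(2048); // time-domain samples land here

async function startTuner(): Promise<void> {
  const stream = await browserAudio.getMicStream();
  const source = browserAudio.audioContext.createMediaStreamSource(stream);

  // Poll on a short interval: connect the analyser into the signal
  // and run the pitch estimation.
  setInterval(() => {
    source.connect(browserAudio.analyser);
    getPitch();
  }, 1);
}
```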

5. Pitch Estimation and Detection

Short description:

Then, the audio signal is converted into numbers representing amplitude. Autocorrelation is performed to obtain a more readable plot. The frequency is then calculated, and the corresponding note is determined based on the frequency. The main utility class performs pitch estimation and detection using autocorrelated values.

This interval keeps calling the function, so you connect the audio and start reading the signal, because getPitch is the method used to perform the frequency estimation. In getPitch, you call the analyser's getFloatTimeDomainData method, which does what I described earlier: it converts the signal data into numbers in this buffer, a Float32Array. As you'll notice, every item of the array is a number representing the amplitude of the signal at that instant, so the array traces a plot.

Then, once I have this array, I can perform autocorrelation, because I need to turn this noisy plot into a clearer signal and get something more readable. So, I perform autocorrelation, and as you can see, the array on the right, correlatedValues, is easier to understand because all its numbers are between 1 and -1, and if you trace a plot with these numbers, you obtain the plot we saw earlier with the autocorrelation function. This function performs the autocorrelation; then you can calculate the frequency from that kind of plot. Once you have the frequency, you can tell which note it is, because, as we said earlier, every note is represented by a frequency: if you know the frequency, you know which note is being played. So, you set the note. I created this type, which has the name, the octave the note is in, the centsOff, and the frequency. centsOff is the number of cents the note is off from the reference frequency. For instance, if the reference frequency is 440 Hz and the user is playing 435 Hz, the note is flat, so I can tell the user they are off and point the indicator accordingly.
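The note type and the cents calculation can be sketched like this, using the standard definition of cents, 1200 × log2(f / fRef), with 100 cents per semitone (by that formula, 435 Hz comes out roughly 20 cents flat of A440):

```ts
type Note = {
  name: string;      // e.g. "A"
  octave: number;    // e.g. 4
  centsOff: number;  // distance from the reference frequency, in cents
  frequency: number; // detected frequency in Hz
};

// Standard cents formula: 1200 cents per octave, 100 per semitone.
function centsOffFromReference(frequency: number, reference: number): number {
  return Math.round(1200 * Math.log2(frequency / reference));
}

console.log(centsOffFromReference(442, 440)); // ≈ +8 (slightly sharp)
```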

Then, this is the main utility class used to perform pitch estimation and pitch detection. We take the autocorrelated values, and this normalize function is used to get all values between 1 and -1, to normalize all the data we have. It performs max-absolute scaling, an algorithm that finds the maximum absolute value and divides all the elements by it; after running this function, all the elements are between 1 and -1, so the data is more readable and easier to perform calculations on. Then, we perform the autocorrelation with this method, autocorrelation with lag, and if you notice, there's a variable called rms, root mean square. What is root mean square? It's a measure of the magnitude, the loudness, of the signal. I use this variable to store the magnitude of the signal and filter the signal out if it is below a threshold: I want to discard anything that is not loud, because it could just be me making noise on my desk or something like that. I want only the notes that are loud. The root mean square is calculated by taking the square root of the mean of the squares, as in this formula, and you obtain a value that represents the loudness of the signal. Then there is the main function, the autocorrelation function.
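Sketches of these utilities, following the description above (the names and the threshold value are assumptions; the real implementations are in the repo):

```ts
// Max-absolute scaling: divide every sample by the largest absolute value,
// so everything ends up between 1 and -1.
function normalize(samples: Float32Array): Float32Array {
  let max = 0;
  for (const s of samples) max = Math.max(max, Math.abs(s));
  return max === 0 ? samples : samples.map((s) => s / max);
}

// Root mean square: the square root of the mean of the squared samples,
// used as a loudness measure.
function rootMeanSquare(samples: Float32Array): number {
  let sumOfSquares = 0;
  for (const s of samples) sumOfSquares += s * s;
  return Math.sqrt(sumOfSquares / samples.length);
}

const RMS_THRESHOLD = 0.01; // assumed value: buffers quieter than this are ignored

// Autocorrelation over every lag; returns null when the buffer is too quiet.
function autocorrelateWithLag(samples: Float32Array): Float32Array | null {
  if (rootMeanSquare(samples) < RMS_THRESHOLD) return null;
  const correlated = new Float32Array(samples.length);
  for (let lag = 0; lag < samples.length; lag++) {
    let sum = 0;
    for (let i = 0; i + lag < samples.length; i++) {
      sum += samples[i] * samples[i + lag];
    }
    correlated[lag] = sum;
  }
  return correlated;
}
```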

6. Signal Conversion and Frequency Detection

Short description:

The autocorrelation formula is converted into code. The frequency of the signal is detected by finding the highest peak in the function. The frequency is then used to obtain the note type, and the user is informed if they are off the reference frequency by a certain number of cents.

As you can see, it's the same as the mathematical formula: you shift the signal, multiply it by the original, and sum the products in a series. I simply converted that mathematical function into code. You can check it out in the repo.

Once you have a clean, periodic signal, you can detect the frequency. This method is the getFrequency one. I could have used, for instance, the zero crossing method, but I decided to go with another approach in which you find the highest peak in the function, the red one in this case. Since the lag of that peak is the number of samples in one period, you can easily calculate the frequency by dividing the sample rate by the index of the largest peak. So, you get the frequency.
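A hedged sketch of that peak-picking idea (the project's getFrequency may differ; skipping the initial lags avoids the shoulder of the lag-0 peak, where the correlation is always highest):

```ts
function getFrequency(correlated: Float32Array, sampleRate: number): number {
  // Skip past the lag-0 peak until the correlation first dips below zero.
  let start = 1;
  while (start < correlated.length && correlated[start] > 0) start++;

  // The lag of the highest remaining peak is the period in samples.
  let peakLag = start;
  for (let lag = start; lag < correlated.length; lag++) {
    if (correlated[lag] > correlated[peakLag]) peakLag = lag;
  }
  if (peakLag <= 0 || peakLag >= correlated.length) return 0; // nothing usable
  return sampleRate / peakLag; // samples per second / samples per cycle = Hz
}
```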

Then you have this utility, getNoteFromFrequency. From the frequency, you obtain the note type I showed you earlier: the note name, the octave, the cents off, and the frequency. You can see here there are methods called getMidiNumberFromPitch and getCentsOffFromPitch. I wrote these methods, but I did not invent them; they are standard in the industry. After calling getNoteFromFrequency, you obtain an object, the note: the name is A, the octave is 4, the cents off is 5, and the frequency is 435. So, I tell the user that they are five cents off from the reference frequency.
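These standard conversions go roughly like this (reusing the Note type and the cents helper sketched earlier; MIDI numbering puts A4 at 69):

```ts
const NOTE_NAMES = ["C", "C#", "D", "D#", "E", "F", "F#", "G", "G#", "A", "A#", "B"];

// Nearest MIDI note number for a frequency: 12 semitones per octave,
// anchored at MIDI 69 = A4 = 440 Hz.
function getMidiNumberFromPitch(frequency: number): number {
  return Math.round(69 + 12 * Math.log2(frequency / 440));
}

function getNoteFromFrequency(frequency: number): Note {
  const midi = getMidiNumberFromPitch(frequency);
  const reference = 440 * Math.pow(2, (midi - 69) / 12); // exact pitch of that note
  return {
    name: NOTE_NAMES[midi % 12],
    octave: Math.floor(midi / 12) - 1, // MIDI 69 -> octave 4
    centsOff: centsOffFromReference(frequency, reference),
    frequency,
  };
}
```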

7. Improvements and Conclusion

Short description:

I studied the topic extensively and included the formula references and sources in the repo. After obtaining the note object, the user's tuning accuracy is determined. Data visualization is achieved using the react-gauge-chart library. The project has room for improvement, such as resetting the indicator when no note is detected, noise cancellation, and chord detection. Exploring other pitch detection algorithms like YIN is also an option. Feel free to check out the project, study pitch estimation, and connect with me on social media.

So, I studied this topic a lot, and I included in the repo, in the code, the references and sources for these formulas. You can go check them out, read them, try to implement them, and come up with your own solutions. But this is standard.

And here, in this case, the user is not in tune: they are playing at 435 Hz, but the reference frequency is 440 Hz. So, I tell the user that they are off from the reference frequency.

Let's move on to the data visualization part and see what we can achieve. I wanted to do it in a simple way, because the main goal was the mathematical part, not the visualization part. So, I decided to go with a library called react-gauge-chart. It's very easy to use.

You give it the percentage as a prop, and the indicator moves with that percentage. So, I calculate the percentage from the frequency and pass it as a prop, and the library displays an indicator that moves up and down based on the frequency the user is playing. I also display the note name, the octave, and the frequency directly, without any library, because it's just data.
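Assuming the library is react-gauge-chart (which takes a `percent` prop between 0 and 1), the wiring might look like this sketch; the cents-to-percent mapping is an illustration, not the project's exact code:

```tsx
import GaugeChart from "react-gauge-chart";

// Map -50..+50 cents onto 0..1, so the needle sits in the middle when in tune.
function TunerGauge({ cents }: { cents: number }) {
  const percent = Math.min(Math.max((cents + 50) / 100, 0), 1);
  return <GaugeChart id="tuner-gauge" percent={percent} />;
}
```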

After that, we are finally coming to an end, because we're going to talk about the improvements that can be made to this project. As I said, it was a research project, but a lot of improvements are possible, and I want to make many of them while I keep studying this topic. For instance, you can challenge yourself and reset the indicator when no note is detected: currently, after a note is detected, the indicator does not return to its initial position.

It's very easy to fix, but if you want to go further, you can improve noise cancellation and signal pre-processing, filtering out noisy signals. Or, for instance, you can try to detect which chord the user is playing, not just a single note: it's much more difficult, but the science behind it is very interesting. You can even study other algorithms for pitch detection, such as YIN, but I warn you, this is a rabbit hole, because there is a lot behind these kinds of algorithms. The choice is yours.

You can go check out the project, try to understand it, and study pitch estimation and frequency calculation. I tell you, it's a lot of fun to study these things: I never thought there was so much behind a guitar tuner. This project was the output of long research, and it was a fun project, and I challenge you to do the same thing: check out the code and try to implement it. So, thank you for having me. I share the code here in this slide: you can scan the QR code to get access to the repo, which is open source. If you like it, add me on my socials. You can find me on Twitter, on LinkedIn, wherever you want, so we can get in touch and talk about pitch estimation, React, and other things related to computer science. It has been a pleasure, and I wish you a nice day.
