I'm Claudia Bressi, a senior front-end and mobile developer from Italy. I'm passionate about music and computer science, and I've combined them in a sound design project built with React. The project visualizes music using the Web Audio API and supports different audio file formats. The stack includes React, TypeScript, Redux Toolkit, Framer Motion, and Material UI. The Web Audio API provides powerful methods for analyzing, filtering, and visualizing sound, and this project relies on its AnalyserNode and Audio Worklet interfaces. Browser compatibility is a consideration, and custom hooks keep the codebase modular and reusable.
Hi everyone. I'm really grateful to join React Advanced today to present my sound design project. I'm Claudia Bressi, and I'm a senior front-end and mobile developer. I live near Venice, Italy, and I work for a consulting agency in Milan, where we build applications using React, and React Native for mobile.
I'm really passionate about both music and computer science. As a child, I started learning piano, and I also started using the computer in my father's office. Today I still love playing instruments like the electronic keyboard and guitar, listening to my favorite playlists, and going to gigs and festivals. That's why I tried to combine a front-end project with music.
Let's talk about the project. The main goal is to visualize music using the React library. The application can handle common audio file formats, for example MP3 or WAV, and as a result it generates a visual component for each sound spectrum. Let's dive deeper into the project details. The code is written in React and TypeScript for easier maintainability. For handling global state throughout the application, I've set up Redux Toolkit. For animating components, I've opted for the Framer Motion library, which is a modern and straightforward package. For global UI components, like buttons, inputs, and typography, I've chosen the well-known Material UI, which probably many of you already use in your React applications.
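To make the file handling concrete, here is a minimal sketch of how the app might check an uploaded file's format before decoding it. The names (`isSupportedAudioFile`, `getAudioExtension`, the extension list) are my own illustration and not from the talk:

```typescript
// Hypothetical helper for the format check described above: accept
// common audio containers such as MP3 and WAV before decoding.
const SUPPORTED_AUDIO_EXTENSIONS = ["mp3", "wav", "ogg", "flac"] as const;

function getAudioExtension(fileName: string): string | null {
  const dot = fileName.lastIndexOf(".");
  return dot === -1 ? null : fileName.slice(dot + 1).toLowerCase();
}

function isSupportedAudioFile(fileName: string): boolean {
  const ext = getAudioExtension(fileName);
  return ext !== null &&
    (SUPPORTED_AUDIO_EXTENSIONS as readonly string[]).includes(ext);
}

console.log(isSupportedAudioFile("track.MP3")); // true
console.log(isSupportedAudioFile("notes.txt")); // false
```

In the real application the accepted list and the decoding itself would depend on what the browser's audio decoder supports, so this is only a first-line filter.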
The Web Audio API is an amazing API that lets you work with audio data. It gives us developers a wealth of methods to analyze, filter, and visualize sound. There are also multiple options to add 3D spatial effects, panning, multichannel splitting, and much more.

For this project, we mainly use two interfaces: the AnalyserNode, which gives us access to specific audio information such as frequencies, and the Audio Worklet, which lets us developers add new audio-processing modules that execute off the main thread.

At a high level, every time you work with the Web Audio API, the generated result is a directed acyclic graph, where each node is either an audio source, a filter or effect, or a destination target. Through the Web Audio API, visualizations are achieved by sampling audio parameters as they change over time; most of the time these variables are gain, pitch, and frequencies. The already mentioned AnalyserNode leaves the audio signal passing through it unaltered; instead, it outputs audio data that can be passed to a visualization technology such as HTML Canvas or, in our case, React function components.

An unexpected fun fact is browser compatibility. As you can see, Internet Explorer is not supported at all. Luckily for us developers, this browser will be retired next year, on June 15, 2022.

To improve the project, I've built some custom hooks to improve modularity and reuse throughout the codebase.
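The analyser's frequency output can be turned into component heights with plain arithmetic. Below is a minimal sketch: the browser wiring is shown only in comments (it needs a real `AudioContext`), while the pure helper that reduces raw frequency bins to normalized bar heights is runnable anywhere. The function name `toBarHeights` is my own; in the real app the `Uint8Array` would come from `AnalyserNode.getByteFrequencyData`:

```typescript
// In the browser, an AnalyserNode is wired into the audio graph roughly
// like this (sketch only; requires a real AudioContext and source node):
//   const ctx = new AudioContext();
//   const analyser = ctx.createAnalyser();
//   source.connect(analyser).connect(ctx.destination); // signal passes through unaltered
//   const bins = new Uint8Array(analyser.frequencyBinCount);
//   analyser.getByteFrequencyData(bins); // refresh once per animation frame

// Pure helper: reduce the raw frequency bins (0–255 each) to a fixed
// number of bars, each normalized to the 0..1 range for rendering.
function toBarHeights(bins: Uint8Array, barCount: number): number[] {
  const binsPerBar = Math.max(1, Math.floor(bins.length / barCount));
  const bars: number[] = [];
  for (let i = 0; i < barCount; i++) {
    let sum = 0;
    for (let j = 0; j < binsPerBar; j++) {
      sum += bins[i * binsPerBar + j] ?? 0;
    }
    bars.push(sum / binsPerBar / 255); // average bin value, scaled to 0..1
  }
  return bars;
}

console.log(toBarHeights(new Uint8Array([255, 255, 0, 0]), 2)); // [1, 0]
```

Each normalized value can then feed a React component's height (animated with Framer Motion, in this project's stack), which is exactly the "one visual component per sound spectrum" idea described earlier.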