Come On Barbie, Let’s Go Party: Using AI for Music Mixing


As a DJ, I use many techniques to mix and create new sounds that get people's hands in the air. In this talk I'll describe AI algorithms based on neural networks that can break music down into its elements. I'll cover how our brain differentiates between dozens of different sound signals when we listen to music. Can we instruct AI to do the same?

The cool part: live DJing on stage using AI algorithms.

This talk was presented at JSNation 2024. Check out the latest edition of this JavaScript conference.

FAQ

DJ mixing is the art of blending two or more tracks seamlessly to create a continuous flow of music. It involves curating the right playlist, reading the crowd, and determining the next track based on the energy on the dance floor.

Deep learning revolutionizes DJ mixing by allowing DJs to separate and manipulate different components of a track, such as vocals and instruments, in real-time. This technology uses neural networks to analyze and process sound signals, enabling more creative and precise mixing.

A data scientist at Wix builds machine learning pipelines for data scientists across the organization. They work on extracting and processing data to improve various aspects of Wix's services, including website building and user experience.

The speaker mixes Dark 80s, synthwave, and techno sounds.

Common visual representations of sound include waveforms and spectrograms. Waveforms show amplitude changes over time, while spectrograms display changes in frequencies over time, with amplitude encoded as color brightness, effectively a 3D representation of sound.

The three main levels of audio feature abstraction are: 1) Low-level features, such as amplitude, envelopes, and zero crossing rate, which are numerical data for machines to process. 2) Medium-level features, such as pitch, beats, and notes, which are more perceptual for humans. 3) High-level features, such as tempo, lyrics, melody, and rhythm, which are enjoyed by general listeners.

Deep learning helps in music source separation by using neural networks to process raw audio or spectrograms and extract different components, such as vocals and instruments. Convolutional neural networks (CNNs) identify local patterns, while recurrent neural networks (RNNs) find dependencies between track parts. The result is a mask that separates the desired audio component from the original track.

Applications of music source separation technology include karaoke, music transcription, lyrics extraction, and beat matching. This technology allows for high-quality separation of vocals and instruments, making it easier to create karaoke tracks, transcribe music, and enhance DJ mixes.

The DJ was inspired to explore music source separation using neural networks after receiving an email about a new technology in his DJ software that could separate track sources. Initially dismissive, he later revisited the feature to help a friend separate vocals from an old track, leading to his amazement at the technology's capabilities.

The DJ used open-source tools, datasets, and Python code to experiment with music source separation. He followed step-by-step guides, read articles, and trained models himself to understand and utilize the technology.

Ziv Levy
27 min
13 Jun, 2024

Video Summary and Transcription
Today, we explore DJ mixing and how deep learning revolutionizes the art by discussing sound processing, extracting features, and using machine learning. Deep learning allows for efficient extraction of audio features and high-resolution track separation. Neural networks can achieve source separation by converting audio to spectrograms and applying convolutional and recurrent neural networks. This has an immediate impact on industries such as karaoke and music transcription.

1. Introduction to DJ Mixing and Deep Learning

Short description:

Today, we're going to explore DJ mixing and how deep learning revolutionizes the art. I'm a DJ at Wix and a data scientist. DJing is more than curating playlists, it's about reading the crowd. Sometimes, when I try to blend a song that sounds perfect in my headphones, it crashes on the dance floor. Let me show you an example. We'll discuss sound processing, extracting features, and using machine learning. And then, we'll dive into the revolutionary deep learning approach.

Today, we're going to explore and dig into this art of DJ mixing. I'm going to talk about it from my perspective as a DJ, and we're also going to talk about how deep learning brings a whole new revolution to this musical art of mixing and, in general, what can be done with sound signals and neural networks.

So, again, a bit more about myself. I've been working at Wix for the past seven years now, in the data science group. My day job is building machine learning pipelines for data scientists across the organization. For those of you who are not familiar with Wix, Wix is a website building platform. And again, I'm also a DJ. I mix Dark 80s, synthwave, and techno sounds, and this is what we're going to talk about today: this aspect of my life as a DJ.

And I don't need to tell you that being a DJ is not only curating the right playlist; it's also the ability to read the crowd and to pick what track is going to be next according to the energy on the dance floor. And the problem is that sometimes I hear something in my headphones that fits the dance floor perfectly, but when I try to blend it in, it crashes. Let me show you how I crash a mix, and how awful this sounds. So I picked those two songs. One of them is by Adele. You are familiar with this song, right? And the next one is, oh, not this one. Next one is this one. Also familiar. By the way, everything I do, I'm doing it live, so if I have some glitches or mess something up, please excuse me. Okay?

So in my head, those songs are a perfect match. But if I try to play them together, and let's skip to the highlight of the Adele song, I'll try to mix the song exactly at its highest point. Okay. As you have heard, it's a lot of noise. This is where some of you would probably make faces of, hmm, what? What's wrong with this DJ? But fortunately for me, you would be surprised what a very drunk crowd can overcome. For me, though, it's devastating. It really ruins the moment, the energy is unbalanced, I need to recover from it, and it's very stressful. But again, in my head, it was perfect. So what went wrong? What we're going to talk about today is what sound is, how we process audio with computers, how we pull features out of this audio, and how we use it in machine learning. Okay? And then we're going to talk about the deep learning approach, which is pretty much revolutionary.

2. Exploring Source Separation and Sound Modeling

Short description:

It all started with an email about a unique technology for separating track sources. I didn't pay much attention until a friend asked for help in separating vocals. I rediscovered the tool in my DJ software and was amazed by its real-time capabilities. Intrigued, I delved into music source separation using neural networks. Sampling measures amplitude levels, resulting in a waveform that holds information about frequency, intensity, and timbre. Computers struggle to distinguish between instrument overtones, unlike our brains.

And as we speak, things are really happening right now. So it all started a couple of years ago, when I got an email with the release notes of the DJ software I'm using, and they said something like: dear DJs, we are now able to provide you with a unique technology that will allow you to separate the sources of your track and, with that, be creative and do something with it. And I thought to myself at first, well, it's not so interesting. I mean, it has probably been solved already. But you know, it was the post-Covid era, there were still limitations on gatherings and everything, so I really didn't pay attention to it.

And recently, a friend of mine came to me and said, I want your help to separate the vocals out of some track that I have. It's a very old track, there are no studio versions or anything. What can I do? And you know, sometimes I have my equalizer here, and I can play with it and, in some manner, reduce some sounds or enhance others, but it's not really creating a karaoke version; it's not peeling apart the layers. But suddenly I remembered that I have this tool in my DJ software, and I read the step-by-step guide of what to do and what I need to configure, clicked a few buttons, and boom, I had it. It was nice, she was happy, but then I played with it with another song and another song, and actually it wasn't just nice. I was amazed by it, and everything was happening in real time.

And this is something that was not in that release note, by the way. Or maybe it was, but I didn't read the entire thing. I was amazed, and this really triggered the engineering part of my brain. So what do I do when I want to know how things are happening? I go to Google. I looked for music source separation using neural networks, downloaded an article, read it, another article, read it, downloaded the dataset, downloaded the Python code, trained the model myself, and then tested it with another track and another track and another track, and I was actually mind-blown by this technology. And after a few hours of playing with it, this is how I looked. Like a whole new world had opened up to me.

So, the first thing is how we model sound. What is sound? Sound, eventually, is changes in air pressure caused by the vibration of air molecules. Our ears are sensitive to those vibrations, and eventually this is what our brain perceives as sound. Computers do something similar, called sampling. I'm not going to dig into this technique because of time constraints, but the computer measures the amplitude levels of those vibrations. Eventually, what we get is a waveform, which is the most common visual representation of sound, but this waveform actually holds multifactorial information about the sound. The first thing is the frequency: if we zoom in, we can get the frequency of the sound. The second thing is the intensity of the sound. The intensity is measured over a windowed area of the waveform, looking at the peak in proportion to the minimum and maximum points. And then we have something very important, which is the timbre of the sound. The timbre is also known as the tone quality or the tone color. It's not quality in the sense of how clearly I hear the sound; it's about the overtones that let different instruments be told apart from each other. For example, if I'm playing a C chord on a guitar at the same time someone plays a C chord on the piano, I want to be able to distinguish between those instruments, and this is something very hard for computers to do. Actually, if you think about it, our brain can do it pretty much instantly.
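
As a minimal sketch of what sampling gives us, the snippet below loads a track and inspects its amplitude values. The librosa library and the file name track.wav are my own assumptions for illustration, not tools mentioned in the talk.

```python
import librosa

# Sampling: the waveform is just amplitude values measured sr times per second.
y, sr = librosa.load("track.wav", sr=44100, mono=True)
print(f"{len(y)} samples at {sr} Hz -> {len(y) / sr:.1f} seconds of audio")

# Intensity: root-mean-square energy over short windows (loud peaks vs. quiet parts).
rms = librosa.feature.rms(y=y)[0]
print(f"RMS energy ranges from {rms.min():.4f} to {rms.max():.4f}")
```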

3. Understanding Spectrograms and Visualization

Short description:

The spectrogram is a 3D representation of sound that shows the changes in frequencies over time. The amplitude levels of a sound can be observed through the brightness of colors in the spectrogram. It provides a more convenient visualization compared to waveforms.

So while we have this waveform representation, we also have another visual representation, which is called the spectrogram. Let's hear the sound example here; it's a C chord. You can hear the high amplitude levels up front, and then they fade out. Unlike the waveform, which shows the amplitude changes over time, the spectrogram shows the changes of frequencies over time. But then you might say, oh, I lost the amplitude levels, which are very important. Actually, if you look, this C chord has a strong frequency around 500 hertz, and in the spectrogram you can see a bright line at 500 hertz. So a spectrogram is actually a 3D representation of a sound: the higher the amplitude, the stronger the brightness of the color chosen for that purpose. So that's it. This is how a spectrogram looks, but it's more convenient to see it like this.
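
Here is a hedged sketch of how one could compute and draw such a spectrogram with a short-time Fourier transform; again, librosa, matplotlib, and the file name are my assumptions, not the tooling used on stage.

```python
import numpy as np
import librosa
import librosa.display
import matplotlib.pyplot as plt

y, sr = librosa.load("track.wav", sr=44100, mono=True)

# Short-time Fourier transform: frequency content of small overlapping windows.
S = np.abs(librosa.stft(y, n_fft=2048, hop_length=512))

# Convert amplitude to decibels so louder frequencies show up as brighter colors.
S_db = librosa.amplitude_to_db(S, ref=np.max)

librosa.display.specshow(S_db, sr=sr, hop_length=512, x_axis="time", y_axis="hz")
plt.colorbar(format="%+2.0f dB")  # the "third dimension": amplitude as color
plt.show()
```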

4. Levels of Abstraction in Sound Modeling

Short description:

When extracting audio features, there are three main levels of abstraction: the lowest level for machine processing, the medium level for human perception, and the highest level for our enjoyment of music.

So the second question is: okay, what do we actually model in sound? What can we extract? What information can we pull out of it? When pulling out audio features, there are considered to be three main levels of abstraction. At the lowest level, we find features like the amplitude, envelopes, energy, frequencies, and zero crossing rate; we will talk about those in just a second. We are mostly dealing with a lot of numerical information at this low level, which is intended for machines to process; it's not something that we as humans can really perceive. At the medium level, we have things that are more human-perceptual, like pitch, beats, and notes, things that musicians in particular can spot right away. And at the highest level, we have what each and every one of us enjoys when we listen to music: the tempo, the lyrics, the melody, the rhythm, and everything around them.
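
To ground these levels, here is a small sketch pulling one or two features from each level of abstraction; librosa and the file name are my assumptions, not tools named in the talk.

```python
import librosa

y, sr = librosa.load("track.wav", sr=44100, mono=True)

# Low level: numerical data meant for machines.
zcr = librosa.feature.zero_crossing_rate(y)[0]        # zero crossing rate per frame
envelope = librosa.feature.rms(y=y)[0]                # amplitude envelope / energy

# Medium level: closer to what musicians perceive.
f0 = librosa.yin(y, fmin=65, fmax=2093, sr=sr)        # pitch estimate per frame

# High level: what every listener enjoys.
tempo, beats = librosa.beat.beat_track(y=y, sr=sr)    # tempo in BPM and beat positions
print("Estimated tempo (BPM):", tempo, "-", len(beats), "beats detected")
```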

5. Extraction of Features and Track Separation

Short description:

In traditional machine learning techniques, extracting features from sound requires significant effort. However, in deep learning, the network can extract audio features automatically, making the process more efficient. This allows for the separation of track sources, such as vocals and instruments, with high resolution and quality.

So with traditional machine learning techniques, let's say I have this sound and I want to know whether it is a car engine. This sounds like a car engine, right? So let's say I'm a data scientist or an engineer: I have to actually extract different kinds of features. For the sake of this example, let's take these two, and I will explain. The amplitude envelope is the peaks of the amplitude and the values surrounding them, and the zero crossing rate is the rate at which the amplitude values cross from positive to negative.

So let's say I pick those two features and build a proper dataset. I train the model, and now I have a model that can predict, given a sound, whether it is a car engine or not. But consider how much work I had to put into this effort, because for every little piece of sound I need to extract all those features and process them, and that can take time.
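
As a rough sketch of this classical pipeline, assuming librosa and scikit-learn plus a handful of hypothetical labeled clips (the file names and the choice of a random forest below are made up for illustration):

```python
import numpy as np
import librosa
from sklearn.ensemble import RandomForestClassifier

def extract_features(path):
    """Hand-crafted low-level features: amplitude envelope stats and zero crossing rate."""
    y, sr = librosa.load(path, sr=22050, mono=True)
    rms = librosa.feature.rms(y=y)[0]
    zcr = librosa.feature.zero_crossing_rate(y)[0]
    return np.array([rms.mean(), rms.max(), zcr.mean(), zcr.max()])

# Hypothetical labeled clips: 1 = car engine, 0 = anything else.
paths = ["engine_01.wav", "engine_02.wav", "speech_01.wav", "birds_01.wav"]
labels = [1, 1, 0, 0]

X = np.stack([extract_features(p) for p in paths])
clf = RandomForestClassifier(n_estimators=100).fit(X, labels)

# Predict on a new clip: is this a car engine or not?
print(clf.predict(extract_features("mystery_clip.wav").reshape(1, -1)))
```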

Now, with neural networks, or specifically with deep learning, I can give the network raw audio as an input, or, in this case, not raw audio but a spectrogram, and let the network do its magic like it usually does with, for example, images. I don't need to work nearly as hard, because the network extracts all the features for me, and eventually I get a prediction of whether it is a car engine or not. If I prepare a dataset, of course, calculate some loss function, and train the network over and over again, I end up with a solid, production-ready network.
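
Here is a hedged sketch of that deep learning alternative: a tiny PyTorch CNN that takes a spectrogram batch and outputs a single "car engine or not" score. The architecture is my own toy example, not the network from the talk.

```python
import torch
import torch.nn as nn

class SpectrogramClassifier(nn.Module):
    """Tiny CNN: the network learns its own features from the spectrogram."""
    def __init__(self):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(32, 1)  # one logit: car engine or not

    def forward(self, spec):          # spec: (batch, 1, freq_bins, time_frames)
        return self.head(self.conv(spec).flatten(1))

model = SpectrogramClassifier()
spec_batch = torch.randn(8, 1, 128, 256)                 # dummy spectrogram batch
loss = nn.BCEWithLogitsLoss()(model(spec_batch), torch.ones(8, 1))
loss.backward()                                          # one gradient step of the training loop
```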

I can repeat this process, but eventually I want to do the same thing to separate a track's sources into vocals and instruments. So let's see how it sounds. Let's go back to Adele. Okay, now I will click a few buttons that I prepared; this DJ controller is basically a keyboard, so I can configure it to press some buttons and it will trigger something. In this way, I want to separate Adele, for example. Now, I don't know if you realize it, but this is a very high resolution, high quality example of separating the track. You can hear Adele's voice very clearly, including, by the way, the other singers. Yeah, the backup singers, thank you. And let's see how it goes with the guys from Crazy Town. Come my lady, come my lady, you're my butterfly, sugar, baby. Come my lady, come, come my lady, you're my butterfly, sugar, baby. You're such a sexy, sexy, pretty little thing, this little bitch, she got me strung with your tongue ring. So it's a bit less impressive than the Adele result, but we still get very good quality on the separated vocals. And if, in this example, I would like to get only the instruments, I get something like that. Now I have just the beats, and I'm able to work with them. And we can do something, maybe something like that.
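
The talk uses the DJ software's built-in separation, but for readers who want to try this at home, one open-source option (my suggestion, not the tool used on stage, and the file name is hypothetical) is Spleeter with its pretrained two-stem model:

```python
from spleeter.separator import Separator

# Pretrained 2-stem model: vocals + accompaniment (instruments).
separator = Separator("spleeter:2stems")
separator.separate_to_file("rolling_in_the_deep.mp3", "output/")
# Writes output/rolling_in_the_deep/vocals.wav and accompaniment.wav
```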

6. Achieving Source Separation with Neural Networks

Short description:

I explore how source separation is achieved using neural networks. By converting audio to spectrograms and applying convolutional and recurrent neural networks, the network can extract features and create a mask for separating vocals and instruments. This has an immediate impact on industries such as karaoke, music transcription, and beat matching and tracking.

I can do, let's see, let's go back to Adele. Sorry. The scars of your love remind me of us, they keep me thinking that we almost had it all. The scars of your love, they leave me breathless, I can't help feeling we could have had it all. You had my heart inside of your hand, and you played it to the beat. Nice, right?

So let's see how it is actually done under the hood. I took this example from this paper, and I'm not going to go into why they did this and not that. There are probably better ways to do it today, but I find this example very easy for understanding how the neural network actually performs this task and what it does. So let's go through it together.

So at first, I take the audio and convert it to a spectrogram, which is the visual representation of the track. Let's see, wow, I have like five minutes left, okay. Stressful. The next thing is that I use convolutional neural networks, which excel at breaking the music apart into small pieces and spotting the local patterns, and then recurrent neural networks, which excel at finding the dependencies between those little parts of the track. And in practically every neural network architecture for this task of source separation, you will find both a convolutional part and a recurrent part.
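
A hedged PyTorch sketch of that combination follows; it is not the exact architecture from the paper, just the convolution-then-recurrence idea that ends in a mask the same size as the input spectrogram.

```python
import torch
import torch.nn as nn

class VocalMaskNet(nn.Module):
    """Convolutions spot local time-frequency patterns, a bidirectional LSTM links
    them across the track, and the output is a 0..1 mask over the spectrogram."""
    def __init__(self, freq_bins=513, hidden=256):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 1, kernel_size=3, padding=1), nn.ReLU(),
        )
        self.rnn = nn.LSTM(freq_bins, hidden, batch_first=True, bidirectional=True)
        self.out = nn.Linear(2 * hidden, freq_bins)

    def forward(self, spec):                               # spec: (batch, freq_bins, frames)
        x = self.conv(spec.unsqueeze(1)).squeeze(1)        # local patterns, same shape
        x, _ = self.rnn(x.transpose(1, 2))                 # dependencies between frames
        return torch.sigmoid(self.out(x)).transpose(1, 2)  # mask: (batch, freq_bins, frames)

net = VocalMaskNet()
mix_spec = torch.rand(2, 513, 400)   # dummy magnitude spectrograms of two mixes
mask = net(mix_spec)                 # same shape as the input, values between 0 and 1
```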

Eventually, like with every neural network run, we get a lot of feature maps, and we combine all those feature maps into what is called a mask. The mask covers the whole sound and is the result of this network, and what I do with this mask is pass the original input, as is, through it. You can think of it as a filter where, in this case, I'm eventually left with the vocals. In the training phase of the network, I can compare the result to what I have in the dataset, so the network knows whether it has done its job right or not according to some loss function, and eventually I get a model that is able to extract the vocals in production. Now, in order to get the other part of the track, meaning the instruments, all I need to do is subtract the vocals from the original input, and then I'm left with the instruments. And this actually has an immediate impact on a lot of industries. As we speak, the karaoke world is being revolutionized right now. Everything related to music transcription and lyrics is now much easier, because I can build a multi-model pipeline that will first peel apart the vocals and then run another model to do the transcription, which is something we have known how to do for quite some time. And of course, beat matching and tracking is a lot easier; I will demonstrate it even more. This is actually happening right now, and it's amazing. It's amazing to me as a musician, as an artist, as someone who is also trying to create new mixes and new versions: I can take a sample of just one particular instrument and do some very good stuff with it that people really like on the dance floor. I can see it.
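
Sticking with dummy tensors instead of a trained network, the masking, the training comparison, and the subtract-the-vocals trick described above could look roughly like this:

```python
import torch
import torch.nn.functional as F

mix_spec = torch.rand(513, 400)        # magnitude spectrogram of the full mix
mask = torch.rand(513, 400)            # what the trained network would predict (0..1)
target_vocals = torch.rand(513, 400)   # isolated vocal spectrogram from the dataset

# Pass the original input "through the mask" to keep only the vocals.
vocal_spec = mask * mix_spec

# Training signal: compare the masked result with the dataset's ground-truth vocals.
loss = F.l1_loss(vocal_spec, target_vocals)

# The other stem comes for free: subtract the vocals from the original input.
instrument_spec = mix_spec - vocal_spec
```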

7. Exploring Data Science and Track Combination

Short description:

I share my experience with data science, using open source tools and datasets for music. This approach is accessible for anyone to try and has yielded amazing results. Now, let's move on to combining multiple tracks, starting with Adele.

I can spot it. And what I want you to take from this session is that I'm not a data scientist, okay? I mean, sure, I know how to deal with data science stuff. I will post all the links afterwards, don't worry. But everything is out there. I did it step by step with open source tools and open source datasets for music. You can try it yourself today, tonight, in an hour. Again, I find it very fun, very creative, and, I don't know, for me it's amazing. It's working.

So, are you ready to do some real party now? Okay. Thank you. What I'll try to perform now is to actually combine as many as four tracks and play all of them together. It's a bit hard, so I really hope to pull it off. We will first start with our beloved Adele, and let's see how it goes from there.
