Decoding Vector Search and Embeddings for React Developers


In this talk we'll break down key generative AI concepts like vector search and vector embeddings, and how React developers can leverage them with technologies like LangChainJS to build intuitive AI-powered apps. We will discuss different AI-enabled search experiences, such as vector and hybrid search, exploring how each method works, their advantages and disadvantages, and practical use cases.

The talk will break down what can be overly complex concepts into practical takeaways for our everyday work as React/JavaScript developers. By the end of the session you'll have a better understanding of when and how to use different search techniques to optimize your user experience. Most importantly, you will also learn how, as React developers, we can use these search techniques with open-source tooling and libraries, including some powerful search engines written in TS/JS, to create AI-powered applications in React.

This talk was presented at React Day Berlin 2024.

FAQ

Couchbase is a NoSQL database platform that empowers developers with AI capabilities, including support for vector search.

Vector search is a method that uses vectors to find semantically related meanings in data, improving search accuracy. It's important for React developers as it enables more powerful search experiences in applications.

Traditional search methods rely on exact keyword matching, whereas vector search finds semantically related data by converting text into vectors and comparing their similarities.

Vectors are lists of numbers representing words, phrases, or images, conveying their meaning. Embeddings are these vectors as produced by a model: compact, relatively low-dimensional numerical representations of data that make it easier to search and organize.

React developers can use APIs and tools like LangChain and Vercel's AI SDK to integrate vector search capabilities into their applications, improving search accuracy without designing complex algorithms.

Semantic search finds meanings in queries rather than exact keyword matches, similar to vector search. It enhances user experience by providing relevant results based on context.

Tools like LangChain, Vercel's AI SDK, and various vector databases can help React developers integrate vector search capabilities into their applications.

Vector search improves user experience by providing more accurate search results based on the semantic meaning of user queries, reducing the chances of users not finding what they're looking for.

Common distance measures in vector search include Euclidean distance, cosine similarity, and Manhattan distance, which evaluate how close vectors are to each other.

RAG combines vector search with generative AI to find relevant data and generate coherent text, improving the accuracy and relevance of AI-generated responses.

Shivay Lamba
28 min
16 Dec, 2024

Video Summary and Transcription
Welcome to my talk at React Day Berlin. Today, we'll discuss vector search and embeddings for React developers. Traditional search is broken, resulting in poor user experiences. Converting data into embeddings unlocks more accurate search, content recommendations, and document comparisons. Vector search measures the closeness or similarity of vectors using distances. In vector search, the closest relevant documents are found based on similarities between vectors. There are multiple ways to calculate the distance between data in vector search. Generative AI applications often deal with both textual and image data, making vector search useful. By converting queries and data into vectors, semantic meaning is considered, allowing for more accurate results. React and Next.js developers have access to a range of tooling, including LangChain and the AI SDK from Vercel. Retrieval augmented generation enhances search results by converting data into vectors and generating more coherent text. The user query is converted into a vector and matched using vector search, and the response is generated using the AI SDK from Vercel.

1. Introduction to Vector Search and Embeddings

Short description:

Welcome to my talk at React Day Berlin. Today, we'll discuss vector search and embeddings for React developers. Traditional search is broken, resulting in poor user experiences. We'll explore vectors, embeddings, and how they apply to React development. Vectors are lists of numbers that represent text, and embeddings organize text for efficient searching.

Hello, everyone. Welcome to my talk at React Day Berlin. I'm Shivay Lamba. I work as a developer evangelist at Couchbase, which is a NoSQL database platform empowering developers with a lot of AI capabilities, including support for vector search.

So today's session is going to be around decoding vector search and embeddings for React developers. Now imagine that you are like Joey from Friends, sitting on a really nice-looking couch. Of course, all of us love a good couch and a good chair, right? If you're a software developer, you probably love your work chair, or you just enjoy watching a movie while rocking back in a really nice chair. And if you use it too much, sometimes your chair just breaks. So of course, the next step would be to look for that particular type of chair that you really love. You might type a search query on an e-commerce platform like "I'm looking for an old chair, which is very comfortable, and it has kind of a brown color", because you can't really remember the exact brand of that particular chair, because it's so old.

And what would happen if you were to give such a detailed description of the chair and try to search with it on different e-commerce platforms? Let's see what actually happens. Do you think Joey will be able to find the same chair that he's looking for? Well, according to a lot of surveys, 88% of users don't return to a website after a bad experience where they are not able to find what they're looking for. And 61% of users say that if they are not able to find what they're looking for within five seconds, they will go to the next site to find results.

So if I put that same description into an e-commerce platform and search for that particular type of chair, we probably won't find the thing we're looking for. We find a bunch of other random chairs that do not match the description we're searching for. So what does that mean about search experiences? We saw that we didn't find any earthy brown chairs, no manual or automatic recliners. At least these were chairs, but they still didn't meet the description we wanted. That is why traditional search is actually broken.

So what we're going to be talking about today: what is a vector, what are embeddings, what is vector search, and how does it all apply to us React or Next.js developers? We saw that traditional search is typically based on full-text search, where whatever user query is sent, we try to match the exact search query and find results in our database that match it word for word, letter by letter. So if you have a very vague description like "I'm looking for a really nice-looking brown chair that is reclining", and that same text does not exist in your database, you will not be able to find results for it. So let's find out how we can overcome that issue. First, let's understand what exactly a vector is. A warning: if you are a React developer or a JavaScript developer, this will involve a lot of numbers. A vector is a list of numbers that can represent words, phrases, images, and a lot more. Each number in this list carries some information about the meaning of the text that you are showing to it. An embedding, or a vector embedding, turns these words or sentences into these low-dimensional lists of numbers. These embeddings are very compact, and they help make tasks like searching and organizing text very easy.

2. Working with Embeddings for Vector Search

Short description:

Converting data into embeddings unlocks more accurate search, content recommendations, and document comparisons. Embedding models are specialized AI models that convert text into numerical arrays using neural networks. Vector search measures the closeness or similarity of vectors using distances. Closer vectors indicate similar real-world features. For example, a query like 'I'm looking for my old chair with its leg part went up' can be used for vector search.

So for example, if you take the entire description that we were trying to search in our search box and convert it into an embedding using an embedding API, it gets converted into this vector. Here you have an array of numbers, and these values represent the real-world features of your data. You're converting the real-world features of the data into numerical values with the help of an AI model called an embedding model. The embedding model in this case is the text-embedding-3 model, which converts your input data into embeddings that represent the real-world features, or dimensions, of your data. And the reason why we convert this into numerical, embedding format, we'll come to later.

And over here, this text-embedding model is from OpenAI. So your application transforms all of this textual data into a large array: the dimensions of the vector representation. By converting your data from text into these embeddings, you unlock more accurate search, content recommendations, and the ability to do document comparisons. So the question is: is this really magic? Let's first discuss embedding models. Embedding models, at a high level, convert your text into numerical arrays. This happens with the help of specialized AI models, which really are just neural networks. In the landscape of machine learning and deep learning, these are neural networks that take the words in your data, understand what is there, and convert them into numerical values. Of course, these models are trained over vast amounts of data to properly understand what the data actually represents, so that they can convert it into numerical form while keeping the relevant context of the features of that data. These days, different AI models like the GPT models are used as large language models, which are generative models that allow you to generate human-like responses, whether it's text generation or summarization. Those are a different category of models, used for LLMs. The embedding models are what convert your data into numerical format.
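The text-to-numbers step can be pictured with a deliberately tiny sketch. The hash-based featurizer below is only a stand-in for a real embedding model such as OpenAI's text-embedding-3 family (which you would call over an API); it demonstrates the shape of the transformation, a string in and a fixed-length `number[]` out, not real semantics:

```typescript
// Toy stand-in for an embedding model: maps text to a fixed-length
// numeric vector. Real models learn their dimensions from vast
// training data; this hash trick only shows the input/output shape.
type Vector = number[];

function toyEmbed(text: string, dims = 8): Vector {
  const vec = new Array<number>(dims).fill(0);
  for (const word of text.toLowerCase().split(/\s+/)) {
    // Hash each word into one of the `dims` buckets.
    let h = 0;
    for (const ch of word) h = (h * 31 + ch.charCodeAt(0)) % dims;
    vec[h] += 1;
  }
  // Normalize to unit length so vectors are comparable by angle.
  const norm = Math.hypot(...vec) || 1;
  return vec.map((v) => v / norm);
}

const v = toyEmbed("a comfortable old brown reclining chair");
console.log(v.length); // 8 numbers, one per "feature" dimension
```

A real embedding has hundreds or thousands of dimensions, but the calling pattern is the same: hand text to the model, get back an array of numbers.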

So now let's understand how vector search actually works. We've discussed what vectors are, and the embedding models that generate these vectors from our input data. These embeddings are also multi-modal, which means you can have text-based embeddings, image embeddings, or even a combination of both images and text. So once you convert data into embeddings, what does that actually mean? Why are we even converting it into embeddings? That's where vector search comes into the picture. First, let's talk about closeness and understand what closeness actually means. Closeness in vector search means how close two vectors are, or how similar they are, which is typically measured with the help of a distance: a value representing how far apart the vectors are. In a nutshell, if two vectors have a smaller distance, meaning they are closer to each other, they have more similarity. And what does that mean in the real world? These vectors represent the real-world features of something. If the vectors are closer to each other, the real-world features of the two things match more than if they are further apart. Two things that are similar in their features will have vectors that are closer to each other; the distance between them will be smaller. So here is an example. Let's say that we have the query "I'm looking for my old chair with its leg part that went up."

3. How Vector Search Works

Short description:

In vector search, the closest relevant documents are found based on similarities between vectors. Traditional text search would not find results based on semantic meaning, but vector search does. It finds values that are semantically related to the search query and returns them based on mathematical calculations of vector distances.

So in this case, you see a three-dimensional vector space, just a coordinate space with x, y, z dimensions, and each of the different items is represented as a vector. On the right-hand side you see the home furniture, on the left-hand side the living room sets, and at the bottom the office sets. Now, your search query also gets converted into a vector. You can see that the closest relevant set of documents are brown recliners and leather reclining chairs, because our query talks about a reclining, brown chair. It finds that the closest values are the brown chairs and the reclining chairs, and it gives you a list of the closest matching values. But dining room tables, for example, are much farther apart, because we are talking about a chair, and a chair is not closely related to a dining room table. What this means is that with traditional full-text search, if you had searched for "I'm looking for my old chair", it would have tried to find a database value matching the exact description in your search query, which of course would have given you no result. But with vector search, you're able to find semantically related meanings, and because of that, values which are semantically related to your search query. Since your query talks about a broken chair that reclines and is brown in color, it finds the values whose features are similar, and returns them based on the mathematical calculation of how close the vectors are, because these vectors, as I reiterate, represent real-world features. That's how vector search basically works.
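The "find the closest vectors" step from the furniture example can be sketched as a brute-force nearest-neighbor search. The documents and their 3-dimensional vectors below are made up for illustration; real embeddings come from a model and a real database would use an index instead of scanning everything:

```typescript
type Doc = { title: string; vector: number[] };

// Cosine similarity: 1 means same direction (very similar features),
// values near 0 mean the features are unrelated.
function cosine(a: number[], b: number[]): number {
  let dot = 0, na = 0, nb = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    na += a[i] * a[i];
    nb += b[i] * b[i];
  }
  return dot / (Math.sqrt(na) * Math.sqrt(nb));
}

// Hypothetical 3-d vectors: [chair-ness, recliner-ness, table-ness].
const docs: Doc[] = [
  { title: "Brown recliner", vector: [0.9, 0.8, 0.0] },
  { title: "Leather reclining chair", vector: [0.8, 0.9, 0.1] },
  { title: "Dining room table", vector: [0.1, 0.0, 0.9] },
];

// Rank all documents by similarity to the query and keep the top k.
function vectorSearch(query: number[], k = 2): Doc[] {
  return [...docs]
    .sort((x, y) => cosine(query, y.vector) - cosine(query, x.vector))
    .slice(0, k);
}

// Query: "old brown reclining chair" → high chair/recliner features.
console.log(vectorSearch([0.85, 0.85, 0.05]).map((d) => d.title));
// The two recliners rank first; the table is far away in vector space.
```

This is the whole idea of vector search in miniature: the query becomes a vector, and ranking is just a similarity computation.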

4. Calculating Distance in Vector Search

Short description:

There are multiple ways to calculate the distance between data in vector search, such as Euclidean distance, cosine distance, and Manhattan distance. React developers don't need to design these algorithms. They can use provided APIs and functions to handle the complexity of vector databases and embedding models. Keyword search can become slow and complex, but vector search is performant, especially when dealing with complex queries and multi-modal data like images and text.

So now, of course, the question is: how is this closeness actually determined? We'll go into the AI space now and understand it with the help of math. The idea is that there are multiple ways to calculate the distance between data points. The first approach is the Euclidean distance, which groups together different things by similarity. Then we have the cosine distance, which is used a lot in text analysis: it captures the meaning of the text, not the size of the text. Based on the meaning of the text provided, it finds the values that are closest in meaning. And then we have the Manhattan distance, which is typically used when you have a lot of information and want to find straightforward differences between those pieces of information.
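All three measures mentioned above fit in a few lines each. These are the standard formulas, not tied to any particular library:

```typescript
// Euclidean distance: straight-line distance; smaller = more similar.
function euclidean(a: number[], b: number[]): number {
  return Math.sqrt(a.reduce((s, ai, i) => s + (ai - b[i]) ** 2, 0));
}

// Manhattan distance: sum of absolute differences along each axis.
function manhattan(a: number[], b: number[]): number {
  return a.reduce((s, ai, i) => s + Math.abs(ai - b[i]), 0);
}

// Cosine distance: 1 - cosine similarity; compares direction (meaning)
// while ignoring vector length, which is why it is popular for text.
function cosineDistance(a: number[], b: number[]): number {
  const dot = a.reduce((s, ai, i) => s + ai * b[i], 0);
  return 1 - dot / (Math.hypot(...a) * Math.hypot(...b));
}

console.log(euclidean([0, 0], [3, 4])); // 5
console.log(manhattan([0, 0], [3, 4])); // 7
console.log(cosineDistance([1, 0], [2, 0])); // 0: same direction
```

Notice that `[1, 0]` and `[2, 0]` have a cosine distance of 0 even though their Euclidean distance is 1: cosine ignores magnitude, which is exactly the "meaning, not size" property described above.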

Most typically you will use either cosine similarity or the Euclidean distance, which simply say that if two values are close to each other in the vector space, their real-world features are similar, so they are grouped by similarity. Of course, this is probably a lot to handle. The great thing is that as a React developer, you don't have to worry about actually designing these algorithms. This is mainly for us to understand what is going on behind the scenes when we build applications that use vector search. There is a lot of tooling available for us as React developers that removes all of the complexity of understanding vector databases and how embedding models generate embeddings. We don't have to worry about that. We just have to use out-of-the-box APIs and functions which handle all of this for us, and build really good user experiences that leverage vector search and benefit from its capability inside our React applications.

So of course, keyword search is typically very fast. But have you ever seen something like this? This is an example of a SQL query where we are trying to find the ID, title, content, author, and summary. Imagine you have a blog website where you are using full-text search, and in the backend you have this query that finds the relevant title, content, and author, matching documents where the title contains a particular keyword. If you're trying to match against multiple keywords, just imagine how complex your query becomes. With one keyword it's fine, but you can see how it can become very complex, and slow as well. This is where, as the query becomes very complex, vector search can be very performant. And I'd like to showcase an example before we jump into the code. It's even more useful when you have more than just words: images, text, video. Because of the multi-modal capability of these embeddings, we can have a combination of text and images as well, whereas full-text search probably won't even work with an image.
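The brittleness of keyword matching is easy to reproduce without any SQL engine. A minimal sketch over toy data, mirroring what the `LIKE`-style query above does:

```typescript
const movies = [
  { title: "Shrek", overview: "A green ogre rescues a princess." },
  { title: "Toy Story", overview: "Toys come to life." },
];

// Naive full-text search: a movie matches only if every query word
// literally appears in its title or overview.
function keywordSearch(query: string) {
  const terms = query.toLowerCase().split(/\s+/);
  return movies.filter((m) => {
    const text = `${m.title} ${m.overview}`.toLowerCase();
    return terms.every((t) => text.includes(t));
  });
}

console.log(keywordSearch("shrek").length); // 1: the exact word appears
console.log(keywordSearch("green monster").length); // 0: "monster" never appears
```

The second query fails even though an ogre clearly is a green monster: there is no semantic understanding, only string containment. That gap is exactly what vector search fills.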

5. Introduction to Vector Search in Generative AI

Short description:

Generative AI applications often deal with both textual and image data, making vector search useful. As a React developer, understanding vectors, vector embeddings, and their relation to real-world features is important. Let's explore an example using a GitHub repository for movies.

Because with generative AI applications, with all of the modern AI applications, we have a bunch of data that's both textual and has images as well. So that's where we will learn how vector search is useful.

So, before we talk about what's in store for React developers: we've covered a lot of theory in the first 15 minutes, what a vector is, what a vector embedding is, how vector embeddings relate to real-world features, and how you use vector search. So how do you do all of this as a React developer? Before that, I would like to quickly show you an example.

So this is a GitHub repository showing the latest movies. Here, in this modal, I can set how much of my search should be semantic, as a percentage. The page shows the full list of movies. If I keep it 0% semantic and search for a movie name, for example Shrek, it gives me the results, because it is finding the search results based on the exact keyword that I'm giving it.

6. Understanding Vector Search and Semantic Meaning

Short description:

When using vector search, you can find related items even if you don't remember the exact keyword. By converting queries and data into vectors, semantic meaning is considered, allowing for more accurate results. This improves the user experience by enabling searches based on descriptions rather than exact matches. Meilisearch, a search engine with vector search support, facilitates the transition between semantic and full-text search, with vector search being more performant in cases where exact word-to-word matching is not possible.

So the keyword is Shrek, and it finds me results whose titles or descriptions contain Shrek. It works well. But let's say I don't remember the word Shrek and I search for something like "green-colored ogre", which is basically what Shrek is. You can see that it does not give me the result, because full-text search does not know that Shrek is a green-colored ogre. But if I now transition to 100% semantic and search for "green-colored ogre", the first result I get is actually Shrek. The reason is that Shrek is a green-colored ogre. Because I'm no longer using full-text search, it's no longer just doing keyword matching: the input query "green-colored ogre" gets converted into a vector, and all the movie data has already been converted into embeddings. The vector for Shrek is similar to the input query vector for "green-colored ogre", because an ogre is like a monster, and Shrek is a green-colored monster. When we convert them into features, the features of a green-colored ogre match those of Shrek, so they are closely related to each other. That is why the top result is Shrek. This improves the experience a lot, because I'm no longer required to provide an exact keyword match. As a user, if you don't exactly recall something, you can give some sort of description, and if it matches some of the data in your dataset, you'll get the data. This is an example of how you can do it with the help of Meilisearch, a search engine that has vector search support.
So that's how you can very easily transition between semantic and full-text search, and you see the power of full-text search versus vector search, with vector search being a lot more performant for use cases where you don't exactly remember the wording needed for a word-to-word match with full-text search.
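The 0%-to-100% "semantic" slider in the demo corresponds to hybrid search: blend a keyword score and a vector-similarity score with a weight. Meilisearch exposes this idea as a semantic-ratio setting on hybrid queries; the hand-rolled ranking below is a simplified sketch of the blending, with made-up scores:

```typescript
type Scored = { title: string; keywordScore: number; vectorScore: number };

// ratio = 0 is pure full-text search, ratio = 1 is pure vector search
// (the "0% vs 100% semantic" slider from the demo).
function hybridRank(results: Scored[], semanticRatio: number): Scored[] {
  const blended = (r: Scored) =>
    (1 - semanticRatio) * r.keywordScore + semanticRatio * r.vectorScore;
  return [...results].sort((a, b) => blended(b) - blended(a));
}

// Query: "green-colored ogre". Shrek has no keyword overlap but a very
// close vector; a movie with "Green" in the title matches the keyword.
const candidates: Scored[] = [
  { title: "Shrek", keywordScore: 0.0, vectorScore: 0.95 },
  { title: "Green Book", keywordScore: 0.9, vectorScore: 0.2 },
];

console.log(hybridRank(candidates, 0)[0].title); // "Green Book"
console.log(hybridRank(candidates, 1)[0].title); // "Shrek"
```

Intermediate ratios give you the best of both: exact titles still rank well, while descriptive queries surface semantically related items.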

7. Toolings for React and Next.js Developers

Short description:

React and Next.js developers have access to a range of tooling, including LangChain, an open-source framework for building generative AI applications. The AI SDK from Vercel offers various tools to build generative AI applications powered by large language models and vector search. Ready-to-use UI elements are provided for semantic search in a React UI. An example Next.js application demonstrates the use of vector search to find related blog posts. In the Gen-AI space, retrieval augmented generation enhances search results by converting data into vectors and generating more coherent text.

So now let's move on and understand what's in store for a React or Next.js developer. You have a number of different tools available to you. We have LangChain, which is an open-source framework that allows you to build generative AI applications, and there's a specific library for JavaScript that allows you to create these kinds of experiences, like vector search, or even more advanced generative AI applications that leverage vector search.

And then of course, we have the AI SDK from Vercel, which provides a lot of different tooling for building generative AI applications powered by large language models and vector search. These are completely open source, and we'll show them today with the help of a demo. You also get examples of UI elements that you can directly add to your React UI to provide semantic search. Semantic search is basically similar to vector search: you find the semantic meaning of the search query and what is related to it inside your database. So you get some ready-to-use UI elements as well that you can leverage in your codebase.

And here's an example of a Next.js application where I'm using an API. You can see that I'm connecting to a database cluster, then using a vector search query, where I send in my query, create an embedding for the input query, and call scope.search. Internally this uses vector search: I provide my search index, and it finds the list of blog posts most closely matching the input query that I've sent. If you are searching for React.js documents and there's a blog post with React.js in the title, you'll find it with keyword search. But say you're searching for React hooks and there's no blog post with "React hooks" in the title: if you create a vector embedding and use vector search, you might still be able to find the relevant results.
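The shape of such an API route can be sketched with the database and embedding calls abstracted away. Note that `embed` and the in-memory store below are hypothetical stand-ins for the real calls (an embeddings API request and a vector index query against a database such as Couchbase); they exist only so the sketch runs end to end:

```typescript
type Vec = number[];
type Post = { title: string; vector: Vec };

interface VectorStore {
  search(query: Vec, k: number): Post[];
}

// Stand-in store: in production this would be a vector index in your
// database, not an in-memory array scanned with cosine similarity.
function makeInMemoryStore(posts: Post[]): VectorStore {
  const cosine = (a: Vec, b: Vec) => {
    const dot = a.reduce((s, ai, i) => s + ai * b[i], 0);
    return dot / (Math.hypot(...a) * Math.hypot(...b));
  };
  return {
    search: (q, k) =>
      [...posts]
        .sort((x, y) => cosine(q, y.vector) - cosine(q, x.vector))
        .slice(0, k),
  };
}

// API-route-style handler: embed the query, then vector-search.
// `embed` is injected so the route logic stays independent of the
// embedding provider.
async function searchHandler(
  query: string,
  embed: (text: string) => Promise<Vec>,
  store: VectorStore
): Promise<Post[]> {
  const queryVector = await embed(query);
  return store.search(queryVector, 3);
}

// Demo with hypothetical 2-d vectors for two blog posts.
const demoStore = makeInMemoryStore([
  { title: "Getting started with React hooks", vector: [1, 0] },
  { title: "CSS grid layouts", vector: [0, 1] },
]);
console.log(demoStore.search([0.9, 0.1], 1)[0].title); // closest post
```

Injecting `embed` and the store also makes the handler trivial to unit-test, which is hard when the route talks to the database directly.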

Now, of course, we are living in a Gen-AI space, so what does all of this have to do with Gen-AI? If you have ever used a chatbot that doesn't have context, it starts to hallucinate and gives you very random responses. That's where retrieval augmented generation comes into the picture. What we do is convert all of your data into vectors and find the relevant documents. Then we augment the prompt with the results we get from vector search: we provide those results along with the additional input prompt, and generate the text, producing a more coherent response. To see how you would actually use it, here is a demo application: chat with PDF. Here, I can upload a PDF document.

8. Retrieval Augmented Generation with Vector Search

Short description:

When a PDF document is uploaded, the text is extracted and converted into vector embeddings. A vector database stores these embeddings. When a question is asked, the input query is converted into a vector and matched with existing vectors using vector search. The closest vector is identified, resulting in a fully functional retrieval augmented generation application. The code uses LangChain.js, the Couchbase vector database, and OpenAI embeddings. The text is split using a text splitter, an embedding model is initialized, and Couchbase credentials are validated. The documents and embedding model are sent to the Couchbase vector store, and the embeddings are generated from the data source and stored in the database. A retrieval chain is created to retrieve relevant data based on the input query using the retriever, which finds the most closely related values using vector search.

In this case, for example, I'll select "all things open". This is a PDF document. What happens when I select this particular document is that all of the text in the PDF gets extracted, converted into embeddings, and stored inside a vector database. Now if I ask it a question, for example "when was the conference held?", that input query also gets converted into a vector. Then I do the vector search, matching it against the list of all the vectors that I already have, and it finds the closest vector. Based on that, the result you see is a fully functional retrieval-augmented-generation application where my AI bot is not hallucinating: it is able to find the closest result based on the document that was just uploaded. And if you look at the code base, this is where all of the magic happens. The first thing you see is that we are using LangChain.js, the open-source framework for building Gen-AI applications with JavaScript. I have the document loader, I'm using the Couchbase vector database, and I'm using the OpenAI embeddings. As soon as we upload the PDF, we split it using a text splitter and extract all of the text. We initialize an embedding model, in this case from OpenAI. We validate our Couchbase credentials, since that's the database I'm using, and connect to the database cluster. Then we use the Couchbase vector store's from-documents function.
In this case, I send in the documents and the embedding model, and it generates the embeddings from my data source; those embeddings get stored inside my database. Then, when I'm actually chatting with my application, I create a RAG template, where I provide an input template saying that you're a helpful AI assistant and you have to give answers based on the input question. Again, I connect to my vector store. Now, based on the input query that is sent, I retrieve the relevant data: I create a retrieval chain. If I look at that function, when creating the retrieval chain I use the retriever. The way the retriever works is that, based on the input query you have sent, it finds the set of values most relevant to the input, using vector search to find the most closely related values, and sends them back to us. So in our application, when the user asks the input question, it has the context of the most relevant matching values from the vector search.
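The first step of that pipeline, splitting the extracted text into chunks before embedding, can be sketched by hand. LangChain's text splitters do a smarter, separator-aware version of this; the fixed-size splitter with overlap below is a simplified stand-in that shows why chunking exists at all:

```typescript
// Split text into overlapping chunks so each piece stays small enough
// to embed, while neighboring chunks share context at the boundaries.
function splitText(text: string, chunkSize = 200, overlap = 40): string[] {
  if (chunkSize <= overlap) throw new Error("chunkSize must exceed overlap");
  const chunks: string[] = [];
  for (let start = 0; start < text.length; start += chunkSize - overlap) {
    chunks.push(text.slice(start, start + chunkSize));
    if (start + chunkSize >= text.length) break; // last chunk reached the end
  }
  return chunks;
}

const doc = "x".repeat(500); // pretend this is the extracted PDF text
const chunks = splitText(doc, 200, 40);
console.log(chunks.length); // 3 chunks
console.log(chunks[0].length); // 200 characters (the last one is shorter)
```

Each chunk is then embedded and stored individually, so the retriever can return just the passages relevant to a question instead of the whole document.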

9. Generating Responses with LangChain and the Vercel AI SDK

Short description:

The user query is converted into a vector and matched using vector search. The response is generated by the model using the Vercel AI SDK, which allows for streaming responses. This demonstrates the power of LangChain in generating responses.

And then it generates a response that most closely matches the input query the user has sent. So the user query gets converted into a vector, vector search finds the most closely matching vector values, those values are sent back into the RAG pipeline, and the model uses them to generate a response. Finally, we're also using the AI SDK from Vercel here, which generates the response as a stream. That's why, in the response you saw, if I ask another question, for example, "What is the competitor mentioned in the doc?", it streams that response with the help of the AI SDK. The Vercel AI SDK works very well with RAG applications and with LangChain; it has built-in integrations for that. So you see that it generates the response. This is a classic example of how you can leverage LangChain.
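The last step before the model is called is assembling the final prompt: the retrieved chunks become the context, followed by the user's question. This is a hedged sketch of that assembly; the exact wording of the template and the `buildRagPrompt` name are illustrative, while in the talk this role is played by LangChain's prompt template.

```javascript
// Build the prompt sent to the model: retrieved chunks become the
// "context" block, followed by the user's question.
function buildRagPrompt(question, retrievedChunks) {
  const context = retrievedChunks.join("\n---\n");
  return [
    "You are a helpful AI assistant.",
    "Answer the question using only the context below.",
    "",
    "Context:",
    context,
    "",
    `Question: ${question}`,
  ].join("\n");
}

const prompt = buildRagPrompt("When was the conference held?", [
  "React Day Berlin took place in December 2024.",
]);
console.log(prompt.includes("December 2024")); // true
```

The string returned here is what a streaming call (for example via the Vercel AI SDK) would forward to the model, so the answer is grounded in the retrieved document rather than the model's training data.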
