React Server Components in AI Applications


In this talk, we explore how we can use AI to enhance user experience by having large language models return interactive React components that we define. Instead of instructing users what to do with text, our applications can present interactive user interface elements directly in response to a user query. We discuss this briefly in this PodRocket episode, where the Vercel CTO says it could be game-changing. In this talk, we teach developers how to implement such functionality in their own applications.

This talk was presented at React Advanced 2024.

FAQ

Tejas Kumar discusses the use of React server components with AI, focusing on how vector embeddings can improve search experiences.

He aims to solve the issue of ineffective natural language search on platforms like Netflix by using AI and vector embeddings to provide more accurate search results.

Vector embeddings are generated by specialized machine-learning models that convert natural language into a list of numbers, allowing for comparison in space to improve search accuracy.

Vector search works by comparing the vector of a user's query to the vector of an item, like a movie name, to find the closest match in the database.

Movies++ is a movie app being developed to improve movie search using AI-enabled React server components and a vector database.

The Vercel AI SDK is used to enable AI in the movie app, allowing for more interactive and accurate search capabilities.

Generative UI refers to creating dynamic and interactive user interfaces by using AI tools to query databases and render components like movie details.

He handles AI non-determinism by pre-defining components to ensure consistent rendering, allowing more control over how the UI looks.

His presentation covers vector embeddings, natural language search, and generative UI using React server components and AI.

Tejas Kumar is a developer relations engineer for generative AI at DataStax, with over 20 years of experience building on the web at companies like Ressel, Spotify, and Zeta.

Tejas Kumar
17 min
28 Oct, 2024

Video Summary and Transcription
Today we will discuss React server components with AI and how to build a better search experience using them. We will learn how to make a Next.js app AI-enabled using the Vercel AI SDK. The Vercel AI SDK's streamUI function with the GPT-4o model will be used to make suggestions interactive. We will explore the use of history and conversation in AI and how to continue the conversation and read the result. The concept of generative UI with the vector database will be introduced, along with querying the database for movies. We will process user queries and return movies based on them. The power of React server components in enhancing UI will be demonstrated. In summary, the talk covers vector embeddings, natural language search, and generative UI.

1. Introduction to React Server Components with AI

Short description:

Today, I want to talk to you specifically about React server components with AI. We're going to build a better search experience using React Server Components by querying a vector database called Astra. This is a huge part of generative AI.

Hey, I'm Tejas Kumar, and I've been building on the web for over 20 years at places like Ressel, Spotify, Zeta, and more. Today, I'm a developer relations engineer for generative AI at DataStax, but I'm not here to talk about any of that. Today, I want to talk to you specifically about React server components with AI.

And look, I don't want to take too much time, so let's just get straight into the demo. Here's what we have. So I love Netflix. I use Netflix a lot, but oftentimes, when I search for things, I search in natural language, like movies with a strong female lead, something like this, you know? And usually, it doesn't match. And that's a real problem with the Internet today: we think in natural language, but if I want to watch just a movie with superheroes, I can't type this without getting a non-answer.

This is changing with AI and React server components, and the way it's changing is through vector embeddings. Now, we need to spend a little bit of time talking about them because they're really powerful. Vector embeddings are generated by very specialized machine-learning models that take in natural language and output a big list of numbers. What makes this useful is that these numbers are comparable in space. What does that mean? If we think about a vector, a three-dimensional vector, that's just space, right? There's horizontal, vertical, and depth. That's 3D space, three-dimensional space, which is represented by a three-dimensional vector. Now, if I say words like dog and cat and pet are all over here, and words like airplane and aluminum are sort of over here, and words like React and Angular and Solid are all over here, you see, it's moving around space, and we're putting things in different places.
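The "comparable in space" idea usually boils down to cosine similarity: vectors pointing in similar directions score near 1. A minimal sketch — the toy three-dimensional "embeddings" below are invented for illustration; real models emit hundreds or thousands of dimensions:

```typescript
// Cosine similarity: the dot product of two vectors divided by the
// product of their magnitudes. Values near 1 mean "close in space".
function cosineSimilarity(a: number[], b: number[]): number {
  let dot = 0, magA = 0, magB = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    magA += a[i] * a[i];
    magB += b[i] * b[i];
  }
  return dot / (Math.sqrt(magA) * Math.sqrt(magB));
}

// Toy 3-D vectors: dog and cat point in a similar direction,
// airplane points somewhere else entirely.
const dog = [0.9, 0.8, 0.1];
const cat = [0.85, 0.75, 0.2];
const airplane = [0.1, 0.2, 0.9];

console.log(cosineSimilarity(dog, cat) > cosineSimilarity(dog, airplane)); // prints "true"
```

Vector search is this comparison run at scale: embed every item once, embed the query at request time, and return the items whose vectors score highest against it.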

That's what an embeddings model does. It literally just embeds vector representations of natural language in space. When you have those, you can then compare the space given by the vector of a user's query to the space of a thing, like a movie name, and find things that are closer. Literally, this is how vector search works. And this is a huge part of generative AI. So together, what we're going to do is build a better search experience using React Server Components. The way we're going to do that is by querying this database. You need to have your vectors live somewhere. At DataStax, we make a vector database called Astra, and we loaded a bunch of movies here into this database. And you can see their vector representations if you look at this field here. So I'm just going to copy that, and I'm going to paste it. This is the vector representation for one of the movies. It's kind of nuts. It's just like this huge array of just coordinates, right? And so we're going to query this and build some things. What we're going to build is a movie app called Movies++.

2. Making a Next.js App AI-Enabled with Vercel AI SDK

Short description:

We're going to code together to make a Next.js app AI-enabled using the Vercel AI SDK.

It looks like this. And what I want is to search for movies with superheroes, and have it do stuff. But it doesn't exist now. We're going to code this together in real time.

So to do that, let's just open our editor. And this is a Next.js app. You can see that by the app directory. And every Next.js app starts with pretty much a layout. So this is our layout. It's just HTML body children. Inside the layout, we've got a page. That's our homepage. If we go split-screen, what we can see is we have the header, the logo, and the search form. And in the search form, we have a text input and some suggestions. And this is all live, meaning if I change this whole suggestion to like, hi, it'll just update. You see? It says hi. And so this is all live. And what we're going to do is make this AI-enabled, okay?

We're going to do that using the Vercel AI SDK. So how do we do that? Well, let's make a new file, first of all, called AI.tsx. This is a server-only component. And it's very important that we add 'use server' at the top. Now we can export const AI. And this will create a context for us. And in the context, we have some actions. We'll talk about what those are. They're literally what they sound like. And we have AI state and UI state. So initial AI state, initial UI state. And these are just arrays of conversations. And the conversations look like, you know, they look like there's a role, which is, you know, user or assistant.
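Reconstructed from that description, the file looks roughly like this. It's a sketch against the Vercel AI SDK's `createAI` API from `ai/rsc`; the message types and the `./actions` import path are illustrative, not the talk's exact code:

```tsx
// ai.tsx -- creates the AI context that will wrap the whole app.
import { createAI } from "ai/rsc";
import type { ReactNode } from "react";
import { continueConversation } from "./actions"; // the action defined next

// One turn of the conversation: a role plus the content (the prompt).
export type Message = {
  role: "user" | "assistant";
  content: string;
};

export const AI = createAI({
  actions: { continueConversation },
  // AI state is what the model sees; UI state is what the user sees.
  initialAIState: [] as Message[],
  initialUIState: [] as { role: string; display: ReactNode }[],
});
```

The layout then wraps its children in `<AI>…</AI>`, exactly as the next section describes.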

3. Making Suggestions Interactive with Vercel AI SDK

Short description:

We need to make the suggestions in our layout do something. For example, when a suggestion for movies with a strong female lead is clicked, we want to call the search function from useMovieSearch. We'll create an action called continueConversation to make it continue the conversation. We'll use the Vercel AI SDK's streamUI function with the GPT-4o model to get the response.

And there's the content, which is the prompt. Okay? This is literally what, it's just an array of that. Okay, cool. So now that we have this, we're going to go to our layout, so our outermost level, and wrap everything in this AI context. We're just going to auto-import it just there. Okay? Sorted.

Now we need to make these suggestions do something. So the suggestion for, you know, movies with a strong female lead, when you click it, we need to do something. So on click, it calls search from useMovieSearch. Okay, what is search? Search is a function that comes from useMovieSearch, and it does nothing. So we need to make this do something.

Okay, what do we want to make it do? Well, we want to make it continue the conversation. So for that, we'll create an action called continueConversation. And it's an async function that gets a prompt, and we'll see what happens. We'll just say prompt is a string. This is TypeScript. Okay, so now we need a response, which is the return from a function called streamUI from the Vercel AI SDK. StreamUI requires a model, so we'll get OpenAI from Vercel's wrapper. I think it's a named export from @ai-sdk/openai. And we'll call this and use GPT-4o. The autocompletion here is very nice. So we'll use 4o. Great.

4. Using History and Conversation in AI

Short description:

We need a list of messages that come from the history. If we get text back, we update the history and return the content. Finally, we define the continue conversation function and the search function.

Next, we need a list of messages. But where do they come from? Well, they come from history. So we'll get history by calling getMutableAIState. Again, it's just an array, right? And messages is history.get, what we have so far. And we'll add the incoming prompt, just like that from the user.

Next, we need to say what happens if we get text back. So we'll do this, and we'll see what we get. So we get content, and we get done. And so if we're done, then we can call history.done to update history. And we'll just wrap it with the current history, plus the role is assistant, not AI, and we'll get back the content. Finally, we'll return the content. Notice we're returning the content, not just if we're done, but always, because this is streamed. Finally, we'll return the role is assistant, and the display is response.value. That's it.
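Assembled from the last two sections, the action looks roughly like this — a sketch against the Vercel AI SDK's `streamUI` and `getMutableAIState` APIs (the file name and exact option shapes are assumptions; they may differ slightly between SDK versions):

```tsx
// actions.tsx -- server-only, hence the directive at the top.
"use server";

import { streamUI, getMutableAIState } from "ai/rsc";
import { openai } from "@ai-sdk/openai";

export async function continueConversation(prompt: string) {
  const history = getMutableAIState();

  const response = await streamUI({
    model: openai("gpt-4o"),
    // The history so far, plus the incoming prompt from the user.
    messages: [...history.get(), { role: "user", content: prompt }],
    // Called repeatedly as text streams in; `done` is true on the final chunk.
    text: ({ content, done }) => {
      if (done) {
        history.done([...history.get(), { role: "assistant", content }]);
      }
      return content; // returned on every chunk, because this is streamed
    },
  });

  return { role: "assistant", display: response.value };
}
```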

This is our continue conversation. We've defined it. Now let's use it. So we'll come back here. And, sorry, this is our search function. So we'll say const. First of all, we'll get the conversation. Let's come here and say, you know, equals use UI state, excuse me. So we'll use UI state. And we just need to set conversation. This is a getter-setter. So we get the conversation array, and then we can set it. Okay, great. So now what do we want to do? Well, we want to set the conversation to whatever the old conversation, let's just say, Matt Pocock, cover your eyes, it's whatever the old conversation was, but now we have a message from the user, right? So we set the conversation there. Next, we get the response from the LLM via the AI SDK.

5. Continuing the Conversation and Reading the Result

Short description:

We get the response from the LLM via the AI SDK and continue the conversation. Then we set the conversation with the response. Finally, we read the result from use UI state.

Next, we get the response from the LLM via the AI SDK. So we'll say, await continueConversation. We need to get that: const continueConversation, from useActions. Okay. And we'll continue the conversation. And again, it expects a prompt, so we'll just pass the prompt. And it's wrapping the prompt here. Okay.

So we'll do that. And now we get a response, so we need to set the conversation again. This time, just with the response. Great. This looks good. Finally, we go back to our page, and we're reading the result, but it's from an empty array. Instead of this, we'll read the result from use UI state. Because again, we're setting UI state here, right? This is a setter of use UI state. So we set it, and we read it. That's pretty much it.
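The client side of that flow, reconstructed from the description — `useMovieSearch` is the talk's hook, but its exact shape here is an assumption:

```tsx
// useMovieSearch.ts -- client side, so it can use hooks.
"use client";

import { useUIState, useActions } from "ai/rsc";
import type { AI } from "./ai";

export function useMovieSearch() {
  // Getter-setter pair over the conversation array.
  const [conversation, setConversation] = useUIState<typeof AI>();
  const { continueConversation } = useActions<typeof AI>();

  async function search(prompt: string) {
    // Append the user's message first...
    setConversation((current) => [
      ...current,
      { role: "user", display: prompt } as any, // Matt Pocock, cover your eyes
    ]);

    // ...then append the streamed response from the server action.
    const response = await continueConversation(prompt);
    setConversation((current) => [...current, response]);
  }

  return { conversation, search };
}
```

The page then renders `conversation` instead of an empty array, which is the "set it, and read it" step above.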

Let's go see what happened. So now we have this, and I'll say hi, just to test that it works. It doesn't work, because hooks can only be called inside the body of a function component. So, my hooks need to be one level up. Let's try this again. Hi. Great.

6. Creating Generative UI with the Vector Database

Short description:

It's totally working. How can I assist you today? Let's try doing our search like this. Movies with a strong female lead looks good. But we notice it's giving us markdown. So let's wrap content in markdown. We can query a database where we have these movies, including metadata like the poster path. We can use this to render better UI, called generative UI. Let's create a tool called getUI for this purpose.

It's totally working. How can I assist you today? So now let's try doing our search like this. Great. This is actually working. How do we work? This is great. Totally cool. Okay, now, movies with a strong female lead looks good so far, but you notice it's giving us markdown, with the double asterisks and stuff. So let's go back here and wrap content in markdown. And these are server components. Look at that. But we need to escape. Let's try this again. Let's say movies with a strong female lead. Incredible. And so we have markdown, and it's working. But this is where it gets interesting, because if you look here, we're actually returning a server component. We could do better. What that means is we can actually query a database where we have these movies. In fact, we have them in our vector database. So if we search our vector database, we don't just get back the name, but we get back all kinds of metadata, like the poster path. And we can use this poster path to render not just text, but maybe better UI. This is called generative UI. And we can do that through the use of tools. So let's create a tool. And we'll call it getUI. Sure, why not? And we can define some stuff. We can do parameters and a description. The description is how the LLM knows how to call this. So we'll say get movies as UI. We can even say, use this tool when the user asks for UI.

7. Querying the Database for Movies

Short description:

Parameters are what the LLM creates to give to our function. We'll use Zod to validate this. The query becomes an input argument to our generate function. We want to query our database using the Astra DB TypeScript client. We have access to the database and can now perform a query to find movies based on the title.

Parameters are what the LLM creates to give to our function, which is a tool. So we'll just use Zod to validate this. Then we'll say Z.object. And what do you want? Well, you want the query. So we'll say query Z.string. Okay. And now this becomes an input argument to our generate function. And this is where we can do whatever we want and return UI. So now we get query. It already knows because of this. In fact, if I do this, it's going to see. It's so cool. It's type safe.

So what do we want to do here? Well, we want to query our database. So to do that, we'll use the Astra DB TypeScript client. So we'll import DataAPIClient from @datastax/astra-db-ts. And we'll just instantiate it here. We can say const client is new DataAPIClient. And we'll do some environment variables: process.env.ASTRA_DB_APPLICATION_TOKEN. I know this exists. And the database is client.db with process.env.ASTRA_DB_API_ENDPOINT. Okay. Cool. So now we have access to our database. We have a client for the database. So let's query it. So what we want to do is const movies equals, and exactly, the autocomplete is actually perfect: movies.find, where title is the query.
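Sketched out, the tool and database client look something like this. The collection setup follows `@datastax/astra-db-ts`; the env var names and the standalone `getUI` object (which would be passed to `streamUI` as `tools: { getUI }`) are illustrative:

```tsx
import { z } from "zod";
import { DataAPIClient } from "@datastax/astra-db-ts";

// Astra DB client, configured from environment variables.
const client = new DataAPIClient(process.env.ASTRA_DB_APPLICATION_TOKEN!);
const db = client.db(process.env.ASTRA_DB_API_ENDPOINT!);

const getUI = {
  // The description is how the LLM decides when to call this tool.
  description: "Get movies as UI. Use this tool when the user asks for UI.",
  // The LLM fills these in; Zod validation makes `query` type-safe in generate().
  parameters: z.object({ query: z.string() }),
  generate: async ({ query }: { query: string }) => {
    // Query the vector database and return UI -- filled in in the next section.
  },
};
```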

8. Processing User Query and Returning Movies

Short description:

We're going to compare vectors to convert the user's query into a vector. We'll use OpenAI's text embedding model to convert the query to an embedding. We find the movies and turn them into an array. Finally, we return a React component that maps over the movies and returns posters.

We're not going to do this. Instead, we're going to compare vectors. So we find the first argument is empty, but the second argument will say vector. And we need a vector embedding. Remember, we talked about embeddings. We need to convert the user's query into a vector.

How might we do that? Well, we can use OpenAI for this. So we can say vector is await new OpenAI — and we can obviously pass in our API key — dot embeddings dot create. And the input is the query. And the model is text-embedding-3-large. We're just going to convert the query to an embedding. And then we get back the response and we say response.data[0].embedding. So we get back the embedding, just like that, in one line.

So we find our movies and then we turn them into an array. Great. So now we have our movies. Finally, we just need to return a React component. So we'll return Movies, which is a component, and we pass in movies like this. That's it. So let's go back here. And what is this component? It's just a div that maps over movies and returns posters. That's really it. So let's see now how this works. I'm just going to say this is any. Don't tell Matt Pocock. Okay.
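Put end to end, the generate function from this section looks roughly like this. It assumes the `db` client from earlier and the talk's `Movies` poster-grid component (import path invented); the sort-by-`$vector` option is how `@datastax/astra-db-ts` expresses vector search:

```tsx
import OpenAI from "openai";
import { Movies } from "./components/movies"; // the talk's poster grid (path assumed)

async function generate({ query }: { query: string }) {
  // 1. Convert the user's natural-language query into a vector embedding.
  const openaiClient = new OpenAI({ apiKey: process.env.OPENAI_API_KEY });
  const response = await openaiClient.embeddings.create({
    input: query,
    model: "text-embedding-3-large",
  });
  const vector = response.data[0].embedding;

  // 2. Vector search: empty filter, sorted by closeness to the query vector.
  const movies = await db
    .collection("movies")
    .find({}, { sort: { $vector: vector }, limit: 4 })
    .toArray();

  // 3. Return a React server component that renders the posters.
  return <Movies movies={movies as any} />; // don't tell Matt Pocock
}
```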

9. Showing Movies Based on User Query

Short description:

When the user asks for UI, we call our tool to show movies with a strong female lead. The movies are interactive, and we can limit the number displayed. We can also show a spinner while generating the embeddings. Finally, we return the movies and give feedback to the user.

Okay. So when the user asks for UI. So let's go back to our thing and say show me movies with a strong female lead as UI. And so now it's going to try and call our tool and we'll see what happens.

There we go. Look at that. That's incredible. And the cool thing is these are interactive. So I can, like, click on them and I get taken to the actual page. Isn't that cool? But let's maybe do a limit. I don't want so many. So let's just add a limit here. Limit is four, and let's try again: movies.

But notice there was some lag. So what we can even do is show a spinner. So if we turn this into a generator function — if we do something like this and add a star — now it's a generator, meaning I can yield. So I can yield here and I can say loading movies. Actually, we can say something like generating embeddings. Right. And then once I get the embeddings, I can yield here and say asking Astra. And then finally, I return movies.
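As a generator, the same tool can yield intermediate UI before returning the final result. A sketch of the `getUI` tool's generate property, where `getEmbedding` is a hypothetical helper wrapping the `embeddings.create` call from the previous section:

```tsx
generate: async function* ({ query }: { query: string }) {
  // Each yield replaces the previous one in the streamed UI.
  yield <p>Generating embeddings…</p>;
  const vector = await getEmbedding(query); // hypothetical helper

  yield <p>Asking Astra…</p>;
  const movies = await db
    .collection("movies")
    .find({}, { sort: { $vector: vector }, limit: 4 })
    .toArray();

  return <Movies movies={movies as any} />;
},
```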

This is so cool. I give feedback to the user. Let's try this again. Show me superhero movies as UI. And see that's so cool. Perfect. Look at that. Superhero movies.

10. Enhancing UI with React Server Components

Short description:

I can use React components instead of strings, like integrations and spinners. We can create a great UX using tailwind and flexbox. I demonstrate client-side rendering and the power of generative UI with React Server components. These components provide control and prevent non-determinism. In summary, we covered vector embeddings, natural language search, and generative UI.

Indeed, I can even have these not be strings, but React components. So I can do Tailwind. I can say integrations. I have some spinners somewhere, I think. Let's see, I think I have Spinner. Surely it's an ellipsis, a spinner and an integration spinner. Awesome. So we could do the integration spinner. And we can close the div like this. And we can maybe import that. Perfect. And then I'll just replicate the spinner here, and really what I want to show you is we can make truly great UX. Look at that. And we can even do flexbox, because it's Tailwind: items-center, gap-2. And now we can say, what do you want? Did I make some typo? I added a semicolon. OK, this is great. So let's try again. Romantic comedies in the UK. Incredible.

And to show you that it's actually client-side rendered, I go to my Movies component and I can just go to my movie, and on the figure — which is just an image — I can say onClick, literally a client-side event here, onClick, and I can say alert('hi'). Right. And I could be like, movies with huge monsters. Now, I click that: client-side. Incredible. And so that's generative UI. And these are all server components. As we saw, if we go to the AI function, all of this is happening on the server. That's the power of generative UI. And that's how you can use React Server Components with AI today.

One final thing about this is that these components are pre-existing, which is good, because oftentimes a big problem with AI is non-determinism. But when you can force it to render these components that I create, you get way more control: instead of the large language model hallucinating some nonsense, you actually control how the UI looks. So in summary, we covered a lot of stuff. We covered vector embeddings, we covered natural language search, and finally generative UI. I'm around on the Internet if you want to find me. But for now, thank you so much for joining this presentation.
