Chat With Your Components


How many times have you done this: built an API, consumed the data with React, and then started over, again and again? What if we could build a full-stack React application that creates user interfaces on its own, without needing to write any backend code, letting us chat directly with our data? In this talk, we'll learn how to make an app that can change the way data looks using simple text commands and smart AI changes. Let's chat with your components.

This talk was presented at React Advanced 2024.

FAQ

Proxima is an open source analytics platform developed by Teodor, a web developer.

React and AI can be used to build smart user interfaces by leveraging components that function like Lego bricks, allowing for flexible data integration, content generation, and seamless UI updates.

ChatGPT reintroduced the chat interface as a straightforward way to interact with AI, decoupling end users from developers' release cycles and enabling dynamic UI interactions.

The AI package allows direct interaction with AI models from React components, enabling prompt-based responses and advanced UI capabilities using OpenAI and other model providers.

Zod Schemas provide a runtime schema to guide AI models, ensuring accurate output generation and enabling features like form pre-filling, data validation, and dynamic UI adjustments.

LangChain is an open-source framework that facilitates interaction with data sources and models, enabling dynamic SQL query generation and UI component creation based on user inputs.

AI can generate weather-related UI components by instructing the model to act as a weather assistant, using prompts and context to produce dynamic weather data representations.

Challenges include ensuring the model produces correct outputs for various scenarios, managing data mapping to user intentions, and maintaining visual consistency across different screens.

The AI package integrates with prompt engineering by using structured prompts to guide model outputs, enabling the creation of generative components for diverse applications.

Detailed information and examples from Teodor's talk can be found on his website, where he has published articles with use cases and code demos.

Theodore Vorillas
15 min
28 Oct, 2024

Video Summary and Transcription
In this Talk, the speaker discusses how to build smart user interfaces using React and AI. They explore using AI to enhance data interaction in a table component and demonstrate how AI can be used to pre-fill calendar slots in a booking engine. The speaker also explains how to create dynamic UIs with React by adjusting language prompts and specifying UI elements. They introduce LangChain, an open-source framework for building language model applications, and explain how it can be used to generate UIs. The speaker concludes by discussing how JSON data can be used to generate UIs and provides documentation and examples for building UI components.

1. Building Smart User Interfaces with React and AI

Short description:

In my talk, I'll explain how to build smart user interfaces using React and AI. React has revolutionized web development with its powerful and flexible component system. The chat interface allows for straightforward interaction with AI. We'll experiment with the AI package, which enables direct interaction with AI models. Using Zod schemas, we can guide the models to produce the correct output.

Hi. I hope you're enjoying the conference so far. My name is Teodor. I'm a web developer building Proxima, an open source analytics platform. In my talk today, I'll try to explain how you can build smart user interfaces using React and artificial intelligence.

In my honest opinion, React has been a game changer for web development. JSX allowed us to rethink the way we build user interfaces. Imagine you have a bunch of components in a design system. These components are like Lego bricks, but way more powerful and flexible. A component can fetch data, generate content, and pass down the result to other components. Then you can start building with these primitives and compose them into complex user interfaces.

But there's a fundamental problem. Every single time you build something, well, you have to actually write the code, right? Every time you want to make a change, you have to combine the data, the UI, and the logic. So the end users are tightly coupled to the developers' release cycles. When ChatGPT came out, though, it reintroduced an idea from the early 2000s: the chat interface. A simple and common user experience. Asking questions and getting answers back in plain text is the most straightforward way to interact with artificial intelligence. With the public release of new AI APIs, access to advanced technology became more open, allowing us to create applications that push beyond typical limits.

For this talk, we'll experiment a bit with the AI package. This is a thin wrapper around OpenAI and other model providers, along with some helpers, hooks, and abstractions built on top of React and the Node.js ecosystem. The AI package allows us to interact with AI models directly from our React components or API routes. So we can pass a prompt and the desired model, and get a response back in plain text format. And this is just the simplest use case; there's so much more we can actually do. A model operates like a black box: it has no understanding of your application's context or the data at hand. However, we can guide it by providing a runtime schema to ensure it produces the correct output. For this purpose, we're going to use Zod schemas. As an example, we can start exploring how to solve common user interface problems.
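To make that concrete, here is a minimal sketch of the simplest use case, assuming the Vercel AI SDK (`ai`) and its OpenAI provider package; the model name and prompt are placeholders.

```ts
import { generateText } from "ai";
import { openai } from "@ai-sdk/openai";

// Pass a prompt and the desired model, get a response back in plain text.
const { text } = await generateText({
  model: openai("gpt-4o-mini"), // placeholder model name
  prompt: "Which countries produce the most coffee?",
});

console.log(text);
```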

2. Enhancing Data Interaction with AI

Short description:

We explore using AI to tackle the issue of interacting with data in a table component. Streaming the response as a partial object allows for instant UI updates. A Zod schema can be used for form validation and data validation. Another example is a booking engine that pre-fills the calendar slot based on the prompt. The AI package integrates prompt engineering with generative components. We'll create a weather application called Breeze, using an agent prompt to guide the model's output.

Here, we've got a table component alongside a few dropdowns for filtering the data. While the dropdowns seem to guide the user, interacting with the data isn't as straightforward as it should be. In larger applications, this could lead to unnecessary clicks and visual clutter.

Now, let's explore how we can tackle this issue using AI. Inside our application, we have our filter schema and use it to generate a filter object that is passed along to the table component. We have just three fields: category, language, and country. We can pass the schema as context to the model and receive a response.

With the AI SDK, we can stream the response as a partial object, while the schema defines the structure of the data returned by the model. Streaming is a powerful feature, allowing us to update the UI almost instantly. Finally, we set up a basic input to trigger the filtering process. It's important to note that in frameworks like Next.js, we can fully take advantage of the streaming capabilities of React Server Components.
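A sketch of that flow, assuming a recent version of the AI SDK's `streamObject` helper; the three-field schema mirrors the filters described above, and the prompt and example values are illustrative.

```ts
import { streamObject } from "ai";
import { openai } from "@ai-sdk/openai";
import { z } from "zod";

// The same schema defines the structure of the model's response
// and the shape of the table's filter state.
const filterSchema = z.object({
  category: z.string().optional(),
  language: z.string().optional(),
  country: z.string().optional(),
});

const { partialObjectStream } = streamObject({
  model: openai("gpt-4o-mini"),
  schema: filterSchema,
  prompt: "Greek podcasts about technology", // the user's free-text input
});

// Each partial object can update the UI almost instantly.
for await (const partialFilters of partialObjectStream) {
  console.log(partialFilters); // e.g. { country: "GR" }, then { country: "GR", category: "technology" }
}
```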

And this is what the implementation looks like in the user interface. We can gradually add more fields. And as the schema updates, the interface will adapt accordingly.

Now, if you are at all familiar with Zod, you may have used it to validate form inputs as well. We can build on the previous example by pre-filling a form directly from the model's response. The source of the data isn't crucial, as it can be passed through the context of the generation. The best part is that we can reuse the same Zod schema for the AI generation, front-end form validation, and back-end data validation, as shown in the sketch below.
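As a sketch of that reuse, one schema can back all three layers; `modelResponse` and `request` are placeholders standing in for the generation output and the incoming API request.

```ts
import { z } from "zod";

declare const modelResponse: unknown; // partial object streamed from the model
declare const request: Request;       // incoming API request on the back-end

// One schema, declared once (field names are illustrative).
const profileSchema = z.object({
  name: z.string().min(1),
  email: z.string().email(),
  country: z.string(),
});

// 1. AI generation: pass profileSchema to streamObject/generateObject as above.
// 2. Front-end: pre-fill and validate the form with the model's output.
const prefill = profileSchema.partial().safeParse(modelResponse);

// 3. Back-end: re-validate the submitted payload with the exact same schema.
const submission = profileSchema.safeParse(await request.json());
if (!submission.success) {
  console.error(submission.error.flatten()); // reject with validation errors
}
```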

Finally, here's another example that I really like: a booking engine. Once the prompt is injected with the correct context, we can pre-fill the slot in the calendar and generate the event accordingly. Everything starts in the front-end. All the previous demos highlighted micro-interactions and how we can address common UI challenges, such as filtering, completing complex forms, or managing date pickers. If you want to dive deeper into this topic, I have written a detailed article on my website for you to explore.

And actually, this was just a glimpse of what we can achieve with a large language model. Undeniably, the most powerful aspect of the AI package is its ability to integrate prompt engineering with generative components. So we'll try to create a simple weather application that lets us check the weather in different cities around the world. Let's call it Breeze. The most crucial part is creating the agent prompt that will guide the model to produce the correct output. In this case, we are simply instructing the model to act as a weather assistant.

3. Creating Dynamic UIs with React

Short description:

We can adjust the language prompt and specify UI elements using brackets and shortcut notations. The chat integration tracks conversation history. Weather generation functions guide the model to understand user queries. The show-weather UI tool provides a description, parameters, and a generate function. The input field allows users to ask for weather in different cities. The system is static, but there is potential for dynamically generating React components based on user intentions and available data.

We can adjust the language prompt as needed and provide as much context as required. In this scenario, we can not only guide the model, but also specify the different UI elements it can generate using brackets and shortcut notations. For example, we can ask for the weather in Paris.
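Hypothetically, such an agent prompt could look like the following; the bracket notation shown here is illustrative, not the talk's exact wording.

```ts
// Hypothetical agent prompt for Breeze: plain instructions plus a shortcut
// notation for the UI elements the model is allowed to produce.
const agentPrompt = `
You are Breeze, a friendly weather assistant.
Only answer weather-related questions.
When the user asks for the current weather in a city, respond with:
  [weather-card city="<city>"]
When the user asks for a multi-day outlook, respond with:
  [forecast-list city="<city>" days=<n>]
For anything else, reply in plain text.
`;
```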

Finally, we can begin building the chat integration as a server action in Next.js. Since we aim to create a fully functional chat application, we also need to track the conversation history between the agent and the user, and this is why we're using the mutable AI state. The returned result is a stream containing plain React components. We can set up the initial response, which is a spinner, then pass along the agent prompt to set up the behavior of the assistant. Finally, we can attach the conversation history within the object.
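A skeleton of that server action, assuming the AI SDK's React Server Components entry point (`ai/rsc`); the `Spinner` import path and the system prompt are placeholders.

```tsx
"use server";

import React from "react";
import { streamUI, getMutableAIState } from "ai/rsc";
import { openai } from "@ai-sdk/openai";
import { Spinner } from "@/components/spinner"; // hypothetical component

export async function sendMessage(input: string) {
  // The mutable AI state tracks the conversation history
  // between the agent and the user.
  const history = getMutableAIState();
  history.update([...history.get(), { role: "user", content: input }]);

  const result = await streamUI({
    model: openai("gpt-4o-mini"),
    system: "You are Breeze, a friendly weather assistant.",
    messages: history.get(),
    initial: <Spinner />, // the initial response while the model thinks
    text: ({ content }) => <p>{content}</p>,
    tools: {
      // weather tools are attached here, defined in the sketch below
    },
  });

  return result.value; // a stream containing plain React components
}
```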

Our next step is to add the weather generation functions. We need to guide the model to understand what the user is asking for, bring in all the weather-related information, and trigger the appropriate task in the agent. Each task, also referred to as a tool, is defined within the tools object passed to the streamUI function.

Now, let's explore how we can implement the show-weather UI tool. Every tool comes with a description that outlines its purpose and functionality; the description is used as guidance for the model. We also have the parameters object that describes the arguments the tool accepts. And finally, the generate function: a generator that yields plain text or React nodes. Inside the generator, we can progressively update the UI, starting with the loading state, then retrieve all the necessary data from the database to finally display the weather UI.
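A sketch of such a tool, to be placed inside the tools object above; `getWeather`, `Spinner`, and `WeatherCard` are hypothetical app-specific pieces.

```tsx
import React from "react";
import { z } from "zod";

// Hypothetical app-specific pieces.
declare function getWeather(city: string): Promise<{ temp: number; sky: string }>;
declare function Spinner(props: { label?: string }): JSX.Element;
declare function WeatherCard(props: { city: string; temp: number; sky: string }): JSX.Element;

export const showWeather = {
  // The description guides the model on when to call this tool.
  description: "Show the current weather for a given city.",
  // The parameters object describes the arguments the tool accepts.
  parameters: z.object({
    city: z.string().describe("The city to fetch the weather for"),
  }),
  // A generator that yields the loading state first, fetches the data,
  // and finally returns the weather UI.
  generate: async function* ({ city }: { city: string }) {
    yield <Spinner label={`Checking the weather in ${city}...`} />;
    const weather = await getWeather(city);
    return <WeatherCard city={city} temp={weather.temp} sky={weather.sky} />;
  },
};
```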

Now, let's bring everything together and see the full picture. The input field at the bottom allows us to ask the assistant for the weather in different cities. Once triggered, the agent will pick up the prompt, and the response will be streamed back to the user almost in real time, as you can see in the chat. If you would like to learn more about fully fledged generative user interfaces, I have put together a detailed article on my website, complete with examples and use cases. As always, feel free to check it out.

But there's still a problem that I have to mention. The system is once again static. Everything is hard-coded, and you have to manually guide the model to produce the correct output with the appropriate tools for every single scenario. But what if we could somehow dynamically generate complex React components based on the user's intentions and all the data available? If you're familiar with site builders like Wix, or applications like Notion, you might have noticed that they let you drag and drop UI components to create pages or dashboards according to your needs. If you examine how Notion defines the UI of a page, you will see that it's essentially a JSON object. The structure outlines the visual outcome of the page in a highly detailed manner.

4. Building Dynamic UIs with LangChain

Short description:

We can generate a similar structure using LangChain, an open-source framework for building language model applications. LangChain allows interaction with various data sources, models, and tools. We can query the database, create prompts, and generate SQL queries. With limited permissions, we can improve results by using specific prompts. We can convert the output to JSON and connect the chain with the AI module. The model can map the data response with the user's intention.

So we can generate a similar structure based on the available data and the UI components at hand. Let's explore how we can do that using a framework called LangChain.

LangChain is an open-source framework for building language model applications. With LangChain, we can interact with various data sources, models, and tools, orchestrating the flow of data and generation. It also works nicely alongside the AI package.

By data source, we can mean different things: it can be an API, a GraphQL endpoint, a database, a file, or even a third-party tool. LangChain can read the data structure of different data sources, interact with them through a uniform interface, and pass the data along to the language model.

Now, let's query the database and provide data to our components. First, we need to create a data source object that will be used to connect to our database. Here, we're using just a local database. Next, we need to create a prompt template that will guide the model to generate the correct output. We will describe the input and output using placeholders for all the parameters.

Our goal here is to prompt the model to produce an SQL query based on the user's question. The context for the prompt is automatically provided by the database schema. So, if the user asks for the top two bestselling albums, the model will know which tables to query. Once that's set up, we can create a chain that takes the incoming prompt, turns it into an SQL query, runs it, and gives back the result.

Finally, we can invoke this chain with a question and get the answer back as a string. It's quite obvious, but I have to mention that since we're interacting directly with the database, we need to make sure that the model has limited permissions. Database operations aren't always predictable, and the model can sometimes be led astray by its prompt. We can improve the results, though, by using a more detailed, strict, and specific prompt.
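A condensed sketch of that chain, assuming LangChain's JS packages and a local SQLite database; the file name, model name, and prompt wording are placeholders, and in line with the warning above the database user should be read-only.

```ts
import { DataSource } from "typeorm";
import { SqlDatabase } from "langchain/sql_db";
import { ChatOpenAI } from "@langchain/openai";
import { PromptTemplate } from "@langchain/core/prompts";
import { StringOutputParser } from "@langchain/core/output_parsers";
import { RunnableSequence } from "@langchain/core/runnables";

// Data source object connecting to a local database.
const datasource = new DataSource({ type: "sqlite", database: "chinook.db" });
const db = await SqlDatabase.fromDataSourceParams({ appDataSource: datasource });

// Prompt template with placeholders for the schema and the user's question.
const prompt = PromptTemplate.fromTemplate(
  `Given the following database schema:
{schema}
Write a single read-only SQL query that answers: {question}
Return only the SQL.`
);

// Chain: question -> prompt -> SQL -> execution -> result string.
const chain = RunnableSequence.from([
  {
    schema: async () => db.getTableInfo(), // context from the database schema
    question: (input: { question: string }) => input.question,
  },
  prompt,
  new ChatOpenAI({ model: "gpt-4o-mini", temperature: 0 }),
  new StringOutputParser(),
  (sql: string) => db.run(sql),
]);

const answer = await chain.invoke({
  question: "What are the top two bestselling albums?",
});
```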

Our last step is to add another link in the chain that converts the output into a JSON object. Now we have a complete chain that takes the user's question, creates an SQL query, runs it, and returns the results in JSON format. Each link in the chain is modular and can be adjusted as needed. Once we're satisfied with the data parser, we can combine everything and use the right adapter to connect our chain with the AI package.
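Extending the sketch above, that extra link can be as small as one more function piped onto the sequence; this assumes the previous step returned the rows as a JSON-encoded string.

```ts
// One more link in the chain: parse the stringified rows into a JSON object.
const jsonChain = chain.pipe((result: string) => JSON.parse(result));

const rows = await jsonChain.invoke({
  question: "What are the top two bestselling albums?",
});
console.log(rows); // e.g. [{ Title: "...", Total: ... }, { ... }]
```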

The final piece of the puzzle is to force the model to map the data response to the user's intention. By intention, we mean the visual representation of the query result that the end user would naturally expect. For instance, if we're querying multiple rows, we would expect to see a table view. For time-series data, on the other hand, a chart would be more appropriate, and so on. This time, we're directing the model to work as a designer.

5. Generating UI with JSON Data

Short description:

The JSON data structure is used to build the UI. We give the model samples and clear instructions to ensure consistent visual output. We document components and provide a mini documentation guide. The generator composes UI components and streams results back to the client. Users ask for information and receive custom mini widgets. Feel free to reach out for questions or feedback.

This will generate a JSON data structure that we will use to build the UI. As you can tell, the prompt engineering here is key, because it outlines what we want to create. We can further improve the model's understanding by giving it samples and clear instructions on how to convert the JSON data structure. This is crucial to ensure the model's visual output is consistent across screens and aligned with our visual requirements.

Next, we need to give the model all the available components we want to use. Since we're using React, we can document our components, including the props they accept and the data they need. To do this, we can extract their types and any Zod validation schemas they might use, and create a mini documentation guide for the model.
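A sketch of such a mini documentation guide; the component names, descriptions, and schemas are illustrative.

```ts
import { z } from "zod";

// A mini documentation guide: each entry pairs a component name with a
// description and a schema for the props it accepts.
const componentDocs = {
  Title: {
    description: "A large heading for a section of the answer.",
    props: z.object({ text: z.string() }),
  },
  Card: {
    description: "A bordered container with a title and body text.",
    props: z.object({ title: z.string(), body: z.string() }),
  },
  List: {
    description: "A vertical list of short text items.",
    props: z.object({ items: z.array(z.string()) }),
  },
  Chart: {
    description: "A bar chart for a numeric series.",
    props: z.object({
      labels: z.array(z.string()),
      values: z.array(z.number()),
    }),
  },
};

// Serialized into the prompt so the model knows which blocks it may use.
const componentGuide = Object.entries(componentDocs)
  .map(([name, doc]) => `${name}: ${doc.description}`)
  .join("\n");
```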

This is where all the pieces come together. The generator will receive the data structure, compose the UI components, and stream the results back to the client. In our final demo, we can see how the whole process of data flow and UI generation works. Users are asking the Assistant for information about their favorite artists and albums, and the Assistant responds with custom mini widgets created on the spot. We have limited the building blocks to titles, cards, lists, and charts, but the visual results on every single run are still quite appealing and consistent.
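On the client side, the generated structure can be walked recursively and mapped onto the real components; this sketch assumes a simple node shape with a component name, props, and optional children.

```tsx
import React from "react";

// A hypothetical node in the generated JSON structure.
type UINode = {
  component: string;
  props?: Record<string, unknown>;
  children?: UINode[];
};

// The limited set of building blocks the model may compose.
const registry: Record<string, React.ComponentType<any>> = {
  Title: ({ text }) => <h2>{text}</h2>,
  Card: ({ title, children }) => (
    <section>
      <h3>{title}</h3>
      {children}
    </section>
  ),
  List: ({ items }) => (
    <ul>
      {items.map((item: string) => (
        <li key={item}>{item}</li>
      ))}
    </ul>
  ),
};

// Walk the JSON structure and compose the UI recursively.
function renderNode(node: UINode): React.ReactNode {
  const Component = registry[node.component];
  if (!Component) return null; // ignore anything outside the registry
  return (
    <Component {...node.props}>
      {node.children?.map((child, i) => (
        <React.Fragment key={i}>{renderNode(child)}</React.Fragment>
      ))}
    </Component>
  );
}
```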

So, I hope you enjoyed the talk, and I truly want to encourage you to start building your own tools and applications with a piece of technology that looks so promising. All the code from the demos, along with the slides, is also available on my website, and if you have any questions or feedback, feel free to reach out on Twitter or through email. I'm looking forward to hearing from you. I want to thank you for your time and patience, and I hope you enjoy the rest of the conference. Until the next one. Bye!
