OpenAI in React: Integrating GPT with Your React Application


Explore the integration of OpenAI's GPT into React applications to create dynamic, intelligent user interfaces. This session offers practical insights into leveraging GPT's powerful NLP capabilities for chatbots, content generation, and personalized user experiences. Dive into the technical setup with Next.js 14, LangChain, Vercel's AI SDK, and MongoDB's Vector Search to optimize and customize your applications. Join us to transform your React projects with AI.

This talk was presented at React Summit 2024.

FAQ

Attendees are encouraged to scan the provided QR code for more information, attend the free workshop, or visit the MongoDB booth for further queries.

The demo used fake documentation for a fictional JavaScript library called fancyWidget.js. It included the typical documentation sections: README, usage, license, installation, contributing, changelog, and API reference.

RAG stands for Retrieval Augmented Generation. A RAG pipeline enhances the capabilities of a language model by retrieving relevant information from external sources and supplying it to the model when answering questions.

The demo used MongoDB, the OpenAI API, React, Next.js, and the LangChain library. Other tools included dotenv for environment variables and react-markdown for rendering the responses.

Documents are transformed into vector embeddings by reading them, splitting them into chunks, generating an embedding for each chunk, and storing those embeddings in a vector database like MongoDB.

MongoDB is used as a vector database to store the vector embeddings of the documents. It is also utilized for vector search to retrieve the most relevant document chunks for answering user questions.
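Conceptually, vector search ranks stored chunks by their similarity to the question's embedding. MongoDB Atlas does this at scale with an approximate-nearest-neighbor index, but the core idea can be sketched with plain cosine similarity over made-up vectors (the function names here are illustrative, not the Atlas API):

```javascript
// Cosine similarity between two vectors of equal length.
function cosine(a, b) {
  let dot = 0, na = 0, nb = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    na += a[i] * a[i];
    nb += b[i] * b[i];
  }
  return dot / (Math.sqrt(na) * Math.sqrt(nb));
}

// Rank stored chunks by similarity to a query embedding and keep the top k.
function vectorSearch(queryEmbedding, store, k = 3) {
  return [...store]
    .sort((x, y) =>
      cosine(queryEmbedding, y.embedding) - cosine(queryEmbedding, x.embedding))
    .slice(0, k);
}
```

In the demo, this ranking happens inside MongoDB via a vector search index, so the application only sends the query embedding and receives the most relevant chunks back.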

If the chat bot does not know the answer, it will respond with 'Sorry, I don't know how to help with that,' indicating that the information is not available in the provided documentation.

The user's question is intercepted and sent to a vector search API route, which retrieves relevant document chunks. These chunks are then used to form a response template, which is sent to the language model to generate an answer.
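The prompt-building step of that flow can be sketched as a pure function; the template wording and function name here are illustrative, not taken from the demo, but the refusal line matches the fallback behavior described above:

```javascript
// Build the prompt sent to the LLM: retrieved chunks become context, plus an
// instruction to refuse when the answer isn't in that context.
function buildPrompt(question, chunks) {
  const context = chunks.map((c) => c.text).join("\n---\n");
  return [
    "Answer the question using only the context below.",
    "If the answer is not in the context, reply exactly:",
    "\"Sorry, I don't know how to help with that.\"",
    "",
    `Context:\n${context}`,
    "",
    `Question: ${question}`,
  ].join("\n");
}
```

The route handler then sends this assembled prompt to the language model and streams the completion back to the chat UI.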

The AI chat bot is tested by asking it questions related to the provided documentation, such as 'What is fancyWidget.js?', and checking whether it responds accurately based on the information given.

The purpose of the talk is to demonstrate how to create an AI chat bot that can answer questions based on information that the LLM was never trained on, using a basic Retrieval Augmented Generation (RAG) pipeline.

Jesse Hall
10 min
14 Jun, 2024

Video Summary and Transcription
In this Talk, the speaker demonstrates how to create an AI chat bot that can answer questions based on information it was never trained on. They build a basic RAG pipeline in just five minutes using live coding. The speaker also shows how to create embeddings and a vector database, set up a vector search index and endpoint, and modify the chat route to enhance the chat bot's capabilities. The program is run and tested, and the Talk concludes with an invitation to join a workshop for more information.

1. Introduction

Short description:

I'm going to speed run creating an AI chat bot that can answer questions based on information that the LLM was never trained on. We're going to create a basic RAG pipeline and build it in about five minutes, live coding.

I'm going to pack as much information into this talk as I can, and if I go too fast and lose you, or if you have any questions, come find me afterwards; I'm happy to talk. So what I'm going to do is speed run creating an AI chat bot, and not just any AI chat bot: a chat bot that can answer questions based on information that the LLM was never trained on. We're going to create a basic RAG pipeline, retrieval augmented generation. On top of that, I'm going to build this in about five minutes, and I am going to live code, so nothing can go wrong, right?

2. Creating Embeddings and Vector Database

Short description:

I ran npx create-next-app using the Next.js LangChain example. Installed MongoDB, react-markdown, and dotenv. Checked the app, added dark mode. Tested with a question about MongoDB. Created fake documentation for a JavaScript library. Transformed markdown files into vectors and saved them in a MongoDB vector database. Used vector search to enhance LLM capabilities. Set up file system promises, OpenAI embeddings, Mongo client, and MongoDB Atlas vector search. Created embeddings for each document and stored them in MongoDB. Fixed a typo.

So I did already run npx create-next-app using the Next.js LangChain example. I installed MongoDB, the LangChain MongoDB integration, react-markdown for some styling, and dotenv, because we're going to use a Node script to run our ingest.

So I also have an OpenAI API key and my MongoDB Atlas connection string in my environment variables. So let's go ahead and check out this app. This is the example straight out of the box, without any alteration. Well, I added dark mode so I wouldn't blind everybody, so you're welcome for that. Let's just test to make sure it works, so let's say "what is MongoDB", and hopefully the Wi-Fi works, and okay, there we go. OpenAI responds to us with a pretty good answer. So it's working out of the box. Great.
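For reference, those two environment variables might live in a `.env` file like the following; the exact variable names are assumptions, so check what the example app actually reads:

```shell
# .env — read by the ingest script and the Next.js app; never commit real keys
OPENAI_API_KEY="sk-..."
MONGODB_ATLAS_URI="mongodb+srv://<user>:<password>@<cluster>.mongodb.net/"
```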

Let's check out the code. I've got this fake-documents directory here, and I used ChatGPT to help me create some fake documentation for a fake JavaScript library called fancyWidget.js. So we have README, usage, license, installation, contributing, changelog, and API reference: all the documentation that you'd expect from a JavaScript library. What we're going to do is take these markdown files, transform them into vector embeddings, and then save those in our vector database. We're going to use MongoDB for the vector database. Then, during vector search, we can use this information to augment the LLM's capabilities so it can answer questions based on it.

All right, so let's go ahead and get started doing that. In the root here, I'm going to create a new file named create-embeddings.mjs, and then we are going to do some typing here. We're going to import our file system promises, our RecursiveCharacterTextSplitter, OpenAIEmbeddings from LangChain OpenAI, our MongoClient from MongoDB, and our MongoDBAtlasVectorSearch from LangChain, and then we'll set up our Mongo client, getting our environment variable there for our connection string.

Our database name is going to be documents, and the collection name embeddings. We'll set up our collection, get our documents directory (those fake documents), get the files in it, and console log the file names. Then we loop through the file names and read each document. After we read each document, we console log that we're vectorizing it, and our splitter, the RecursiveCharacterTextSplitter from LangChain, chunks it into pieces. We then store those chunks in MongoDB using MongoDB Atlas vector search: we create the embeddings and tell it which collection, index name, text key, and embedding key to use, then console log that we're done and close the connection to MongoDB.

And there is a bit of a typo here. Of course that didn't happen in practice. And of course I wasn't typing; that was a VS Code extension. This is supposed to be import RecursiveCharacterTextSplitter. So let me grab that.
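Pieced together, the ingest script looks roughly like this. This is a runnable sketch, not the exact demo code: the real create-embeddings.mjs uses LangChain's RecursiveCharacterTextSplitter, OpenAIEmbeddings, and MongoDBAtlasVectorSearch, which are stubbed out here so the control flow runs without an API key or a database:

```javascript
import fs from "node:fs/promises";
import os from "node:os";
import path from "node:path";

// Stand-in for LangChain's RecursiveCharacterTextSplitter:
// fixed-size character chunks with a small overlap.
function chunkText(text, chunkSize = 500, overlap = 50) {
  const chunks = [];
  for (let start = 0; start < text.length; start += chunkSize - overlap) {
    chunks.push(text.slice(start, start + chunkSize));
    if (start + chunkSize >= text.length) break;
  }
  return chunks;
}

// Stand-in for OpenAIEmbeddings: a fake fixed-length vector so the sketch
// runs without an API key. The real script calls OpenAI here.
function fakeEmbed(chunk) {
  return Array.from({ length: 8 }, (_, i) => chunk.charCodeAt(i % chunk.length) / 255);
}

// Read every file in docsDir, chunk it, "embed" each chunk, and push the
// result into store. The real script writes to a MongoDB collection via
// MongoDBAtlasVectorSearch instead of an in-memory array.
async function ingest(docsDir, store) {
  for (const fileName of await fs.readdir(docsDir)) {
    console.log(`Vectorizing ${fileName}...`);
    const text = await fs.readFile(path.join(docsDir, fileName), "utf8");
    for (const chunk of chunkText(text)) {
      store.push({ text: chunk, embedding: fakeEmbed(chunk), source: fileName });
    }
  }
  return store;
}

// Demo run against a throwaway docs directory.
const dir = await fs.mkdtemp(path.join(os.tmpdir(), "docs-"));
await fs.writeFile(path.join(dir, "README.md"), "# fancyWidget.js\n".repeat(100));
const store = await ingest(dir, []);
console.log(`Done. Stored ${store.length} chunks.`);
```

Swapping the stubs for the real LangChain and MongoDB calls, and pointing it at the fake-documents directory, gives the ingest script built in the talk.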

Check out more articles and videos


Building a Voice-Enabled AI Assistant With Javascript
JSNation 2023
21 min
Top Content
This Talk discusses building a voice-activated AI assistant using web APIs and JavaScript. It covers using the Web Speech API for speech recognition and the speech synthesis API for text to speech. The speaker demonstrates how to communicate with the Open AI API and handle the response. The Talk also explores enabling speech recognition and addressing the user. The speaker concludes by mentioning the possibility of creating a product out of the project and using Tauri for native desktop-like experiences.
AI and Web Development: Hype or Reality
JSNation 2023
24 min
Top Content
This talk explores the use of AI in web development, including tools like GitHub Copilot and Fig for CLI commands. AI can generate boilerplate code, provide context-aware solutions, and generate dummy data. It can also assist with CSS selectors and regexes, and be integrated into applications. AI is used to enhance the podcast experience by transcribing episodes and providing JSON data. The talk also discusses formatting AI output, crafting requests, and analyzing embeddings for similarity.
The Rise of the AI Engineer
React Summit US 2023
30 min
The rise of AI engineers is driven by the demand for AI and the emergence of ML research and engineering organizations. Start-ups are leveraging AI through APIs, resulting in a time-to-market advantage. The future of AI engineering holds promising results, with a focus on AI UX and the role of AI agents. Equity in AI and the central problems of AI engineering require collective efforts to address. The day-to-day life of an AI engineer involves working on products or infrastructure and dealing with specialties and tools specific to the field.
Web Apps of the Future With Web AI
JSNation 2024
32 min
Web AI in JavaScript allows for running machine learning models client-side in a web browser, offering advantages such as privacy, offline capabilities, low latency, and cost savings. Various AI models can be used for tasks like background blur, text toxicity detection, 3D data extraction, face mesh recognition, hand tracking, pose detection, and body segmentation. JavaScript libraries like MediaPipe LLM inference API and Visual Blocks facilitate the use of AI models. Web AI is in its early stages but has the potential to revolutionize web experiences and improve accessibility.
Code coverage with AI
TestJS Summit 2023
8 min
Codium is a generative AI assistant for software development that offers code explanation, test generation, and collaboration features. It can generate tests for a GraphQL API in VS Code, improve code coverage, and even document tests. Codium allows analyzing specific code lines, generating tests based on existing ones, and answering code-related questions. It can also provide suggestions for code improvement, help with code refactoring, and assist with writing commit messages.
Building the AI for Athena Crisis
JS GameDev Summit 2023
37 min
Join Christoph from Nakazawa Tech in building the AI for Athena Crisis, a game where the AI performs actions just like a player. Learn about the importance of abstractions, primitives, and search algorithms in building an AI for a video game. Explore the architecture of Athena Crisis, which uses immutable persistent data structures and optimistic updates. Discover how to implement AI behaviors and create a class for the AI. Find out how to analyze units, assign weights, and prioritize actions based on the game state. Consider the next steps in building the AI and explore the possibility of building an AI for a real-time strategy game.

Workshops on related topic

AI on Demand: Serverless AI
DevOps.js Conf 2024
163 min
Top Content
Featured Workshop (Free)
Nathan Disidore
In this workshop, we discuss the merits of serverless architecture and how it can be applied to the AI space. We'll explore options around building serverless RAG applications for a more lambda-esque approach to AI. Next, we'll get hands on and build a sample CRUD app that allows you to store information and query it using an LLM with Workers AI, Vectorize, D1, and Cloudflare Workers.
AI for React Developers
React Advanced 2024
142 min
Featured Workshop
Eve Porcello
Knowledge of AI tooling is critical for future-proofing the careers of React developers, and the Vercel suite of AI tools is an approachable on-ramp. In this course, we’ll take a closer look at the Vercel AI SDK and how this can help React developers build streaming interfaces with JavaScript and Next.js. We’ll also incorporate additional 3rd party APIs to build and deploy a music visualization app.
Topics:
- Creating a React Project with Next.js
- Choosing a LLM
- Customizing Streaming Interfaces
- Building Routes
- Creating and Generating Components
- Using Hooks (useChat, useCompletion, useActions, etc)
Leveraging LLMs to Build Intuitive AI Experiences With JavaScript
JSNation 2024
108 min
Featured Workshop
Roy Derks
Shivay Lamba
Today every developer is using LLMs in different forms and shapes, from ChatGPT to code assistants like GitHub Copilot. Following this, lots of products have introduced embedded AI capabilities, and in this workshop we will make LLMs understandable for web developers. And we'll get into coding your own AI-driven application. No prior experience in working with LLMs or machine learning is needed. Instead, we'll use web technologies such as JavaScript and React, which you already know and love, while also learning about some new libraries like OpenAI and Transformers.js.
LLMs Workshop: What They Are and How to Leverage Them
React Summit 2024
66 min
Featured Workshop
Nathan Marrs
Haris Rozajac
Join Nathan in this hands-on session where you will first learn at a high level what large language models (LLMs) are and how they work. Then dive into an interactive coding exercise where you will implement LLM functionality into a basic example application. During this exercise you will get a feel for key skills for working with LLMs in your own applications such as prompt engineering and exposure to OpenAI's API.
After this session you will have insights around what LLMs are and how they can practically be used to improve your own applications.
Table of contents:
- Interactive demo implementing basic LLM powered features in a demo app
- Discuss how to decide where to leverage LLMs in a product
- Lessons learned around integrating with OpenAI / overview of OpenAI API
- Best practices for prompt engineering
- Common challenges specific to React (state management :D / good UX practices)
Working With OpenAI and Prompt Engineering for React Developers
React Advanced 2023
98 min
Top Content
Workshop
Richard Moss
In this workshop we'll take a tour of applied AI from the perspective of front end developers, zooming in on the emerging best practices when it comes to working with LLMs to build great products. This workshop is based on learnings from working with the OpenAI API from its debut last November to build out a working MVP which became PowerModeAI (A customer facing ideation and slide creation tool).
In the workshop there'll be a mix of presentation and hands-on exercises to cover topics including:
- GPT fundamentals
- Pitfalls of LLMs
- Prompt engineering best practices and techniques
- Using the playground effectively
- Installing and configuring the OpenAI SDK
- Approaches to working with the API and prompt management
- Implementing the API to build an AI powered customer facing application
- Fine tuning and embeddings
- Emerging best practice on LLMOps
Building AI Applications for the Web
React Day Berlin 2023
98 min
Workshop
Roy Derks
Today every developer is using LLMs in different forms and shapes. Lots of products have introduced embedded AI capabilities, and in this workshop you’ll learn how to build your own AI application. No experience in building LLMs or machine learning is needed. Instead, we’ll use web technologies such as JavaScript, React and GraphQL which you already know and love.