OpenAI in React: Integrating GPT-4 with Your React Application

In this talk, attendees will learn how to integrate OpenAI's GPT-4 language model into their React applications, exploring practical use cases and implementation strategies to enhance user experience and create intelligent, interactive applications.

Video Summary and Transcription
The talk explores how to integrate advanced AI capabilities into React applications using technologies like LangChain, MongoDB Atlas Vector Search, and OpenAI. It begins by discussing the concept of vector embeddings, which are crucial for enhancing GPT models by reducing hallucinations and providing real-time, context-aware data. The video highlights the importance of using vector search and retrieval augmented generation (RAG) to improve language model performance. MongoDB plays a pivotal role in storing these vector embeddings, allowing for intelligent data retrieval. The speaker outlines how to build an AI-powered documentation site using Next.js, leveraging the Vercel AI SDK for creating conversational UIs. The integration of AI in React apps is shown to significantly boost user engagement and business efficiency. The talk also covers the use of AI in various sectors like retail and healthcare, emphasizing the potential of AI-powered chatbots for real-time customer service. Technologies like Node.js and the OpenAI API are essential for setting up this AI infrastructure. The role of generative AI in creating new content is discussed, along with the challenges of static knowledge bases in GPT models. The speaker encourages trying out MongoDB Vector Search and LangChain for building smarter, context-aware applications.

FAQ

What are some practical use cases for AI?

AI can be used for fraud detection, chatbots, personalized recommendations, and more. It is applicable in various industries including retail, healthcare, finance, and manufacturing.

What is the difference between batch AI and real-time AI?

Batch AI analyzes historical data to make predictions about the future, usually run offline and on a schedule. Real-time AI, on the other hand, makes predictions and decisions based on live data, allowing it to react quickly to events as they happen.

What are Generative Pretrained Transformers (GPTs), and what is their key limitation?

Generative Pretrained Transformers (GPTs) are large language models that perform tasks like natural language processing and content generation. Their key limitation is their static knowledge base; they only know what they've been trained on and can sometimes provide inaccurate information.

How does Retrieval Augmented Generation (RAG) improve GPT models?

RAG leverages vectors to pull in real-time, context-relevant data, augmenting the capabilities of GPT models. It reduces hallucinations, provides up-to-date information, and allows access to private, proprietary data, making applications smarter and more context-aware.

Is AI just a passing fad?

No, AI is far from a fad. It's a revolutionary change that is helping businesses solve real problems and making individuals more productive.

Why does AI matter now more than ever?

AI matters now more than ever because it helps create highly engaging applications, provides personalized experiences, and drives competitive advantage by making intelligent decisions faster on fresher, more accurate data.

What is generative AI?

Generative AI involves training models to generate new content such as images, text, music, and video. It represents the cutting edge of AI technology and goes beyond making predictions to creating new content.

What are vectors, and why are they important?

Vectors are numerical representations of data that enable semantic search, allowing for the retrieval of contextually relevant information. They are used in various AI applications to improve the accuracy and relevance of search results.

How does AI improve user engagement and business efficiency?

AI improves user engagement by providing personalized, context-aware experiences. It also enhances business efficiency by making intelligent decisions faster, based on fresher and more accurate data.

Which technologies are used to build AI-powered React applications?

Technologies like Next.js, OpenAI, LangChain, Vercel AI SDK, and MongoDB Vector Search are used to build AI-powered React applications. These tools help integrate AI seamlessly and make applications smarter and more efficient.

1. The Importance of AI in Application Development

Short description:

AI is a revolutionary change that helps businesses solve real problems and make employees and individuals more productive. It matters now more than ever and can take your React applications to the next level. Building intelligence into applications is in high demand for modern, engaging experiences, fraud detection, chatbots, personalized recommendations, and more. AI-powered apps drive user engagement and satisfaction, as well as efficiency and profitability. Almost every application will use AI in some capacity. Use cases include retail, healthcare, finance, and manufacturing. Early computing relied on analytics, but as computing power increased, analyzing larger datasets became easier.

Artificial intelligence is just a fad, right? It's going to blow over just like blockchain. Well, actually, I don't think so. In fact, AI is far from a fad. It's a revolutionary change. It's helping businesses solve real problems and making employees and individuals more productive. So let's talk about why AI matters now more than ever, and how AI can take your React applications to the next level.

I'm Jesse Hall, a Senior Developer Advocate at MongoDB. You might also know me from my YouTube channel, CodeStacker. So throughout this talk, we're going to explore the demand for intelligent apps, practical use cases, limitations of LLMs, how to overcome these limitations, the tech stack that we're going to use to build a smart React app, and how to integrate GPT, make it smart, and optimize the user experience.

So if you're new to the AI space, maybe you don't know all of these terms and technologies that we're going to talk about, or maybe you're scared that you're going to miss out on what all the new kids on the block are talking about. But don't worry because we're going to define and demystify a lot of these concepts. And then we're going to go deeper and discuss some of the considerations that you need to make whenever you're building AI into your applications.

There is a huge demand for building intelligence into our applications in order to make these modern highly engaging applications, and to make differentiating experiences for each of our users. You could use it for fraud detection, chatbots, personalized recommendations, and beyond. Now, to compete and win, we need to make our applications smarter and surface insights faster. Smarter apps use AI-powered models to take action autonomously for the user, and the results are two-fold. First, your apps drive competitive advantage by deepening user engagement and satisfaction as they interact with your application. And secondly, your apps unlock higher efficiency and profitability by making intelligent decisions faster on fresher, more accurate data.

Almost every application going forward is going to use AI in some capacity. AI is going to wait for no one. So in order to stay competitive, we need to build intelligence into our applications in order to gain rich insights from our data. AI powers the user-facing experience, and the fresh data and insights you get from those interactions power a more efficient business decision model.

Now there are so many use cases, but here are just a few. Retail, healthcare, finance, manufacturing. Now, although these are very different use cases, they're all unified by their critical need to work with the freshest data in order to achieve their objectives in real time. They all consist of AI-powered apps that drive the user-facing experience. And predictive insights make use of fresh data and automation to drive more efficient business processes. But how did we get to this stage of AI? Well, in the early days of computing, applications primarily relied on analytics to make sense of the data. This involved analyzing large datasets and extracting insights that could inform business decisions. As computing power increased, it became easier to analyze larger datasets in less time.

2. Advancements in AI and Machine Learning

Short description:

The focus shifted towards machine learning, specifically batch AI and real-time AI. Batch AI analyzes historical data to make predictions about the future, while real-time AI uses live data for real-time predictions. Generative AI is the cutting edge, training models to generate new content. GPT, or Generative Pretrained Transformers, are large language models that make applications smarter, but they have limitations.

Now, as computing power continued to increase, the focus shifted towards machine learning. Traditional batch machine learning involves training models on historic data and using them to make predictions or inferences about future events, about how your user might interact in the future. The more data over time that you feed your model, the better it gets. The more you can tune it and the more accurate the future predictions become. So as you can imagine, this is really powerful because if you can predict what's going to happen tomorrow you can make really great business decisions today.

So batch AI as the name implies is usually run offline and on a schedule. So it's analyzing historical data to make predictions about the future, but therein lies the problem with batch AI. It's working on historic data. It can't react to events that happen quickly in real time. Now although it's really great for industries such as finance and healthcare, we need data on things that are happening now. And so this is where real-time AI comes in. Real-time AI represents a significant step forward from traditional AI. This approach involves training models on live data and using them to make predictions or inferences in real time. This is particularly useful for fraud detection, for instance, where decisions need to be made quickly based on what's happening in real time. What good is fraud detection if the person defrauding you has already gotten away with it?

And then finally, that brings us to generative AI, which represents the cutting edge. This approach involves training models to generate new content. Now this could be images, text, music, video. It's not simply making predictions anymore. It's creating the future. Now, fun fact, the images here were all created using DALL·E. So over the years, we've seen AI evolve from analytics to real-time machine learning and now to generative AI. These are not incremental changes. They're transformative. They shape how we interact with technology every single day.

So let's zoom in a bit. We have something called Generative Pretrained Transformers or GPT. These large language models perform a variety of tasks from natural language processing to content generation and even some elements of common sense reasoning. They are the brains that are making our applications smarter. But there is a catch. GPTs are incredible, but they aren't perfect.

3. Limitations of Language Models

Short description:

One limitation of language models is their static knowledge base. They can't access real-time proprietary data, which is crucial in business contexts. The need for up-to-date information and domain-specific data makes relying on language models alone impractical.

One of their key limitations is their static knowledge base. They only know what they've been trained on. There are integrations with some models now that can search the internet for newer information. But how do we know that that information that they're finding on the internet is accurate? They can hallucinate, very confidently, I might add. So how can we minimize this? Now, they can't access or learn from real time proprietary data, your data. And that's a big limitation, don't you think? The need for real time, proprietary and domain-specific data is why we can't rely on the LLMs as they are. This is especially true in the business context where up-to-date information can be a game changer.

4. Augmenting React Apps with RAG

Short description:

We'll explore how to augment React apps with smarter capabilities using large language models and boost those capabilities even further with Retrieval Augmented Generation, or RAG.

So what's the solution? How do we make these models adaptable, real-time, and more aligned with our specific needs? Well, this brings us to the focus of our talk today. It's not merely about leveraging the power of GPT in React. It's about taking your React applications to the next level by making them intelligent and context-aware. We're going to explore how to augment React apps with smarter capabilities using large language models and boost those capabilities even further with Retrieval Augmented Generation, or RAG. And so we're not just integrating AI into React, we're optimizing it to be as smart and context-aware as possible.

5. Retrieval Augmented Generation and Vectors

Short description:

Vectors are numerical representations of data in a multidimensional space. They enable semantic search and can be used with various data types. Encoders organize vectors in different ways, and popular encoders are available through platforms like Hugging Face and OpenAI. Retrieval Augmented Generation (RAG) uses vectors to enhance the capabilities of an LLM by pulling in real-time, context-relevant data.

Now what's involved in Retrieval Augmented Generation? First up, vectors. What are vectors? These are the building blocks that allow us to represent complex, multidimensional data in a format that's easy to manipulate and understand. Now the simplest explanation is a vector is a numerical representation of data, an array of numbers. Now these numbers are coordinates in an n-dimensional space where n is the array length. So however many numbers we have in the array is how many dimensions we have.

Now you'll also hear vectors referred to as vector embeddings or just embeddings. So here's a real-life example of vectors in use. When you go to a store and ask a worker where to find something, many times they're going to say, go to aisle 30, bay 15. And so that is a two-dimensional vector. We also notice that in stores, similar items are placed near each other for ease of searching and finding. The light bulbs aren't just scattered all over the store. They're strategically placed to be found easily. And video games use 2D and 3D coordinates to know where objects are in the game's world. With these coordinates, we can compute the proximity between objects to detect collisions, for instance. And the same kind of math is used to compute the similarity between vectors during vector search. Now, if you're a Stargate fan, the gate addresses are made up of at least seven dimensions that act like vectors. And to locate Stargates in other galaxies, you can add an eighth or ninth dimension, just like you would add an area code and country code to a phone number. So this shows how adding dimensions significantly increases the size of the virtual space in which our data is organized. And so again, what makes vectors so special? They enable semantic search. In simpler terms, they let us find information that is contextually relevant, not just a keyword match. And the data source is not limited to text. It can also be images, video, or audio. These can all be converted to vectors.
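The proximity math described here is exactly what vector search engines compute at scale. Here is a minimal sketch (not from the talk) of cosine similarity, a common similarity measure, using made-up three-dimensional vectors in place of real embeddings:

```javascript
// Cosine similarity: 1 means the vectors point the same way, near 0 means unrelated.
function cosineSimilarity(a, b) {
  let dot = 0, magA = 0, magB = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    magA += a[i] * a[i];
    magB += b[i] * b[i];
  }
  return dot / (Math.sqrt(magA) * Math.sqrt(magB));
}

// Toy 3-dimensional "embeddings"; real encoders produce hundreds of dimensions.
const lightBulb = [0.9, 0.1, 0.2];
const lampShade = [0.8, 0.2, 0.3];
const bicycle = [0.1, 0.9, 0.7];

console.log(cosineSimilarity(lightBulb, lampShade)); // high: related items
console.log(cosineSimilarity(lightBulb, bicycle));   // low: unrelated items
```

With real embeddings from an encoder the arrays have hundreds or thousands of dimensions, but the math is identical.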

So how do we go about creating these vectors? Well, this is done through an encoder. The encoder defines how the information is organized in the virtual space. And there are different types of encoders that can organize the vectors in different ways, depending on the use case. There are encoders for text, for audio, images, and so on. And many of the most popular encoders can be accessed through Hugging Face, OpenAI, and many others. So now let's tie all of this back to Retrieval Augmented Generation. RAG leverages vectors to pull in real-time, context-relevant data to augment the capabilities of an LLM.

6. Vector Search and RAG for React Apps

Short description:

Vector search capabilities enhance GPT models by reducing hallucinations, providing up-to-date information, and allowing access to private data. Generate embeddings for custom data, store them in a vector database, and accept user queries to find relevant information. Use VectorSearch to retrieve semantically related results. An intelligent chatbot with RAG and Vector Embeddings can offer real-time data during customer service interactions, making your React app adaptable and context-aware.

Vector search capabilities can augment the performance and accuracy of GPT models by providing a memory or a ground truth to reduce hallucinations, provide up-to-date information, and allow access to private data. So first, we take our private data or custom data, whatever it may be, and generate our embeddings using an embedding model, and then store those embeddings in a vector database. Now again, this data could be documents from our site, could be blog articles, videos, images, PDFs, et cetera.

Now you don't have to use LangChain to facilitate all of this, but it's very helpful, and we're going to talk more about that later. And once we have the embeddings for our custom data, we can accept user queries to find relevant information within that data. To do this, we send the user's natural language query to an embedding model, which vectorizes the query, and then we use vector search to find information that is closely, semantically related to the user's query, and return those results. And we can do anything we want with these results. We could summarize the answer to their question based on the custom data. We could respond with links to specific documentation pages, and so on.
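As a rough illustration of that retrieval step, here is a stdlib-only sketch that ranks a handful of chunks by similarity to a query vector. The chunk texts and toy embeddings are made up for the example; in the real app the embeddings come from OpenAI and the search runs inside MongoDB Atlas:

```javascript
// Minimal vector search: rank stored chunks by cosine similarity to the query vector.
function cosine(a, b) {
  const dot = a.reduce((sum, x, i) => sum + x * b[i], 0);
  const mag = (v) => Math.sqrt(v.reduce((sum, x) => sum + x * x, 0));
  return dot / (mag(a) * mag(b));
}

function vectorSearch(queryEmbedding, chunks, k = 2) {
  return chunks
    .map((d) => ({ ...d, score: cosine(queryEmbedding, d.embedding) }))
    .sort((a, b) => b.score - a.score)
    .slice(0, k);
}

// Hypothetical pre-computed embeddings for three documentation chunks.
const docs = [
  { text: "Installing the Node.js driver", embedding: [0.9, 0.1, 0.1] },
  { text: "Aggregation pipeline basics", embedding: [0.1, 0.9, 0.2] },
  { text: "Connecting with Mongoose", embedding: [0.8, 0.2, 0.2] },
];

// A query embedding close to the "install" chunk returns it first.
const results = vectorSearch([0.85, 0.15, 0.1], docs, 2);
console.log(results.map((r) => r.text));
```

The returned chunks are what get injected into the prompt as context in the later steps of the talk.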

So imagine your React app has an intelligent chatbot with RAG and Vector Embeddings, this chatbot could pull in real-time data, maybe the latest product inventory, and offer it during a customer service interaction. With RAG and Vector Embeddings, your React app isn't just smart, it's adaptable, real-time, and incredibly context-aware.

7. Technologies for Building Smarter React Apps

Short description:

We'll use Next.js with the app router, OpenAI for embedding and generating responses, LangChain for data preprocessing, the Vercel AI SDK for building conversational streaming UIs, and MongoDB with MongoDB Atlas Vector Search for storing vector embeddings and enhancing generative AI capabilities.

Now, talk is cheap. How do we actually do these things and what technologies can help us? Well, the first tool that we're going to use in our tech stack is Next.js, and we're going to, of course, use version 13.5 with the app router. Next.js and Vercel just make building apps with AI technologies so easy. And then OpenAI has been spearheading advancements in language models with models like GPT-3.5 Turbo and GPT-4. And while there are many other language models out there, today we're going to focus on OpenAI and we're going to use them for embedding and for generating responses. And then LangChain is another crucial part of our tech stack. It helps us in data preprocessing, routing data to the proper storage, and making the AI part of our app more efficient. And then there's the Vercel AI SDK. It's an open source library designed to build conversational streaming UIs. It abstracts so much boilerplate code that you would have had to write otherwise. And then last but not least, where are we going to store our vector embeddings? MongoDB. We're also going to leverage MongoDB Atlas Vector Search. It's a game changer for AI applications, enabling us to provide a more contextual and meaningful user experience by storing our vector embeddings directly in our application database, instead of bolting on yet another external service. And it's not just Vector Search. MongoDB Atlas itself brings a new level of power to our generative AI capabilities. Each technology in the stack is chosen for a specific reason. And when combined, they enable us to build a smarter, more powerful React application.

8. Building an AI-powered Documentation Site

Short description:

We're going to build an AI-powered documentation site that answers questions, provides contextually relevant information, summarizes answers, and provides links to relevant pages. To do this, we need an OpenAI API key, Node.js 18+, and a MongoDB Atlas account. We'll create embeddings for our custom data by running a Node app that splits our Markdown files into chunks. We'll store these embeddings in a MongoDB collection using LangChain and MongoDB Atlas Vector Search. Finally, we'll set up a search index on our collection in MongoDB using a JSON configuration with specified dimensions, similarity, and type. We'll then set up the Next.js app, using the Next.js LangChain starter if desired.

So let's take a look at how to build a React application with these technologies. And this is what we're going to build, an AI-powered documentation site. Now, this site will not only answer questions, but also provide contextually relevant information, summarize answers, and provide links to relevant pages to dig deeper.

What will we need? An OpenAI API key, Node.js 18+, and a MongoDB Atlas account. The first thing that we need to do is create embeddings for our custom data. Now, since this chatbot is going to reference our custom documentation, we'll assume it's written in Markdown files. And so this embedding function is just a Node app, a single file with less than 60 lines of code. And we're going to run this when we build our Next.js app. So we'll need to make sure that we have dotenv, LangChain, and MongoDB installed. And then we'll import everything that we need: the LangChain text splitter, the MongoDB Atlas vector store, OpenAI embeddings, the MongoDB client, and dotenv.

Below that, we'll prepare our MongoDB connection using our connection string and get the collection that we want to use. And then we need to process our documentation files. We have to split them into chunks that the encoder can use. So we'll create a splitter using the RecursiveCharacterTextSplitter from LangChain to split the Markdown files that we're fetching. Now notice that you can choose the chunk size and overlap, which is helpful to make sure that we have enough context in each chunk. And then we'll create our output, awaiting the splitter's createDocuments function, passing it our documents. And lastly, we'll use LangChain to store these embeddings in our MongoDB collection. We're passing to MongoDBAtlasVectorSearch the output, creating a new OpenAIEmbeddings instance, and then giving it the metadata for the collection, index name, text key, and embedding key. These keys are the fields that contain the original text and the embedding vectors. And then we close the MongoDB connection.
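To make the chunking step concrete, here is a deliberately simplified splitter (my own sketch, not LangChain's implementation) showing what chunk size and overlap mean. LangChain's RecursiveCharacterTextSplitter is smarter, preferring to break on natural boundaries like paragraphs and sentences:

```javascript
// Naive character-based splitter: fixed-size chunks with overlap, so context
// that spans a chunk boundary still appears intact in at least one chunk.
function splitText(text, chunkSize = 100, chunkOverlap = 20) {
  const chunks = [];
  let start = 0;
  while (start < text.length) {
    chunks.push(text.slice(start, start + chunkSize));
    if (start + chunkSize >= text.length) break;
    start += chunkSize - chunkOverlap;
  }
  return chunks;
}

// 216 characters of sample "documentation" text.
const doc = "MongoDB Atlas Vector Search lets you store embeddings next to your data.".repeat(3);
const chunks = splitText(doc, 80, 20);
console.log(chunks.length, chunks[0].length);
```

The overlap means the last 20 characters of each chunk are repeated at the start of the next, which is why each chunk keeps enough surrounding context to be useful on its own.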

And this is an example of what ends up in our database. We have our original text in Markdown format, our vector embeddings, and metadata that identifies the portion of the document that this text came from. One last thing that we have to do to prepare the data for searching is to set up a search index on our collection in MongoDB. You can do that through the Atlas UI using a JSON configuration. So here we're specifying the dimensions of the encoder model that we're using. In this case, it's 1536. And then you can also define the similarity and type to use.
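The JSON itself isn't reproduced in the transcript, but an Atlas Search index definition matching the description (1536 dimensions, with a similarity and type) would look roughly like this; the field name `embedding` is an assumption matching the embedding key used earlier:

```json
{
  "mappings": {
    "dynamic": true,
    "fields": {
      "embedding": {
        "type": "knnVector",
        "dimensions": 1536,
        "similarity": "cosine"
      }
    }
  }
}
```

The 1536 here matches the output dimension of OpenAI's text-embedding-ada-002 model; if you switch encoders, the dimensions in the index must match the new model.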

Okay, so now the fun part: let's set up the Next.js app. Now, if you'd like, you can use the Next.js LangChain starter, which already has everything set up except for the MongoDB driver.

9. Injecting Custom Data and Using Vector Search

Short description:

To make our GPT model smarter, we'll inject our own custom data using the LangChain MongoDB VectorSearch method. By connecting to MongoDB and creating vector embeddings for the user's question, we can compare them with the embeddings stored for our custom data. Using maximal marginal relevance, we can find related documents and refine the accuracy. Finally, in the chat route, we'll pass the last user message to the vector search route, then send the search results, context, and user's question on to the LLM.

So you'll also need to npm install mongodb. Next, add your OpenAI API key to the environment variables file. And then, for good measure, you can test the app by running npm run dev. You should now be able to chat with OpenAI. But this is just a standard chat with the ordinary models that we all know and love. So now let's make our GPT model smarter by giving it our custom information.

Now, this is the default chat route provided with the Next.js LangChain template. We can see that it utilizes the Vercel AI SDK along with LangChain's OpenAI chat models. It's also utilizing the Vercel Edge runtime to lower latency and help with streaming responses back to the user. Now, further down this route handler, we are creating an OpenAI chat, and here we can specify which model we'd like to use: GPT-4, GPT-3.5 Turbo, et cetera. We're also setting streaming to true because we want the response to start populating in the UI as fast as possible and stream to the user. This is a much better experience than throwing up a loading spinner and having the user wait one to 30 seconds for a response to come back. Then it returns the streaming response. And so this route is where we need to inject our own custom data.

To do that, we're going to use the LangChain MongoDB VectorSearch method. And in this route, we're going to connect to MongoDB just like we did before, and we're going to pass in the question that the user asked. Then we'll use the MongoDBAtlasVectorSearch LangChain method to create vector embeddings for the user's question. We have to create vector embeddings for the user's input so that we can compare it with the other vector embeddings that we have stored in MongoDB for our custom data. Then we'll tell it which collection, index, text key, and embedding key to use. And when we do the search, we're using something called maximal marginal relevance to find the related documents. We can specify the number of results to fetch and how many of the top results to return. This allows us to refine how accurate we want to be. And finally, we close the connection and return the retriever output.
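Maximal marginal relevance balances relevance against redundancy: each pick rewards similarity to the query and penalizes similarity to chunks already chosen, so near-duplicate results don't crowd out diverse ones. Here is a stdlib-only sketch of the idea, with toy vectors and my own variable names (LangChain exposes the fetch count and return count as options on the search call):

```javascript
// Maximal marginal relevance: pick k chunks that are relevant to the query
// but not redundant with each other. lambda = 1 would mean pure relevance.
function dot(a, b) { return a.reduce((sum, x, i) => sum + x * b[i], 0); }
function norm(v) { return Math.sqrt(dot(v, v)); }
function sim(a, b) { return dot(a, b) / (norm(a) * norm(b)); }

function mmr(query, candidates, k = 2, lambda = 0.5) {
  const selected = [];
  const pool = [...candidates];
  while (selected.length < k && pool.length > 0) {
    let bestIdx = 0, bestScore = -Infinity;
    for (let i = 0; i < pool.length; i++) {
      const relevance = sim(query, pool[i].embedding);
      // Penalty: similarity to the closest already-selected chunk.
      const redundancy = selected.length
        ? Math.max(...selected.map((s) => sim(pool[i].embedding, s.embedding)))
        : 0;
      const score = lambda * relevance - (1 - lambda) * redundancy;
      if (score > bestScore) { bestScore = score; bestIdx = i; }
    }
    selected.push(pool.splice(bestIdx, 1)[0]);
  }
  return selected;
}

// Two near-duplicate chunks and one different-but-relevant chunk (toy vectors).
const candidates = [
  { text: "Install the driver", embedding: [1.0, 0.0] },
  { text: "Install the driver (alt)", embedding: [0.98, -0.05] },
  { text: "Connection strings", embedding: [0.7, 0.7] },
];

// Picks the most relevant chunk first, then the diverse one instead of the near-duplicate.
console.log(mmr([1, 0.2], candidates, 2).map((c) => c.text));
```

This is why the talk distinguishes between how many results to fetch and how many to return: the larger fetched set gives the re-ranking step room to trade a little relevance for diversity.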

Back in our chat route, we're going to get the last message from the user and pass it to the new vector search route that we just created. Now we can include the results of that search and pass everything to the standard LLM. We'll include some context, the vector search results, and the user's question. So the prompt we're using here is, you are a very enthusiastic MongoDB representative who loves to help people. Given the following sections from the MongoDB documentation, answer the question using only that information, output it in markdown format.
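The prompt assembly itself is plain string templating. A sketch (the function and variable names are mine) of combining the instructions quoted above with the retrieved context and the user's question:

```javascript
// Build the augmented prompt: system instructions + retrieved context + question.
function buildPrompt(contextChunks, question) {
  return [
    "You are a very enthusiastic MongoDB representative who loves to help people.",
    "Given the following sections from the MongoDB documentation,",
    "answer the question using only that information, output in markdown format.",
    "",
    "Context sections:",
    contextChunks.join("\n---\n"),
    "",
    `Question: ${question}`,
  ].join("\n");
}

// Hypothetical chunk returned by the vector search route.
const prompt = buildPrompt(
  ["## Install\nRun `npm install mongodb`."],
  "How do I get started with MongoDB and Node.js?"
);
console.log(prompt);
```

The LLM then answers only from the injected context, which is what keeps the response grounded in the documentation rather than in the model's static training data.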

10. Conclusion and Key Takeaways

Short description:

If you are unsure and the answer is not explicitly written in the documentation, say, sorry, I don't know how to help with that. We replaced the last user's message with our updated message that includes our custom information and context. The question is, how do I get started with MongoDB and Node.js? It answers with a summary and links to specific pages in the documentation for further help. If you want to build something like this from start to finish, be sure to attend my workshop. We covered a lot of ground today from understanding the role of AI in modern applications to its seamless integration with React using GPT models and RAG architecture. Remember, the future is not just about smarter AI, but also about how well it's integrated into user-centric platforms, like your next React-based project. Give MongoDB Vector Search a try in your next AI application. Thank you, and I hope this was helpful.

If you are unsure and the answer is not explicitly written in the documentation, say, sorry, I don't know how to help with that. And finally, we replaced the last user's message with our updated message that includes our custom information and context. That then gets passed to the LLM and we receive our response back.

And these are the results. The question is, how do I get started with MongoDB and Node.js? It answers with a summary and links to specific pages in the documentation for further help.

Now, if you want to build something like this from start to finish, be sure to attend my workshop. You can get more details on how to sign up for that on the conference website.

And we covered a lot of ground today, from understanding the role of AI in modern applications to its seamless integration with React using GPT models and RAG architecture. We explored the limitations of large language models and how you can overcome them using real-time and custom data. We looked at a great tech stack that seamlessly integrates AI into your application using Next.js, OpenAI, LangChain, the Vercel AI SDK, and MongoDB Vector Search. And these are not just buzzwords. These are tools that can revolutionize how we think about building smart applications.

Now, remember, the future is not just about smarter AI, but also about how well it's integrated into user-centric platforms, like your next React-based project. So be sure to give MongoDB Vector Search a try in your next AI application. Thank you, and I hope this was helpful.

Jesse Hall
22 min
15 Nov, 2023


Check out more articles and videos

We constantly think of articles and videos that might spark Git people interest / skill us up or help building a stellar career

Building a Voice-Enabled AI Assistant With Javascript
JSNation 2023JSNation 2023
21 min
Building a Voice-Enabled AI Assistant With Javascript
Top Content
This Talk discusses building a voice-activated AI assistant using web APIs and JavaScript. It covers using the Web Speech API for speech recognition and the speech synthesis API for text to speech. The speaker demonstrates how to communicate with the Open AI API and handle the response. The Talk also explores enabling speech recognition and addressing the user. The speaker concludes by mentioning the possibility of creating a product out of the project and using Tauri for native desktop-like experiences.
AI and Web Development: Hype or Reality
JSNation 2023JSNation 2023
24 min
AI and Web Development: Hype or Reality
Top Content
This talk explores the use of AI in web development, including tools like GitHub Copilot and Fig for CLI commands. AI can generate boilerplate code, provide context-aware solutions, and generate dummy data. It can also assist with CSS selectors and regexes, and be integrated into applications. AI is used to enhance the podcast experience by transcribing episodes and providing JSON data. The talk also discusses formatting AI output, crafting requests, and analyzing embeddings for similarity.
The Rise of the AI Engineer
React Summit US 2023React Summit US 2023
30 min
The Rise of the AI Engineer
Watch video: The Rise of the AI Engineer
The rise of AI engineers is driven by the demand for AI and the emergence of ML research and engineering organizations. Start-ups are leveraging AI through APIs, resulting in a time-to-market advantage. The future of AI engineering holds promising results, with a focus on AI UX and the role of AI agents. Equity in AI and the central problems of AI engineering require collective efforts to address. The day-to-day life of an AI engineer involves working on products or infrastructure and dealing with specialties and tools specific to the field.
The Ai-Assisted Developer Workflow: Build Faster and Smarter Today
JSNation US 2024JSNation US 2024
31 min
The Ai-Assisted Developer Workflow: Build Faster and Smarter Today
AI is transforming software engineering by using agents to help with coding. Agents can autonomously complete tasks and make decisions based on data. Collaborative AI and automation are opening new possibilities in code generation. Bolt is a powerful tool for troubleshooting, bug fixing, and authentication. Code generation tools like Copilot and Cursor provide support for selecting models and codebase awareness. Cline is a useful extension for website inspection and testing. Guidelines for coding with agents include defining requirements, choosing the right model, and frequent testing. Clear and concise instructions are crucial in AI-generated code. Experienced engineers are still necessary in understanding architecture and problem-solving. Energy consumption insights and sustainability are discussed in the Talk.
Web Apps of the Future With Web AI
JSNation 2024
32 min
Web AI in JavaScript allows for running machine learning models client-side in a web browser, offering advantages such as privacy, offline capabilities, low latency, and cost savings. Various AI models can be used for tasks like background blur, text toxicity detection, 3D data extraction, face mesh recognition, hand tracking, pose detection, and body segmentation. JavaScript libraries like MediaPipe LLM inference API and Visual Blocks facilitate the use of AI models. Web AI is in its early stages but has the potential to revolutionize web experiences and improve accessibility.
Code coverage with AI
TestJS Summit 2023
8 min
Codium is a generative AI assistant for software development that offers code explanation, test generation, and collaboration features. It can generate tests for a GraphQL API in VS Code, improve code coverage, and even document tests. Codium allows analyzing specific code lines, generating tests based on existing ones, and answering code-related questions. It can also provide suggestions for code improvement, help with code refactoring, and assist with writing commit messages.

Workshops on related topic

AI on Demand: Serverless AI
DevOps.js Conf 2024
163 min
Top Content
Featured Workshop (Free)
Nathan Disidore
In this workshop, we discuss the merits of serverless architecture and how it can be applied to the AI space. We'll explore options for building serverless RAG applications, a more lambda-esque approach to AI. Then we'll get hands-on and build a sample CRUD app that lets you store information and query it using an LLM, with Workers AI, Vectorize, D1, and Cloudflare Workers.
AI for React Developers
React Advanced 2024
142 min
Featured Workshop
Eve Porcello
Knowledge of AI tooling is critical for future-proofing the careers of React developers, and the Vercel suite of AI tools is an approachable on-ramp. In this course, we’ll take a closer look at the Vercel AI SDK and how this can help React developers build streaming interfaces with JavaScript and Next.js. We’ll also incorporate additional 3rd party APIs to build and deploy a music visualization app.
Topics:
- Creating a React Project with Next.js
- Choosing an LLM
- Customizing Streaming Interfaces
- Building Routes
- Creating and Generating Components
- Using Hooks (useChat, useCompletion, useActions, etc.)
Leveraging LLMs to Build Intuitive AI Experiences With JavaScript
JSNation 2024
108 min
Featured Workshop
Roy Derks
Shivay Lamba
2 authors
Today every developer is using LLMs in different forms and shapes, from ChatGPT to code assistants like GitHub Copilot. Following this, lots of products have introduced embedded AI capabilities, and in this workshop we will make LLMs understandable for web developers. We'll also get into coding your own AI-driven application. No prior experience with LLMs or machine learning is needed. Instead, we'll use web technologies you already know and love, such as JavaScript and React, while also learning about some new libraries like the OpenAI SDK and Transformers.js.
LLMs Workshop: What They Are and How to Leverage Them
React Summit 2024
66 min
Featured Workshop
Nathan Marrs
Haris Rozajac
2 authors
Join Nathan in this hands-on session where you will first learn, at a high level, what large language models (LLMs) are and how they work. Then dive into an interactive coding exercise where you will implement LLM functionality in a basic example application. During this exercise you will get a feel for key skills for working with LLMs in your own applications, such as prompt engineering, and gain exposure to OpenAI's API.
After this session you will have insights around what LLMs are and how they can practically be used to improve your own applications.
Table of contents:
- Interactive demo implementing basic LLM-powered features in a demo app
- Discuss how to decide where to leverage LLMs in a product
- Lessons learned around integrating with OpenAI / overview of the OpenAI API
- Best practices for prompt engineering
- Common challenges specific to React (state management :D / good UX practices)
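One of the topics above, prompt engineering against the OpenAI API, comes down to building an array of role-tagged chat messages: a system prompt to set behavior, optional few-shot question/answer pairs to steer style, and the user's input last. A minimal illustrative helper (the function name, prompts, and defaults here are our own, not from the workshop):

```javascript
// Hypothetical helper that assembles messages in the OpenAI chat format.
function buildChatMessages(systemPrompt, fewShot, userInput) {
  const messages = [{ role: "system", content: systemPrompt }];
  // Each few-shot pair becomes a user question followed by a model answer.
  for (const [question, answer] of fewShot) {
    messages.push({ role: "user", content: question });
    messages.push({ role: "assistant", content: answer });
  }
  // The actual user input always comes last.
  messages.push({ role: "user", content: userInput });
  return messages;
}

const messages = buildChatMessages(
  "You are a concise assistant for a React docs site.",
  [["What is a hook?", "A function that lets components use React state and lifecycle features."]],
  "How do I fetch data in a component?"
);
console.log(messages.length); // → 4
```

The resulting array would be passed as the `messages` field of a chat completion request; keeping construction in one place like this makes prompts easy to test and iterate on.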
Working With OpenAI and Prompt Engineering for React Developers
React Advanced 2023
98 min
Top Content
Workshop
Richard Moss
In this workshop we'll take a tour of applied AI from the perspective of front end developers, zooming in on the emerging best practices when it comes to working with LLMs to build great products. This workshop is based on learnings from working with the OpenAI API from its debut last November to build out a working MVP, which became PowerModeAI (a customer-facing ideation and slide-creation tool).
The workshop will be a mix of presentation and hands-on exercises covering topics including:
- GPT fundamentals
- Pitfalls of LLMs
- Prompt engineering best practices and techniques
- Using the playground effectively
- Installing and configuring the OpenAI SDK
- Approaches to working with the API and prompt management
- Implementing the API to build an AI-powered, customer-facing application
- Fine-tuning and embeddings
- Emerging best practices in LLMOps
Building AI Applications for the Web
React Day Berlin 2023
98 min
Workshop
Roy Derks
Today every developer is using LLMs in different forms and shapes. Lots of products have introduced embedded AI capabilities, and in this workshop you’ll learn how to build your own AI application. No experience in building LLMs or machine learning is needed. Instead, we’ll use web technologies such as JavaScript, React and GraphQL which you already know and love.