Building Your Generative AI Application


Generative AI is exciting tech enthusiasts and businesses with its vast potential. In this session, we will introduce Retrieval Augmented Generation (RAG), a framework that provides context to Large Language Models (LLMs) without retraining them. We will guide you step-by-step in building your own RAG app, culminating in a fully functional chatbot.


Key Concepts: Generative AI, Retrieval Augmented Generation


Technologies: OpenAI, LangChain, AstraDB Vector Store, Streamlit, Langflow

This workshop was presented at React Summit 2024.

FAQ

The purpose of this online workshop is to build a chatbot using generative AI in a hands-on and interactive manner.

The workshop is hosted by Dieter, a solution engineer at DataStax.

The workshop introduces technologies such as generative AI, retrieval-augmented generation (RAG), vector stores, and the Langflow no-code environment.

AstraDB is used as a vector store to implement retrieval-augmented generation (RAG) capabilities for the chatbot.

Retrieval-augmented generation (RAG) is a technique that uses a vector store to retrieve context from a large set of documents to provide more accurate and context-aware responses from a large language model (LLM).

Vector search works by vectorizing text, audio, or video content into multi-dimensional vectors that capture the semantics of the content. These vectors are then stored and searched to find the most similar context to a query vector.
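The similarity idea behind vector search can be sketched in plain Python with toy three-dimensional vectors (real embedding models produce hundreds or thousands of dimensions; the numbers below are made up purely for illustration):

```python
import math

def cosine_similarity(a, b):
    """Cosine of the angle between two vectors: 1.0 means identical direction."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Toy embeddings for three pieces of clothing.
trouser = [0.9, 0.1, 0.2]
skirt   = [0.8, 0.2, 0.3]
tshirt  = [0.1, 0.9, 0.4]

# The trouser vector points in nearly the same direction as the skirt
# vector, so the two items are "semantically" closer to each other
# than the trouser is to the t-shirt.
print(cosine_similarity(trouser, skirt) > cosine_similarity(trouser, tshirt))  # True
```

A vector store applies the same comparison (usually via approximate nearest-neighbor indexes, so it scales to millions of vectors) between a query vector and every stored vector.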

Langflow is an open-source, no-code environment that allows users to implement generative AI applications without writing any code.

RAGStack is a curated set of dependencies provided by DataStax that packages various generative AI frameworks, ensuring compatibility and a high-quality implementation.

The chatbot can be deployed on the Streamlit platform, making it accessible to others and allowing for easy sharing and collaboration.

The prerequisites for the workshop include having a GitHub account, and signing up for services like AstraDB, OpenAI, and Streamlit.

Dieter Flick
82 min
06 Jun, 2024


Video Summary and Transcription

Welcome to this online workshop where we will build our own chatbot with generative AI. DataStax provides technologies for implementing generative AI applications, including AstraDB as a vector store for retrieval augmented generation. The build-your-own-RAG-chatbot repository contains application iterations, each adding functionality, culminating in a fully functional chatbot with streaming capabilities. Implementing the first application with Streamlit involves drawing a UI and integrating it with OpenAI chat models. Deploy the workshop chatbot application on Streamlit, and use Langflow to implement generative AI applications without coding.

1. Introduction to Workshop

Short description:

Welcome to this online workshop where we will build our own chatbot with generative AI. The agenda includes an introduction to generative AI, retrieval augmented generation, and a hands-on workshop to implement our own chatbot application. There is also an overview of Langflow, a no-code environment for generative AI applications.

Hey, welcome to this online workshop. Today we are going to build our own chatbot with generative AI. It is a hands-on workshop that we are going to do together, so feel free to open up your camera if you want, and let's try to make this as interactive as possible.

I'm Dieter. I'm a solution engineer, and I work for DataStax. I work with the technologies we have at DataStax, and I'm going to introduce a few of them in a second.

The agenda for today looks like this. First, a few slides to introduce you to the big picture of generative AI. Then I would like to introduce retrieval augmented generation: what it is, what you can do with it, and what it is good for. The main part, for sure, is the hands-on workshop; the slides are only a small part. We are going to implement our own chatbot application, and hopefully at the end of this workshop every one of you is proud of having implemented your own functional chatbot that you can show around and experiment with. As you can see, there is an additional agenda point. Today is about coding our chatbot application, but there are other ways to implement generative AI applications, and I would also like to introduce Langflow, an open source project and no-code environment that allows you to implement generative AI applications without a single line of code. But that is for the end. First, a few slides.

2. DataStax and Retrieval Augmented Generation

Short description:

DataStax provides technologies for implementing generative AI applications, including AstraDB as a vector store for retrieval augmented generation. Generative AI leverages large language models, but they do not have our private data, so we can use fine-tuning or retrieval augmented generation with a vector store. Vector search allows us to find context similar to a query vector in order to answer questions.

And then we do some work together, some coding. So, a few words about DataStax. DataStax is a real-time AI company: we provide technologies that allow you, and developers in general, to implement generative AI applications. Data equals AI, right?

And at the core of DataStax we have data management technologies, like our database in the cloud called AstraDB. We are going to leverage AstraDB today as our vector store. What a vector store is, I will explain in a moment, and if anything is unclear, please let me know.

So AstraDB, we use it as a vector store in order to implement our RAG capabilities, that is, retrieval augmented generation, and this is what you want to use as soon as you implement a chatbot that works with your private data. We will also discuss some libraries and frameworks during our hands-on work. So let's go to the next slide and set the foundation.

So this is about artificial intelligence, and a subdomain of artificial intelligence is generative AI. Generative AI got super famous over the last months because we all use it each day to generate content like text, audio, and video. Within generative AI we leverage large language models. There is not much to say about large language models; I believe every one of us has already worked with one. But there is one point I would like to stress.

So the large language model was trained with vast amounts of data, data that is publicly available. But it was not trained with our private data, right? And this is why an LLM will hallucinate if we use it directly, without providing some additional context about the situation we are in, about our products, about our services. That is why we implement this RAG chatbot today. So, as said, the LLM can be outdated; it was trained months back. It doesn't have our private data, and it will hallucinate if we ask something for which it doesn't really have the data to generate a response on point. And it might be insecure if we use it as a service. So there is no AI without data, and there are ways to provide the LLM with our context. The two ways I would like to explain are fine-tuning and retrieval augmented generation.

So one option is to fine-tune the large language model with our own context. There is some training involved, and it takes a while until the large language model is trained with our context. After that, the LLM is ready to answer questions based on our private context, but we would need to repeat this every time that private context is updated or changed. The other option is retrieval augmented generation, and this works with a vector store. We vectorize our context, manage the vectors and the context in a vector store, and retrieve out of the vector store. We can have millions of documents in there; from all these documents and all that context, we retrieve exactly the context that is required to answer our question. How it works, you will learn and practice in a second.
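The retrieval augmented generation flow just described can be sketched end to end. This is a minimal illustration, not the workshop code: `embed()` is a made-up bag-of-words stand-in for a real embedding model (such as OpenAI's), and the list of tuples stands in for a real vector store such as AstraDB; the resulting prompt is what would be sent to the chat model.

```python
# Minimal RAG sketch: vectorize documents, retrieve the most similar
# context for a question, and build the augmented prompt for the LLM.

def embed(text: str) -> list[float]:
    # Stand-in embedding: word counts over a tiny fixed vocabulary.
    # A real application would call an embedding model instead.
    vocab = ["return", "shipping", "warranty", "days"]
    words = text.lower().replace(".", " ").replace("?", " ").split()
    return [float(words.count(w)) for w in vocab]

def similarity(a: list[float], b: list[float]) -> float:
    # Dot product as a simple similarity measure.
    return sum(x * y for x, y in zip(a, b))

# 1. Ingest: store (vector, document) pairs in our toy "vector store".
documents = [
    "You can return a product within 30 days.",
    "Shipping is free for orders over 50 euros.",
    "The warranty covers defects for 2 years.",
]
store = [(embed(doc), doc) for doc in documents]

# 2. Retrieve: find the document most similar to the question vector.
question = "How many days do I have to return a product?"
query_vector = embed(question)
context = max(store, key=lambda pair: similarity(pair[0], query_vector))[1]

# 3. Augment: inject the retrieved context into the prompt for the LLM.
prompt = f"Answer using this context:\n{context}\n\nQuestion: {question}"
print(prompt)
```

Note that only the ingest step has to be rerun when the private context changes, which is the practical advantage over fine-tuning described above.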

So what is vector search? As you can see in this graph, we all learned about vectors in school, mainly in two-dimensional and three-dimensional vector spaces. We also use them in the generative AI world and in vector search. As you can see in the graph, we have objects like a trouser and a skirt, and they are close to each other. In the vector search world, that means both objects are more similar to each other than the trouser is to the t-shirt. This is how vector search works, and therefore a vector store implements algorithms to find the vectors that are similar to a query vector. We are going to leverage that to find the right context that our chatbot needs to answer a question. It looks like this: on the left-hand side, there is a text chunk.
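Before a text chunk can be vectorized, the source text has to be split into chunks. A character-based splitter with overlap, conceptually similar to what frameworks like LangChain provide (the real splitters are more sophisticated), can be sketched as:

```python
def chunk_text(text: str, chunk_size: int = 40, overlap: int = 10) -> list[str]:
    """Split text into overlapping chunks. Each chunk is small enough to
    embed as one vector; the overlap preserves context across boundaries."""
    chunks = []
    start = 0
    step = chunk_size - overlap
    while start < len(text):
        chunks.append(text[start:start + chunk_size])
        start += step
    return chunks

text = ("Retrieval augmented generation retrieves private context from a "
        "vector store and hands it to the large language model.")
for chunk in chunk_text(text):
    print(repr(chunk))
```

Each chunk would then be embedded, which is the vectorizing step on the slide, and stored together with its vector in the vector store.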

Watch more workshops on topic

AI on Demand: Serverless AI
DevOps.js Conf 2024
163 min
Top Content
Featured Workshop (Free)
Nathan Disidore
In this workshop, we discuss the merits of serverless architecture and how it can be applied to the AI space. We'll explore options around building serverless RAG applications for a more lambda-esque approach to AI. Next, we'll get hands on and build a sample CRUD app that allows you to store information and query it using an LLM with Workers AI, Vectorize, D1, and Cloudflare Workers.
Leveraging LLMs to Build Intuitive AI Experiences With JavaScript
JSNation 2024
108 min
Featured Workshop
Roy Derks, Shivay Lamba
Today every developer is using LLMs in different forms and shapes, from ChatGPT to code assistants like GitHub Copilot. Following this, lots of products have introduced embedded AI capabilities, and in this workshop we will make LLMs understandable for web developers. We'll get into coding your own AI-driven application. No prior experience with LLMs or machine learning is needed. Instead, we'll use web technologies such as JavaScript and React, which you already know and love, while also learning about new libraries like OpenAI and Transformers.js.
LLMs Workshop: What They Are and How to Leverage Them
React Summit 2024
66 min
Featured Workshop
Nathan Marrs, Haris Rozajac
Join Nathan in this hands-on session where you will first learn at a high level what large language models (LLMs) are and how they work. Then dive into an interactive coding exercise where you will implement LLM functionality into a basic example application. During this exercise you will get a feel for key skills for working with LLMs in your own applications such as prompt engineering and exposure to OpenAI's API.
After this session you will have insights around what LLMs are and how they can practically be used to improve your own applications.
Table of contents:
- Interactive demo implementing basic LLM-powered features in a demo app
- Discuss how to decide where to leverage LLMs in a product
- Lessons learned around integrating with OpenAI / overview of the OpenAI API
- Best practices for prompt engineering
- Common challenges specific to React (state management :D / good UX practices)
Working With OpenAI and Prompt Engineering for React Developers
React Advanced Conference 2023
98 min
Top Content
Workshop
Richard Moss
In this workshop we'll take a tour of applied AI from the perspective of front end developers, zooming in on the emerging best practices when it comes to working with LLMs to build great products. This workshop is based on learnings from working with the OpenAI API from its debut last November to build out a working MVP which became PowerModeAI (A customer facing ideation and slide creation tool).
In the workshop there'll be a mix of presentation and hands-on exercises to cover topics including:
- GPT fundamentals
- Pitfalls of LLMs
- Prompt engineering best practices and techniques
- Using the playground effectively
- Installing and configuring the OpenAI SDK
- Approaches to working with the API and prompt management
- Implementing the API to build an AI-powered customer-facing application
- Fine-tuning and embeddings
- Emerging best practices in LLMOps
Building AI Applications for the Web
React Day Berlin 2023
98 min
Workshop
Roy Derks
Today every developer is using LLMs in different forms and shapes. Lots of products have introduced embedded AI capabilities, and in this workshop you’ll learn how to build your own AI application. No experience in building LLMs or machine learning is needed. Instead, we’ll use web technologies such as JavaScript, React and GraphQL which you already know and love.
Can LLMs Learn? Let’s Customize an LLM to Chat With Your Own Data
C3 Dev Festival 2024
48 min
Workshop (Free)
Andreia Ocanoaia
Feeling the limitations of LLMs? They can be creative, but sometimes lack accuracy or rely on outdated information. In this workshop, we’ll break down the process of building and easily deploying a Retrieval-Augmented Generation system. This approach enables you to leverage the power of LLMs with the added benefit of factual accuracy and up-to-date information.

Check out more articles and videos

We constantly curate articles and videos that might spark your interest, skill you up, or help you build a stellar career.

Building a Voice-Enabled AI Assistant With JavaScript
JSNation 2023
21 min
Top Content
This Talk discusses building a voice-activated AI assistant using web APIs and JavaScript. It covers using the Web Speech API for speech recognition and the speech synthesis API for text to speech. The speaker demonstrates how to communicate with the Open AI API and handle the response. The Talk also explores enabling speech recognition and addressing the user. The speaker concludes by mentioning the possibility of creating a product out of the project and using Tauri for native desktop-like experiences.
AI and Web Development: Hype or Reality
JSNation 2023
24 min
Top Content
This talk explores the use of AI in web development, including tools like GitHub Copilot and Fig for CLI commands. AI can generate boilerplate code, provide context-aware solutions, and generate dummy data. It can also assist with CSS selectors and regexes, and be integrated into applications. AI is used to enhance the podcast experience by transcribing episodes and providing JSON data. The talk also discusses formatting AI output, crafting requests, and analyzing embeddings for similarity.
The Rise of the AI Engineer
React Summit US 2023
30 min
The rise of AI engineers is driven by the demand for AI and the emergence of ML research and engineering organizations. Start-ups are leveraging AI through APIs, resulting in a time-to-market advantage. The future of AI engineering holds promising results, with a focus on AI UX and the role of AI agents. Equity in AI and the central problems of AI engineering require collective efforts to address. The day-to-day life of an AI engineer involves working on products or infrastructure and dealing with specialties and tools specific to the field.
Web Apps of the Future With Web AI
JSNation 2024
32 min
Web AI in JavaScript allows for running machine learning models client-side in a web browser, offering advantages such as privacy, offline capabilities, low latency, and cost savings. Various AI models can be used for tasks like background blur, text toxicity detection, 3D data extraction, face mesh recognition, hand tracking, pose detection, and body segmentation. JavaScript libraries like MediaPipe LLM inference API and Visual Blocks facilitate the use of AI models. Web AI is in its early stages but has the potential to revolutionize web experiences and improve accessibility.
Building the AI for Athena Crisis
JS GameDev Summit 2023
37 min
Join Christoph from Nakazawa Tech in building the AI for Athena Crisis, a game where the AI performs actions just like a player. Learn about the importance of abstractions, primitives, and search algorithms in building an AI for a video game. Explore the architecture of Athena Crisis, which uses immutable persistent data structures and optimistic updates. Discover how to implement AI behaviors and create a class for the AI. Find out how to analyze units, assign weights, and prioritize actions based on the game state. Consider the next steps in building the AI and explore the possibility of building an AI for a real-time strategy game.
Code coverage with AI
TestJS Summit 2023
8 min
Codium is a generative AI assistant for software development that offers code explanation, test generation, and collaboration features. It can generate tests for a GraphQL API in VS Code, improve code coverage, and even document tests. Codium allows analyzing specific code lines, generating tests based on existing ones, and answering code-related questions. It can also provide suggestions for code improvement, help with code refactoring, and assist with writing commit messages.