Video Summary and Transcription
AI offers opportunities for React developers to code faster and automate tasks. Generative AI is a crucial area for developers to focus on. Working with AI APIs and RAG can open up new possibilities for projects. Orchestration frameworks and tools like LangChain and Relevance AI help chain tasks together and work with different AI models. AI is a supplement to human capabilities, and learning to code with AI can help developers push boundaries and become better.
1. Introduction to AI for React Developers
AI offers opportunities for React developers to code faster, build applications that anticipate user needs, and automate complex tasks. If you dream of launching your own SaaS or are considering a career shift, learning AI can be a great option. The learning path to become an AI engineer may seem overwhelming, but you don't need to learn everything. Generative AI is the crucial area to focus on, and you can start mastering it by making web requests to AI APIs.
Hello, I'm Alexandre Spallato, I'm a Developer Relations Engineer at Storyblok, and today we're diving into an exciting topic: AI for React developers. What are the opportunities, and most of all, how can you learn it? Is it possible to learn it as a React or JavaScript developer? With AI, as you certainly know, you can code faster with applications like Copilot, and you can also use it for learning or to explore documentation, but there is much more in it for you if you decide to learn AI.
You can use AI to build applications that anticipate user needs, personalize experiences, or automate complex tasks. So if you dream of launching your own SaaS, this is the way. If you're considering a career shift, the demand for AI engineers is going to be huge, and as JavaScript developers, your skills are a strong foundation to build on.
Okay, so what is the learning path to become an AI engineer? I can see you are overwhelmed, because if you search on Google, or even ask ChatGPT, how to become an AI engineer, you might think you need to learn data science, machine learning, mathematics, Python, et cetera. And yes, it would be awesome to learn all that, if we had the time. As you can see here, each step is a subset of the previous one, and now we are here, in generative AI. This is what we need to master. This is a screenshot from Latent Space by swyx, and it says we are observing a once-in-a-generation shift in applied AI, fueled by the emergent capabilities and open-source/API availability of foundation models. It means we are here, while machine learning and data science are over there; those are research engineers. So if you can make a web request to interact with an API, you are already on the right path.
2. Working with AI APIs and RAG
AI is a big umbrella for all the tech that makes machines act like they have a brain. Machine learning is a part of AI where computers learn from data. Large language models such as GPT are specialized machine learning models focused on understanding and generating text. With APIs, we can ask AI systems to perform tasks for us. Some big names in this space include OpenAI with GPT, Anthropic with Claude, Google with Gemini, and Hugging Face. Experimenting with AI APIs can open up new possibilities for projects and products. The Vercel AI SDK can streamline work and minimize boilerplate code. RAG (Retrieval-Augmented Generation) allows models to be augmented with additional data so responses are generated from relevant context.
So first up, let's get the buzzwords down. AI is a big umbrella for all the tech that makes machines act like they have a brain. Machine learning is a part of AI where computers learn from data. Large language models such as GPT are specialized machine learning models focused on understanding and generating text. Understanding these basics helps us see what AI can do and gives us a starting point for diving deeper into the world of AI engineering.
So now let's talk about working with APIs. With APIs, we can ask these AI systems to do tasks for us. Some of the big names in the space include OpenAI with GPT, Anthropic with Claude, Google with Gemini, and Hugging Face, which offers a wide range of open-source AI models. So first, try using these APIs to see what you can create and improve in your projects. It's not just about using them, but about understanding how they can change the way we build software. Experimenting with these AI APIs can open up new possibilities for your projects and products. I also recommend exploring the OpenAI documentation thoroughly and understanding how the Assistants API works, as well as function calling. Function calling allows you to connect large language models with external tools.
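To make "making a web request to an AI API" concrete, here is a minimal sketch of calling an OpenAI-style chat completions endpoint with plain `fetch`. The model name and endpoint shape follow OpenAI's public chat completions API, but treat them as illustrative and check the provider's documentation for your account.

```javascript
// Sketch: calling an OpenAI-style chat completions endpoint with plain fetch.
const API_URL = "https://api.openai.com/v1/chat/completions";

// Build the request payload separately so it can be inspected and tested.
function buildChatRequest(userMessage, model = "gpt-4o-mini") {
  return {
    model,
    messages: [
      { role: "system", content: "You are a helpful assistant." },
      { role: "user", content: userMessage },
    ],
  };
}

// Send the request; expects an API key in the OPENAI_API_KEY env variable.
async function askModel(userMessage) {
  const res = await fetch(API_URL, {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      Authorization: `Bearer ${process.env.OPENAI_API_KEY}`,
    },
    body: JSON.stringify(buildChatRequest(userMessage)),
  });
  if (!res.ok) throw new Error(`API error: ${res.status}`);
  const data = await res.json();
  // The assistant's reply lives in the first choice's message.
  return data.choices[0].message.content;
}
```

If you can write this, you already have the core skill the talk describes: everything else is knowing which payloads to send.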
So depending on the user query, the model will call one tool or another. If we have two tools, one for checking the weather and one for sending an email, and you ask how you should dress in Madrid today, it will invoke the check-weather tool; if you ask it to send an email, it will of course use the other one. Also, to streamline your work, you can use the Vercel AI SDK, which is compatible with Next, Nuxt, Svelte, Solid, etc. It has a unified API that standardizes interaction with the various AI models and minimizes boilerplate code. Then you need to understand RAG, Retrieval-Augmented Generation.
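The dispatch step of function calling can be sketched in a few lines: the model replies with a tool name plus JSON-encoded arguments, and your code routes the call. The tool names and argument shapes below are invented for the weather/email example; a real integration would register these tools with the provider's API.

```javascript
// Local stub implementations of the two tools from the example.
const tools = {
  check_weather: ({ city }) => `Sunny and 24°C in ${city}`,
  send_email: ({ to, body }) => `Email sent to ${to}: ${body}`,
};

// `toolCall` mimics the { name, arguments } object an LLM returns
// when it decides a tool should be invoked.
function dispatchToolCall(toolCall) {
  const fn = tools[toolCall.name];
  if (!fn) throw new Error(`Unknown tool: ${toolCall.name}`);
  // Arguments arrive as a JSON string and must be parsed before the call.
  return fn(JSON.parse(toolCall.arguments));
}

// "How should I dress in Madrid today?" → the model picks check_weather.
const result = dispatchToolCall({
  name: "check_weather",
  arguments: JSON.stringify({ city: "Madrid" }),
});
console.log(result); // "Sunny and 24°C in Madrid"
```

The model never executes anything itself; it only names the tool, and your code stays in control of what actually runs.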
With RAG, we can augment the model with additional data. For example, if you are building a customer service chatbot, you will first need to feed the model with data about the company. This allows the model to use this external information by retrieving it, and from there it generates the answer. So let's see how this works. First, the information is broken into chunks, because LLMs have a limited number of tokens, so they cannot search the whole corpus at once. Depending on the question, we find the semantically similar chunks and apply them as context for the query. These chunks are transformed into embeddings, which means converting the data into vectors, arrays of numbers that the machine can understand. The embeddings are stored in a vector database; the query is also transformed into a vector and used to search the database by semantic similarity, so the response is generated from the most similar context. So this is how it works: a user is chatting, and the user query is sent to the embedder and transformed into a vector. The same happens with the data source: the knowledge base is transformed into embeddings and stored in the vector database, and then the large language model finds the most similar context and generates an answer to the query. Simple.
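The chunk → embed → search-by-similarity flow can be sketched end to end. The "embedder" below is a toy bag-of-words vectorizer standing in for a real embedding model, and the in-memory search stands in for a vector database; the knowledge-base sentences and vocabulary are invented for illustration. The point is the flow, not the quality of the vectors.

```javascript
// Break a document into fixed-size word chunks (real pipelines use
// token-aware splitters with overlap).
function chunkText(text, maxWords = 20) {
  const words = text.split(/\s+/);
  const chunks = [];
  for (let i = 0; i < words.length; i += maxWords) {
    chunks.push(words.slice(i, i + maxWords).join(" "));
  }
  return chunks;
}

// Toy embedding: a 0/1 vector over a tiny vocabulary. A real system would
// call an embeddings API and get a dense vector instead.
function embed(text, vocab) {
  const lower = text.toLowerCase();
  return vocab.map((w) => (lower.includes(w) ? 1 : 0));
}

// Cosine similarity: how close two vectors point in the same direction.
function cosineSimilarity(a, b) {
  const dot = a.reduce((s, x, i) => s + x * b[i], 0);
  const norm = (v) => Math.sqrt(v.reduce((s, x) => s + x * x, 0));
  return dot / (norm(a) * norm(b) || 1);
}

// Retrieve the chunk most similar to the query; this is the job a vector
// database does at scale.
function retrieve(query, chunks, vocab) {
  const qVec = embed(query, vocab);
  let best = { chunk: null, score: -1 };
  for (const chunk of chunks) {
    const score = cosineSimilarity(qVec, embed(chunk, vocab));
    if (score > best.score) best = { chunk, score };
  }
  return best.chunk;
}

const vocab = ["refund", "shipping", "warranty"];
const knowledgeBase = [
  "Refunds are processed within 5 business days.",
  "Shipping takes 2 to 4 days within Europe.",
];
const chunks = knowledgeBase.flatMap((doc) => chunkText(doc));
console.log(retrieve("How long does a refund take?", chunks, vocab));
// → "Refunds are processed within 5 business days."
```

The retrieved chunk would then be pasted into the LLM prompt as context, which is the "augmented generation" half of RAG.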
3. Building Real-world Applications with AI
To build a real-world application, you need to understand user preferences, find destinations, check weather, and suggest travel options. Orchestration frameworks like LangChain and LlamaIndex help chain tasks together and work with different LLM APIs. Tools like Flowise and Relevance AI offer graphical interfaces and APIs for frontend development. As developers, our ability to understand deeply, innovate bravely, and solve problems creatively cannot be replicated by AI. AI is a supplement to our human capabilities. Learning to code with AI can help us push the boundaries of what's possible and become better developers.
Now that you understand that, know that a real-world application will not be just one prompt to an LLM. You will need several tasks. Imagine you are building an app to plan vacations: you need to understand user preferences, find destinations, check the weather, and suggest travel options. This is where orchestration frameworks like LangChain or LlamaIndex come into play. They help us chain different tasks together, provide methods to chunk, retrieve, embed, generate, etc., and also allow us to work with different LLM APIs.
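The vacation-planner chain above can be sketched as a pipeline where each step's output feeds the next, which is essentially what an orchestration framework automates. The steps here are synchronous stubs with invented data; real steps would be async calls to LLMs and external APIs.

```javascript
// Each step receives the accumulated context and returns an enriched copy.
const steps = [
  (ctx) => ({ ...ctx, preferences: { climate: "warm" } }),          // understand preferences
  (ctx) => ({ ...ctx, destination:                                   // find a destination
      ctx.preferences.climate === "warm" ? "Valencia" : "Oslo" }),
  (ctx) => ({ ...ctx, weather: `sunny in ${ctx.destination}` }),     // check the weather
  (ctx) => ({ ...ctx, suggestion:                                    // suggest travel options
      `Go to ${ctx.destination}, it's ${ctx.weather}.` }),
];

// Run the chain: fold each step over the context, left to right.
function runChain(initialContext) {
  return steps.reduce((ctx, step) => step(ctx), initialContext);
}

console.log(runChain({ query: "Plan my vacation" }).suggestion);
// → "Go to Valencia, it's sunny in Valencia."
```

Frameworks like LangChain add retries, streaming, memory, and model-swapping on top, but the underlying idea is this kind of composition.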
You can also use tools like Flowise, which is a graphical user interface on top of LangChain that then exposes an API for you. There are also tools like Relevance AI that provide you with APIs if you don't want to do backend work. With these APIs you get the data, and you can build your frontend as React engineers.
So in conclusion, natural language is emerging as a new tool of our trade, but let's not forget that the essence of being a developer cannot be replicated by AI. Our ability to understand deeply, innovate bravely, and solve problems creatively is what makes us uniquely human. As we embrace AI, remember that it's not a replacement but a supplement to our human capabilities. So as React developers, you can totally learn these skills and continue to push the boundaries of what's possible by learning to code with AI and to use it to build applications. AI can support us in becoming the best developers we can be.
So here you have some links, even more than the ones I've talked about, to the applications you can use. I hope you liked this talk. Thank you, and here is a QR code to get the slides. Bye.