Hi, everyone. Have you had your lunch? Are you awake, or sleepy? Okay, because I don't have real coffee here, I hope you've already had your coffee. If not, I'm sorry, but this is going to be the most boring talk of your life. No, I really hope not. Anyway, first and foremost, my name is Maya Chavin. I'm a senior software engineer at Microsoft, working on a team called Microsoft Industrial AI, where we leverage different AI technologies to build AI-integrated solutions and applications for specific industries.
Sorry, I lost my voice during the flight, so I don't know what happened. If it's hard for you to understand me, I'm really sorry, and if you want to understand me better, please feel free to contact me after the talk, okay? As the introduction said, I've been working with the web, JavaScript, and TypeScript, but today's talk has nothing to do with TypeScript or JavaScript. It's about AI. First and foremost, how many people here are working with AI or generative AI? Okay, so we can skip this slide.
Anyway, for people who don't know about generative AI, or maybe know the term but never had a chance to experience it: generative AI is AI that can generate text and media from a variety of input data, which we call prompts (basically text, or now even images for it to analyze), and it also learns from its training data. That's what our talk is based on. We'll look at what the core model is for an LLM-based generative AI service, and we'll focus on how to use that model to define the core flow of a very simple service: document Q&A, which you'll find a hundred times when you Google "document Q&A using AI". In this talk, we'll learn a bit more about the flow behind it, what kind of service we can use for each component in that flow, and finally how we can enhance and extend the service using prompts and other techniques we should pay attention to when developing document Q&A as a generic service. Okay.
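To make the document Q&A flow concrete, here is a minimal sketch in Python. All of the function names here are my own, and the retrieval step is faked with simple word-overlap scoring so the example runs without any external service; a real implementation would call an embeddings API and a vector store for the relevance step.

```python
# Simplified document Q&A pipeline:
# 1. split the document into chunks, 2. score each chunk against the question,
# 3. put the most relevant chunk into a prompt for the LLM.

def split_into_chunks(document: str, chunk_size: int = 50) -> list[str]:
    """Split a document into fixed-size word chunks."""
    words = document.split()
    return [" ".join(words[i:i + chunk_size])
            for i in range(0, len(words), chunk_size)]

def score(question: str, chunk: str) -> int:
    """Toy relevance score: shared words (stand-in for embedding similarity)."""
    return len(set(question.lower().split()) & set(chunk.lower().split()))

def build_prompt(question: str, chunks: list[str]) -> str:
    """Pick the most relevant chunk and wrap it in a prompt for the LLM."""
    best = max(chunks, key=lambda c: score(question, c))
    return (f"Answer the question using only this context:\n{best}\n\n"
            f"Question: {question}")

doc = ("The invoice total is 250 euros. Payment is due within 30 days. "
       "Contact billing for disputes.")
prompt = build_prompt("What is the invoice total?",
                      split_into_chunks(doc, chunk_size=8))
print(prompt)
```

In a real service, the final prompt would then be sent to the LLM's completion API, which is the only step the pipeline cannot fake locally.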
But first and foremost: LLMs. How many people here are working with an LLM, any model? Which LLM do you use? GPT? Text embeddings? DALL·E? Raise your hand. Come on, I believe you've already had coffee, right? Anyway, just a recap: an LLM, as a service, is a large language model that can process human input. It's also capable of training on its own data, whether supervised or unsupervised, and it works with tokens. And the nice thing about an LLM is that it provides you a set of APIs as a black box, which lets developers build AI applications more simply and more directly than before. Okay. Some of the LLM providers you can see here: OpenAI, Google, Microsoft, Meta, Anthropic, Hugging Face. Nothing new here.
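Here is what "a set of APIs as a black box" looks like in practice: the developer only assembles a small JSON payload and sends it over HTTPS; the model itself stays hidden behind the endpoint. The field names below follow the OpenAI chat completions API (the model name is just an example), and other providers use very similar shapes.

```python
import json

payload = {
    "model": "gpt-4o-mini",  # example model name; use whichever your provider offers
    "messages": [
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Summarize this invoice in one sentence."},
    ],
    "temperature": 0.2,  # lower values make answers more deterministic
}

# In a real app you would POST this to the provider's endpoint with your API key,
# e.g. https://api.openai.com/v1/chat/completions -- omitted here so the sketch
# runs offline.
print(json.dumps(payload, indent=2))
```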
So we said LLMs work with tokens, right? What exactly is a token? To put it simply, a token is just a piece of a word, which means every single word in a sentence has to be translated into tokens. And to count the tokens, there are calculators we can use, called token counters, like the application right here: you write your text into it, and it shows you how many tokens it will cost to pass that string to the AI.
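Real tokenizers (for example the `tiktoken` package for OpenAI models) split text into subword pieces, but for quick estimates a common rule of thumb for English text is roughly 4 characters per token. This sketch uses only that heuristic, so treat the result as a ballpark figure, not what a real token counter would bill you.

```python
def estimate_tokens(text: str) -> int:
    """Rough token estimate using the ~4 characters per token rule of thumb."""
    return max(1, round(len(text) / 4))

sentence = "Large language models process text as tokens, not words."
print(estimate_tokens(sentence))
```

For anything cost-sensitive, run the actual tokenizer for your model instead of the heuristic, since tokenization varies between models and languages.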