Video Summary and Transcription
Today we're going to build a React application with just our voice using GitHub Copilot, an AI pair programmer powered by OpenAI Codex. It's important to be specific in your comments to get accurate suggestions from Copilot. Prompt engineering tips can be used to create different applications, such as a basic markdown editor and a simple to-do app. The application was tested successfully by adding and deleting to-do items using voice commands.
1. Introduction to GitHub Copilot
Today we're going to build a React application with just our voice. GitHub Copilot is an AI pair programmer that helps you code faster. It draws context from your comments and code to suggest lines and functions. It's powered by OpenAI Codex, a machine learning model that translates natural language into code. Prompt engineering is the practice of using prompts to get the desired output.
Hey, folks, today we're going to build a React application with just our voice. Super excited, but I'm just going to give you a little bit of background before we dive into that. We're going to talk about GitHub Copilot, prompt engineering tips with GitHub Copilot, and then we're going to use our voices to build a React app.
You might be wondering, who am I? Who's this awesome person who can build apps with their voice? My name is Rizèl Scarlett, I'm a developer advocate at GitHub. I have a background in software engineering and teaching underrepresented groups how to code. I'm also addicted to social media, so if you want to connect with me, my handle is @blackgirlbytes on most platforms, including Bluesky and Mastodon, so find me on there.
Okay. GitHub Copilot, what is it? It's an AI pair programmer that helps you code faster with less work. And so it feels like magic to me, just saying. It feels like it can read my mind, but honestly, it can't do either of those things, and it doesn't always spit out perfectly well-written, up-to-date code. Maybe sometimes it's referencing an older API, which is why I always encourage people to go back and review the code that is generated, just as you would with a human pair programmer.
Okay, so what's happening if it's not magic? Under the hood, GitHub Copilot is drawing context from your comments and your code and suggesting individual lines and whole functions instantly. It's powered by OpenAI Codex. You may be familiar with OpenAI because of ChatGPT or DALL-E, the AI image generation tool, but Codex is another thing they created. It's a machine learning model that translates natural language into code. Some background on Codex: it's a later version of GPT-3, which stands for Generative Pre-trained Transformer 3, a model that uses deep learning to produce human-like text. A lot of our favorite apps, like Duolingo, use GPT-3 for grammar correction. The biggest difference between GPT-3 and Codex is that Codex has been fine-tuned for programming. So this is what's powering GitHub Copilot.

Here's an example of GitHub Copilot in action. We have Melmykdev from Twitter, who is using GitHub Copilot to translate the strings 'answer', 'question', and 'date'. All they do is provide context through the language code, so FR for French, JP for Japanese, ES for Spanish, and GitHub Copilot takes that context and translates it into the right words.

So maybe you're like, I've used GitHub Copilot and it's not working for me. Let me introduce you to the concept of prompt engineering. That's the practice of using prompts to get the output that you want. Here are my top three tips for prompt engineering. First, give it a high-level task description. At the top of the file, describe the app's purpose so that it knows what it's doing. This is especially helpful if you're starting from a clean slate: there's no code in the file, so GitHub Copilot has no context.
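That translation demo might have looked roughly like the sketch below; the exact object shape and strings are assumptions, since the talk only describes the pattern of seeding a few entries and letting Copilot complete the rest.

```js
// Hypothetical reconstruction of the translation demo: after a couple of
// completed entries, Copilot infers the pattern from the language codes
// and suggests the remaining translations.
const strings = {
  en: { answer: 'Answer', question: 'Question', date: 'Date' },
  // FR for French -- Copilot fills in the French words
  fr: { answer: 'Réponse', question: 'Question', date: 'Date' },
  // ES for Spanish
  es: { answer: 'Respuesta', question: 'Pregunta', date: 'Fecha' },
  // JP for Japanese
  jp: { answer: '回答', question: '質問', date: '日付' },
};
```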
2. Providing Examples and Iterating
In machine learning models, there's a concept called few-shot learning, where you provide examples to refine the model's output. It's important to be specific in your comments to get accurate suggestions from GitHub Copilot.
So you have to give it some context via a comment. Also, provide examples, right? Machine learning models learn from examples. There's this concept of few-shot learning, where you feed your model examples and it gets a more refined idea of what it should be outputting based on certain inputs. And then also, iterate for specificity. Maybe you wrote a really vague comment that says get data, but GitHub Copilot's like, get what data? Tell it that you want to get the user data, or the user IDs that belong to this specific individual, or whatever. Get more specific. Go ahead and delete that comment that you wrote, delete the suggestion, and iterate on it.
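To make those two tips concrete, here's an illustrative before-and-after. Everything in it is invented for the example: the function names, the sample inputs, and the endpoint are not from the talk.

```js
// Few-shot style: a couple of worked examples in a comment give Copilot
// a pattern to imitate (the function and its examples are hypothetical).
// toKebabCase('Hello World')    -> 'hello-world'
// toKebabCase('GitHub Copilot') -> 'github-copilot'
function toKebabCase(str) {
  return str.trim().toLowerCase().replace(/\s+/g, '-');
}

// Iterating for specificity: a vague comment like "get data" leaves
// Copilot guessing, so name the source and the result instead.
// Fetch the IDs of the users belonging to a given account
// (the /api/accounts endpoint is made up for this sketch):
async function getUserIds(accountId) {
  const response = await fetch(`/api/accounts/${accountId}/users`);
  const users = await response.json();
  return users.map((user) => user.id);
}
```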
3. Using Prompt Engineering and Copilot Voice
Here's an example of using prompt engineering tips to create a basic markdown editor. GitHub Copilot can help with typing, making coding more accessible. Now, let's use Copilot Voice to create a simple to-do app. It can generate functions and useState variables. We'll also render input forms and lists of to-do items.
Okay, so here's an example of me using these prompt engineering tips that I just shared with you. At the top here, you can see that I wrote a high-level task description: I want to create a basic markdown editor, I want to use React hooks, and I want to have a live preview of the markdown text. And I get all that, but it's not exactly what I wanted. So I delete some parts of it and I write a comment that says I want to have a default value for my useState variable, and I want that value to say 'type markdown here'. Then I try it out and it works pretty well. I got headings, I got bullets, I got bold, I got italics, I got links, it's working great.
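For reference, here's a minimal sketch of what that generated markdown editor might look like. The talk doesn't say which rendering library the suggested code used, so the marked package here is an assumption.

```jsx
// Minimal sketch of the markdown editor from the demo.
// Assumption: the 'marked' package renders the live preview.
import { useState } from 'react';
import { marked } from 'marked';

export default function MarkdownEditor() {
  // Default value for the useState variable, as dictated in the demo
  const [markdown, setMarkdown] = useState('type markdown here');

  return (
    <div>
      <textarea
        value={markdown}
        onChange={(e) => setMarkdown(e.target.value)}
      />
      {/* Live preview of the markdown text */}
      <div dangerouslySetInnerHTML={{ __html: marked(markdown) }} />
    </div>
  );
}
```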
But let's move into the voice part. Why does that matter, right? GitHub Copilot can help you brainstorm, jog your memory, and retain focus, but it can also help you type. And you might be like, why do I need help typing? Maybe you have physical dexterity issues, maybe you have a visual impairment. As we all get older, our vision will decline, so this is something that's helpful in terms of coding accessibility. So let's dive into it. I'm gonna use Copilot Voice to create a simple to-do app. Let's get myself set up here. Okay. Go to line 16. Create a state variable for new to-do items. Create a function to handle adding new to-do items. Create a function for deleting to-do items. Save file.
So we see here that it was able to create a couple of functions for us, and a few state variables, I should say. I just want to move over here for time's sake and get it to render a few elements for us. Create an input form to add to-do items. Create an unordered list of to-do items with a button to delete each item. Save file. So as you can see, this is not doctored, because there's a little bit of an error here. We don't really want the comments here. But that's all right. We're doing this live. All right. The next thing I want to do is tell it to run the program. Run program.
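Pieced together from the voice commands above, the component Copilot Voice produced probably looked something like this sketch. The handler names and exact markup are assumptions; the talk only shows the result on screen.

```jsx
// Sketch of the voice-built to-do app: state for the list and the input,
// handlers for adding and deleting, an input form, and an unordered list
// with a delete button per item.
import { useState } from 'react';

export default function TodoApp() {
  const [todos, setTodos] = useState([]);
  const [newTodo, setNewTodo] = useState('');

  // Handle adding new to-do items
  const handleAddTodo = (e) => {
    e.preventDefault();
    if (!newTodo.trim()) return;
    setTodos([...todos, newTodo]);
    setNewTodo('');
  };

  // Handle deleting a to-do item by its index
  const handleDeleteTodo = (index) => {
    setTodos(todos.filter((_, i) => i !== index));
  };

  return (
    <div>
      {/* Input form to add to-do items */}
      <form onSubmit={handleAddTodo}>
        <input
          value={newTodo}
          onChange={(e) => setNewTodo(e.target.value)}
        />
        <button type="submit">Add</button>
      </form>
      {/* Unordered list of to-do items with a delete button per item */}
      <ul>
        {todos.map((todo, index) => (
          <li key={index}>
            {todo}
            <button onClick={() => handleDeleteTodo(index)}>Delete</button>
          </li>
        ))}
      </ul>
    </div>
  );
}
```

Deleting by index keeps the sketch minimal; a production version would give each item a stable id to use as the list key.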
4. Testing the Application and Conclusion
The application is now live on localhost:3000. I tested it by adding and deleting to-do items using my voice. To learn more about GitHub Copilot, visit github.com/features/copilot. Thank you to React Summit for the opportunity to speak.
Awesome. So now this is live on localhost:3000. So I can move over to my browser. I'm going to refresh. And we do have each to-do item here. I want to just try it out to see if it works. Speak at React Summit. And let's try to delete. So it does work. We were able to build a simple application with our voices.
If you wanted to learn more about GitHub Copilot, you can go on over to github.com/features/copilot. I want to thank React Summit for allowing me to speak. And thank you for listening. Bye.