WHOA, I Wrote This React App With My Voice!


Have you ever imagined writing code without even touching the keyboard? In this talk, I'll show you how I wrote a React app with my voice. But more importantly, I'll demonstrate how this technology can empower developers with disabilities to write code with ease and efficiency. Together, we'll explore the ways in which voice-activated AI assistants can revolutionize how we think about coding. Join me as we dive into the exciting possibilities of voice-activated AI programming in the React community and the ways it can make React coding more accessible and fun for everyone.

This talk has been presented at React Summit 2023. Check out the latest edition of this React conference.


FAQ

GitHub Copilot is an AI peer programmer that helps you code faster by suggesting individual lines and whole functions based on the context provided by your comments and code. It is powered by OpenAI Codex, a machine learning model designed to translate natural language into code.

Rizel Scarlett is a developer advocate at GitHub with a background in software engineering and teaching underrepresented groups how to code. Rizel is also active on social media with the handle @BlackGirlBytes.

GitHub Copilot can be utilized for voice-driven programming by using voice commands to dictate code changes and actions, such as creating functions or managing state variables in a React application. This makes coding more accessible for individuals with limited physical dexterity or visual impairments.

Effective prompt engineering tips for GitHub Copilot include providing a high-level task description at the top of your file, using examples to guide the AI's output, and iterating your prompts for specificity to ensure the AI understands exactly what data or functionality you need.

Yes, GitHub Copilot can sometimes produce errors or reference outdated APIs. It's crucial to review and verify the code generated by Copilot just as you would with code written by a human peer programmer.

The main difference is that while GPT-3 is a generative pre-trained transformer used broadly for generating human-like text, OpenAI Codex is specifically fine-tuned for programming, translating natural language into code to assist developers in their coding tasks.

GitHub Copilot draws context from the comments and code you write, using this information to suggest relevant lines of code or entire functions. Under the hood, it uses natural language processing to infer your intent from that context.
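For instance, here is a hedged sketch of the kind of context-driven suggestion described later in the talk's translation example; the strings and object shape are my own illustration, not the code from the original demo.

```js
// UI strings translated from English; the language codes alone (fr, jp, es)
// give Copilot enough context to suggest the remaining locales.
const translations = {
  en: { answer: "answer", question: "question", date: "date" },
  fr: { answer: "réponse", question: "question", date: "date" },
  jp: { answer: "回答", question: "質問", date: "日付" },
  es: { answer: "respuesta", question: "pregunta", date: "fecha" },
};
```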

GitHub Copilot speeds up the development process, helps in brainstorming and retaining focus, and can jog your memory on coding syntax or logic. It also supports voice commands, making coding more accessible for people with certain disabilities.

Rizel Scarlett
9 min
06 Jun, 2023

Video Summary and Transcription
Today we're going to build a React application with just our voice using GitHub Copilot, an AI peer programmer powered by OpenAI Codex. It's important to be specific in your comments to get accurate suggestions from Copilot. Prompt engineering tips can be used to create different applications, such as a basic markdown editor and a simple to-do app. The application was tested successfully by adding and deleting to-do items using voice commands.
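For reference, a minimal sketch of the kind of to-do component the summary describes, assuming a plain React setup; the component and handler names are illustrative rather than the exact code dictated in the talk.

```jsx
import { useState } from "react";

// Simple to-do list: add items from a text input, delete items by id
export default function TodoApp() {
  const [todos, setTodos] = useState([]);
  const [text, setText] = useState("");

  const addTodo = () => {
    if (!text.trim()) return;
    setTodos([...todos, { id: Date.now(), text }]);
    setText("");
  };

  const deleteTodo = (id) => setTodos(todos.filter((todo) => todo.id !== id));

  return (
    <div>
      <input value={text} onChange={(e) => setText(e.target.value)} />
      <button onClick={addTodo}>Add</button>
      <ul>
        {todos.map((todo) => (
          <li key={todo.id}>
            {todo.text}{" "}
            <button onClick={() => deleteTodo(todo.id)}>Delete</button>
          </li>
        ))}
      </ul>
    </div>
  );
}
```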

1. Introduction to GitHub Copilot

Short description:

Today we're going to build a React application with just our voice. GitHub Copilot is an AI peer programmer that helps you code faster. It draws context from your comments and code to suggest lines and functions. It's powered by OpenAI Codex, a machine learning model that translates natural language into code. Prompt engineering is the practice of using prompts to get the desired output.

Hey, folks, today we're going to build a React application with just our voice. Super excited, but I'm just going to give you a little bit of background before we dive into that. We're going to talk about GitHub Copilot, prompt engineering tips with GitHub Copilot, and then we're going to use our voices to build a React app.

You might be wondering, who am I? Who's this awesome person who could build apps with their voice? My name is Rizel Scarlett, I'm a developer advocate at GitHub. I have a background in software engineering and teaching underrepresented groups how to code. I'm also addicted to social media, so if you want to connect with me, my handle is @BlackGirlBytes on most platforms, including Bluesky and Mastodon, so find me on there.

Okay. GitHub Copilot, what is it? It's an AI peer programmer that helps you code faster with less work. And so it feels like magic to me, just saying. It feels like it can read my mind, but to be honest, it can't do either of those things, and it doesn't always spit out perfectly well-written, up-to-date code. Maybe sometimes it's referencing an older API, which is why I always encourage people to go back and review the code that is generated, just as you would with a human peer programmer.

Okay, so what's happening if it's not magic? Under the hood, GitHub Copilot is drawing context from your comments and your code and suggesting individual lines and whole functions instantly. It's powered by OpenAI Codex. And you may be familiar with OpenAI because of ChatGPT or DALL-E, which is the AI image generation tool, but Codex is another thing they created. It's a machine learning model that translates natural language into code. Some background on Codex is that it's a descendant of GPT-3, which stands for Generative Pre-trained Transformer 3, which uses deep learning to produce human-like text. A lot of our favorite apps, like Duolingo, use GPT-3 for grammar correction. And the biggest difference between GPT-3 and Codex is that Codex has been fine-tuned for programming. So this is what's powering GitHub Copilot.

Here's an example of GitHub Copilot in action. We have Melmykdev from Twitter, who is using GitHub Copilot to translate the strings 'answer', 'question', and 'date'. And all they do is provide context through the language code, so FR for French, JP for Japanese, ES for Spanish, and GitHub Copilot takes that context and translates it into the right words.

So, maybe you're like, I've used GitHub Copilot and it's not working for me, it's not doing it. Let me introduce you to the concept of prompt engineering. That's the practice of using prompts to get the output that you want. Here are my top three tips for prompt engineering. First, give it a high-level task description. At the top of the file, describe the app's purpose so that it knows what it's doing. This is especially helpful if you're starting from a clean slate with no code in the file, because GitHub Copilot has no context.
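To make that first tip concrete, here is a minimal sketch, assuming a markdown editor like the one built later in the session; the file header comment and the helper function are illustrative, not the exact code from the talk.

```js
// Markdown editor: a small React app with a textarea for markdown input
// and a preview pane that shows the rendered HTML.
// Example: "**bold**" should render as "<strong>bold</strong>".

// With that description at the top of an empty file, a short comment like the
// one below is often enough for Copilot to suggest a working implementation.

// Convert a markdown string with **bold** markers into HTML
function renderBold(markdown) {
  return markdown.replace(/\*\*(.+?)\*\*/g, "<strong>$1</strong>");
}
```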

2. Providing Examples and Iterating

Short description:

In machine learning models, there's a concept called few-shot learning where you provide examples to refine the model's output. It's important to be specific in your comments to get accurate suggestions from GitHub Copilot.

So, you have to give it some context, like a comment. Also, provide examples, right? Machine learning models learn from examples. So, there's this concept of few-shot learning where you feed your model examples and it gets a more refined idea of what it should be outputting based on certain inputs. And then also, iterate for specificity. Maybe you wrote a really vague comment that says get data, but GitHub Copilot's like, get what data? Tell it that you want to get the user data or the user IDs that belong to this specific individual, or whatever. Get more specific. Go ahead and delete the comment that you wrote, delete the suggestion, and iterate on it.
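A small sketch of that iteration, using a hypothetical comment and function rather than the exact prompts from the talk: the vague comment leaves Copilot guessing, while the specific one names the data and the filter you want.

```js
// Too vague — "get data" doesn't tell Copilot which data you mean:
// get data

// More specific — name the records and who they belong to:
// Get the user data for the users that belong to a specific team
function getUsersForTeam(users, teamId) {
  return users.filter((user) => user.teamId === teamId);
}
```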

Check out more articles and videos

We constantly curate articles and videos that might spark your interest, skill you up, or help you build a stellar career.

A Framework for Managing Technical Debt
TechLead Conference 2023
35 min
A Framework for Managing Technical Debt
Top Content
Today's Talk discusses the importance of managing technical debt through refactoring practices, prioritization, and planning. Successful refactoring requires establishing guidelines, maintaining an inventory, and implementing a process. Celebrating success and ensuring resilience are key to building a strong refactoring culture. Visibility, support, and transparent communication are crucial for addressing technical debt effectively. The team's responsibilities, operating style, and availability should be transparent to product managers.
Debugging JS
React Summit 2023
24 min
Debugging JS
Top Content
Watch video: Debugging JS
Debugging JavaScript is a crucial skill that is often overlooked in the industry. It is important to understand the problem, reproduce the issue, and identify the root cause. Having a variety of debugging tools and techniques, such as console methods and graphical debuggers, is beneficial. Replay is a time-traveling debugger for JavaScript that allows users to record and inspect bugs. It works with Redux, plain React, and even minified code with the help of source maps.
Building a Voice-Enabled AI Assistant With Javascript
JSNation 2023
21 min
Building a Voice-Enabled AI Assistant With Javascript
Top Content
This Talk discusses building a voice-activated AI assistant using web APIs and JavaScript. It covers using the Web Speech API for speech recognition and the speech synthesis API for text to speech. The speaker demonstrates how to communicate with the Open AI API and handle the response. The Talk also explores enabling speech recognition and addressing the user. The speaker concludes by mentioning the possibility of creating a product out of the project and using Tauri for native desktop-like experiences.
A Practical Guide for Migrating to Server Components
React Advanced 2023
28 min
A Practical Guide for Migrating to Server Components
Top Content
Watch video: A Practical Guide for Migrating to Server Components
React query version five is live and we'll be discussing the migration process to server components using Next.js and React Query. The process involves planning, preparing, and setting up server components, migrating pages, adding layouts, and moving components to the server. We'll also explore the benefits of server components such as reducing JavaScript shipping, enabling powerful caching, and leveraging the features of the app router. Additionally, we'll cover topics like handling authentication, rendering in server components, and the impact on server load and costs.
Power Fixing React Performance Woes
React Advanced 2023
22 min
Power Fixing React Performance Woes
Top Content
Watch video: Power Fixing React Performance Woes
This Talk discusses various strategies to improve React performance, including lazy loading iframes, analyzing and optimizing bundles, fixing barrel exports and tree shaking, removing dead code, and caching expensive computations. The speaker shares their experience in identifying and addressing performance issues in a real-world application. They also highlight the importance of regularly auditing webpack and bundle analyzers, using tools like Knip to find unused code, and contributing improvements to open source libraries.
Monolith to Micro-Frontends
React Advanced 2022
22 min
Monolith to Micro-Frontends
Top Content
Microfrontends are considered as a solution to the problems of exponential growth, code duplication, and unclear ownership in older applications. Transitioning from a monolith to microfrontends involves decoupling the system and exploring options like a modular monolith. Microfrontends enable independent deployments and runtime composition, but there is a discussion about the alternative of keeping an integrated application composed at runtime. Choosing a composition model and a router are crucial decisions in the technical plan. The Strangler pattern and the reverse Strangler pattern are used to gradually replace parts of the monolith with the new application.

Workshops on related topic

AI on Demand: Serverless AI
DevOps.js Conf 2024
163 min
AI on Demand: Serverless AI
Top Content
Featured WorkshopFree
Nathan Disidore
In this workshop, we discuss the merits of serverless architecture and how it can be applied to the AI space. We'll explore options around building serverless RAG applications for a more lambda-esque approach to AI. Next, we'll get hands on and build a sample CRUD app that allows you to store information and query it using an LLM with Workers AI, Vectorize, D1, and Cloudflare Workers.
AI for React Developers
React Advanced 2024
142 min
AI for React Developers
Featured Workshop
Eve Porcello
Knowledge of AI tooling is critical for future-proofing the careers of React developers, and the Vercel suite of AI tools is an approachable on-ramp. In this course, we’ll take a closer look at the Vercel AI SDK and how this can help React developers build streaming interfaces with JavaScript and Next.js. We’ll also incorporate additional 3rd party APIs to build and deploy a music visualization app.
Topics:
- Creating a React Project with Next.js
- Choosing an LLM
- Customizing Streaming Interfaces
- Building Routes
- Creating and Generating Components
- Using Hooks (useChat, useCompletion, useActions, etc.)
Build Modern Applications Using GraphQL and Javascript
Node Congress 2024
152 min
Build Modern Applications Using GraphQL and Javascript
Featured Workshop
Emanuel Scirlet
Miguel Henriques
2 authors
Come and learn how you can supercharge your modern and secure applications using GraphQL and JavaScript. In this workshop we will build a GraphQL API and demonstrate the benefits of the query language for APIs and the use cases that are a fit for it. Basic JavaScript knowledge required.
Leveraging LLMs to Build Intuitive AI Experiences With JavaScript
JSNation 2024
108 min
Leveraging LLMs to Build Intuitive AI Experiences With JavaScript
Featured Workshop
Roy Derks
Shivay Lamba
2 authors
Today every developer is using LLMs in different forms and shapes, from ChatGPT to code assistants like GitHub Copilot. Following this, lots of products have introduced embedded AI capabilities, and in this workshop we will make LLMs understandable for web developers. And we'll get into coding your own AI-driven application. No prior experience in working with LLMs or machine learning is needed. Instead, we'll use web technologies such as JavaScript and React, which you already know and love, while also learning about some new libraries like OpenAI and Transformers.js.
Llms Workshop: What They Are and How to Leverage Them
React Summit 2024
66 min
Llms Workshop: What They Are and How to Leverage Them
Featured Workshop
Nathan Marrs
Haris Rozajac
2 authors
Join Nathan in this hands-on session where you will first learn at a high level what large language models (LLMs) are and how they work. Then dive into an interactive coding exercise where you will implement LLM functionality into a basic example application. During this exercise you will get a feel for key skills for working with LLMs in your own applications such as prompt engineering and exposure to OpenAI's API.
After this session you will have insights around what LLMs are and how they can practically be used to improve your own applications.
Table of contents:
- Interactive demo implementing basic LLM powered features in a demo app
- Discuss how to decide where to leverage LLMs in a product
- Lessons learned around integrating with OpenAI / overview of OpenAI API
- Best practices for prompt engineering
- Common challenges specific to React (state management :D / good UX practices)
Working With OpenAI and Prompt Engineering for React Developers
React Advanced 2023
98 min
Working With OpenAI and Prompt Engineering for React Developers
Top Content
Workshop
Richard Moss
In this workshop we'll take a tour of applied AI from the perspective of front end developers, zooming in on the emerging best practices when it comes to working with LLMs to build great products. This workshop is based on learnings from working with the OpenAI API from its debut last November to build out a working MVP which became PowerModeAI (A customer facing ideation and slide creation tool).
In the workshop there'll be a mix of presentation and hands-on exercises covering topics including:
- GPT fundamentals
- Pitfalls of LLMs
- Prompt engineering best practices and techniques
- Using the playground effectively
- Installing and configuring the OpenAI SDK
- Approaches to working with the API and prompt management
- Implementing the API to build an AI-powered customer-facing application
- Fine tuning and embeddings
- Emerging best practices on LLMOps