Beyond JavaScript: Maximizing React With Web APIs


This lightning talk explores how you can enrich your React projects with Web APIs. From handling data to interacting with browser features, attendees will gain insights into leveraging Web APIs for a richer, more efficient React development experience, highlighting the out-of-the-box capabilities that browsers offer.

This talk has been presented at React Summit US 2023, check out the latest edition of this React Conference.


FAQ

Web APIs are interfaces that allow interaction between the web browser and the underlying operating system or hardware. In React applications, these APIs can be used to access browser capabilities like HTTP requests, device sensors, local storage, and audio/video processing directly without relying on external libraries or NPM modules.

The Fetch API is a web API that allows you to make HTTP requests. This can be used to retrieve data from a server or submit data to a server, making it fundamental for web-based applications to interact with APIs and perform network operations.
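As a minimal sketch of the Fetch API in practice, the helper below wraps a request and JSON parsing with basic error handling; the endpoint URL in the usage comment is hypothetical.

```javascript
// A small helper around the Fetch API: request a URL and parse the JSON body.
async function getJSON(url) {
  const res = await fetch(url);
  if (!res.ok) {
    // Non-2xx responses do not reject the fetch promise, so check explicitly.
    throw new Error(`HTTP ${res.status}`);
  }
  return res.json();
}

// Usage (hypothetical endpoint):
// getJSON('https://api.example.com/users').then((users) => console.log(users));
```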

Yes, you can rely significantly on browser APIs to develop React applications. Browser APIs like Fetch, Device APIs, Storage APIs, and others provide powerful functions that can replace many external dependencies, potentially making applications lighter and faster.

Device APIs allow web browsers to access hardware-related features of a device, such as geolocation, motion sensors, and network status. These APIs enable web applications to utilize device-specific capabilities and provide a richer, more interactive user experience.
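For example, the callback-based Geolocation API can be wrapped in a promise so it composes with async/await; this is a sketch of one common pattern, not code from the talk.

```javascript
// Wrap navigator.geolocation.getCurrentPosition in a promise.
function getPosition(options) {
  return new Promise((resolve, reject) => {
    navigator.geolocation.getCurrentPosition(resolve, reject, options);
  });
}

// Usage:
// getPosition().then((pos) => console.log(pos.coords.latitude, pos.coords.longitude));
```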

Storage APIs provide methods to store data locally within the user's browser. This allows web applications to save and retrieve data such as user preferences, app state, and offline data, enhancing the performance and user experience by making the application more responsive and capable of working offline.
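A minimal sketch of persisting user preferences with the Web Storage API (`localStorage`); the `'prefs'` key is an arbitrary example name.

```javascript
// Save a preferences object as a JSON string in localStorage.
function savePrefs(prefs) {
  localStorage.setItem('prefs', JSON.stringify(prefs));
}

// Load preferences back, falling back to an empty object if none are stored.
function loadPrefs() {
  const raw = localStorage.getItem('prefs');
  return raw ? JSON.parse(raw) : {};
}
```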

Audio and Video APIs enable web applications to process and manipulate audio and video files directly in the browser. This includes tasks like playing, recording, and editing media files, which are crucial for applications that require media content interaction.
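As one small illustration of in-browser audio processing, this sketch generates a short tone with the Web Audio API's oscillator node.

```javascript
// Play a sine tone at the given frequency for durationMs milliseconds.
function playTone(frequency = 440, durationMs = 500) {
  const AudioCtx = window.AudioContext || window.webkitAudioContext;
  const ctx = new AudioCtx();
  const osc = ctx.createOscillator();
  osc.frequency.value = frequency;
  osc.connect(ctx.destination);
  osc.start();
  setTimeout(() => {
    osc.stop();
    ctx.close();
  }, durationMs);
  return osc;
}
```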

In web API development, the experimental phase is a stage where new APIs are implemented and tested in browsers but are not yet finalized. During this phase, developers can test these features by enabling specific flags in their browsers, such as Chrome flags for experimental web platform features.

To enable experimental web APIs in Chrome, you need to access the Chrome flags by navigating to 'chrome://flags' in your browser. Here, you can enable flags related to experimental web platform features, allowing you to test and use these APIs before they become widely available.

Shivay Lamba
10 min
13 Nov, 2023

Video Summary and Transcription
This Talk explores how to use browser and web APIs to enhance React applications, covering categories such as Fetch API, Device APIs, Storage APIs, Audio and Video APIs, and Shape API. It explains how web APIs are implemented in React applications and suggests using native JavaScript functions or NPM modules like React Speech Recognition. The Talk demonstrates the use of Battery and Face Detection APIs in a React application, including features like getting battery percentage, live coordinates, and speech-to-text functionality. It also mentions the possibility of creating augmented reality face filters with the face detection API. The conclusion highlights the availability of resources on Mozilla docs and a GitHub repository for further learning and code samples.

1. Introduction to Browser and Web APIs

Short description:

In this part, we will explore how to leverage browser APIs and web APIs to enhance React applications. These APIs can handle tasks ranging from machine learning to GPU utilization. We will also discuss different categories of web APIs, such as the Fetch API, Device APIs, Storage APIs, Audio and Video APIs, and the Shape Detection API.

Thank you for joining us. This is my first time at a GitNation conference. I'm super excited. Thank you!

Of course, this happens to us on a daily basis. We love and hate JavaScript because it's so popular, it's so good, you can do a bunch of things with it. But at the same time, you want your applications to ship as little JavaScript as possible, or at least as few NPM modules as possible, to keep them lighter. Because, of course, when you have a lot of NPM modules, your applications do become a lot heavier. They can become pretty big in size, like hundreds of megabytes.

So, the idea I want to portray here is: can you rely on the web browser on which you are running your React applications, instead of having to use NPM modules or extra JavaScript? The simple answer is yes. We have all these browser APIs and web APIs that allow us to very easily do anything from machine learning tasks like face detection, all the way to standard things like leveraging your GPU for 3D rendering, where the browser can fetch the GPU information and use it with something like WebGPU. All of these can be handled with the help of a suite of different browser APIs or web APIs that you can leverage for your own use case, and we'll explore some of them and how you can integrate them inside a React application.

So, there are a bunch of different categories of these web APIs. A lot of you might be aware of these, but for the folks who are not, I'll quickly give a rundown of the broad categories that I've defined. The first one is the Fetch API, which allows you to make HTTP requests. The next one is the Device APIs. Device APIs essentially allow your browser to replicate a lot of the capabilities that you'll typically have in mobile devices, whether you're using a web browser on your laptop or on your mobile phone. For example, the Geolocation API allows you to fetch your coordinates and render them inside your application. Or, of course, you can draw graphics with the help of the Canvas API. You also get a lot of Storage APIs that allow you to do things like local storage, or store the state of a particular task that you're implementing inside your browser; all of those are possible with the help of the Storage APIs. And, of course, you have the Audio and Video APIs that allow you to do a bunch of things with audio and video processing. And then the Shape Detection API, which is, I think, personally my favorite.

2. Implementing Web APIs in React Applications

Short description:

You can do a bunch of different things like being able to detect faces or even render certain animations with the Shape Detection API. The browser teams implement web APIs, which are initially in an experimental phase and later released as stable features. In React applications, you can implement web APIs like battery status and speech recognition using native JavaScript functions or NPM modules like React Speech Recognition.

You can do a bunch of different things, like detecting faces or even rendering certain animations, with the help of the Shape Detection API. The Shape Detection API, I would say, is still in an experimental phase.

Now, if you're curious to know how these browser APIs or Web APIs come into being: if we talk about the main web browsers, the Chrome platform team is implementing a lot of these different browser APIs. A lot of them are currently experimental. So, when you're using these different Web APIs, you'll find that some of them might be enabled by default, but some of them might not be supported. If you want to use those, they will most likely be in an experimental phase, and you'll have to go to your Chrome flags in order to enable them. For instance, if you want to enable the face detection API, you'll have to enable the experimental web platform features flag in your Chrome flags in order to make it available.
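As a sketch of what using that experimental API looks like (assuming the flag above is enabled; `FaceDetector` is part of the Shape Detection API and is not available in all browsers), a feature-detecting wrapper might look like this:

```javascript
// Detect faces in an image element and return their bounding boxes.
// FaceDetector is experimental; always feature-detect before using it.
async function detectFaces(image) {
  if (!('FaceDetector' in window)) {
    throw new Error('FaceDetector is not supported in this browser');
  }
  const detector = new window.FaceDetector({ fastMode: true });
  const faces = await detector.detect(image);
  return faces.map((face) => face.boundingBox);
}
```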

So the way these web APIs come into fruition is that the browser teams will implement them, and they will be in an experimental phase. Once they reach a point where they are stable enough, they will be released as stable features and implemented in these browsers by default.

Now, let's quickly take a look at how you can actually implement these inside your React applications. The first basic example that I like to quote is the battery status. The battery status example will give you the battery status of your device, and you can render that. What you see over here is the navigator. With most of the web APIs, you're using the navigator interface, which represents the user agent, in this case, whatever computer you're using. As you can see in the code, over here I'm just using navigator.getBattery. I'm not installing some third-party NPM module to do that; I'm able to just use a native JavaScript function that I get from the browser API, and I'm rendering the charging status and my current battery percentage.
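The exact code from the talk's slides isn't reproduced here, but a minimal sketch of the navigator.getBattery() call it describes could look like this:

```javascript
// Read charging status and battery percentage via the Battery Status API.
async function getBatteryInfo() {
  const battery = await navigator.getBattery();
  return {
    charging: battery.charging,
    // battery.level is a fraction between 0 and 1; convert to a percentage.
    percent: Math.round(battery.level * 100),
  };
}

// Usage:
// getBatteryInfo().then(({ charging, percent }) => console.log(charging, percent));
```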

Another example where you can actually use some NPM modules as well: speech recognition. You get a browser API for doing speech recognition live inside your browser. The React Speech Recognition library essentially provides a custom React hook around the Web Speech API, so instead of directly using the Web Speech API, you can just install this NPM module and get out-of-the-box capability. And I'll quickly show a demonstration before we move further. The first demonstration I'd like to show is with our app.js. Here you'll see I'm actually using a bunch of different web APIs. First, I'm setting some state for my battery level and for my location, so I'm using the Battery API and the coordinates from the Geolocation API; I'm using a simple fetch request to show how you can fetch data; and of course I'll be using React Speech Recognition, which is the NPM module that provides the React hook. Over here, very similar to the code sample in the slides, you just use the navigator object and then any function that's typically supported. In this case, I'm running one to fetch your battery status; similarly, I have one for the Geolocation API, then the Fetch API to fetch some data, and finally the transcript, where I'm using speech recognition to do a live transcript. So I'll quickly go ahead and run this, and this is a demo, I'll quickly refresh.
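For readers who would rather use the Web Speech API directly than the react-speech-recognition hook, a sketch of the underlying browser API might look like this (note that `SpeechRecognition` is still prefixed as `webkitSpeechRecognition` in some browsers):

```javascript
// Build a speech recognizer that calls onTranscript with the accumulated text.
function createTranscriber(onTranscript) {
  const SpeechRec = window.SpeechRecognition || window.webkitSpeechRecognition;
  const recognition = new SpeechRec();
  recognition.continuous = true;       // keep listening across pauses
  recognition.interimResults = true;   // surface partial results as they arrive
  recognition.onresult = (event) => {
    const text = Array.from(event.results)
      .map((result) => result[0].transcript)
      .join(' ');
    onTranscript(text);
  };
  return recognition;
}

// Usage: createTranscriber((text) => console.log(text)).start();
```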

Check out more articles and videos

We constantly curate articles and videos that might spark your interest, skill you up, or help you build a stellar career.

Install Nothing: App UIs With Native Browser APIs
JSNation 2024
31 min
This Talk introduces real demos using HTML, CSS, and JavaScript to showcase new or underutilized browser APIs, with ship scores provided for each API. The dialogue element allows for the creation of modals with minimal JavaScript and is supported by 96% of browsers. The web animations API is a simple and well-supported solution for creating animations, while the view transitions API offers easy animation workarounds without CSS. The scroll snap API allows for swipers without JavaScript, providing a smooth scrolling experience.
Pushing the Limits of Video Encoding in Browsers With WebCodecs
JSNation 2023
25 min
Top Content
This Talk explores the challenges and solutions in video encoding with web codecs. It discusses drawing and recording video on the web, capturing and encoding video frames, and introduces the WebCodecs API. The Talk also covers configuring the video encoder, understanding codecs and containers, and the video encoding process with muxing using ffmpeg. The speaker shares their experience in building a video editing tool on the browser and showcases Slantit, a tool for making product videos.
WebHID API: Control Everything via USB
JSNation 2022
23 min
Today's Talk introduces the webHID API, which allows developers to control real devices from the browser via USB. The HID interface, including keyboards, mice, and gamepads, is explored. The Talk covers device enumeration, input reports, feature reports, and output reports. The use of HID in the browser, especially in Chrome, is highlighted. Various demos showcase working with different devices, including a DualShock controller, microphone, gamepad, and Stream Deck drum pad. The Talk concludes with recommendations and resources for further exploration.
Visualising Front-End Performance Bottlenecks
React Summit 2020
34 min
React's web-based tools allow for independent learning. DAZN, a sports streaming service, faces challenges with low memory and CPU targets. Measuring, analyzing, and fixing performance issues is crucial. Virtualization improves rendering efficiency and performance. The application is now much faster with significantly less jank.
Automate the Browser With Workers Browser Rendering API
JSNation 2024
20 min
The Talk discusses browser automation using the Worker's Browser Rendering API, which allows tasks like navigating websites, taking screenshots, and creating PDFs. Cloudflare integrated Puppeteer with their workers to automate browser tasks, and their browser rendering API combines remote browser isolation with Puppeteer. Use cases for the API include taking screenshots, generating PDFs, automating web applications, and gathering performance metrics. The Talk also covers extending sessions and performance metrics using Durable Objects. Thank you for attending!
MIDI in the Browser... Let's Rock the Web!
JSNation 2022
28 min
MIDI is a versatile communication protocol that extends beyond music and opens up exciting possibilities. The Web MIDI API allows remote access to synths and sound modules from web browsers, enabling various projects like music education systems and web audio-based instruments. Developers can connect and use MIDI devices easily, and the Web MIDI API provides raw MIDI messages without semantics. The WebMidi.js library simplifies working with the Web MIDI API and offers a user-friendly interface for musicians and web developers. MIDI on the web has generated significant interest, with potential for commercial growth and endless possibilities for web developers.

Workshops on related topic

Writing Universal Modules for Deno, Node and the Browser
Node Congress 2022
57 min
Workshop
Luca Casonato
This workshop will walk you through writing a module in TypeScript that can be consumed by users of Deno, Node, and the browser. I will explain how to set up formatting, linting, and testing in Deno, and then how to publish your module to deno.land/x and npm. We'll start out with a quick introduction to what Deno is.