You can do a bunch of different things, like detecting faces or even driving certain animations based on those detections, with the help of the Shape Detection API. The Shape Detection API, I would say, is still in an experimental phase.
Now, if you're curious to know how these browser APIs, or Web APIs, come into being: if you talk about your main web browsers, the Chrome platform team is what's implementing a lot of these different browser APIs, and a lot of them are currently in an experimental phase. So when you're using these different Web APIs, you'll find that some of them are implemented and enabled by default, but some of them might not be supported yet. The ones that aren't will most likely be experimental, and you'll have to go to your Chrome flags in order to enable them. For instance, if you want to enable the Face Detection API, you'll have to enable the Experimental Web Platform features flag in your Chrome flags in order to make it available.
So the way these Web APIs come to fruition is that the browser teams will implement them, and they'll be in an experimental phase. Once they reach a point where they're stable enough, they'll be released as stable features, and then they'll be enabled in these browsers by default.
Now, let's quickly take a look at how you can actually implement these inside of your React applications. The first basic example I like to quote is battery status. The Battery Status API will basically give you the battery status of your device, and you can render that. What you see over here is the navigator: with most of the Web APIs, you're using the navigator interface, which represents the user agent, in this case whatever browser you're using. And in this case, as you can see in the code, I'm just using navigator.getBattery(). I'm not installing some third-party NPM module in order to do that; I'm able to just use a native JavaScript function that I get from the browser API, and I'm rendering the charging status and what my battery percentage is right now.
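As a rough sketch of that pattern: navigator.getBattery() is the real Battery Status API call, but the formatting helper and the feature-detection guard here are my own illustrative additions; in a React component you'd typically put the call inside a useEffect and write the values into state rather than logging them.

```javascript
// Illustrative helper (not part of the API): the battery level is
// reported as a fraction between 0 and 1, so render it as a percent.
function formatBatteryLevel(level) {
  return `${Math.round(level * 100)}%`;
}

// Feature-detect before calling: the Battery Status API is not
// available in all browsers, and not at all outside a browser.
if (typeof navigator !== 'undefined' && 'getBattery' in navigator) {
  navigator.getBattery().then((battery) => {
    // In React, these would be setState calls inside a useEffect.
    console.log(`Charging: ${battery.charging}`);
    console.log(`Level: ${formatBatteryLevel(battery.level)}`);
  });
}
```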
Another example is one where you can actually use some NPM modules as well: speech recognition. You get a browser API for doing speech recognition live inside of your browser. The react-speech-recognition library essentially provides a custom React hook over the Web Speech API, so instead of using the Web Speech API directly, you can just install this NPM module and get that capability out of the box. And I'll quickly show a demonstration before we move further. The first demonstration I'd like to show is our App.js. Here you'll see I'm actually using a bunch of different Web APIs: first, I'm just setting some state for my battery level and for my location, so I'm using the Battery API; I'm using the coordinates, that's the Geolocation API; I'm using a simple fetch request to show how you can fetch data; and of course I'll be using react-speech-recognition, which is the NPM module that provides the React hook. Over here, very similar to the code sample I showed in the slides, you just use the navigator object and then any function that's supported. So in this case, I'm running one to fetch your battery status; similarly, I have one for the Geolocation API, then the Fetch API to fetch some data, and then of course the final one is the transcript, where I'm using speech recognition to do a live transcript. So I'll quickly go ahead and run this, and this is the demo; I'll quickly refresh.
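The geolocation and fetch patterns from that demo look roughly like this. This is a sketch, not the demo's actual code: the coordinate-formatting helper and the URL parameter are my own placeholders, and in the React app these calls would sit inside useEffect hooks that write into component state.

```javascript
// Illustrative helper (not from the demo): render coordinates
// to four decimal places for display.
function formatCoords(latitude, longitude) {
  return `${latitude.toFixed(4)}, ${longitude.toFixed(4)}`;
}

// Geolocation API: prompts the user for permission, then reports
// the current position. Guarded so it is a no-op outside a browser.
if (typeof navigator !== 'undefined' && 'geolocation' in navigator) {
  navigator.geolocation.getCurrentPosition((position) => {
    const { latitude, longitude } = position.coords;
    console.log(`Location: ${formatCoords(latitude, longitude)}`);
  });
}

// Fetch API: a plain request for JSON data. The url argument is a
// placeholder; in React you would store the parsed result in state.
async function loadData(url) {
  const response = await fetch(url);
  if (!response.ok) throw new Error(`Request failed: ${response.status}`);
  return response.json();
}
```

Both calls are native browser functions reached through globals (navigator and fetch), which is the point of the demo: no NPM install is needed for these.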