For now, the web can use a couple of different backends depending on the browser and the operating system. The most performant option would be the WebGPU backend, but that requires the WebGPU API, which is currently only available in Chrome Canary behind a flag. So in my example I am using WebGL, which is the most performant backend that is available in most browsers right now.
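For context, picking a backend in TensorFlow.js looks roughly like this; a minimal sketch assuming the @tensorflow/tfjs and @tensorflow/tfjs-backend-webgpu packages, with WebGL as the fallback:

```ts
import * as tf from '@tensorflow/tfjs';
// The WebGPU backend ships as a separate package and registers itself on import.
import '@tensorflow/tfjs-backend-webgpu';

async function pickBackend(): Promise<string> {
  // navigator.gpu is only defined in browsers that expose the WebGPU API.
  if ('gpu' in navigator && (await tf.setBackend('webgpu'))) {
    await tf.ready();
    return tf.getBackend(); // 'webgpu'
  }
  // Fall back to WebGL, which is available in most browsers today.
  await tf.setBackend('webgl');
  await tf.ready();
  return tf.getBackend(); // 'webgl'
}

pickBackend().then(backend => console.log(`Using TF.js backend: ${backend}`));
```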
Now, we have probably all seen basic examples of image recognition, like face landmark detection in this case, where we can give an image as input and receive the positions of the key points of the face. Images work quite well with machine learning because, in the end, a machine learning model expects some numerical input and returns a numerical output, and images are nothing more than numerical RGB values on a 2D grid.
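As an illustration of that flow (pixels in, numbers out), a face landmark detection call with the @tensorflow-models/face-landmarks-detection package looks roughly like this; a sketch only, and the exact API surface depends on the package version:

```ts
import * as faceLandmarksDetection from '@tensorflow-models/face-landmarks-detection';

async function detectLandmarks(image: HTMLImageElement) {
  // Create a detector backed by the MediaPipe FaceMesh model, running on TF.js.
  const detector = await faceLandmarksDetection.createDetector(
    faceLandmarksDetection.SupportedModels.MediaPipeFaceMesh,
    { runtime: 'tfjs' },
  );

  // The input is just pixels (RGB values on a 2D grid)...
  const faces = await detector.estimateFaces(image);

  // ...and the output is numbers again: x/y(/z) coordinates of the key points.
  for (const keypoint of faces[0]?.keypoints ?? []) {
    console.log(keypoint.name, keypoint.x, keypoint.y);
  }
}
```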
Now, in my case, I want to recognize certain words, and words are not images, right? Except when they are. Each piece of audio can be converted into a spectrogram, so let's imagine we have 100 recordings of me saying the word "to Sandler". We now have 100 spectrogram images of these two-second clips that we can compare with the spectrogram of my talk. Of course, a spectrogram of the whole talk, which keeps growing over time, is hard to compare with a two-second clip, but we can split the whole track into two-second parts and compare each of those windows with our model. The problem is that we will miss quite a lot of the words, because we can't be sure that a split actually cuts out a word as a whole. The solution is to add an overlap. In this case, we have an overlap of 0.5, which means we get more windows per second to analyze. The bigger the overlap, the more windows there are to analyze, and the more accurate the detection becomes. In my example, I even needed an overlap of 0.95 to get a meaningful result.
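To make the windowing concrete, here is a small, self-contained sketch (my own illustration, not code from the talk) that computes the start times of two-second analysis windows for a given overlap factor:

```ts
/**
 * Start times (in seconds) of analysis windows over a clip of `totalSeconds`,
 * where consecutive windows of length `windowSeconds` overlap by `overlap` (0..1).
 */
function windowOffsets(totalSeconds: number, windowSeconds: number, overlap: number): number[] {
  const step = windowSeconds * (1 - overlap); // 2 s window, overlap 0.5 -> a new window every 1 s
  const offsets: number[] = [];
  for (let i = 0; ; i++) {
    const start = i * step;
    if (start + windowSeconds > totalSeconds + 1e-9) break; // epsilon for float rounding
    offsets.push(start);
  }
  return offsets;
}

// For a 60-second clip and 2-second windows:
console.log(windowOffsets(60, 2, 0).length);    // no overlap: 30 windows, words may be cut in half
console.log(windowOffsets(60, 2, 0.5).length);  // overlap 0.5: 59 windows
console.log(windowOffsets(60, 2, 0.95).length); // overlap 0.95: 581 windows, rarely misses a word
```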
Now, similar to the face landmark detection, TensorFlow also offers a speech command detection model, and just like before, we can import it, create a recognizer, and start listening. The default model listens for a couple of predefined keywords, but of course my Swiss-German words are not in that list, so I need to train my own model.

With Teachable Machine, Google published a web app that lets you train your own image or audio model based on your own input data. On the right you see my training data: around one hour of me just talking as the background class, and then 50 and 70 examples of the two keywords I want to detect. With Teachable Machine, I can train on that data in the browser, and it generates the model for me. All I need to do is pass the created model and its metadata to the create function, and it will use the new model to detect my custom keywords (see the sketch below).

So my slides are running in the browser, and I can now just activate the listener. That might take some time. Now, every time I say words like MandEffect, it will trigger the buzzer. And it actually did work quite well at my latest Swiss German talk.

I really hope that I was able to inspire you with this short insight into machine learning in the browser: we can use models and train new models, all directly on the device, in the browser. For more and deeper knowledge, I can also recommend the free course "Machine Learning for Web Developers" by Jason Mayes from Google. And with this, I would like to thank you for your interest, and I wish you a nice rest of the conference.
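For reference, the recognizer setup described above looks roughly like this; a minimal sketch assuming the @tensorflow-models/speech-commands package, with hypothetical URLs standing in for the model and metadata files exported from Teachable Machine:

```ts
import * as speechCommands from '@tensorflow-models/speech-commands';

async function startKeywordListener() {
  // 'BROWSER_FFT' uses the browser's native WebAudio FFT. The model and
  // metadata URLs below are hypothetical placeholders for the files
  // exported from Teachable Machine.
  const recognizer = speechCommands.create(
    'BROWSER_FFT',
    undefined,
    '/models/swiss-german/model.json',
    '/models/swiss-german/metadata.json',
  );
  await recognizer.ensureModelLoaded();

  const labels = recognizer.wordLabels(); // e.g. ['Background Noise', 'keyword1', 'keyword2']

  await recognizer.listen(async result => {
    const scores = result.scores as Float32Array;
    const best = scores.indexOf(Math.max(...scores));
    console.log(`Detected "${labels[best]}" (${(scores[best] * 100).toFixed(1)} %)`);
    // This is where the talk triggers the buzzer for the two custom keywords.
  }, {
    probabilityThreshold: 0.9,
    overlapFactor: 0.95, // the analysis-window overlap discussed earlier
  });
}

startKeywordListener();
```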