Video Summary and Transcription
The video covers the use of TensorFlow.js for machine learning in the browser and beyond, emphasizing its versatility in various environments such as web browsers, servers, and mobile devices. TensorFlow.js allows developers to run, retrain, or write machine learning models directly in JavaScript, providing benefits like client-side execution and privacy preservation. Pre-trained models in TensorFlow.js, such as object detection, body segmentation, and face landmark detection, enable rapid prototyping and creative applications. The video also highlights the use of transfer learning to adapt pre-trained models to new tasks with custom data. JavaScript's interactivity and rich presentation capabilities make it ideal for integrating machine learning into web applications, offering performance benefits through hardware acceleration like GPUs. The talk also mentions the potential of TensorFlow.js to enhance various industries, including fashion and farming, by optimizing processes and improving user experiences. The video encourages developers to explore the TensorFlow.js ecosystem, including tools like Teachable Machine and Cloud AutoML, and to engage with the community for further learning and collaboration.
1. Introduction to TensorFlow.js
This section introduces machine learning in the browser and beyond. JavaScript is one of the only languages that can run across the web browser, server, desktop, mobile, and IoT without additional plugins, and TensorFlow.js lets you run, retrain, or write machine learning models in any of those environments. The easiest way to get started is with the pre-trained models, which include object detection, body segmentation, pose estimation, and face landmark detection, all of which can run live in the browser, even against a webcam.
Hello, everyone. I'm going to be talking to you about machine learning in the browser and beyond. So let's get started.
Now, first off, why would you want to use machine learning in JavaScript? That's a great question. And if we look here, we can see that we can use machine learning anywhere JavaScript can run, and that's actually a lot of places. The web browser, server side, desktop, mobile, and even Internet of Things. And JavaScript is one of the only languages that can run across all of these environments without additional plugins. And that alone is very, very powerful.
And with TensorFlow.js, you can run, retrain via transfer learning, or write your own machine learning models completely from scratch if you want to, just like you might be doing in Python right now, but in JavaScript. And that means you can make anything you might dream up, from augmented reality to sound recognition to sentiment analysis and much, much more. It's really up to you what you create.
So the easiest way is to use our pre-trained models. These are super-easy-to-use JavaScript classes for many common use cases, like we can see on the current slide. And here, we can see things like object detection, body segmentation, pose estimation, face landmark detection, and much, much more. Even natural language processing is supported in the browser. So let's see some of these in action. Now, first up is object recognition. This is using COCO-SSD behind the scenes and is trained on 90 object classes. You can see this in action on the right-hand side, with the dogs being highlighted with their bounding boxes, and we even know that there are two dogs in this image, as both are returned to us. So let's see this in action live to see how it performs in the browser.
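For readers who want to try this themselves, here is a minimal sketch of the kind of code involved, assuming the published @tensorflow-models/coco-ssd package is loaded alongside TensorFlow.js; the image element id is hypothetical, and this is an illustration rather than the exact code from the demo.

```js
// A minimal sketch (assumes @tensorflow/tfjs and @tensorflow-models/coco-ssd
// have been loaded via <script> tags or installed from npm).
async function detectObjects() {
  // Load the pre-trained COCO-SSD model (trained on 90 object classes).
  const model = await cocoSsd.load();

  // Run detection on an image element; a <video> element works the same way.
  const img = document.getElementById('dogs');       // hypothetical element id
  const predictions = await model.detect(img);

  // Each prediction contains a class name, a confidence score, and a bounding box.
  for (const p of predictions) {
    console.log(p.class, p.score, p.bbox);           // bbox = [x, y, width, height]
  }
}

detectObjects();
```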
Okay, so here's a web page I created that's running this code live in Chrome, and if I click on any one of these images, I can now get object detection working for any objects it finds in those images, even if they're different object class types. But we can do better than this: we can actually enable the webcam and then do this live in real time, and you can see me talking to you right now, and you can see how it's classifying both myself and, sometimes, the bed in the background as I speak to you. And this is super cool, because this is running client-side in the browser, so none of this webcam imagery is being sent to a server for classification. Not only does that save costs, it also means my privacy is preserved, and that's a really big thing these days. And with TensorFlow.js, you get that for free. So next up, we've got face mesh.
2. Face Mesh, Body Segmentation, and Pose Estimation
This part introduces the face mesh feature, which can recognize 468 facial landmarks on the human face. It also showcases the real-time performance of face mesh in a web browser, with the ability to move the face around and see the mesh overlay. Additionally, the use of body segmentation and pose estimation is demonstrated, highlighting the creative potential of these models.
This is just 3 megabytes in size and can recognize 468 facial landmarks on the human face. And this is a really cool feature, and I'm going to show you how to use it on the slide right now.
Now, people are starting to use this for real-world use cases. L'Oreal, for example, has created an AR makeup try-on, which allows you to try on lipstick, in this case, in real time without even having to be physically present in the store. You should note that the lady on the right-hand side is not wearing any lipstick. We're using face mesh to understand where her lips are and how to apply the lipstick she wants to her face in real time. So this is super cool, and I'm sure we're going to see more stuff like this coming out in the future.
So let's see face mesh in action to see how it performs in the real world. Let's switch to the demo. Okay, so now you can see me talking to you with face mesh running in realtime in the web browser at the same time. On the left-hand side here, you can see the machine learning in action, and there is indeed a mesh of my face being overlaid on the web browser, and I can move my face around, and it's pretty robust. I can open and close my mouth and my eyes, and you can see that happening all in realtime. And we're getting a pretty solid 25 frames a second or so as I'm running on my GPU via WebGL.
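As a rough illustration of what the demo code looks like, here is a sketch using the published @tensorflow-models/facemesh package; the video element id and the drawing step are hypothetical, and this is a sketch rather than the demo's actual source.

```js
// A minimal sketch (assumes @tensorflow/tfjs and @tensorflow-models/facemesh
// are loaded via <script> tags or npm).
async function trackFace() {
  const video = document.getElementById('webcam');   // hypothetical <video> element
  const model = await facemesh.load();

  async function onFrame() {
    // Returns an array of detected faces; each exposes roughly 468 3D keypoints.
    const faces = await model.estimateFaces(video);
    if (faces.length > 0) {
      const keypoints = faces[0].scaledMesh;          // [[x, y, z], ...]
      // Draw the mesh overlay, feed a 3D point cloud renderer, etc.
      console.log('First landmark:', keypoints[0]);
    }
    requestAnimationFrame(onFrame);                   // keep running in real time
  }
  onFrame();
}

trackFace();
```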
Now, of course, I'm also streaming this live, so I'm using some of my GPU resources right now. I could get higher frames per second if I desired, but I can also switch as a developer to WASM, which is WebAssembly, for faster execution on the CPU, or I can fall back to the plain JavaScript CPU backend, which is the slowest form of execution. So there's a number of options on where you can deploy to. And because this is JavaScript, not only am I doing the machine learning there on the left-hand side, I'm also able to render this 3D point cloud of the results on the right-hand side using Three.js. And JavaScript from day one has been designed for the presentation and display of information, so you've got a lot of libraries, especially for 3D graphics, to be able to do this very, very easily. And you can see I can move it around and now inspect my face from different angles, which is being reconstructed in real time live in the browser.
Okay, back to the slides. So next up is body segmentation. This allows you to distinguish 24 body areas across multiple bodies, all in real time. You can see this in action on the slide, and on the right-hand side, we've got a number of lines that represent different parts of each body. Even better, we've got pose estimation going on at the same time, those light blue lines contained within each one of the bodies on the right-hand side there, which allow us to estimate where the human skeleton is. And that can enable really powerful demos, such as the ability to recognize when you're in a certain pose or making a certain gesture. And we've got a lot of great examples from yoga instructors and this kind of thing, so it's super cool to see the creative potential of this model. And in fact, with a bit of creativity, we can use things like BodyPix in a number of delightful ways. Here are just two examples I created in my spare time. On the left-hand side, you can see me becoming invisible as I walk around on my bed, and notice when I walk on the bed, the bed still deforms. Starting from a static image of the room, I'm able to calculate the background updates in real time and only remove the parts of the frame where my body actually is. And of course, it's not perfect, but it's a first step, and this was made in just a day.
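To give a flavor of the kind of code behind these demos, here is a minimal sketch assuming the published @tensorflow-models/body-pix package; the element id is hypothetical and the compositing step is only hinted at in the comments.

```js
// A minimal sketch (assumes @tensorflow/tfjs and @tensorflow-models/body-pix
// are loaded via <script> tags or npm).
async function segmentBody() {
  const video = document.getElementById('webcam');        // hypothetical <video> element
  const net = await bodyPix.load();

  // Per-pixel segmentation into body parts (roughly two dozen part ids);
  // use segmentPerson() instead if you only need a person/background mask.
  const partSegmentation = await net.segmentPersonParts(video);

  // partSegmentation.data holds one part id per pixel (-1 for background),
  // which is what lets you cut a person out of the frame or color each part.
  console.log(partSegmentation.width, partSegmentation.height);
}

segmentBody();
```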
3. JavaScript Prototyping and Transfer Learning
JavaScript allows for rapid prototyping, such as a clothing size estimator that saves time and money. Combining TensorFlow.js with other technologies opens up possibilities for creating Iron Man-like effects, bringing magazine images to life, and even teleportation. Another way to use TensorFlow.js is through transfer learning, which allows for retraining existing models with custom data. Teachable Machine is a great tool for prototyping with object detection, pose estimation, and sound detection.
So very, very cool stuff, and you can see here how JavaScript can allow you to prototype ideas very, very rapidly. On the right-hand side, I also made a clothing size estimator. Now, I don't know about you, but I'm really terrible at knowing how big I am when I'm trying to buy clothes once a year. And for different brands, I'm different sizes. In some brands, I'm a small, in other brands, I'm a medium. So I never know what to select at checkout. Now, here, I can, in under 15 seconds, get an estimate of my body size for the key measurements that they care about for that particular brand, and I can automatically select the correct size for me at checkout. And that saves me the time and money of having to return things that don't fit, and it solved a problem I had in my daily life.
Next up, what about giving yourself superpowers? One guy from our community combined our face mesh model with WebGL shaders to create this Iron Man-like effect. And here you can see lasers coming from his eyes and mouth in a really beautiful, realistic kind of way, which could be great for an activation with a movie company or something like this for a new movie release. Or what about if you combine with other technologies? Here, I'm using WebXR and WebGL and TensorFlow.js together to extract an image of a body from a magazine and then bring that body into the real world so they can inspect the fashion design in more detail. I've even seen this person go one step further and make the face animate and say sounds, which is really, really cool. But why stop there? We can go one step further still. By adding WebRTC to this, which stands for Web Real-Time Communication, I can even teleport myself. So here, I can segment my body using body pics from my room. I can transmit that segmentation over the internet and then reconstruct it in a real physical space using WebXR. And this allows me to speak to my friends and family in the current times where we're not able to travel as much in a more meaningful way than a rectangular video call. In fact, maybe in the future, my future presentations will be delivered to you in this form. Who knows? But some very exciting stuff ahead.
Now, the second way to use TensorFlow.js is via transfer learning. And this allows you to retrain existing models to work with your own custom data. Now, of course, if you're a machine learning expert, you can do this all programmatically. But today, I want to show you two easier ways to get started. The first is Teachable Machine. This is a website that can do both the training and the inference completely in the web browser. This is great for prototyping things like image classification, pose estimation, and sound detection. I think more models will be supported in the future, so watch this space. But let's see it in action to give you a flavor of how it works. Okay, so if we head over to teachablemachine.withgoogle.com, you can follow along if you like. We can actually select one of three project types to choose from.
4. Recognizing Custom Objects with Image Project
Today, we're going to recognize a custom object using the Image Project feature. We can choose to recognize my face or a deck of playing cards. After giving meaningful names to the classes, we record samples and train the model. The model successfully categorizes the training data and predicts the objects with high confidence. This prototype can be exported as a model.
Today, we're going to go for Image Project to recognize a custom object. So we click on that, and we're then presented with a screen like this. On the left, we've got a number of classes for the objects you want to recognize. If you want to recognize more than two things, we can click the nice Add Class button here if we choose to do so. But today, we're just going to recognize my face or a deck of playing cards.
So let's go ahead and give them some more meaningful names. I'm going to call the first one Jason, to represent me, and the second class I'm going to call Cards, which represents the cards. Now, all we need to do is allow access to our webcam, and you'll see a live webcam preview pop up on the left-hand side for the first class.
Now I just need to record some samples of my face to make sure we have some training data for this class type. So let's go ahead and do that. I'm going to move my head around to get some variety. There we go. And we can see that I've got... how many images have I got there? About 38 sample images. Perfect. I'm now going to go and do the same thing with class number two, the deck of cards. And you can see right here a nice set of playing cards. So what I'm going to do is hold to record again, but this time I'm going to get roughly the same number of images, but of the cards. So I've got 42 there. That's close enough. All I need to do now is click on Train Model.
And now, live in the web browser, this is going to attempt to learn to distinguish the classes from the training data I've presented to it. And you can see there, in under 30 seconds it's already complete, and it's currently predicting Jason as the output with 99% confidence, which is pretty good. And if I bring my deck of playing cards up, you can see that switches to cards with 100% confidence. So Jason, cards, Jason, cards. And you can see how easy that was to make and how robust it is in actually detecting those two objects.
Now, of course, this is a prototype. If this was good enough for what I needed, I can click on Export Model here.
5. Using Cloud AutoML and TensorFlow.js
After exporting the Teachable Machine prototype, the focus shifts to Cloud AutoML, which trains custom vision models in the cloud from folders of images uploaded to Google Cloud Storage. Once training is complete, you can click Export and choose TensorFlow.js. Using the resulting production-quality model is simple: import the TensorFlow.js library, import the AutoML library, and call await tf.automl.loadImageClassification with the exported model.json.
Once I click Export Model, I can click on the Download button. And, of course, I can then copy this code and use it on my own website if I choose to do so.
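For context, here is a rough sketch of what using that exported model looks like, assuming the published @teachablemachine/image library and placeholder URLs for the exported model files; it is an illustration rather than the exact snippet Teachable Machine generates.

```js
// A minimal sketch (assumes @tensorflow/tfjs and @teachablemachine/image are
// loaded via <script> tags or npm). The URLs below are placeholders for the
// files you download or host after exporting from Teachable Machine.
const MODEL_URL = 'https://example.com/my-model/model.json';
const METADATA_URL = 'https://example.com/my-model/metadata.json';

async function classifyFrame() {
  const model = await tmImage.load(MODEL_URL, METADATA_URL);
  const webcam = document.getElementById('webcam');      // hypothetical <video> element

  // Returns one entry per class you trained, e.g. "Jason" and "Cards".
  const predictions = await model.predict(webcam);
  for (const p of predictions) {
    console.log(p.className, p.probability.toFixed(2));
  }
}

classifyFrame();
```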
So that's Teachable Machine. And great for prototyping. However, if you've got gigabytes of data, you might want to use something more robust for production quality models. So let's go back to the slides and see how to do that.
So Cloud AutoML allows us to train custom vision models in the cloud, and we can deploy to TensorFlow.js at the end, which is super useful. So all you have to do is upload folders of images that you want to classify to Google Cloud Storage, as you can see here, and then click on the Next button. Once you do that, you'll be asked if you want to optimize your model for higher accuracy or faster predictions or some kind of trade-off between the two. You then set a budget and leave it training for hours or days, depending how much data you've got uploaded there, and it will come back to you with the best results. It's going to train your model and it's going to try many different hyperparameters, many different types of computer vision models, and try and figure out what works best with your data. Once it's ready, you can then click Export and choose TensorFlow.js, as shown here in the circle, which will download the model.json files which you need to run in the web browser. And with that, you can then use it on your own web page and add your own user experience and user interface and so on and so forth.
You might be wondering, well, how hard is it to actually use this production-quality trained model? And actually, it's super simple. In fact, it's so simple it fits on one slide. So let's walk through this code. First, we're going to import the TensorFlow.js library using this script tag. Second, we're going to import the AutoML library with the second script tag. Below this, I've created an image element, which is just a daisy image I found somewhere on the internet. This is the image we want to classify. It could be a frame from the webcam or whatever it might be, but for simplicity I've just taken a daisy.jpg. Now, the meat of the code is in these three lines of JavaScript. The first one simply calls await tf.automl.loadImageClassification, and we pass it model.json. So model.json here is the file we downloaded on the previous page, and this will be hosted somewhere on your content delivery network. Now, because this is an asynchronous operation, we have to use the await keyword in JavaScript to tell JavaScript to wait for that to finish before continuing sequentially thereafter. So once that's loaded, because the model might be a couple of megabytes in size, we can then move on to the next step.
6. Image Classification with TensorFlow.js
To classify an image using TensorFlow.js, you need to grab a reference to the image element, call the model.classify method, and wait for the asynchronous operation to finish. The predictions object will contain important information about the image, allowing you to perform various actions with just a few lines of code.
So next, we grab a reference to the image we want to classify. In this case, we call document.getElementById('daisy'), which refers to the daisy image above. And now we've got a reference to that in memory. All we need to do now is call await model.classify and pass it the image we want to classify. And this, again, is an asynchronous operation, because it might take several milliseconds to execute, which of course in computer terms is a very long time. So we want to wait for that to finish, and then we'll have a JSON object assigned to this predictions constant here on the left, which you can then iterate through to see everything the model thinks is important to know about the image. And with that, you can do whatever you like. You can trigger something to run. You could control a robot. You could do whatever you wanted to do, just with a few lines of code. So super cool and super functional.
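Putting the walkthrough together, the code being described looks roughly like the following sketch; the script-tag setup and the daisy.jpg image are assumed to be on the page, and the prediction handling is illustrative.

```js
// A minimal sketch of the slide being described (assumes @tensorflow/tfjs and
// @tensorflow/tfjs-automl are loaded via <script> tags, and that an
// <img id="daisy" src="daisy.jpg"> element exists on the page).
async function classifyDaisy() {
  // model.json is the file exported from Cloud AutoML, hosted on your CDN.
  const model = await tf.automl.loadImageClassification('model.json');

  // Grab the image element and classify it; a webcam frame works the same way.
  const image = document.getElementById('daisy');
  const predictions = await model.classify(image);

  // predictions is an array of {label, prob} entries you can iterate over.
  for (const p of predictions) {
    console.log(p.label, p.prob);
  }
}

classifyDaisy();
```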
7. API Structure and Performance Benefits
Now the third way to use TensorFlow.js is to write your own code. Today, I'm going to focus on the superpowers and performance benefits of using TensorFlow.js in the browser. We have two APIs: the high-level Layers API, similar to Keras, and the lower-level Ops API for mathematical operations. Our pre-made models sit on top of these APIs, which can execute on different backends, including CPU, WebGL, and WebAssembly. On the server-side, we can execute using Node.js, with the same performance benefits as Python.
Now the third way to use TensorFlow.js is to write your own code. And of course, to go through that would be a whole different talk in itself. So today I'm going to focus on the superpowers and performance benefits of why you might want to consider using TensorFlow.js in the browser.
I want to give you an overview of how our API is structured. We've got two APIs. One is the high-level Layers API, which is very similar to Keras, if you're familiar with Python. In fact, if you use Keras, it's basically the same function signatures, so you should feel very much at home. And then for those of you who want to go lower level, we have the Ops API, which is the more mathematical layer that allows you to do things like linear algebra and so on and so forth.
And then here we've got our pre-made models, which sit on top of our Layers API. That Layers API sits on top of our Ops API, and this understands how to talk to different environments, such as the client side. And by client side here, we mean things like the web browser. Now, those environments themselves can execute on different backends. In this case, we can execute on things like the CPU, which is the slowest form of execution; WebGL, to get graphics card acceleration; and WebAssembly, or WASM for short, for improved performance on the CPU across mobile devices. And the same is true for the server side as well. We can execute using Node.js on the server side, and this can talk to the same TensorFlow CPU and GPU bindings that Python has. So yes, that means you get the same AVX support and the same CUDA acceleration that you do in Python. And in fact, as we'll see later, this means the performance benefits are pretty much exactly the same. We execute as fast as, and sometimes faster than, Python for certain use cases.
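As a concrete illustration of those layers, here is a small sketch assuming only the core @tensorflow/tfjs package (plus the optional WASM backend package): the Layers API defines a Keras-style model, the Ops API does direct tensor math, and tf.setBackend switches between the WebGL, WASM, and CPU backends.

```js
import * as tf from '@tensorflow/tfjs';
// import '@tensorflow/tfjs-backend-wasm';   // optional: registers the WASM backend

async function demo() {
  // Pick an execution backend: 'webgl' (GPU), 'wasm', or 'cpu'.
  await tf.setBackend('webgl');
  await tf.ready();
  console.log('Running on backend:', tf.getBackend());

  // High-level Layers API, very similar to Keras in Python.
  const model = tf.sequential();
  model.add(tf.layers.dense({ units: 1, inputShape: [1] }));
  model.compile({ optimizer: 'sgd', loss: 'meanSquaredError' });

  // Lower-level Ops API for direct tensor math.
  const a = tf.tensor2d([[1, 2], [3, 4]]);
  const b = tf.tensor2d([[5, 6], [7, 8]]);
  a.matMul(b).print();
}

demo();
```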
8. Python Development and Performance Benefits
If you choose to develop your machine learning in Python, our Node.js implementation supports loading Keras models and TensorFlow SavedModels without conversion. However, if you want to run a SavedModel in the web browser, you'll need to use our TensorFlow.js command-line converter. Performance-wise, converting pre- and post-processing to Node.js can lead to significant boosts; Hugging Face converted their DistilBERT model to run in Node.js, resulting in a two-times performance boost. Executing on the client side offers privacy, lower latencies, and lower costs, making it ideal for certain applications.
Now, if you choose to still develop your machine learning in Python, which many of you, of course, will, that's completely fine too. Our Node.js implementation supports loading Keras models and TensorFlow SavedModels without any kind of conversion. So as long as you're executing on the server side in Node, no conversion is required to use that model and integrate with, say, a web team. So that's pretty convenient. And then if you take your SavedModel and you want to run it in the web browser, you'll have to use our TensorFlow.js command-line converter to do so. That will convert the SavedModel format into the JSON format we need to run in the web browser, and that's only required if you want to run client-side in the browser.
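To make that concrete, here is a small sketch assuming the @tensorflow/tfjs-node package and a hypothetical ./my_saved_model directory; the shell command in the trailing comment is the tensorflowjs_converter tool mentioned above, shown under the same assumptions.

```js
// A minimal sketch (assumes the @tensorflow/tfjs-node package; use
// @tensorflow/tfjs-node-gpu instead for CUDA acceleration).
const tf = require('@tensorflow/tfjs-node');

async function runSavedModel() {
  // Load a TensorFlow SavedModel exported from Python, with no conversion step.
  const model = await tf.node.loadSavedModel('./my_saved_model');   // hypothetical path

  // Run inference on a dummy input (shape depends on your model); for a
  // single-output model, predict returns a tensor.
  const input = tf.zeros([1, 224, 224, 3]);
  const output = model.predict(input);
  output.print();
}

runSavedModel();

// Only if you want to run the same model client-side in the browser do you need
// the command-line converter, e.g. (run in a shell, not in Node):
//   tensorflowjs_converter --input_format=tf_saved_model \
//       --output_format=tfjs_graph_model ./my_saved_model ./web_model
```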
So let's talk about performance. Here we can see, for MobileNet V2, the average inference times for GPU and CPU. Looking at the GPU results, you can see that for TensorFlow in Python we get a result of 7.98 milliseconds, and for Node.js we get 8.81, so within a millisecond of each other, which is essentially the same result. Now, the key thing to note here is that if you have a lot of pre- and post-processing and you convert that to be written in Node.js, you get the benefits of a just-in-time compiler that's unique to JavaScript at runtime, and this can lead to significant performance boosts outside of the inference alone. So let's see how Hugging Face used this. Here we can see how Hugging Face converted their DistilBERT model to run in Node.js. They basically converted the pre- and post-processing layers to be written in JavaScript in Node, and this led to roughly a two-times end-to-end performance boost. What's important to note here is that we are still using the SavedModel from Python, but by converting the pre- and post-processing layers, we get a two-times speed increase because of the just-in-time compilation of JavaScript in Node. And on that note, if you are thinking about executing on the client side, there are also some superpowers to consider that are hard or impossible to achieve on the server side in Node or Python. The first one is privacy. If you're executing entirely on the client side, then none of the sensor data is going to a server for inference, and that means the client's privacy is completely preserved. That's very important for certain types of applications, like medical or legal, or if you're trying to comply with rules such as GDPR, where you might not be allowed to transmit data to a different server. Second point: if no server is involved, you can achieve lower latencies. Typically, it might take 100 milliseconds or more for a mobile device to talk to a server and get the result back. If you're using TensorFlow.js on-device, you can cut that middleman out and have lower latency for your inference, resulting in higher frames per second for real-time applications. The third point is lower cost. Because no servers are involved, you can save significant costs on hiring GPUs, RAM, and CPUs that might be running 24/7 for a busy machine learning application. By doing this all client-side, you don't need to hire those pieces of hardware in the first place. You just need a standard web CDN to deliver the website.
9. Interactivity, Reach, Scale, and Performance
JavaScript provides interactivity and rich presentation capabilities. The reach and scale of the web allow for easy access to machine learning models. TensorFlow.js in the browser offers GPU support on a wide range of devices. Node.js enables the use of TensorFlow saved models without conversion. JavaScript allows coding in a single language, and there is a large NPM ecosystem for Node.js. Performance is enhanced with C bindings and just-in-time compilation.
Fourth point, interactivity. JavaScript from day one has been designed to present information in a rich way. And we have very mature libraries for 3D graphics, data visualization, charting, and much, much more. This allows you to prototype very complex things very, very quickly. And this is one of the advantages of using JavaScript.
And then the fifth point is the reach and scale of the web. Anyone in the world can click a link and use your machine learning model in a web browser. The same is not true if you want to do this in Node or Python. Because first of all, you have to understand how to install Linux. Secondly, you need to install TensorFlow. Then you need to install the CUDA driver and then you need to clone the person's GitHub repository, read their README, and if all of that works in your favor, then you might have a chance of running their machine learning model. So you can see how there's a much lower barrier to entry here if your purpose is to get your research used by many people around the world. And that can be really great because it can allow you to identify biases or bugs that maybe could have gone overlooked if only 10 people were using it instead of 10,000.
With TensorFlow.js in the browser, we can run on GPUs on 84% of devices thanks to WebGL. We're not limited to just NVIDIA graphics cards; we can run on AMD ones too, and so on. And if we look at the server side, we can also see some of the benefits of running in Node.js. It allows us to use the TensorFlow SavedModel format without any kind of conversion or performance penalty, and we can run larger models than we can on the client side, where there are GPU memory limits you might run into if you try to push a gigabyte-sized model over the web to the client device.
Third point: it allows you to code in just one language. If you're already using JavaScript, this is a big win. Currently, 67% of developers already use JavaScript in development, according to the Stack Overflow 2020 survey. Fourth point: there's a large NPM ecosystem for Node.js, with lots of modules and libraries available for you to use, so there's great community support too. And then the fifth point is performance. As we spoke about, we've got the same C bindings as the original TensorFlow in Python, which gives us parity for inference speeds, and we've got the just-in-time compiler boost for the pre- and post-processing if you choose to convert that over to Node.js.
So with that, let's wrap up with some resources you can use to get started and learn more. If there's one slide you want to bookmark, it's probably this one. Here you can see all the resources you need to get started with TensorFlow.js. On our website at the top there, you can find many resources and tutorials to help you on your way, and our models are available at tensorflow.org/js/models.
10. Getting Started and Community Engagement
I've shown you a few examples today, but there are many more available on GitHub, along with CodePen and Glitch boilerplate examples, for you to get started quickly. If you want to dive deeper, I recommend reading Deep Learning with JavaScript. Join our community and check out the #madewithtfjs hashtag on Twitter or LinkedIn to see the amazing projects being created. Machine learning is now accessible to everyone, so artists, musicians, and creatives can all take advantage of it. Stay in touch on Twitter and LinkedIn for further questions.
I've only shown you three or four today, but there are many, many more on there which you can use out of the box to get started super fast. We're completely open source, so we're available on GitHub as well, so feel free to check us out if you're ambitious. We have a Google group for more advanced technical questions, which our team monitors. And of course, we've even got CodePen and Glitch examples to help you get started with boilerplate code that shows how to take data from a webcam and pass it to some of our models. So with that, you can get started very, very quickly.
Now, if you want to go deeper, I recommend reading Deep Learning with JavaScript, published by Manning Publications and written by folks on my team and the TensorFlow team itself. It's a great resource to go from zero to hero. And with that, I encourage you to come join our community. If you check out the #madewithtfjs hashtag on Twitter or LinkedIn, you'll find hundreds of projects that people are creating every single week around the world. I can't show them all in the presentation today, but here's just a glimpse of some of the great things going on elsewhere in the community.
So my last question for you is: what will you make? Here's one final piece of inspiration from a guy in our community in Tokyo, Japan. He is a dancer by day, but he's used TensorFlow.js to make this really cool-looking hip-hop video, as you can see on the slide. And my point in showing this is that machine learning really is now for everyone, and I'm super excited to see how everyone else in the world will start to use machine learning now that it's becoming more accessible. Artists, musicians, creatives: everyone has a chance now to use machine learning, and if you do, please make use of that #madewithtfjs hashtag so we can have you featured in our future presentations and blog post write-ups. Thank you very much for listening, and with that, feel free to stay in touch. I'm available on Twitter and LinkedIn for further questions, and I look forward to talking with you soon. Thank you. Well, thank you for having me today. Great to be here. Well, really happy to have you.
JavaScript and Machine Learning Collaboration
The Q&A opens with a light question about the speaker's multi-camera setup. TensorFlow.js has the potential to become more widely used than the Python version, given how many developers already use JavaScript. You don't need much of an ML background to use the pre-made models in JavaScript, while those with a machine learning background can benefit from client-side advantages such as privacy and scalability. Use the right tool for the right job and make your cutting-edge ML research accessible to the whole world. Collaboration between the JavaScript and machine learning communities can lead to great innovations.
My first question, and the question is from me, is how many cameras do you have? It felt like you have like six cameras going on, all different angles. I've got definitely more than two cameras here. It's very good for demos and doing things in the web browser and then recording at the same time, so good stuff. Yeah, it was nice to be able to enjoy your glowing microphone from six angles, so, yeah. Thanks a lot for your awesome gear setup. It makes the ML train faster if I've got RGBs, right? Good stuff. We can train on that.
So we're going to go to a question from the audience. Is TensorFlow with JavaScript the new wave? Do you think it will be bigger than using TensorFlow with Python at any point? So if we look at just the number of developers that use JavaScript, currently, I believe, 69% of professional developers use JavaScript in production. So if we're looking at just pure numbers of developers that use JS, they're definitely a larger audience than, say, Python. Will all of those JS developers use machine learning? That's the bigger question, of course. But I do think as time progresses, people will realise how they can use it in web environments to unlock new things for their clients and so on. So as it grows, I do believe it has the potential to become more widely used than the Python version. But time will tell how that goes, I guess. So it depends on whether people start using it more in web environments. Yeah.
So for me, I am a web developer. And it sounds scary to me to go into machine learning. I mean, I just want to move boxes around and make media queries and responsive websites. And then you're saying, basically, if you have that knowledge, if you can do JavaScript web development, you can do machine learning with TensorFlow. Yeah. I mean, you don't need to have too much of an ML background to use our pre-made models that I spoke about in the presentation today. As long as you know some basic JavaScript and how to call a function, basically, you can get some results back and then do with that as you wish, just like you would use any other black-box API from the internet or something like this. So it's super easy for people to get started with. But for those of you who have more of a machine learning background, maybe you're already doing it in Python, and you just want to get the reach and scale of the web, because that's something you might not get if you're doing it on the server side only in your own custom environment and not scaling it out to millions of people, then there are some other advantages you can get by doing it with TensorFlow.js on the client side, such as the privacy and all these other things that we spoke about in the presentation. So I think, you know, use the right tool for the right job. And if your goal is to make your cutting-edge ML research popular and accessible to the whole world, then JS is a great way to do that, because people who are not so technical get a chance to use your amazing research. And I'd love to see more of this coming together between the two communities. So imagine if some musician or artist or creative could take that amazing research you've done and apply it in a very different way to what you originally envisioned, but to create some great innovations nonetheless. Kind of like the clothing size estimator that we saw today, which I created in a few days.
Understanding Math and TensorFlow.js
The importance of having a good understanding of math when using TensorFlow.js is similar to Python-based machine learning. While delving into the lower levels may require learning mathematics, working with pre-made models or existing research often simplifies the process. Most users won't need to create custom models from scratch, as there are already models available for various tasks. However, if specific requirements arise, retraining on custom data may be necessary. TensorFlow.js offers accessibility to users of all levels, providing both high-level and low-level APIs. It allows users to start with the basics and gradually explore more advanced concepts.
And the original person who created BodyPix may not have thought of that use case. But of course, when people have problems in their lives, they'll come up with very creative solutions to those problems, and I'm looking forward to seeing what happens in the future with that too.
Yes, so basically people can make an album cover. So a follow-up question to that is, how important is it, if I want to start using TensorFlow.js, to have a good understanding of math? It's just the same as Python-based machine learning. Obviously, if you want to unpeel all the layers of what's going on behind the scenes, you're eventually going to find a lot of the linear algebra and statistics and all this kind of stuff that drives everything. So if you want to go lower level, then of course you start to need to learn some of the mathematics to start tweaking things. But if you are working with the pre-made models or other research people have made, often a lot of this stuff is very simple. I think maybe 5% of people will ever need to actually write their own custom model from scratch. Most of the time, you can reuse existing research. If I want to do face landmark detection, we've got models for that already. I don't need to reinvent the wheel there, unless I need to detect different points that aren't available in the current model. At that point, I need to start diving in and retraining it on my custom data so it learns to recognize those things. But if you don't need to do that, then you can do very well without going too deep into the pile. So it's easy to dip your toes in without it being all too scary.
Yeah, I think no matter what your level, everyone has a chance to play with TensorFlow.js. And if people want to go deeper, all the goodies that you're used to from Python land also exist there. We've got a very similar Keras-like API, and the lower-level mathematical API also exists, just like in the original Python-based version of TensorFlow. So no matter what your level, you can go all-in or you can stay at the high levels and have a more abstract view of it all. Yeah, but it's basically like anything you learn, right? You can dip your toes in and it will be okay. And the more familiar you get, the deeper you can go down the rabbit hole, basically. Exactly. And I think that's the way to go, honestly.
Exploring Machine Learning and the JS Community
People start off on the high level, explore and optimize their passion. I don't consider myself an expert, but I use machine learning creatively. My background is web engineering, and I've been exploring machine learning for the past few years. Being a developer advocate for TensorFlow.js allows me to delve deeper into ML while enjoying JavaScript. The JS community is younger than Python but is starting to develop similar tools. We have danfo.js, inspired by pandas, and welcome contributions to replicate these tools in JS.
I think a lot of people start off on the high level, they find something they're passionate about, they start tinkering with that, and then they want to optimize it in some way or adjust it slightly. And it's at that point they start to go a little deeper into that kind of field and, you know, they make great things. So, yeah, little by little, people go in the right direction as they need to, I think, and explore as they need. Yeah.
Okay. We have another question. How long did it take for you to become the machine learning expert that you are? I don't consider myself an expert. I'm still learning every day. I'm always inspired by the amazing things coming out from the rest of the community and colleagues. So I've got a long way to go on my journey, for sure. I think I just use machine learning in creative ways that people might not always think about, like some of the demos I showed today, so I get known for those kinds of things. But my background is actually web engineering, and it's only been the last three or four years that I've actually started exploring the machine learning side of things. And I was very fortunate to get this role as the developer advocate for TensorFlow.js, where I now have a chance to go much, much deeper into the ML side while still enjoying my passion for JavaScript. So, yeah, I've got a lot of learning to do ahead of me, and I'm excited to do all of that with all of you as well and hopefully share my learnings as I go. Learning in public. That's fun, always. Exactly.
So the next question is actually from my colleague Sergey. Python has a big toolset for exploratory data analysis: pandas, Jupyter, matplotlib. How is it for JS? What is the biggest bottleneck for the JS community in deep learning? Yeah. So obviously we are much, much younger than the Python community. We're only just over three years old, compared to TensorFlow for Python, which has been around quite a bit longer. However, we are starting to see our JS community port some of these things over to JS. So there's a library called danfo.js, which is essentially inspired by pandas. And people are starting to make these tools and things in JS now as well. And of course, we'd love anyone who's interested in helping us make and replicate these things in JS to come contribute. We're open source, and for these other tools, you don't even have to be contributing to TensorFlow.js itself.
Opportunities and Job Security in TensorFlow.js
It could be a standalone library, and now is a good time to get involved. Machine learning skills are in demand, though the opportunities depend on the needs of the company. JavaScript makes it easy to add ML models to a web stack, and jobs will grow as awareness grows, so learning the skills now is a great investment for the future. JS developers are well placed to optimize various industries with ML.
It could be a standalone library. And now is a really good time to get involved because, as I mentioned, 69% of developers are using JavaScript and they're just starting to get their feet wet with machine learning. And this could be a big opportunity to make the library that everyone uses five to ten years from now when it does become more mainstream in the JavaScript community. So this is a great opportunity to be on that bandwagon, if you will. But yes, we're starting to see the community build these things organically, and I'm sure that's going to continue to improve with time.
And this is a question from me again. Let's say I drop everything and go all-in on TensorFlow.js after, I don't know, a few months of studying. Job security, how is it? I mean, machine learning, generally speaking, is in demand. I think most people at this point in time would associate machine learning with Python-based roles, typically, especially at the more academic institutions and research areas, even at Google, for example. However, I think it depends on the needs of the company you're trying to go to. If it's some kind of web startup, maybe they make content management systems or something like this for websites, and maybe they want a differentiating factor to make them stand out from the crowd. Maybe they want to put some smarts into their web stack for that content management system: auto-text summarization, or smart cropping of images when people upload them. And all of this can be done now in JavaScript very, very easily using some of the models we've seen today. And yeah, basically, I think these jobs will grow for sure as people become more aware that it's actually possible to do this in the browser. We're still at the very early stages, as I mentioned, but as people's awareness grows, so will the jobs and other things that come with that. Yeah. So, okay. Learning the skills now is great for the next few years, when those jobs start to appear and people actually need this. Because as JS developers, we're in a unique position: when we make a website, that website could be for pretty much any industry out there. It could be for a farmer. It could be for a fashion brand. It could be for anything. And all of these different verticals have the potential to be optimized in some way using some kind of ML flow.
ML in Various Industries
ML can be injected into various industries, such as fashion and farming, to optimize processes and improve user experiences. For example, a clothing size estimator can be used in the fashion industry, while categorizing goods like apples and oranges can streamline farming operations. The web platform also offers opportunities for frictionless experiences without the need for app installations. Machine learning can optimize touchpoints on websites, enhancing user experiences.
So for fashion, maybe something like the clothing size estimator. For farming, maybe it allows them to automatically send goods to the right place by categorizing apples and oranges, whatever it might be. So there's potential for ML to be injected into their pipelines. And maybe there's room for something on the web platform too for a more seamless, frictionless experience where you don't need to install an app. You just go to the website. So in that case, there's one touchpoint that might need to be optimized using machine learning in that use case. So things like that might come up in the future.
Integration with Server-side TensorFlow
JavaScript developers should learn TensorFlow.js for job security. TensorFlow.js will work hand-in-hand with the server-side TensorFlow serving system. Depending on the situation and team's knowledge, integrating TensorFlow.js with Node.js can simplify integration with web teams. Converting pre- and post-processing layers to Node.js can lead to significant performance improvements. The right combination of tools depends on the use case and team's familiarity with different languages.
Yeah. So JavaScript developers that are watching, you heard it here first. It's good for your job security. Learn TensorFlow.js. I have a question from Jeroen. Will TensorFlow.js make the server-side TensorFlow Serving system obsolete? No. No, I think it will work hand-in-hand with these things. So as I mentioned, it depends on your engineers and what they're used to. I wouldn't want to start any language wars here. I think if you know Python and you're familiar with Python, go and use it, for sure. But if you're integrating with a web team, then maybe the web team will be using Node.js. And in that case, they might prefer to load your Python model in Node, which you can do with TensorFlow.js on Node, and you can do that without any conversion, in fact. So it might be a little bit more complicated, but it makes integration with other teams a lot easier, depending on the stacks they're already using. And a lot of web engineering teams will be using something like Node.js. We also saw today that by converting the pre- and post-processing layers of your ML stack to Node, you can actually get the benefits of the just-in-time compiler of JavaScript, as we saw with Hugging Face and their DistilBERT implementation. It led to a two-times end-to-end speed increase in performance, just by converting to Node.js on the server side. So I think it depends on your use case and your team's knowledge and all these kinds of things. I don't have a strong preference for one over the other; the right combination depends on your situation and what you're trying to do.
Running TensorFlow.js without Server
If you're running on the client side, everything is completely off the server, other than the initial delivery of assets. It can run completely offline if you use a progressive web app. You can do it completely serverless if you choose to do so. Thanks a lot, Jason. Just a reminder, Jason is going to the speaker room now. If you have more questions or want to hang out, go to the Spatial Chat. Thanks for joining us, Jason. I look forward to seeing what you make. Cheers.
Yeah. Okay. We have time for one short answer. This is a question from the user Quest. Very inspiring talk. We talked about security benefits of running TensorFlow.js without server. Is it possible to keep everything off the server, or are there things that wouldn't work without the server? One minute, go.
So yes, if you're running on the client side, then everything is completely off the server, other than the initial delivery of those assets. So in that sense, it can run completely offline if you use something like a progressive web app or something like this. So yes, you can do it completely serverless if you choose to do so.
Cool. Well, thanks a lot, Jason. Just as a reminder, Jason is going to go to the speaker room now. If you have any more questions or want to just hang out with Jason, go to the Spatial Chat, find the link below in the timetable. And thanks a lot for joining us, Jason. And yeah, I might just get my hands dirty this weekend. Excellent. I look forward to seeing what you make. And thanks for having me. Cheers. Bye-bye.