TensorFlow.js 101: ML in the Browser and Beyond

Discover how to embrace machine learning in JavaScript using TensorFlow.js in the browser and beyond in this speedy talk. Get inspired through a whole bunch of creative prototypes that push the boundaries of what is possible in the modern web browser (things have come a long way) and then take your own first steps with machine learning in minutes. By the end of the talk everyone will understand how to recognize an object of their choice which could then be used in any creative way you can imagine. Familiarity with JavaScript is assumed, but no background in machine learning is required. Come take your first steps with TensorFlow.js!

This talk has been presented at ML conf EU 2020. Check out the latest edition of this tech conference.

FAQ

What is TensorFlow.js?
TensorFlow.js is a library that allows you to run, retrain, or write your own machine learning models directly in JavaScript, enabling machine learning applications to be built anywhere JavaScript runs, such as web browsers, servers, and mobile devices.
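For a concrete sense of how little setup this needs, here is a minimal sketch that pulls the library from npm, creates a tensor, and runs an operation on whichever backend TensorFlow.js selects for the current environment:

```js
// npm install @tensorflow/tfjs
import * as tf from '@tensorflow/tfjs';

// Create a 2x2 tensor and square every element on whichever backend
// (WebGL, Wasm, or plain CPU) TensorFlow.js picks for this environment.
const a = tf.tensor2d([[1, 2], [3, 4]]);
const squared = a.square();
squared.print(); // [[1, 4], [9, 16]]
```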

Why use machine learning in JavaScript?
Using machine learning in JavaScript with TensorFlow.js allows deployment across diverse environments like web browsers, servers, and IoT devices without additional plugins. This approach offers powerful capabilities such as client-side execution, preserving privacy, reducing server costs, and enhancing interactivity and user experience.

Can TensorFlow.js run existing Python machine learning models?
Yes. TensorFlow.js can execute existing Python machine learning models through its Node.js implementation, which supports loading Keras models and TensorFlow SavedModels server-side without requiring a conversion step.
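A minimal sketch of that server-side path, assuming a SavedModel directory exported from Python (the ./my_saved_model path and the input shape below are placeholders):

```js
// npm install @tensorflow/tfjs-node   (or @tensorflow/tfjs-node-gpu on CUDA machines)
const tf = require('@tensorflow/tfjs-node');

async function main() {
  // Load a TensorFlow SavedModel directly; no conversion step is needed in Node.js.
  const model = await tf.node.loadSavedModel('./my_saved_model');

  // Dummy input; replace the shape with whatever the model's signature expects.
  const input = tf.zeros([1, 224, 224, 3]);
  const output = model.predict(input);
  console.log(output);
}

main();
```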

What are pre-trained models in TensorFlow.js?
Pre-trained models in TensorFlow.js are easy-to-use JavaScript classes designed for common use cases such as object detection, body segmentation, and face landmark detection, allowing developers to incorporate machine learning features without starting from scratch.
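As an example of how small these classes are to use, here is a sketch with the @tensorflow-models/mobilenet package classifying an image element (the img element id is an assumption for illustration):

```js
// npm install @tensorflow/tfjs @tensorflow-models/mobilenet
import '@tensorflow/tfjs';
import * as mobilenet from '@tensorflow-models/mobilenet';

async function classifyImage(imgElement) {
  const model = await mobilenet.load();                  // downloads the pre-trained weights
  const predictions = await model.classify(imgElement);  // [{ className, probability }, ...]
  console.log(predictions);
}

// Assumes an <img id="photo"> element exists somewhere on the page.
classifyImage(document.getElementById('photo'));
```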

How does TensorFlow.js help preserve user privacy?
TensorFlow.js can run completely client-side in the browser, meaning that sensitive data does not need to be sent to a server for processing. This helps preserve user privacy and comply with data protection regulations like GDPR.

What is transfer learning in TensorFlow.js?
Transfer learning in TensorFlow.js allows developers to retrain existing models with their own custom data, adapting pre-trained models to new, specific tasks without the need for extensive training data and computational resources.
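One common way to do this in TensorFlow.js, sketched below, is to use MobileNet as a fixed feature extractor and feed its embeddings into the @tensorflow-models/knn-classifier package (this is essentially what Teachable Machine does under the hood); the element ids and labels here are placeholders:

```js
// npm install @tensorflow/tfjs @tensorflow-models/mobilenet @tensorflow-models/knn-classifier
import '@tensorflow/tfjs';
import * as mobilenet from '@tensorflow-models/mobilenet';
import * as knnClassifier from '@tensorflow-models/knn-classifier';

const classifier = knnClassifier.create();

// Store a labelled example: MobileNet's intermediate activation acts as the feature vector.
function addExample(model, imgElement, label) {
  const activation = model.infer(imgElement, true); // true = return the embedding, not logits
  classifier.addExample(activation, label);
}

// Classify a new image against the examples collected so far.
async function predict(model, imgElement) {
  const activation = model.infer(imgElement, true);
  const result = await classifier.predictClass(activation);
  console.log(result.label, result.confidences);
}

async function run() {
  const model = await mobilenet.load();
  // Hypothetical image elements and labels, purely for illustration.
  addExample(model, document.getElementById('cat-photo'), 'cat');
  addExample(model, document.getElementById('dog-photo'), 'dog');
  await predict(model, document.getElementById('mystery-photo'));
}
run();
```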

Is TensorFlow.js suitable for developers new to machine learning?
Yes. TensorFlow.js is accessible to developers with varying levels of expertise. It offers pre-made models and high-level APIs that simplify the integration of machine learning capabilities into applications without deep knowledge of the underlying algorithms.

What are the benefits of running models on client devices?
TensorFlow.js enables machine learning models to run directly on client devices, utilizing hardware acceleration like GPUs and reducing latency for real-time applications. This setup can lead to cost savings and performance gains, especially in environments where quick data processing is crucial.

How does TensorFlow.js performance compare with TensorFlow in Python?
The performance of TensorFlow.js is competitive with TensorFlow in Python, especially when models run on Node.js with access to the same CPU and GPU resources as Python. JavaScript's just-in-time compilation can also offer additional performance boosts for certain tasks.

Jason Mayes
41 min
02 Jul, 2021


Video Summary and Transcription

The video covers the use of TensorFlow.js for machine learning in the browser and beyond, emphasizing its versatility in various environments such as web browsers, servers, and mobile devices. TensorFlow.js allows developers to run, retrain, or write machine learning models directly in JavaScript, providing benefits like client-side execution and privacy preservation. Pre-trained models in TensorFlow.js, such as object detection, body segmentation, and face landmark detection, enable rapid prototyping and creative applications. The video also highlights the use of transfer learning to adapt pre-trained models to new tasks with custom data. JavaScript's interactivity and rich presentation capabilities make it ideal for integrating machine learning into web applications, offering performance benefits through hardware acceleration like GPUs. The talk also mentions the potential of TensorFlow.js to enhance various industries, including fashion and farming, by optimizing processes and improving user experiences. The video encourages developers to explore the TensorFlow.js ecosystem, including tools like Teachable Machine and Cloud AutoML, and to engage with the community for further learning and collaboration.

1. Introduction to TensorFlow.js

Short description:

Hello, everyone. I'm going to be talking to you about machine learning in the browser and beyond. JavaScript is one of the only languages that can run across all of these environments without additional plugins. TensorFlow.js is a great way to train your machine learning models. You can make anything you might dream up, from augmented reality to sentiment analysis. The easiest way is to use our pre-trained models, which include object detection, body segmentation, pose estimation, and face landmark detection. You can see this in action live in the browser, even using the webcam.

Hello, everyone. I'm going to be talking to you about machine learning in the browser and beyond. So let's get started.

Now, first off, why would you want to use machine learning in JavaScript? That's a great question. And if we look here, we can see that we can use machine learning anywhere JavaScript can run, and that's actually a lot of places. The web browser, server side, desktop, mobile, and even Internet of Things. And JavaScript is one of the only languages that can run across all of these environments without additional plugins. And that alone is very, very powerful.

And with TensorFlow.js, you can run, retrain via transfer learning, or write your own machine learning models completely from scratch if you want to, just like you might be doing in Python right now, but in JavaScript. And that means you can make anything you might dream up, from augmented reality to sound recognition, sentiment analysis, and much, much more. It's really up to you what you create.

So the easiest way is to use our pre-trained models. These are super easy to use JavaScript classes for many common use cases, like we can see on the current slide. And here, we can see things like object detection, body segmentation, pose estimation, face landmark detection, and much, much more. Even natural language processing is supported in the browser. So let's see some of these in action. Now, first up is object recognition. This is using COCO-SSD behind the scenes, and it is trained on 90 object classes. You can see this in action on the right-hand side, with the dogs being highlighted with their bounding boxes, and we even know that there are two dogs in this image, as both are returned to us. So let's see this in action live to see how it performs in the browser.
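For reference, a minimal sketch of how this model is used via the @tensorflow-models/coco-ssd package, run against a webcam feed like the demo that follows (the video element and its getUserMedia wiring are assumed):

```js
// npm install @tensorflow/tfjs @tensorflow-models/coco-ssd
import '@tensorflow/tfjs';
import * as cocoSsd from '@tensorflow-models/coco-ssd';

async function run(videoElement) {
  const model = await cocoSsd.load(); // COCO-SSD, trained on 90 object classes

  async function detectFrame() {
    const predictions = await model.detect(videoElement);
    // Each prediction looks like { bbox: [x, y, width, height], class: 'dog', score: 0.97 }.
    console.log(predictions);
    requestAnimationFrame(detectFrame); // keep classifying the live webcam feed
  }
  detectFrame();
}

// Assumes a <video id="webcam"> element that is already streaming the camera.
run(document.getElementById('webcam'));
```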

Okay, so here's a web page I created that's running this code live in Chrome, and if I click on any one of these images, I can now get object detection working for any objects it finds in those images, even if they are of different object class types. But we can do better than this, we can actually enable the webcam and then do this live in realtime, and you can see me talking to you right now, and you can see how it's classifying both myself and the bed in the background sometimes as I speak to you. And this is super cool, because this is running client-side in the browser, so that means none of this webcam imagery is being sent to a server for classification. Not only does that save costs, it also means my privacy is preserved, and that's a really big thing these days. And with TensorFlow.js, you can get that for free. So next up, we've got face mesh.

2. Face Mesh, Body Segmentation, and Pose Estimation

Short description:

This part introduces the face mesh feature, which can recognize 468 facial landmarks on the human face. It also showcases the real-time performance of face mesh in a web browser, with the ability to move the face around and see the mesh overlay. Additionally, the use of body segmentation and pose estimation is demonstrated, highlighting the creative potential of these models.

This is just 3 megabytes in size and can recognize 468 facial landmarks on the human face. And this is a really cool feature, and I'm going to show you how to use it on the slide right now.
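In code, using the model looks roughly like this sketch with the @tensorflow-models/facemesh package that was current at the time of this talk (it has since been superseded by @tensorflow-models/face-landmarks-detection); the video element is an assumption:

```js
// npm install @tensorflow/tfjs @tensorflow-models/facemesh
import '@tensorflow/tfjs';
import * as facemesh from '@tensorflow-models/facemesh';

async function run(videoElement) {
  const model = await facemesh.load();
  const faces = await model.estimateFaces(videoElement);
  // Each detected face exposes scaledMesh: 468 [x, y, z] landmark coordinates.
  faces.forEach(face => console.log(face.scaledMesh.length, 'landmarks'));
}

// Assumes a <video id="webcam"> element streaming the camera.
run(document.getElementById('webcam'));
```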

Now, people are starting to use this for real-world use cases. L'Oreal, for example, has created an AR makeup try-on, which allows you to try on lipstick, in this case in realtime, without even having to be physically present in the store. So you should note, the lady on the right-hand side is not wearing any lipstick. We're using face mesh to understand where her lips are, and how to apply the lipstick she wants on her face in realtime. So this is super cool, and I'm sure we're going to see more stuff like this coming out in the future.

So let's see face mesh in action to see how it performs in the real world. Let's switch to the demo. Okay, so now you can see me talking to you with face mesh running in realtime in the web browser at the same time. On the left-hand side here, you can see the machine learning in action, and there is indeed a mesh of my face being overlaid on the web browser, and I can move my face around, and it's pretty robust. I can open and close my mouth and my eyes, and you can see that happening all in realtime. And we're getting a pretty solid 25 frames a second or so as I'm running on my GPU via WebGL.

Now, of course, I'm also streaming this live, so I'm using some of my GPU resources right now. I could get higher frames per second if I desired, but as a developer I can also switch to Wasm, which is WebAssembly, to execute on the CPU, or I can choose plain JavaScript execution on the CPU by itself, which is the slowest form of execution. So there are a number of options on where you can deploy to. And because this is JavaScript, not only am I doing the machine learning there on the left-hand side, I'm also able to render this 3D point cloud of the results on the right-hand side using Three.js. And JavaScript from day one has been designed for the presentation and display of information, so you've got a lot of libraries, especially for 3D graphics, to be able to do this very, very easily. And you can see I can move it around and now inspect my face from different angles, which is being constructed in realtime live in the browser.
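The backend switch mentioned above looks roughly like this in code (the Wasm backend ships as a separate package, and depending on your bundler you may also need its setWasmPaths helper to point at the hosted .wasm binaries):

```js
// npm install @tensorflow/tfjs @tensorflow/tfjs-backend-wasm
import * as tf from '@tensorflow/tfjs';
import '@tensorflow/tfjs-backend-wasm'; // registers the 'wasm' backend

async function pickBackend() {
  // 'webgl' runs on the GPU, 'wasm' runs WebAssembly on the CPU,
  // and 'cpu' is the plain JavaScript fallback (the slowest option).
  await tf.setBackend('wasm');
  await tf.ready();
  console.log('Active backend:', tf.getBackend());
}

pickBackend();
```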

Okay, back to the slides. So next up is body segmentation. This allows you to distinguish 24 body areas across multiple bodies, all in realtime. You can see this in action on the slide, and on the right-hand side, we've got a number of lines that represent different parts of each body. Even better, we've got pose estimation going on at the same time, those light blue lines contained within each one of the bodies on the right-hand side there, which allow us to estimate where the human skeleton is. And with that, that can enable really powerful demos, such as the ability to recognize when you're in a certain pose, or a gesture, or something like this. And we've got a lot of great data from our yoga instructors and this kind of stuff, so it's super cool to see the creative potential of this model. And in fact, with a bit of creativity, we can use things like BodyPix in a number of delightful ways. Here are just two examples I created in my spare time. On the left-hand side, you can see me becoming invisible as I walk around on my bed, and notice when I walk on the bed, the bed still deforms. And with this static image, I'm able to calculate the background updates in real-time and only remove my body from the parts where my body actually is. And of course, it's not perfect, but it's the first steps, and this was made in just a day.
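The body segmentation shown here ships as the @tensorflow-models/body-pix package; a minimal sketch (the webcam video element is an assumption) looks like this:

```js
// npm install @tensorflow/tfjs @tensorflow-models/body-pix
import '@tensorflow/tfjs';
import * as bodyPix from '@tensorflow-models/body-pix';

async function run(videoElement) {
  const net = await bodyPix.load();

  // Per-pixel body-part IDs (24 parts); -1 marks background pixels.
  const partSegmentation = await net.segmentPersonParts(videoElement);
  console.log(partSegmentation.data);     // Int32Array, one part ID per pixel
  console.log(partSegmentation.allPoses); // pose keypoints estimated at the same time
}

// Assumes a <video id="webcam"> element streaming the camera.
run(document.getElementById('webcam'));
```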


Check out more articles and videos

We constantly think of articles and videos that might spark GitNation people's interest, skill us up, or help build a stellar career

Building Brain-controlled Interfaces in JavaScript
JSNation Live 2021
27 min
Top Content
Learn how to build brain-controlled interfaces using JavaScript and brain sensors. Understand the functions of different parts of the brain and how they relate to sensor placement. Explore examples of calm and focus detection, as well as the Kinesis API for mental commands. Discover the applications of brain-controlled interfaces, such as scrolling web pages and password-less authentication. Understand the limits and opportunities of brain control and the potential for using brain sensors in medical applications.
Crafting the Impossible: X86 Virtualization in the Browser with WebAssembly
JSNation 2022
21 min
CheerpX is a technology to securely run binary code in the browser, written in C++ and compiled to WebAssembly. It can run a full virtualized system in the browser, including Bash and other languages like Python and JavaScript. CheerpX aims for scalability and the ability to work with large code bases, supporting multiprocessing and multithreading. It uses a two-tiered execution engine with an interpreter and a JIT engine. Future plans include running the full X.Org server in the browser and implementing Windows system calls. WebVM, the underlying technology, has a virtual file system backed by Cloudflare.
Using MediaPipe to Create Cross Platform Machine Learning Applications with React
React Advanced Conference 2021
21 min
Top Content
MediaPipe is a cross-platform framework that helps build perception pipelines using machine learning models. It offers ready-to-use solutions for various applications, such as selfie segmentation, face mesh, object detection, hand tracking, and more. MediaPipe can be integrated with React using NPM modules provided by the MediaPipe team. The demonstration showcases the implementation of face mesh and selfie segmentation solutions. MediaPipe enables the creation of amazing applications without needing to understand the underlying computer vision or machine learning processes.
Build a 3D Solar System with Hand Recognition and Three.js
JSNation 2022
36 min
Top Content
This Talk explores the use of TypeScript, Three.js, hand recognition, and TensorFlow.js to create 3D experiences on the web. It covers topics such as rendering 3D objects, adding lights and objects, hand tracking, and creating interactive gestures. The speaker demonstrates how to build a cube and a bouncy box, move objects with flick gestures, and create a solar system with stars and planets. The Talk also discusses the possibilities of using hand gestures for web navigation and controlling websites, as well as the performance limits of these technologies.
Makepad - Leveraging Rust + Wasm + WebGL to Build Amazing Cross-platform Applications
JSNation 2022
22 min
Top Content
Welcome to Makepad, a new way to build UI for web and native using WebAssembly and Rust. JavaScript is not suitable for complex applications like IDEs and design tools. Rust, a new programming language, was used to reimagine Makepad, resulting in a fast and efficient platform. Makepad offers live editing, high CPU performance, and the ability to load native instrument components. The future of Makepad includes an open-source release, a design tool, and support for importing 3D models.
Charlie Gerard's Career Advice: Be intentional about how you spend your time and effort
Article
Charlie Gerard
When it comes to career, Charlie has one trick: to focus. But that doesn’t mean that you shouldn’t try different things — currently a senior front-end developer at Netlify, she is also a sought-after speaker, mentor, and a machine learning trailblazer of the JavaScript universe. "Experiment with things, but build expertise in a specific area," she advises.
What led you to software engineering?
My background is in digital marketing, so I started my career as a project manager in advertising agencies. After a couple of years of doing that, I realized that I wasn't learning and growing as much as I wanted to. I was interested in learning more about building websites, so I quit my job and signed up for an intensive coding boot camp called General Assembly. I absolutely loved it and started my career in tech from there.
What is the most impactful thing you ever did to boost your career?
I think it might be public speaking. Going on stage to share knowledge about things I learned while building my side projects gave me the opportunity to meet a lot of people in the industry, learn a ton from watching other people's talks and, for lack of better words, build a personal brand.
What would be your three tips for engineers to level up their career?
Practice your communication skills. I can't stress enough how important it is to be able to explain things in a way anyone can understand, but also communicate in a way that's inclusive and creates an environment where team members feel safe and welcome to contribute ideas, ask questions, and give feedback. In addition, build some expertise in a specific area. I'm a huge fan of learning and experimenting with lots of technologies but as you grow in your career, there comes a time where you need to pick an area to focus on to build more profound knowledge. This could be in a specific language like JavaScript or Python or in a practice like accessibility or web performance. It doesn't mean you shouldn't keep in touch with anything else that's going on in the industry, but it means that you focus on an area you want to have more expertise in. If you could be the "go-to" person for something, what would you want it to be?
And lastly, be intentional about how you spend your time and effort. Saying yes to everything isn't always helpful if it doesn't serve your goals. No matter the job, there are always projects and tasks that will help you reach your goals and some that won't. If you can, try to focus on the tasks that will grow the skills you want to grow or help you get the next job you'd like to have.
What are you working on right now?
Recently I've taken a pretty big break from side projects, but the next one I'd like to work on is a prototype of a tool that would allow hands-free coding using gaze detection.
Do you have some rituals that keep you focused and goal-oriented?
Usually, when I come up with a side project idea I'm really excited about, that excitement is enough to keep me motivated. That's why I tend to avoid spending time on things I'm not genuinely interested in. Otherwise, breaking down projects into smaller chunks allows me to fit them better in my schedule. I make sure to take enough breaks, so I maintain a certain level of energy and motivation to finish what I have in mind.
You wrote a book called Practical Machine Learning in JavaScript. What got you so excited about the connection between JavaScript and ML?
The release of TensorFlow.js opened up the world of ML to frontend devs, and this is what really got me excited. I had machine learning on my list of things I wanted to learn for a few years, but I didn't start looking into it before because I knew I'd have to learn another language as well, like Python, for example. As soon as I realized it was now available in JS, that removed a big barrier and made it a lot more approachable. Considering that you can use JavaScript to build lots of different applications, including augmented reality, virtual reality, and IoT, and combine them with machine learning as well as some fun web APIs felt super exciting to me.

Where do you see the fields going together in the future, near or far?
I'd love to see more AI-powered web applications in the future, especially as machine learning models get smaller and more performant. However, it seems like the adoption of ML in JS is still rather low. Considering the amount of content we post online, there could be great opportunities to build tools that assist you in writing blog posts or that can automatically edit podcasts and videos. There are lots of tasks we do that feel cumbersome that could be made a bit easier with the help of machine learning.
You are a frequent conference speaker. You have your own blog and even a newsletter. What made you start with content creation?
I realized that I love learning new things because I love teaching. I think that if I kept what I know to myself, it would be pretty boring. If I'm excited about something, I want to share the knowledge I gained, and I'd like other people to feel the same excitement I feel. That's definitely what motivated me to start creating content.
How has content affected your career?
I don't track any metrics on my blog or likes and follows on Twitter, so I don't know what created different opportunities. Creating content to share something you built improves the chances of people stumbling upon it and learning more about you and what you like to do, but this is not something that's guaranteed. I think over time, I accumulated enough projects, blog posts, and conference talks that some conferences now invite me, so I don't always apply anymore. I sometimes get invited on podcasts and asked if I want to create video content and things like that. Having a backlog of content helps people better understand who you are and quickly decide if you're the right person for an opportunity.
What pieces of your work are you most proud of?
It is probably that I've managed to develop a mindset where I set myself hard challenges on my side project, and I'm not scared to fail and push the boundaries of what I think is possible. I don't prefer a particular project, it's more around the creative thinking I've developed over the years that I believe has become a big strength of mine.
***
Follow Charlie on Twitter

Workshops on related topic

Leveraging LLMs to Build Intuitive AI Experiences With JavaScript
JSNation 2024
108 min
Featured Workshop
Roy Derks
Shivay Lamba
Today every developer is using LLMs in different forms and shapes, from ChatGPT to code assistants like GitHub Copilot. Following this, lots of products have introduced embedded AI capabilities, and in this workshop we will make LLMs understandable for web developers. And we'll get into coding your own AI-driven application. No prior experience in working with LLMs or machine learning is needed. Instead, we'll use web technologies such as JavaScript and React, which you already know and love, while also learning about some new libraries like OpenAI and Transformers.js.
Can LLMs Learn? Let’s Customize an LLM to Chat With Your Own Data
C3 Dev Festival 2024
48 min
Workshop (Free)
Andreia Ocanoaia
Feeling the limitations of LLMs? They can be creative, but sometimes lack accuracy or rely on outdated information. In this workshop, we’ll break down the process of building and easily deploying a Retrieval-Augmented Generation system. This approach enables you to leverage the power of LLMs with the added benefit of factual accuracy and up-to-date information.
Let AI Be Your Docs
JSNation 2024
69 min
Workshop
Jesse Hall
Join our dynamic workshop to craft an AI-powered documentation portal. Learn to integrate OpenAI's ChatGPT with Next.js 14, Tailwind CSS, and cutting-edge tech to deliver instant code solutions and summaries. This hands-on session will equip you with the knowledge to revolutionize how users interact with documentation, turning tedious searches into efficient, intelligent discovery.
Key Takeaways:
- Practical experience in creating an AI-driven documentation site.
- Understanding the integration of AI into user experiences.
- Hands-on skills with the latest web development technologies.
- Strategies for deploying and maintaining intelligent documentation resources.

Table of contents:
- Introduction to AI in Documentation
- Setting Up the Environment
- Building the Documentation Structure
- Integrating ChatGPT for Interactive Docs
Intro to AI for JavaScript Developers with Tensorflow.js
JSNation Live 2021
81 min
Workshop
Chris Achard
Have you wanted to explore AI, but didn't want to learn Python to do it? TensorFlow.js lets you use AI and deep learning in JavaScript – no Python required!
We'll take a look at the different tasks AI can help solve, and how to use TensorFlow.js to solve them. You don't need to know any AI to get started - we'll start with the basics, but we'll still be able to see some neat demos, because TensorFlow.js has a bunch of functionality and pre-built models that you can use on the server or in the browser.
After this workshop, you should be able to set up and run pre-built TensorFlow.js models, or begin to write and train your own models on your own data.
Hands on with TensorFlow.js
ML conf EU 2020
160 min
Workshop
Jason Mayes
Come check out our workshop which will walk you through 3 common journeys when using TensorFlow.js. We will start by demonstrating how to use one of our pre-made models - super-easy-to-use JS classes to get you working with ML fast. We will then look at how to retrain one of these models in minutes using in-browser transfer learning via Teachable Machine, and how that can then be used on your own custom website. We will finally end with a hello world of writing your own model code from scratch to make a simple linear regression to predict fictional house prices based on their square footage.
The Hitchhiker's Guide to the Machine Learning Engineering Galaxy
ML conf EU 2020
112 min
Workshop
Alyona Galyeva
Are you a Software Engineer who got tasked to deploy a machine learning or deep learning model for the first time in your life? Are you wondering what steps to take and how AI-powered software is different from traditional software? Then this is the right workshop to attend.
The internet offers thousands of articles and free-of-charge courses showing how easy it is to train and deploy a simple AI model. In reality, however, it is difficult to integrate a real model into your current infrastructure and to debug, test, deploy, and monitor it properly. In this workshop, I will guide you through this process, sharing tips, tricks, and favorite open-source tools that will make your life much easier. So, at the end of the workshop, you will know where to start your deployment journey, what tools to use, and what questions to ask.