Machine Learning on the Edge Using TensorFlow Lite


What if you could perform machine learning on the edge, i.e on your mobile device? This would mean that you no longer would need the roundtrip to the server, no data will leave the device and you don't even need an internet connection. In this session you will get an introduction to TensorFlow Lite so that you can use it in your own projects.


This talk was presented at ML conf EU 2020.

FAQ

What is TensorFlow Lite?
TensorFlow Lite is an open-source deep learning framework designed to deploy machine learning models on mobile and IoT devices.

What are the main components of TensorFlow Lite?
The main components of TensorFlow Lite are the TensorFlow Lite interpreter, which runs optimized TensorFlow models on various hardware, and the TensorFlow Lite converter, which converts regular TensorFlow models into a more efficient form.

What are the advantages of using TensorFlow Lite?
Using TensorFlow Lite offers advantages such as reduced latency, improved privacy by keeping user data local, no requirement for internet connectivity, and the ability to run on low-power devices. It also supports multiple APIs and allows the use of pre-trained models for common tasks.

Which programming languages does TensorFlow Lite support?
TensorFlow Lite supports multiple programming languages, including Java, Swift, Objective-C, C++, and Python.

Can I use pre-trained models with TensorFlow Lite?
Yes, TensorFlow Lite allows the use of pre-trained models for common machine learning tasks. You can either use your own model or choose from the available pre-trained models.

What are the steps to run an inference with TensorFlow Lite?
Running an inference with TensorFlow Lite involves four main steps: initializing the interpreter, preparing the image input, performing the inference, and obtaining and mapping the results.

How does TensorFlow Lite enhance privacy and security?
TensorFlow Lite enhances privacy and security by keeping user data local on the device, eliminating the need for data to be sent over the internet.

What is a practical application of TensorFlow Lite?
In Tanzania, farmers use a TensorFlow Lite model on mobile devices to detect diseases in crops, demonstrating a practical application of TensorFlow Lite in agriculture.

Where can I learn more about TensorFlow Lite?
You can learn more about TensorFlow Lite from the official TensorFlow documentation, Google I/O conference talks, and Coursera courses on TensorFlow.

How can I contact the speaker?
You can reach Håkan Silfvernagel on Twitter or via email with any questions or comments.

Håkan Silfvernagel
8 min
02 Jul, 2021


Video Summary and Transcription

TensorFlow Lite is an open-source deep learning framework designed to deploy machine learning models on mobile and IoT devices. It consists of two main components: the TensorFlow Lite interpreter and the TensorFlow Lite converter, which help optimize and run models on various hardware. The benefits of using TensorFlow Lite include reduced latency, increased privacy, and the ability to run without an internet connection. Developers can use pre-trained models for tasks like object detection, and the framework supports multiple programming languages including Java, Swift, Objective-C, C++, and Python. A demonstration on a Raspberry Pi shows the model recognizing objects such as a banana and a remote control. TensorFlow Lite has practical applications in fields such as agriculture, where it helps detect crop diseases. More resources can be found in the official TensorFlow documentation, Google I/O conference talks, and Coursera courses.

1. Introduction to TensorFlow Lite

Short description:

In this part, Håkan Silfvernagel introduces TensorFlow Lite, an open-source deep learning framework for deploying machine learning models on mobile and IoT devices. He explains the two main components of TensorFlow Lite, the interpreter and the converter, and highlights the benefits of using TensorFlow Lite, such as reduced latency, increased privacy, and the ability to run without an internet connection. Håkan also mentions the availability of multiple APIs and the ability to use pre-trained models for common machine learning tasks. He concludes by mentioning the use of a pre-trained model for object detection on a Raspberry Pi inside a TJBot.

Good morning, good afternoon, and good evening. My name is Håkan Silfvernagel, and I'm going to be talking about how you can do machine learning on the edge using TensorFlow Lite. I'm working as a manager for AI and big data at a Norwegian consultancy company called Miles, and I'm also a Microsoft MVP. But the topic here today is TensorFlow Lite.

So what is TensorFlow Lite? Well, it's an open-source deep learning framework that lets you deploy machine learning models on both mobile and IoT devices. It can run on, for example, a Raspberry Pi, which I will show you a little bit later on, but it can also run on a tiny microcontroller. TensorFlow Lite has two main components. First, there's the TensorFlow Lite interpreter, which runs optimized TensorFlow models on different types of hardware. And then we have the TensorFlow Lite converter. The converter converts regular TensorFlow models into a much more efficient form, optimized to improve both binary size and performance.
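To make the converter step concrete, here is a minimal sketch of converting a saved TensorFlow model into the TensorFlow Lite format; the `saved_model_dir` path and `model.tflite` filename are placeholders, not names from the talk.

```python
import tensorflow as tf

# Load a regular TensorFlow model from a SavedModel directory
# ("saved_model_dir" is a placeholder path).
converter = tf.lite.TFLiteConverter.from_saved_model("saved_model_dir")

# Optional: let the converter apply its default optimizations
# (e.g. quantization) to shrink binary size and improve performance.
converter.optimizations = [tf.lite.Optimize.DEFAULT]

tflite_model = converter.convert()

# Write the resulting flatbuffer to disk so it can be deployed.
with open("model.tflite", "wb") as f:
    f.write(tflite_model)
```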

So you might think to yourself, why should I use TensorFlow Lite? There are a number of arguments for preferring TensorFlow Lite. One of them has to do with latency: since everything we do takes place on the device, there is no need for a round trip between the client and the server. In terms of privacy, the user data stays local, which means increased security. In terms of connectivity, you don't require an internet connection, and in addition, many of these small devices don't have enough power to drive a radio transmitter. There are also multiple APIs: you can use Java, Swift, Objective-C, C++, or Python, so there is a lot of potential. And the final thing, which is maybe the most important, is that you can use pre-trained models for common machine learning tasks.

The way it works is that you pick a model, either your own or one of these pre-trained models, then you convert the model into the TensorFlow Lite format, and after that you deploy it to your device. You can also apply some optimization to the model, as sketched below. On this slide, we can see some of the pre-trained models that you can use. We will be using a model for object detection shortly. When we run object detection, we will run it on a Raspberry Pi inside a TJBot, and it will be able to recognize different types of objects.
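The optimization step can go further than the converter's default setting. As a hedged illustration (this specific technique is not shown in the talk), post-training integer quantization uses a representative dataset so the converter can calibrate activation ranges; the `representative_images` generator below is a hypothetical stand-in for real sample data.

```python
import tensorflow as tf

def representative_images():
    # Hypothetical stand-in: in practice, yield ~100 real, preprocessed
    # sample images so the converter can calibrate activation ranges.
    for _ in range(100):
        yield [tf.random.uniform((1, 300, 300, 3), dtype=tf.float32)]

converter = tf.lite.TFLiteConverter.from_saved_model("saved_model_dir")
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.representative_dataset = representative_images
tflite_quant_model = converter.convert()
```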

2. Using a Pre-optimized MobileNet SSD for Inference

Short description:

We will be using a pre-optimized MobileNet SSD trained on the COCO dataset. The four main steps for inference are initializing the interpreter, preparing the image input, performing the inference, and obtaining and mapping the results. The process involves loading the interpreter, allocating tensors, preparing the image input from the video camera, performing inference, and obtaining the bounding box coordinates, class index, and confidence level of the detected objects. A quick demo on the Raspberry Pi device shows it recognizing a person but running into some issues with other objects.

We will be using a pre-optimized MobileNet SSD trained on the COCO dataset. COCO covers 80 object categories. We also have a label file that maps the output of the model to a name, so that we know what the detected object actually is in text. This is an example from the GitHub link that you can see on the slide, and you can also download the TF Lite model file and the label file from the URL below.
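As a small sketch of that label mapping, assuming a label file with one class name per line (the filename `labelmap.txt` is a placeholder):

```python
# Load the label file so class indices can be mapped to readable names.
with open("labelmap.txt", "r") as f:
    labels = [line.strip() for line in f]

# labels[class_index] now gives a name such as "person" or "banana".
```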

So when we do inference, there are four main steps. The first is to initialize the interpreter. The second is to prepare the image input. The third is to perform the inference. And the last step is to obtain and map the results.
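A minimal sketch of the first step, assuming the `tflite_runtime` package and a model file named `detect.tflite` (a placeholder filename):

```python
from tflite_runtime.interpreter import Interpreter

# Step 1: initialize the interpreter and allocate the input and
# output tensors.
interpreter = Interpreter(model_path="detect.tflite")
interpreter.allocate_tensors()

input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()

# The input shape tells us what size the camera frames must be
# resized to (typically 300x300 for a MobileNet SSD).
height, width = input_details[0]["shape"][1:3]
```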

So we load the interpreter from the TF Lite runtime, and then we allocate the tensors, the input and the output tensors. In our second step, we need to prepare the image input. We will take images from the video camera, so we define a VideoStream class. What we want to do is read from the camera stream, so we start an infinite loop that takes the latest camera frame from the stream and returns it to the calling application. Then, in our application, we define a VideoStream object, acquire a frame, and do some resizing so that the frame has the right size for the model. Then we perform the inference: we point the input tensor to the data that will be used for the inference, and then we run the interpreter by calling invoke. Finally, we obtain and map the results.
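Putting the remaining steps together, here is a hedged sketch of the whole flow, assuming OpenCV for camera access and the `interpreter`, `input_details`, `output_details`, `width`, `height`, and `labels` from the snippets above. The output tensor order shown (boxes, classes, scores) matches the common SSD detection models, but it can vary between model versions.

```python
import threading

import cv2
import numpy as np

class VideoStream:
    """Step 2: read camera frames on a background thread so the main
    loop can always grab the most recent frame, as described above."""

    def __init__(self, src=0):
        self.capture = cv2.VideoCapture(src)
        self.grabbed, self.frame = self.capture.read()
        self.stopped = False
        threading.Thread(target=self._update, daemon=True).start()

    def _update(self):
        # Infinite loop that keeps only the latest frame from the stream.
        while not self.stopped:
            self.grabbed, self.frame = self.capture.read()

    def read(self):
        return self.frame

    def stop(self):
        self.stopped = True
        self.capture.release()

stream = VideoStream()
frame = stream.read()

# Resize the frame to the size the model expects and add a batch axis.
rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
resized = cv2.resize(rgb, (width, height))
input_data = np.expand_dims(resized, axis=0)

# Step 3: point the input tensor at our data and run the inference.
interpreter.set_tensor(input_details[0]["index"], input_data)
interpreter.invoke()

# Step 4: obtain and map the results -- bounding boxes, class indices,
# and confidence scores.
boxes = interpreter.get_tensor(output_details[0]["index"])[0]
classes = interpreter.get_tensor(output_details[1]["index"])[0]
scores = interpreter.get_tensor(output_details[2]["index"])[0]

for box, cls, score in zip(boxes, classes, scores):
    if score > 0.5:  # only report reasonably confident detections
        print(labels[int(cls)], score, box)

stream.stop()
```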

So in this case, we get the bounding boxes with the coordinates of the objects, along with the class index of each detected object and its confidence level. Let me switch over for a quick demo. Now we are on the Raspberry Pi device, and I can hold up an object for the Raspberry Pi. We can see that it recognizes the person, which is me. It has some problems recognizing my object, though. Let me try another object.