Video Summary and Transcription
TensorFlow Lite is an open-source deep learning framework designed to deploy machine learning models on mobile and IoT devices. It consists of two main components: the TensorFlow Lite interpreter and the TensorFlow Lite converter, which help optimize and run models on various hardware. The benefits of using TensorFlow Lite include reduced latency, increased privacy, and the ability to run without an internet connection. Developers can use pre-trained models for tasks like object detection, and the framework supports multiple programming languages including Java, Swift, Objective-C, C++, and Python. A demonstration on a Raspberry Pi shows the model recognizing objects like a banana and a remote control. TensorFlow Lite has practical applications in fields such as agriculture, where it helps detect crop diseases. More resources can be found in the official TensorFlow documentation, Google I/O conference talks, and Coursera courses.
1. Introduction to TensorFlow Lite
In this part, Håkan Silvernagel introduces TensorFlow Lite, an open-source deep learning framework for deploying machine learning models on mobile and IoT devices. He explains the two main components of TensorFlow Lite, the interpreter and the converter, and highlights the benefits of using TensorFlow Lite, such as reduced latency, increased privacy, and the ability to run without an internet connection. Håkan also mentions the availability of multiple APIs and the ability to use pre-trained models for common machine learning tasks. He concludes by mentioning the use of a pre-trained model for object detection on a Raspberry Pi inside a TJ bot.
Good morning, good afternoon, and good evening. My name is Håkan Silvernagel, and I'm going to be talking about how you can do machine learning on the edge using TensorFlow Lite. I'm working as a manager for AI and big data for a Norwegian consultancy company called Myles, and I'm also a Microsoft MVP. But the topic here for today is TensorFlow Lite.
So what is TensorFlow Lite? Well, it's an open-source deep learning framework that lets you deploy machine learning models on both mobile and IoT devices. It can run, for example, on a Raspberry Pi, which I will show you a little bit later on, but it can also run on a tiny microcontroller. The way this works is that TensorFlow Lite has two main components. The first is the TensorFlow Lite interpreter, which runs optimized TensorFlow models on different types of hardware. And then we have the TensorFlow Lite converter, which converts regular TensorFlow models into a much more efficient form, optimized both to reduce the binary size and to improve performance.
So you might think to yourself, why should I use TensorFlow Lite? Well, there are a number of arguments for preferring TensorFlow Lite. One of them has to do with latency: since everything we do takes place on the device, there is no need for a round trip between the client and the server. Also, in terms of privacy, the user data stays local, which means increased security. And in terms of connectivity, you don't require an internet connection; in addition, a lot of these small devices don't have enough power to drive a radio transmitter. There are also multiple APIs: you can use Java, Swift, Objective-C, C++, or Python, so there is a lot of potential. And the final thing, which is maybe the most important, is that you can use pre-trained models for common machine learning tasks. The way that it works is that you pick a model, either your own model or one of these pre-trained models. Then you convert the model into the TensorFlow Lite format, and after that you deploy it to your device. You can also apply some optimizations to the model. On this slide, we can see some of the pre-trained models that you can use. We will be using a model for object detection in a little while. When we run object detection, we will run it on a Raspberry Pi, which is inside a TJ bot, and then it will be able to recognize different types of objects.
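The pick-convert-deploy workflow described above can be sketched in Python. This is a minimal, hypothetical example using the `tf.lite.TFLiteConverter` API; the tiny Keras model here is just a stand-in for a real trained model, and the optimization flag is optional:

```python
import tensorflow as tf

# A tiny stand-in model; in practice you would load your trained model.
model = tf.keras.Sequential([
    tf.keras.Input(shape=(4,)),
    tf.keras.layers.Dense(1),
])

# Convert the regular TensorFlow model into the TensorFlow Lite format.
converter = tf.lite.TFLiteConverter.from_keras_model(model)
# Optional: let the converter apply default size/latency optimizations.
converter.optimizations = [tf.lite.Optimize.DEFAULT]
tflite_model = converter.convert()

# The result is a byte buffer you can write out and deploy to the device.
with open("model.tflite", "wb") as f:
    f.write(tflite_model)
```

The resulting `model.tflite` file is what the interpreter loads on the Raspberry Pi or other target device.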
2. Using a Pre-Optimized MobileNet SSD for Inference
We will be using a pre-optimized MobileNet SSD trained on the COCO dataset. The four main steps for inference are initializing the interpreter, preparing the image input, performing the inference, and obtaining and mapping the results. The process involves loading the interpreter, allocating tensors, preparing the image input from the video camera, performing inference, and obtaining the bounding box coordinates, class index, and confidence level of the detected objects. A quick demo on the Raspberry Pi device shows the recognition of a person but faces some issues with object recognition.
We will be using a pre-optimized MobileNet SSD trained on the COCO dataset, which covers 80 object categories. We will have a label file that maps the model's numeric output to text, so that we know what each detection actually is. This is an example from the GitHub link that you can see on the slide, and you can also download the TF Lite model file and the label file from the URL below.
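The label-mapping step can be sketched in plain Python. Label files for these models commonly contain one class name per line; the short inlined list below is a hypothetical stand-in for the downloaded label file:

```python
# Stand-in for the contents of a downloaded label file
# (one class name per line, indexed by the model's class output).
label_text = """person
bicycle
car
banana
remote"""

labels = [line.strip() for line in label_text.splitlines()]

def class_name(index):
    """Map a detected class index to its human-readable label."""
    return labels[index] if 0 <= index < len(labels) else "unknown"

print(class_name(3))  # -> banana
```

In the real application you would read the file from disk instead of inlining it, but the index-to-name lookup is the same.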
So when we do inference, there are four main steps that we need to do. The first is to initialize the interpreter. The second is to prepare the image input. The third is to perform the inference. And the last step is to actually obtain and map the results.
So we load the interpreter from the TF Lite runtime, and then we allocate the tensors, the input and the output tensors. Then, in our second step, we need to prepare the image input. We will take images from the video camera, so we define a VideoStream class. What we want to do is read from the camera stream, so we start an infinite loop where we just take the latest camera frame from the stream and return it to the calling application. Then, in our application, we define a VideoStream object, acquire the frame, and resize it to the input size that the TensorFlow model expects. And then we perform the inference: we point the input tensor to the image data, and then we run the interpreter by calling invoke. And finally, we just obtain and map the results.
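The image-preparation step above can be sketched with plain NumPy: a nearest-neighbour resize to the model's input size plus adding the batch dimension the interpreter expects. Real code would typically use OpenCV or PIL for the resize; this is a dependency-free illustration, and the 300x300 input size is an assumption typical of MobileNet SSD models:

```python
import numpy as np

def prepare_input(frame, target_h=300, target_w=300):
    """Resize a camera frame (H, W, 3 uint8 array) with nearest-neighbour
    sampling and add the batch dimension the interpreter expects."""
    h, w = frame.shape[:2]
    rows = np.arange(target_h) * h // target_h
    cols = np.arange(target_w) * w // target_w
    resized = frame[rows[:, None], cols[None, :]]
    # Shape becomes (1, target_h, target_w, 3), matching a typical
    # quantized MobileNet SSD input tensor.
    return np.expand_dims(resized, axis=0).astype(np.uint8)

# A dummy 640x480 camera frame as a stand-in for a real capture.
frame = np.zeros((480, 640, 3), dtype=np.uint8)
print(prepare_input(frame).shape)  # -> (1, 300, 300, 3)
```

The prepared array would then be passed to the interpreter's input tensor before calling invoke.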
So in this case, we get the bounding boxes with the coordinates of the detected objects, the class index of each object, and its confidence level. So let me switch over here for a quick demo. Now we are on the Raspberry Pi device, and I can hold up an object here for the Raspberry Pi. We can see that it recognizes the person, which is me. It has some problems recognizing my object, though. Let me try another object.
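The obtain-and-map step can be sketched as follows. The output layout assumed here (normalized `[ymin, xmin, ymax, xmax]` boxes, class indices, and confidence scores) is the common SSD detection convention; the threshold value is an illustrative choice:

```python
def map_detections(boxes, classes, scores, frame_w, frame_h, threshold=0.5):
    """Scale normalized bounding boxes to pixel coordinates and keep
    only detections above the confidence threshold."""
    results = []
    for box, cls, score in zip(boxes, classes, scores):
        if score < threshold:
            continue
        ymin, xmin, ymax, xmax = box
        results.append({
            "box": (int(xmin * frame_w), int(ymin * frame_h),
                    int(xmax * frame_w), int(ymax * frame_h)),
            "class_index": int(cls),
            "confidence": float(score),
        })
    return results

# Hypothetical raw outputs for a 640x480 frame: one strong detection
# and one weak detection that gets filtered out.
detections = map_detections(
    boxes=[[0.1, 0.2, 0.5, 0.6], [0.0, 0.0, 0.1, 0.1]],
    classes=[51, 0],
    scores=[0.92, 0.30],
    frame_w=640, frame_h=480)
print(detections)
# -> one detection: box (128, 48, 384, 240), class 51, confidence 0.92
```

The class index from each result would then be looked up in the label file to get the object's name for the on-screen overlay.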
3. Object Recognition and Real-World Example
In this part, Håkan Silvernagel demonstrates the object recognition capabilities of TensorFlow Lite. He showcases how the model can accurately identify objects, such as a banana and a remote control, displaying the confidence level and bounding box. Håkan also shares a real-world example of using TensorFlow Lite to detect a disease affecting farmers in Tanzania. He concludes by providing references to the official TensorFlow documentation, Google IO conference, and TensorFlow courses on Coursera, and expresses gratitude to the audience.
As you can see, for example, it recognizes that this is a banana. And I can take another object here; it recognizes that it's a remote control. You can also see the confidence level and the bounding box.
So, I also have an example here from the real world, of how you can use TensorFlow on a real-world problem. In Tanzania, the farmers have a big problem with a disease affecting their crops. They are using a TensorFlow Lite model to run TensorFlow on a mobile device and detect the disease in the field.
So, with that, I will just quickly switch over to some references. Here you can see the official TensorFlow documentation, some links to the Google I/O conference, and some courses about TensorFlow on the Coursera website. If you have any questions or comments about this, you can always reach me on Twitter or by email. With that said, I want to thank you for tuning in, and I wish you a great conference.