Boost Productivity with Keras Ecosystem

TensorFlow has built a solid foundation for various machine learning applications, on top of which the Keras ecosystem can really boost the productivity of developers building machine learning solutions. Keras has a simple yet flexible API for building and training models. However, we still need a lot of manual work to tune the hyperparameters. Fortunately, with Keras Tuner, we can automate the hyperparameter tuning process with only minor modifications to the code for building and training the models. To further boost productivity, we introduce AutoKeras, which fully automates the model building, training, and hyperparameter tuning process. It dramatically reduces the amount of prior knowledge needed to use machine learning for common tasks. All you need to do is define the task and provide the training data.

This talk was presented at MLConf EU 2020.

FAQ

TensorFlow is a foundational framework for deep learning that supports tensor manipulations, automatic differentiation, and deployment across various platforms. It is integral to the Keras ecosystem, providing the backend for executing complex mathematical operations and model training processes.

Automatic differentiation in TensorFlow allows for the calculation of gradients automatically without manual computation of derivatives. This is achieved using gradient tape to monitor variables and execute backpropagation efficiently.

TensorFlow supports deployment on various platforms including servers via TensorFlow Extended, single-chip machines such as edge devices using TensorFlow Lite, and web pages with TensorFlow.js.

Keras simplifies deep learning by providing high-level APIs that abstract the underlying TensorFlow operations. It allows for easy model creation with layers and training without the need to manually program the underlying mathematical operations or training loops.

Keras Tuner is a part of the Keras ecosystem that automates the tuning of hyperparameters in models. It requires only minor modifications to existing code and supports algorithms like random search, Bayesian optimization, and Hyperband to find optimal model configurations.
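For reference, a minimal sketch of what a Keras Tuner search can look like is shown below; the model architecture, the hyperparameter name `units`, the search ranges, and the data variables are illustrative assumptions, not taken from the talk.

```python
# A minimal Keras Tuner sketch (illustrative; variable names and
# search ranges are assumptions, not from the talk).
import keras_tuner as kt
from tensorflow import keras

def build_model(hp):
    model = keras.Sequential()
    # Let the tuner choose the number of units in the hidden layer.
    model.add(keras.layers.Dense(
        units=hp.Int("units", min_value=32, max_value=512, step=32),
        activation="relu"))
    model.add(keras.layers.Dense(10, activation="softmax"))
    model.compile(optimizer="adam",
                  loss="categorical_crossentropy",
                  metrics=["accuracy"])
    return model

tuner = kt.RandomSearch(build_model,
                        objective="val_accuracy",
                        max_trials=10)
# x_train, y_train, x_val, y_val are assumed to be prepared elsewhere.
tuner.search(x_train, y_train, epochs=5, validation_data=(x_val, y_val))
best_model = tuner.get_best_models(num_models=1)[0]
```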

AutoKeras is an AutoML library for deep learning under the Keras ecosystem that reduces the learning curve for adopting machine learning. It enables building machine learning models with minimal code and no required ML background, supporting tasks like image classification and regression with advanced features like model export for deployment.
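As a small illustration, an image classification task in AutoKeras can be expressed in a few lines; the data variables and the trial count below are assumptions for the sketch.

```python
# A minimal AutoKeras sketch for image classification (illustrative;
# data variables and max_trials are assumptions).
import autokeras as ak

clf = ak.ImageClassifier(max_trials=3)   # try up to 3 model configurations
clf.fit(x_train, y_train, epochs=10)     # x_train: images, y_train: labels
predictions = clf.predict(x_test)        # predict on unseen images
model = clf.export_model()               # export the best model as a Keras model
model.summary()
```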

AutoKeras accommodates complex real-world scenarios through features like multi-modal data handling and multi-task learning. It allows for the integration of diverse data types and the simultaneous execution of multiple learning tasks, enhancing its applicability in varied applications. A hedged sketch of such a setup follows below.
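The sketch below shows one way a multi-modal, multi-task setup can be wired up with AutoKeras's AutoModel; the particular input/output combination and the data variables are illustrative assumptions.

```python
# A sketch of multi-modal, multi-task AutoKeras usage via AutoModel
# (input/output choices and data variables are illustrative assumptions).
import autokeras as ak

model = ak.AutoModel(
    inputs=[ak.ImageInput(), ak.StructuredDataInput()],
    outputs=[ak.ClassificationHead(), ak.RegressionHead()],
    max_trials=3,
)
# Two input sources and two prediction targets are trained jointly.
model.fit([image_data, structured_data],
          [classification_labels, regression_targets],
          epochs=10)
```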

Haifeng Jin
30 min
02 Jul, 2021

Video Summary and Transcription

The video highlights the Keras ecosystem's ability to simplify deep learning. TensorFlow offers tensor manipulations, automatic differentiation, and versatile deployment options including TensorFlow Lite and TensorFlow.js. Keras provides high-level APIs, making it easy to add layers and handle loss calculations and training loops. Keras Tuner automates hyperparameter tuning using algorithms like Bayesian optimization. AutoKeras makes machine learning accessible with minimal code, supporting tasks like image classification and regression. Beginners can start with AutoKeras or beginner-level courses. TensorFlow's Eager Execution offers flexibility for researchers. AutoKeras supports complex scenarios like multi-modal data and multitasking. Time series forecasting will be available in AutoKeras by year-end.

1. Introduction to TensorFlow in the Keras Ecosystem

Short description:

Hello, everyone. I'm very happy to be here at MLConf EU with you guys, and thank you very much for the invitation. My presentation is about boosting productivity with the Keras ecosystem. We're going to highlight four features of TensorFlow. The first one is tensor manipulations. The second feature is automatic differentiation. Another important feature of TensorFlow is deployment: a model implemented with TensorFlow can be deployed anywhere. TensorFlow is really about performance and scalability. But how do we really leverage the power of TensorFlow?

Hello, everyone. I'm very happy to be here at MLConf EU with you guys, and thank you very much for the invitation. And my presentation is about boosting productivity with the Keras ecosystem.

So there are many members of the Keras ecosystem, and the first one we're going to introduce is TensorFlow. I believe many of you have heard of it. It's a very powerful foundation for all the deep learning solutions built on top of it. But today, we're going to highlight four features of it, so you can see how the other members of the Keras ecosystem depend on these features to build something more.

The first one, we all know, is tensor manipulations. As you can see, we have A and B as two-by-two matrices here, and you can do A plus B, or other operations like the exponential, with them. TensorFlow supports this, along with all sorts of mathematical operations on those tensors. And the second feature we want to highlight is automatic differentiation. It means you can calculate gradients automatically without manually computing any of the derivatives on your own. Again, we have A and B as two-by-two matrices. And C here is actually calculated from A and B. Now we want to calculate the gradient of C with respect to A. You don't have to do any derivatives on your own; just insert these lines: with a gradient tape watching A, you can automatically calculate dC/dA just by calling tape.gradient(C, A).
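For reference, a minimal sketch of the tensor math and automatic differentiation described here might look like this; the concrete values and the reduce-sum used to get a scalar are illustrative assumptions.

```python
# A minimal sketch of tensor manipulation and automatic differentiation
# (values are illustrative).
import tensorflow as tf

a = tf.constant([[1.0, 2.0], [3.0, 4.0]])   # 2x2 matrix
b = tf.constant([[5.0, 6.0], [7.0, 8.0]])   # 2x2 matrix

with tf.GradientTape() as tape:
    tape.watch(a)                            # track operations involving a
    c = tf.reduce_sum(tf.matmul(a, b))       # c is computed from a and b

dc_da = tape.gradient(c, a)                  # dC/dA without manual derivatives
print(dc_da)
```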

And another important feature of TensorFlow, which is relied on by every other library in the Keras ecosystem, is deployment. A model implemented with TensorFlow can be deployed anywhere. For example, it can be deployed on servers with TensorFlow Extended, on single-chip machines like edge devices with TensorFlow Lite, and on any web page with tf.js. And moreover, TensorFlow is really about performance and scalability. It runs on GPUs and TPUs to accelerate all those mathematical operations on tensors. It also scales up to multiple servers with multiple GPUs and TPUs to make training, prediction, and deployment of the model really fast. And that's TensorFlow. It's a very powerful foundation for the Keras ecosystem.
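As a small illustration of the edge-deployment path mentioned above, here is a hedged sketch of converting a trained Keras model with the TensorFlow Lite converter; the `model` variable and the output filename are assumed.

```python
# Convert a trained Keras model for edge deployment with TensorFlow Lite
# (a sketch; `model` is assumed to be a trained tf.keras model).
import tensorflow as tf

converter = tf.lite.TFLiteConverter.from_keras_model(model)
tflite_model = converter.convert()

# Write the converted model to disk for use on an edge device.
with open("model.tflite", "wb") as f:
    f.write(tflite_model)
```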

But how do we really leverage the power of TensorFlow? As we saw, the API of TensorFlow is very low-level, consisting of a lot of mathematical operations. You don't really want to implement a whole deep learning solution with all those mathematical operations, because each deep learning solution may involve hundreds of them. You don't want to write each one of them in your code on your own.

2. Keras: High-Level API for TensorFlow

Short description:

Keras is built on top of TensorFlow and provides high-level APIs that simplify deep learning. You can easily add layers to a sequential model and specify the number of neurons and activation functions. Loss calculations and training loops are handled automatically. Keras supports a wide range of tasks, from standard usage to advanced techniques like GANs and custom layers. For more complex scenarios, you can write custom metrics and losses, or even implement your own optimization algorithms using TensorFlow's low-level API.

So that's why we're going to move to the second member of the Keras ecosystem: Keras itself, for radically productive deep learning. It is actually built on top of TensorFlow, wrapping the TensorFlow low-level APIs into high-level APIs which are really simple to use.

We can see a very simple example code snippet here. We just initialize a Sequential model, which is a type of model we can implement in Keras. And you can just keep adding layers to the Sequential model. Here, we add a dense layer with 32 neurons, or units, and an activation function equal to ReLU, and then another dense layer. So then we get a model with two dense layers.

It's very simple to implement, and you don't really need to care about what mathematical operations are actually behind those dense layers. Although it's actually matrix multiplications and additions, you don't need to write them on your own. You just use a dense layer with a number of units. And also, you don't need to calculate the loss on your own. You just pass it as a string, categorical cross-entropy. You don't do any calculations, only a string.

This loss is actually for the classification task. And to model.fit, you just pass in the training data. You don't need to write any loop on your own, like looping through 20 epochs. You only pass in the data and specify the epochs as 20. And then you can do predictions with the testing data. It's really simple to learn.
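The example described in the last few paragraphs can be sketched roughly as follows; the output layer size, optimizer, and data variables are assumptions added to make the snippet self-contained.

```python
# A sketch of the Sequential model described above
# (output size, optimizer, and data variables are assumptions).
from tensorflow import keras
from tensorflow.keras import layers

model = keras.Sequential()
model.add(layers.Dense(32, activation="relu"))      # dense layer with 32 units
model.add(layers.Dense(10, activation="softmax"))   # second dense layer

model.compile(optimizer="adam",
              loss="categorical_crossentropy",      # loss passed as a string
              metrics=["accuracy"])

model.fit(x_train, y_train, epochs=20)              # no manual training loop
predictions = model.predict(x_test)                 # predict on testing data
```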

But despite the simplicity, it's capable of a lot of things. It's capable of everything supported by the TensorFlow low-level API. And you only need to learn what you need. We don't want the user to learn everything upfront; that would be quite a large learning curve. They can start with a simple example, like what we just showed before, as the standard usage: the Sequential API, the Functional API, and built-in layers.

But if you are aiming for something more, for example, you want to use TensorBoard and early stopping, or save the model to disk with checkpointing as you train, you can just pass a bunch of callback functions to the model, and it's really simple to do. But if you want something even more, like GANs or curriculum learning, then you need to write some custom layers or custom metrics and custom losses. Those are also very easy to implement, just by extending the base classes in Keras.
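For instance, the callback approach can be sketched as below; the log directory, checkpoint path, patience value, and data variables are illustrative assumptions.

```python
# A sketch of passing callbacks to model.fit
# (paths, patience, and data variables are illustrative assumptions).
from tensorflow import keras

callbacks = [
    keras.callbacks.TensorBoard(log_dir="./logs"),             # training curves
    keras.callbacks.EarlyStopping(patience=3),                 # stop when val loss stalls
    keras.callbacks.ModelCheckpoint(filepath="best_model.keras",
                                    save_best_only=True),      # checkpoint to disk
]

model.fit(x_train, y_train,
          validation_data=(x_val, y_val),
          epochs=20,
          callbacks=callbacks)
```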

But there are even harder use cases, like learning to learn, or implementing a new type of optimization algorithm. In that case, you will have to write your own training loop with gradient tape operations. And with that, you gain full freedom to use anything supported by TensorFlow in Keras. So this is the logic of the API design of Keras.
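A minimal sketch of such a custom training loop is shown below; the loss function, optimizer, epoch count, and the batched dataset are assumptions added for completeness.

```python
# A sketch of a custom training loop with GradientTape
# (loss function, optimizer, and dataset are illustrative assumptions).
import tensorflow as tf
from tensorflow import keras

optimizer = keras.optimizers.Adam()
loss_fn = keras.losses.CategoricalCrossentropy()

for epoch in range(20):
    for x_batch, y_batch in dataset:                 # dataset assumed to yield batches
        with tf.GradientTape() as tape:
            logits = model(x_batch, training=True)   # forward pass
            loss = loss_fn(y_batch, logits)          # compute the loss
        grads = tape.gradient(loss, model.trainable_weights)
        optimizer.apply_gradients(zip(grads, model.trainable_weights))
```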

QnA