Broadening AI Adoption with AutoML

Adoption of AI has been slowed by the challenges involved in obtaining performant models, which require significant expertise and effort, and by the limited number of practitioners with machine learning expertise. Automated machine learning (AutoML) eliminates the routine steps in the machine learning workflow, empowering domain experts without a machine learning background to build good initial models and allowing experienced practitioners to focus on additional manual model optimization. This talk describes the extent of automation available for the various steps and demonstrates AutoML with a classifier for human activities based on accelerometer sensor data.

This talk was presented at ML conf EU 2020. Check out the latest edition of this Tech Conference.

FAQ

What is AutoML?
AutoML, or automated machine learning, automates the routine and iterative tasks involved in building machine learning models, such as data preprocessing, feature engineering, model selection, and hyperparameter tuning.

What barriers does AutoML remove?
AutoML overcomes the lack of machine learning expertise, increases productivity by eliminating time-consuming steps, and makes it feasible to solve problems that require building many different models or dealing with different environmental stages.

Which techniques does this AutoML approach use?
It uses wavelet scattering to generate features from signal data, automated feature selection to reduce the number of features, and Bayesian optimization for model selection and hyperparameter tuning.

What is wavelet scattering?
Wavelet scattering is a technique that represents spikes and irregularities in signal data, yielding very good features for machine learning models.

How does Bayesian optimization help?
Bayesian optimization builds a model of the search space and optimizes the error over the course of the iterations, efficiently tuning hyperparameters and selecting the best models.

On which problems has this AutoML approach been tested?
It has been tested on human activity recognition using accelerometer data from mobile phones and on heart sound classification using phonograms recorded with a stethoscope.

Why is feature engineering crucial?
Feature engineering extracts variables from raw data that capture the majority of the variability, which is essential for building effective machine learning models.

Which challenges does AutoML address?
AutoML addresses the lack of AI expertise, the time-consuming nature of the iterative model-building process, and the difficulty of feature engineering, especially for signal and image data.

Can AutoML models be deployed on embedded systems?
Yes. Automated feature selection reduces model size, and MATLAB can translate the resulting models to low-level code such as C or C++, so they can run on memory- and power-limited embedded systems.

How does MATLAB support AutoML?
MATLAB provides tools for automated feature generation, model selection, hyperparameter tuning, and translating AI models to deployable C or C++ code for embedded systems.

Bernhard Suhm
9 min
02 Jul, 2021

Video Summary and Transcription

AutoML simplifies the complexity of building machine learning models, allowing engineers to focus on challenging AI problems and applications. The three-step AutoML approach by MathWorks includes wavelet scattering for feature extraction, automated feature selection, and model tuning with Bayesian optimization. AutoML can handle signal and image data, and automated feature selection shrinks models enough for memory- and power-limited embedded systems. MATLAB can translate AI models to low-level code for deployment. AutoML enables engineers to create optimized models without extensive AI expertise, addressing issues like building multiple models for different scenarios.

1. Introduction to AutoML and Its Benefits

Short description:

Hello everyone, my name is Bernhard Suhm, I'm product manager for machine learning with MathWorks. Today I will focus on AutoML, automation that takes the routine iterative effort and most of the science out of building machine learning models. The point of AutoML is to simplify the complexity of building machine learning models, freeing up engineers to focus on the hard machine learning problems and on their applications. AutoML allows you to solve problems that otherwise wouldn't be feasible, like use cases where you need to build many different models representing different variations or different environmental stages. We at MathWorks have developed a three-step AutoML approach that includes wavelet scattering to extract good features from signal and image data.

Hello everyone, my name is Bernhard Suhm, I'm product manager for machine learning with MathWorks. Let me motivate my topic with some questions to you. Where do you want to apply AI? Are you concerned about your lack of experience in AI, or about black-box models? The community widely recognizes these as challenges and barriers to broader adoption of AI across many industries.

Today I will focus on AutoML, automation that takes the routine iterative effort and most of the science out of building machine learning models. So what exactly is AutoML? To understand that, you need to know a bit about the typical workflow for building machine learning models, the focus of this talk, but building deep neural networks isn't that different. First, you need to process your raw data, deal with its messiness, and get it into a shape that is suitable for later stages, like dealing with missing data and outliers. Next, you need to engineer features, extract a few variables from your data that serve as input to your model and capture the majority of the variability. That's fairly easy for numeric data, but a lot harder for signals. Next, you're faced with the choice of different machine learning models. And even to experts, it's not clear which model performs best on any given problem. So you have to try multiple, which leads to the model tuning stage, where you assess the performance of some initial models, optimize their hyperparameters, and maybe select a subset of features to avoid overfitting. But that may not be enough to get really good performance. You may have to go back, replace some features with others, and do this all over again. If you're familiar with machine learning, you will know the most difficult and time-consuming stages are feature engineering and optimization. If your head is spinning now, don't despair, because you don't have to know all this complexity. The point of AutoML is to simplify it, ideally to go directly from your initial data and your machine learning problem to a model you can deploy. However, taken literally, that is not a realistic expectation; machine learning in a single step is not possible. What is realistic is freeing up engineers like yourself to focus on the hard machine learning problems and on your application. Otherwise, without AutoML, you'll have to find that AI expertise either inside your team and organization or outside. Those data scientists are hard to find and expensive. So the first barrier that AutoML removes is the lack of machine learning expertise. But even if you have that expertise, your productivity increases because AutoML takes away those time-consuming and iterative steps. Finally, AutoML allows you to solve problems that otherwise wouldn't be feasible, like use cases where you need to build many different models representing different variations or different environmental stages.
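
To make that manual workflow concrete, here is a minimal sketch using scikit-learn as an open-source stand-in (the talk itself demonstrates this in MATLAB); the data, the SVM model choice, and the parameter grid are placeholders for illustration, not the talk's actual setup.

```python
# Hypothetical illustration of the manual workflow the speaker describes:
# preprocess, pick a model family, then iterate on hyperparameters.
import numpy as np
from sklearn.pipeline import Pipeline
from sklearn.impute import SimpleImputer
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC
from sklearn.model_selection import GridSearchCV, train_test_split

X = np.random.randn(300, 20)               # placeholder feature matrix
y = np.random.randint(0, 3, 300)           # placeholder class labels
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

pipe = Pipeline([
    ("impute", SimpleImputer()),           # deal with missing data
    ("scale", StandardScaler()),           # basic preprocessing
    ("model", SVC()),                      # one of many possible model families
])

# Model tuning stage: try several hyperparameter settings and keep the best one
search = GridSearchCV(pipe, {"model__C": [0.1, 1, 10]}, cv=5)
search.fit(X_train, y_train)
print(search.best_params_, search.score(X_test, y_test))
```

In practice you would loop back from the tuning results to the feature engineering step, which is exactly the iteration AutoML automates.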

So how do you apply AutoML for engineering? Most engineering applications are based on signal and image data, and that's where feature engineering becomes critical for good performance, and that's notoriously difficult. We at MathWorks brought our signal processing knowledge to bear and came up with the following three-step AutoML approach. First, you apply wavelet scattering. These wavelets are very suitable, with their time-bounded shape, to represent spikes and irregularities in your signal. Therefore, you get very good features.
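
The talk doesn't show code for this step, but the idea can be sketched with the open-source kymatio package as a stand-in for MATLAB's wavelet scattering; the window length and the scattering parameters below are assumptions chosen only for illustration.

```python
# Hypothetical wavelet-scattering feature extraction for one signal window,
# using kymatio as a stand-in for the MATLAB tooling described in the talk.
import numpy as np
from kymatio.numpy import Scattering1D

T = 2 ** 10                          # assumed window length in samples
x = np.random.randn(T)               # placeholder for one accelerometer window

scattering = Scattering1D(J=6, shape=T, Q=8)   # J: largest time scale, Q: wavelets per octave
Sx = scattering(x)                   # scattering coefficients, one row per scattering path

# Averaging each path over time gives a fixed-length feature vector per window,
# typically a few hundred features before any selection step.
features = Sx.mean(axis=-1)
print(features.shape)
```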

2. Automated Feature Selection and Model Optimization

Short description:

Many engineering applications require deployment to memory and power-limited embedded systems. We apply automated feature selection to reduce the wavelet features and the model size. AutoML empowers engineers to build optimized models without AI expertise. AutoML can be applied to signal applications with automated feature generation, selection, and model tuning. MATLAB can translate models to low-level code for deployment. Join the longer session on AutoML and interpretability and the hands-on workshop on machine and deep learning using MATLAB Online.

Many engineering applications, however, require deployment to memory- and power-limited embedded systems. For those, you cannot deploy large models. So, second, we apply automated feature selection to reduce the maybe hundreds of wavelet features to just a few very performant features, and so reduce the model size. Finally, and key, is the model selection and hyperparameter tuning step. You have a choice of different models, and for a model to perform well, the hyperparameters need to be set just right. Let's look at that stage in a little more detail.
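
As one possible illustration of the selection step, here is a scikit-learn sketch that keeps only the highest-scoring handful of (placeholder) wavelet features; the talk does not specify its selection method beyond being automated, so a simple univariate filter is used here purely as an example.

```python
# Hypothetical automated feature selection: shrink hundreds of wavelet
# features down to a small, performant subset so the final model stays small.
import numpy as np
from sklearn.feature_selection import SelectKBest, f_classif

X = np.random.randn(500, 300)        # placeholder: 300 wavelet features per window
y = np.random.randint(0, 5, 500)     # placeholder: 5 activity classes

selector = SelectKBest(score_func=f_classif, k=20)   # keep the 20 top-ranked features
X_small = selector.fit_transform(X, y)
print(X_small.shape)                 # (500, 20): far fewer inputs for the deployed model
```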

How does that simultaneous optimization of model and hyperparameters work? You could perform a random search, but that's not efficient either, because the search space is very large. We employ Bayesian optimization, which builds a model of the search space. And here you can see how that Bayesian optimization switches between different types of models and optimizes the error over the course of the iterations.
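
The core idea of modeling the search space can be sketched as follows, using a Gaussian process surrogate over a single SVM hyperparameter; the data, model family, search range, and acquisition rule are all simplifying assumptions (the approach described in the talk also switches between different model types).

```python
# Hypothetical sketch of Bayesian optimization: fit a surrogate model of
# "cross-validated error vs. hyperparameter", then evaluate next where the
# surrogate expects low error (minus its uncertainty).
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.standard_normal((200, 20))               # placeholder features
y = rng.integers(0, 3, 200)                      # placeholder labels

def objective(log_c):
    """Cross-validated error of an SVM for a given log10(C)."""
    return 1.0 - cross_val_score(SVC(C=10 ** log_c), X, y, cv=3).mean()

tried = list(rng.uniform(-3, 3, 3))              # a few random starting points
errors = [objective(c) for c in tried]

for _ in range(10):
    # Surrogate model of the search space, built from the evaluations so far
    gp = GaussianProcessRegressor().fit(np.array(tried).reshape(-1, 1), errors)
    candidates = np.linspace(-3, 3, 200).reshape(-1, 1)
    mean, std = gp.predict(candidates, return_std=True)
    next_c = float(candidates[np.argmin(mean - std)][0])   # lower-confidence-bound rule
    tried.append(next_c)
    errors.append(objective(next_c))

print(f"best log10(C) ~ {tried[int(np.argmin(errors))]:.2f}, error ~ {min(errors):.3f}")
```

Compared with random search, each new evaluation is guided by everything learned so far, which is why far fewer iterations are needed to reach a well-tuned model.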

How do we know that AutoML works? We compared AutoML to the traditional manual process on two classification problems. First, we looked at human activity recognition, where you take accelerometer data from mobile phones. We have about 7K observations in the set we collected, and we manually engineered 66 features using various signal processing functions. Second, we looked at heart sound classification. Think about being in your doctor's office with a stethoscope, listening to your heart sound. For those phonograms, we had a publicly available set of 10K observations and engineered fewer than 30 features.

So what results did we get? You can see here that with the manual process, we achieved accuracies in the high 90s, as you would want for such an important application. For AutoML, the accuracy was slightly lower for one application, but the point is, without all that expertise and the time-consuming iterative process, you get very good models in a few steps. So AutoML empowers engineers without AI expertise to build optimized models, including for signal applications where the feature extraction is notoriously difficult. We can apply AutoML to signal applications in a few steps: automated feature generation with wavelets; automated feature selection to reduce model size and make it fit on your hardware; and model selection along with hyperparameter tuning in an efficient way using Bayesian optimization. Finally, to deploy your AI model to the edge and embedded systems, you need low-level code like C. With MATLAB, you can automatically translate the model to C or C++ code that can be deployed directly, and thus another barrier to broader adoption of AI is removed.

Thank you for your attention. If you want to know more: on Monday afternoon or evening, I'll have a longer session on AutoML and interpretability, a one-hour seminar on those two topics, and a two-hour hands-on workshop on machine and deep learning using MATLAB Online.

Check out more articles and videos

We constantly curate articles and videos that might spark people's interest, skill us up, or help build a stellar career.

Building a Voice-Enabled AI Assistant With Javascript
JSNation 2023
21 min
Top Content
This Talk discusses building a voice-activated AI assistant using web APIs and JavaScript. It covers using the Web Speech API for speech recognition and the speech synthesis API for text to speech. The speaker demonstrates how to communicate with the Open AI API and handle the response. The Talk also explores enabling speech recognition and addressing the user. The speaker concludes by mentioning the possibility of creating a product out of the project and using Tauri for native desktop-like experiences.
AI and Web Development: Hype or Reality
JSNation 2023
24 min
Top Content
This talk explores the use of AI in web development, including tools like GitHub Copilot and Fig for CLI commands. AI can generate boilerplate code, provide context-aware solutions, and generate dummy data. It can also assist with CSS selectors and regexes, and be integrated into applications. AI is used to enhance the podcast experience by transcribing episodes and providing JSON data. The talk also discusses formatting AI output, crafting requests, and analyzing embeddings for similarity.
The Rise of the AI Engineer
React Summit US 2023
30 min
The rise of AI engineers is driven by the demand for AI and the emergence of ML research and engineering organizations. Start-ups are leveraging AI through APIs, resulting in a time-to-market advantage. The future of AI engineering holds promising results, with a focus on AI UX and the role of AI agents. Equity in AI and the central problems of AI engineering require collective efforts to address. The day-to-day life of an AI engineer involves working on products or infrastructure and dealing with specialties and tools specific to the field.
TensorFlow.js 101: ML in the Browser and Beyond
ML conf EU 2020
41 min
TensorFlow.js enables machine learning in the browser and beyond, with features like face mesh, body segmentation, and pose estimation. It offers JavaScript prototyping and transfer learning capabilities, as well as the ability to recognize custom objects using the Image Project feature. TensorFlow.js can be used with Cloud AutoML for training custom vision models and provides performance benefits in both JavaScript and Python development. It offers interactivity, reach, scale, and performance, and encourages community engagement and collaboration between the JavaScript and machine learning communities.
Web Apps of the Future With Web AI
JSNation 2024
32 min
Web AI in JavaScript allows for running machine learning models client-side in a web browser, offering advantages such as privacy, offline capabilities, low latency, and cost savings. Various AI models can be used for tasks like background blur, text toxicity detection, 3D data extraction, face mesh recognition, hand tracking, pose detection, and body segmentation. JavaScript libraries like MediaPipe LLM inference API and Visual Blocks facilitate the use of AI models. Web AI is in its early stages but has the potential to revolutionize web experiences and improve accessibility.
Building the AI for Athena Crisis
JS GameDev Summit 2023
37 min
Join Christoph from Nakazawa Tech in building the AI for Athena Crisis, a game where the AI performs actions just like a player. Learn about the importance of abstractions, primitives, and search algorithms in building an AI for a video game. Explore the architecture of Athena Crisis, which uses immutable persistent data structures and optimistic updates. Discover how to implement AI behaviors and create a class for the AI. Find out how to analyze units, assign weights, and prioritize actions based on the game state. Consider the next steps in building the AI and explore the possibility of building an AI for a real-time strategy game.

Workshops on related topics

AI on Demand: Serverless AI
DevOps.js Conf 2024
163 min
Top Content
Featured Workshop (Free)
Nathan Disidore
In this workshop, we discuss the merits of serverless architecture and how it can be applied to the AI space. We'll explore options around building serverless RAG applications for a more lambda-esque approach to AI. Next, we'll get hands on and build a sample CRUD app that allows you to store information and query it using an LLM with Workers AI, Vectorize, D1, and Cloudflare Workers.
Leveraging LLMs to Build Intuitive AI Experiences With JavaScript
JSNation 2024
108 min
Featured Workshop
Roy Derks
Shivay Lamba
2 authors
Today every developer is using LLMs in different forms and shapes, from ChatGPT to code assistants like GitHub CoPilot. Following this, lots of products have introduced embedded AI capabilities, and in this workshop we will make LLMs understandable for web developers. And we'll get into coding your own AI-driven application. No prior experience in working with LLMs or machine learning is needed. Instead, we'll use web technologies such as JavaScript, React which you already know and love while also learning about some new libraries like OpenAI, Transformers.js
LLMs Workshop: What They Are and How to Leverage Them
React Summit 2024
66 min
Featured Workshop
Nathan Marrs
Haris Rozajac
2 authors
Join Nathan in this hands-on session where you will first learn at a high level what large language models (LLMs) are and how they work. Then dive into an interactive coding exercise where you will implement LLM functionality into a basic example application. During this exercise you will get a feel for key skills for working with LLMs in your own applications such as prompt engineering and exposure to OpenAI's API.
After this session you will have insights around what LLMs are and how they can practically be used to improve your own applications.
Table of contents:
- Interactive demo implementing basic LLM powered features in a demo app
- Discuss how to decide where to leverage LLMs in a product
- Lessons learned around integrating with OpenAI / overview of OpenAI API
- Best practices for prompt engineering
- Common challenges specific to React (state management :D / good UX practices)
Working With OpenAI and Prompt Engineering for React Developers
React Advanced Conference 2023
98 min
Top Content
Workshop
Richard Moss
In this workshop we'll take a tour of applied AI from the perspective of front end developers, zooming in on the emerging best practices when it comes to working with LLMs to build great products. This workshop is based on learnings from working with the OpenAI API from its debut last November to build out a working MVP which became PowerModeAI (A customer facing ideation and slide creation tool).
In the workshop there'll be a mix of presentation and hands-on exercises covering topics including:
- GPT fundamentals
- Pitfalls of LLMs
- Prompt engineering best practices and techniques
- Using the playground effectively
- Installing and configuring the OpenAI SDK
- Approaches to working with the API and prompt management
- Implementing the API to build an AI powered customer facing application
- Fine tuning and embeddings
- Emerging best practice on LLMOps
Building AI Applications for the Web
React Day Berlin 2023
98 min
Workshop
Roy Derks
Today every developer is using LLMs in different forms and shapes. Lots of products have introduced embedded AI capabilities, and in this workshop you’ll learn how to build your own AI application. No experience in building LLMs or machine learning is needed. Instead, we’ll use web technologies such as JavaScript, React and GraphQL which you already know and love.
Building Your Generative AI Application
React Summit 2024
82 min
Workshop (Free)
Dieter Flick
Generative AI is exciting tech enthusiasts and businesses with its vast potential. In this session, we will introduce Retrieval Augmented Generation (RAG), a framework that provides context to Large Language Models (LLMs) without retraining them. We will guide you step-by-step in building your own RAG app, culminating in a fully functional chatbot.
Key Concepts: Generative AI, Retrieval Augmented Generation
Technologies: OpenAI, LangChain, AstraDB Vector Store, Streamlit, Langflow