How to Machine Learn-ify any Product


This talk is a walkthrough of using machine learning to replace a rule-based system for consumers. We will discuss when it is okay to use ML, how to build these models with intelligent data, how to evaluate them offline, and finally how to validate this evaluation to land these models in production systems. Furthermore, we will illustrate various self-learning/interactive-learning strategies that production systems can use to automate how models teach themselves to become better.

This talk was presented at ML conf EU 2020. Check out the latest edition of this tech conference.

FAQ

A product is considered 'ready for ML' if simple rules cannot adequately solve its problems and if the solution needs to be generalized to a large scale beyond handling just a few cases.

To determine if a problem can be solved with simple rules, consider if the problem can be addressed by a straightforward decision or threshold. If the problem requires more complex decision-making or data interpretation, ML might be necessary.

The ML model development cycle includes data collection, deciding on the ML model to use, training the model, evaluating it through both offline and online metrics, and ongoing maintenance such as active learning.

Facebook used ML in their Portal product to enhance the calling feature, enabling the device to predict and understand who the user intends to call based on voice commands, even distinguishing between multiple contacts with the same name.

Gradient Boosted Decision Trees (GBDT) are an ensemble of regression trees used for classification tasks. Facebook chose GBDT for their reliability and effectiveness in handling complex decision-making with categorical and discrete features.

In Facebook's application, particularly for calling features, high precision is crucial to ensure the correct execution of user commands, avoiding errors like calling the wrong person which can lead to user dissatisfaction and privacy concerns.

Working with ML at scale involves ensuring data privacy, handling large volumes of data, maintaining model performance across diverse user interactions, and continuously updating models to adapt to new data and features.

Shivani Poddar
33 min
02 Jul, 2021


Video Summary and Transcription

The video explains how to implement machine learning (ML) in products, focusing on Facebook Portal's calling feature. It discusses the importance of determining if a problem can be solved with simple rules or if it requires ML, especially for large-scale applications. The ML model development cycle includes data collection, feature and label setup, and training. Gradient Boost Decision Trees (GBDT) were used for their reliability. The video highlights the importance of precision in ML models to avoid errors like calling the wrong person. It also covers the challenges of working with ML at scale, including data privacy and continuous model retraining. Online evaluation and A/B testing are essential for ensuring model performance. The talk also touches on how to handle label delay through data augmentation and self-learning.

1. Introduction to ML and Product Readiness

Short description:

Hi everyone. I'm Shivani, an ML engineer at Facebook. In this talk, I'll guide you on when to use ML and share a successful use case from Facebook. To determine if your product is ready for ML, consider two questions: Can your problem be solved with simple rules? What is the scale of your problem? For example, classifying apples from oranges may only require a color filter for a small user base. But if you need to classify different types of oranges and apples, you'll need more than just color. If both criteria are met, ML is needed.

Hi everyone. I'm Shivani. I work as an ML engineer at Facebook, and today I'm going to share with you how to MLify almost any product. This is going to be a more practical talk, where I'm going to walk you through when it is okay to use ML. We are going to discuss a use case where I, at Facebook, used ML, and successfully so, and I'm going to walk you through the cycle of an ML model's development.

Cool, so the first question that we have to answer is: is your product really ready for ML? One of the biggest mistakes that people make is thinking that anything can be plugged into ML and that any problem can be solved with ML. There are really two questions that you want to answer. The first one is: can your problem be solved with simple rules? Can you just think of a threshold, or is it a binary flip decision, such that your problem can be solved with a simple rule? The second one to think about is: what is the scale of your problem? Do you need to generalize your solution to a lot more people than a few hundred?

One example: say I want to classify apples from oranges, and all my product does is classify apples versus oranges. Having a small filter which says orange means orange and red means apple would be a reasonable approach if I have 20 users using my product; that does not yet justify ML. However, if I had to classify different types of oranges, which could also be reddish, and different types of apples, which could also be orange, I would need more than just color as a rule. I would probably need the shape, and I would probably want to employ some computer vision techniques, and so on. So if you can answer both in the affirmative, that you do need more than just simple rules and your problem is ready to scale, you need ML for your product.
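To make the rule-versus-ML point concrete, here is a minimal sketch of such a color rule. It is not from the talk; the hue thresholds and function name are made up for illustration, and the final call shows the kind of input where a single rule starts to fail.

```python
# Toy sketch of the rule-based approach described above (hypothetical thresholds).
# A single hue threshold separates "apple" from "orange" well enough for a
# handful of users, but breaks down once reddish oranges and orange-tinted
# apples appear -- the point at which ML becomes worth considering.

def classify_by_color(hue_degrees: float) -> str:
    """Rule: hue below ~25 degrees is treated as apple (red), up to ~45 as orange."""
    if hue_degrees < 25:
        return "apple"
    elif hue_degrees < 45:
        return "orange"
    return "unknown"

print(classify_by_color(10))  # "apple"  -- a clearly red apple
print(classify_by_color(35))  # "orange" -- a typical orange
print(classify_by_color(28))  # "orange" -- but a reddish apple at this hue is
                              # misclassified; shape or texture features (and
                              # eventually ML) are needed to get this right
```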

2. Using ML for Facebook Portal Calls

Short description:

Let's walk through a real-life scenario of using ML for Facebook portal. The goal was to make precise calls by predicting the intended recipient. Initially, rule-based selection was used, but it was cumbersome. ML was leveraged to learn from data distribution and overcome the limitations of rules. The ML model development lifecycle involves data collection, setting up feature and label sets, and using organic data from pre-existing product interactions. Features for organic data collection include ASR confidence scores.

Let's now walk through a real-life scenario of how we used this for Facebook Portal. I was working on the Facebook Portal team, and one of our hero features was calling. The user would come and say, "Hey Portal, call John," and the idea would be for the device to understand who John is in your friend list. If there are multiple Johns, it should disambiguate who the right John is and then place a call to that person. Note here that the cost of failing is high: if you end up calling the wrong person, you leave a missed call. So really, the goal here is to be very precise.

When we started out, the flow we had was that the user would initiate this command, Portal would work out who the most likely John is, and this was simply rule-based: we would pick the top contact that we got and then issue a confirmation prompt. If the user said, "Yes, I confirm, call them," we would call them; otherwise we wouldn't. But this was a very clunky, long process, right? The user had to come in, confirm, select who they were calling, and oftentimes engage and touch the UI. That just takes away from the experience of the user interacting hands-free with this device, which is smart. So we had to tackle this problem of: given "John", how can I predict who the actual John is, so that the user does not have to come in and do all of this work themselves?
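To picture that original flow, here is a minimal sketch of the rule-based selection plus confirmation step. The contact list and the search_contacts, ask_confirmation, and place_call helpers are hypothetical stand-ins; no product code was shown in the talk.

```python
# Sketch of the pre-ML, rule-based flow described above, with stand-in helpers.

CONTACTS = ["John Appleseed", "John Baker", "Johanna Clark"]

def search_contacts(spoken_name: str) -> list[str]:
    # Rule: simple substring match, in whatever order the contact store returns.
    return [c for c in CONTACTS if spoken_name.lower() in c.lower()]

def ask_confirmation(prompt: str) -> bool:
    # Stand-in for the voice/UI confirmation prompt the talk describes.
    return input(prompt + " (y/n) ").strip().lower() == "y"

def place_call(contact: str) -> None:
    print(f"Calling {contact}...")

def handle_call_command(spoken_name: str) -> None:
    matches = search_contacts(spoken_name)
    if not matches:
        print("No matching contact found.")
        return
    top_contact = matches[0]  # rule-based: just take the top contact
    if ask_confirmation(f"Call {top_contact}?"):  # extra step for the user every time
        place_call(top_contact)

handle_call_command("John")
```

The ML version described next replaces the "take the top contact and always confirm" rule with a learned score per candidate, so that the user does not have to do all of this work themselves.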

So there were a lot of rules that we could have used. We could use a similarity score for whether the spoken name matched the name of the contact. We could use the ASR confidence score, where ASR is the speech recognition system, so whether it understood "John" correctly. We could also use the relationship of the user to the person they were calling. Naturally, for some users a family member would be the more likely person to call, and for other users, if they were calling a John they messaged or called frequently, it would be more likely that they meant the same John. It could also vary by time of day, or by whether they actually gave the command at all. For a lot of users, they would be talking to somebody else, and sometimes ASR picks up incorrectly that the user is trying to call someone. So what is the probability of noise? What is the frequency with which they talk to this one person? What are the scores of all these upstream modules that translate speech to text? All of these were a whole variety of rules, and they could not be abstracted into one single rule for our purposes. And since this was not just one rule, and we needed the model to learn not from a single flip-flop rule but from a data distribution, we decided that we were going to leverage ML for this problem.

And so this is the problem I'm going to walk you through, through the lifecycle of ML model development. Once you've answered this question for your product and decided that you do need ML, what does that look like? It starts with data collection. Data collection is setting up the right feature and label sets for your model training, and this can be organic or artificial. In our case, since we were already using confirmation prompts from users to decide whether or not the contact was the right contact, we already had organic labels collected for our features to solve this ML problem. One can either use pre-ML-era data, anonymized from pre-existing user interactions, to train the model, or one can build tools to collect data for the problem at hand. For the purposes of this talk, we are going to focus on organic data collection: how you can get data from an already existing product and then use that data to inject ML into your product.

And so this is what our features look like. For our organic data collection, our features would include the ASR confidence score: if a user says something, our ASR or speech recognition engine converts it into text and generates a confidence score along with it.
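Combining features like these (ASR confidence, name similarity, call/message frequency, relationship) with the organic labels from confirmation prompts, a gradient boosted decision tree classifier, which is the model family the talk mentions, could be trained roughly as follows. This sketch uses scikit-learn's GradientBoostingClassifier on synthetic data; the real feature set, data, and training stack were not shown, so every column and value here is illustrative.

```python
# Illustrative only: features shaped like the ones described in the talk, with the
# organic label taken from the confirmation prompt (1 = user confirmed the contact,
# 0 = user rejected it). Data is synthetic.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import precision_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 1000
X = np.column_stack([
    rng.uniform(0, 1, n),    # ASR confidence score
    rng.uniform(0, 1, n),    # name-similarity score for this candidate contact
    rng.poisson(3, n),       # how often the user calls/messages this contact
    rng.integers(0, 2, n),   # is the contact family? (categorical, encoded 0/1)
])
# Fake organic label: confirmations are more likely when confidence and similarity are high.
y = ((0.5 * X[:, 0] + 0.4 * X[:, 1] + 0.05 * X[:, 2]
      + rng.normal(0, 0.1, n)) > 0.6).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = GradientBoostingClassifier().fit(X_train, y_train)

# Precision is the metric to watch: calling the wrong person is the costly mistake.
print("precision:", precision_score(y_test, model.predict(X_test)))
```

In production one would also tune the decision threshold for high precision, since, as the talk stresses, calling the wrong person is worse than falling back to a confirmation prompt.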

QnA

Check out more articles and videos

We constantly think of articles and videos that might spark people's interest, skill us up, or help build a stellar career.

TensorFlow.js 101: ML in the Browser and Beyond
ML conf EU 2020
41 min
TensorFlow.js enables machine learning in the browser and beyond, with features like face mesh, body segmentation, and pose estimation. It offers JavaScript prototyping and transfer learning capabilities, as well as the ability to recognize custom objects using the Image Project feature. TensorFlow.js can be used with Cloud AutoML for training custom vision models and provides performance benefits in both JavaScript and Python development. It offers interactivity, reach, scale, and performance, and encourages community engagement and collaboration between the JavaScript and machine learning communities.
Using MediaPipe to Create Cross Platform Machine Learning Applications with React
React Advanced Conference 2021
21 min
Top Content
MediaPipe is a cross-platform framework that helps build perception pipelines using machine learning models. It offers ready-to-use solutions for various applications, such as selfie segmentation, face mesh, object detection, hand tracking, and more. MediaPipe can be integrated with React using NPM modules provided by the MediaPipe team. The demonstration showcases the implementation of face mesh and selfie segmentation solutions. MediaPipe enables the creation of amazing applications without needing to understand the underlying computer vision or machine learning processes.
Charlie Gerard's Career Advice: Be intentional about how you spend your time and effort
0 min
Article
Charlie Gerard
Jan Tomes
2 authors
When it comes to career, Charlie has one trick: to focus. But that doesn’t mean that you shouldn’t try different things — currently a senior front-end developer at Netlify, she is also a sought-after speaker, mentor, and a machine learning trailblazer of the JavaScript universe. "Experiment with things, but build expertise in a specific area," she advises.

What led you to software engineering?

My background is in digital marketing, so I started my career as a project manager in advertising agencies. After a couple of years of doing that, I realized that I wasn't learning and growing as much as I wanted to. I was interested in learning more about building websites, so I quit my job and signed up for an intensive coding boot camp called General Assembly. I absolutely loved it and started my career in tech from there.

What is the most impactful thing you ever did to boost your career?

I think it might be public speaking. Going on stage to share knowledge about things I learned while building my side projects gave me the opportunity to meet a lot of people in the industry, learn a ton from watching other people's talks and, for lack of better words, build a personal brand.

What would be your three tips for engineers to level up their career?

Practice your communication skills. I can't stress enough how important it is to be able to explain things in a way anyone can understand, but also communicate in a way that's inclusive and creates an environment where team members feel safe and welcome to contribute ideas, ask questions, and give feedback. In addition, build some expertise in a specific area. I'm a huge fan of learning and experimenting with lots of technologies but as you grow in your career, there comes a time where you need to pick an area to focus on to build more profound knowledge. This could be in a specific language like JavaScript or Python or in a practice like accessibility or web performance. It doesn't mean you shouldn't keep in touch with anything else that's going on in the industry, but it means that you focus on an area you want to have more expertise in. If you could be the "go-to" person for something, what would you want it to be?

And lastly, be intentional about how you spend your time and effort. Saying yes to everything isn't always helpful if it doesn't serve your goals. No matter the job, there are always projects and tasks that will help you reach your goals and some that won't. If you can, try to focus on the tasks that will grow the skills you want to grow or help you get the next job you'd like to have.

What are you working on right now?

Recently I've taken a pretty big break from side projects, but the next one I'd like to work on is a prototype of a tool that would allow hands-free coding using gaze detection.

Do you have some rituals that keep you focused and goal-oriented?

Usually, when I come up with a side project idea I'm really excited about, that excitement is enough to keep me motivated. That's why I tend to avoid spending time on things I'm not genuinely interested in. Otherwise, breaking down projects into smaller chunks allows me to fit them better in my schedule. I make sure to take enough breaks, so I maintain a certain level of energy and motivation to finish what I have in mind.

You wrote a book called Practical Machine Learning in JavaScript. What got you so excited about the connection between JavaScript and ML?

The release of TensorFlow.js opened up the world of ML to frontend devs, and this is what really got me excited. I had machine learning on my list of things I wanted to learn for a few years, but I didn't start looking into it before because I knew I'd have to learn another language as well, like Python, for example. As soon as I realized it was now available in JS, that removed a big barrier and made it a lot more approachable. Considering that you can use JavaScript to build lots of different applications, including augmented reality, virtual reality, and IoT, and combine them with machine learning as well as some fun web APIs felt super exciting to me.


Where do you see the fields going together in the future, near or far?

I'd love to see more AI-powered web applications in the future, especially as machine learning models get smaller and more performant. However, it seems like the adoption of ML in JS is still rather low. Considering the amount of content we post online, there could be great opportunities to build tools that assist you in writing blog posts or that can automatically edit podcasts and videos. There are lots of tasks we do that feel cumbersome that could be made a bit easier with the help of machine learning.

You are a frequent conference speaker. You have your own blog and even a newsletter. What made you start with content creation?

I realized that I love learning new things because I love teaching. I think that if I kept what I know to myself, it would be pretty boring. If I'm excited about something, I want to share the knowledge I gained, and I'd like other people to feel the same excitement I feel. That's definitely what motivated me to start creating content.

How has content affected your career?

I don't track any metrics on my blog or likes and follows on Twitter, so I don't know what created different opportunities. Creating content to share something you built improves the chances of people stumbling upon it and learning more about you and what you like to do, but this is not something that's guaranteed. I think over time, I accumulated enough projects, blog posts, and conference talks that some conferences now invite me, so I don't always apply anymore. I sometimes get invited on podcasts and asked if I want to create video content and things like that. Having a backlog of content helps people better understand who you are and quickly decide if you're the right person for an opportunity.

What pieces of your work are you most proud of?

It is probably that I've managed to develop a mindset where I set myself hard challenges on my side project, and I'm not scared to fail and push the boundaries of what I think is possible. I don't prefer a particular project, it's more around the creative thinking I've developed over the years that I believe has become a big strength of mine.

***

Follow Charlie on Twitter
TensorFlow.JS 101: ML in the Browser and Beyond
JSNation Live 2021
39 min
JavaScript with TensorFlow.js allows for machine learning in various environments, enabling the creation of applications like augmented reality and sentiment analysis. TensorFlow.js offers pre-trained models for object detection, body segmentation, and face landmark detection. It also allows for 3D rendering and the combination of machine learning with WebGL. The integration of WebRTC and WebXR enables teleportation and enhanced communication. TensorFlow.js supports transfer learning through Teachable Machine and Cloud AutoML, and provides flexibility and performance benefits in the browser and Node.js environments.
An Introduction to Transfer Learning in NLP and HuggingFace
ML conf EU 2020
32 min
Transfer learning in NLP allows for better performance with minimal data. BERT is commonly used for sequential transfer learning. Models like BERT can be adapted for downstream tasks such as text classification. Handling different types of inputs in NLP involves concatenating or duplicating the model. Hugging Face aims to tackle challenges in NLP through knowledge sharing and open sourcing code and libraries.
Observability with diagnostics_channel and AsyncLocalStorage
Node Congress 2023
21 min
Observability with Diagnostics Channel and async local storage allows for high-performance event tracking and propagation of values through calls, callbacks, and promise continuations. Tracing involves five events and separate channels for each event, capturing errors and return values. The span object in async local storage stores data about the current execution and is reported to the tracer when the end is triggered.

Workshops on related topic

Leveraging LLMs to Build Intuitive AI Experiences With JavaScript
JSNation 2024
108 min
Featured Workshop
Roy Derks
Shivay Lamba
2 authors
Today every developer is using LLMs in different forms and shapes, from ChatGPT to code assistants like GitHub Copilot. Following this, lots of products have introduced embedded AI capabilities, and in this workshop we will make LLMs understandable for web developers. We'll also get into coding your own AI-driven application. No prior experience in working with LLMs or machine learning is needed. Instead, we'll use web technologies such as JavaScript and React, which you already know and love, while also learning about some new libraries like OpenAI and Transformers.js.
Can LLMs Learn? Let’s Customize an LLM to Chat With Your Own Data
C3 Dev Festival 2024
48 min
Workshop (Free)
Andreia Ocanoaia
Feeling the limitations of LLMs? They can be creative, but sometimes lack accuracy or rely on outdated information. In this workshop, we’ll break down the process of building and easily deploying a Retrieval-Augmented Generation system. This approach enables you to leverage the power of LLMs with the added benefit of factual accuracy and up-to-date information.
Let AI Be Your Docs
JSNation 2024
69 min
Workshop
Jesse Hall
Join our dynamic workshop to craft an AI-powered documentation portal. Learn to integrate OpenAI's ChatGPT with Next.js 14, Tailwind CSS, and cutting-edge tech to deliver instant code solutions and summaries. This hands-on session will equip you with the knowledge to revolutionize how users interact with documentation, turning tedious searches into efficient, intelligent discovery.
Key Takeaways:
- Practical experience in creating an AI-driven documentation site.
- Understanding the integration of AI into user experiences.
- Hands-on skills with the latest web development technologies.
- Strategies for deploying and maintaining intelligent documentation resources.

Table of contents:
- Introduction to AI in Documentation
- Setting Up the Environment
- Building the Documentation Structure
- Integrating ChatGPT for Interactive Docs
Hands on with TensorFlow.js
ML conf EU 2020
160 min
Workshop
Jason Mayes
Come check out our workshop, which will walk you through three common journeys when using TensorFlow.js. We will start by demonstrating how to use one of our pre-made models: super easy to use JS classes to get you working with ML fast. We will then look into how to retrain one of these models in minutes using in-browser transfer learning via Teachable Machine, and how that can then be used on your own custom website. Finally, we will end with a "hello world" of writing your own model code from scratch to make a simple linear regression that predicts fictional house prices based on their square footage.
The Hitchhiker's Guide to the Machine Learning Engineering Galaxy
ML conf EU 2020
112 min
Workshop
Alyona Galyeva
Are you a software engineer who got tasked with deploying a machine learning or deep learning model for the first time in your life? Are you wondering what steps to take and how AI-powered software is different from traditional software? Then this is the right workshop to attend.
The internet offers thousands of articles and free-of-charge courses showing how easy it is to train and deploy a simple AI model. In reality, however, it is difficult to integrate a real model into your current infrastructure and to debug, test, deploy, and monitor it properly. In this workshop, I will guide you through this process, sharing tips, tricks, and favorite open source tools that will make your life much easier. At the end of the workshop, you will know where to start your deployment journey, what tools to use, and what questions to ask.
Introduction to Machine Learning on the Cloud
ML conf EU 2020
146 min
Workshop
Dmitry Soshnikov
This workshop will be both a gentle introduction to Machine Learning, and a practical exercise of using the cloud to train simple and not-so-simple machine learning models. We will start with using Automatic ML to train the model to predict survival on Titanic, and then move to more complex machine learning tasks such as hyperparameter optimization and scheduling series of experiments on the compute cluster. Finally, I will show how Azure Machine Learning can be used to generate artificial paintings using Generative Adversarial Networks, and how to train language question-answering model on COVID papers to answer COVID-related questions.