Video Summary and Transcription
The video explains how to implement machine learning (ML) in products, focusing on Facebook Portal's calling feature. It discusses the importance of determining if a problem can be solved with simple rules or if it requires ML, especially for large-scale applications. The ML model development cycle includes data collection, feature and label setup, and training. Gradient Boosted Decision Trees (GBDT) were used for their reliability. The video highlights the importance of precision in ML models to avoid errors like calling the wrong person. It also covers the challenges of working with ML at scale, including data privacy and continuous model retraining. Online evaluation and A/B testing are essential for ensuring model performance. The talk also touches on how to handle label delay through data augmentation and self-learning.
1. Introduction to ML and Product Readiness
Hi everyone. I'm Shivani, an ML engineer at Facebook. In this talk, I'll guide you on when to use ML and share a successful use case from Facebook. To determine if your product is ready for ML, consider two questions: Can your problem be solved with simple rules? What is the scale of your problem? For example, classifying apples from oranges may only require a color filter for a small user base. But if you need to classify different types of oranges and apples, you'll need more than just color. If both criteria are met, ML is needed.
Hi everyone. I'm Shivani. I work as an ML engineer at Facebook, and today I'm going to share with you how to MLify almost any product. This is going to be a more practical talk where I walk you through when it is okay to use ML. We are going to discuss a use case where I used ML at Facebook, and successfully so, and I'm going to walk you through the cycle of an ML model's development.
Cool, so the first question that we have to answer is: is your product really ready for ML? One of the biggest mistakes that people make is thinking that anything can be plugged in with ML and any problem can be solved with ML. Really, there are two questions that you want to answer. The first one is: can your problem be solved with simple rules? Can you just think of a threshold, or is it a binary flip decision, such that your problem can be solved with a simple rule? The second one to think about is: what is the scale of your problem? Do you need to generalize your solution to a lot more people than a few hundred?
One example: say I want to classify apples from oranges, and all my product does is classify apples from oranges. Having a small filter which says orange is an orange and red is an apple would be a reasonable approach if I have 20 users using my product; it would not yet justify ML. However, if I had to classify different types of oranges, which could also be reddish, and different types of apples, which could also be orange, I would need more than just color as a rule. I would probably need the shape; I would probably want to employ some computer vision techniques and so on. So if you can answer both in the affirmative, that you do need more than just simple rules and your problem is ready to scale, you need ML for your product.
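To make the "simple rules" test concrete, here is a minimal, hypothetical sketch of what the rule-only baseline for the fruit example could look like; the hue thresholds are invented for illustration and are not from the talk:

```python
# A toy "simple rule" baseline: classify fruit by average hue alone.
# Thresholds are illustrative, not from the talk.

def classify_fruit_by_color(avg_hue_degrees: float) -> str:
    """Toy rule: hues near orange (~30 deg) are 'orange', hues near red (~0 deg) are 'apple'."""
    if 20 <= avg_hue_degrees <= 45:
        return "orange"
    return "apple"

# Works for 20 users and two clearly coloured fruits, but breaks as soon as
# reddish oranges or orange-tinted apples appear -- the point at which shape,
# texture, and other features (and hence ML) become necessary.
print(classify_fruit_by_color(30))  # -> "orange"
print(classify_fruit_by_color(5))   # -> "apple"
```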
2. Using ML for Facebook Portal Calls
Let's walk through a real-life scenario of using ML for Facebook portal. The goal was to make precise calls by predicting the intended recipient. Initially, rule-based selection was used, but it was cumbersome. ML was leveraged to learn from data distribution and overcome the limitations of rules. The ML model development lifecycle involves data collection, setting up feature and label sets, and using organic data from pre-existing product interactions. Features for organic data collection include ASR confidence scores.
Let's now walk through a real-life scenario of how we used this for Facebook Portal. I was working in the Facebook Portal team, and one of our hero features was calling. The user would come and say, "Hey Portal, call John," and the idea would be for the device to understand who John is in your friend list. If there are multiple Johns, then it should disambiguate who the right John is, and then create a call to that person. Note here that the cost of failure is high: if you end up calling the wrong person, you leave a missed call. So the goal here is to be very precise.
So when we started out, the flow we had was that the user would initiate this command, Portal would work out who the most probable John is, and this was simply rule-based: we would pick the top contact that we got, and then we would issue a confirmation prompt. If the user said yes, I confirm, call them, we would call them; otherwise we wouldn't. But this was a very tacky, long process, right? The user had to come in and confirm, select who they were calling, oftentimes engage and touch the UI. And that would just take away from the experience of the user interacting hands-free with a device that is smart. So we had to tackle this problem of: given John, how can I predict who the actual John is, so that the user does not have to come in and do all of this work themselves?
So there were a lot of rules that we could have used. We could use the similarity score, whether the name reflected or matched the name of the person. We could use the confidence score, the ASR confidence, where ASR is the speech recognition system, so whether it understood John correctly. We could also use the relationship of the user to the person they were calling: naturally, for some users, somebody in their family would be more likely to be called, and for other users, if someone was calling a John they messaged or called frequently, it would be more likely that they're calling that same John. It could also vary with time of day, or with whether they actually gave the command or not: for a lot of users, they would be talking to somebody else, and sometimes ASR picks up incorrectly whether or not the user is trying to call this person. So what is the probability of noise? What is the frequency with which they talk to this one person? What is the score of all of these upstream modules that essentially translate speech to text? All of these were a variety of rules, and they could not be abstracted into one single rule for our purposes. And since these are not just one rule, and we needed the model to learn not from a single flip-flop rule but from a data distribution, we decided that we were going to leverage ML for this problem.

So this is the problem I'm going to walk you through, through this life cycle of ML model development. What does it look like once you've answered this question for your product and decided that you do need ML? It starts from data collection. Data collection is setting up the right feature and label sets for your model training, and this can be organic or artificial. In our case, for the example I just gave you, since we were already using confirmation prompts from the users to decide whether or not the contact was the right contact, we already had organic labels collected for our features to solve this ML problem. So one can either use pre-ML-era data, anonymized from pre-existing user interactions, to train the model, or one can actually create tools to collect data to solve the problem at hand. For the purposes of this talk, we are going to focus on organic data collection: how you can get data from an already pre-existing product and then use that data to inject ML into your product. So this is what our features looked like. For our organic data collection, our features would be the ASR confidence score: if a user says something, our ASR or speech recognition engine converts it into text and generates a confidence score with it.
3. ML Model Selection and Evaluation
The NLU engine determines the user's intent, while the entity recognition engine predicts the contact. User context, such as message frequency, is considered. Labels are curated based on confirmation prompts. To choose the ML model, we wanted non-linear transformations and opted for GBDT. GBDT is an ensemble of regression trees that iteratively learns features and predicts the contact. Evaluation involves offline and online methods, using quantitative metrics like precision and recall.
We then have the NLU engine or the intent detection engine that takes as input this text and outputs the intent of the user. So the user could have wanted to create a call, they could have wanted to message and so on.
We then have the entity recognition engine, which takes the name of the user's friend here, John, and then predicts who the actual contact is. And so this is entity recognition, which is contact recognition.
Once we have these, we also have some other user context. So how often does the user message this person or call this person? This would give us context to know whether or not this is a new contact that the user has never messaged, or this is somebody that the user oftentimes messages.
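Putting these signals together, here is a small illustrative sketch of one feature row per (request, candidate contact) pair; the field names, helper types, and similarity measure are assumptions for this example, not Facebook's actual feature schema:

```python
from dataclasses import dataclass
from difflib import SequenceMatcher

@dataclass
class CallRequest:
    spoken_name: str        # the name ASR heard, e.g. "John"
    asr_confidence: float   # speech-to-text confidence score
    hour_of_day: int        # time-of-day context

@dataclass
class Candidate:
    name: str
    is_family: bool         # sparse categorical relationship signal
    calls_last_30d: int     # interaction frequency
    messages_last_30d: int

def build_features(req: CallRequest, cand: Candidate) -> dict:
    """One feature row per (request, candidate contact) pair."""
    return {
        "asr_confidence": req.asr_confidence,
        "name_similarity": SequenceMatcher(None, req.spoken_name.lower(),
                                           cand.name.lower()).ratio(),
        "is_family": int(cand.is_family),
        "calls_last_30d": cand.calls_last_30d,
        "messages_last_30d": cand.messages_last_30d,
        "hour_of_day": req.hour_of_day,
    }

print(build_features(CallRequest("John", 0.92, 19),
                     Candidate("John Smith", True, 4, 12)))
```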
Once we have these features, we then also curate labels from that confirmation prompt, where if a user said yes to the confirmation prompt in our previous log data and ended up creating a call to this contact, we say that the label is a positive label. If not, and if the user said no or cut the call right as it dialed, we say that it is a negative label.
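A minimal sketch of that label-curation rule, assuming a hypothetical log format (the field names are invented):

```python
def label_from_log(log_entry: dict) -> int:
    """Derive a training label from a pre-ML confirmation-prompt log entry.
    Positive (1): the user confirmed the prompt and the call completed.
    Negative (0): the user said no, or cancelled the call right as it dialled.
    """
    if log_entry["confirmed"] and not log_entry["cancelled_immediately"]:
        return 1
    return 0

print(label_from_log({"confirmed": True, "cancelled_immediately": False}))  # -> 1
print(label_from_log({"confirmed": True, "cancelled_immediately": True}))   # -> 0
```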
So now that we have our data set up, the next step is: how do we decide what ML model to use? How do we decide whether to use neural networks and deep learning, or to stick with simpler classical ML models, like regression or decision trees, and so on?
For this purpose, we knew that we wanted to learn non-linear transformations. We had a lot of rules, and we wanted a model to be able to learn complex planes across those rules. We also knew that we had a mix of categorical and discrete features with varying data distributions. We knew that our categorical features could be sparse: there could be information available for some users, while other users might not have given information like whether somebody is their friend or family and so on. Given these aspects, and given the fact that we wanted to use as little compute as possible, we decided to use GBDT, Gradient Boosted Decision Trees.
More simply, these are an ensemble of regression trees that are averaged together to perform a classification task. The model iteratively learns which features to inject at each of the nodes to output the leaf label. With the rules we discussed previously, one way this would look is: the model learns whether the contact is family or not, and then, based on some threshold it has learned, goes to one leaf or the other. In the next decision, it could learn whether or not the user called this friend in the last seven days, and again, based on the threshold it learned, go to the left-hand side or the right-hand side, and eventually predict the label, saying whether or not this is the right contact.
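As a rough illustration of training such a model, here is a sketch using scikit-learn's GradientBoostingClassifier on placeholder data; the talk does not specify the actual library, data, or hyperparameters used:

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split

# X: one row per (request, candidate) feature vector as sketched above;
# y: 1 if the user confirmed and completed the call, else 0. Both are
# placeholders here.
rng = np.random.default_rng(0)
X = rng.random((1000, 6))
y = (X[:, 0] * 0.6 + X[:, 1] * 0.4 > 0.5).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)

gbdt = GradientBoostingClassifier(
    n_estimators=200,    # number of trees in the ensemble
    max_depth=3,         # shallow trees; each learns a few feature splits
    learning_rate=0.05,  # how much each new tree corrects the previous ones
)
gbdt.fit(X_train, y_train)
print("held-out accuracy:", gbdt.score(X_test, y_test))
```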
So this is the model we used. Now that we had the data and the model, the next step is evaluation. How do we evaluate, or how do we understand, that the model we used actually performed well? How do we train the model, and what metrics do we use, is the next question.
There are two kinds of evaluation that I'm going to cover. One is offline model evaluation and the second is online. Offline model evaluation is done purely at development time: you do a quantitative evaluation and a qualitative evaluation, which focuses on how the data actually looks. For quantitative metrics, an array of metrics can be used: it could be precision, recall, F score, or accuracy.
4. Metrics and Analysis
Quantitative metrics depend on the task: precision and recall for classification, BLEU scores for natural language generation, and similarity scores for similarity detection. Quantitative metrics are important, but qualitative analysis is also crucial: if all labels in the dataset are the same, the model may perform well on that data but fail on new data. We conducted both quantitative and qualitative analysis and achieved a precision of over 95%. Our models performed as expected.
These are for more classification-style tasks. For natural language generation tasks, one can use BLEU scores. For similarity detection tasks, one can use similarity scores. So it depends on which model you use. For our purposes, we used the quantitative metrics of precision and recall, and we also used qualitative analysis. Qualitative analysis is very important at development time, and here's why. Your quantitative metrics will tell you whether the precision or recall of the model is great. However, what if all the labels that you had were ones? Say your whole data set was 20 samples with one single label. The model will very easily learn that it always has to predict one to do well on this data set. Then, when you actually test the model on more data, which has other labels, the model will start predicting those wrong. But if, in your test data set, the label you had in abundance is, say, 80%, and the new label the model has never seen before is, say, 20%, your accuracy still stays at 80%. So unless you went in and qualitatively analyzed what your model is doing and what it's predicting, you would say your accuracy is 80%, which is great quantitatively but does not hold up qualitatively. For our purposes, we did both quantitative and qualitative analysis, and our model did really well: we were able to get a high precision of over 95%, and qualitatively, our model seemed to do exactly what we wanted it to do.
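A small sketch of what that offline evaluation step could look like, combining quantitative metrics with a qualitative look at the individual mistakes (toy labels, not real data):

```python
from sklearn.metrics import precision_score, recall_score

y_true = [1, 1, 0, 1, 0, 0, 1, 1]
y_pred = [1, 1, 0, 1, 1, 0, 1, 0]

# Quantitative: aggregate metrics.
print("precision:", precision_score(y_true, y_pred))
print("recall:   ", recall_score(y_true, y_pred))

# Qualitative: look at the actual examples the model got wrong, not just the
# aggregate numbers -- an imbalanced label set can make accuracy look great
# while the model has learned nothing useful.
for i, (t, p) in enumerate(zip(y_true, y_pred)):
    if t != p:
        print(f"sample {i}: predicted {p}, expected {t}")
```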
5. Importance of Precision and Recall
Precision was crucial for our purposes. Making a call to the wrong person without a confirmation prompt would lead to embarrassing situations. We optimized for high precision, even at the cost of reducing recall. The thresholding for the model depends on the use case, and tradeoffs between precision and recall may be necessary.
Another point here is that for our purposes, precision was one of the most important metrics, and here is why. I've alluded to this before, but remember, we've gotten rid of that confirmation prompt. So if we made a call to the wrong person, the user would have called somebody they didn't mean to and would be in this embarrassing situation where they'd have to explain how their smart device was not so smart anymore and ended up calling this person. So what we had to optimize for was precision tending to almost 100%, even at the cost of reducing our recall. A lot of the time, the threshold you select for your model will depend on your end use case, and you might have to make tradeoffs: precision versus recall is a curve, so you might have to reduce your recall to improve your precision, or vice versa. There might be other scenarios in which you want to fetch the maximum number of things you can, so you increase recall while you have leeway to decrease precision.
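One way to pick an operating threshold for a precision target, sketched with scikit-learn's precision-recall curve on illustrative scores; the 0.99 target and the fallback to a confirmation prompt are assumptions for this example, not values from the talk:

```python
import numpy as np
from sklearn.metrics import precision_recall_curve

# Pick the first score threshold that reaches the target precision,
# accepting whatever recall that leaves us with.
y_true   = np.array([1, 0, 1, 1, 0, 1, 0, 0, 1, 1])
y_scores = np.array([0.95, 0.80, 0.90, 0.60, 0.55, 0.85, 0.30, 0.20, 0.75, 0.92])

precision, recall, thresholds = precision_recall_curve(y_true, y_scores)

target_precision = 0.99
ok = precision[:-1] >= target_precision     # thresholds has one fewer entry than precision
if ok.any():
    idx = np.argmax(ok)                     # first threshold meeting the target
    print(f"threshold={thresholds[idx]:.2f}, "
          f"precision={precision[idx]:.2f}, recall={recall[idx]:.2f}")
else:
    print("no threshold reaches the target precision; fall back to a confirmation prompt")
```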
6. Online Evaluation and Active Learning
Now that we have collected data, trained the model, and evaluated offline metrics, the next step is online evaluation. We productionize the model by writing a wrapper that abstracts the model binary and makes it available at runtime. Online evaluation involves developing metrics, such as call success rate, and anti-metrics, such as call cancels. We roll out the model to a smaller set of users for A/B testing and monitor the metrics. If the metrics trend as expected, we progressively launch the model to a larger group. If not, we analyze the issue and make necessary changes. Finally, active learning is crucial for maintaining online metrics over time. Retraining is triggered when upstream modules change.
And so, great. Now the data is collected, the model is modeled, and the offline metrics look good. What's next? Next is online evaluation. Now that we are confident that our model is doing well, we are ready for deployment. How do we productionize this model? How do we know that it's actually working well for real users? Do we just launch it to everybody? No, we don't. We have one more step before we can do that.
So, for our purposes, we productionize the model. We write a wrapper, which essentially abstracts the model binary and makes it available at runtime with a predict function, where we can derive the features that the model learned in real time. Now, when the user makes a request, we featurize that request into all of those features we discussed before and call this predict function at runtime with these features. We return a label, and then at runtime we see whether or not the user actually ended up creating that call or went in and canceled it. And so that's when we come to online evaluation, and we develop these proxy user-facing metrics. These metrics can be something like "call success rate is increasing," where call success rate is defined as a call that the user initiated and then ended up completing, not just dropping off. We can also define anti-metrics: for example, "call cancels are increasing" is an anti-metric; we don't want that to happen. Then we roll it out to a smaller set of users initially, which we call A/B testing, and we see how these metrics are trending. If the metrics are trending as we expected, we go ahead and launch the model to a larger group of people, and so on, doing a progressive rollout until we've launched it to everybody. If not, we go back to the drawing board and see what the issue is, whether it's hyperparameter tuning that we need to change and so on, to figure out exactly where the model is not performing well.
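A rough sketch of what such a runtime wrapper could look like; the class name, threshold, and toy model are illustrative and do not reflect Facebook's actual serving stack:

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

class CalleePredictor:
    """Thin runtime wrapper around the trained model (illustrative sketch)."""

    def __init__(self, model, threshold: float):
        self.model = model          # trained GBDT from the offline step
        self.threshold = threshold  # chosen to hit the precision target

    def predict(self, features) -> bool:
        """True only if we are confident enough to skip the confirmation prompt."""
        score = self.model.predict_proba([features])[0][1]
        return bool(score >= self.threshold)

# Demo with a toy model. Online, each decision is logged against proxy
# user-facing metrics such as call success rate (calls completed, not dropped)
# and anti-metrics such as call cancels, compared between A/B test groups.
rng = np.random.default_rng(0)
X, y = rng.random((200, 6)), rng.integers(0, 2, 200)
predictor = CalleePredictor(GradientBoostingClassifier().fit(X, y), threshold=0.9)
print(predictor.predict(X[0]))
```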
Finally, one would think: you've done data collection, you've done ML modeling, you've done offline evaluation, you've deployed it, you did stellar online evaluation, what's next? There's one more step, which people oftentimes miss, which is active learning, or iterative learning, or maintenance of the model. So what is active learning? In general it can mean a lot of things, but in this context it is purely about how the model will maintain its online metrics over time, over different users and so on. In active learning, we trigger retraining of the model when the upstream modules change. In our case, we had an NLU module, which was detecting intent, and an ASR module, which was converting speech to text, and we were depending on the confidences from these modules as features. So what if my NLU model changed and the meaning of its confidence changed? If the confidence it was predicting as 0.5 before is 0.6 now, my model will start making wrong predictions if it is still on the older distribution while the distribution from the newer model has changed. So if any of the upstream models which inform the features of your model change, you need to trigger retraining.
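One simple way to detect that kind of upstream drift and trigger retraining, sketched with a two-sample KS test on a confidence feature; the test choice and p-value threshold are assumptions for this example, not something the talk prescribes:

```python
import numpy as np
from scipy.stats import ks_2samp

def needs_retraining(train_confidences, live_confidences, p_threshold=0.01):
    """Flag retraining when the live distribution of an upstream confidence
    feature (e.g. an ASR or NLU score) has drifted from the distribution the
    model was trained on."""
    stat, p_value = ks_2samp(train_confidences, live_confidences)
    return p_value < p_threshold

rng = np.random.default_rng(0)
train = rng.normal(0.5, 0.1, 5000)    # confidences seen at training time
live  = rng.normal(0.6, 0.1, 5000)    # upstream model changed: scores shifted up
print(needs_retraining(train, live))  # -> True, trigger the retraining pipeline
```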
Retraining and AI Methodology
If there is a significant change in your user base, you will have to retrain your model. Evolving features based on qualitative analysis also requires retraining the model. Together these ensure a robust ML model that adapts to changing infrastructure and user needs. Thank you for joining us. Let's move on to the first question from Sanjit about AI methodology. The next question is from Kent.Ano about determining if a problem can be solved with simple rules.
Second, if there is a change in a significant number of users. Oftentimes there will be different patterns that the model learns based on the current users it is learning from, so if those change, you will have to retrain your model.
And then lastly, if you explicitly evolve your features based on qualitative analysis, so you realize there are more features you can introduce into the model to make it better, to make the recall and precision better, you of course need to retrain the model.
And yeah, so all of these together will ensure that you have a robust ML model at the end of the day for your users and will ensure that your model is adapting itself to the changing infrastructure as well as user needs.
Thank you. I hope that you learned a little bit about the ML development cycle in a product company like mine and reach out for any questions that you might have or if you'd like to connect.
Thanks for joining us all the way from the States. It's a bit early for you probably. I hope you had some breakfast. It's 9:30 a.m. It's not that early. Ah, good. Good. Good. You didn't have to set your alarm for this. Nice.
We're gonna go to the first question, which is from our audience member Sanjit, and he's asking: how is your AI methodology different from standards like the SEMMA / CRISP-DM methodology? I have no idea what those acronyms meant. Maybe if you can say them again, I'll know better. Okay. Sanjit, if you can explain further, then we'll try to get back to your question later on.
We'll go to the next question, which is from Kent dot Ano. At the start of your talk, you said the problem had to be solvable with simple rules. How do you tell if this is true? Do you sometimes just go ahead, train the model, and check if it solves the problem, even though you're not sure what the rules are? Yeah, I think that's a great question. And you're right, you can't always blatantly tell whether a problem can be solved with simple rules or not. There are two general categories of problems that you have.
Classification and Generation Problems
Classification problems involve classifying a dataset into labels, while generation problems require the machine to generate something on its own. Classification problems can often be solved with simpler rules, while generation problems require ML algorithms. Gradient Boosted Decision Trees have proven to be effective in practical applications, considering the scale and edge cases. Other algorithms can be better on paper, but practicality and proven performance are key. At Facebook, we prioritize tried and tested approaches. One of the hardest challenges at Facebook is balancing the freedom to build with the responsibility of protecting user privacy and considering the scale of the company.
One is classification problems and the second is generation problems. Classification problems mean that you have a dataset and you basically want to classify it into labels. It sounds simple, but this has a whole gamut of applications in the way it's used. Generation problems mean that the machine is given a dataset and it has to come up with its own way of generating something. So these cool things you see where a machine is writing poetry or writing stories, those are all generation problems, while when you see that a machine can detect objects, like apples versus oranges, those are classification problems.
So oftentimes you will see that classification problems are the ones that can be solved with simpler rules, while generation problems are the ones where you need to rely on an ML algorithm and go from there. That's usually my go-to mantra for deciding whether a problem is solvable by rules alone or needs ML; this is basically what I use.
Okay. Well, I hope that Kent Ano is happy with that answer. Just as a reminder, there's a discussion room going on about computer vision on Spatial Chat, so be sure to go there after we've done this Q&A. The next question is from my co-MC, AJ. You mentioned gradient boosted decision trees, which is definitely good, but isn't gradient boosting slow for practical applications? Would an algorithm like CatBoost or XGBoost be better in general? So, I guess from the perspective of any company, whenever you're building machine learning at a big company, you ideally want to rely on algorithms which are tried and tested, especially when you have the scale of people that you're deploying the model to, as we do: we have billions of people who use our models. To that end, a lot of times, yes, a bunch of these alternate methodologies can be better on paper, but we like to default to the ones that have proven better in practice or the ones we've already deployed. We always have to see whether a model scales well and whether it handles edge cases well, and gradient boosted decision trees, from both qualitative and quantitative analysis over time, at least from the perspective of the work I do, have been established as one of the best approaches. Having said that, that is not the only approach. We could have used any other approach and gone through the same cycle that I illustrated. Had gradient boosted decision trees not performed as they did, which was beating the expectations we had for the model, we would have gone back to the drawing board and looked at a different model.
Okay, thank you. We have another question from an audience member, Mihai: what was your hardest challenge while working at Facebook? That's a good one. That's a great question. I really like working at Facebook, and I think this is one company which is bottom-up driven, as opposed to top-down. Every day we decide what it is that we are excited to build, and then we roadmap what we are building, and the only thing we have to check is whether that aligns with the direction the team and the company are going in. To that end, there's a lot of freedom to actually build things you really care about. But then, because of the scale of Facebook, it's also a lot of responsibility. When I was at grad school, many years ago, I would just take any model I liked, take any data I liked, throw it into the model and try to publish a paper, without thinking about whether the data was anonymized properly, whether I was protecting user privacy, and so on. Now, since we've learned so much, everything I do at Facebook has to consider all of these things and their ramifications before I just go and build a model and deploy it.
Data Protection and Label Delay
Working with data protection and making ML models work at scale has been challenging and exciting. GBDT is different from reinforcement learning in terms of problem representation and reward functions. We haven't used reinforcement learning for our problem yet. Label delay can be handled through data augmentation and self-learning techniques. Join the speaker room for further discussions.
I think it's all of the good guardrails that we have in place to ensure that we protect the data we are using for machine learning appropriately, and then how you make a model that depends on this data work; balancing those has been one of the most challenging and exciting things to learn. There are now things like federated learning, which show how you can featurize data on the client and then send it to the server so that you don't compromise any information the user doesn't want to share. So there's a lot to learn, but I think it's both challenging and exciting.
All right, so as I understand it, it's more about working at the scale of Facebook than a single technical challenge, and about protecting privacy and so on. Yeah, well, good to hear that that's high on the agenda.
Next question from our audience member Reg: how is GBDT different from reinforcement learning? Yeah, reinforcement learning is defined very differently in terms of the way the problem is mathematically represented. In reinforcement learning, you basically define a reward function and then learn a policy so that every action the user, or the machine, takes optimizes that reward function. It's great that you asked this question, because the problem we are tackling could very well be modeled with reinforcement learning: deciding whether or not to call this contact is deciding a policy, or an action, for the assistant in this case. We haven't yet used reinforcement learning to model it that way. We use GBDT, which works with predefined labels; we don't have a reward structure defined that we try to maximize, while in reinforcement learning you would have the whole reward function and would probably use Q-learning, one of the reinforcement learning algorithms, to optimize the same problem. I hope that answers your question; I'd have to go into a whole spiel and lecture about deep learning and reinforcement learning if I went any deeper. Well, that's what we're here for today, right? But let's not forget, after this Q&A session, you're going to be in your Spatial Chat room, right? So people who have more questions or want to go even deeper on your answer can go there and discuss further with you.
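For contrast only, here is a toy sketch of how the same call-or-not decision could be framed with a reward function and a one-step Q-learning update; the talk did not use RL, and the rewards and accuracy figure are invented for illustration:

```python
import random

# Rewards encode the cost structure from the talk: a completed direct call is
# good, calling the wrong person is very bad, falling back to a confirmation
# prompt is safe but adds friction. All values are made up.
actions = ["call_top_contact", "ask_confirmation"]
reward = {"call_top_contact": {"right": 1.0, "wrong": -5.0},
          "ask_confirmation": {"right": 0.2, "wrong": 0.0}}

q = {a: 0.0 for a in actions}   # single-state Q-values, just for illustration
alpha = 0.1
for _ in range(10_000):
    a = random.choice(actions)                        # explore both actions
    outcome = "right" if random.random() < 0.9 else "wrong"
    q[a] += alpha * (reward[a][outcome] - q[a])       # one-step Q-learning update

print(q)  # with 90% accuracy, calling directly outscores always asking
```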
One more question from our audience member, Ansall: how do you handle label delay? For example, labels arriving one month later than the training data? Great question. Label delay, and I would presume this covers handling any delay of features or data coming in. There are multiple ways to make up for things like label delay. One could be a data augmentation approach, where you say, this is what I hypothesize, computing your expected labels and basically training your model from there; and then, once you do have the labels you were waiting for, doing the self-learning or active learning thing I mentioned, which is that any time your labels change, you trigger retraining of the model. In this case you will compromise the performance of the model for the initial period in which you only have the expected labels; the performance will be suboptimal. However, this ensures that you have an independent, self-learning modeling pipeline. But I'd have to understand more about what this label delay looks like to give a better answer suited to this particular use case. And once again, we'll plug your speaker room, where people can ask more questions or go deeper into this answer. So, Ansall, if you want to discuss further... I'm really lost for words today. It's a good day to be an MC when you're lost. So if you want to go deeper, go to the speaker room. And right now, I just want to thank you for joining us, and I bid you farewell to your speaker room.
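As a closing illustration of the label-delay handling described in that last answer, here is a rough sketch: train on provisional "expected" labels now, then trigger a retrain once the delayed true labels arrive. The data, the provisional-labelling rule, and the 5% change threshold are all invented for this example:

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(0)
X = rng.random((1000, 6))

# Step 1: train on hypothesised ("expected") labels available right away.
provisional_y = (X[:, 0] > 0.5).astype(int)
model = GradientBoostingClassifier().fit(X, provisional_y)  # interim, suboptimal model

# Step 2: a month later the true labels land; if they differ enough from the
# provisional ones, trigger the retraining step of the self-learning pipeline.
true_y = ((X[:, 0] * 0.7 + X[:, 1] * 0.3) > 0.5).astype(int)
if (provisional_y != true_y).mean() > 0.05:
    model = GradientBoostingClassifier().fit(X, true_y)     # retraining trigger
```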