November 4 - 6, 2020
ML conf EU
Online

ML conf EU 2020

The Machine Learning conference for software developers



TensorFlow.js 101: ML in the Browser and Beyond
41 min
TensorFlow.js enables machine learning in the browser and beyond, with features like face mesh, body segmentation, and pose estimation. It supports rapid JavaScript prototyping and transfer learning, as well as recognizing custom objects using the Image Project feature. TensorFlow.js can be used with Cloud AutoML for training custom vision models and offers performance benefits for both JavaScript and Python developers. It offers interactivity, reach, scale, and performance, and encourages community engagement and collaboration between the JavaScript and machine learning communities.
An Introduction to Transfer Learning in NLP and HuggingFace
32 min
Transfer learning in NLP allows for better performance with minimal data. BERT is commonly used for sequential transfer learning. Models like BERT can be adapted for downstream tasks such as text classification. Handling different types of inputs in NLP involves concatenating or duplicating the model. Hugging Face aims to tackle challenges in NLP through knowledge sharing and open sourcing code and libraries.
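As a rough sketch of the sequential transfer learning pattern summarized above (Python, using the Hugging Face transformers library; the model name and two-label setup are illustrative assumptions), a pre-trained BERT encoder can be loaded with a fresh classification head that is then fine-tuned on task data:

    # Hedged sketch: pre-trained BERT body plus a new, untrained classification head.
    from transformers import AutoTokenizer, AutoModelForSequenceClassification
    import torch

    tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
    model = AutoModelForSequenceClassification.from_pretrained(
        "bert-base-uncased", num_labels=2)  # assumed binary task

    inputs = tokenizer("Transfer learning needs far less labelled data.", return_tensors="pt")
    with torch.no_grad():
        logits = model(**inputs).logits  # head stays random until fine-tuned downstream
    print(logits.shape)  # (1, 2)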
Teaching ML and AI to Coders
34 min
The Talk discusses the current state of AI and the challenges faced in educating developers. Google's mission is to train 10 percent of the world's developers in machine learning and AI. They have developed specializations and training initiatives to make AI easy and accessible. The impact of AI education includes rigorous certification exams and partnerships with universities. The Talk also highlights the growth trends in the tech industry and the importance of AI skills. TensorFlow is recommended for its deployment capabilities, and practice is emphasized for building a career in machine learning.
Power of Transfer Learning in NLP: Build a Text Classification Model Using BERT
35 min
Transfer learning is a technique used when there is a scarcity of labeled data, where a pre-trained model is repurposed for a new task. BERT is a bidirectional model trained on plain text that considers the context of tokens during training. Understanding the baseline NLP modeling and addressing challenges like context-dependent words and spelling errors are crucial. BERT has applications in multiple problem-solving scenarios, but may not perform well for tasks with strict classification labels or for conversational AI. Training BERT involves next sentence prediction and masked language modeling to handle contextual understanding and coherent mapping.
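To make the masked language modeling objective concrete, here is a small hedged sketch (Python, transformers; the example sentence is made up) in which a pre-trained BERT model fills in a masked token from its context:

    # Sketch of masked language modeling: BERT predicts the hidden token from context.
    from transformers import pipeline

    fill_mask = pipeline("fill-mask", model="bert-base-uncased")
    for pred in fill_mask("The bank approved the [MASK] application.")[:3]:
        print(pred["token_str"], round(pred["score"], 3))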
DeepPavlov Agent: Open-source Framework for Multiskill Conversational AI
27 min
The DeepPavlov Agent is an open-source framework for multi-skill conversational AI, addressing the need for specific skills in different domains. The microservice architecture allows for scalability and skill reuse. The DeepPavlov Library enables the creation of NLP pipelines for different skills. DeepPavlov Dream serves as a repository for skills and templates, while the DeepPavlov Agent orchestrates all components for a seamless conversational experience. DeepPavlov offers more flexibility and customization compared to Microsoft's LUIS service.
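For readers unfamiliar with the DeepPavlov Library, a minimal usage sketch follows (Python; the specific config name is an assumption and may differ between library versions):

    # Hedged sketch: building a ready-made NLP pipeline with the DeepPavlov Library.
    # The config name is an assumption; check the DeepPavlov docs for current ones.
    from deeppavlov import build_model, configs

    ner = build_model(configs.ner.ner_ontonotes_bert, download=True)
    print(ner(["DeepPavlov was presented at ML conf EU 2020."]))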
Hands on with TensorFlow.js
160 min
Workshop
Jason Mayes
Come check out our workshop, which will walk you through 3 common journeys when using TensorFlow.js. We will start by demonstrating how to use one of our pre-made models - super easy to use JS classes to get you working with ML fast. We will then look at how to retrain one of these models in minutes using in-browser transfer learning via Teachable Machine, and how the result can then be used on your own custom website. Finally, we will end with a hello world of writing your own model code from scratch to make a simple linear regression that predicts fictional house prices based on their square footage.
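The workshop itself uses TensorFlow.js; purely as a language-neutral illustration of the same hello-world idea, here is a hedged Python/Keras sketch of a one-neuron linear regression on made-up square-footage data:

    # Illustrative only: one-neuron linear regression on fictional house-price data
    # (square footage in thousands, price in hundreds of thousands).
    import numpy as np
    import tensorflow as tf

    sqft = np.array([[0.8], [1.0], [1.5], [2.0]], dtype=np.float32)
    price = np.array([[1.6], [2.0], [3.0], [4.0]], dtype=np.float32)  # made-up values

    model = tf.keras.Sequential([tf.keras.layers.Dense(1, input_shape=(1,))])
    model.compile(optimizer="sgd", loss="mse")
    model.fit(sqft, price, epochs=500, verbose=0)
    print(model.predict(np.array([[1.2]], dtype=np.float32)))  # should approach 2.4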
Never Have an Unmaintainable Jupyter Notebook Again!
26 min
Jupyter Notebooks are important for data science, but maintaining them can be challenging. Visualizing data sets and using code quality tools like nbQA can help address these challenges. Tools like nbdime and pre-commit can assist with version control and future code quality. Configuring nbQA and other code quality tools can be done in the pyproject.toml file. nbQA has been integrated into various projects' continuous integration workflows. Moving code from notebooks to Python packages should be considered based on the need for reproducibility and self-contained solutions.
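As a hedged illustration of that configuration (the exact option names are assumptions; the nbQA documentation is the authority), per-tool settings typically live under a tool-specific table in pyproject.toml:

    [tool.nbqa.addopts]
    flake8 = ["--max-line-length=100"]
    isort = ["--profile=black"]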
Deep Transfer Learning for Computer Vision
8 min
Dipanjan Sarkar
Sachin Dangayach
Today's Talk focuses on deep transfer learning for Computer Vision in the semiconductor manufacturing industry, specifically defect classification. The speakers discuss using a hybrid classification system with pre-trained models and image augmentation for accurate defect detection. They also explore the use of unsupervised learning, leveraging clustering algorithms and pre-trained models like ResNet-50, for defect analysis without prior knowledge. The process is reproducible, user-friendly, and provides accurate cluster results, with potential for future supervised learning applications.
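The unsupervised pipeline described (a pre-trained ResNet-50 as feature extractor feeding a clustering algorithm) can be sketched roughly as follows in Python; the image array and the number of clusters are placeholder assumptions:

    # Rough sketch: ResNet-50 features (no classifier head) clustered with k-means.
    import numpy as np
    from tensorflow.keras.applications.resnet50 import ResNet50, preprocess_input
    from sklearn.cluster import KMeans

    extractor = ResNet50(weights="imagenet", include_top=False, pooling="avg")

    images = np.random.rand(16, 224, 224, 3) * 255.0   # stand-in for real defect images
    features = extractor.predict(preprocess_input(images))

    clusters = KMeans(n_clusters=4, random_state=0).fit_predict(features)
    print(clusters)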
Browser Session Analytics: The Key to Fraud Detection
7 min
Blue Tab Solutions specializes in advanced analytics and big data, and recently improved financial fraud detection using Spark and the CRISP-DM methodology. They discovered insights such as the correlation between fraudulent sessions and the mobile cast page accessed from the web application. The models, built with decision trees, random forest classifiers, and gradient boosting classifiers, were validated using the area under the ROC curve. The gradient boosting classifier yielded the best result with a score of 0.94. Regular retraining is necessary to keep the models accurate, and the next steps involve taking real-time action when fraud is detected.
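A hedged sketch of the gradient boosting plus ROC-AUC validation step (Python/PySpark; the session features and labels below are made up):

    # Sketch: gradient-boosted trees on session features, validated by area under the ROC curve.
    from pyspark.sql import SparkSession
    from pyspark.ml.feature import VectorAssembler
    from pyspark.ml.classification import GBTClassifier
    from pyspark.ml.evaluation import BinaryClassificationEvaluator

    spark = SparkSession.builder.appName("fraud-sketch").getOrCreate()

    # Tiny made-up session data: (pages_visited, session_seconds, label), label 1 = fraud.
    rows = [(3, 40.0, 0), (55, 900.0, 1), (5, 70.0, 0), (60, 1100.0, 1),
            (4, 35.0, 0), (52, 950.0, 1), (6, 80.0, 0), (58, 1000.0, 1)]
    df = spark.createDataFrame(rows, ["pages_visited", "session_seconds", "label"])
    df = VectorAssembler(inputCols=["pages_visited", "session_seconds"],
                         outputCol="features").transform(df)

    train, test = df.randomSplit([0.75, 0.25], seed=42)
    model = GBTClassifier(labelCol="label", featuresCol="features", maxIter=20).fit(train)
    auc = BinaryClassificationEvaluator(metricName="areaUnderROC").evaluate(model.transform(test))
    print(auc)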
Broadening AI Adoption with AutoML
9 min
AutoML simplifies the complexity of building machine learning models, allowing engineers to focus on the hard problems and applications. It enables the solving of problems that wouldn't be feasible otherwise. The three-step AutoML approach by MathWorks includes wavelet scattering for feature extraction. AutoML also enables feature selection and model optimization for memory- and power-limited embedded systems. MATLAB can translate the models to low-level code for deployment.
Processing Robot Data at Scale with R and Kubernetes
8 min
The Talk discusses the challenges of managing and analyzing the increasing volume of data gathered from robots. It highlights the importance of data extraction and feature engineering in analyzing what happens before a failure. The use of Kubernetes and Pachyderm for data management and automatic updates in the pipeline is mentioned. The parallelization of R scripts and the scalability of large clusters for data collection and processing are emphasized. The Talk also mentions the use of AI at the robot fleet level for unlocking new opportunities.
Dabl: Automatic Machine Learning with a Human in the Loop
35 min
This talk introduces Dabl, a library that allows data scientists to iterate quickly and incorporate human input into the machine learning process. Dabl provides tools for each step of the machine learning workflow, including problem statement, data cleaning, visualization, model building, and model interpretation. It uses mosaic plots and pair plots to analyze categorical and continuous features. Dabl also implements a portfolio-based automatic machine learning approach using successive halving to find the best model. The future goals of Dabl include supporting more feature types, improving the portfolio, and building explainable models.
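For readers who want to try it, a minimal sketch of the Dabl workflow (Python; the dataset is a stand-in) looks roughly like this:

    # Sketch of the Dabl workflow: clean, visualize, then fit a quick baseline model.
    import dabl
    from sklearn.datasets import fetch_openml

    df = fetch_openml("titanic", version=1, as_frame=True).frame  # stand-in dataset
    df_clean = dabl.clean(df)
    dabl.plot(df_clean, target_col="survived")            # mosaic and pair plots per feature type
    model = dabl.SimpleClassifier().fit(df_clean, target_col="survived")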
Can You Sing with All the Voices of the Features?
8 min
This Talk discusses the role of repetition in songwriting and how it has become more prevalent over the years. The use of string metrics, such as the Levenshtein distance, allows for the analysis of similarity between segments of songs. A similarity threshold of 70% is used to determine if segments are considered similar. Overall, the Talk explores the importance of repetition in creating successful songs and the use of analytical tools to measure similarity.
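A small hedged sketch of the kind of similarity check described (Python; the 70% threshold follows the talk, the lyric segments are made up):

    # Sketch: normalized Levenshtein similarity between two song segments, 70% threshold.
    def levenshtein(a: str, b: str) -> int:
        prev = list(range(len(b) + 1))
        for i, ca in enumerate(a, 1):
            cur = [i]
            for j, cb in enumerate(b, 1):
                cur.append(min(prev[j] + 1,                 # deletion
                               cur[j - 1] + 1,              # insertion
                               prev[j - 1] + (ca != cb)))   # substitution
            prev = cur
        return prev[-1]

    def similarity(a: str, b: str) -> float:
        return 1.0 if not a and not b else 1.0 - levenshtein(a, b) / max(len(a), len(b))

    seg1, seg2 = "oh baby baby", "oh baby maybe"   # made-up segments
    print(similarity(seg1, seg2) >= 0.70)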
Machine Learning on the Edge Using TensorFlow Lite
8 min
Håkan Silvernagel introduces TensorFlow Lite, an open-source deep learning framework for deploying machine learning models on mobile and IoT devices. He highlights the benefits of using TensorFlow Lite, such as reduced latency, increased privacy, and improved connectivity. The Talk includes a demonstration of object recognition capabilities and a real-world example of using TensorFlow Lite to detect a disease affecting farmers in Tanzania. References to official TensorFlow documentation, Google IO conference, and TensorFlow courses on Coursera are provided.
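A hedged sketch of running an already-converted model with the TensorFlow Lite interpreter in Python (the model file and input are assumptions):

    # Sketch: on-device style inference with a converted .tflite model.
    import numpy as np
    import tensorflow as tf

    interpreter = tf.lite.Interpreter(model_path="model.tflite")  # assumed converted model file
    interpreter.allocate_tensors()

    inp = interpreter.get_input_details()[0]
    out = interpreter.get_output_details()[0]

    frame = np.zeros(inp["shape"], dtype=inp["dtype"])  # stand-in for a camera frame
    interpreter.set_tensor(inp["index"], frame)
    interpreter.invoke()
    print(interpreter.get_tensor(out["index"]))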
Boost Productivity with Keras Ecosystem
30 min
This Talk introduces the Keras ecosystem within TensorFlow and highlights its features, including tensor manipulations, automatic differentiation, and deployment. It also discusses the workflow and automation of hyperparameter tuning with Keras Tuner and AutoKeras. The Talk emphasizes the simplicity and productivity of using AutoKeras, which supports various tasks and advanced scenarios. It also mentions the challenges beginners face and provides resources for learning. Lastly, it touches on the use of TensorFlow and Keras in the research domain and the customization options in AutoKeras, including time series forecasting.
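As a quick illustration of the AutoKeras style of automation (Python; the trial count and epochs are kept tiny for the sketch):

    # Sketch: AutoKeras searches over architectures and hyperparameters automatically.
    import autokeras as ak
    from tensorflow.keras.datasets import mnist

    (x_train, y_train), (x_test, y_test) = mnist.load_data()

    clf = ak.ImageClassifier(max_trials=1, overwrite=True)  # tiny search, illustration only
    clf.fit(x_train[:1000], y_train[:1000], epochs=1)
    print(clf.evaluate(x_test[:200], y_test[:200]))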
Computer Vision Using OpenCV
32 min
Today's Talk explores image processing, computer vision, and their combination with machine learning. Image processing involves manipulating images, while computer vision extracts valuable information from images. Histograms are crucial in image processing as they represent the distribution of brightness values. Various image processing techniques can be used, such as thresholding and convolution. Computer vision techniques focus on extracting important features for object recognition and can be hand-tailored. Audio processing is not the focus of OpenCV, but TensorFlow libraries may be more suitable. Understanding the algorithms behind the code is important for robustness and effective debugging. Computer vision has applications in healthcare for cancer recognition and in agriculture for plant health monitoring.
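The basic operations mentioned (histograms, thresholding, convolution) map to a handful of OpenCV calls; a minimal Python sketch on a synthetic stand-in image:

    # Sketch: histogram, global threshold, and a simple convolution with OpenCV.
    import cv2
    import numpy as np

    img = np.random.randint(0, 256, (240, 320), dtype=np.uint8)   # stand-in grayscale image

    hist = cv2.calcHist([img], [0], None, [256], [0, 256])        # brightness distribution
    _, binary = cv2.threshold(img, 127, 255, cv2.THRESH_BINARY)   # global thresholding
    kernel = np.ones((3, 3), np.float32) / 9.0                    # 3x3 box blur
    blurred = cv2.filter2D(img, -1, kernel)                       # convolution

    print(hist.shape, binary.dtype, blurred.shape)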
The Hitchhiker's Guide to the Machine Learning Engineering Galaxy
112 min
Workshop
Alyona Galyeva
Are you a Software Engineer who has been tasked with deploying a machine learning or deep learning model for the first time in your life? Are you wondering what steps to take and how AI-powered software is different from traditional software? Then this is the right workshop to attend.
The internet offers thousands of articles and free courses showing how easy it is to train and deploy a simple AI model. In reality, however, it is difficult to integrate a real model into existing infrastructure and to debug, test, deploy, and monitor it properly. In this workshop, I will guide you through this process, sharing tips, tricks, and favorite open source tools that will make your life much easier. By the end of the workshop, you will know where to start your deployment journey, what tools to use, and what questions to ask.
The Evolution Revolution
31 min
The Talk discusses the challenges of implementing software solutions and the need for abstractions. It emphasizes the importance of innovation and implementing once to avoid complexity. The use of Brain.js in machine learning research and its practical applications are highlighted. The talk also mentions the benefits of using JavaScript and GPU.js for graphics processing. Overall, the Talk encourages simplicity, efficiency, and collaboration in software development.
Introduction to Machine Learning on the Cloud
146 min
Workshop
Dmitry Soshnikov
This workshop will be both a gentle introduction to Machine Learning and a practical exercise in using the cloud to train simple and not-so-simple machine learning models. We will start by using Automated ML to train a model to predict survival on the Titanic, and then move on to more complex machine learning tasks such as hyperparameter optimization and scheduling series of experiments on a compute cluster. Finally, I will show how Azure Machine Learning can be used to generate artificial paintings using Generative Adversarial Networks, and how to train a question-answering language model on COVID papers to answer COVID-related questions.
How to Machine Learn-ify any Product
33 min
In this Talk, an ML engineer from Facebook shares insights on when to use ML and a successful use case from Facebook. The speaker discusses the process of using ML for Facebook Portal calls, including data collection and model selection. The importance of precision and recall in ML models is emphasized, as well as the need for online evaluation and active learning. The Talk also touches on the challenges of data protection and label delay in ML model development.
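Since the talk stresses precision and recall, a two-line reminder of how they are computed (Python/scikit-learn; the labels are made up):

    # Sketch: precision = TP / (TP + FP), recall = TP / (TP + FN).
    from sklearn.metrics import precision_score, recall_score

    y_true = [1, 0, 1, 1, 0, 1, 0, 0]
    y_pred = [1, 0, 1, 0, 0, 1, 1, 0]
    print(precision_score(y_true, y_pred), recall_score(y_true, y_pred))  # 0.75 0.75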