Build a UI that Learns - Intelligent Prefetching with React and TensorFlow.js


How do you build a UI that LEARNS? Being able to learn and predict user behavior has many powerful applications; one of them is boosting UI performance by prefetching code and resources before the user reaches them. In this talk, we describe a high-level implementation of an intelligent prefetcher using ReactJS and TensorFlow.js. We use neural networks to learn the user's behaviour and leverage React's lazy-loading API to prefetch components according to predictions. This is a chance for frontend developers to explore the powerful combination of UI and AI.

This talk was presented at React Summit Remote Edition 2021. Check out the latest edition of this React conference.

FAQ

What is React's code splitting API and why is it useful?
React's code splitting API allows components to be imported only when they are needed, rather than at the initial loading of the application. This reduces the initial bundle size and speeds up the initial load time.

How can machine learning improve prefetching?
Machine learning, particularly using neural networks, can analyze user interaction sequences to predict future actions. This allows applications to prefetch content dynamically, improving efficiency and user experience.

What are dynamic imports in React?
Dynamic imports in React allow developers to load components or routes only when they are necessary, rather than at startup. This reduces initial load times and system resource demands, enhancing overall performance.

Why are LSTM networks used for predicting user behavior?
LSTM (Long Short-Term Memory) neural networks are ideal for sequence prediction problems, such as predicting user navigation patterns in React. They help in accurately forecasting user actions for effective prefetching of components.

What role does React context play in intelligent prefetching?
React context is used to provide global state management across components. It can store and manage predictive data and machine learning models, enabling components to access predictions and trigger prefetching based on predicted user actions.

What is intelligent prefetching in React?
Intelligent prefetching in React involves predicting user behavior to load components before they are needed, improving application performance by reducing wait times for component rendering.

What is TensorFlow.js and how does it fit in?
TensorFlow.js is a library that allows developers to implement machine learning models directly in the browser, using WebGL for computation. This enables React applications to run machine learning algorithms efficiently for tasks like user behavior prediction.

Eliran Natan
17 min
14 May, 2021


Video Summary and Transcription

Today's talk explores intelligent prefetching in React, including code splitting, lazy loading, and prefetching to improve performance. The use of neural networks for sequence prediction and training with actual user behavior is discussed. React context is used to link UI handlers with predictions and prefetching, enabling dynamic content import and improved user experience. The combination of AI and UI development is showcased in this personal project.

1. Introduction to Intelligent Prefetching in React

Short description:

Today's talk is about intelligent prefetching in React. We discuss the issues with bundle size and loading times in single-page applications and how React code splitting can help. We also explore the concept of prefetching components and using lazy loading for entire routes to improve performance. Additionally, we delve into the question of predicting user behavior and how supervised learning, specifically neural networks, can be used for sequence prediction.

Hello all and welcome to this session. My name is Eliran Natan, and I'm happy to be here at React Summit. Today I will talk about intelligent prefetching in React. We start by talking about prefetching of code, then we show how we can predict user behavior using machine learning, and finally we combine those solutions to build fast React applications.

So I want to start by talking about the context. Usually we are building these amazing single-page applications, and eventually we end up with a huge bundle size that causes problems: long loading times, slowness, and potentially a harmed user experience. What we can do is use the React code splitting API to import components on demand. So here, instead of just importing the chart component and increasing my bundle size, I'm lazy loading the chart component, and the actual fetching of code will happen only when we render it. But that does not solve the problem entirely, right? We are just shifting the fetching time somewhere else. Whenever the user actually wants to render this chart component, they will have to wait for the fetching to occur, and that can affect the loading time and harm the user experience. So what about this crazy idea? We could break this trade-off by prefetching the chart component. Using the time that the user is just staring at the screen, and before they ever reach it, we could prepare this content: we could use the dynamic import feature to dynamically import the component and then override this variable. Now, we could do that if we had some indication, some educated guess, about the user's next move. We could use this trick to further increase the performance of our applications if we do it for entire routes. So if I have this routing between Dashboard and Products, I could lazy load those routes, and then conditionally, if I have some indication about the user's next move, I could dynamically import those routes, saving a lot of time.
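As a rough sketch of both ideas (the file paths, route names, and the predictedRoute prop below are illustrative assumptions, not code from the talk), lazy loading keeps the route chunks out of the main bundle, while triggering the same dynamic import ahead of time warms the module cache so the eventual render resolves instantly:

```js
import React, { lazy, Suspense, useEffect } from 'react';

// Route components are split out of the main bundle and fetched on demand.
const Dashboard = lazy(() => import('./routes/Dashboard'));
const Products = lazy(() => import('./routes/Products'));

// Map a predicted route name to its dynamic import. Calling one of these
// fires the chunk request early; React.lazy above points at the same module,
// so the later render resolves from cache instead of waiting on the network.
const prefetchers = {
  dashboard: () => import('./routes/Dashboard'),
  products: () => import('./routes/Products'),
};

function App({ currentRoute, predictedRoute }) {
  // Prefetch the predicted route while the user is still on the current screen.
  useEffect(() => {
    const prefetch = prefetchers[predictedRoute];
    if (prefetch) prefetch();
  }, [predictedRoute]);

  return (
    <Suspense fallback={<div>Loading…</div>}>
      {currentRoute === 'products' ? <Products /> : <Dashboard />}
    </Suspense>
  );
}

export default App;
```

Because React.lazy and the prefetcher reference the same import() path, the bundler emits a single chunk and the browser fetches it at most once.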

So that brings us to a very interesting question: how can we predict the user's next move? Let's analyze this question for a bit. We have this complicated single-page application that is composed of a lot of different components. We can list the triggers, the actions, links, and buttons that the user can interact with, or at least those that are interesting in the sense that they would cause the rendering of other large components. Now, we could keep track of the user's behavior and obtain the ordered sequence of interactions that the user is performing. And the question is: based on this sequence, how can we predict the next element in that sequence? What we need is an intelligent mechanism that receives a sequence and returns a prediction, or estimation, of the next item in that sequence. In machine learning we call this problem sequence prediction. We can use supervised learning, specifically a neural network, in order to figure out this estimation. The neural network will receive an encoding of that sequence as a series of numbers, each number being an identification of a certain trigger, and return a probability distribution across all the possible triggers that the user can interact with. Each output would represent the chance that the corresponding element will be the next element in the sequence. So this is supervised learning, and the neural network acts as a function approximator: the function receives a sequence and returns a probability distribution.
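A minimal sketch of that interface, assuming a handful of made-up trigger names: each trigger gets a numeric id, a recent interaction history becomes a sequence of ids, and the predictor's output is read back with an argmax:

```js
// Each interesting trigger (link, button, menu item) gets a numeric id.
const TRIGGERS = ['dashboard', 'products', 'chart', 'settings'];
const triggerId = (name) => TRIGGERS.indexOf(name);

// A recent interaction history, oldest first, encoded as a sequence of ids.
const recentSequence = ['dashboard', 'products', 'chart'].map(triggerId);
console.log(recentSequence); // -> [0, 1, 2]

// The predictor is a function approximator: sequence in, probability
// distribution over all triggers out. The argmax picks the most likely next one.
function nextTrigger(probabilities) {
  const best = probabilities.indexOf(Math.max(...probabilities));
  return TRIGGERS[best];
}

// If the model returned [0.1, 0.1, 0.05, 0.75], we would expect 'settings' next.
console.log(nextTrigger([0.1, 0.1, 0.05, 0.75])); // -> 'settings'
```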

2. Training Neural Network with User Behavior

Short description:

We use the user's actual behavior as examples to train the neural network. One-hot encoding is used to break numerical correlations between elements. The LSTM type of neural network is crucial for sequence predictions. TensorFlow.js library enables implementation in the browser environment. The dot predict command provides probability predictions for each element. Training the network with the dot fit command using actual user behavior as examples can be time-consuming.

Now, once we have this output, we can just take the maximal argument and derive from that the next UI element the user is about to interact with. Of course, those predictions will be meaningless unless we train the network. In supervised learning, we have to supply the network with examples, and the best source for those examples is the actual behavior of the user. We can sample the user's behavior, take those sequences, and feed the neural network with those examples, basically telling the neural network: this is the actual behavior of the user, please adjust your predictions accordingly.
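One simple way to build those examples, sketched here under the assumption of a fixed window length of three, is to slide a window over the logged interactions and treat whatever came next as the label:

```js
// Turn a raw interaction log into (sequence, next-trigger) training pairs
// using a sliding window. The window length must match the model's input length.
const WINDOW = 3;

function toTrainingExamples(log) {
  const examples = [];
  for (let i = 0; i + WINDOW < log.length; i++) {
    examples.push({
      sequence: log.slice(i, i + WINDOW), // what the user did
      label: log[i + WINDOW],             // what the user did next
    });
  }
  return examples;
}

// A logged session [0, 1, 2, 3, 1] yields two examples:
// { sequence: [0, 1, 2], label: 3 } and { sequence: [1, 2, 3], label: 1 }
console.log(toTrainingExamples([0, 1, 2, 3, 1]));
```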

Now, I think it's cool that we are using the user to train the application. The more the user uses the application, the more they train it, and when the user changes their behavior over time, the application can adapt. Speaking specifically about implementation, this predictor can be implemented as a neural network. For the input of that neural network we are using one-hot encoding instead of just plain numbers. This is because we want to break any numerical correlation between those elements: there is no meaning in saying that Dashboard is smaller than Products just because it is represented by the number three rather than the number 21. One-hot encoding is the way to break this correlation.
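As a small illustration (the trigger count of four is an assumption), one-hot encoding turns each trigger id into a vector that is all zeros except for a single one, so no trigger is numerically larger than another:

```js
// One-hot encode a trigger id: a vector of zeros with a single one, so that
// trigger 3 is not "bigger" than trigger 1; they are simply different classes.
function oneHot(id, numTriggers) {
  const vector = new Array(numTriggers).fill(0);
  vector[id] = 1;
  return vector;
}

// With 4 possible triggers, the sequence [0, 2] becomes:
// [[1, 0, 0, 0], [0, 0, 1, 0]]
console.log([0, 2].map((id) => oneHot(id, 4)));
```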

Now, the most important thing in the architecture of the neural network is that we use the LSTM type of layer, which is the type best suited to sequence predictions. It's also important to notice that the number of units in the input layer should match the length of the sequences we are working with, and the number of units in the output layer should match the number of possible triggers the user can interact with. Once we have the output from the network, we can attach it to the corresponding element in the user interface and assume that the user is about to hit this element.
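Anticipating the TensorFlow.js implementation described in the next paragraph, a model matching this description might be defined as in the following sketch; the sequence length, trigger count, and LSTM size here are hypothetical values for illustration:

```js
import * as tf from '@tensorflow/tfjs';

// Hypothetical sizes: sequences of 3 interactions over 4 possible triggers.
const SEQUENCE_LENGTH = 3;
const NUM_TRIGGERS = 4;

// Stack the layers one after the other: an LSTM layer that reads the one-hot
// encoded sequence, then a dense softmax layer with one unit per trigger.
const model = tf.sequential();
model.add(tf.layers.lstm({
  units: 16,                                   // size of the LSTM hidden state
  inputShape: [SEQUENCE_LENGTH, NUM_TRIGGERS], // one one-hot vector per step
}));
model.add(tf.layers.dense({
  units: NUM_TRIGGERS,    // one output per possible trigger
  activation: 'softmax',  // outputs form a probability distribution
}));

model.compile({ optimizer: 'adam', loss: 'categoricalCrossentropy' });
```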

Now, implementation-wise, we have to implement this mechanism somewhere in the browser environment, and we can do that using the TensorFlow.js library. TensorFlow.js is based on WebGL, and it allows us to implement machine learning algorithms within the browser environment. We use the sequential command to stack layers in the network one after the other, specifying the LSTM type of layer and the shape of the input as a matrix of binary numbers. Eventually, once we have this network, we can use the .predict command, supplying a sequence and asking it to predict the probability for each element. Now, those predictions will be meaningless unless we train the network, as we said, and for the training part we use the .fit command. We supply a batch of sequences together with the corresponding labels, where each label represents an element. So we take the actual behavior of the user and supply it as examples to the fitting mechanism. Now, training the network could take time.
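Those two calls might look roughly like this, continuing the model sketch above; the tensor shapes and the epoch count are assumptions:

```js
import * as tf from '@tensorflow/tfjs';

// Ask the trained model for the probability of each trigger being next.
async function predictNext(model, encodedSequence) {
  // encodedSequence: SEQUENCE_LENGTH one-hot vectors; wrap it in a batch of 1.
  const input = tf.tensor3d([encodedSequence]);
  const output = model.predict(input);
  const probabilities = Array.from(await output.data());
  input.dispose();
  output.dispose();
  return probabilities;
}

// Train on observed behavior: sequences the user performed and the one-hot
// labels of what they actually did next. Runs asynchronously in the browser.
async function train(model, sequences, labels) {
  const xs = tf.tensor3d(sequences); // [batch, SEQUENCE_LENGTH, NUM_TRIGGERS]
  const ys = tf.tensor2d(labels);    // [batch, NUM_TRIGGERS]
  await model.fit(xs, ys, { epochs: 20, shuffle: true });
  xs.dispose();
  ys.dispose();
}
```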