Search Speed: Making Expedia Flights Faster


Performance plays a crucial role in the responsiveness and functionality of an application. This talk takes a deeper look at prefetching, preemptive search, faster caching, frontend optimisations, and many other performance initiatives.

This talk was presented at JSNation 2024. Check out the latest edition of this JavaScript conference.

FAQ

Expedia aimed to improve the performance of their flight search page to decrease latency, enhance user experience, and maintain user attention.

The common performance metrics include First Contentful Paint (FCP), First Input Delay, Cumulative Layout Shift, and Time to Interactive. Additionally, derived metrics such as Page Usable Time and Non-Supply Overhead are also monitored.

Prefetching involves fetching resources beforehand during the browser's idle time. This allows the page to retrieve resources faster from the prefetch cache instead of the CDN path, which significantly improves performance for new users.

Preemptive search involves predicting and caching search responses based on user inputs from the previous page. When the user lands on the flight search page, the cached response is delivered, improving performance by nearly 50% on both web and native platforms.

Micro queries break down a bulky page-level query into smaller chunks, allowing for incremental loading of information. This helps in displaying important page-level information faster and improves overall page performance by nearly 20%.

Asynchronous query execution ensures that network calls are not waiting for each other unless dependent. This leads to more efficient loading and improved performance, as observed by an 8% improvement in Expedia's experiments.

Micro front-end architecture involves breaking down page-level components into shareable, reusable packages. This makes the components maintainable and optimizes them efficiently, improving build size and overall performance.

Expedia has an automation process that monitors performance thresholds and alerts the team if performance degrades due to specific commits. Continuous monitoring helps in maintaining and improving performance over time.
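As a rough illustration of what such an automated check can look like (the metrics file, budget values, and script are assumptions for illustration, not Expedia's actual tooling), a CI step can compare a measured metric against a budget and fail the offending commit:

```ts
// A minimal sketch of an automated performance-threshold check for CI.
// The metrics file, budgets, and exit behaviour below are illustrative only.
import { readFileSync } from 'node:fs';

const BUDGETS_MS: Record<string, number> = {
  pageUsableTime: 3500, // fail the build if the measured value exceeds this
};

const measured: Record<string, number> = JSON.parse(
  readFileSync('perf-results.json', 'utf8')
);

let failed = false;
for (const [metric, budget] of Object.entries(BUDGETS_MS)) {
  const value = measured[metric];
  if (value !== undefined && value > budget) {
    console.error(`${metric}: ${value}ms exceeds budget of ${budget}ms`);
    failed = true;
  }
}

process.exit(failed ? 1 : 0);
```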

The key performance improvements include a 52% reduction in page usable time for the top 90th percentile and significant improvements in non-supply overhead.

More detailed information can be found in the Medium blog and other useful links provided by Heena. These references cover the performance experiments in detail, along with insights on prefetching and performance scoring.

Heena Gupta
10 min
17 Jun, 2024


Video Summary and Transcription

The Talk discusses how the engineering team at Expedia improved the performance of customer flight search by using various metrics and techniques. These include prefetching resources during browser idle time, preemptive search to predict responses, and optimizing performance through micro queries and a micro front-end architecture. The team also focused on improving build and package size limits for better code analysis. Performance monitoring and automation were implemented for ongoing performance improvements.

1. Improving Flight Search Performance

Short description:

Hi everyone. Heena from the engineering team at Expedia discusses how we improved the performance of customer flight search. The motivation behind this improvement is the impact of latency on user experience and attention. We use Lighthouse metrics such as FCP, First Input Delay, Cumulative Layout Shift, and Time to Interactive. Page usable time and non-supply overhead are two derived perf metrics we monitor. Prefetching resources during browser idle time allows faster retrieval, especially for new users. The next experiment is preemptive search.

Hi everyone. I'm Heena, from the engineering team at Expedia. This talk is about search speed: how we made flight search faster and dramatically improved the performance of customer flight search on Expedia.

Before going deeper into the topic, let me first share the motivation, what led us to improve the performance of the flight search page. First of all, on the flight search page, search traffic is at its peak. If the page is not performance efficient, latency increases, the user experience gets impacted, and thereby user attention is impacted as well.

Also, before getting into the perf experiments, let me first bring you to the topic of performance metrics. For measurement, there is a common set of Lighthouse metrics that can be monitored. A few important ones for our pages are First Contentful Paint, commonly known as FCP, First Input Delay, Cumulative Layout Shift, and Time to Interactive. Beyond that, we can also make use of derived perf metrics. Two of those that helped us at Expedia monitor performance for our users are page usable time and non-supply overhead. Page usable time is the metric that is marked when the main component of the flight search page gets mounted. Non-supply overhead is the overall page usable time on the flight search page minus the supply overhead, that is, the total amount of time taken by Expedia to reach the flight search component without depending on supply. In addition, we have put a size limit on the flight search page to make sure that neither the bundle size nor the packages within flight search exceed their thresholds.
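As a rough illustration of how a derived metric like page usable time can be captured, here is a minimal sketch using the browser Performance API; the hook name, mark name, and console reporting are assumptions for illustration, not Expedia's actual instrumentation.

```ts
// A minimal sketch: mark "page usable time" when the page's main component mounts.
// The hook name, mark name, and console reporting are illustrative only.
import { useEffect } from 'react';

export function useMarkPageUsable(markName = 'page-usable-time'): void {
  useEffect(() => {
    performance.mark(markName);          // visible in DevTools user timings
    const elapsedMs = performance.now(); // milliseconds since navigation start
    // In a real setup this value would be sent to a RUM/analytics pipeline.
    console.log(`${markName}: ${Math.round(elapsedMs)}ms`);
  }, [markName]);
}

// Usage, inside the flight results page's main component:
//   useMarkPageUsable();
```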

Now let's come to the first perf experiment, which is prefetching. Prefetching means fetching resources beforehand, during the browser's idle time. When we land on the current page, we then fetch those resources not from the CDN path but from the prefetch cache, which helps us retrieve them faster. Before prefetching, it's important to plan which resources to prefetch: not every resource needs to be prefetched on the previous page, only the important ones, ideally those commonly used across multiple pages, so that retrieval is faster. Prefetching is most impactful for new users, the ones who are not yet serving resources from the browser cache. For existing users, the resources already come from the browser cache, so prefetching might not be impactful there, and likewise it is not impactful when the page is opened in incognito. But for a new user, the impact is significant. The next experiment we have is preemptive search.
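As a minimal sketch of the idea (the resource URLs and the setTimeout fallback are assumptions, not Expedia's actual prefetch list), cross-page resources can be prefetched once the browser goes idle:

```ts
// A minimal sketch of idle-time prefetching. Resource URLs are placeholders.
const RESOURCES_TO_PREFETCH = [
  '/static/js/flight-search.chunk.js',
  '/static/css/flight-search.css',
];

function prefetch(url: string): void {
  const link = document.createElement('link');
  link.rel = 'prefetch'; // low-priority fetch into the prefetch cache
  link.as = url.endsWith('.css') ? 'style' : 'script';
  link.href = url;
  document.head.appendChild(link);
}

// Wait for browser idle time so prefetching never competes with the current page.
if ('requestIdleCallback' in window) {
  requestIdleCallback(() => RESOURCES_TO_PREFETCH.forEach(prefetch));
} else {
  setTimeout(() => RESOURCES_TO_PREFETCH.forEach(prefetch), 2000);
}
```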

2. Optimizing Performance and Architecture

Short description:

Preemptive search predicts the response before the user lands on the flight search page, improving performance by 50% on web and native. Micro queries fetch responses in chunks, improving page performance by 20%. Async queries and an improved network waterfall result in an 8% performance improvement. Micro front-end architecture breaks down page-level components into shareable packages, optimizing performance and ensuring maintainability.

By preemptive search, we mean that we preemptively call the search response; that is, the response is predicted even before the user lands on the flight search page. We do this by knowing the search inputs on the previous page, which is the home page for flight search. As soon as the user triggers the search button, we know which search response the user is going to ask for, so we cache the response beforehand, and when the user lands on the flight search page the cached response is delivered. This was a very important experiment in terms of perf, and it helped us improve performance by nearly 50% on both web and native.
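A minimal sketch of the idea, assuming a single-page app where an in-memory cache survives the client-side navigation from the home page to the results page; the endpoint, types, and function names are hypothetical, not Expedia's actual API:

```ts
// Start (and cache) the search request the moment the user hits "Search" on the
// home page, so the results page can reuse the already in-flight response.
type SearchCriteria = { origin: string; destination: string; date: string };

const pendingSearches = new Map<string, Promise<unknown>>();
const keyOf = (c: SearchCriteria) => `${c.origin}|${c.destination}|${c.date}`;

// Called from the home page's search-button handler, before navigation.
export function preemptiveSearch(criteria: SearchCriteria): void {
  const key = keyOf(criteria);
  if (!pendingSearches.has(key)) {
    const request = fetch('/api/flight-search', {
      method: 'POST',
      headers: { 'Content-Type': 'application/json' },
      body: JSON.stringify(criteria),
    }).then((res) => res.json());
    pendingSearches.set(key, request);
  }
}

// Called from the flight search page: reuse the cached promise if it exists.
export function getSearchResults(criteria: SearchCriteria): Promise<unknown> {
  preemptiveSearch(criteria); // no-op if the request is already in flight
  return pendingSearches.get(keyOf(criteria))!;
}
```

For a full page navigation rather than a client-side route change, the cached response would instead need to live server-side or in session storage keyed by the same search criteria.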

The next perf experiment we have is micro queries. What we were doing initially on the flight search page was that a single main page-level query gave us all the responses at once. Once that main, bulky page query was broken down into micro queries, we were able to fetch the responses in chunks rather than loading everything at once. This makes sure that the user can see the important page-level information first, while information that is not needed during page load is fetched later. With that, we were able to improve page performance by nearly 20%. We were also able to separate out some information, such as the fare details, that the user does not need immediately.
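To illustrate the shape of this change, here is a small sketch of splitting one bulky page query into a critical micro query plus deferred ones; the operation names and the /graphql endpoint are placeholders, not Expedia's schema:

```ts
// A minimal sketch of "micro queries": fetch the critical listing data first and
// defer non-critical data (e.g. fare details) until after the initial render.
type SearchCriteria = { origin: string; destination: string; date: string };

async function runQuery(operationName: string, variables: SearchCriteria): Promise<unknown> {
  const res = await fetch('/graphql', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ operationName, variables }),
  });
  return res.json();
}

export async function loadFlightSearchPage(criteria: SearchCriteria): Promise<void> {
  // Critical micro query: just enough data to render the results list quickly.
  const listings = await runQuery('FlightListingsQuery', criteria);
  console.log('render results', listings);

  // Non-critical micro query: kicked off after the page is already usable.
  runQuery('FareDetailsQuery', criteria).then((fares) => console.log('render fare details', fares));
}
```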

Another important thing when it comes to performance is making sure that queries happen in an async manner. The first step, for any page we analyze, is to look at the waterfall diagram for the page and confirm that the network calls are happening as expected. It's important that calls are not waiting for each other unless they actually depend on each other. In our case, the loading and loaded queries are independent of each other, so we made sure these calls are triggered at the same time rather than waiting for one another. We observed nearly an 8% improvement from this experiment, by changing how the queries are executed and improving the waterfall diagram for the page.
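A small sketch of the difference; the endpoints are placeholders standing in for two independent queries such as the loading and loaded ones mentioned above:

```ts
// Sequential: the second request cannot start until the first one finishes.
async function fetchSequentially() {
  const loading = await fetch('/api/loading-state').then((r) => r.json());
  const loaded = await fetch('/api/loaded-results').then((r) => r.json());
  return { loading, loaded };
}

// Concurrent: both independent requests start immediately, so total time is
// roughly the slower of the two rather than their sum, flattening the waterfall.
async function fetchConcurrently() {
  const [loading, loaded] = await Promise.all([
    fetch('/api/loading-state').then((r) => r.json()),
    fetch('/api/loaded-results').then((r) => r.json()),
  ]);
  return { loading, loaded };
}
```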

The next thing is ensuring that a micro front-end architecture is followed on the page. Whether it helps depends on the page and on the requirements; for us, the micro front-end architecture has been useful so far. What it meant is that we broke the page-level components, such as offer details and fare details, into shareable packages, and we were able to make those shared packages flexible. The packages are also reusable across pages. For example, the Flight Search and Flight Information pages are correlated, so we were able to reuse those shared packages between them. We can also optimize at the package level more efficiently than at the page level. Another thing to note is that the packages are maintainable: we were able to define who maintains which package, so there is a clear sense of ownership for each package.
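As a loose sketch of what the shared-package split can look like (package names, paths, and the budget note are invented for illustration, not Expedia's real packages), each page-level concern lives in its own independently owned package that multiple pages import:

```ts
// A minimal sketch of a shareable, reusable package in a micro front-end setup.
// Package name, components, and the size-budget note are illustrative only.

// packages/offer-details/src/index.ts
export interface Offer {
  id: string;
  price: number;
  currency: string;
}

// A small, focused helper that both Flight Search and Flight Information can
// consume, so it is optimized and owned in one place.
export function formatOfferPrice(offer: Offer): string {
  return `${offer.price.toFixed(2)} ${offer.currency}`;
}

// apps/flight-search/resultsPage.ts
//   import { formatOfferPrice } from '@flights/offer-details';
// apps/flight-information/detailsPage.ts
//   import { formatOfferPrice } from '@flights/offer-details';
//
// A per-package size budget (for example via a bundler or size-checking tool)
// can then fail the build if the bundled package grows past an agreed threshold.
```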

Check out more articles and videos

We constantly curate articles and videos that might spark your interest, skill you up, or help you build a stellar career.

A Guide to React Rendering Behavior
React Advanced Conference 2022
25 min
Top Content
This transcription provides a brief guide to React rendering behavior. It explains the process of rendering, comparing new and old elements, and the importance of pure rendering without side effects. It also covers topics such as batching and double rendering, optimizing rendering and using context and Redux in React. Overall, it offers valuable insights for developers looking to understand and optimize React rendering.
Speeding Up Your React App With Less JavaScript
React Summit 2023
32 min
Top Content
Miško Hevery, the creator of Angular and AngularJS, discusses the challenges of website performance and JavaScript hydration. He explains the differences between client-side and server-side rendering and introduces Qwik as a solution for efficient component hydration. Miško demonstrates examples of state management and intercommunication using Qwik. He highlights the performance benefits of using Qwik with React and emphasizes the importance of reducing JavaScript size for better performance. Finally, he mentions the use of Qwik in both MPA and SPA applications for improved startup performance.
React Concurrency, Explained
React Summit 2023
23 min
Top Content
React 18's concurrent rendering, specifically the useTransition hook, optimizes app performance by allowing non-urgent updates to be processed without freezing the UI. However, there are drawbacks such as longer processing time for non-urgent updates and increased CPU usage. The useTransition hook works similarly to throttling or debouncing, making it useful for addressing performance issues caused by multiple small components. Libraries like React Query may require the use of alternative APIs to handle urgent and non-urgent updates effectively.
The Future of Performance Tooling
JSNation 2022
21 min
Top Content
Today's Talk discusses the future of performance tooling, focusing on user-centric, actionable, and contextual approaches. The introduction highlights Addy Osmani's expertise in performance tools and his passion for DevTools features. The Talk explores the integration of user flows into DevTools and Lighthouse, enabling performance measurement and optimization. It also showcases the import/export feature for user flows and the collaboration potential with Lighthouse. The Talk further delves into the use of flows with other tools like web page test and Cypress, offering cross-browser testing capabilities. The actionable aspect emphasizes the importance of metrics like Interaction to Next Paint and Total Blocking Time, as well as the improvements in Lighthouse and performance debugging tools. Lastly, the Talk emphasizes the iterative nature of performance improvement and the user-centric, actionable, and contextual future of performance tooling.
A Framework for Managing Technical Debt
TechLead Conference 2023
35 min
Top Content
Today's Talk discusses the importance of managing technical debt through refactoring practices, prioritization, and planning. Successful refactoring requires establishing guidelines, maintaining an inventory, and implementing a process. Celebrating success and ensuring resilience are key to building a strong refactoring culture. Visibility, support, and transparent communication are crucial for addressing technical debt effectively. The team's responsibilities, operating style, and availability should be transparent to product managers.
Debugging JS
React Summit 2023
24 min
Top Content
Debugging JavaScript is a crucial skill that is often overlooked in the industry. It is important to understand the problem, reproduce the issue, and identify the root cause. Having a variety of debugging tools and techniques, such as console methods and graphical debuggers, is beneficial. Replay is a time-traveling debugger for JavaScript that allows users to record and inspect bugs. It works with Redux, plain React, and even minified code with the help of source maps.

Workshops on related topic

React Performance Debugging Masterclass
React Summit 2023
170 min
Top Content
Featured Workshop, Free
Ivan Akulov
Ivan’s first attempts at performance debugging were chaotic. He would see a slow interaction, try a random optimization, see that it didn't help, and keep trying other optimizations until he found the right one (or gave up).
Back then, Ivan didn’t know how to use performance devtools well. He would do a recording in Chrome DevTools or React Profiler, poke around it, try clicking random things, and then close it in frustration a few minutes later. Now, Ivan knows exactly where and what to look for. And in this workshop, Ivan will teach you that too.
Here’s how this is going to work. We’ll take a slow app → debug it (using tools like Chrome DevTools, React Profiler, and why-did-you-render) → pinpoint the bottleneck → and then repeat, several times more. We won’t talk about the solutions (in 90% of the cases, it’s just the ol’ regular useMemo() or memo()). But we’ll talk about everything that comes before – and learn how to analyze any React performance problem, step by step.
(Note: This workshop is best suited for engineers who are already familiar with how useMemo() and memo() work – but want to get better at using the performance tools around React. Also, we’ll be covering interaction performance, not load speed, so you won’t hear a word about Lighthouse 🤐)
Building WebApps That Light Up the Internet with QwikCity
JSNation 2023
170 min
Featured Workshop, Free
Miško Hevery
Building instant-on web applications at scale has been elusive. Real-world sites need tracking, analytics, and complex user interfaces and interactions. We always start with the best intentions but end up with a less-than-ideal site.
QwikCity is a new meta-framework that allows you to build large-scale applications with constant startup performance. We will look at how to build a QwikCity application and what makes it unique. The workshop will show you how to set up a QwikCity project, how routing works with layouts, and how the demo application fetches data and presents it to the user in an editable form. Finally, you will see how to use authentication. These are all the basic parts of any large-scale application.
Along the way, we will also look at what makes Qwik unique, and how resumability enables constant startup performance no matter the application complexity.
Build Modern Applications Using GraphQL and Javascript
Node Congress 2024
152 min
Featured Workshop
Emanuel Scirlet
Miguel Henriques
2 authors
Come and learn how you can supercharge your modern and secure applications using GraphQL and JavaScript. In this workshop we will build a GraphQL API and demonstrate the benefits of the query language for APIs and the use cases that are a fit for it. Basic JavaScript knowledge is required.
Next.js 13: Data Fetching Strategies
React Day Berlin 2022
53 min
Top Content
Workshop, Free
Alice De Mauro
- Introduction
- Prerequisites for the workshop
- Fetching strategies: fundamentals
- Fetching strategies – hands-on: fetch API, cache (static VS dynamic), revalidate, suspense (parallel data fetching)
- Test your build and serve it on Vercel
- Future: Server components VS Client components
- Workshop easter egg (unrelated to the topic, calling out accessibility)
- Wrapping up
Building a Shopify App with React & Node
React Summit Remote Edition 2021
87 min
Top Content
Workshop, Free
Jennifer Gray
Hanna Chen
2 authors
Shopify merchants have a diverse set of needs, and developers have a unique opportunity to meet those needs by building apps. Building an app can be tough work, but Shopify has created a set of tools and resources to help you build out a seamless app experience as quickly as possible. Get hands-on experience building an embedded Shopify app using the Shopify App CLI, Polaris, and Shopify App Bridge. We'll show you how to create an app that accesses information from a development store and can run in your local environment.
React Performance Debugging
React Advanced Conference 2023
148 min
Workshop
Ivan Akulov
Ivan’s first attempts at performance debugging were chaotic. He would see a slow interaction, try a random optimization, see that it didn't help, and keep trying other optimizations until he found the right one (or gave up).
Back then, Ivan didn’t know how to use performance devtools well. He would do a recording in Chrome DevTools or React Profiler, poke around it, try clicking random things, and then close it in frustration a few minutes later. Now, Ivan knows exactly where and what to look for. And in this workshop, Ivan will teach you that too.
Here’s how this is going to work. We’ll take a slow app → debug it (using tools like Chrome DevTools, React Profiler, and why-did-you-render) → pinpoint the bottleneck → and then repeat, several times more. We won’t talk about the solutions (in 90% of the cases, it’s just the ol’ regular useMemo() or memo()). But we’ll talk about everything that comes before – and learn how to analyze any React performance problem, step by step.
(Note: This workshop is best suited for engineers who are already familiar with how useMemo() and memo() work – but want to get better at using the performance tools around React. Also, we’ll be covering interaction performance, not load speed, so you won’t hear a word about Lighthouse 🤐)