Fast, Flexible Virtual Scrolling With Functional Programming

Virtual scrolling is a clever way to reduce rendering overhead, and is especially helpful as the complexity of websites increases. Although there are many plug-and-play libraries which support virtual scrolling in various frameworks, rolling your own is surprisingly easy. In addition, this allows one to maximally optimize each viewport for its content, unlocking additional performance savings.

In this talk, you will learn how to create a flexible virtual scrolling viewport from scratch, and make it performant using concepts from functional programming. We will follow an implementation of a virtual scrolling viewport using web components from start to finish, including variable element heights, skeletons, and asynchronous child component rendering. We will then optimize it using memoization, wrapping more-performant imperative constructs in a functional layer, and using element recycling to significantly reduce render churn when using stateless child components.

Finally, we will touch on the state of the art in virtual DOM rendering, and how this approach compares to the use of the new `content-visibility` CSS property.

This talk was presented at JSNation US 2024.

FAQ

What is virtual scrolling?
Virtual scrolling is a technique where only the visible elements in a list are rendered, improving performance by reducing the number of items loaded into the DOM at any given time.

Why is virtual scrolling beneficial for large lists?
Virtual scrolling is beneficial for large lists because it improves initial page load time and scrolling performance by rendering only the elements currently in view, rather than all elements at once.

How do you implement virtual scrolling?
To implement virtual scrolling, you need to compute the total height of rendered elements, determine which elements should be rendered based on the scroll position, and calculate the alignment of these elements within the viewport.

How can virtual scrolling be optimized?
Optimizations include memoizing intermediate values, using binary search for index calculations, delaying the rendering of non-essential parts, and reusing a pool of list elements to avoid unnecessary DOM manipulation.

What does the CSS `content-visibility` property do?
CSS `content-visibility` allows elements outside the viewport to be skipped during rendering while still being part of the DOM and accessibility tree, so off-screen content remains searchable and accessible even though the browser skips the rendering work.

What is the downside of `content-visibility` compared to virtual scrolling?
A downside is that all elements must still be constructed in the DOM initially, which can be slow, making it less suitable for lists with many small items compared to virtual scrolling.

How can you avoid creating new DOM elements while scrolling?
You can use a pool of reusable elements to avoid creating new elements, instead shifting the contents of these elements as needed, which reduces the need for constant DOM updates.

Why render a buffer of extra elements around the viewport?
Using a buffer by rendering extra elements before and after the viewport helps absorb latency during fast scrolling, reducing flickering and improving user experience.

How do you handle lists with variable element heights?
For lists with variable element heights, you compute the cumulative sum of item heights and use it to find the start and end indices for rendering, rather than using basic division.

What is the main limitation of virtual scrolling?
The limitation is that un-rendered elements don't appear in browser searches or accessibility tools like screen readers, requiring alternative solutions such as implementing a search bar or using new web standards like CSS `content-visibility`.

Adam Niederer
16 min
21 Nov, 2024

Video Summary and Transcription
Today's talk focuses on implementing virtual scrolling to improve performance and flexibility of lists. The process involves computing the height of the viewport, scroll position, and individual list item height. By rendering only the elements within the viewport, the initial page load and scrolling performance are enhanced. Various optimization techniques are discussed, including memoization, binary search, and delayed rendering. These techniques significantly improve scrolling performance and provide a better user experience. Additional techniques such as using skeletons, element pooling, and functional purity can further optimize rendering. Advanced techniques, like moving elements within the pool and using `content-visibility`, yield substantial performance improvements. However, `content-visibility` is better suited for large pages with few large sections. Overall, the talk offers valuable insights into virtual scrolling and its limitations.

1. Introduction to Virtual Scrolling

Short description:

Today we're going to implement a basic virtually scrolled list from scratch, then talk about the improvements we can make to boost the flexibility and usability of the list, and then make the list as fast as it can possibly be. Virtual scrolling helps improve both initial page load and scrolling performance. To compute how tall the elements are, which elements to render, and where to render them, we need the height of the viewport, the scroll position, and the height of an individual item in the list. Let's code it by computing the total height of the elements and figuring out which elements to render based on the viewport and scroll position.

Hey, I'm Adam, and welcome to my talk about virtual scrolling. Today we're going to implement a basic virtually scrolled list from scratch, then talk about the improvements we can make to boost the flexibility and usability of the list, and then make the list as fast as it can possibly be. This talk will help you understand how virtual scrolling works and might be able to help you get a few more frames per second out of it, even if you're using a library. So let's get started.

For every scrolled viewport, virtual or not, there are two key parts. An outer viewport element that's one height, and an inner element or list of elements that's a larger height. If it were shorter, there'd be no need to scroll, of course. Traditional scrolled lists render all of their items into the DOM, even those which aren't visible, which can cause lag both when rendering that list initially and while scrolling. Virtual scrolling, on the other hand, only renders the elements that are being viewed at the moment. This helps us improve both initial page load and scrolling performance. Usually when your list size hits triple digits, it's time to start thinking about virtual scrolling, depending on how long it takes to render your individual items. So let's take a look at how to virtually scroll a list.

First, let's make a super basic minimum viable product. For this, I'm going to assume all of the list items will be the same height, but we'll come back to this a little bit later. To virtually scroll something, we actually only need to compute three things. One, how tall is the stuff that we're rendering? Two, which elements in the list should be rendered? And three, how far down inside the inner element should we render it so that it aligns with the viewport? To compute these three things, we're going to need three pieces of data from the DOM. First, the height of the viewport. Self-explanatory. Next, we need the scroll position, which is the distance from the top of the viewport to the top of the inner element. We can get this from the viewport's scrollTop property. Finally, we'll need the height of an individual item in the list, inclusive of borders and margins. They're all the same size in this example, so that's easy enough. Now let's code it.

So the first thing we need to compute is the total height of the stuff we're rendering, which we set as the height of the inner element. The browser can't do it for you like in traditional scrolling because we're not rendering all of the elements at once. So for our basic list, we can set it to the height of an element times the number of elements. Next, we need to figure out which elements we're rendering. For this, we need the height of the viewport and the scroll position within the viewport. From that, the index of the first element we want to render is the floor of the scroll position divided by the height of each element. Similarly, the last is the ceiling of the scroll position plus the viewport height, divided by the height of each element.
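
As a concrete reference, here is a minimal sketch of that math in TypeScript; the names (itemHeight, itemCount, scrollTop, viewportHeight) are illustrative rather than taken from the talk's code.

```ts
// Minimal sketch of the fixed-height case: total inner height plus the first and
// last indices to render, clamped to the bounds of the list.
interface FixedWindow {
  totalHeight: number; // height to give the inner element
  startIndex: number;  // first item to render
  endIndex: number;    // one past the last item to render
}

function fixedHeightWindow(
  itemHeight: number,
  itemCount: number,
  scrollTop: number,
  viewportHeight: number
): FixedWindow {
  const totalHeight = itemHeight * itemCount;
  const startIndex = Math.max(0, Math.floor(scrollTop / itemHeight));
  const endIndex = Math.min(itemCount, Math.ceil((scrollTop + viewportHeight) / itemHeight));
  return { totalHeight, startIndex, endIndex };
}
```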

2. Implementing Virtual Scrolling

Short description:

We only render the elements from our start to end indices and move the list items to the part of the inner element shown in the viewport. To support variable element heights, we compute the cumulative sum of list item heights and find the start and end indices based on the scroll position and viewport height.

So now our code will look a bit like this. We only render the elements from our start to end indices, and to react to scrolling, we can simply use some reactive state. The viewport height can be done with a ref, which I'm not showing here.

Of course, we don't want to render all of these elements at the very beginning of the inner element all of the time. We now need to move the list items down to the part of the inner element that's being shown in the viewport. Thankfully, that's easy. Translate it by the start index times the list item height. Done. That's all you need to have a functional virtually scrolled list. The code fits on a slide.
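
In plain DOM terms the wiring might look roughly like this; it reuses fixedHeightWindow from the earlier sketch, and the element names are assumptions rather than the talk's actual markup.

```ts
// viewport is the scrollable outer element, inner is sized to the full list height,
// and content holds the rendered slice and is translated down into view.
function renderWindow(
  viewport: HTMLElement,
  inner: HTMLElement,
  content: HTMLElement,
  itemHeight: number,
  itemCount: number,
  renderItem: (index: number) => HTMLElement
): void {
  const { totalHeight, startIndex, endIndex } = fixedHeightWindow(
    itemHeight, itemCount, viewport.scrollTop, viewport.clientHeight
  );
  inner.style.height = `${totalHeight}px`;
  content.style.transform = `translateY(${startIndex * itemHeight}px)`;
  content.replaceChildren(
    ...Array.from({ length: endIndex - startIndex }, (_, k) => renderItem(startIndex + k))
  );
}

// Re-render on scroll; a real implementation would batch this into animation frames.
// viewport.addEventListener('scroll', () =>
//   renderWindow(viewport, inner, content, itemHeight, itemCount, renderItem));
```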

There are some points we can improve to make the list more flexible to work with, though, and improve the user's experience when scrolling the list. The biggest limitation is that all of the elements have to be of the same height. So let's work around that. To support variable element heights, the main thing we need to change is our method of finding the first and last elements to render. Instead of doing basic division, we first need to compute the cumulative sum of our list items' heights. Then to compute our start index, we take our scroll position and find the index of the greatest number that is less than our scroll position in our cumulatively summed heights. The index of that number is our start index. For the end index, we want to do something similar. We want to take our scroll position plus our viewport height, then find the smallest number that is greater than that sum in our cumulatively summed heights. The index of that number is our end index.

3. Optimizing Virtual Scrolling

Short description:

The index of that number is our end index. For this, the transform offset becomes the cumulative height at our start index, and the total height of the inner element becomes the simple sum of all of our list items' heights. This assumes that your list of element heights can be computed before rendering.

That might sound like a lot, but we can do it with a search that returns the indices on either side of the number we're searching for. For this, the transform offset becomes the cumulative height at our start index, and the total height of the inner element becomes the simple sum of all of our list items' heights. Done. Of course, this assumes that your list of element heights can be computed before rendering. Some libraries let you estimate the height of the elements, and then they will measure them once they've actually rendered and update accordingly. I personally don't like this approach because it causes scroll jank when the estimates are off in a way that compounds, and in most of the cases I've encountered, you can find a way to pre-compute the element heights. If you're building something like a newsfeed, though, this is definitely an approach that you can use.
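
A sketch of that bookkeeping, with linear scans for clarity (the binary-search version comes up in the optimization section); it assumes the list of heights is non-empty and known up front.

```ts
// cumulativeHeights[i] is the distance from the top of the list to the top of item i.
function cumulativeHeights(heights: number[]): number[] {
  const sums = new Array<number>(heights.length);
  let running = 0;
  for (let i = 0; i < heights.length; i++) {
    sums[i] = running;
    running += heights[i];
  }
  return sums;
}

function variableHeightWindow(heights: number[], scrollTop: number, viewportHeight: number) {
  const sums = cumulativeHeights(heights);
  const totalHeight = sums[sums.length - 1] + heights[heights.length - 1];
  // Start: the last item whose top edge is at or above the scroll position.
  let startIndex = 0;
  while (startIndex + 1 < sums.length && sums[startIndex + 1] <= scrollTop) startIndex++;
  // End: one past the last item whose top edge is above the bottom of the viewport.
  let endIndex = startIndex;
  while (endIndex < sums.length && sums[endIndex] < scrollTop + viewportHeight) endIndex++;
  return {
    totalHeight,
    startIndex,
    endIndex,
    offset: sums[startIndex], // translateY for the rendered slice
  };
}
```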

4. Optimizing Scrolling Performance

Short description:

Now that the list has better UX and flexibility, we can optimize scrolling using functional programming concepts. Memoize intermediate values like cumulative sum of heights to avoid unnecessary computation. Use optimized versions of subroutines and a binary search to find start and end indices of the list. Delay rendering expensive parts of list items for improved scrolling performance.

So now that we've made it a bit more flexible for our designers and developers, let's make the UX a bit better. In the real world, list items can take a while to render, and if the user scrolls faster than a list can normally render, elements can start to flicker around the edges as the viewport advances past the last element, only for a new last element to pop in a few milliseconds later within view of the user. This isn't a great user experience, and almost all virtual scrolling libraries attempt to solve this by rendering a few additional elements before and after the viewport to absorb that latency. This isn't an incredible solution, because sustained high-velocity scrolling will still back up rendering, and it increases the number of elements we need to render and re-render, which is contrary to the whole point of virtual scrolling. That said, just go faster isn't always feasible, so buffers can help to smooth things out.

I've seen a lot of virtual scrolling libraries express this in terms of extra list items to render above and below the viewport, but I don't actually think that's the right approach in all cases. We just modified our code to support variable element heights, so we shouldn't assume that five elements, for example, in each direction is the right number at every position in the list. If they're small, we might want to render more to give us more buffer space, and if they're big, rendering a fixed number might be leaving performance on the table. It makes a lot more sense to me to provide a buffer length in pixels and then render as many items as is needed to fill that. I've also heard whispers online about dynamically varying the buffer's allocation and length depending on the direction and velocity of the scroll. And that, to me, sounds like a plausible way to optimize it as well.
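
One way to express a pixel-denominated buffer, building on the variable-height sketch above (bufferPx is an illustrative knob, not a number from the talk):

```ts
// Widen the virtual viewport by bufferPx on each side and let the index search
// decide how many items are needed to fill it.
function bufferedWindow(
  heights: number[],
  scrollTop: number,
  viewportHeight: number,
  bufferPx: number
) {
  return variableHeightWindow(
    heights,
    Math.max(0, scrollTop - bufferPx),
    viewportHeight + 2 * bufferPx
  );
}
```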

Now that the list has some better UX and more flexibility for developers and designers, we can tap some concepts from functional programming to make scrolling hit higher frame rates as well. So first, let's knock out the easy stuff. There's no reason to recompute some of the intermediate values related to our variable height support on each frame, like the cumulative sum of the list items' heights. We could just put that in a variable somewhere, but memoizing it is much easier and ensures that we can't break our list by forgetting to update it somewhere. Just make sure the cache is size one and the list is pointer-compared. The heights shouldn't change frequently, so this is basically free performance when you're scrolling. The same goes for application-related stuff. If you have a virtually-scrolled table, make sure you're not recomputing the sorting and filtering on every single frame. Memoization is basically free in terms of cognitive load, and it can help a lot in these situations where your props aren't changing, but you're re-rendering really frequently. In addition, we can make sure that we're using the most optimized versions of subroutines possible. If you're like me and you write everything in a functional style, you may be leaving some performance on the table when computing your cumulative sum of heights. If you do it in a reduce, you're putting a ton of pressure on your garbage collector. As long as the API stays functional, you can write the function in an imperative style, and you should only need to allocate once for that. In addition, we can use a binary search to find the start and end indices of the list. Unlike a standard binary search, though, instead of returning nothing if the needle isn't found in the haystack, we instead return the closest value that, in the case of the start index, is less than the needle, or in the case of the end index, is greater. This takes the asymptotic performance of searching down to log n. Next, let's take a look at some DOM-related optimizations to make virtual scrolling faster. For this, sometimes we don't care that a user can see all of a list item when they're scrolling. And if scrolling performance is of paramount importance, we can delay rendering the expensive parts of the list item that a user doesn't need to see right away while the list is rapidly re-rendering during scrolling.
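
A sketch of those two pieces, assuming a hand-rolled size-one cache and a list of prefix sums; any memoization utility with reference comparison would serve the same purpose.

```ts
// Size-one, reference-compared cache: recompute only when a different array is passed in.
function memoizeOne<A extends object, R>(fn: (arg: A) => R): (arg: A) => R {
  let lastArg: A | undefined;
  let lastResult!: R;
  return (arg: A) => {
    if (arg !== lastArg) {
      lastArg = arg;
      lastResult = fn(arg);
    }
    return lastResult;
  };
}

// Imperative body behind a functional API: a single allocation instead of one per reduce step.
const memoizedCumulativeHeights = memoizeOne((heights: number[]): number[] => {
  const sums = new Array<number>(heights.length);
  let running = 0;
  for (let i = 0; i < heights.length; i++) {
    sums[i] = running;
    running += heights[i];
  }
  return sums;
});

// Binary search for the start index: the greatest index whose prefix sum is <= needle.
// The end index is the mirror image (smallest index whose prefix sum is >= needle).
function greatestIndexAtOrBelow(sums: number[], needle: number): number {
  let lo = 0;
  let hi = sums.length - 1;
  while (lo < hi) {
    const mid = Math.ceil((lo + hi) / 2);
    if (sums[mid] <= needle) lo = mid;
    else hi = mid - 1;
  }
  return lo;
}
```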

5. Optimizing Scrolling Techniques

Short description:

And instead, we can display skeletons. To do this, we derive a debounced scroll position from our scroll event and only render the essential parts of our new list using the real-time value. Next, we can impose some minor restrictions on our list items to significantly speed up rendering. We create a pool of elements that are reused when scrolling, providing a 35% performance improvement. The components that make up the list items must be functionally pure, and we can use additional techniques to optimize rendering performance.

And instead, we can display skeletons. To do this, we derive a debounced scroll position from our scroll event and only render the essential parts of our new list using the real-time value. A less aggressive strategy would be to do this only on elements in your overflow buffers, although that, of course, comes with lower performance savings. This does have an impact on the user, though, so I would recommend exhausting all of the other performance tweaks that you can do behind the scenes before doing this.
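
A sketch of deriving that settled value; the 150 ms delay and the renderSkeletons/renderFullItems callbacks are hypothetical stand-ins for whatever your framework's reactive state looks like.

```ts
// The real-time scroll position drives the cheap skeleton render on every event, while
// the debounced position triggers the expensive parts once scrolling pauses.
function debounced<T>(delayMs: number, onSettle: (value: T) => void): (value: T) => void {
  let timer: ReturnType<typeof setTimeout> | undefined;
  return (value: T) => {
    if (timer !== undefined) clearTimeout(timer);
    timer = setTimeout(() => onSettle(value), delayMs);
  };
}

// const settleScroll = debounced<number>(150, (pos) => renderFullItems(pos));
// viewport.addEventListener('scroll', () => {
//   renderSkeletons(viewport.scrollTop); // cheap, on every scroll event
//   settleScroll(viewport.scrollTop);    // expensive content, once the user stops
// });
```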

Next, we can impose some minor restrictions on our list items to significantly speed up rendering in some cases. But first, let's understand what exactly is happening when a new element is rendered in while scrolling. The way it's set up 99% of the time is that the element that has fallen out of our viewport or overflow buffers is removed from the DOM and a new one is created on the side that will soon enter the viewport. For the component in which I was initially implementing virtual scrolling, we found that it was actually much slower to create and render a new element tree for a list item than it was to re-render an existing one. So this pattern of constantly creating and deleting elements was causing performance issues on scroll.

Your performance characteristics might be different, but we've seen this across a pretty wide range of use cases. To mitigate this, we can use a technique my coworker Justin Spedding came up with, so welcome to Justin's part of the talk. What we do is we create a pool of elements which are reused when scrolling. To do this, we need to compute the maximum number of elements we need, which is the sum of the viewport's and overflow buffers' heights divided by the minimum height of any list item. This pool size could be made dynamic, but again, we want to avoid creating and destroying elements if at all possible here. Now we can make some creative use of our framework's key property to tell it to render our pool of list item elements and simply change their contents and behavior as we scroll. Instead of having the key be the index of the item, it is instead the index of the element in the pool. Now the browser will only shift the contents of the items rather than having to render whole new ones. Doing this nets a 35% performance improvement in benchmarks of a real-world list that we use at Chatham.
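
A React-flavored sketch of the pooling idea (the talk's own implementation uses web components; the Item and Row shapes here are illustrative). The key is the pool slot rather than the item index, so the framework keeps the same DOM nodes alive and only their contents change as you scroll.

```tsx
import React from 'react';

type Item = { id: string; label: string };

// Pooled rows must be functionally pure: no local state, no lifecycle effects.
function Row({ item, height }: { item: Item | null; height: number }) {
  return <div style={{ height }}>{item?.label}</div>;
}

// poolSize ≈ (viewport height + overflow buffer heights) / minimum item height.
function PooledList({ items, heights, startIndex, poolSize }: {
  items: Item[];
  heights: number[];
  startIndex: number;
  poolSize: number;
}) {
  return (
    <>
      {Array.from({ length: poolSize }, (_, slot) => {
        const itemIndex = startIndex + slot;
        const inRange = itemIndex < items.length;
        // Keyed by pool slot, not item index, so scrolling shifts contents between
        // existing elements instead of destroying and recreating them.
        return (
          <Row
            key={slot}
            item={inRange ? items[itemIndex] : null}
            height={inRange ? heights[itemIndex] : 0}
          />
        );
      })}
    </>
  );
}
```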

There are also a few cases where this can slow you down, and I'll address that in a second, but props if you can guess it. First though, this puts a few restrictions on what our child elements can do, so let's cover those. With this approach, the components that make up your list items have to be functionally pure. That is to say that they always output the same thing given the same props and they don't have any side effects. In front-end terms, this means no local state, no lifecycle hooks, and no global state that isn't specific to the list items data. This isn't a huge deal because you can wrap the list items in their own component and have them subscribe to a piece of global state that is partitioned per piece of data in the list rather than per component. You can kind of sort of mimic lifecycle events by watching for changes in props, but if you have a lifecycle event in a list item, then you might want to take a broader look at your architecture anyway. In addition to this, there's one more technique we can use to speed up rendering performance. You may have noticed that while element sharing provides a huge real-world speed up, it does technically regress our asymptotic performance when scrolling. We're now re-rendering as many items as are in the viewport by shifting the props of each item component up by one when scrolling rather than creating and deleting one item. So if the viewport is too large, it can negate the speed up of sharing and even cause regressions in some synthetic benchmarks. So I built an additional optimization on top of it to get the best of both worlds.

6. Advanced Virtual Scrolling and Limitations

Short description:

Instead of shifting all of the element's props or contents up one and adding a new one at the bottom, we take the topmost element and move it to the bottom, then change that element's contents to that of the new item. We split the displayed items into two buckets, each the same size as our element pool. Then when we need to move the element to the bottom because of scrolling, we make it display an item in the second bucket that corresponds to the index of that element in the pool. This optimization yields another 30% performance improvement and ensures performance remains consistent with large viewports. There's a new CSS property called `content-visibility` that aims to improve virtual scrolling, but it's better suited for large pages with a few large sections.

When scrolling down, instead of shifting all of the element's props or contents up one and adding a new one at the bottom, we take the topmost element and move it to the bottom, then change that element's contents to that of the new item, leaving all of the other elements alone. That's the imperative description of it. Declaratively, we can do a little math to represent that transformation functionally in JSX. In the code, I'm using a flex box to do this because it's easier to fit on a slide, but transforming the individual list items is probably marginally better.

Basically, we split the items being displayed into two buckets, each the same size as our element pool. The first bucket contains items before the next multiple of our pool size, and the second bucket contains those after. So if our pool size is 24, the first bucket will display items 0 through 23, and the second would display items from 24 to 47. The second bucket is always displayed after the first bucket, regardless of where it is in the DOM. We then assign each pool element to the first bucket to start. Then when we need to move the element to the bottom because of scrolling, we make it display an item in the second bucket that corresponds to the index of that element in the pool.

So when we initially scroll down, the first element in our pool transitions from displaying an item in the first bucket to one in the second bucket, and will therefore display item 24 instead of item 0. Once item 24 leaves the viewport, it will display item 48. Then once all elements are displaying items in the second bucket, the second bucket becomes the first bucket, and a new second bucket is formed for the next however many items the pool can hold. Now we can see our element pool being treated as a ring buffer of sorts and only re-rendering one element at a time when scrolling. This brings our scrolling performance back down to O(1). Adding this optimization yields another 30% performance improvement in our real-world example and ensures the performance remains consistent with large viewports, like those on vertical monitors.
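
A sketch of that slot-to-item mapping (my own reconstruction of the bucket math, not the talk's code): as startIndex advances by one, exactly one slot wraps into the next bucket and gets new contents, while every other slot keeps the item it was already showing.

```ts
// Given the pool size and the current start index, decide which item a pool slot
// should display. Visual order can then be restored with CSS `order` (as in the
// talk's flexbox version) or a per-slot translateY.
function itemForSlot(slot: number, startIndex: number, poolSize: number): number {
  const phase = startIndex % poolSize;  // how far the window has rotated into the pool
  const wrapped = slot < phase ? 1 : 0; // slots the window scrolled past wrap to the next bucket
  return startIndex - phase + wrapped * poolSize + slot;
}

// With poolSize = 24: at startIndex 0, slot 0 shows item 0; at startIndex 1, slot 0
// wraps around to item 24 while slots 1..23 keep showing items 1..23.
```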

Finally, I want to talk about some of the broader limitations that come with virtual scrolling and some new web standards that aim to fix some of these issues. The fundamental issue with virtual scrolling that can't be fixed by just coding better is that we're not rendering all of the elements, so those un-rendered elements don't show up in control F searching or in the browser's accessibility tree. This can cause issues for people who need to use screen readers, for example, and it means that you always need to implement a search bar for lists that you want to be searchable.

There's a new CSS property called `content-visibility` that came out of some talks about making virtual scrolling better with web standards. When this property is set to auto, if the element is not in the viewport, selected, focused or on the top layer, the browser skips rendering it entirely. However, the element is still in the DOM and, unlike `display: none`, it's still in the accessibility tree and can be selected or focused. The browser can skip rendering the content, but the content is still there for control F and for screen readers. The problem, though, is that you still have to construct a DOM tree with all of these elements on your initial load, which in the real world, unfortunately, is still prohibitively slow. I haven't had too much time to experiment with this, but given what I've seen, I don't see virtual scrolling being dethroned by this anytime soon.
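
For reference, a minimal sketch of what that looks like in CSS; the class name and the 800px placeholder size are illustrative.

```css
.page-section {
  /* Skip rendering while off-screen, but stay in the DOM and accessibility tree. */
  content-visibility: auto;
  /* Placeholder size for skipped sections so the scrollbar doesn't jump around. */
  contain-intrinsic-size: auto 800px;
}
```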

That said, from what I understand, `content-visibility` wasn't really intended to entirely supplant virtual scrolling, and it's much better suited for large pages with a few large sections that will render outside the viewport rather than lists with lots of small items. For that use case, it's the perfect tool for the job and I highly encourage using it. So, in conclusion, lists are really common in web development, and hopefully this talk has inspired you to get as much as you possibly can out of them. There's a lot of performance being left on the table with traditional scrolling, and you can often take basic virtual scrolling further as well. Sample code for the techniques discussed in the talk will be available here. So, thanks for putting up with lots of code on slides and lots of programmer art, and of course for watching my talk. And if there's time for questions, I'd be happy to take some.
