React Microfrontend Applications for TVs and Game Consoles


DAZN is the fastest-growing sports streaming service, and in this talk we consider our experience building TV applications (for PS5, Xbox, LG, Samsung, and other targets) with a micro frontend architecture. Expect some time travel to see how it started and where we are now, what makes TV development different from the web, and the techniques that let us share code between targets. You will also learn how we test our application with an in-house remote testing lab, as well as our deployment and release process.

This talk was presented at React Summit 2022; check out the latest edition of this React conference.

FAQ

The speaker is Denys, a principal engineer at DAZN.

You can find code samples on Denys's GitHub account and his website.

DAZN is available on web devices, mobile devices, smart TVs, game consoles, and set-top boxes.

The two main groups of targets at DAZN are HTML5 (React) targets and Bespoke (native-language) targets.

DAZN initially used a monolith architecture.

The current architectural approach at DAZN is a micro frontend architecture called ALEP (Asynchronous Library Loading).

Some of the domains in DAZN's architecture include the authorization domain, the catalog domain, and the landing page domain.

DAZN uses a combination of data providers and machine learning techniques to detect key moments in sports content.

When multiple teams work on the same domain, DAZN faces challenges such as release congestion; different targets also require independent infrastructure.

Feature-based testing is done by individual teams, while integration testing is done on a final pre-prod environment called staging.

Denis Artyuhovich
25 min
17 Jun, 2022

Video Summary and Transcription
This Talk discusses the architecture journey of a sports engagement platform, transitioning from a monolith to a micro frontend architecture. The ALEP micro frontend architecture is introduced to manage the complexity of the catalog domain and enable efficient deployment and version management. The deployment and release process is driven by ALEP and by updating metadata in the deployment dashboard. React integration uses the ALEP package hook, giving packages independent lifecycles. Code is shared across different targets, and testing happens in a remote virtual lab. Focus management and key-moments detection in sports are also addressed.

1. Introduction and Architecture Journey

Short description:

First of all, let me introduce myself. I'm Denys, a principal engineer at DAZN. We change every aspect of how fans engage with sports, from content distribution to creating our own documentaries. We are available on various devices, including web, mobile, smart TVs, game consoles, and set-top boxes. Today, we'll focus on the HTML5/React targets. In 2016, when DAZN launched, we started with a monolith architecture, but as we grew, we transitioned to a micro frontend architecture with clear domain boundaries. If you're interested in our journey from monolith to micro frontend, I recommend a talk by my colleague Max Gallo. We also introduced a deployment dashboard for our new micro frontends, allowing independent releases for each domain.

Cool. First of all, let me introduce myself. My name is Denys, and I'm a principal engineer at DAZN. If you want to check the code samples I'm going to refer to, you can find them on my GitHub account, and if you want any of my contacts, just follow my website.

And at DAZN, we change every aspect of how fans engage with sports, from content distribution to creating our own documentaries and fully extended, augmented experiences, with amazing features we built that work in real time. What's interesting as well is that we are available on dozens of devices: web devices and mobile devices, of course, but also smart TVs, game consoles, and set-top boxes, and it's these last three we're going to talk about in the next 20-25 minutes or so.

So before we continue, I just want to share the split we have at DAZN. We have two groups of targets. One group is HTML5, which we can probably call the React targets, and the other one is Bespoke. Today, we will be focusing on the first one. The other is for native languages, and as you can see, it covers many different targets: Samsung, LG, PlayStation, et cetera. There are lots of them.

So now, I'd like to take all of you on an adventure and show how our architecture journey started, how we iterated over it, and what we currently have. As you can imagine, back in 2016, when DAZN just started, the application was built by a third-party company, and it began, of course, with a monolith architecture, because a monolith is the obvious choice at that stage: it helps you grow fast enough at a small scale. And it works really well while your development team and the features in your application are relatively small. Later on, we stepped into a rapid-growth phase where we had hundreds of engineers, and at that scale, one of the most important things is to give teams autonomy. That's where we stepped in as an engineering company: instead of relying on the third-party company, we rebuilt the application completely from scratch. It is a micro frontend architecture where we implemented a vertical split of domains. Just to give you an idea, a domain is something that, at the time, we considered to have clear boundaries. For example, we have the authorization domain, which is responsible for the sign-in, sign-up, and password recovery flows. We have the catalog domain, which is responsible for browsing content. We have the landing page domain, which is responsible for landing pages, and so on. I believe you get the idea. If you're interested in the journey of how we iterated from the monolith to the micro frontend, I really recommend the talk my friend and colleague Max Gallo gave, I believe, last year. A really interesting journey. At the same time, we also introduced a deployment dashboard for our new micro frontends, so each domain could be released independently, and at that time only one team was responsible for an entire domain. And everything was well; it was a really big step forward from what we had previously.

2. Complex Catalog and Microfrontend Architecture

Short description:

We have a complex catalog with features like player, key moments, and a panel with mini-apps developed by various teams. Managing this complexity becomes challenging as multiple teams share deployable artifacts and face release congestion. To address this, we introduced the ALEP microfrontend architecture, which allows for vertical domain splitting and horizontal feature-based splitting. This architecture enables features to be fetched on demand, improving deployment efficiency and version management.

But we continued to grow and reached the point where we have more than several hundred engineers, and some of the domains became far more complex than they were initially. So, the catalog itself, yeah, it's the place where you browse the content, but it also has quite a lot of features. For example, the player: a complex, feature-rich package, with adaptive bitrate, digital rights management, and the other things it has to support. Later on, we added key moments. You see these dots right in the player? They represent the interesting moments of the game. We have this for various sports: football, boxing, MotoGP, and recently we introduced it for baseball. I highly recommend you check it out. And we apply various techniques, including machine learning, to detect them in real time and plot them on the timeline correctly, aligned to the video, so you know exactly when a moment happens. As I said, it works for VOD content and for live content.

We also have the panel here, which is a sort of mini-app where lots of small things are integrated, all developed by various teams, such as Formation. That's, again, a live feature, fully in sync with the video: if any formation change happens on the pitch while you're watching the game, it's going to reflect that change. And as you can imagine, we needed something different to manage all of this, because the catalog no longer belongs to one team; there are lots of teams working in it now. So we were getting the same problems. Multiple teams share a single deployable artifact, and we get release congestion: if you're promoting a change between your test environments (you probably have staging for pre-prod, and a prod environment) and, let's say, the player team tries to release their changes but gets stuck or finds an issue, every other team is blocked with their release, because they need to wait until the issue is resolved or someone takes those changes out of the code, which is tricky in many cases. We also had opaque release statuses: even though, as I just showed you, we have a nice deployment dashboard where you can deploy a chapter, an entire chapter, and you know the version of the entire chapter, what version is each package? Well, good luck finding out. You need to either maintain something manually or develop a custom solution for it. Quite tricky. So we iterated over it and introduced a new micro frontend architecture, which we call ALEP, which stands for Asynchronous Library Loading. We still keep our vertically split domains, but complement them with a horizontal, feature-based split where features can be fetched on demand, and I'm going to demo now how exactly it works. Let's consider it first from the web perspective: what happens in action when you visit DAZN. The first thing, when you enter our website, is that we load the bootstrap. The bootstrap is a module responsible for fetching all further chapters. It's also responsible for preparing our runtime environment, it checks your auth status to know which chapter to load, and some other things.
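To make that flow concrete, here is a minimal TypeScript sketch of what such a bootstrap might look like. Everything in it — the manifest URL, the global chapter registry, the auth check — is illustrative, not DAZN's actual code:

```ts
// Hypothetical bootstrap: decide which chapter to load, inject its
// script, and hand over control. All names and URLs are illustrative.

type ChapterName = 'auth' | 'catalog' | 'landing';

interface Chapter {
  mount(root: HTMLElement): void;
}

// Assumed manifest mapping chapter names to their deployed bundle URLs.
async function fetchChapterManifest(): Promise<Record<ChapterName, string>> {
  const res = await fetch('/chapters/manifest.json');
  return res.json();
}

function loadScript(src: string): Promise<void> {
  return new Promise((resolve, reject) => {
    const script = document.createElement('script');
    script.src = src;
    script.onload = () => resolve();
    script.onerror = () => reject(new Error(`Failed to load ${src}`));
    document.head.appendChild(script);
  });
}

function isAuthorised(): boolean {
  // Placeholder for the real auth-status check the bootstrap performs.
  return Boolean(localStorage.getItem('authToken'));
}

async function bootstrap(root: HTMLElement): Promise<void> {
  const manifest = await fetchChapterManifest();
  // An authorised user goes straight to the catalog chapter.
  const chapter: ChapterName = isAuthorised() ? 'catalog' : 'auth';
  await loadScript(manifest[chapter]);
  // Assume each chapter bundle registers itself on a well-known global.
  const mod = (window as any).__chapters[chapter] as Chapter;
  mod.mount(root);
}
```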

3. ALEP Deployment and Release Process

Short description:

As soon as the bootstrap is done, it fetches a chapter, say the catalog. ALEP plays a crucial role in this process, with two steps: deployment and release. Deployment alone does not guarantee consumption; a release is necessary. Teams update metadata in the deployment dashboard, and the catalog fetches this metadata during runtime to determine what should be consumed.

And next, as soon as the bootstrap is done, it fetches a chapter. Let's say the catalog is fetched, and this is where ALEP comes on stage. To better understand how ALEP works, let's consider it from two sides: the developer experience and the runtime. On the developer-experience side, teams have full autonomy to develop and deploy their packages whenever they're ready. And there is one very special thing: in ALEP, we have two steps, deployment and release. If you have deployed something, it doesn't mean that someone will consume it; nothing is consumed until a release happens. And teams can do the release by updating specific metadata from the front-end deployment dashboard. Which means that in the runtime, the catalog fetches metadata about a package first, to see what has been released and what should be consumed right now, and then initializes it on demand.
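Here is a hedged sketch of that two-step contract, with invented shapes and URLs: deployment uploads an artifact, release flips the metadata, and the runtime only ever consumes what the metadata says is released.

```ts
// Assumed shape of the per-package release metadata; not the real
// ALEP contract, just an illustration of the deploy/release split.
interface ReleaseMetadata {
  name: string;
  releasedVersion: string; // e.g. '3.4.1'
  bundleUrl: string;
}

async function resolveReleasedPackage(name: string): Promise<ReleaseMetadata> {
  // The host fetches metadata first, at runtime, to learn what is live.
  const res = await fetch(`/alep/metadata/${name}.json`);
  if (!res.ok) throw new Error(`No release metadata for ${name}`);
  return res.json();
}

async function initPackage(name: string): Promise<unknown> {
  const meta = await resolveReleasedPackage(name);
  // Deployed-but-unreleased builds are invisible here: only the bundle
  // the metadata points at is ever fetched, and only on demand.
  return import(/* webpackIgnore: true */ meta.bundleUrl);
}
```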

4. Integration with React and the TV Development Process

Short description:

The integration with React uses the ALEP package hook, specifying the desired package and options. This ensures independent lifecycles for packages and gives teams ownership over minor and patch releases. The micro frontend architecture on TV devices is similar to the web, but differs in the runner and the need for native modules. Each target has independent infrastructure, and changes can be promoted independently based on testing. The development process on TV starts in the browser, with options like Storybooks and sandboxes for package development.

From the React integration side, it looks like this slide. Basically, there is a hook, useAlepPackage. You specify the package you're interested in, and some options, including the major version you're pinned to, to ensure that all integrated changes will be non-breaking for your current state. By that, we ensure that packages have a completely independent lifecycle. Teams have full ownership over their minor and patch releases. Major releases need integration changes, so consumers will have to update the catalog, and this is enforced at the code level. It fully supports the idea of autonomy and the "you build it, you own it" statement, which we truly believe in.
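Below is a hedged reconstruction of that integration slide; the hook name, option names, and return shape are guesses from the description, not DAZN's published API:

```tsx
import React, { useEffect, useState } from 'react';

interface UseAlepPackageOptions {
  // Pinning the major guarantees the owning team's next minor/patch
  // release is non-breaking for this host.
  majorVersion: number;
}

// Assumed loader: resolves the released bundle for the pinned major.
async function loadAlepPackage<T>(name: string, major: number): Promise<T> {
  const res = await fetch(`/alep/metadata/${name}.json`);
  const meta: { majors: Record<number, { bundleUrl: string }> } = await res.json();
  return import(/* webpackIgnore: true */ meta.majors[major].bundleUrl) as Promise<T>;
}

function useAlepPackage<T>(name: string, options: UseAlepPackageOptions): T | null {
  const [pkg, setPkg] = useState<T | null>(null);
  useEffect(() => {
    let cancelled = false;
    loadAlepPackage<T>(name, options.majorVersion).then((mod) => {
      if (!cancelled) setPkg(mod);
    });
    return () => { cancelled = true; };
  }, [name, options.majorVersion]);
  return pkg;
}

// Usage in the catalog: player minors and patches flow in automatically;
// a hypothetical player v4 would require an explicit integration change.
function PlayerSlot() {
  const player = useAlepPackage<{ Player: React.FC }>('player', { majorVersion: 3 });
  return player ? <player.Player /> : null;
}
```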

Surprise, surprise: we have exactly the same micro frontend architecture on TV devices. What's different on TV is what sits in front of it. We have our runner, or APK, or call it whatever you like, which we try to keep as tiny as possible: in the best cases it's just a URL, in good cases it's just a manifest, and in some cases it also includes native modules, because if you need native modules, they have to be integrated there. Then the bootstrap is loaded, then the chapters, and then again the packages, as you can see on the screenshot. But compared to the web, on TVs and game consoles all targets are different, and they differ by more than just the browser engine: there are different browser engines, different lifecycle events, different UI events, and different runtimes. Something that's available on one target can be simply impossible on another. That's why, on TV, we delegate some extra responsibility to the bootstrap layer compared to the web one. In addition to its normal functionality, it also handles various target-specific configuration, including key mapping, because the left, up, right, and down events are all different across targets; they're different on Xbox and completely different again on PlayStation. Also important: each target has fully independent infrastructure. That's crucial because, as you remember, they're all different. We maintain all our infrastructure as code, and if you're interested in how to build prod-ready infrastructure for your front end with TypeScript and Terraform, I gave a talk at the last React Advanced, and there is a link to a sample similar to the infrastructure we have. There's a recording as well, so you can check it out.
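As an illustration of the key-mapping responsibility, here is a sketch of how a bootstrap layer might normalise physical key codes into logical events. The webOS (461) and Tizen (10009) back-key codes are real; treat everything else, including the target names, as assumptions:

```ts
type LogicalKey = 'up' | 'down' | 'left' | 'right' | 'enter' | 'back';

// Per-target physical keyCode -> logical event maps.
const keyMaps: Record<string, Record<number, LogicalKey>> = {
  lg_webos: { 38: 'up', 40: 'down', 37: 'left', 39: 'right', 13: 'enter', 461: 'back' },
  samsung_tizen: { 38: 'up', 40: 'down', 37: 'left', 39: 'right', 13: 'enter', 10009: 'back' },
};

// Features subscribe to logical keys only and never see target details.
function createKeyHandler(target: string, onKey: (key: LogicalKey) => void) {
  const map = keyMaps[target];
  return (event: KeyboardEvent) => {
    const logical = map?.[event.keyCode];
    if (logical) {
      event.preventDefault();
      onKey(logical);
    }
  };
}

// window.addEventListener('keydown', createKeyHandler('lg_webos', (k) => console.log(k)));
```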

Similar to the previous point, teams should have the autonomy to release to those targets where their changes have been tested, right? Because targets are all different. So it's possible to promote changes completely independently; you don't need to go to all targets at once. You can release just where you've already tested. But what about the development process on TV? Development always starts in your browser, so you can use Chrome, Firefox, your choice. For most of the packages we have Storybooks; for some we have our own sandboxes. It's up to each team to decide what fits their development needs.

5. Shared Code and Remote Virtual Lab

Short description:

We share state and UI code between targets, even when the UI looks different. React contexts let us inject components into a common code base and enforce their interface with TypeScript. Feature flags and assigned audiences help toggle components on or off based on conditions. We also have target-specific modules and legacy module swapping. Testing is crucial, as shared code may not work out of the box. We have a remote virtual lab with a web app and end-to-end tests accessing devices through an API. A Raspberry Pi controls the camera, the TV, and the remote. The interface allows occupying a device and controlling it via a web interface or a remote. Devices are located in different places, not in your home.

We also have state and UI code mostly shared between targets. We even have cases where code is shared between web and TV targets: even though the UI looks completely different, the state can be very similar, and you may want to leverage that. And in order to share, we use different techniques.

One of these is React contexts (createContext): basically very powerful and straightforward. The idea is that in the common code base, you use components that are available in your context. So, at the target level, you can define which components to pass in; your common code stays agnostic to whatever you pass in, and it can enforce the interface of those components with TypeScript.
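Here is a minimal sketch of this technique, assuming a createContext-based setup (the component names are invented): the target entry point injects its concrete components, the shared code stays agnostic, and TypeScript enforces the contract.

```tsx
import React, { createContext, useContext } from 'react';

// The interface shared code can rely on, whatever the target provides.
interface TargetComponents {
  Button: React.FC<{ label: string; onPress: () => void }>;
}

const TargetContext = createContext<TargetComponents | null>(null);

function useTargetComponents(): TargetComponents {
  const value = useContext(TargetContext);
  if (!value) throw new Error('TargetContext is not provided');
  return value;
}

// Shared code: agnostic to whichever target it runs on.
function ConfirmBar({ onConfirm }: { onConfirm: () => void }) {
  const { Button } = useTargetComponents();
  return <Button label="Confirm" onPress={onConfirm} />;
}

// A target entry point (here, web) injects its own implementation.
const WebButton: TargetComponents['Button'] = ({ label, onPress }) => (
  <button onClick={onPress}>{label}</button>
);

export function WebApp() {
  return (
    <TargetContext.Provider value={{ Button: WebButton }}>
      <ConfirmBar onConfirm={() => console.log('confirmed')} />
    </TargetContext.Provider>
  );
}
```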

Feature flags with assigned audiences are another popular and very powerful technique, because with feature flags you can toggle your component on or off, straightforwardly, and with audiences, complementary to this, you can define precisely under which conditions you want to toggle it. Let's say you have an incident: say you hadn't tested something and there's a memory leak in your fancy new feature on certain targets. You can specify even the versions and targets for which you want to toggle it off. For sure, we also have target-specific modules. As a legacy approach, we also have module swapping. I'm not a big fan of it, but module swapping has been with us for a while and it's still there. Its main downside is that you need to provide a module with exactly the same interface, without any help from TypeScript or anything else.
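A sketch of how such a flag-plus-audience check could look; the shapes below are invented for illustration, not a real DAZN config format:

```ts
interface Audience {
  targets?: string[];          // e.g. only these targets see the feature
  excludeVersions?: string[];  // e.g. kill-switch for one leaking build
}

interface FeatureFlag {
  enabled: boolean;
  audience?: Audience;
}

function isFeatureOn(flag: FeatureFlag, target: string, appVersion: string): boolean {
  if (!flag.enabled) return false;
  const { audience } = flag;
  if (!audience) return true;
  if (audience.targets && !audience.targets.includes(target)) return false;
  if (audience.excludeVersions?.includes(appVersion)) return false;
  return true;
}

// Incident response: the fancy feature stays on everywhere except the
// version with the memory leak on a given console.
const fancyFeature: FeatureFlag = {
  enabled: true,
  audience: { targets: ['ps5', 'xbox', 'lg', 'samsung'], excludeVersions: ['2.3.0'] },
};
console.log(isFeatureOn(fancyFeature, 'ps5', '2.3.0')); // false
console.log(isFeatureOn(fancyFeature, 'ps5', '2.3.1')); // true
```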

And I just want to remind you: on TVs, all targets are different, so even though we share a lot of code, it's not guaranteed that the shared code will work out of the box, so we need to test it. Well, a cinema room like the one on this slide is kind of cool, but it requires a lot of space at home, and at the very least you'll need good air con over the summer. And to be fair, we started with this approach: we had cinema rooms of a sort, and we still have those in our office, but we came up with something better for remote working. On this slide you see a very simplified, high-level version of the architecture of our remote virtual lab, with two entry points: one is a web app and the other is end-to-end tests. The web app is really useful for exploratory and manual testing, while end-to-end tests can still access our remote devices through the API. The API layer is responsible for authorization, queueing, and proxying requests to the Raspberry Pi service. The Raspberry Pi has a shared set of responsibilities: controlling the camera in front of the TV to record it, controlling the TV itself (toggling it on and off, restarting), and controlling the remote. Let me show you how the interface looks. We start on the page, and we can occupy a device; the device is now booked for us, so while I'm testing, nobody else can use it. As you can see, I can use the web interface, and I can use this fancy remote to control it. For some of the targets, we even implemented a debugger. For those who worked with Cordova many years ago, before Chrome introduced the remote debugger, this plugin may be familiar; it's been deprecated for a while, and we maintain our own version of it. I also want to bring your attention to the fact (it's not loaded... here we go) that these devices are located in different places: some of them are in Poland, others in the UK, and definitely not in your home.
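To make the lab's API layer tangible, here is a hypothetical end-to-end flow against it; every endpoint, host name, and payload below is invented for illustration:

```ts
const LAB_API = 'https://lab.example.internal'; // hypothetical lab endpoint

// Occupy a device so nobody else grabs it, run the test, then release it.
async function withDevice(deviceId: string, run: () => Promise<void>): Promise<void> {
  // The real API layer also handles authorization, queueing, and
  // proxying to the Raspberry Pi controlling the TV, camera, and remote.
  await fetch(`${LAB_API}/devices/${deviceId}/occupy`, { method: 'POST' });
  try {
    await run();
  } finally {
    await fetch(`${LAB_API}/devices/${deviceId}/release`, { method: 'POST' });
  }
}

async function pressKey(deviceId: string, key: 'up' | 'down' | 'left' | 'right' | 'enter') {
  await fetch(`${LAB_API}/devices/${deviceId}/remote`, {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ key }),
  });
}

// withDevice('lg-tv-poland-01', async () => {
//   await pressKey('lg-tv-poland-01', 'down');
//   await pressKey('lg-tv-poland-01', 'enter');
// });
```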

6. Testing Changes and Smart TV Focus Management

Short description:

To keep testing changes flexible, we implemented an overlay for non-prod environments that allows fetching specific package versions using GitHub run IDs. Teams integrate these IDs into their push pipelines, and ALEP respects the overrides and fetches the corresponding packages. We also leverage template projects and generators to simplify project setup. When developing for smart TVs, focus management can be challenging. We have an in-house solution, and there are open-source projects that address this. Options include manually maintaining a focus map, a distance-based and priority-based solution, and a declarative approach using higher-order components. Pointer navigation is also supported, with slightly different behavior on TVs with pointer input methods.

And we need the flexibility to test our changes. We can't just reinstall the app every time, because building a new APK for every change would take ages. So for non-prod environments, we implemented a really useful overlay, which you can open with a specific key combination, and it lets you type the GitHub run ID for your specific push; ALEP will then respect this run ID for the package and fetch that exact version. This is possible because, on the ALEP side, we have an extra step for non-prod environments: before ALEP fetches the version JSON to decide which package version to download, it checks whether there is a local storage override, and if there is, it respects it. Teams are responsible for wiring the GitHub run ID into their push pipeline, which uploads their package to the dev infrastructure, and then ALEP fetches those overridden builds on demand.
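A sketch of that override check, assuming invented key names and URLs:

```ts
const OVERRIDE_KEY_PREFIX = 'alep-override:'; // assumed key naming

async function resolveBundleUrl(pkg: string): Promise<string> {
  // Non-prod only: a pinned GitHub run ID wins over the released version.
  const runId = localStorage.getItem(`${OVERRIDE_KEY_PREFIX}${pkg}`);
  if (runId) {
    // The team's push pipeline uploaded this run's artifact to dev infra.
    return `https://dev-infra.example/packages/${pkg}/runs/${runId}/bundle.js`;
  }
  const res = await fetch(`/alep/${pkg}/version.json`);
  const { releasedVersion } = await res.json();
  return `/alep/${pkg}/${releasedVersion}/bundle.js`;
}

// The debug overlay just records whatever run ID the tester types in.
function applyOverride(pkg: string, runId: string): void {
  localStorage.setItem(`${OVERRIDE_KEY_PREFIX}${pkg}`, runId);
}
```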

As you can see, there are a lot of things teams need to remember, and you may wonder how to start a new project with such a complex setup. For this, we are, of course, leveraging the power of template projects. We have generators created specifically for this, which spin up your project with the pipelines already set up and the best techniques we share.

When developing for smart TVs, there is one special thing about them: input control. You don't have a mouse or touch; you may have a pointer, but that's a different story. Typically, focus changes in reaction to key events: up, down, left, right. For this, we have an in-house solution, which is not yet open-sourced, but I want to share the idea of how to address focus-management challenges, plus some open-source projects you can use if you're interested in the topic. One of the most straightforward approaches is to manually maintain a focus map, as sketched below. It doesn't scale at all, but it's very suitable for small apps. The idea is that you create an object that works as a linked list, and you iterate over it imperatively, switching the active node. Another approach, which Netflix utilizes, is a distance-based and priority-based solution, where you compute the closest nodes, based on priorities (left-to-right, right-to-left, and others), to decide what should be focused next. And the approach we utilize is a more declarative one, where you compose higher-order components like vertical list, horizontal list, and grid, and you make a node focusable just by adding a hook, useFocus in our case, which tells you whether it is focused and whether it is selected, and that's all. All these solutions can be combined, so check them out and try implementing your own. For some TVs, like LG, there is also another input method, a pointer (the Magic Remote), and pointer navigation should be familiar to most of you, as it's quite similar to a mouse. But with pointer navigation, we have slightly different behavior on TV, because users can move the pointer and scroll at the same time, and we need to support that as well.
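Here is a sketch of the simplest option from that list, the manually maintained focus map: a linked structure traversed imperatively on key events. The node names are invented; as noted, this doesn't scale, but it's fine for a small app.

```ts
type Direction = 'up' | 'down' | 'left' | 'right';

interface FocusNode {
  id: string;
  up?: string;
  down?: string;
  left?: string;
  right?: string;
}

// Hand-maintained map: each node lists its neighbours per direction.
const focusMap: Record<string, FocusNode> = {
  menu: { id: 'menu', right: 'rail-1' },
  'rail-1': { id: 'rail-1', left: 'menu', down: 'rail-2' },
  'rail-2': { id: 'rail-2', left: 'menu', up: 'rail-1' },
};

let activeNode = 'menu';

function moveFocus(direction: Direction): string {
  const next = focusMap[activeNode]?.[direction];
  if (next) {
    activeNode = next;
    document.getElementById(next)?.focus(); // reflect the move in the DOM
  }
  return activeNode;
}

// Wired to the logical key events: moveFocus('right') -> 'rail-1'.
```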

7. Focusable Nodes and Key Moments Detection

Short description:

You can combine different solutions for making nodes focusable and implement your own. Pointer navigation on TVs with magic remotes behaves slightly differently, so designers need to account for it. For performance: avoid unnecessary paint and layout, render only what is needed, optimize resource delivery, introduce a CDN and priority caching, and utilize virtualization. Denys also explains the detection of key moments in sports, mentioning that different techniques are used depending on the sport. For football, finding the exact moment the game started is crucial, and data providers help in plotting the data.

With pointer navigation there is one more catch: if you want to support horizontal scrolling with the pointer, you need to think about a different UI, so ask your designers not to forget about it, as you can see on the slide. It's quite tricky.

Performance-wise, I've given a full talk about performance separately, but here is some general advice, because devices such as smart TVs are really low on memory and CPU in most cases, so we really need to think about rendering and startup performance. Rendering performance matters during user interaction, when you can get frame drops, and one of the most obvious pieces of advice is to avoid paint and layout when you can. It sounds easy, but it's not so easy in many cases. Render only what is really needed. Nadia Makarevich gave the talk right before me, so I think you all saw it: avoid useless and heavy renders. For startup: optimize your resource delivery, introduce a CDN, be closer to your customers, introduce priority caching, do asset optimization, and load only what you need. Oh, I almost forgot about virtualization. Really, virtualization is very powerful: render only what you need on screen at the moment, as in the sketch below. And thank you so much, I hope you enjoyed it. Yeah, if you have any questions.
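A tiny windowing sketch of that virtualization idea: compute which rows intersect the viewport and mount only those, which matters a lot on memory-constrained TVs (all numbers are illustrative).

```ts
interface RowWindow {
  start: number; // first row index to mount
  end: number;   // one past the last row index to mount
}

function visibleRange(
  scrollTop: number,
  viewportHeight: number,
  rowHeight: number,
  totalRows: number,
  overscan = 2, // mount a couple of extra rows for smoother scrolling
): RowWindow {
  const start = Math.max(0, Math.floor(scrollTop / rowHeight) - overscan);
  const end = Math.min(totalRows, Math.ceil((scrollTop + viewportHeight) / rowHeight) + overscan);
  return { start, end };
}

// On a 1080p screen with 300px rails, only ~8 of 200 rails are mounted:
console.log(visibleRange(900, 1080, 300, 200)); // { start: 1, end: 9 }
```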

Thank you so much, Denis, that was amazing. So much content. I know we had a little discussion about some of your things, and even as I was listening I thought: I'm definitely going to go and watch some of the other talks you referenced, because it sounded like so much amazing content. Why don't we jump in; this is one I was wondering about as well, because you talked a little bit about it. There was some functionality you showed on DAZN, which was the sort of moments inside the app. Yes, key moments. So how do you detect those? Do you use a library, or is something else doing that? You talked about machine learning or AI? Exactly, it depends on which type of sport we're talking about, because we leverage different techniques. For example, for football everything is more straightforward, and it was actually the first sport for which we introduced key moments. On the football side, we just need to find the exact moment the game started, because we need the zero position, and all the streams are different: we have different broadcasters and different partnerships, even though the game is the same. Then we utilize a data provider which gives us the data, and we can plot it.

8. Key Moments Detection

Short description:

We use machine learning to detect key moments in sports like baseball and synchronize them correctly. For boxing, we not only detect the start of each round but also detect fighter names, which are not always revealed beforehand. It's quite interesting and impressive.

It's more or less straightforward. With baseball, for example, we do the detection ourselves with machine learning, to identify the moments and synchronize them, plotting them correctly. That's amazing. That also sounds like such a complex problem to figure out. Boxing, for example, is really interesting, because we don't just detect when every round starts; we also detect fighter names, because the fight cards are not always revealed prior to the fight, I mean, for the entire night. It's quite interesting. That's impressive. I'll remember that the next time I watch a boxing match. Thank you.
