Remember CSS Sprites? Let's Do That with Video!


Synchronizing multiple streams, particularly multiple live streams, and especially when the audio is critical too, can be incredibly difficult. How, then, could you build an experience where you have a few camera angles and you want a viewer to be able to seamlessly pick one? We'll walk through a hacky technique that lets you do just that without ever having to worry about synchronization.

This talk was presented at React Summit Remote Edition 2021. Check out the latest edition of this React conference.

FAQ

What is a video sprite?
A video sprite is similar to a CSS sprite, but instead of combining images, it combines multiple video feeds into a single video. Viewers can then choose which part of the video to watch, allowing for synchronized feed selection.

Who is Matt McClure?
Matt McClure is one of the co-founders of Mux, a company that builds video infrastructure for developers. He runs the Developer Experience organization at Mux.

What is Mux?
Mux is a company that provides video infrastructure for developers, offering a video API to help developers build and manage video streaming services.

How can video sprites be used in live streaming?
In live streaming, video sprites can be used to combine multiple camera feeds into one video. Viewers can then select which part of the combined video they want to watch, ensuring synchronized feeds and a single audio stream.

When are video sprites useful?
Video sprites can be useful for scenarios like live concerts or sports events where multiple camera angles are available. They allow viewers to choose their preferred feed while ensuring synchronization and simplifying the broadcast process.

What are the benefits of using video sprites for feed synchronization?
Using video sprites for feed synchronization ensures that all video feeds are perfectly synchronized, simplifies the management of audio streams, and allows viewers to switch between different feeds without desynchronization issues.

How do video sprites improve the viewing experience?
Video sprites offer viewers the ability to choose their preferred camera angle or feed, enhancing their viewing experience. This flexibility allows for a more personalized and engaging experience during live events.

How are video sprites implemented?
Implementing video sprites involves combining the video feeds into a single video, defining the coordinates for each feed, and using a canvas to display the selected quadrant (see the sketch after this FAQ). The code for this can be found on GitHub.

Where is the code available?
The code for implementing video sprites is available on GitHub. You can find it by searching for video-sprites.mux.dev or visiting the Mux GitHub repository.

What are the limitations of video sprites?
One limitation of using video sprites is that the resolution of each feed is limited by the overall resolution of the combined video. For example, if the total resolution is 1080p, each quadrant will be limited to a quarter of that resolution.
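The answers above mention defining coordinates for each feed and the resolution trade-off. As a rough sketch (not the code from the repo linked above), here's how a 2x2 layout might map a quadrant index to its source rectangle, which also shows why each feed ends up at half the width and half the height of the composite:

```ts
// Hypothetical helper: map a quadrant index (0-3, row-major) to the source
// rectangle it occupies inside a 2x2 combined video.
interface Rect {
  x: number;
  y: number;
  width: number;
  height: number;
}

function quadrantRect(index: number, compositeWidth: number, compositeHeight: number): Rect {
  const width = compositeWidth / 2;   // e.g. 1920 / 2 = 960
  const height = compositeHeight / 2; // e.g. 1080 / 2 = 540
  return {
    x: (index % 2) * width,
    y: Math.floor(index / 2) * height,
    width,
    height,
  };
}

// With a 1080p composite, each feed is effectively 960x540:
// quadrantRect(3, 1920, 1080) -> { x: 960, y: 540, width: 960, height: 540 }
```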

Matt McClure
7 min
17 Apr, 2021


Video Summary and Transcription

Today's talk discusses using video sprites to optimize video playback. Like CSS sprites, which pack multiple images into one file so the client can show just the part it needs, video sprites combine multiple video feeds into a single video. By combining several feeds into one and displaying only the selected feed, synchronization stays intact, and viewers can choose different camera angles during live events. The implementation divides the video into quadrants; viewers select which quadrant to watch, getting synchronized feeds and a shared audio stream.

1. Introduction to Video Sprites

Short description:

Today I want to talk about using video sprites, similar to CSS sprites, to optimize video playback. A sprite is an image with multiple images in it, allowing the client to choose which parts to display. This technique was widely used in the early aughts for optimizing web buttons. Video game sprites, like Mario's states, are a common example. By combining multiple videos into one and selecting the desired one, we can enhance video synchronization and enable viewers to choose different camera angles in live events such as concerts and sports.

Hi everyone, today I want to talk about something a little hacky I've been thinking about lately. The idea is that we want to use video sprites in the same way that a lot of you might have used CSS sprites back in the day, so we'll talk about what I mean by all of this. But first, hello, I'm Matt McClure, I'm one of the co-founders of a company called Mux, and I run the Developer Experience org there. In a nutshell, we build awesome video infrastructure for developers, so if you're looking for a video API, we're there, check us out.

So, okay, taking a step back, what is a sprite? In a nutshell, it's an image with a bunch of images in it. The client gets this big combined image and then can just pick and choose which parts of it to show. If you're relatively new, you might not have seen this as widely used, but it was a really common optimization technique in the early aughts: if you had a button with hover states and depressed states and active states, you would just send one button image and then use your CSS background to decide which part of that image to show. If you got started back then, you might remember this.
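For reference, the trick described above amounts to shifting background-position so only one slice of the combined image is visible. Here's a minimal sketch of that idea; in practice you'd usually do it in pure CSS with :hover, and the file name, class name, and sizes below are made up for illustration:

```ts
// Hypothetical sprite sheet: button-states.png is 300x100 and contains three
// 100x100 button states laid out side by side (normal, hover, pressed).
// The listeners here just make the CSS background-position idea explicit.
const button = document.querySelector<HTMLElement>('.sprite-button');

if (button) {
  button.style.width = '100px';
  button.style.height = '100px';
  button.style.backgroundImage = 'url(/images/button-states.png)';
  button.style.backgroundPosition = '0 0'; // show the "normal" slice

  button.addEventListener('mouseenter', () => {
    button.style.backgroundPosition = '-100px 0'; // slide over to the "hover" slice
  });
  button.addEventListener('mouseleave', () => {
    button.style.backgroundPosition = '0 0';
  });
}
```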

But to show this a little more concretely, a lot of people know about this from video game sprites: all of Mario's states are in one image, and the client viewport just shows one Mario state at a time. So, you might be wondering what in the world this has to do with video. The gist is that the same technique works: you can send a few videos combined into one and then just show the one you care about in the player. You might also be wondering why, why in the world we would do this, and I would say, you know, use your imagination. There could be a bunch of examples of this: sporting events, concerts.

2. Feed Synchronization and Viewer Selection

Short description:

To synchronize feeds and allow viewers to choose between different camera angles in live events, a solution is to combine all the feeds at a local encoder box and send them as one video. The video is divided into quadrants, and viewers can select which quadrant they want to watch, ensuring synchronized feeds and a shared audio stream.

But the biggest example that comes to mind, and what we see the most from customers wanting to do stuff like this, is around feed synchronization, particularly around being able to pick between these different feeds in a synchronized way. So, let's say you're streaming live concerts, live music. You've got a bunch of different cameras streaming a bunch of different angles: one's on the drummer, one's on the singer, one's on the audience, and a producer on site is typically deciding which one of those feeds to show at any given time. They might do a nice little transition, go from the drummer to the audience, etc. That producer then sends a feed to an encoder service or whatever else that looks like. I'm using Mux's example here for obvious reasons. That service then broadcasts it to all of your end viewers.

So then those viewers start saying, actually, I just want to watch the drummer all the time and I hate the transitions this producer's doing. They want the power to pick which feed they watch. So you decide, okay, how can we go about building this out? You start thinking, okay, I'll send every camera directly to that encoder or broadcast service, and then every viewer can get all the feeds, in this example, three feeds. And this is where things really get hairy if you start going down this path. Now you've got three different live streams that people can watch, but how do you switch between them? Do people just click another feed? Then you might be a few seconds off in terms of audio for all of them. It can be tough to synchronize that in the client, or honestly, next to impossible to do well.

So one solution would be to send one video again, like you were doing before, but instead of that video being produced, you just combine all the feeds at that level and then send them along. In this example, all the cameras go into that one encoder box locally. It just lays them out in four quadrants, sends that to the encoder broadcaster service, and that goes out to all the viewers. From there, the viewers can pick which one they want. So now you're guaranteed that your feeds are synced, you only have to worry about one audio stream that's shared between all of them for obvious reasons, and then you only show the quadrant of the video that the viewer selects at any given time.
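To make that last step concrete, here's a minimal player-side sketch under the assumptions above: a 2x2 composite playing in a <video> element, with only the selected quadrant drawn to a <canvas> each frame. This isn't the code from the repo mentioned in the FAQ; the element IDs and layout here are illustrative.

```ts
// Hypothetical player-side sketch: play the combined 2x2 video (with its
// single audio track) but only paint the selected quadrant onto a canvas.
const video = document.querySelector<HTMLVideoElement>('#combined-feed')!;
const canvas = document.querySelector<HTMLCanvasElement>('#viewer')!;
const ctx = canvas.getContext('2d')!;

// 0 = top-left, 1 = top-right, 2 = bottom-left, 3 = bottom-right
let selectedQuadrant = 0;

function drawFrame() {
  // Same quadrant math as the FAQ sketch above.
  const srcWidth = video.videoWidth / 2;
  const srcHeight = video.videoHeight / 2;
  const srcX = (selectedQuadrant % 2) * srcWidth;
  const srcY = Math.floor(selectedQuadrant / 2) * srcHeight;

  // Copy just that quadrant, scaled up to fill the canvas.
  ctx.drawImage(video, srcX, srcY, srcWidth, srcHeight, 0, 0, canvas.width, canvas.height);
  requestAnimationFrame(drawFrame);
}

video.addEventListener('play', () => requestAnimationFrame(drawFrame));

// Switching feeds is just changing an index; the audio and timing never change.
function selectQuadrant(index: number) {
  selectedQuadrant = index;
}
```

Because every quadrant comes out of the same decoded frame, switching feeds is effectively instant and the shared audio track never drifts.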
