Creating Videos... With React!


I bet you've always used React to build websites and applications, right? What if I told you that we can also edit a movie with it? Learn how a simple render engine built with React and Node.js works to make videos using React components.

You will also learn how to create a scalable render farm using AWS. The main library used to build the render engine is Remotion, which allows you to render videos programmatically.

This talk was presented at React Summit 2024. Check out the latest edition of this React conference.

FAQ

Remotion is a React library that allows developers to create videos programmatically using React components.

Using React to create videos allows developers to build declarative and reusable components, version their videos, and automate the entire video creation process.

A video is composed of a sequence of images over time, called frames, and one or more audio tracks.

You can start a Remotion project by running the command 'npx create-video@latest' and following the simple wizard to select options such as TypeScript or JavaScript and a plain React or Next.js template.

A composition in Remotion is an entity that can be rendered. It has an ID, a React component, a duration, frames per second (FPS), a width, a height, and default props.

Advantages include the ability to create reusable components, version control for videos, and automation of the video creation and uploading process through API calls.

Use cases include creating parameterized videos, automated video workflows, and cloud-based video SaaS products.

You can render videos at scale using Remotion Lambda for high parallelization or Docker for handling larger volumes and longer videos.

The Remotion Player allows developers to load a Remotion project inside a simple React application for a live preview of the video.

Remotion offers built-in components for video, images, GIFs, audio, and sequences, among others.

Alfonso Graziano
20 min
18 Jun, 2024

Video Summary and Transcription
Today's Talk covers creating videos with React, including using Puppeteer and FFmpeg to build videos frame by frame. The Remotion library offers advantages such as declarative and reusable components, versioning, and automation. The Talk also demonstrates building a video with Remotion, embedding previews in React, and customization options. It explores rendering at scale with Remotion's Lambda or Docker options and the rendering process using Lambdas.
Available in Spanish: ¡Creando Videos... Con React!

1. Introduction to Creating Videos with React

Short description:

Today we will talk about how to create videos with React. Learn how to create videos programmatically using just React. Also, we will see how to build a JSON-based render engine.

Welcome, everyone. Thanks for being here. I'm super excited to give this talk and today we will talk a little bit about how to create videos with React.

But the first time I heard that you can create actual videos with React, my reaction was more or less like that. I wasn't sure why this was a thing. In fact, you can use tools like DaVinci Resolve or Adobe Premiere Pro; there are a lot of tools for creating videos. So why should you use a front-end framework or front-end library to create a video? Well, it makes a lot of sense for a lot of reasons, and bear with me, because today we will learn how to create videos programmatically using just React. And as a bonus, in case you are interested, we will quickly see how to build a JSON-based render engine on top of this library.

So just a couple of words about me. I'm Alfonso. I'm from Italy. I'm a senior engineer at NearForm, and if you're interested in the slides of this talk, you can just scan the QR code. Before we move on to the content, just a few words on NearForm: we are an independent team of engineers, designers, and strategists, and we build digital solutions and products. We have more than 10 years of experience, with more than 400 people across roughly 30 countries. So in case you need anything related to Node.js or the React ecosystem, just send us a message.

2. Building Videos with React

Short description:

A video is a sequence of images over time, with frames per second and audio tracks. React can be used to build each single frame by mounting components inside the viewport. Puppeteer is used to take a screenshot of the viewport for each frame, and FFmpeg is used to stitch the frames together and add audio. Remotion is a React library for creating videos programmatically.

Let's get started from the ground up. So what is a video? We can see a video as a sequence of images over time, and the number of images in a second is called frames per second. So we can have videos with 30 frames per second or 60 frames per second. It usually depends on the use case, right? For example, films usually have 24 or 25 frames per second. And then of course we have audio. So we can have a single audio track, a stereo audio track, and we can have multiple audio tracks as well. But these are the real basics.

So: images over time with one or more audio tracks. Now the intuition here is really interesting, because we can use React, our front-end framework, our front-end library, to build each single frame. So we mount all the components inside the viewport. Of course, we bundle everything with Webpack. In this way, we can put images, text, videos, whatever we can think of, on the screen. Then we use Puppeteer to take a screenshot of the viewport for each frame; of course, to make it performant, this is parallelized. And then, once we have all the frames, we use FFmpeg to stitch all the frames together, add the audio, and build the final video in a container like the mp4 container. All this is done by a beautiful library called Remotion. Remotion is a React library to create videos programmatically. And once we download Remotion, we get a lot of things with it.
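The frame-by-frame pipeline just described can be sketched in a few lines of plain TypeScript. This is a toy illustration, not Remotion's actual code: the helper names (totalFrames, frameFilename, buildStitchCommand) are made up, and the FFmpeg invocation is a simplified example of the kind of command used for stitching.

```typescript
// Toy sketch of the screenshot-and-stitch pipeline described above.
// All names are hypothetical; Remotion's real pipeline is far more involved.

// A video is images over time: fps * seconds frames in total.
function totalFrames(fps: number, durationInSeconds: number): number {
  return Math.round(fps * durationInSeconds);
}

// One screenshot per frame, e.g. frame-0000.png, frame-0001.png, ...
function frameFilename(frame: number): string {
  return `frame-${String(frame).padStart(4, "0")}.png`;
}

// FFmpeg then stitches the numbered frames and an audio track into an mp4.
function buildStitchCommand(fps: number, audio: string, out: string): string {
  return [
    "ffmpeg",
    `-framerate ${fps}`,
    "-i frame-%04d.png", // the numbered screenshots
    `-i ${audio}`,       // the audio track
    "-c:v libx264",      // encode the frames as H.264 video
    out,
  ].join(" ");
}

console.log(totalFrames(30, 10)); // a 10-second video at 30 fps
console.log(frameFilename(7));
console.log(buildStitchCommand(30, "audio.mp3", "out.mp4"));
```

The point of the sketch is only the shape of the work: one screenshot per frame (parallelizable, since each frame is independent), then a single stitching pass at the end.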

3. Advantages of using Remotion

Short description:

We have the React library, the Remotion Player, and Remotion Lambda for fast cloud rendering. The advantages include building declarative and reusable components, versioning video and code, and automating the entire video creation process.

So first of all, we have the React library itself, but then we also have the Remotion Player, which allows us to load a Remotion project inside a really simple React application so we can have a live preview. And then we have other components like, for example, Remotion Lambda. Remotion Lambda is really cool. In fact, it allows us to render videos really fast in the cloud with a high level of parallelization. But you may think: okay, still interesting, but what are the real advantages of using this technology instead of, for example, using Premiere Pro to build our video?

Well, first of all, we are developers and we really love to build declarative and reusable components. So let's say that we have our text, our title: we can build a title component and reuse it every time we want. Or let's say that we have a specific transition: we can extract that and reuse it. Or, I don't know, the logo of your company on the top right of the screen: every time you need it, just import the logo component and that's it. So this is really cool. Also, we have versioning, so we can version our video, we can version our code. Since everything is written in React code, we can version our video just like any other codebase. And last but not least, and this is what makes me excited, we can automate the entire video creation process, from the rendering trigger, so we can render through an API call, to the automatic upload, let's say on Instagram or Facebook or wherever we need it. And of course, the video itself can be customized with full automation. So for example, I make an API call and say: this is the structure of the video; the video is rendered, and then it is uploaded automatically.

4. Building the First Video with Remotion

Short description:

Let's build our first video using Remotion. We create a Remotion project and start by creating a composition. The interview component is super easy and includes the logo, presenter info, and the video of the person speaking. The presenter and logo components are plain React components built with HTML, CSS, and JSX. We can start Remotion Studio to manage compositions and render videos.

But without further ado, let's build our first video using Remotion. Bear with me, it just takes a couple of minutes, it is so simple.

So first of all, this is the final result. As you can see here, we have a video; on the top right, we have the logo of the company; and on the bottom, we have the name of the person and their title. And everything is branded, as you can see.

First of all, we create the Remotion project: npx create-video@latest. That's it. We get a really simple wizard where we can select whether we want to use TypeScript or JavaScript and a plain React or Next.js template. Then we start by creating a composition. A composition, inside the Remotion universe, is basically something that you can render; that is it. A composition has an ID and takes a React component. It has a duration, you have to declare the FPS, the frames per second that will be rendered later on, and then you have to define the width and the height. And, interestingly, at line 13 you can also define some default props that will be passed to the interview component.
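A registration along these lines might look like the sketch below. This is a minimal, hypothetical example: the Interview component, its prop shape, and all the values are made up for illustration, not taken from the speaker's actual code.

```tsx
import React from "react";
import { Composition } from "remotion";
// Hypothetical component and prop shape, for illustration only.
import { Interview } from "./Interview";

export const RemotionRoot: React.FC = () => {
  return (
    <Composition
      id="interview"             // the ID you refer to when rendering
      component={Interview}      // the React component that draws each frame
      durationInFrames={30 * 60} // 60 seconds at 30 fps
      fps={30}
      width={1920}
      height={1080}
      // Default props passed to Interview; overridable at render time.
      defaultProps={{
        presenter: { name: "Alfonso Graziano", title: "Senior Engineer" },
        videoUrl: "interview.mp4",
      }}
    />
  );
};
```

Everything the talk lists (ID, component, duration, FPS, width, height, default props) maps one-to-one onto the props of the Composition component.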

Let's now see the interview component, which is super easy. As you can see, we have the logo, which is our image; we have the presenter info, which is another custom component; and then we have the video itself, of the person who is speaking. The Video component is a wrapper on top of the classic HTML video element, but when you have to mount a video, you have to use the Video component from Remotion, because it handles a lot of things for us. And then, really quickly, the presenter component. This is so cool. As you can see, we are building that whole presenter component just with plain HTML and CSS (well, in this case JSX, but you get it, right?). So we are just passing the presenter information and rendering it. This is a plain React component. And the same applies to the logo: we have the Img component, we are defining the style, so we are defining where the logo should be positioned inside the page, and then we use staticFile, also exposed by Remotion, to read from the public folder. As you will see, we have a lot of utilities. And then we can start Remotion Studio. When we run npm run start or npm start, we will see this UI. Inside this UI, we have the compositions; in this case, we created the interview composition. We have a timeline, in case we have multiple sequences. We have the preview itself, we have a lot of tools, and we can render from this interface as well. In case we want to render, we can either click on that button, which shows a UI where we can customize the rendering a little bit, or, if we prefer, we can use the CLI: npx remotion render, the ID of the composition, and then the output location, and that's it.

5. Use Cases and Embedding Previews in React

Short description:

The rendering will start, and it takes, depending on your machine, a few seconds or a few minutes. Remotion has various use cases, such as parameterized videos, automated video workflows, and cloud-based video SaaS. Examples include the Fireship video, a tool for automatic captions, and GitHub Unwrapped. Remotion allows embedding the preview of a video in a React project using the Remotion Player component. Customization options are available, and there are also built-in components.

The rendering will start, and depending on your machine, and also on the length of the video, it will take a few seconds or a few minutes.

But, okay, that's interesting. It makes a lot of sense, but what can I actually do with this technology? What are some use cases? What should I do with Remotion? There are a lot of use cases.

First of all, parameterized videos. As we saw initially, we can pass params to our video, so we can dynamically change, I don't know, the interview URL, we can change the name of the person, we can change everything. Then we can have automated video workflows. So, as we said initially, we can create an entire video from start to finish just with an API call, and for me that's super interesting. And last but not least, we can create cloud-based video SaaS, so we can create products with Remotion.

For example, we can create, I don't know, services for social media, for video editing, for automatic transcription. Everything we can think of. And then let's see just a couple of really interesting examples. There is the Fireship video: an eight-minute video made entirely with Remotion, and in case you are interested, you can even download the source code of this video. There is also a SaaS tool that generates automatic captions. And then, for example, there is GitHub Unwrapped, from last year, if I remember well. As you can see, there are your top languages, there is the TypeScript animation, there are a lot of animations, and all this stuff is made in real time on your machine. So this is the preview of the video, not the rendered video itself; that is why it is so cool. But now let's see how we can embed a preview in a React project, because we said: okay, I can render a video using Remotion, so I can create my video in React, but I can also embed the preview of the video in a React project.

And this is how we do it. It is so easy. We just import the Player component from the Remotion Player package and say: okay, the main component, the entry point for the video itself, for the player itself, is MyVideo. That's it. Of course we also have to give the duration in frames, the composition width and height, the FPS, and some other information. We can add the default controls, or we can write our own controls if we want. And this is more or less how it looks. As you can see, I can type my name and the name is going to change inside the video; then I can change this color and the color is going to change, so this is super, super cool.
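An embed along these lines might look like the sketch below. The MyVideo component and its prop shape are hypothetical, and the dimensions and durations are made-up values, not the speaker's actual code; the live-updating name input mirrors the demo described above.

```tsx
import React, { useState } from "react";
import { Player } from "@remotion/player";
// Hypothetical composition component, for illustration only.
import { MyVideo } from "./MyVideo";

export const App: React.FC = () => {
  const [name, setName] = useState("Alfonso");
  return (
    <>
      {/* Typing here updates the preview live, as in the demo. */}
      <input value={name} onChange={(e) => setName(e.target.value)} />
      <Player
        component={MyVideo}
        inputProps={{ name }}       // props flow into the video component
        durationInFrames={30 * 30}  // 30 seconds at 30 fps
        compositionWidth={1280}
        compositionHeight={720}
        fps={30}
        controls                    // use the built-in player controls
      />
    </>
  );
};
```

Because the preview is just a React component, any state in the host app (the name, a color, anything) can be passed down as input props and reflected immediately.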

6. Customization, Rendering at Scale, and Deployment

Short description:

We can customize and use built-in components in Remotion, such as video, images, audio, and transitions. The Sequence component allows us to time-shift elements. If we can't find a suitable component, we can use HTML, CSS, and JavaScript. However, we need to use the useCurrentFrame hook for animations. Rendering at scale can be done with Remotion's Lambda or Docker options, each with its advantages and limitations.


Now let's see a couple of built-in components. We have, of course, Video, but we also have images and GIFs; then we have Audio; and then we have other components like transitions, series, and, for example, the Sequence.

The Sequence is a really important component because it allows us to time-shift something. Let's say that I have three scenes, or let's say, for example, that I want an image to be shown after, I don't know, 60 frames. I can just say: this is the sequence, the sequence contains my image, and my image is going to be shown inside the sequence, and the sequence is going to start after 60 frames.
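The time shifting described above boils down to a remapping of the timeline: inside a sequence starting at frame `from`, a child sees a local frame of `globalFrame - from` and only appears once that value is non-negative. A rough sketch of that arithmetic, which is illustrative only and not Remotion's actual implementation:

```typescript
// Rough sketch of the timeline arithmetic behind a sequence starting at
// frame 60. Illustrative only; not Remotion's actual implementation.

interface SequenceWindow {
  from: number;               // frame at which the sequence starts
  durationInFrames?: number;  // optional: how long it stays mounted
}

// Is the sequence's content visible at this frame of the video?
function isMounted(seq: SequenceWindow, globalFrame: number): boolean {
  const local = globalFrame - seq.from;
  if (local < 0) return false; // not started yet
  if (seq.durationInFrames !== undefined && local >= seq.durationInFrames) {
    return false;              // already finished
  }
  return true;
}

// The frame the child observes, shifted to start at zero.
function localFrame(seq: SequenceWindow, globalFrame: number): number {
  return globalFrame - seq.from;
}

const image: SequenceWindow = { from: 60, durationInFrames: 90 };
console.log(isMounted(image, 30));  // before frame 60: hidden
console.log(isMounted(image, 100)); // inside the window: shown
console.log(localFrame(image, 100));
```

Shifting the child's frame to start at zero is what lets a component's internal animations be written once and then placed anywhere on the timeline.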

But okay, those are some cool components, but what if I don't find the perfect component for my specific use case? Well, in that case, we can just use plain HTML, CSS, and JavaScript, because in the end we are inside the React ecosystem, so we can use whatever we want. But we need to keep in mind to use the useCurrentFrame hook when we deal with animations. Since Remotion uses parallelized rendering, we cannot use normal CSS animations; we just need to remember to use the useCurrentFrame hook, and we will see an example in just a few seconds.

So let's go to the example. This is the final result, and as you can see here, we just created a simple 3D effect for this presenter component. And how can we do it? Well, first of all, we have to import useCurrentFrame, the spring animation, and useVideoConfig. We can get the current frame from Remotion with useCurrentFrame, and here we are creating an opacity, which is a number from zero up to one, driven by a spring animation; at line 22, we are injecting that animated number inside the style. And that's it. We can use this for more or less everything. Of course, in case we don't like the spring animation, we can also use the interpolate function, also from Remotion, to generate the middle values.
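The interpolate function mentioned above linearly maps a frame range onto an output range (for example, frames 0 to 30 onto opacity 0 to 1), optionally clamping at the edges. A simplified sketch of that behavior follows; Remotion's real interpolate supports multi-segment ranges and easing, which this toy version does not.

```typescript
// Toy version of frame-based interpolation, as used for opacity above.
// Remotion's real interpolate() is richer (multi-segment ranges, easing).
function interpolateFrame(
  frame: number,
  [inStart, inEnd]: [number, number],
  [outStart, outEnd]: [number, number],
  clamp = true
): number {
  // Normalize the frame to a 0..1 progress value over the input range.
  let t = (frame - inStart) / (inEnd - inStart);
  if (clamp) t = Math.min(1, Math.max(0, t));
  // Map progress onto the output range.
  return outStart + t * (outEnd - outStart);
}

// Fade in over the first 30 frames:
console.log(interpolateFrame(0, [0, 30], [0, 1]));  // start of the fade
console.log(interpolateFrame(15, [0, 30], [0, 1])); // halfway through
console.log(interpolateFrame(60, [0, 30], [0, 1])); // clamped after the end
```

Because the value is a pure function of the current frame, every frame can be rendered independently, which is exactly why parallelized rendering rules out normal CSS animations.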

Alright, let's see how we can render at scale, because so far we have seen how to render a single video from Remotion Studio or from the Remotion CLI. But let's say that we don't have to render a single video; let's say that we have to render hundreds or thousands of videos. How can we do that? Well, with Remotion, that is also really simple. Remotion, out of the box, offers multiple ways of rendering, but we will focus on just a couple of them.

First of all, we have Lambda, which we saw earlier, and we can also use Docker via the Node.js APIs. There are a couple of differences, and it's worth knowing them. First of all, Lambda is usually faster because it is highly parallelized, but unfortunately there are some limitations on the length and size of the videos we can generate. It is usually cheaper than keeping a Docker container running if we don't have big rendering volumes.

Let's say that I have to render, I don't know, 10 videos per month or 10 videos per day. In that case, Lambda is usually cheaper; but if we have hundreds or thousands of videos, then the Docker option, with, say, an auto-scaling group, is usually cheaper. Also, Docker doesn't have the limitations on video length and size, but unfortunately it is slower due to the lack of the extreme parallelization that we have on Lambda.

But how does Lambda work? First of all, we have to understand how it works, and then we'll see a tiny demo. So, first of all, we have a Lambda function and an S3 bucket, which are deployed on AWS. Then the Remotion project is deployed to the S3 bucket that we just created, served as a website.

7. Rendering Process

Short description:

The Lambda function is invoked to open the ReMotion project. Multiple Lambdas render portions of the video, which are then stitched together. The final video is uploaded to S3. We can trigger the rendering process by calling the renderMediaOnLambda function, providing the function name, S3 website URL, composition ID, and input props.

Then, once we invoke the render method, the Lambda function is invoked and opens the Remotion project. Starting from the main Lambda, a lot of Lambdas are spawned, and every Lambda renders a tiny portion of the video. Then the initial Lambda downloads all these tiny videos and stitches them together, and the final video is uploaded to S3. As we can see here, this is the process of triggering the rendering. It is really simple. Once we have created the Lambda function (and there is a tutorial for that, of course), we just need to get the function name, and then we can call the renderMediaOnLambda function, which takes the function name, the URL of the site served from S3, the composition ID, and the input props. Doing that, we can customize our video as much as we want.
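A trigger along these lines might look like the sketch below, assuming a Remotion Lambda function has already been deployed. The region, serve URL, composition ID, and input props are placeholders, not the speaker's actual values; treat this as a sketch of the @remotion/lambda client API rather than a drop-in script.

```typescript
import { getFunctions, renderMediaOnLambda } from "@remotion/lambda/client";

const run = async () => {
  // Look up the deployed Remotion Lambda function by region.
  const [fn] = await getFunctions({
    region: "us-east-1",
    compatibleOnly: true,
  });

  // Trigger the distributed render described above; the serve URL points
  // at the Remotion project deployed to S3 as a website (placeholder here).
  const { renderId, bucketName } = await renderMediaOnLambda({
    region: "us-east-1",
    functionName: fn.functionName,
    serveUrl: "https://my-bucket.s3.amazonaws.com/sites/my-video",
    composition: "interview",  // the composition ID to render
    codec: "h264",
    inputProps: { presenter: { name: "Alfonso Graziano" } },
  });

  console.log(renderId, bucketName);
};

run();
```

The inputProps object is what makes the video parameterized: each API call can pass a different name, URL, or structure, and the rendered output lands in the S3 bucket identified by bucketName.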
