As you can see, I can type my name and the name changes inside the video; then I can change this color and the color updates as well, which is super cool.
Now let's see a couple of built-in components. We have, of course, video, but we also have images and GIFs, then audio, and then other components like transitions, series, and, for example, the sequence.
The sequence is a really important component, because it allows us to time-shift something. Let's say I have three scenes, or, for example, that I want an image to be shown only after 60 frames: I can wrap the image in a sequence, and the sequence, with my image inside it, will start after 60 frames.
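To make that concrete, here is a minimal sketch of the idea using Remotion's Sequence and Img components; the image file name is a placeholder:

```tsx
import React from 'react';
import {Img, Sequence, staticFile} from 'remotion';

// The image only appears from frame 60 onward. Inside the Sequence,
// useCurrentFrame() restarts from 0, which is what makes time shifting work.
export const DelayedImage: React.FC = () => {
  return (
    <Sequence from={60}>
      <Img src={staticFile('photo.png')} />
    </Sequence>
  );
};
```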
But okay, these are some cool components, but what if I don't find the perfect component for my specific use case? Well, in that case, we can just use plain HTML, CSS, and JavaScript, because in the end we are inside the React ecosystem, so we can use whatever we want. But we need to keep in mind to use the useCurrentFrame hook when we deal with animations: since Remotion uses parallelized rendering, we cannot use normal CSS animations; we have to drive animations from the useCurrentFrame hook, and we will see an example in just a few seconds.
So let's go to the example. This is the final result, and as you can see here, we just created a simple 3D effect for this presenter component. And how can we do it? Well, first of all, we have to import useCurrentFrame, the spring animation, and useVideoConfig. We get the current frame from Remotion with useCurrentFrame, we create an opacity value, a number from zero up to one produced by a spring animation, and we inject that number, at line 22, into the style. And that's it. We can use this approach for more or less everything. Of course, if we don't like the spring animation, we can also use the interpolate function, also from Remotion, to generate the in-between values.
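The on-screen code isn't reproduced here, but the pattern being described looks roughly like this; the component name, prop, and timing values are assumptions:

```tsx
import React from 'react';
import {interpolate, spring, useCurrentFrame, useVideoConfig} from 'remotion';

export const Presenter: React.FC<{name: string}> = ({name}) => {
  const frame = useCurrentFrame(); // re-evaluated on every frame, safe for parallel rendering
  const {fps} = useVideoConfig();

  // A spring-driven value that goes from 0 to 1, driven by the frame, not by CSS.
  const opacity = spring({frame, fps});

  // Alternative without spring physics, using interpolate:
  // const opacity = interpolate(frame, [0, 30], [0, 1], {extrapolateRight: 'clamp'});

  return <h1 style={{opacity}}>{name}</h1>;
};
```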
Alright, let's see how we can render at scale, because we already saw something about rendering: we saw how to render a single video from the Remotion Studio or from the Remotion CLI. But let's say we don't have to render a single video; let's say we have to render hundreds or thousands of videos. How can we do that? Well, with Remotion, that is also really simple. Remotion, out of the box, offers multiple ways of rendering, but we will focus on just a couple of them.
First of all, we have Lambda, which we saw earlier, and we can also use Docker via the Node.js APIs. There are a couple of differences, and they are worth knowing. First of all, Lambda is usually faster because it is highly parallelized, but unfortunately there are some limitations on the length and size of the videos we can generate. It is also usually cheaper than keeping a Docker container running if we don't have big volumes of rendering.
Let's say I have to render something like 10 videos per month, or 10 videos per day. In that case, Lambda is usually cheaper; but if we have hundreds or thousands of videos, then the Docker option, with, say, an auto-scaling group, is usually cheaper. Docker also doesn't have the limitations on video length and size, but unfortunately it is slower, due to the lack of the extreme parallelization that we have on Lambda.
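As a rough sketch of the Docker option: each worker would render server-side with Remotion's Node.js APIs, roughly like this (the entry point, composition id, and input props are placeholders):

```ts
import {bundle} from '@remotion/bundler';
import {renderMedia, selectComposition} from '@remotion/renderer';

// Bundle the Remotion project once; a Docker-based worker would then run
// the render step below for each job it pulls from a queue.
const serveUrl = await bundle({entryPoint: './src/index.ts'});

const composition = await selectComposition({
  serveUrl,
  id: 'MyVideo', // placeholder composition id
  inputProps: {name: 'Alice'},
});

await renderMedia({
  composition,
  serveUrl,
  codec: 'h264',
  outputLocation: 'out/video.mp4',
  inputProps: {name: 'Alice'},
});
```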
But how does Lambda work? First of all, we have to understand how it works, and then we'll see a tiny demo. We start by deploying a Lambda function and creating an S3 bucket on AWS. Then the Remotion project is deployed to the S3 bucket we just created and served as a website.
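A tiny sketch of that flow with the @remotion/lambda Node.js API; the region, sizes, entry point, and composition id are placeholder assumptions:

```ts
import {
  deployFunction,
  deploySite,
  getOrCreateBucket,
  renderMediaOnLambda,
} from '@remotion/lambda';

// 1. Deploy the Lambda function that performs the rendering.
const {functionName} = await deployFunction({
  region: 'us-east-1',
  timeoutInSeconds: 120,
  memorySizeInMb: 2048,
  createCloudWatchLogGroup: true,
});

// 2. Create (or reuse) the S3 bucket and deploy the Remotion project to it as a site.
const {bucketName} = await getOrCreateBucket({region: 'us-east-1'});
const {serveUrl} = await deploySite({
  bucketName,
  entryPoint: './src/index.ts',
  region: 'us-east-1',
});

// 3. Kick off a render; Lambda splits the video into chunks and renders them in parallel.
const {renderId} = await renderMediaOnLambda({
  region: 'us-east-1',
  functionName,
  serveUrl,
  composition: 'MyVideo', // placeholder composition id
  codec: 'h264',
  inputProps: {name: 'Alice'},
});
```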