The Force of Engineering: Bringing Your Own Star Wars Sidekick to Life


In Star Wars, Pit Droids are repair droids designed to maintain the racing vehicles known as "podracers". These droids are 1.19 meters tall and can fold up when not in use. Their comical behavior is explained by the fact that they were programmed with a sense of urgency but without enough processing power to perform complex tasks.

I will show you how I built a realistic Pit Droid and powered it with an NVIDIA Jetson Orin Nano. The droid can perform AI object detection, move its head toward detected objects, and more. You will also see some other droid projects I have worked on.

NVIDIA Developer post: https://blogs.nvidia.com/blog/2023/08/03/goran-vuksic-pit-droid/

Hackster project: https://www.hackster.io/gvuksic/nvidia-jetson-orin-nano-powered-pit-droid-7da0e8

Through this session, you will:

- hear how this robot was assembled,
- learn about Azure AI Studio and Azure Custom Vision,
- learn how to easily prepare and tag images for model training,
- see how to train a model for object detection,
- analyze images from the robot's camera,
- control motors and other IoT devices connected to the robot,
- and much more.

I hope this session will give you a great introduction to AI and IoT, and inspire you to build similar projects on your own!

This talk was presented at C3 Dev Festival 2024.

FAQ

The first Star Wars movie was released on May 25th, 1977.

Goran Vuksic is the co-founder of Synthetic AI Data, with 20 years of experience in IT. He is a Microsoft AI MVP and one of the organizers of Azure Skåne.

Goran Vuksic was inspired by the droids in the Star Wars movies, particularly the Pit Droid from Star Wars Episode 1.

The Pit Droid first appeared in Star Wars Episode 1. It is designed for maintaining podracers, stands 1.19 meters high, and can fold into a compact form when not in use.

3D designs for droids like the Pit Droid can be found on platforms like Etsy, specifically from designers like David Mook, who runs DroidVision.

Goran Vuksic's droids are powered by technologies like Arduino relays, servo motors, the NVIDIA Jetson Orin Nano, and the Raspberry Pi.

Instructions and code for building Goran Vuksic's droids can be found on Hackster.io, where he has documented his projects.

Goran Vuksic faced challenges like sanding and painting 3D-printed parts, fitting electronics into the droid's head, and assembling the droid's components.

The NVIDIA Jetson Orin Nano is a more powerful alternative to the Raspberry Pi, capable of running AI models and supporting more advanced functionality in droids.

Goran Vuksic prefers Python because it is simple, understandable, and has a large community of users.

Goran Vuksic
28 min
15 Jun, 2024

Video Summary and Transcription
May 25th, 1977, the first Star Wars movie inspired generations with its space travel and lightsaber fights. Goran Vuksic built replicas of the Pit Droid and found a community of designers who sell 3D designs for printing. Bringing the droid to a conference is challenging but rewarding. The NVIDIA Jetson Orin Nano is a powerful device for running AI models. Building Star Wars droids at home is possible with open-source development.

1. Introduction to Droids and P-Droid

Short description:

May 25th, 1977, the first Star Wars movie inspired generations with its space travel and lightsaber fights. Droids, small intelligent robots, have always fascinated me. Today, technology allows us to build our own. I'm Goran Vuksic, co-founder of Synthetic AI Data and a Microsoft AI MVP. Let's talk about the Pit Droid, a cheap and expendable droid from Star Wars Episode 1. It can fold into a compact form and was designed for maintaining podracers. I built replicas and found a community of designers who sell 3D designs for printing.

May 25th, 1977, three years before I was born, the first Star Wars movie came to the cinema. For many generations since then, it has been an inspiration because of the space travel and the lightsaber fights. But one interesting thing in there are the droids, those small robots helping out here and there, most of them intelligent in some way. And moving forward to this year, this is something that technology actually allows us to build.

This is something that you can build at your home, and this is the way. So, my name is Goran Vuksic. I'm co-founder of Synthetic AI Data. I have worked in IT for 20 years, mostly in IT management roles, but I'm a tech guy with a tech background, and I like to build stuff, especially things related to AI and IoT. Some of those I will show you today. I'm a Microsoft AI MVP, which means that I'm on stage a lot sharing my knowledge about AI. I'm also one of the organizers of Azure Skåne; Skåne is the southern region of Sweden where I live, in Malmö, and we organize meetups there. If you want to connect, I'm on LinkedIn, where I usually share the topics of AI, IoT, and innovation that are interesting to me.

So, let's speak about the Pit Droid. It first appeared in Star Wars Episode 1, and this is how it looks. They are described as cheap and expendable droids designed for maintaining podracers, those spaceships they race with; they needed some cheap droids to fix them here and there in the pits. It is 1.19 meters high and has the ability to fold into a compact form when it's not in use. They say it was programmed with a sense of urgency, but its minimal logic prevents it from performing complex tasks, so in the movie they look pretty funny because they cannot do complex things. That is also something that triggered my mind and why I decided to go into this story and start building.

This is how it looks. One artist made a 3D design based on the movie, and you can see it over here, with the ability to fold into the compact form. That is the Pit Droid. I built several droid replicas before. I previously used Legos and some other things, but I figured out that Lego builds tend to fall apart, so you need to glue them together and do things like that. Somewhere along the way, on the most famous social network, Facebook, this group DroidVision suddenly popped up, and at that point I didn't know what I was getting into when I clicked on it. There I found David Mook, an STL designer who runs DroidVision and sells those 3D designs on Etsy. If you purchase them, don't share them; I think the designers deserve to have a beer for the awesome work they do, because they recreate those droids and enable other people to print them, selling the designs for just a few bucks.

2. Building the P-Droid

Short description:

There is a whole print club where people share ideas and ask questions. I saw a video of a 3D printed Pit Droid and decided to bring it to life. I ordered the parts and bought a 3D printer. Assembling the droid was fun but also stressful. I made sure the components fit in the head, including lights, an Arduino relay, a webcam, servos for movement, and an NVIDIA Jetson for AI. The process involved sanding the printed parts, wearing protective gear, and painting. Slowly, I assembled the droid piece by piece until it was complete. The first test drive was successful.

There is a whole print club where a lot of people like me nowadays share ideas, ask questions, and things like that. So I saw this video of the Pit Droid, a 3D printed one, how it looks and how it's assembled. I got the idea that maybe I could 3D print this myself and bring it to life. At that point I didn't have a 3D printer, so I decided to purchase the printing as a service online, from a fellow Star Wars fan, Irma Twano. I ordered the parts; later on I bought a 3D printer and figured out it's a lot of fun but a whole new level of stress to get something nicely printed. If you have ever tried, you know what I'm speaking about. So everything started basically here, with getting the parts.

And first, I wanted to make sure that my idea would fit into the droid's head. Because in the head there are those lights that turn on and off. It should fit one Arduino relay, which is like a switch for the lights. It should fit one webcam in front of the lights to be able to do computer vision. Two servos to move the head left and right, up and down. And also an NVIDIA Jetson Orin Nano as the brain of the droid. As the movie says, they were not really smart. On the Jetson we can run AI models at the edge, which is a lot of fun, and you can actually make those droids smart. I'll show you how.

So the process started. Sanding, believe me, is not fun. You can try to do it manually, but it will take ages, so you want a machine to do it quicker. It produces super fine plastic dust that you could breathe in, so you need to wear protection for your eyes and mouth. Painting, yeah, it's a total mess if you live in an apartment. I had a conversation with my landlord about why the floor in the basement is in many different colors, mainly white and red. But slowly, going through the process, the droid started coming together piece by piece. It went with the legs, then adding the torso, adding the arms to it, forming the whole body. And then the head. This is how it looked when I assembled it, with all the electronics in the head. And when I made the first test drive, the droid turned on the lights and started moving. So that was the Hello World.

3. Bringing the Droid to a Conference

Short description:

I have a droid that is able to do many things, including computer vision. I wrote a blog post on Hackster.io and it got featured. NVIDIA interviewed me and wrote a blog post about the AI-powered droid. I proudly call myself an Edge AI Jedi. Bringing the droid to a conference is a fun but challenging experience. Assembling and disassembling it can be tricky, and airport security can raise eyebrows. Despite the challenges, the moment of happiness after assembling the droid is worth it. However, connecting to hotel Wi-Fi can be a problem.

I was super happy about it. I have a droid that is moving, that's able to do many different things, and I added some computer vision to it. And I wrote a blog post on Hackster. Hackster.io is a community for IoT enthusiasts, with a lot of nice projects there. You can find this project there with all the code, instructions on how to do things, relevant links, and everything. It got featured by Hackster, which was also super nice. And NVIDIA also called to interview me and wrote this blog post: "Developer taps NVIDIA Jetson as force behind AI-powered droid". They called me an Edge AI Jedi, which was super nice. Just to brag about it.

So how do you bring your droid to a conference? It's "a lot of fun" packing it, disassembling it, and putting it back together again; in other words, not a good idea. One more thing regarding the bag: if you see someone at airport security sweating over a bag like this, it's because when you open it, there's a droid head with a bunch of wires and things pointing out. So it's not a terrorist; it's just a Star Wars fan explaining to security that he's trying to get to a conference. When you get to the hotel, you need to put things together, and usually you don't have the screwdrivers, so you need to run out and buy them. Maybe you forgot the pliers and something is stuck and you need to pull it out. Then, okay, slowly things get into place. It's a messy experience. In the end, you have this moment of happiness: yeah, we assembled something. This is my co-founder, Sherry. We often do sessions together, so I'll show you what we did. Those are pictures from last year, when we documented how it went. This was the moment of happiness, which doesn't last long, because then you figure out it's not connecting to hotel Wi-Fi.

4. Demoing the Droid at a Conference

Short description:

Bringing the droid to a conference can be a fun experience, although not everyone is happy to see it. Last year in Utrecht, we demonstrated the droid controlled by a Neurosity headset that measures brain waves. While the droid is not currently connected, you can still get an idea of how it looks and what you can build. For the technical details, you need something to run the droid, like a Raspberry Pi. However, since it can't run AI models, I opted for a more powerful alternative.

Hey, it needs to run tomorrow, so let's see what we can do about it. Then you spend the night like this. But yeah, bringing it to the conference can also be a fun experience. Not everybody is happy to see the robot, and some have different complaints about it. But sooner or later you get there.

This is last year in Utrecht, one conference where we got on stage. The thing Sherry is wearing on her head is a Neurosity headset. It measures your brain waves, and you can program it to recognize a specific thought. What we demoed live on stage was Sherry turning on the droid with a thought, and then moving her head and the droid moving its head along. The droid is also here today, actually. It's not connected at the moment; that would be a bit of wiring and such that I tried to skip this year. But you can get an idea of how it looks and what you can actually build yourself. Check out Hackster; everything is over there.

So let's get a bit into the technical details of how it works. You need the brain, something that will run your droid. One option is to use the Raspberry Pi. It is a tiny desktop computer, basically; this model costs 35 bucks, and there are some other versions. It's pretty cheap. The Raspberry Pi is not capable of running AI models locally. So if you are using a camera, you can take a picture, upload it to the cloud, process it there, get data points back, and perform some action based on that picture. Therefore, I decided to go with a bit more powerful option.

5. Exploring the NVIDIA Jetson Ori Nano

Short description:

The NVIDIA Jetson Orin Nano is a powerful device capable of running AI models. Raspberry Pi recently announced a new AI kit that offers a cheaper option for running AI models. The Jetson has a microSD card slot, a 40-pin expansion header, and various ports for power and connectivity. When connecting devices, you need to consider the pin layout and ensure that the power, ground, and signal pins are properly connected. Writing code for this device is straightforward using the GPIO library.

And this is the NVIDIA Jetson Orin Nano. There is an older version, which is cheaper, but it comes with a passive cooler; this one has an active cooler. And because what we are speaking about is 3D printed plastic, of course you want the active cooler there. They call it a Raspberry Pi on steroids for a reason: it's capable of running AI models and performs really, really nicely. A bit pricey, but highly recommended.

There is also, and this is new from a few days ago, the AI kit Raspberry Pi announced. I haven't tried it out yet; I subscribed for pre-order and haven't received anything. It's a cheaper option with which you should be able to run some AI models on a Raspberry Pi too. I guess it lands somewhere between the Raspberry Pi and the Jetson in the price range.

Let's have a closer look at the Jetson. Below the cooler there is a microSD card slot; you flash your OS onto the card and put it inside. It has a 40-pin expansion header, which is where you connect different devices; I'll show you how. And there are other parts where you plug in the power, the network, the USBs, the DisplayPort, and so on. Number nine here is the connector for Raspberry Pi cameras, if you would like to use one.

And there are three or four things to know about this pin layout where you connect stuff. If you want something to work, you need to connect it to power and to ground; the red and yellow ones are power, and the green ones here are marked as ground. And then you have a signal pin. So you give something power, and then with the signal pin you say: okay, turn it on, turn it off. That's basically how it works, and it's super simple with most IoT devices. This is how the LED lights are connected: you need to connect a relay, give it power, and send it an impulse, and then it closes the circuit, turning on the LEDs. If you look at the code, it's also very, very easy to write your own; you need to use the GPIO library.

6. Working with GPIO and Computer Vision

Short description:

GPIO is used to define output pins in code, making it easy to turn them on or off. Servo motors can be controlled by defining the output pin and specifying the desired movement. Assembling all the components together brings the droid to life. The camera adds awareness to the droid's surroundings and enables image analysis. Computer vision opens up a wide range of possibilities, such as object detection and tracking. You can also implement speech-to-text functionality and other features. Building Star Wars droids at home is not complicated. The community believes in open-source development, allowing people to create their own versions of these droids.

GPIO stands for general-purpose input/output. What you are doing here in the code is defining the output pin, that is, where you are sending information, and then you just write a one to this pin, which turns it on, or send a zero, which turns it off. Super simple to make it work.
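The pattern just described can be sketched in a few lines with the Jetson.GPIO library that ships with Jetson boards. The pin number and relay wiring below are assumptions for illustration, and the import is guarded so the logic also runs on a machine without the hardware:

```python
# Minimal sketch, assuming the Jetson.GPIO library (preinstalled on Jetson
# boards). On other machines the import fails, so we guard it; the relay pin
# number is a hypothetical choice for this wiring.
try:
    import Jetson.GPIO as GPIO
    HAVE_GPIO = True
except ImportError:
    HAVE_GPIO = False

RELAY_PIN = 7  # hypothetical board pin wired to the relay's signal input


def set_relay(on: bool) -> int:
    """Drive the relay's signal pin; returns the logic level used (1 or 0)."""
    level = 1 if on else 0
    if HAVE_GPIO:
        GPIO.output(RELAY_PIN, GPIO.HIGH if on else GPIO.LOW)
    return level


if HAVE_GPIO:
    GPIO.setmode(GPIO.BOARD)        # number pins by their position on the 40-pin header
    GPIO.setup(RELAY_PIN, GPIO.OUT) # define this pin as an output
    set_relay(True)                 # close the circuit: LEDs on
```

On the real droid, `set_relay(True)` closes the circuit described above and the head lights come on.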

Servo motors, this is how they look. They are in the head, able to move it left, right, up and down. And it's a similar thing with them: you define the output pin and, in a similar way, define how much you want to move the motor, how far you want to turn it. Only a few lines of code.

Assembling all this together, you are actually getting exactly this, what I showed you over here, like, hey, my droid is turning lights on and off, moving the head around. You made it alive. The thing that is left is the camera, making it aware of its own surroundings.

I took this super cheap off-the-shelf camera; I just needed something plug and play, and the reason I liked it is actually this eye, which I was able to disassemble and put in front of the droid's camera lens. So when you have all of this connected together, one way is, as I said, to go to the cloud: with basically two or three lines of Python code you take the picture, send it to some API in the cloud, and process it there. And this is where a computer vision service naturally gives us a lot of possibilities to do different things.
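As a hedged sketch of that cloud round-trip, here is how a request to Azure's Image Analysis REST API (v3.2) can be assembled. The endpoint and key are placeholders you get from your own Azure resource; the helper only builds the request, and the commented lines show the actual network call:

```python
# Hedged sketch: analyzing a camera frame with the Azure Image Analysis
# REST API (v3.2). Endpoint and key are placeholders, not real values.

def build_analyze_request(endpoint: str, key: str, features=("Objects",)):
    """Return (url, params, headers) for a binary-image analyze call."""
    url = f"{endpoint}/vision/v3.2/analyze"
    params = {"visualFeatures": ",".join(features)}
    headers = {
        "Content-Type": "application/octet-stream",  # raw JPEG/PNG bytes in the body
        "Ocp-Apim-Subscription-Key": key,
    }
    return url, params, headers

# With a frame captured from the webcam as JPEG bytes:
#   import requests
#   url, params, headers = build_analyze_request(
#       "https://<your-resource>.cognitiveservices.azure.com", "<your-key>")
#   result = requests.post(url, params=params, headers=headers, data=jpeg_bytes).json()
#   # result["objects"] then holds the detected objects with bounding boxes
```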

So you can detect different objects. The droid could react to people, positioning its head toward a person entering its view, or it could detect some object you are showing it, say a mobile phone, and track the phone with its head. It takes a picture, processes it, figures out where the object is, and positions the head toward it. You can do a lot of different things there. Vision Studio, which came out in July last year, gives you a nice hub where you can see a lot of examples of what is possible, quick examples that you can try out, and then you can use your imagination and build something new with this droid or some other droid you are building.
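The "positions the head toward it" step is plain geometry: measure how far the detected object's center sits from the image center, then nudge the pan/tilt angles proportionally. A minimal sketch; the gain value and the sign conventions are assumptions that depend on how the servos are mounted:

```python
def center_offset(box, frame_w, frame_h):
    """Normalized offset of a box center from the frame center, each in [-1, 1].

    box is (x, y, w, h) in pixels, with (0, 0) at the top-left corner.
    """
    cx = box[0] + box[2] / 2.0
    cy = box[1] + box[3] / 2.0
    return (2.0 * cx / frame_w - 1.0, 2.0 * cy / frame_h - 1.0)


def next_angles(pan, tilt, offset, gain=15.0):
    """Nudge pan/tilt (degrees) toward the object, clamped to a 0-180 servo range.

    The signs here are assumptions; flip them if your head moves the wrong way.
    """
    clamp = lambda a: max(0.0, min(180.0, a))
    return clamp(pan - offset[0] * gain), clamp(tilt + offset[1] * gain)
```

An object detected dead center yields an offset of (0, 0) and leaves the head still; an object near the right edge drives the pan servo a step in that direction on each camera frame.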

You can add speech-to-text so it can recognize your voice commands and perform some actions, and many, many other things. And everything you do is super, super simple. But getting back to the camera and computer vision: you are doing image analysis, trying to understand what is in front of the droid. You could, for example, take a picture and ask the droid to explain what it sees.

That was basically my droid story. I hope it has inspired you a bit so far, or at least you've seen that it's not complicated to build; we can really build Star Wars droids at home. But this is not where my story stops. I don't know if you've seen the NVIDIA GTC keynote by Jensen Huang, their CEO. They presented these droids from Disney, the new ones in the Disney park in the US. Disney fully developed them; each runs on two NVIDIA Jetson Orin modules inside, and they are super cute, right? The thing is, many of us don't really love that Disney, which purchased Star Wars, is slightly changing the movies. As a community, there are a lot of people out there like me who think these things should be open source and people should be able to build them.

7. Reverse Engineering and Building the Droid

Short description:

I started with a dummy version of the droid, using motors and a camera to simulate interaction. I am working on reverse engineering it and have been 3D printing and assembling the parts. It's a long-term project that I do in my free time for fun. Hopefully, I can bring it to the stage next year.

And that's why I, as one of those community members, am working on reverse engineering it. This was the first build, made out of paper; I decided not to 3D print anything at first, and I'm not a 3D design expert or artist, but you can find those in the online communities, big communities where people work to recreate things; I can show you that. Basically I started with a dummy version built with motors and a camera, where if I show it my hand, it turns the light on, and this is light off. And when I say "look at me", which means wait for a command, it tracks my finger to decide where it should position its head.

So I took the approach: if I had a droid like this, how would I interact with it? How would this interaction work? And I tried to replicate that in some dummy models, where basically there are some motors, this light with the switch, and the camera inside. So how is it going? As you can see, in the meantime I got it 3D printed; this is how I spent a lot of time looking at it, wondering if it's going well or badly. So far some parts are printed and some parts are painted. This was about two weeks ago, when I finished some of the painting, so the head is slowly coming together. The 3D designs are available online; you can find them in the different groups and communities that are building and recreating these droids, and also developing some of the electronic components and things like that. It's still a long way to the full droid, and I see it as a long-running project, because this is something I do in my free time for fun. My son is a teenager, being 17 now, and my wife usually says I'm now the one playing with toys. But it's a fun thing and I really like it, and hopefully next year I can bring this one to the stage. So thank you. All right.

QnA

Building and Expanding the Droid

Short description:

The cost of building the droid is less than a thousand US dollars, depending on the brain used. Moving the arms and legs requires additional motors and increases the cost. The focus is currently on the Duckling droid project, with plans to make it smart. This is a free-time project and not related to Skynet. For beginners in robotics, starting with an Arduino kit is recommended.

So hi, Goran. How much money and time do you think it costs all together? Amazing job. Thank you, guys. Thank you. We'll let you get your drink. All together, the Pit Droid costs less than a thousand US dollars, depending on which brain you put inside. As I showed you, the NVIDIA board is about 500, which is expensive, but if you go with a Raspberry Pi, you can build the whole thing for 200 to 300 US dollars.

And the next question, really interesting for me as well: what are the next steps to automate the arms and legs, as you showed from the NVIDIA presentation? Yeah, for the Pit Droid there are also designs for that. I personally found them a bit too complicated to go that far, but if you really want to, you can find those designs; someone in the community already tried it and made it look pretty realistic. Of course, moving one arm requires, I think, about four servo motors, and the other arm four more, so that's already eight, which also brings up the cost. So it's a question of how much you want to invest, and does it really make sense to move the arm? Or you could make it speak instead, since a speaker is much cheaper. It's also a matter of functionality: what's your goal, and what do you want to get out of it?

People are commenting how amazing your creation is. "Love it," someone said. Are you going to make Chopper as well? Yeah, for now I'm focused on the Duckling droid that I showed you over here, and it's a project for this year, for sure, because it will take time to physically build it, and then there is also the development part where I want to make it smart. So for this year at least, no; maybe next year Chopper will come to life. Is this a do-it-yourself Terminator? Do you work for Skynet? No, no, this is a free-time project. I think we all need some kind of free-time project just to experiment with technology, to try out different things, and to get away from our standard daily job. I work for my own startup, and I really like my job; I could work 24-7, but no one can work 24-7, so at some point I need to say, okay, now I need to stop, and then I switch to things like this, or playing bass guitar.

So the next question actually relates to that. A person from the audience said that he is a programmer, but has never done robotics. How can I start making droids like this? Obviously you kind of share the process, but what would be your tip? Where do they start from? The easiest to start is buying some Arduino kit that's pretty cheap, and just figuring out how easy it is. Because all you need to do is connect something to those pins, and you send instructions. Okay, turn it on, turn it off, no matter if it's LED light, is it a servo motor, is it some other device.

Nvidia Chip and Open Source Development

Short description:

Getting an NVIDIA chip for domestic use depends on the scale of the project and the potential return on investment. The Raspberry Pi has low power usage, while the NVIDIA requires more power. The open-source community is developing projects like the Duckling droid, using simulators to teach droids how to walk. Python is chosen for the motor and LED controls due to its simplicity and large community. The progress of the project can be followed on Hackster.io, with full descriptions, pictures, and instructions.

Do you think it is worth it to get an NVIDIA chip for domestic use and research? Yes, sure. It depends on how much you use it and whether it will pay back; for some things you can also experiment in the cloud if that suits you. It depends on the scale of the project. And in terms of power, how much power was required, and what type of batteries did you use? Basically, the Raspberry Pi has its own power connector and really low power usage. The NVIDIA uses a bit more, but it's comparable to plugging in a TV and watching it.

Another question: what development in the open-source community would you like to see next? I pay a lot of attention now to this Duckling droid, for example, and people are using simulators to teach it how to walk, and that is going pretty well. It's pretty amazing to see how it's trained and how it moves, and a lot of people are trying to figure out how Disney built that and to recreate it, so I really look forward to what the community will build. And of course, Disney is taking it forward too.

The next one: why do you use Python for the motor and LED controls instead of another language like C or C++? I use Python because it's simple, it's understandable, and there's a pretty big community of people who understand it.

Does it have a name, and is there any place we can follow the progress? Basically, when I finish the project, I will put it on Hackster.io, so feel free to check over there; the Pit Droid is there, fully described. Before that there was a BD-1 droid, built with Legos, and there are some other projects. As soon as I finish, I write the whole story, add the pictures, how I assembled it, what kind of electronics I added, and all the schemas, code, and instructions. Fantastic. Thank you so much, Goran. Thank you.

Check out more articles and videos

We constantly think of articles and videos that might spark Git people interest / skill us up or help building a stellar career

Don't Solve Problems, Eliminate Them
React Advanced 2021React Advanced 2021
39 min
Don't Solve Problems, Eliminate Them
Top Content
Kent C. Dodds discusses the concept of problem elimination rather than just problem-solving. He introduces the idea of a problem tree and the importance of avoiding creating solutions prematurely. Kent uses examples like Tesla's electric engine and Remix framework to illustrate the benefits of problem elimination. He emphasizes the value of trade-offs and taking the easier path, as well as the need to constantly re-evaluate and change approaches to eliminate problems.
Using useEffect Effectively
React Advanced 2022React Advanced 2022
30 min
Using useEffect Effectively
Top Content
Today's Talk explores the use of the useEffect hook in React development, covering topics such as fetching data, handling race conditions and cleanup, and optimizing performance. It also discusses the correct use of useEffect in React 18, the distinction between Activity Effects and Action Effects, and the potential misuse of useEffect. The Talk highlights the benefits of using useQuery or SWR for data fetching, the problems with using useEffect for initializing global singletons, and the use of state machines for handling effects. The speaker also recommends exploring the beta React docs and using tools like the stately.ai editor for visualizing state machines.
Design Systems: Walking the Line Between Flexibility and Consistency
React Advanced 2021React Advanced 2021
47 min
Design Systems: Walking the Line Between Flexibility and Consistency
Top Content
The Talk discusses the balance between flexibility and consistency in design systems. It explores the API design of the ActionList component and the customization options it offers. The use of component-based APIs and composability is emphasized for flexibility and customization. The Talk also touches on the ActionMenu component and the concept of building for people. The Q&A session covers topics such as component inclusion in design systems, API complexity, and the decision between creating a custom design system or using a component library.
React Concurrency, Explained
React Summit 2023React Summit 2023
23 min
React Concurrency, Explained
Top Content
Watch video: React Concurrency, Explained
React 18's concurrent rendering, specifically the useTransition hook, optimizes app performance by allowing non-urgent updates to be processed without freezing the UI. However, there are drawbacks such as longer processing time for non-urgent updates and increased CPU usage. The useTransition hook works similarly to throttling or bouncing, making it useful for addressing performance issues caused by multiple small components. Libraries like React Query may require the use of alternative APIs to handle urgent and non-urgent updates effectively.
Managing React State: 10 Years of Lessons Learned
React Day Berlin 2023
16 min
Top Content
This Talk focuses on effective React state management and lessons learned over the past 10 years. Key points include separating related state, utilizing UseReducer for protecting state and updating multiple pieces of state simultaneously, avoiding unnecessary state syncing with useEffect, using abstractions like React Query or SWR for fetching data, simplifying state management with custom hooks, and leveraging refs and third-party libraries for managing state. Additional resources and services are also provided for further learning and support.
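The useReducer point above, updating several related pieces of state in one atomic step, is easiest to see in a plain reducer. A hypothetical fetch-state reducer (the state shape and action names are illustrative, not from the talk):

```typescript
type FetchState = {
  status: "idle" | "loading" | "success" | "error";
  data: string[] | null;
  error: string | null;
};

type FetchAction =
  | { type: "started" }
  | { type: "succeeded"; data: string[] }
  | { type: "failed"; error: string };

// One dispatched action updates status, data, and error together, so the
// three fields can never drift out of sync the way separately-called
// useState setters can.
function fetchReducer(state: FetchState, action: FetchAction): FetchState {
  switch (action.type) {
    case "started":
      return { status: "loading", data: null, error: null };
    case "succeeded":
      return { status: "success", data: action.data, error: null };
    case "failed":
      return { status: "error", data: null, error: action.error };
  }
}
```

The same function plugs straight into `useReducer(fetchReducer, initialState)`, and because it is pure it is also trivially unit-testable.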
TypeScript and React: Secrets of a Happy Marriage
React Advanced 2022
21 min
Top Content
React and TypeScript have a strong relationship, with TypeScript offering benefits like better type checking and contract enforcement. Failing early and failing hard is important in software development to catch errors and debug effectively. TypeScript provides early detection of errors and ensures data accuracy in components and hooks. It offers superior type safety but can become complex as the codebase grows. Using union types in props can resolve errors and address dependencies. Dynamic communication and type contracts can be achieved through generics. Understanding React's built-in types and hooks like useState and useRef is crucial for leveraging their functionality.
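The union-types-in-props idea mentioned above can be sketched in a few lines. A hypothetical button prop type (the names are made up for illustration) where a discriminated union rules out invalid prop combinations at compile time:

```typescript
type LinkProps = { kind: "link"; href: string };
type ActionProps = { kind: "action"; onClick: () => void };
// A button is either a link or an action, never both: passing `href` to an
// "action" button is a compile-time error, not a runtime surprise.
type ButtonProps = (LinkProps | ActionProps) & { label: string };

function buttonSummary(props: ButtonProps): string {
  // Narrowing on the `kind` discriminant gives type-safe access per branch.
  return props.kind === "link"
    ? `${props.label} -> ${props.href}`
    : `${props.label} [onClick]`;
}
```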

Workshops on related topic

React Performance Debugging Masterclass
React Summit 2023
170 min
Top Content
Featured Workshop
Ivan Akulov
Ivan’s first attempts at performance debugging were chaotic. He would see a slow interaction, try a random optimization, see that it didn't help, and keep trying other optimizations until he found the right one (or gave up).
Back then, Ivan didn’t know how to use performance devtools well. He would do a recording in Chrome DevTools or React Profiler, poke around it, try clicking random things, and then close it in frustration a few minutes later. Now, Ivan knows exactly where and what to look for. And in this workshop, Ivan will teach you that too.
Here’s how this is going to work. We’ll take a slow app → debug it (using tools like Chrome DevTools, React Profiler, and why-did-you-render) → pinpoint the bottleneck → and then repeat, several times more. We won’t talk about the solutions (in 90% of the cases, it’s just the ol’ regular useMemo() or memo()). But we’ll talk about everything that comes before – and learn how to analyze any React performance problem, step by step.
(Note: This workshop is best suited for engineers who are already familiar with how useMemo() and memo() work – but want to get better at using the performance tools around React. Also, we’ll be covering interaction performance, not load speed, so you won’t hear a word about Lighthouse 🤐)
React Hooks Tips Only the Pros Know
React Summit Remote Edition 2021
177 min
Top Content
Featured Workshop
Maurice de Beijer
The addition of the hooks API to React was quite a major change. Before hooks most components had to be class based. Now, with hooks, these are often much simpler functional components. Hooks can be really simple to use. Almost deceptively simple. Because there are still plenty of ways you can mess up with hooks. And it often turns out there are many ways you can improve your components with a better understanding of how each React hook can be used. You will learn all about the pros and cons of the various hooks. You will learn when to use useState() versus useReducer(). We will look at using useContext() efficiently. You will see when to use useLayoutEffect() and when useEffect() is better.
React, TypeScript, and TDD
React Advanced 2021
174 min
Top Content
Featured Workshop
Paul Everitt
ReactJS is wildly popular and thus wildly supported. TypeScript is increasingly popular, and thus increasingly supported.

The two together? Not as much. Given that they both change quickly, it's hard to find accurate learning materials.

React+TypeScript, with JetBrains IDEs? That three-part combination is the topic of this series. We'll show a little about a lot. Meaning, the key steps to getting productive, in the IDE, for React projects using TypeScript. Along the way we'll show test-driven development and emphasize tips-and-tricks in the IDE.
Master JavaScript Patterns
JSNation 2024
145 min
Top Content
Featured Workshop
Adrian Hajdin
During this workshop, participants will review the essential JavaScript patterns that every developer should know. Through hands-on exercises, real-world examples, and interactive discussions, attendees will deepen their understanding of best practices for organizing code, solving common challenges, and designing scalable architectures. By the end of the workshop, participants will gain newfound confidence in their ability to write high-quality JavaScript code that stands the test of time.
Points Covered:
1. Introduction to JavaScript Patterns
2. Foundational Patterns
3. Object Creation Patterns
4. Behavioral Patterns
5. Architectural Patterns
6. Hands-On Exercises and Case Studies
How It Will Help Developers:
- Gain a deep understanding of JavaScript patterns and their applications in real-world scenarios
- Learn best practices for organizing code, solving common challenges, and designing scalable architectures
- Enhance problem-solving skills and code readability
- Improve collaboration and communication within development teams
- Accelerate career growth and opportunities for advancement in the software industry
Designing Effective Tests With React Testing Library
React Summit 2023
151 min
Top Content
Featured Workshop
Josh Justice
React Testing Library is a great framework for React component tests because there are a lot of questions it answers for you, so you don’t need to worry about those questions. But that doesn’t mean testing is easy. There are still a lot of questions you have to figure out for yourself: How many component tests should you write vs end-to-end tests or lower-level unit tests? How can you test a certain line of code that is tricky to test? And what in the world are you supposed to do about that persistent act() warning?
In this three-hour workshop we’ll introduce React Testing Library along with a mental model for how to think about designing your component tests. This mental model will help you see how to test each bit of logic, whether or not to mock dependencies, and will help improve the design of your components. You’ll walk away with the tools, techniques, and principles you need to implement low-cost, high-value component tests.
Table of contents
- The different kinds of React application tests, and where component tests fit in
- A mental model for thinking about the inputs and outputs of the components you test
- Options for selecting DOM elements to verify and interact with them
- The value of mocks and why they shouldn’t be avoided
- The challenges with asynchrony in RTL tests and how to handle them
Prerequisites
- Familiarity with building applications with React
- Basic experience writing automated tests with Jest or another unit testing framework
- You do not need any experience with React Testing Library
- Machine setup: Node LTS, Yarn
AI on Demand: Serverless AI
DevOps.js Conf 2024
163 min
Top Content
Featured Workshop (Free)
Nathan Disidore
In this workshop, we discuss the merits of serverless architecture and how it can be applied to the AI space. We'll explore options around building serverless RAG applications for a more lambda-esque approach to AI. Next, we'll get hands on and build a sample CRUD app that allows you to store information and query it using an LLM with Workers AI, Vectorize, D1, and Cloudflare Workers.