Make Real: tldraw's AI Adventure


This is the full story behind "Make Real", tldraw's viral AI experiment that lets you turn your low-fidelity drawing into a functional website. It's half startup rollercoaster ride and half moral tale about betting on React: Make Real was only possible because of the ability to put any React component onto tldraw's canvas.

Here's the full behind-the-scenes perspective, and everything we learned along the way.

This talk was presented at React Advanced 2024.

FAQ

What is tldraw?
tldraw is a canvas and whiteboard SDK that can be integrated into a React app to build and extend your own canvas.

Where is tldraw based?
tldraw is based in London.

Can tldraw be used for wireframing?
Yes, tldraw can be used for wireframing, among other purposes like creating flowcharts and assets.

Is tldraw collaborative and free?
Yes, tldraw is collaborative and free to use.

How is tldraw different from other canvas applications?
Unlike other canvas applications that use custom rendering engines, tldraw is built with React and uses the DOM for rendering, making it versatile for web use.

How does tldraw manage performance on a large canvas?
tldraw uses browser APIs and techniques like hiding off-screen elements without unmounting them to manage performance.

Is the code generated by Make Real written in React?
No, the generated code is vanilla HTML, CSS, and JavaScript.

Can tldraw integrate AI models?
Yes, tldraw can integrate AI models to generate content like prototypes and flowcharts from canvas drawings.

How can tldraw be extended?
tldraw can be extended by adding custom shapes as React components. There are plans for plugins to enhance its functionality.

Where can I learn more about tldraw?
You can visit tldraw.com for general use and tldraw.dev for developer resources.

Steve Ruiz
28 min
25 Oct, 2024

Comments

  • Va Da: Is there a fully open source alternative to Tldraw?
  • Steve Ruiz (tldraw): Yes Va Da, tldraw! Our v1 product is MIT licensed. https://github.com/tldraw/tldraw-v1
  • Va Da: Thanks!

Video Summary and Transcription
tldraw is a canvas and a whiteboard SDK that can be integrated into React apps. It allows for high-performance rendering using the DOM and React. tldraw's canvas is powerful and can be enhanced with Vision models for image creation and AI utilization. It enables easy iteration and updating of designs based on annotations. tldraw can be used to create UIs, complex applications, interactive things, calculators, ASCII art, and more. It explores the use of AI on the canvas and can generate images, code, flowcharts, and content. tldraw focuses on performance optimization, AI model exploration, security, usability, and accessibility. The Talk also discusses the business model, future plans, and the vision of building a plugin community for the canvas.

1. Introduction to tldraw

Short description:

I am Steve Ruiz from a company called tldraw. tldraw is a canvas and a whiteboard SDK that can be integrated into React apps. It's beautiful software that allows you to do all the things you can do on the web on the canvas. Canvases are notoriously hard to build and render, but tldraw uses the DOM and React to achieve high performance. You can use tldraw for various purposes like making memes, flowcharts, wireframes, and more. It's collaborative, free, and easy to use.

I am Steve Ruiz from a company called tldraw. Let me put this over here. We are in London. You can follow me on Twitter, and you should, at SteveRuizOK.

Okay. tldraw is this canvas. Whiteboard. It's also a whiteboard SDK, which is the kind of interesting thing, a little bit different about it, where you can put it into a React app and build your own canvas, extend it, add stuff to it, all that. It's beautiful software.

Just to prove that it really, really is a React app, I'll do one of these. This is the only time that I'm gonna show code and then I'll go back to everything else, you know. Hey, hello. What? Hey, hello. Right there, right? So even though the canvas is, you know, it feels like a canvas, it can zoom in, zoom out, you can do lots and lots and lots of shapes and all that stuff, and it'll just do it right, right, right. Every one of these little shapes is its own React component. And so you can do all the things that you can do on the web on the canvas as well. And that is not an obvious decision.

Canvases are notoriously hard to build, and they're also hard to render stuff in. Apps like Miro, Figma, et cetera, use custom rendering engines written in very high-performance languages; they use WASM and all those things in order to get the most out of the browser so that you can have thousands of shapes and move around very smoothly. But it turns out you can actually do that with the DOM and with React as well. So if your app is slow, it's on you, not on this.

So what do you do with this type of app? You can do, you know, whatever. Make your memes, write your "hey, hello", it's got that lovely pressure-sensitive, variable-width line algorithm that I made. You can do flowcharts and make different assets and stuff like that. You can do wireframes. A lot of people use this for wireframes. But you know, it's just for drawing, it's just for communicating. It's collaborative, it's free, tldraw.com, go for it. And for me, that's enough. This is the thing that I wanted to build: technology that makes it really easy for other people to build these types of applications.

2. Enhancing the Canvas with Vision Models

Short description:

tldraw allows you to drop in a component, extend it, add your own stuff, and wire it into your app. The canvas became even more powerful when the Vision models came out, enabling easy creation of images and utilization of AI. The Make Real project, inspired by OpenAI's Vision models, generates code and prototypes based on user drawings. tldraw's web-based canvas allows for advanced functionality like annotation and interactivity.

Drop in a tldraw component, extend it, add your own stuff, wire it into your app. That was the hope. It was 2022 when I started on the idea of using AI on the canvas. AI was kind of getting popular, GPT-2, that type of stuff. And what people would be doing is having GPT-2 generate Mermaid.js diagrams, render them, and then basically render them onto the canvas. I'm like, oh, that's kind of cool, but it's not really that cool. Yeah, whatever, right? Or having templates where you could use the AI to swap things in and out, kind of FigJam style, like their AI-powered template stuff. It's not really exciting to me, but it would have been very easy to build on top of tldraw.

It wasn't until the Vision models came out, and this was, you know, end of 2022, that suddenly the canvas became, like, oh, cool, the models now work with images. We have a canvas that we can very easily create images out of, right? I can just do whatever, copy as SVG or copy as PNG, paste, and now here's my image from whatever I just drew. Maybe we could use that as input to the models. Now, the original idea for Make Real didn't even come from us. It came from a designer named Sawyer Hood, formerly of Figma, who saw OpenAI's presentation about these new Vision models and was like, oh, this is so cool, we can use this to generate code. And he had a little demo where it would create a prototype based on whatever you drew; it would send it to the model and say, hey, model, you are an expert Tailwind and web developer who works from low-fidelity wireframes and creates working prototypes. Here's the latest image from your designers. Please come back with a working prototype. And it would. We took this a lot further. We put it back on the canvas. Again, the canvas is made of web stuff, so we can do this. Sadly, Miro and the others could not, because their canvas wasn't web.
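The flow Steve describes can be pictured as a single request: pair a system prompt with a screenshot of the canvas selection and ask a vision-capable model for an HTML file. This is an illustrative sketch, not tldraw's actual code; the model name and prompt wording are assumptions.

```javascript
// Illustrative sketch of the Make Real request flow (not tldraw's source).
// Builds an OpenAI-style chat payload pairing a system prompt with a
// screenshot of the canvas exported as a PNG data URL.
function buildMakeRealRequest(imageDataUrl) {
  return {
    model: "gpt-4o", // assumed; any vision-capable model would do
    messages: [
      {
        role: "system",
        content:
          "You are an expert Tailwind and web developer who turns " +
          "low-fidelity wireframes into working prototypes. Respond with " +
          "a single self-contained HTML file.",
      },
      {
        role: "user",
        content: [
          { type: "text", text: "Here is the latest wireframe from your designers." },
          { type: "image_url", image_url: { url: imageDataUrl } },
        ],
      },
    ],
  };
}
```

The payload would then be POSTed to a chat completions endpoint, and the returned HTML rendered back onto the canvas as a live shape.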

This little thing that I can still draw on top of and rotate, all that stuff, is a working website. I can double click in here, I can add minutes, I can add seconds, I can start the timer, and it sort of runs. Applause. Yeah, right. Yeah. Which is kind of cool. But again, because it's on the canvas, I can do more than that. I can also annotate. I can say, you know, put in a header here that says, like, timer.

3. Iteration and Updating Designs

Short description:

tldraw allows for easy iteration and updating of designs based on annotations. This workflow enables the creation of virtually anything through step-wise iterations.

I can say make this reset button, like, green or something like that. Right? And then I can use this as the next input to the model. So the first input was based just on my drawing. Now we've sent it to the model and said, not only are you an expert web developer, we also, you know, tell it it's 10,000 years old and that it loves its developers and that if it gets it wrong, the White House is gonna catch fire or something like that. All the nice little prompt engineering. In this case, we say: you've been given a new design that has annotations. Take what you wrote before and update it to reflect those annotations. And I could keep going, keep iterating on this. Which is surprisingly all you need. That's a workflow. You could use this to make basically anything just through these little step-wise iterations.
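The iteration loop described above can be sketched as a follow-up request that carries the previous output forward along with the newly annotated screenshot. Again, this is an illustrative reconstruction rather than tldraw's actual code; the model name and message wording are assumptions.

```javascript
// Illustrative only: builds the follow-up request for an annotated design.
// The previous HTML is included so the model updates it rather than
// starting over from the drawing alone.
function buildIterationRequest(previousHtml, annotatedImageDataUrl) {
  return {
    model: "gpt-4o", // assumed
    messages: [
      {
        role: "system",
        content:
          "You are an expert web developer. You have been given a new " +
          "design that has annotations. Take what you wrote before and " +
          "update it to reflect those annotations.",
      },
      {
        role: "user",
        content: [
          { type: "text", text: "Your previous code:\n" + previousHtml },
          { type: "image_url", image_url: { url: annotatedImageDataUrl } },
        ],
      },
    ],
  };
}
```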

4. Creating UIs and Complex Applications

Short description:

You can make UIs based on various inputs and even do style transfer. Diagrams and graphs are helpful tools for developers to work from. You can also create complex applications with tldraw, such as a stop motion app.

State, you know, state graphs or diagrams or pictures of back end systems. I can make it run from, you know, maybe this is both. In this case, the models are so good now that it doesn't really matter. But it will come up with a UI, based on whatever you give it. The prompting we do is really, like, you cannot not make a UI. You cannot not make a working website. You have to make something. And it will. It just won't necessarily look like what it would if you provided an input.

You can also do a little bit of, like, style transfer. So you can say, like, hey, make this a stopwatch. But also make it look like Stripe or something like that. We'll let this thing cook for a little while. You can also give it the types of diagrams that you would kind of give to developers to say, like, I used to do this, you know, full-time, like, user experience designer whatever. And speccing out certain things, like, very hard to put it into words. I start talking with my hands and stuff, right? So in this case, you can kind of see how it should work. I mean, you're all developers. You probably get stuff like this every now and then from your designers. And it's so helpful just to have a little diagram or a little graph to work from. Much better than, like, a paragraph of text saying, like, well, it should be closed. And then when it opens, any other open thing should close in order to accommodate the room for the whatever. Sketch it up. And that works pretty well as well. I'll show you the kind of, like, cooking show equivalent of, like, and here it is, you know? Although it doesn't do the only one thing can be open at a time. Let's see if it did it in this one.

You can also do, like, surprisingly complex things with this. So in this case, I wanted to have a stop motion app that would take pictures of me and then, like, show, like, an onion skin of I'll just show you what it is, right? I'll let this thing come up with it. And just because, again, while this is loading, I'll show you, you know what? This is pretty cool. I'll let you spend my minutes here. So of course we try our best to stream in the elements as they come from the model.

5. Creating Interactive Things with MakeReal

Short description:

Usually, HTML comes first, followed by a delay in writing the script tag. I can show you the result without writing any code. You can create interactive things with code using Make Real.

But usually what you have is, like, all of the HTML comes first and then this very long stage where it's writing the script tag at the end of the thing. So there's this delay at the end, which is unavoidable.
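The streaming behaviour Steve describes suggests a simple previewing rule: show markup as it arrives, but hold back a partially streamed script tag until the document completes, since a half-written script cannot run. This is a hedged sketch of that idea, not tldraw's implementation.

```javascript
// Illustrative sketch: preview streamed HTML while withholding any
// unfinished <script> section. Returns what is safe to show so far.
function previewStreamedHtml(buffer) {
  // A closing </html> means the whole document has arrived.
  if (buffer.includes("</html>")) {
    return { html: buffer, complete: true };
  }
  const scriptStart = buffer.lastIndexOf("<script");
  const scriptClosed =
    scriptStart !== -1 && buffer.indexOf("</script>", scriptStart) !== -1;
  if (scriptStart === -1 || scriptClosed) {
    return { html: buffer, complete: false };
  }
  // Trim the unfinished script so the preview only sees valid markup.
  return { html: buffer.slice(0, scriptStart), complete: false };
}
```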

But it should. Hang on. Gotta enable camera access here. Refresh. And yeah. So we're going to try this. Again, I didn't write any code here. But I can, you know, do exactly what. And just to prove to you, yeah, like this is what came out. I can delete these just like I asked for. And then I can press play to get my terrible little gifs there, right?

You can also do... the things are going to get progressively dumber here as I run out of time. So I wanted to have a little face that went like this and said words out loud. And this is what I got. Hello. Whatever. It can also do games pretty well. So in this case, I wanted to have a kind of missiles game, where things are falling from the sky and you shoot them. And so while this loads, I'll show you what I came up with. Boom! One shot. Not even any iterations here. Which is pretty amazing. And some of the coolest stuff that people made with Make Real are these games that they've dialed in, playing with their kids and stuff. For a lot of people, especially tech-adjacent people, things like project managers or people who worked at companies but didn't code, Make Real was their first experience. What they told me was that this was their first experience making something that worked. This was that zero-to-one, kind of like, oh, I can create things with code that are interactive. Because it just skipped the entire no-code thing, the entire design tool.

6. Creating Calculator and ASCII Art

Short description:

You could create a calculator by simply writing the word and pressing a button. You can also give it code to generate ASCII art using an OpenAI key.

It was very, very loose. You could just write the word calculator and press a button, and you'd get a calculator. So okay. This one actually looks a little bit better. The missiles are facing the right way. No, they're backwards. But it still works.

All right. Ooh, animations. Wonderful. Love it. You can also actually give it code, if you wanted to. So in this case, I'm saying I want to have, like, an ASCII art generator, where I type in what I want, and then it shows generate some ASCII art. In order to generate the ASCII art, I wanted to take my OpenAI key and make a call to OpenAI and come back with the result. Yeah, send the prompts, et cetera, et cetera.

Oh no, I've leaked my key on stage. That's a new one for me. Don't use that, anyone. There's still money left. I know what I'm doing after this talk. So again, I'll let that load, and I'll just show you the... Let's see what it does. Generating the ASCII art. It's hitting OpenAI, using my key, coming back. Ah, our key? Yeah. All right. That's all credits, anyway, man. Go crazy. Let's try this. That doesn't really look like an octopus at all.

7. Exploring AI on the Canvas

Short description:

You can do pretty amazing things with this. We've been teaching the models to use the canvas, but they're not good at drawing. I asked it to draw Batman, and it didn't turn out well.

Maybe this one will. Hey, why not? So the things that you can do with this are pretty far out. I'll show you a couple of other, kind of like, AI related things. I didn't expect to ever have the canvas being used this way, but because we built... This is the early, like... The moral of the story is, hey, build with the web. Build with tools that allow people to create amazing things with them. And you don't have to know what's gonna happen. Good stuff just happens, right?

Because when these models came out, when it was even theoretically possible to have a canvas full of websites and stuff like that, we'd already built the exact thing that I would have started building. You know what I mean? To get that done. So I'll show you a couple of other things real quick. We've also been looking at... Kind of like teaching the models to use the canvas. So for example, like... By the way, they're not good at this. This is not a thing that the models are good at. They can't really draw very well. So I've asked it to draw Batman. It's just... All right. Cool.

8. Generating Images and Code

Short description:

This isn't image generation; it's made of the same canvas shapes. It can generate flowcharts and storyboard posters, and writes code along the way for a storyboard about the React community.

But the cool thing is that this isn't, like, an image generation. This is the same stuff that I've made, right? So I can also do, like, a... You know, a face. And say, like... I don't know. Make him sad. You know? And it's getting that image, it's coming back with stuff.

All right. That's not very good either. Some things it's better at. Certainly, you know, it can generate things like these flowcharts. I think one of my favorite uses of this is... what do you call it? Storyboards. So I'm going to have it do the poster for the storyboard: the deepest dive of React conference drama, coming soon to theaters near you. Oh! I think it was writing some code here. Let me put that to the back. Depth.

Okay. It's writing code. Good. Starting the React community. Hooks, components, state management. Coming to a conference near you. Wonderful. That's kind of cool. Scene one.

9. Creating Flowcharts and Content

Short description:

Complete the storyboard for scene one of this conference. It's good at creating and moving information. It can generate flowcharts and handle state in React.

Complete the storyboard for scene one of this conference. And I have like 60 seconds left. I'll show you something else that... Oh, that's boring. Catch me later. I'll show you some cool stuff.

Another thing that you can do here is like... All right. So it's not good at drawing. It's not good at that stuff. What is it good at? It's good at creating information, moving information around. This is a really interesting place for these sort of like squishy non-linear type of flows.

So in this case, this is very new. I'm having it kind of complete a... Kind of like a flowchart using LLMs to create the content. So this is a short commercial for the new React 21 version that introduces signals. And React has created... Wait. I'm gonna have to wait for this to complete. Now, it's also generated harsh Twitter feedback for this. And then it's also developed a... Let's read it. Ah, yes. Because React desperately needs another way to handle state. You got it. Right. So this is a way of creating different types of content. So websites, etc. Through this kind of flowchart. And really, anything can be anything.
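The "flowchart that creates content" idea above amounts to a small pipeline: each node holds a prompt, upstream outputs get spliced into downstream prompts, and an LLM call fills in each node. This is a deliberately simplified illustration with made-up names; the `generate` parameter stands in for a real model call.

```javascript
// Hedged sketch of an LLM-driven flowchart. `nodes` must already be in a
// valid topological order; `edges` connect upstream outputs to downstream
// prompts; `generate` is a pluggable stand-in for the LLM call.
function runFlow(nodes, edges, generate) {
  const outputs = {};
  const incoming = (id) => edges.filter((e) => e.to === id).map((e) => e.from);
  for (const node of nodes) {
    // Prepend the outputs of upstream nodes to this node's prompt.
    const context = incoming(node.id).map((id) => outputs[id]).join("\n");
    outputs[node.id] = generate(context ? context + "\n" + node.prompt : node.prompt);
  }
  return outputs;
}
```

With a real model behind `generate`, one node could write the commercial and a downstream node could write the harsh Twitter feedback about it, exactly as in the demo.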

QnA

Using Images and Exploring tldraw

Short description:

Use images as the input for another generation of speech; explore the canvas with tldraw and language models. Steve Ruiz has tldraw demos for days. Questions about Steve's background, generated code, and more.

So I can use this image as the input for another generation of speech. I can make the speech sad and serious and things like that. We're looking at a lot of different ways of... how can we use this incredible, open paradigm of the canvas with tldraw and some of these new language models? I don't even want to know. But... Picture a developer. Oh, sorry. No. Anyway. That's tldraw. I'm Steve Ruiz. I have demos for days and days and days. If you want to catch me later, I'm happy to share. And if you do want to make something with tldraw, tldraw.dev is the website. You can grab it from NPM. Go nuts. Big Discord, et cetera.

Thank you. I want to pull up all of our little questions and see what people want to know. How are you doing? Doing good. Do you know that Steve has a background in art? And then decided to go into tech? Because he wanted to be sadder? I don't know. Not long ago. Not long ago. Is the generated code React? No. Give me one file, HTML, CSS, JavaScript file. So we haven't gone that far. So vanilla? Vanilla. Remember that? Yeah. Oh, god.

So many good questions.

Optimizing Performance and AI Models

Short description:

Handle performance in large canvas using browser APIs and custom signals library. Add context or training data programmatically. Explore AI models and their capabilities.

How do you handle performance in such a large canvas? We use a combination of mostly browser APIs that are available through CSS. So for example, hiding things that are off screen, but not unmounting them, because it would be too slow for them to come in and out of the document. We manage positions kind of imperatively. We built our own signals library to manage the data dependency tree. It is lowercase open source. It's not an open license, but you can read our source code if you want to get into the really fine details. It's really hard to make React do this. But it is possible. And yeah, it can work.
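The culling idea in that answer (hide off-screen shapes with CSS rather than unmounting them, so React never pays the remount cost) can be sketched as a pure function. This is an illustration of the technique, not tldraw's source; the shape and viewport fields are made up.

```javascript
// Illustrative viewport culling: shapes outside the viewport get
// visibility "hidden" but stay mounted. In the DOM this would map to
// element.style.visibility rather than removing the element.
function cullShapes(shapes, viewport) {
  return shapes.map((s) => {
    const visible =
      s.x + s.w >= viewport.x &&
      s.x <= viewport.x + viewport.w &&
      s.y + s.h >= viewport.y &&
      s.y <= viewport.y + viewport.h;
    return { ...s, visibility: visible ? "visible" : "hidden" };
  });
}
```

Hiding via CSS keeps the component tree stable, so panning a shape back on screen is just a style flip instead of a full mount.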

Very cool. Is there a way now or in the future to add your own context or training data programmatically? I'd love to plug in a design system, along with documentation, examples, et cetera. On makereal.tldraw.com, you can change the system prompt if you want. We also have an open source starter kit if you wanted to build your own type of thing with this. You could just grab that and go nuts. The canvas itself is like 99.9% of the code; the Make Real feature set is a very, very thin amount of code on top of that. So it's pretty easy to get going and customize, if you want.

Tell us a little about the AI part of it. The model. We've tried all the models; in fact, on Make Real you can choose models and even make multiple models fight each other and run simultaneously. I mean, I don't make any money off of Make Real, but I use it a little bit to bully... I think I'm bullying them. Encourage. One of the big model providers to care about this type of visual reasoning and visual capability, because it's not an obvious thing that these language models should be able to do. But if they are able to, you know, look at a drawing and not only tell you what it is, but decompose it into its component parts and know what shapes a tree is made out of or what shapes a landing page is made out of, there's a lot of things that you can do with that capability. Every time a new model drops, I'm like, all right, let's see how the new Sonnet compares to a 4o. Yeah, let's see if I can shame OpenAI into improving their capabilities there. I think that's a valiant effort. There's a couple of questions in here.

Security, Usability, and Accessibility

Short description:

Discussing the security of putting keys in the canvas, popularizing the strategy of end users using their own API keys, and the usability and accessibility of generated output.

Interestingly, you guys are so smart: about security, if you can put stuff like keys in your canvas, is that safe? And private? Sort of. I wouldn't do it. And yet. I mean, on Make Real, you also have to put in your OpenAI key in order to even use the thing. I think the input is placeholdered with, like, "risky but cool". By all means, don't use this if you don't want to. I can tell you the code is open source; you can read it. We're not doing anything surprising or shady. It's not a honeypot for stealing OpenAI keys or something like that. But you know, it's essentially the same level of protection that you would use while using these models directly. If there's super sensitive information, maybe you don't put that into a ChatGPT. Same thing with Make Real.

Interesting. Risky but cool is what they call me. I'm sad to say that that strategy of having end users put in their own API keys, to keep a viral project from crippling the company or developer providing it, has caught on a little bit more after Make Real than before. Influencer. I hope. It's probably one of my sins as a developer, popularizing that.

Let's talk a little bit about the generated output. Besides being obviously very cool, is it usable? Is it accessible? Yeah, I didn't show it, but all of those individual projects that are generated, those little websites on the canvas, they're permalinks. You could copy that and, you know, bookmark it. Now you have your own... I made one for calculating coffee-to-water ratios when I'm doing, like, V60s or something. I just needed a little calculator. I made it. Now I use it. You can also take that code into Replit or CodeSandbox or StackBlitz, I think, and continue on. At one point, we did add a code editor into the website for Make Real so that you could edit the code directly, but what we found is that that kind of took the fun out of it, because now you couldn't really tell whether something someone had made on Make Real was great because they managed to convince the model to do it or whether they just wrote it themselves. So we never shipped the editor inside of Make Real. But yeah, it's kind of usable.

Transition, Business Model, and Future Plans

Short description:

Discussing the transition from art to tech, the business model of tldraw, and the future plans for extensibility and plugins.

It's a good starting point. It's not going to put anyone out of work, but it's a good example of, like, this type of visual co-generation.

Now, talk to us a little bit about you and your life, including how and why was this move from art to tech, and is tldraw a company or a hobby?

Yeah, we are a startup. We're a VC-backed startup; I've raised close to $15 million for tldraw. We make money, actually, which is great. We make money by licensing the canvas to other software companies that want to have this type of feature set but don't want to build all that undifferentiated stuff in between. A little bit like a text editor or a data grid: if I wanted to build a data grid, the best possible outcome would be that it ends up about as good as AG Grid, so I would just use AG Grid. Same thing with the canvas: so much of this user experience is going to be the same for all of them that it makes sense for someone to provide it as a commodity. So that's the business side of tldraw. This is more or less the marketing and R&D side of tldraw.

Myself, how I ended up here: background in contemporary art, writing about art, came to the UK, had a studio, was continuing to make work, turned 30, decided to make some money, and went very quickly into design. I had a tiny bit of technical skills, so I ended up in prototyping, the technical part of design, and then interaction design, all that stuff, and then started making open source projects around these very visual problems like arrows or variable-width lines, kind of digital ink. All of this on Twitter. If you like this type of stuff, definitely follow me. I post a lot of GIFs. And one of those things, which was the "hey, let's build a whiteboard", got really popular, and people started paying me to stay unemployed, which is a good foundation for a company.

One last question before we send you to the Q&A rooms: do you have any sort of future visions for extensibility or plugins or play-nice-with-others kind of things? Yeah, I love that stuff. It's very, very easy to extend tldraw. You can add your own shapes. You can write them basically as React components with a little bit more configuration on top. And we probably will ship plugins as something kind of like an end user experience, like on tldraw.com, eventually. It would be great to have a community of developers building, hey, it's a to-do list for tldraw, this type of stuff. That last project that I showed you, the kind of computer flowchart thing, that is a place where I think we'll have a lot more and easier time building a kind of plugin community. And that's very, very new. So yeah, keep an eye out for more there.
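The extension model described here (custom shapes registered alongside the built-in ones, eventually as plugins) can be pictured as a registry mapping a shape type to its renderer. This is a deliberately simplified, framework-free illustration, not tldraw's actual ShapeUtil API; the names are made up.

```javascript
// Illustrative plugin-style registry: each "plugin" registers a renderer
// for its shape type, and the canvas looks renderers up by type.
const shapeRenderers = new Map();

function registerShape(type, render) {
  shapeRenderers.set(type, render);
}

function renderShape(shape) {
  const render = shapeRenderers.get(shape.type);
  if (!render) throw new Error(`No renderer for shape type "${shape.type}"`);
  return render(shape);
}

// e.g. the hypothetical to-do list shape mentioned in the talk,
// rendered here as an HTML string for simplicity.
registerShape("todo", (shape) =>
  `<ul>${shape.items.map((i) => `<li>${i}</li>`).join("")}</ul>`
);
```

In tldraw itself the renderer would be a React component with some extra configuration, but the lookup-by-type idea is the same.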

Building a Plugin Community and Canvas Viability

Short description:

Discussing the vision of building a plugin community, making the canvas a viable foundation for applications, and providing links for further information and updates.

The idea is, again, just to make the canvas extremely viable as a foundation for applications in a way that right now it's like you'll spend your whole seed round building just the canvas so that you can build the differentiated stuff on top of it.

Remind the people where they can find you on the internet. Follow me at twitter.com/SteveRuizOK. I have a blog at steveruiz.me, and tldraw.com to play with the glorified demo. tldraw.dev to learn more about using it in the developer context. And yeah, twitter.com/tldraw to follow all the fun stuff. Thanks, Steve. That was really great. All right. Thank you.

Check out more articles and videos

We constantly think of articles and videos that might spark Git people interest / skill us up or help building a stellar career

Debugging JS
React Summit 2023React Summit 2023
24 min
Debugging JS
Top Content
Watch video: Debugging JS
Debugging JavaScript is a crucial skill that is often overlooked in the industry. It is important to understand the problem, reproduce the issue, and identify the root cause. Having a variety of debugging tools and techniques, such as console methods and graphical debuggers, is beneficial. Replay is a time-traveling debugger for JavaScript that allows users to record and inspect bugs. It works with Redux, plain React, and even minified code with the help of source maps.
A Framework for Managing Technical Debt
TechLead Conference 2023TechLead Conference 2023
35 min
A Framework for Managing Technical Debt
Top ContentPremium
Today's Talk discusses the importance of managing technical debt through refactoring practices, prioritization, and planning. Successful refactoring requires establishing guidelines, maintaining an inventory, and implementing a process. Celebrating success and ensuring resilience are key to building a strong refactoring culture. Visibility, support, and transparent communication are crucial for addressing technical debt effectively. The team's responsibilities, operating style, and availability should be transparent to product managers.
The Whimsical Potential of JavaScript Frameworks
React Summit US 2023React Summit US 2023
28 min
The Whimsical Potential of JavaScript Frameworks
Top Content
Watch video: The Whimsical Potential of JavaScript Frameworks
The speaker discusses the whimsical and detailed work of Stripe, particularly their interactive and dynamic pages. They explain the use of React for building whimsical details and tracking mouse position. The speaker introduces React Spring for smooth animation and React3 Fiber for creating a 3D egg model. They also mention the use of Framer Motion and React server components for animating CSS properties.
Building a Voice-Enabled AI Assistant With Javascript
JSNation 2023
21 min
Top Content
This Talk discusses building a voice-activated AI assistant using web APIs and JavaScript. It covers using the Web Speech API for speech recognition and the speech synthesis API for text-to-speech. The speaker demonstrates how to communicate with the OpenAI API and handle the response. The Talk also explores enabling speech recognition and addressing the user. The speaker concludes by mentioning the possibility of creating a product out of the project and using Tauri for native desktop-like experiences.
A Practical Guide for Migrating to Server Components
React Advanced 2023
28 min
Top Content
React Query version 5 is live, and we'll be discussing the migration process to server components using Next.js and React Query. The process involves planning, preparing, and setting up server components, migrating pages, adding layouts, and moving components to the server. We'll also explore the benefits of server components such as reducing JavaScript shipping, enabling powerful caching, and leveraging the features of the app router. Additionally, we'll cover topics like handling authentication, rendering in server components, and the impact on server load and costs.
The AI-Assisted Developer Workflow: Build Faster and Smarter Today
JSNation US 2024
31 min
Top Content
AI is transforming software engineering by using agents to help with coding. Agents can autonomously complete tasks and make decisions based on data. Collaborative AI and automation are opening new possibilities in code generation. Bolt is a powerful tool for troubleshooting, bug fixing, and authentication. Code generation tools like Copilot and Cursor provide support for selecting models and codebase awareness. Cline is a useful extension for website inspection and testing. Guidelines for coding with agents include defining requirements, choosing the right model, and frequent testing. Clear and concise instructions are crucial in AI-generated code. Experienced engineers are still necessary in understanding architecture and problem-solving. Energy consumption insights and sustainability are discussed in the Talk.

Workshops on related topic

AI on Demand: Serverless AI
DevOps.js Conf 2024
163 min
Top Content
Featured Workshop
Free
Nathan Disidore
In this workshop, we discuss the merits of serverless architecture and how it can be applied to the AI space. We'll explore options around building serverless RAG applications for a more lambda-esque approach to AI. Next, we'll get hands-on and build a sample CRUD app that allows you to store information and query it using an LLM with Workers AI, Vectorize, D1, and Cloudflare Workers.
AI for React Developers
React Advanced 2024
142 min
Top Content
Featured Workshop
Eve Porcello
Knowledge of AI tooling is critical for future-proofing the careers of React developers, and the Vercel suite of AI tools is an approachable on-ramp. In this course, we’ll take a closer look at the Vercel AI SDK and how this can help React developers build streaming interfaces with JavaScript and Next.js. We’ll also incorporate additional 3rd party APIs to build and deploy a music visualization app.
Topics:
- Creating a React Project with Next.js
- Choosing an LLM
- Customizing Streaming Interfaces
- Building Routes
- Creating and Generating Components
- Using Hooks (useChat, useCompletion, useActions, etc.)
Vibe coding with Cline
JSNation 2025
64 min
Featured Workshop
Nik Pash
The way we write code is fundamentally changing. Instead of getting stuck in nested loops and implementation details, imagine focusing purely on architecture and creative problem-solving while your AI pair programmer handles the execution. In this hands-on workshop, I'll show you how to leverage Cline (an autonomous coding agent that recently hit 1M VS Code downloads) to dramatically accelerate your development workflow through a practice we call "vibe coding" - where humans focus on high-level thinking and AI handles the implementation.

You'll discover:
- The fundamental principles of "vibe coding" and how it differs from traditional development
- How to architect solutions at a high level and have AI implement them accurately
- Live demo: Building a production-grade caching system in Go that saved us $500/week
- Techniques for using AI to understand complex codebases in minutes instead of hours
- Best practices for prompting AI agents to get exactly the code you want
- Common pitfalls to avoid when working with AI coding assistants
- Strategies for using AI to accelerate learning and reduce dependency on senior engineers
- How to effectively combine human creativity with AI implementation capabilities

Whether you're a junior developer looking to accelerate your learning or a senior engineer wanting to optimize your workflow, you'll leave this workshop with practical experience in AI-assisted development that you can immediately apply to your projects. Through live coding demos and hands-on exercises, you'll learn how to leverage Cline to write better code faster while focusing on what matters - solving real problems.
Building Full Stack Apps With Cursor
JSNation 2025
46 min
Featured Workshop
Mike Mikula
In this workshop I’ll cover a repeatable process on how to spin up full stack apps in Cursor. Expect to understand techniques such as using GPT to create product requirements, database schemas, and roadmaps, and using those in notes to generate checklists to guide app development. We will dive further into how to fix hallucinations/errors that occur, useful prompts to make your app look and feel modern, approaches to get every layer wired up, and more! By the end, expect to be able to run your own AI-generated full stack app on your machine!
Free webinar: Building Full Stack Apps With Cursor
Productivity Conf for Devs and Tech Leaders
71 min
Top Content
Workshop
Free
Mike Mikula
In this webinar I’ll cover a repeatable process on how to spin up full stack apps in Cursor. Expect to understand techniques such as using GPT to create product requirements, database schemas, and roadmaps, and using those in notes to generate checklists to guide app development. We will dive further into how to fix hallucinations/errors that occur, useful prompts to make your app look and feel modern, approaches to get every layer wired up, and more! By the end, expect to be able to run your own AI-generated full stack app on your machine!
Working With OpenAI and Prompt Engineering for React Developers
React Advanced 2023
98 min
Top Content
Workshop
Richard Moss
In this workshop we'll take a tour of applied AI from the perspective of front end developers, zooming in on the emerging best practices when it comes to working with LLMs to build great products. This workshop is based on learnings from working with the OpenAI API from its debut last November to build out a working MVP which became PowerModeAI (a customer-facing ideation and slide-creation tool).
In the workshop there'll be a mix of presentation and hands-on exercises to cover topics including:
- GPT fundamentals
- Pitfalls of LLMs
- Prompt engineering best practices and techniques
- Using the playground effectively
- Installing and configuring the OpenAI SDK
- Approaches to working with the API and prompt management
- Implementing the API to build an AI-powered customer-facing application
- Fine tuning and embeddings
- Emerging best practice on LLMOps
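To illustrate the prompt-management topic above, here is a minimal sketch of a reusable prompt-template helper; the function name and `{{placeholder}}` syntax are illustrative assumptions, not part of the workshop material:

```javascript
// Illustrative prompt-template helper: fills {{placeholders}} in a
// template string and fails loudly on missing variables, so broken
// prompts surface at build time rather than at the LLM call.
function renderPrompt(template, vars) {
  return template.replace(/\{\{(\w+)\}\}/g, (_, key) => {
    if (!(key in vars)) throw new Error(`Missing prompt variable: ${key}`);
    return String(vars[key]);
  });
}

const template =
  "Summarize the following {{language}} code in one sentence:\n{{code}}";
const prompt = renderPrompt(template, {
  language: "JavaScript",
  code: "const x = 1;",
});
console.log(prompt);
```

Keeping templates as plain data like this makes them easy to version, review, and test independently of any particular LLM SDK.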