Make Real: Tldraw’s Accidental AI Play


This is the full story behind "Make Real", tldraw's viral AI experiment that lets you turn a low-fidelity drawing into a functional website. It's half startup rollercoaster ride and half morality tale about betting on React — as Make Real was only possible because of the ability to put any React component onto tldraw's canvas.

Here's the full behind-the-scenes perspective, and everything we learned along the way.

This talk was presented at React Day Berlin 2024.

FAQ

tldraw is a drawing application that started as a small drawing app and evolved into a canvas tool built with React. It allows users to create shapes, manipulate them, and use features like arrows and custom color pickers.

tldraw is not open source in the traditional sense, but it does allow users to read the code and build with it using the tldraw SDK.

Developers can use the tldraw SDK to build their own applications on top of the tldraw canvas, creating Figma-style apps or other projects without spending extensive time building a canvas from scratch.

tldraw integrates AI to enhance the canvas experience, enabling features like AI-generated images and workflows that incorporate AI-driven processes.

tldraw licenses its canvas technology to other applications and companies that want to integrate a whiteboard or canvas-like experience into their products.

Yes, tldraw can be used for free, with a watermark indicating it was made with tldraw. Licensing is available for removing the watermark.

tldraw uses React and its own signals implementation library to manage performance, particularly for high-refresh screens and multiplayer scenarios.

The goal of tldraw is to make canvas-based applications more accessible and practical, enabling more applications to use direct-manipulation canvases and explore new use cases with AI integration.

Developers can learn more about tldraw and access its SDK at tldraw.dev. For general use, tldraw.com offers a free platform for creating projects.

Steve Ruiz is the founder, creator, and CEO of tldraw.

Steve Ruiz
32 min
13 Dec, 2024
Video Summary and Transcription
I'm Steve Ruiz, the founder and CEO of tldraw. tldraw is a canvas app built in React that allows developers to build with its SDK. tldraw pays attention to details that enhance the canvas experience, and its canvas can host ordinary interactive web content. tldraw's demos showcase its versatility, but not every demo performs well on social media. tldraw's AI experiments let users quickly turn their drawings into AI-generated images. The Make Real project helps users create user interfaces by drawing and providing prompts, generating working prototypes. The system behind Make Real can handle complex interactions and ambitious projects. OpenAI integration and the tldraw computer enable building workflows and multi-stage generations. The goal is to take the canvas from being specialized to being a primitive for the web. tldraw is free for open source and non-commercial use, with options to purchase licenses for additional features.

1. Introduction to tldraw and AI

Short description:

I'm Steve Ruiz, the founder and CEO of tldraw. tldraw started as a small drawing app, but we made it a canvas in React. It's like Figma or Miro, with nice arrows and the ability to curve them around shapes. tldraw is not open source in the traditional sense, but you can build with our SDK. Today I'll talk about tldraw and our experience with AI. tldraw.dev is where developers can play with the SDK and build their ideas quickly.

Thanks for coming. I'm Steve Ruiz. I am the founder, creator, and now CEO of tldraw, which started its life as a tiny little drawing app. Then someone pointed out that, hey, actually, tl;dr is kind of a thing. Maybe you should lean into that more. And so we did. We made our icon into a semicolon, and we raised a seed round based on the idea that the web kind of needs a canvas, and that it would be possible and even somewhat practical to do that in React.

So even though this looks kind of like Figma or Miro, and I can do all the things that you would expect to be able to do in one of these apps, creating shapes, moving them around, direct manipulation, rotating, I am particularly very proud of my arrows. So yes, we do have nice arrows, and they do follow along. All right. It gets weirder. You can even hook arrows into shapes that you drew yourself, and the arrow will curve around them just right.

We are not open source in the uppercase, capital-O definition of open source, but we are open source in the lowercase, come-read-my-code sense. And you can build with tldraw. We have an SDK that you can use to build all sorts of interesting projects on top of it, and we finally have a landing page for the first time in a couple of years. So it's used by some real companies, big and small. So today I am going to talk, again, a little bit about tldraw, but also about this strange experience that we had with AI.

So we're not an AI company. I don't make any money off of an AI product. This is not AI dog food for cats or something. But it was one of the most viral AI products of last year, and we've still had a lot of luck with that this year, mostly just pairing the canvas plus AI. I'll show you what I mean. But just so that my go-to-market hire doesn't kill me: tldraw.dev is where you can play with the SDK and learn all about it. If you're a developer and you've ever had an idea for building a Figma-style app for something, you can use tldraw. You can probably get it done in a weekend. Don't spend two years building a canvas, please.

2. Using tldraw as a User

Short description:

You can use tldraw as a user on tldraw.com to create projects for free. tldraw pays attention to the little details that enhance the canvas experience, like hiding the selection overlay when changing colors. tldraw's canvas is a normal React app, allowing you to interact with different elements, even a YouTube video. It's not TikTok.

I've already done it. It's really hard. And then you can also just use tldraw as a user. Plenty of people do. So we have tldraw.com, you know, what I'm using right now, where you can go and create projects. It's completely free. People use it for all sorts of things. I do my retros there, my wireframes and diagrams, stuff like that. I did my cool demos already.

But yeah, it's fun to look at all the little details in tldraw. We've paid a lot of attention to those little things that add up to a good canvas experience. You know, if I'm changing a color, maybe hide the selection overlay for just a second so that I can see what that color looks like. Unless, of course, my mouse leaves the panel, and then it shows up right away. Whatever.

It's like some tiny little micro-interaction that you might not notice. But there are, like, millions of those inside of a canvas, and we've tried to catch all of them. You can even click and drag to select different colors. Unusual color picker, but there it is. And yeah, the weird thing about tldraw's canvas is that even though it is very high performance, and I can have thousands and thousands of shapes and zoom in and out and all this stuff, it is a web canvas. This is a normal React app. It's DOM. It's SVGs. It's divs. It's whatever. In this case, it's a YouTube video that I can, you know, interact with and play. But I can also rotate it, or I could draw on top of it, and do all these things that normally you don't get to do in canvases using DOM content. As much as I would like to watch all of that Subway Surfers video, this is not TikTok.

3. Demos and Experiments with tldraw

Short description:

This is a React conference, and we create demos for tldraw to showcase its versatility. We can create a window into a document by cutting out selected content, allowing for a gradual reveal. We can also demonstrate PDF editors and control the camera in various ways. While these demos are impressive internally, they may not perform well on social media. Our experiments with canvas and AI, though flawed, are still fascinating.

This is a React conference. We make a lot of demos for tldraw. Part of our job is to say, hey, you can do anything with this thing, and to show what anything could look like. So for example, in this project, someone had an idea: I want to show certain content, you know, what I'm demoing, but I don't want to show the rest of it. So is there a way for me to have a window into my document? I'm like, yeah, we'll just make whatever you have selected be kind of cut out from a mask in front of the rest of it.

So now I can slowly reveal certain content based on what I have selected, which you'll see in a minute, because these are my actual slides. Great little demo. Whatever. People want to see PDF editors? I'll show you a PDF editor. You know, we can control the camera and constrain it in these different ways. And I just lost my slides. They're going to be back in a second. There we go.

Yeah. So, you know, use it the way that you use Preview on your Mac. Also a good demo, especially if you like to use tldraw to make some sort of slide viewer. But it doesn't really kill on social media. Arrows kill on social media. And surprisingly, AI kills on social media. So we make a lot of experiments as well using AI. Again, there's not a lot of people using canvas and AI together, in part because it's kind of bad, and you'll see that in a second. But it's also kind of amazing. And for a lot of products, bad plus amazing is cool for internal demos, but you don't get to share that stuff.

4. tldraw and AI Collaboration

Short description:

For tldraw, AI and lo-fi aesthetics go well together. I'll show you the Draw Fast feature, where you can quickly turn your drawings into AI-generated images. You can manipulate the images and even combine them with other elements. It's a versatile tool for creative experimentation.

For tldraw, they just go well together. It's kind of a lo-fi aesthetic anyway, and so the combination of these things tends to go better. So I'll show you some things.

The first one is not completely in order, but I'll show you Draw Fast. The idea with Draw Fast was: let me take my drawing and turn it into slop. You know? Turn it into an AI-generated image. The conceit is to do that really, really, really quickly. So let's see if I can wake up the models a little bit.

Okay. So in this case, I have this drawing that I've made with big blue here, and I have a prompt, which is: Polaroid photo of shark swimming in ocean. Realistic 4D scan. And I'm just going to take my little drawing of a shark here and kind of swim it through the ocean. All right? And so it's doing this flash-image, latent-consistency-model thing, where it's trying to come up with an image as fast as possible without a lot of iterations.

And you can do... Yeah. Like oh, no. Now there's three sharks. Anything that you can do in a canvas you can do here. You can even, like, collapse it so that you're kind of editing the shape inside of the image. But it's hard to kind of tell what's going on. Here's another one of a tourist photo guy in Alexanderplatz, Berlin. And here's my little guy. And he can just kind of walk around here. You know? And you can combine this with a lot of different things. It will take whatever is on the canvas as the input image. So again, I can, you know, make the guy fall down or whatever. I can also bring in, like, normal photos.

5. Draw Fast Shutdown

Short description:

We had to shut down the project due to its popularity on Russian Reddit and the high cost of random image generation. However, the application layer is open source, so you can use it to create your own product. The canvas is our main focus, while everything else is for fun and inspiration.

Like, you know, here's me trying to get into Berghain or something. I don't think I'm gonna get in with that outfit. So that's a pretty fun project. But it's also something we had to shut down, because it got very, very popular on Russian Reddit, and I was paying a ton of money for random image generation. So if you want to see it, you can go to drawfast.tldraw.com. And even though it's not available anymore to just use as a user, that application layer is open source. You can grab it, bring it in, and make a whole product out of it if you want. It's up to you. But that's the kind of thing we try to do. The canvas is our product. Everything else is just for fun and inspiration.

6. Make Real Project

Short description:

The Make Real project allows users to create user interfaces by drawing and providing prompts, making it easier to communicate design ideas. Users can paste or draw their designs and have the model generate a working prototype. The model takes prompts like 'you are a software developer' and generates prototypes based on the provided drawings or wireframes. Users can also annotate and modify the generated prototypes, making it a collaborative process.

So back to the slides. There are more. The one that went really popular, really viral, was called Make Real. Maybe you saw this on Hacker News or something about a year ago. The concept is: what if I could make user interfaces by just drawing them and saying, you know, I have a diagram, make this thing. I want to click on this button and have it open this up. But I don't really like text prompts. I spend a lot of time talking to ChatGPT and chatbots, but especially with user interfaces, chat is sometimes a challenging interface for the language model. It's hard to put this stuff into words. But it's really easy to just paste it or draw it, then click a button that says Make Real, and have a model take that as a prompt. The actual prompt that we use is something like: you are a software developer, you are used to working from low-fidelity wireframes that are given to you by your designers, who you like, and you hand back working prototypes in a single-page HTML file with a script and all that stuff. And then it just sort of does it. It doesn't always do it well, as we'll see in a second, but it does it. It comes back and says: here you go. Now, the really cool thing for me is that, just like I did previously with the YouTube video, you can draw on top of these things, right? Even though it's a working little bit of a website, a little embedded iframe, I can draw on top of it, which means I can say, you know, make this green, and center the modal, and then grab this in the same way that I first grabbed the drawing and use it as the prompt.
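The round trip described here can be sketched as a single vision-model request: a screenshot of the selected shapes goes out with a developer-persona system prompt, and a single-page HTML prototype comes back. This is a minimal sketch assuming an OpenAI-style chat-completions payload; the function name and prompt wording below are illustrative, not tldraw's actual implementation.

```typescript
// Hypothetical condensation of the Make Real system prompt.
const SYSTEM_PROMPT =
  "You are an expert web developer. You take low-fidelity wireframes " +
  "from your designers and return a working prototype as a single " +
  "HTML file with inline <style> and <script> tags.";

interface MakeRealRequest {
  model: string;
  messages: Array<{
    role: "system" | "user";
    content:
      | string
      | Array<
          | { type: "text"; text: string }
          | { type: "image_url"; image_url: { url: string } }
        >;
  }>;
}

// screenshotDataUrl is a PNG data URL of the selected canvas shapes,
// e.g. produced by the SDK's export-to-image facility.
function buildMakeRealRequest(screenshotDataUrl: string): MakeRealRequest {
  return {
    model: "gpt-4o", // any vision-capable model would do
    messages: [
      { role: "system", content: SYSTEM_PROMPT },
      {
        role: "user",
        content: [
          { type: "text", text: "Turn this wireframe into a working prototype." },
          { type: "image_url", image_url: { url: screenshotDataUrl } },
        ],
      },
    ],
  };
}
```

The follow-up annotation flow works the same way: the screenshot of the previous generation plus the marker drawings becomes the new image input, with the system prompt tweaked to frame it as designer feedback.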

7. The System's Capabilities

Short description:

The system can take previous generations as input, incorporating feedback and annotations from designers to generate user interfaces. It can work with images of interfaces or state charts to create functioning prototypes. The system can even generate interfaces based solely on a state chart, and it streams the HTML and script tags to create an interactive experience. Users can also provide images of existing interfaces and request the system to mimic a specific design, or use a previously generated interface.

And when you do that, when you have one of its previous generations as part of the input, we change the system prompt a little bit to say: hey, you just got feedback, you just got material back from your designers, and here are the annotations that they made on top of your project. And sure enough, we have a green centered modal, based on a drawing and then also a drawing on top of a website. And it works. Let me show you some other things that it can do.

When it comes to behavior, again, you can give it an image of anything: universal whatever interfaces, a screenshot. And the canvas is kind of like a composer for screenshots, if you think about it. You just build images. You put content on there, and you can put anything on there: images, websites, etc. So if I just give it this interface drawing with nothing else, I get a working timer, right? That's pretty cool. When we first started a year ago, the first generally available multimodal model with vision was GPT-4 with Vision. That's the one we used. Sometimes this number would go up rather than down, or sometimes the buttons wouldn't work correctly. So I'm like, well, why don't I just give it a little state chart, right? That shows how it should work. And when I had these two things together, then it would work. But what's also fun is that I could give it just the state chart, and not the image of the user interface, and it would still come up with something. This is based only on this little state machine. And sadly, it streams the HTML, and then it streams the script tag at the end, so we always have this delay before it becomes interactive. But yeah, this is the UI that it created on its own based on what I had. Although, you know what? I kind of think that the round... Eh, we'll see. Anyway, let's keep going.

And yeah, you can just grab stuff like, hey, here's a picture of Stripe. Here's my user interface. Can you make it look like Stripe? And I'll come back to that. Or you know what? I'll just take the one that it baked earlier, like any good food show. This is what they came up with earlier.

8. Handling Complex Interactions

Short description:

The system can handle complex interaction diagrams and generate interfaces based on them. It can also handle ambitious projects and generate functional prototypes over multiple generations. An example is a stop motion camera app that can capture frames, display them, and allow users to add, delete, and play them back as a GIF.

Basically the same, but with the purple thing. Yeah, they really like purple. Doesn't quite get the branding.

Likewise, you can give it fairly complex interaction diagrams. So I have a list of things, and you can imagine trying to type this into a text box. Like, okay, I have a list of things. I'm going to try and do it without talking with my hands. Only one of them can be open at a time. If it's open, it should make room. I see I can't even do it. It should make room for the other ones. And if I open in one, it should close the other. Right? But it got it just based on these images.

You can also plug in really well with... Well, I don't know. Like, the level of ambition can be quite high with these, especially over the course of multiple generations. So in this case, I want to have a stop motion camera app. And I'll let it generate while we talk. I want it to take the feed from my camera and, you know, show me live the feed. And then let me click a button to kind of freeze one of those frames. And then show that previous frame as well as the feed. And then let me kind of keep clicking the button in order to add extra frames to it. I want the frames to be visible down here. I want to be able to delete frames by clicking on the little delete button here. And I want to be able to play it back as like a GIF.

So here's what it came up with. If I give this permission to use my camera and then I reload the page. Let's see if it loads up again. It does. Okay.

9. Camera Feed and ASCII Art Generator

Short description:

The system allows capturing frames from a camera feed, deleting frames, and playing them back. It can also generate ASCII art using a model. Visit makereal.tldraw.com to try it out with your own API keys.

Camera feed. It's very weird to do on stage. Click the capture button. Can I delete frames? Yes, I can. And can I play it? Looks weird. It's kind of bad but kind of amazing at the same time. Being generated just... Yeah. Hell yeah.

Don't thank me. Thank... let's see... Claude, for this one. Yeah, if you go to makereal.tldraw.com, this one is public, and you can enter your own API keys. That's the only challenge. So if you do have API keys for OpenAI or Anthropic, you can use them there. So is that counting down? Good. Cool.

I can show you a couple more, including ones that use code. You can give it code and say: hey, this is how to prompt a... in this case, I'll pull it in. I just made this like five minutes ago. But the idea is to have an ASCII art generator. You can't generate ASCII art in just normal JavaScript, but you can get a model to do it. So in this case, I'm saying: hey, build me this ASCII art generator, and also, here's how to get the ASCII art.

10. OpenAI Integration and the tldraw computer

Short description:

You can use an OpenAI API token to talk to OpenAI and make fetch requests. The new hotness is the tldraw computer, which allows building workflows and multi-stage generations on the canvas. Components can be connected to pipe data from one to another, creating a graph-like structure.

It's like: go to OpenAI, take my API token, and use this prompt, right? And here's a whole method that you can use in order to talk to OpenAI. And it built the user interface for this, but it also built the capability of going off, making a fetch request, and pulling the result back. Extremely accurate. All right, I don't have too much time, so I'm gonna go to the next one.

The next new thing... Yeah, yeah. All right. You'll let me know when I go over. Much more interesting than asking me questions about this. But the new one, the new hotness, is the tldraw computer. Which is: what if we could use the canvas as a place to build workflows, or multi-stage generations that can branch and fork and do all the things that we can do on the canvas, in order to make AI slop? And it turns out that it is possible. And it's also, again, kind of amazing.

So what's a good introduction? Yeah, this is a really good introduction. Right? Very simply, we have all these components, and we can run them. When we run them, a component pipes its data into the next component down the line. Pretty straightforward. If there's another component, it'll just update anything that's connected downstream from it. So, for example, straightforward: a fish living its best life is a text. That's gonna be received as input by this image component. And the image component should, if the demo gods like me, create an image based on that. I don't know what that looks like. But hey! Good times. Right? But you can have this kind of graph, multi-stage thing.
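The piping behavior just described, run a component and then update everything connected downstream of it, can be sketched as a tiny node graph. This is a toy model with hypothetical names, not the tldraw computer's actual data model, and it assumes an acyclic graph (the real product also handles cycles, as shown later in the talk).

```typescript
type NodeId = string;

interface ComputeNode {
  id: NodeId;
  // Produces this node's output from the outputs of its input nodes.
  run: (inputs: string[]) => string;
  output?: string;
}

class Workflow {
  private nodes = new Map<NodeId, ComputeNode>();
  private edges: Array<[from: NodeId, to: NodeId]> = [];

  add(node: ComputeNode) { this.nodes.set(node.id, node); }
  connect(from: NodeId, to: NodeId) { this.edges.push([from, to]); }
  output(id: NodeId): string | undefined { return this.nodes.get(id)?.output; }

  // Run a node, then propagate to every node connected downstream.
  // Note: no cycle guard here; a cyclic graph would recurse forever.
  runFrom(id: NodeId): void {
    const node = this.nodes.get(id)!;
    const inputs = this.edges
      .filter(([, to]) => to === id)
      .map(([from]) => this.nodes.get(from)!.output ?? "");
    node.output = node.run(inputs);
    for (const [from, to] of this.edges) {
      if (from === id) this.runFrom(to);
    }
  }
}

// "A fish living its best life" flows into a (stubbed) image component.
const wf = new Workflow();
wf.add({ id: "text", run: () => "a fish living its best life" });
wf.add({ id: "image", run: (inputs) => `<image of: ${inputs[0]}>` });
wf.connect("text", "image");
wf.runFrom("text");
```

Running the text node pushes its value into the image node, just as running a component on the canvas updates everything wired below it.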

11. Building a Computer-like System

Short description:

You can take a drawing and use it as the input to write a story. The system can also create websites and audio scripts based on the input. By storing state and utilizing natural language instructions, it is possible to build a computer-like system.

So you can kind of pull data out from the image. You can... Let me skip a few steps here and show you what you can do. You can take a drawing. I'm gonna change the drawing. So that we have something new. I'm gonna make this guy... Maybe like a hat. I'll make the hat purple. Right? That's gonna be used as the input to this instruction that says write me a story. So this block is gonna take whatever inputs are connected to it and write a story based on those inputs.

In this case, the only input is this weird drawing of an octopus with a hat. And it made text. Ali the octopus wasn't like the other octopuses. Now it pulled an image off of the thing that I made. And this wasn't working like ten minutes ago. But I don't know why. But you can also pull it to say, hey, create a website based on this. You can create audio scripts based off of something. And then you can use that as the input to something else.

So we were building this, and at first it was like, wow, this is really silly, really fun. And then we're like, you know, hey, you can kind of store state here. You can say: hey, this is a number; then create a cycle that updates that number. You're like, oh, cool, right? We're doing these kinds of natural language instructions, and we're able to store state. I think we could build a computer out of this. So in this case, we have a good-number finder. Our state here is whatever, a number.

12. Computing the Next State

Short description:

The instruction is to compute the next state based on your inputs. By combining nondeterministic thinking with logical flows, you can perform tasks like determining whether a number is good-aligned or sorting songs about love. The language model enables fuzzy judgments that traditional code cannot make.

The instruction is to compute the next state based on your inputs. The input is basically i++, so it's just going to keep going up. And of course there's a cycle, so it keeps being pulled back in. And we also branch off to ask: hey, is this number good? Like, is it good-aligned, or maybe bad or evil-aligned, right? And if it is good-aligned, then add it to this array. And apparently 27 was not good-aligned. Or it is. But 26 wasn't. Right?

We're doing this stuff that you can do with a language model, which is this nondeterministic, fuzzy, maybe poetic thinking. And yet we're also combining it with the type of logical flows that you can do with computing. So the one that I made just recently: let me clear the previous state here. Let's say we want to figure out which songs are about love and which aren't. Generate a random pop song; don't repeat any of the previous results. Is this song about love? If it is, add it to this list. If it's not, add it to this list. The fun thing, again, is that it's a language model. So if the answer is maybe, we can add it to a third list. Right? Because, again: yes, no, maybe? That's a kind of logic computers can't do. The code that I normally write can't do it. But these can. So is Blinding Lights about love? Maybe. Is Watermelon Sugar about love? Yes. So you can do it. And then it cycles back to the top, and this thing can run for hours.
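The yes/no/maybe routing can be sketched as a classifier feeding three buckets. In the real demo the verdict comes from a language model; here the classifier is a hard-coded stub (the song-to-verdict table is purely illustrative) so the routing logic stands on its own.

```typescript
type Verdict = "yes" | "no" | "maybe";

// Stand-in for the language-model call "is this song about love?".
// The answers here are illustrative, not model output.
function isAboutLove(song: string): Verdict {
  const table: Record<string, Verdict> = {
    "Watermelon Sugar": "yes",
    "Blinding Lights": "maybe",
  };
  return table[song] ?? "no";
}

// Route each song into one of three lists based on the fuzzy verdict.
function route(songs: string[]): Record<Verdict, string[]> {
  const lists: Record<Verdict, string[]> = { yes: [], no: [], maybe: [] };
  for (const song of songs) lists[isAboutLove(song)].push(song);
  return lists;
}

const lists = route(["Blinding Lights", "Watermelon Sugar", "Bohemian Rhapsody"]);
// lists.maybe collects the songs a model would hedge on: the branch
// that ordinary deterministic code has no natural place for.
```

The interesting part is the third bucket: a conventional boolean predicate forces yes or no, while the model's "maybe" becomes just another edge in the workflow graph.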

I'll just kind of let it go for a bit. I am at time. I can keep or I can wrap. All right.

13. Demo: Nickname Generator

Short description:

One more demo and then we'll wrap. Let me pitch the nickname generator. It uses data sources like the camera to create fun nicknames based on appearance. Play with it at computer.tldraw.com. Follow me on Twitter at @tldraw.

One more demo and then we'll wrap. All right. The one that I was going to show. Let's see. Well, you know what? Let me just pitch it. Because this is something that you can use. Kind of. You know, let's do this.

Nickname generator. We were kind of playing with this earlier. Just like some of these earlier projects that I showed, you can always pull in different data sources, in this case, the camera. So it's going to make a fun nickname based on my appearance. Click the button. Three, two, one. Oh, that's embarrassing. Peaceful Presenter. Hey! There we go. Not bad. Right. Thank you. If you want to play with this, it is computer.tldraw.com. We did do this in combination with Google, or at least they gave us early access to the models. So you can read all about the theorycrafting behind this. But computer.tldraw.com is the place to play with this one. I will let you in if you have an interesting email address or something.

If you want to follow me, I am on Twitter. tldraw is x.com/tldraw, and we are on all the other platforms.

14. React, Performance, and Control

Short description:

Definitely Twitter and Bluesky are the places to follow along with all the fun. Use tldraw.com as a whiteboard for free. Build something with tldraw.dev. AI augments us; it's not replacing us. React was a good choice for the challenge of building cool stuff. Performance and control were key. Third-party developers can add widgets and shapes with React without a custom rendering engine. Most people use it for productivity and presenting web content on the canvas. Performance matters.

Definitely Twitter slash Bluesky is the place to follow along with all the fun. If you want to use tldraw as a whiteboard, it's free. Multiplayer. It's really good. tldraw.com.

If you want to build something, tldraw.dev. And yeah. Thank you. Catch me later for more demos. Thank you. Thank you.

Well, firstly, you know, thank you so much. Now I can put UI/UX developer on my CV after using tldraw. So easy. Yeah. AI is not replacing us; it's augmenting us. So less code, even, than a no-code tool. Yeah. But there is a good React question here, which is: what was the biggest challenge building all of this cool stuff with React, given this is a React conference? It would be fun to go into a deep dive of the performance and control work that we did with React. We used React so that third-party developers could add their own widgets and build their own shapes in a way that didn't require them to use a custom rendering engine or our own kind of primitives in between. It's turned out to be a good bet. Most people don't want to build graphical apps. They're not using this to build creative painting apps. They're using it for productivity and other stuff where you're taking web content but presenting it on the canvas. AWS dashboards and stuff like that. Performance, definitely.

15. Building a Performant Multiplayer Canvas

Short description:

We had to write our own signals implementation library. Multiplayer was hard to make performant. React is super good. We license the canvas to other applications for whiteboards or canvas-based experiences. It's prohibitively difficult to build from scratch. Licensing the canvas seems to be working.

We had to essentially write our own signals implementation library in order to do all this crazy computed data. And then the multiplayer was also very hard to do, and very hard to make performant, especially on high-refresh screens, right? You have 30 people moving around, data coming in from 30 different sources, but you don't want it trying to update an extra 30 times per frame. So yeah. But it does work. React is super good, and the browser is incredibly capable. So yeah.
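A toy version of the signals idea shows why a dedicated library helps here: atoms hold state, computed values record which atoms they read, and a change marks dependents dirty so derived data recomputes lazily instead of on every frame. tldraw's actual library (Signia) uses clock-based lazy invalidation and is considerably more sophisticated; everything below is illustrative.

```typescript
type Listener = () => void;

// Tracks which computed value is currently evaluating, so atoms
// read during that evaluation can register it as a dependent.
let currentTracker: Listener | null = null;

class Atom<T> {
  private listeners = new Set<Listener>();
  constructor(private value: T) {}
  get(): T {
    if (currentTracker) this.listeners.add(currentTracker);
    return this.value;
  }
  set(next: T) {
    this.value = next;
    for (const l of this.listeners) l(); // invalidate dependents
  }
}

class Computed<T> {
  private cached!: T;
  private dirty = true;
  constructor(private fn: () => T) {}
  get(): T {
    if (this.dirty) {
      const prev = currentTracker;
      currentTracker = () => { this.dirty = true; };
      this.cached = this.fn(); // re-run and re-register dependencies
      currentTracker = prev;
      this.dirty = false;
    }
    return this.cached;
  }
}

// Derived data recomputes only when an input actually changes.
const x = new Atom(2);
const doubled = new Computed(() => x.get() * 2);
```

The payoff for multiplayer is that 30 peers writing atoms per frame only mark things dirty; the expensive derived computations run at most once per read, not once per incoming update.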

In case you ever wanted to try, I don't recommend it. But you can do this type of app using React. Nice. All right. I'm gonna go in democratic order and just ask the most upvoted questions. So, what's in it for you? Out of your heart and your love for triangles and arrows, of course. But how do you make money? So we license the canvas to other applications that want to have either a whiteboard in their app or want to build something that is canvas-based or uses this type of experience. So for example, ClickUp uses tldraw for its whiteboard. Autodesk uses tldraw for a collaboration-whiteboard-like experience inside of their Forma product. And then we have... We're not quite at a million dollars for our first year, but we're really close. There is a demand to build more applications that leverage the canvas, I think, and it's prohibitively difficult to build one yourself. I talk about Mapbox as an example of another type of application where, if it didn't exist, there would just be fewer apps that had maps in them, because no one's gonna build that themselves. So yeah. We license the canvas. It seems to be working. Nice.

16. Building a Future for Tldraw

Short description:

There are more business questions, but I'm gonna be selective. We're like 12 people, mostly in London. The goal is to bring the canvas from being specialized to being a primitive for the web. Tldraw is free for open source and non-commercial use.

There are more business questions, but I'm gonna actually be a little selective here and also give you an opportunity to shout out some other people.

So this question here is: did you build all of this on your own, or how many people are on the project? And there's a related question about how you come up with all these incredible ideas for tldraw. Man, there are more ideas than we have time to build, for sure.

No, we're like 12 people, most of them in London. We have a very nice office in London; if you are in London, let me know and come by. David Sheldrick, myself, Alex Dietrich, Lu Wilson, a.k.a. TodePond. I'm not gonna name the whole team, but those are the first few people who joined. It's a fantastic team, mostly product engineers. We're still hiring for our first designer. A lot of these things just don't really make sense to design in Figma; you really need to be in it, and that's the way that we build it. Nice.

And let me finish with this question, as we are at time: what is the end goal for tldraw now? It's of course, like you said, to propagate more canvas user interfaces, but do you have a bigger sort of dream for this? Sort of. I love this type of application, this direct-manipulation canvas.

And it really has been... You really haven't been able to, for example, build a startup that uses a canvas, have it be good, and not spend your first two years just on the canvas alone. So if tldraw can be that big unlock for hundreds or thousands of applications that have this direct-manipulation canvas, that can leverage it in different ways and really discover new killer use cases for the canvas, then I would be happy. But especially now that we had this unexpected new software paradigm just appear, not exactly out of nowhere, but definitely hit the mainstream, which is, I guess, AI. I feel like tldraw just happens to be in a really, really good place to explore this. We have this hackable SDK for a canvas in front of us, and that's exactly what you want if you want to explore what software looks like on a canvas with language models alongside it. So if we also happen to accidentally break open the user experience for AI on the canvas, then I would be just as happy.

But the main goal is to really bring the canvas from being something that's very specialized, very high-risk, and often very bad to use into something that is just a primitive for the web. And as I understand it, for open source and non-commercial stuff, tldraw is free, right? So if anybody wants to go and build their own demos, there's no special... It's going to have a little watermark that says made with tldraw in the corner.

17. Tldraw Community and Conclusion

Short description:

You can get stickers if you like the watermark. Purchase a license to remove the watermark. Build something neat with tldraw. Join our big developer community on Discord at tldraw.dev.

I have stickers if you really like the watermark. If you want to get rid of the watermark, then you purchase a license from us. But yeah, it's free, commercial or non-commercial, it doesn't matter. Please build something neat with it.

We have a big community, with a lot of developers who are playing around with it, so we have a Discord. All the information is at tldraw.dev. Nice.

Well, thank you very much, Steve. Let's give Steve a big round of applause. Thank you. Thank you.
