I've been building web applications for over two decades, and in the last few years I've been focusing heavily on AI-powered experiences. Before we dive into the how of building generative UI applications, we need to clear up some confusion, because when people say generative UI, they're often talking about completely different things. At its core, generative UI is simple: it's the process of allowing an AI to go beyond text and generate visual interfaces. Instead of an LLM returning a string that you render in a chat bubble, the LLM triggers the creation of UI components: charts, cards, forms, dashboards, whatever makes sense for the user's request.
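To make that contrast concrete, here's a minimal sketch in TypeScript. Everything in it is illustrative; the type names and component names aren't from any particular library. The point is just that the model's response stops being a raw string and becomes a structured description of UI that the client knows how to mount.

```typescript
// Illustrative types only, not tied to any specific framework.

// A plain chat response: a string you drop into a chat bubble.
type TextResponse = { kind: "text"; content: string };

// A generative UI response: the model asks for a component with props.
type UIResponse = {
  kind: "ui";
  component: "WeatherCard" | "LineChart" | "QuizForm";
  props: Record<string, unknown>;
};

type AssistantResponse = TextResponse | UIResponse;

// The client branches on the kind: render text, or mount the named component.
function describe(response: AssistantResponse): string {
  return response.kind === "text"
    ? `Render text: ${response.content}`
    : `Mount <${response.component}> with props ${JSON.stringify(response.props)}`;
}

console.log(
  describe({ kind: "ui", component: "WeatherCard", props: { city: "Berlin" } })
);
```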
But here's where it gets interesting. There's a debate happening right now in the AI community about what "generate" actually means, and honestly, the confusion is justified, because there are fundamentally different approaches. I think of this as a spectrum with four levels. Level one is enhanced markdown. Level two is component selection. Level three is declarative UI generation. And level four is true code generation. Each one gives you a different tradeoff between safety and flexibility. So let's dive into each of these levels.
Level one is what most chatbots use today: enhanced markdown. The AI outputs rich text like tables, code blocks, and formatted lists. This is better than plain text, but it's still static content. ChatGPT rendering a markdown table? That's level one. It's an improvement, but it's not really generative UI; it's just formatted text. Level two is where things start to get practical. This is what I call component selection, and it's what we're starting to see in some production applications today, including the demo you just saw. Here's how it works: you build a library of React components ahead of time, a weather card, a line chart, flash cards, quiz components, whatever your application needs. The AI doesn't create these components, you do; it only selects from them, as in the sketch below.
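Here's a rough sketch of what level two can look like in React with TypeScript. This isn't the code behind the demo; the component names, the registry, and the UISelection shape are assumptions for illustration. The key idea is that the model only ever names a component from a fixed registry and supplies props; it never writes component code itself.

```tsx
import * as React from "react";

// Pre-built components. The AI never writes these; we ship them ahead of time.
function WeatherCard({ city, tempC }: { city: string; tempC: number }) {
  return <div>{city}: {tempC}°C</div>;
}

function LineChart({ title, points }: { title: string; points: number[] }) {
  return <div>{title} ({points.length} points)</div>;
}

// The registry maps the names the model is allowed to request
// to the components we built. Nothing outside this list can render.
const registry: Record<string, React.ComponentType<any>> = {
  WeatherCard,
  LineChart,
};

// What the model returns, e.g. via a tool/function call (illustrative shape):
// it selects a component by name and supplies props, never JSX or code.
interface UISelection {
  component: string;
  props: Record<string, unknown>;
}

// Turn the model's selection into a React element, falling back to raw JSON
// if the model asks for something that isn't in the registry.
function renderSelection(selection: UISelection): React.ReactElement {
  const Component = registry[selection.component];
  if (!Component) {
    return <pre>{JSON.stringify(selection, null, 2)}</pre>;
  }
  return <Component {...selection.props} />;
}

// Example: the model responded with a selection asking for a weather card.
const element = renderSelection({
  component: "WeatherCard",
  props: { city: "San Francisco", tempC: 18 },
});
```

Because the registry is closed, this approach sits at the safe end of that safety-versus-flexibility tradeoff: the model can only compose UI from pieces you've already built and reviewed.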