Video Summary and Transcription
Today's talk explores intelligent state management in React, highlighting the limitations of traditional state management and the need for innovation. XState Store simplifies state management by providing an easy way to update and retrieve data. The integration of AI and state machines enables the creation of intelligent apps that enhance user experience. The combination of state management and AI is achieved through packages like @statelyai/agent and the Vercel AI SDK. State machines, reinforcement learning, and large language models play a key role in creating intelligent agents. Graph algorithms can be used to traverse state machines and improve user experience. State agents store knowledge in short-term and long-term memory, while state machines provide guardrails and automation in multi-step processes. The impact of large language models (LLMs) on UI performance and future experiments in building AI models to identify state machines are also discussed.
1. Intelligent State Management in React
Today we're going to talk about making state management intelligent. We'll explore the limitations of traditional state management in React and the need for innovation. Using AI, we can push the boundaries of UI and UX. Let's start with state and the useState hook in React. The code for managing state without a library can become complex and hard to understand. That's why there are various state management libraries available, like XState, Redux, Jotai, Recoil, and MobX. For modern projects, Zustand or Redux are popular choices.
Hello, everyone. So today we are going to be talking about making state management intelligent, whatever that means. So honestly, the title of this should have been why David likes state machines so much because that's literally all I'm going to talk about today.
It's so exciting to see all of you here. It's my second time at React Summit, my first time speaking, so this is extremely exciting. A little bit about me. My name is David Khourshid. I'm @davidkpiano. My name is not Piano, but I do play the piano, which is why piano is there. Sorry to confuse you. I work at a company called Stately.ai, which happens to be about the two things we're talking about today: state and AI. And the reason is because we've all done state management in React and our applications, and honestly, there's not much room for innovation, at least until now, now that we have AI everywhere. And so I honestly just want to spend the next few minutes with you just innovating on how we can make state management intelligent.
Now when I say innovation, I'm not talking about this kind of innovation. This is not the kind we want. I mean, I'm sure there's a reason for this, but the American mind cannot comprehend this. I'm talking about innovation like this. I don't know if you've seen tldraw's really, really exciting examples of just doing super cool things with an infinite canvas, really using AI and stretching the limits of what UI and UX can be for our users.
So let's start with state. We probably use state in React, the useState hook, and we're going to make a simple to-do app. If you're not using a state management library, you would probably start like this. You would have todos, setTodos, a whole bunch of useState hooks, and honestly, this code makes me a little bit sad. We have a useEffect in there, just making sure that our to-do is updated. We have all of these useStates all over the place. And then we have the dreaded dependency array from hell. So honestly, I think that we could do better than this, and I will say that when you manage state like this, does it work? Yes, it's going to work. But it becomes a lot harder to understand your actual application logic, and so that's why a bunch of state management libraries have shown up, such as XState, Redux, which Mark talked about earlier today, and other ones like Jotai, Recoil, MobX, and some others that I didn't list here, because there's only so much space on the screen. So if you were to do it today, you would probably use something like Zustand or Redux.
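To make that concrete, here's a rough sketch of the kind of useState-heavy component being described; the to-do shape and the particular effect are hypothetical stand-ins, not the slide's actual code.

```ts
import { useEffect, useState } from 'react';

interface Todo {
  id: string;
  title: string;
  completed: boolean;
}

function TodoApp() {
  // A useState for every piece of data...
  const [todos, setTodos] = useState<Todo[]>([]);
  const [filter, setFilter] = useState<'all' | 'active' | 'completed'>('all');
  const [selectedId, setSelectedId] = useState<string | null>(null);
  const [draft, setDraft] = useState('');

  // ...and a useEffect (with its dependency array) to keep them in sync.
  useEffect(() => {
    if (selectedId && !todos.some((todo) => todo.id === selectedId)) {
      setSelectedId(null);
    }
  }, [todos, selectedId]);

  // Handlers and rendering omitted: every update has to thread through
  // the right setters in the right order, so the logic gets hard to follow.
  return null;
}
```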
2. XState Store for State Management
XState Store is a new library that simplifies state management. It provides an easy way to update data and retrieve it using the useSelector hook. The library follows the core concepts of states, events, and transitions.
This is actually XState Store. It's a new library that I released. It's a tiny library where all you do is give the data for your store, and then in the second argument you give a way to actually update that data, such as add to-do, update to-do, select to-do, et cetera. And then you can use it inside your app by just grabbing the useSelector hook and selecting from that store, and then you just send events. The API is extremely simple, and it's really meant to be a stepping stone to XState, but also a really simple way to manage your state.
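A minimal sketch of that API, based on the early createStore(context, transitions) signature of @xstate/store (the exact shape may differ between versions):

```ts
import { createStore } from '@xstate/store';
import { useSelector } from '@xstate/store/react';

const store = createStore(
  // First argument: the data for your store.
  { todos: [] as { id: string; title: string; completed: boolean }[] },
  // Second argument: how to update that data in response to events.
  {
    'todo.add': (context, event: { title: string }) => ({
      todos: [
        ...context.todos,
        { id: crypto.randomUUID(), title: event.title, completed: false },
      ],
    }),
    'todo.toggle': (context, event: { id: string }) => ({
      todos: context.todos.map((todo) =>
        todo.id === event.id ? { ...todo, completed: !todo.completed } : todo
      ),
    }),
  }
);

function TodoList() {
  // Select just the slice of state this component needs...
  const todos = useSelector(store, (state) => state.context.todos);
  // ...and send events to update it:
  // store.send({ type: 'todo.add', title: 'Buy coffee' });
  return null; // rendering omitted
}
```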
Because no matter which state management library you use, you're essentially using three concepts — states, events, and transitions. It's the same for all libraries. Some may blur the lines between what is an event, what is an action or a method, but yeah, that's state management in a nutshell.
3. AI and Old School AI with XState
The AI SDK allows us to make applications intelligent. We can use generative AI to generate content, stream content, and create chatbots. Additionally, state machines, implemented using the XState library, provide a way to represent old school AI, like the behavior of the ghosts in Pac-Man. Users are increasingly demanding intelligent apps that enhance their experience. By using the AI SDK and tool calling, we can go beyond generative examples and execute specific tasks.
So now let's talk about AI, actually making our applications intelligent. When we think about adding AI, we might think, right now at least, about the generative part. So this is using the Vercel AI SDK, and it's actually a really, really good SDK if you don't want to talk to OpenAI or other model providers directly.
With the AI SDK, this is an example on the back end where you would have a prompt, you would pass that prompt into streamText, use whichever model you want, and just return that as a streaming text response. Now on the UI side, there is a convenient hook called useCompletion from the AI SDK that gives you a bunch of things that you can pass into a form using that API URL, and the result, if we're making our apps more intelligent, is a way to do things like this: just generate content, stream content, maybe have a chatbot.
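Sketched out, the two halves look roughly like this (the /api/completion route path is an assumption; streamText, useCompletion, and the provider helper are AI SDK exports, though the exact import paths and response helper vary by SDK version):

```ts
// app/api/completion/route.ts
import { streamText } from 'ai';
import { openai } from '@ai-sdk/openai';

export async function POST(req: Request) {
  const { prompt } = await req.json();

  const result = await streamText({
    model: openai('gpt-4-turbo'),
    prompt,
  });

  // The exact response helper varies by SDK version
  // (toAIStreamResponse in older releases, toTextStreamResponse later).
  return result.toTextStreamResponse();
}
```

And on the UI side:

```tsx
'use client';
// Import path is 'ai/react' in older SDK releases, '@ai-sdk/react' later.
import { useCompletion } from 'ai/react';

export function CompletionForm() {
  const { completion, input, handleInputChange, handleSubmit } = useCompletion({
    api: '/api/completion', // hypothetical route path
  });

  return (
    <form onSubmit={handleSubmit}>
      <input value={input} onChange={handleInputChange} />
      <p>{completion}</p>
    </form>
  );
}
```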
But I actually really want to talk about really, really, really old AI. Because when I made Stately.ai, when I thought of AI, I was thinking of the old school AI, the enemy video game behavior, like how the ghosts in Pac-Man behave, which, I realize, is a little bit different for each ghost. And so you can represent this using a state machine. State machines are one of the oldest representations of AI, and we see here that this is how a ghost behaves, more or less, in Pac-Man. Now, I created XState. First of all, show of hands: how many of you have used XState? Or know of it? That's a lot of hands. I love it.
So XState, if you don't know, is a state management library for creating state machines. You pass in an initial state, and then your states tell you what can happen in each state. And this is going to become important for some of the demos I'm going to show later. So in this case, if you're in the wandering maze state, two things can happen: you could lose Pac-Man, or Pac-Man can eat the power pill, and depending on what happens, you have a different objective, either chase Pac-Man or run away from Pac-Man. And then you create an actor, and this actor is something that you can subscribe to and send events to. You can listen to the state changes, start the actor, and then send events, and that's the XState API in a nutshell.
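Here's a sketch of that ghost machine in XState v5; the state and event names are illustrative guesses at what the slide showed.

```ts
import { createMachine, createActor } from 'xstate';

const ghostMachine = createMachine({
  id: 'ghost',
  initial: 'wanderingMaze',
  states: {
    // Each state declares exactly what can happen while you're in it.
    wanderingMaze: {
      on: {
        'pacman.spotted': { target: 'chasingPacman' },
        'powerPill.eaten': { target: 'runningAway' },
      },
    },
    chasingPacman: {
      on: {
        'pacman.lost': { target: 'wanderingMaze' },
        'powerPill.eaten': { target: 'runningAway' },
      },
    },
    runningAway: {
      on: {
        'powerPill.wornOff': { target: 'wanderingMaze' },
      },
    },
  },
});

// Create an actor, subscribe to state changes, start it, and send events.
const ghost = createActor(ghostMachine);
ghost.subscribe((snapshot) => console.log(snapshot.value));
ghost.start();
ghost.send({ type: 'powerPill.eaten' }); // logs 'runningAway'
```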
Now I don't have too much time, but if you want to talk to me about XState, just say "state machine" three times and I'm going to randomly appear next to you. That's how you can find me. But like I said, UIs and applications are evolving now, especially because of AI. This is Make Real by tldraw, which is one of the things that really stretches the limits of what we can do with user interfaces and even generative AI. And really, we're discovering that users want intelligent apps. Now, they might not want those chatbots where it's like, all right, I don't care, just get me to a human, but they want AI to enhance their experience so that they can just do things faster. This is an example from Ramp, where it's just a proof of concept, and there are actually real extensions that intelligently guide users and act as that intelligent assistant. So what we're going to do today is use the AI SDK, and in order to go beyond just the generative example, we're going to be using what's called tool calling. Just to show you what this is: we have this generateText function, you pass in the model, such as GPT-4 Turbo, and you pass in a bunch of tools. With those tools, when you give it a prompt, the model can decide, you know what, I'm going to actually execute a tool, because I probably need to do some certain task.
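As a sketch of that tool-calling flow with the AI SDK (the checkWeather tool, its parameters, and the weather lookup are hypothetical; generateText and tool are real AI SDK exports):

```ts
import { generateText, tool } from 'ai';
import { openai } from '@ai-sdk/openai';
import { z } from 'zod';

const result = await generateText({
  model: openai('gpt-4-turbo'),
  tools: {
    checkWeather: tool({
      description: 'Check the weather forecast for a given city',
      parameters: z.object({
        city: z.string().describe('The city to check the weather for'),
      }),
      // The execute function runs only when the model decides this tool
      // is needed for the task.
      execute: async ({ city }) => {
        // Replace with a real weather API call.
        return { city, forecast: 'rain', chance: 0.9 };
      },
    }),
  },
  prompt: 'Is it going to rain in Amsterdam today?',
});

// The tool calls and their results are available on the response.
console.log(result.toolResults);
```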
4. Combining State Management and AI
LLMs are non-deterministic and confidently wrong. To address this, we can use LLMs to navigate a deterministic and predictable state machine. This is the concept behind @statelyai/agent, a package that combines state management and AI. An agent performs tasks to achieve a goal, observes the environment, and receives feedback to improve over time.
And then you can pass in an execute function which will actually be called. So for example, if I ask it, is it going to rain in Amsterdam today, it might know that we should check the weather, and it's definitely going to rain in Amsterdam today. But LLMs are not enough, as we've discovered. You know, you ask Google, and it's going to tell you that you should eat at least one small rock a day. Please don't do that. It's not an American thing, you might think so, but it's just wrong, okay? So LLMs are non-deterministic, you can't really easily explain what's happening inside them, and they're confidently wrong, which is a huge problem. And so I had an idea: what if we use LLMs to navigate a state machine, which is deterministic and predictable, and use that to decide which events to send in order to achieve a goal? So this is how we combine state management and AI. I've been working on a package for a few months now called @statelyai/agent, and right now it's in beta. I'm still working on the docs, so don't visit this page yet, until Monday or something. But I wanted to give you a preview of basically what's happening here. So first of all, what's an agent? An agent is something that performs tasks in order to accomplish a goal, but it does other things too. It observes what's happening so that it can learn the environment it's working in, and it also receives feedback so it can improve over time.
5. Creating an Agent with Vercel AI SDK
Using the Vercel AI SDK and Zod, we create an agent that can decide what to do based on a goal and a state machine. We ask the agent for the next event, execute it, and see the results in the to-do app.
So this is what we're going to do, and this is sort of the idea that I had. Using this API, we create an agent, and we pass in the model. This is still using the OpenAI, or sorry, the Vercel AI SDK. And then we give it a bunch of events that it can send, with descriptions of what those events do. And this is using Zod, and Zod is really useful for this.
And then the agent can decide what to do next. So you give it a goal, such as "create a new to-do for me", and then you give it the state machine so that it knows, depending on which state it's in, what are the possible things that can happen. And that's what's so useful about a state machine. So we're using the exact same state machine here, the one that controls the to-do app, and we're basically asking the agent: what should the next event be, given this state and this goal? And it's going to give us back an event, and we're going to execute that event.
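Putting those pieces together, here's a sketch of that flow based on the @statelyai/agent beta API shown in the talk, so exact names may have changed; todoMachine and todoActor stand for the machine and running actor of the to-do app.

```ts
import { createAgent } from '@statelyai/agent';
import { openai } from '@ai-sdk/openai';
import { z } from 'zod';

const agent = createAgent({
  name: 'todo-assistant',
  model: openai('gpt-4-turbo'),
  // Describe every event the agent is allowed to cause, with Zod.
  events: {
    'todo.add': z
      .object({ title: z.string().describe('The title of the to-do') })
      .describe('Add a new to-do'),
    'todo.toggle': z
      .object({ id: z.string() })
      .describe('Toggle whether a to-do is completed'),
  },
});

// Ask the agent for the next event, given the machine, the current state,
// and a goal, then execute that event on the running actor.
const decision = await agent.decide({
  machine: todoMachine, // the same machine that drives the to-do app
  state: todoActor.getSnapshot(),
  goal: 'Create a new to-do for me',
});

if (decision?.nextEvent) {
  todoActor.send(decision.nextEvent);
}
```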
So let's try it out. And hopefully this demo works. So here's my to-do app. I have finished React Summit. I could mark that as completed. I could add a new to-do. Things like that. But if my Wi-Fi is working, I could also do things like say... I am tired. And it will try to guess what it should do. Wow. It just... Okay. That's AI for you. So... I need coffee, and so does this app, apparently. So what it's going to do is it's going to tell me to get coffee. I need coffee to stay awake and focused. I can say things like... I already got coffee. And then it will know exactly what to do. Which is to finish that to-do.
6. Advancing AI Capabilities with State Machines
Using the same state machine, we go from generative output to goal-oriented output, allowing for advanced applications like email flows and AI playing games. Witness a quick demo with audience participation.
And I don't need to get coffee. I know. What should I do? Shout something out. I don't know. I need to try a stroopwafel. And we'll just add that. And so, yeah. This slide is in case the demo failed. But basically, that is using the same state machine to cause events in the actual application, and we're using the back end to determine what the next event should be, so that we can send it to the state machine on the front end. So it's sort of like isomorphic code, if you remember that word.
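A sketch of that isomorphic split (the /api/next-event route and payload shape are hypothetical; agent, todoMachine, and todoActor are from the earlier sketches):

```ts
// Server route (hypothetical path): decide the next event on the back end.
export async function POST(req: Request) {
  const { goal, state } = await req.json();
  const decision = await agent.decide({
    machine: todoMachine, // the same machine definition runs on both sides
    state,
    goal,
  });
  return Response.json({ nextEvent: decision?.nextEvent ?? null });
}

// Client: ask the server what to do, then run the event locally.
async function runGoal(goal: string) {
  const res = await fetch('/api/next-event', {
    method: 'POST',
    body: JSON.stringify({ goal, state: todoActor.getPersistedSnapshot() }),
  });
  const { nextEvent } = await res.json();
  if (nextEvent) {
    todoActor.send(nextEvent);
  }
}
```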
So really what we're doing here is going from generative output to agentic, or goal-oriented, output, where there can be many steps in between. And that's what using state machines and AI really allows us to do. So we can do advanced things, like an email flow, where we want AI to respond to an email and check: are we missing any information? If so, we should ask the user for that information. And then we should make a draft, and then iterate with the user on that draft to see if it's correct. Or this one, where AI is actually playing a game. And so now this is beyond generative. This is something where the AI actually has to take proper actions in order to win the game.
I'll show you this quick demo. And I want audience participation. So someone... If you can see this, someone yell out a word. Any word. Potato. Okay. So...
7. Reinforcement Learning and Agent Observation
The agent tries to guess a word by reasoning and learning from an environment, using reinforcement learning techniques. This approach reduces the number of trials needed to accomplish a goal. The agent observes the environment, selects actions, and receives rewards. By updating its policy, the agent learns to take actions that lead to positive outcomes.
We're gonna enter a word, and the agent will try to guess it. So it's gonna be potato. And it's gonna guess one letter at a time. And hopefully, it guesses the word. So it's actually giving reasoning why it's guessing each letter. And it has history, so it understands what's happened before, which letters are matching, and it guessed sweet potato. I have no idea why. But it was close. It was close. Yeah. All right. So that's in case it failed.
So really, we're getting into reinforcement learning too. Because again, we're not just generating content. The agent is learning from an environment, going through trial and error, and figuring out how to take actions inside that environment to maximize the future reward and reach a goal. There are many examples of this, like this robot that's trying to play soccer, even though the human is just being really mean to the robot. Don't do this. The robots will attack one day. And there are also papers showing that by combining reinforcement learning with, in this case, actual game instructions, agents perform a lot better at games. So rather than needing thousands and thousands of trials, you can reduce the number of trials an agent needs in order to accomplish a goal. In other words: RTFM to learn faster.
But basically, an agent is observing an environment, it's selecting an action based on what it observes, and the environment reacts to whatever action it takes. The agent then gets a reward, or maybe something else, and if it does get a reward, it's going to update its policy and say, hey, that was a really good thing that we did. Otherwise, it's just going to circle back, select another action, and basically keep trying to find its way to the reward. So for example, if we're playing Pac-Man, the environment would be the game, and the game could be in different modes, such as normal and scatter mode. The agent can do different things, like eat pellets, run away from the ghosts, or actually chase the ghosts, depending on which mode the game is in. And the policy basically lets you know whether the action you took was good. So if an agent goes up, it might get negative 100 points, or if it goes down, that might have been a good action, so it gets one point.
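That observe, act, reward, update-policy loop is easy to see in a toy Q-learning sketch (a generic illustration in TypeScript, not the talk's code):

```ts
type State = string;
type Action = string;

const qTable = new Map<string, number>(); // Q-values keyed by "state|action"
const alpha = 0.1; // learning rate
const gamma = 0.9; // discount factor
const epsilon = 0.1; // exploration rate

function q(state: State, action: Action): number {
  return qTable.get(`${state}|${action}`) ?? 0;
}

function selectAction(state: State, actions: Action[]): Action {
  // Mostly exploit the best-known action, sometimes explore a random one.
  if (Math.random() < epsilon) {
    return actions[Math.floor(Math.random() * actions.length)];
  }
  return actions.reduce((best, a) => (q(state, a) > q(state, best) ? a : best));
}

function updatePolicy(
  state: State,
  action: Action,
  reward: number, // e.g. -100 for going up, +1 for going down
  nextState: State,
  nextActions: Action[]
) {
  // "That was a good (or bad) thing we did": nudge the value estimate
  // toward the observed reward plus the best estimated future value.
  const bestNext = Math.max(0, ...nextActions.map((a) => q(nextState, a)));
  const updated =
    q(state, action) + alpha * (reward + gamma * bestNext - q(state, action));
  qTable.set(`${state}|${action}`, updated);
}
```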
8. Intelligent Agents and State Machines
The agent's policy determines the goodness of actions, and credit assignment helps determine which actions are good or bad. Intelligent agents incorporate observations, messages, plans, and feedback. Creating intelligent agents involves using state machines, reinforcement learning, and large language models. State machines can be represented as graphs, and graph algorithms can be used to determine transitions between states. Shortest paths can be used to find optimal paths based on weights, such as time or reward.
And credit assignment is how we determine which actions are good and bad.
But basically, Stately Agent incorporates a lot more than just a prompt. It's also incorporating observations that the agent makes, messages that occur between the agent and the human, plans, so how it decides what to do next, and also feedback. So you can actually give the agent a signal that it did well or it did badly. So my plan for creating intelligent agents is using three things together: state machines, reinforcement learning, and large language models.
And that's the idea: with state machines, you have this thing called a graph. A state machine creates a graph where each of the nodes is a state, and each of the edges is a transition between states based on events. And because the state machine is a graph (imagine your Redux reducers or your Zustand objects as a graph showing you: if I'm in this state and this action happens, now I'm in this state), we can actually use graph algorithms to determine how to get from one state to another state. We can do things like shortest paths to figure that out, and we can provide events so that the traversal tries different things to move through the graph. And the shortest path might not always be the literal shortest path, because paths can have weights, and these could be things like the time it takes to get from one state to another, or how much reward you get. Again, this is based on the weights of the edges.
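For instance, @xstate/graph can compute shortest paths over a machine's graph; here's a sketch reusing the ghost machine from earlier (the return shape differs slightly between @xstate/graph versions; this follows the XState v5-era API):

```ts
import { getShortestPaths } from '@xstate/graph';

const paths = getShortestPaths(ghostMachine, {
  // The events the traversal is allowed to try on each state.
  events: [
    { type: 'pacman.spotted' },
    { type: 'powerPill.eaten' },
    { type: 'pacman.lost' },
    { type: 'powerPill.wornOff' },
  ],
});

// Each path records the steps (state + event) from the initial state to
// one reachable state.
for (const path of paths) {
  console.log(
    path.state.value,
    path.steps.map((step) => step.event.type)
  );
}
```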
9. Traversing State Machines with Graph Algorithms
We can use graph algorithms, like shortest paths, to traverse state machines. By determining the end state and sending events, we can achieve goals. State machines provide predictable app logic and can enhance state management without code changes.
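Sketched in code, that goal-directed traversal might look like this; espressoMachine, espressoActor, and the 'cappuccinoReady' state are hypothetical stand-ins for the demo machine described below.

```ts
import { getShortestPaths } from '@xstate/graph';

// Compute the state space once: shortest paths from the initial state to
// every reachable state, given the events the traversal may try.
const paths = getShortestPaths(espressoMachine, {
  events: [
    { type: 'grind' },
    { type: 'heatWater' },
    { type: 'tamp' },
    { type: 'runEspresso' },
    { type: 'steamMilk' },
  ],
});

// Pick the path whose end state matches the goal (e.g. a cappuccino),
// then replay its events on the live actor.
const targetPath = paths.find((path) => path.state.matches('cappuccinoReady'));

if (targetPath) {
  for (const step of targetPath.steps) {
    if (step.event.type !== 'xstate.init') {
      espressoActor.send(step.event); // drive the real machine along the path
    }
  }
}
```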
So what I'm going to do, while I have two minutes left: we have an espresso machine. I was inspired by all of the espresso machines I saw outside, but this is a special espresso machine. Imagine a very basic espresso machine where you don't have any buttons for "make me a cappuccino" or a macchiato or anything like that; it just knows how to pull shots and to add milk or add water or add cream, things like that. And it's all based on a state machine, so we need to do things in order. So this is just like the to-do app example, except the agent now has to operate through many steps in order to achieve a goal.
And so the way we're going to do that is: first we use our shortest-path algorithms to get the state space, basically that huge graph of states that shows how to get from the initial state to every other state. Then we determine what end state we want to go to, figure out the shortest path, and actually send events to traverse that path. So let's try it out and hope it works.

Okay, so here's my espresso machine. I can manually operate it: I can press grind, done grinding, I can heat the water, it's done heating, and then I can tamp the espresso grounds, run the espresso machine, and then you see we have an espresso. But instead, and just to show you over here, this has no knowledge of, for example, a cappuccino. I can search here: no results found. And again, this is all based on a big, big state machine right over here. So we're essentially trying to find the shortest path there. All right, so if I say I want a cappuccino, it will figure it out, and it will do all of the events that lead to that state, and hopefully we get a cappuccino. So it's going to make the espresso, and now it's steaming milk, which is a good sign. It's combining that, and it made the cappuccino. So now, and I've actually never tried this, if I say I want a ridiculous super coffee drink, let's just see what happens. I don't know, we're having fun, right? So it's grinding beans. Imagine just talking to your espresso machine and seeing what happens when you ask it that. So it made espresso, it's combining water, it's also steaming milk, interesting. So it's just trying to do a bunch of different things, and cream, and chocolate. All right. Great.

So we learned, or at least I learned, three things from this. First of all, LLMs are unpredictable; state machines are predictable. So the idea of declarative logic, LLMs, and reinforcement learning, combining all of that together, I really think can usher in the future of intelligent state management in our apps, and basically make our apps do more than what they currently do, even without changing the code substantially. So in my opinion, state machines are a great way to think about app logic, and it doesn't matter if you're using XState, or Zustand, or Redux, or any other library. It's just a fantastic way of thinking, and it helps you see the possibilities of what you can do with state. So with that, thank you very much, React Summit.
AI State Machines and User Experience
State agents have the ability to see the entire state machine and anticipate events and transitions. Writing tests for LLMs requires a mock layer to handle unpredictability. State machines naturally provide guardrails and prevent transitions to unrelated states. AI state machines enable a better invisible user experience by automating steps based on user input. The power of function calling and tool using goes beyond chat-based interactions and allows for state changes.
We have some questions coming in already, so I'm going to ask: can an agent see the entire state machine, so it knows what events and transitions are going to be available if it takes an action? Can it look ahead for this kind of stuff? Yeah, so that's actually exactly how state agents work. Basically, because you have the entire state machine as a declarative definition, you can just pass that and say, here's what could happen next. So it's like having Google Maps: you know exactly what will happen. Nice, easy, brilliant. That's what I like to hear.
Okay, this one's slightly off the top of what you said, I guess, but how would you approach writing tests for intelligent tools like this? Oh, this is tough, because of course LLMs are unpredictable, so when you write tests, you really have to write that mock layer and say, if I give it this prompt, then I expect some sort of response like this. And because we're using tool calling, we can actually give a declarative JSON object saying, in an ideal world, this is what will happen. But when you're actually building the state machine, you do have to provide those safeguards for if the agent goes off the rails, let's correct that.

How do you provide the guardrails? So state machines naturally have those guardrails anyway. This is great, because if you're in a state, there is no possible way you can take a transition that is not attached to that state. It's like teleporting to another road: you just can't do it.

That leads on, I think, to another question we had here, which was: is there any part of the Stately API that handles reverting state when the model makes a mistake? So with state machines, you have to model everything, so it has to... Right, you can't make a mistake. Exactly. From one state to another, the only transitions available are the only ones it can take. Right. Perfect.

What are your thoughts about how UX design is going to change with AI state machines? The idea being that a user now has an input field, but they're using it in what is effectively a very different way to the way we've used input fields up until now. So before LLMs, users were doing 100% of the actions. Imagine filling out a tax form or something like that, or maybe that's another United States thing, I don't know. But you're doing every step manually. Instead, with LLMs and state machines, you can essentially say, I want to do this, and the agent will just fill in the blanks. It will do all the steps for you to get there. So it becomes a much better invisible user experience, and that's the key: invisible user experience. I like that. And I do think there's a lot that we've seen from these LLMs that is just chat-based, but I actually really appreciated the demo there, which shows the power of function calling or tool use. That's the sort of stuff we as developers should be playing with, because it is more than just a chat box; it is actually, yeah, changing states. Exactly.
Storing Knowledge in State Agents
State agents have both short-term memory within the same session and long-term memory stored in a database. They remember decisions, observations, feedback, and plans made in the long-term memory.
Awesome. When an agent learns, where does that knowledge actually live? Would that be on the server? Or can it be persisted somehow? Yeah, so just like with humans, the way that I modeled Stately Agent is you have short-term memory and long-term memory. Short-term is within the same session; it's in browser memory, local storage, wherever. And long-term is in a database, like a vector database, and then you can retrieve that. And so, just like the four things I showed, the agent is remembering the decisions it made, the observations, the feedback it got, and the plans that it made. All of those things are in long-term memory. So that's how it learns. Nice. Awesome.
Building State Machines and Adding Context to RL
Building a state machine in ActionScript may not be possible. XState is recommended for complex logic, while Recoil is suitable for simple state management. There is some similarity between the current approach and goal-oriented action planning. Adding context to RL can be complex but involves incorporating text-based feedback to improve actions based on past experiences.
There is a question that says, can you help me build a state machine in ActionScript? But I feel like that's probably not going to be able to be done on stage. I said state machine, not time machine. We can't go back.
A question that has jumped to the top just asks: why XState over Recoil? Why? I mean, why Recoil over Jotai? Use Jotai instead if you want that atomic state. But XState is for a different purpose. It's for when you have really complex logic. I wouldn't say you should use XState for everything. It's sort of like saying you should use an airplane for everything: use an airplane to go long distances, cars or bikes to go shorter distances. Same thing with XState. If you have simple state, Recoil works fine, Zustand, et cetera. But XState is for more complex logic. The classic "it depends" answer. It depends. It always does. It always does.
Is what you're doing similar to goal-oriented action planning? Yeah, I think there's a lot of overlap, and that's definitely a good topic of research. GOAP. Awesome.
Okay, so how complicated, how complex would it be to add context to RL? Like, this was good for X reason, but the same could be bad for a different reason. Yeah, reinforcement learning is already built on the notion of having rewards. So if you're playing games, it's really easy to assign a numerical value. But when you're doing something more complex, like working in an app, the feedback can actually be text-based. And so instead of it being number-based, you're basically adding that feedback to the context. So you're saying: try to do this action; by the way, when you did that action in the past, this is what worked and this is what didn't. It's the same way that we learn as humans. We're just retrieving that knowledge. That's quite mind-blowing.
Interacting with LLMs and UI Performance
Talking back to the LLM and providing feedback is an important feature. To use XState Agent, you need to bring your own OpenAI key or use local models. State machines are useful for multi-step processes and complex interactions. The impact of LLM responses on UI performance is notable due to their non-deterministic nature.
It's when you can talk back to the LLM and just say, good job. This is why. Yeah. And it knows. It answers. It carries on. Brilliant.
We're going to skip the why, whatever, what. Dean: do you need an OpenAI key to use XState Agent? It's bring-your-own. So basically this wraps the Vercel AI SDK, and so you would have to provide your own OpenAI key, or your Anthropic key, et cetera, whichever model you're using. Or you could use local models too. I believe people have made that work with the Vercel AI SDK, like Ollama. Yeah. Like Mistral, et cetera. Yeah. Easy.
What example use cases do you have in mind for something like Stately? Well, I'll answer this for state machines. State machines are really great for multi-step processes, whether they're in the back end or the front end. You could have a multi-step form and also complex interactions. If you're building a Canvas API and you have to manage selecting and dragging and things like that, the state gets complicated. State machines help you reason through that a lot better. Nice.
So how do you find, ah, I was just thinking about something similar to this next question. How do you find this impacts UI performance, considering LLM responses can take multiple seconds? In my mind, I was thinking, did you build the Pac-Man version of this? I'm like, you can't ask an LLM to do that. I really should. So this is a really interesting question, too. Because LLMs, it's like, we're doing things slower, we're doing things non-deterministically.
LLMs, State Machines, and Future Experiments
LLMs are slower and non-deterministic compared to normal UIs. Reinforcement learning allows caching actions for faster response. State machines provide necessary constraints for LLMs to prevent going off the rails. Building AI models to automatically identify potential state machines is an interesting future experiment.
Because with LLMs, we're doing things slower, and we're doing things non-deterministically. And so, objectively, it's actually worse than normal UIs, yet VCs will still give you millions of dollars because you have AI in your app. But with reinforcement learning, you can actually cache actions. So you can say, I know this action is good nine times out of ten, so whenever the user does something very similar, or asks to do something very similar, I'm just going to do that action. It's sort of like having muscle memory, right? We don't need to think all the time. Right. We're just treating these agents a bit more like a person, an actual person that's going to learn and respond to stuff it's seen before. But yeah. Cool.
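As a toy sketch of that caching idea (the similarity check here is a naive placeholder, and agent, todoMachine, and todoActor are the hypothetical pieces from the earlier sketches):

```ts
const decisionCache = new Map<
  string,
  { event: { type: string }; score: number }
>();

async function decideWithCache(goal: string) {
  const key = goal.trim().toLowerCase(); // naive stand-in for similarity
  const cached = decisionCache.get(key);

  // "Nine times out of ten this action is good": skip the LLM entirely.
  if (cached && cached.score >= 0.9) {
    return cached.event;
  }

  const decision = await agent.decide({
    machine: todoMachine,
    state: todoActor.getSnapshot(),
    goal,
  });

  if (decision?.nextEvent) {
    // Elsewhere, positive feedback would bump this score toward 1.
    decisionCache.set(key, { event: decision.nextEvent, score: 0 });
  }
  return decision?.nextEvent ?? null;
}
```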
In a sense, state machines are limiting the capability and creativity of LLMs. Do you think that defining pathways between states is future-oriented enough? I think that LLMs do need that constraint. They need those guardrails, because one of the biggest problems with LLMs is that they do go off the rails. And people have actually tried doing architectures where it's like, hey, use chain of thought to have the agent do something, but then it goes in a loop or does something unexpected. Yes. We're living in a world of non-determinism with these. So getting it to be as deterministic as possible is going to make your application work in a way that users can deal with. Yeah. Too much talking.
What about: could you build an AI model that checks the code base and reports back the states that could potentially be a state machine? That is something that I really want to experiment with. It's called state machine synthesis, where basically you map out all of the states in your app, and it's just like an agent going through, clicking buttons, and doing that RL loop where you're observing the environment and trying different things. And essentially you've built a massive graph of what you can do, and that's your state machine right there. So you don't necessarily have to specify it. But that's for a future conference talk. Yeah. Right. Look forward to that. This time next year? Maybe. I won't make you promise anything. That's absolutely awesome, David. Thank you so much for that. Yeah. Super interesting talk. Everybody give it up again for David. Thank you. Thank you.