So we can ask it something like, "Should native apps be server rendered?" and it will stream back something about how native apps are typically not server rendered, and how that's more of a web thing, because all of this is pretty new.
Now, to create the server action, we have another file which marks a function with the 'use server' directive. That function gets extracted into the server bundle, so it never ships to the client; instead, it's turned into an endpoint the client can call, which we can version and make platform-specific. And inside it we can render native text, React Native <Text>, inside a recursive component that iterates the stream.
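Here's a rough sketch of what that file could look like. The file path and the streamTokens helper are assumptions, not the talk's actual code; the shape of the recursive component is the interesting part:

```tsx
// app/actions.tsx (hypothetical path)
'use server';

import { Suspense } from 'react';
import { Text } from 'react-native';
// Hypothetical LLM client that yields tokens as an async iterator.
import { streamTokens } from '../lib/llm';

// Recursive server component: await the next token, show everything
// received so far as the Suspense fallback, and recurse. React
// streams each update down as the iterator produces tokens.
async function StreamingText({
  iterator,
  soFar = '',
}: {
  iterator: AsyncIterator<string>;
  soFar?: string;
}) {
  const { value, done } = await iterator.next();
  if (done) return <Text>{soFar}</Text>;
  const next = soFar + value;
  return (
    <Suspense fallback={<Text>{next}</Text>}>
      <StreamingText iterator={iterator} soFar={next} />
    </Suspense>
  );
}

// The exported server action: it only exists in the server bundle,
// and the bundler exposes it to the client as a callable endpoint.
export async function ask(prompt: string) {
  return (
    <Suspense fallback={<Text>…</Text>}>
      <StreamingText iterator={streamTokens(prompt)} />
    </Suspense>
  );
}
```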
One really novel aspect of these server actions is that they're bundled multiple times, once per platform. If an iOS device pings the endpoint, it gets the iOS version of the server bundle; if an Android device pings it, it gets the Android version. That's how it knows which text to use for iOS, Android, or web.
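As a sketch of the idea: platform forks resolve at bundle time, either through Metro's platform-specific extensions (Greeting.ios.tsx, Greeting.android.tsx, and so on) or inline with Platform.select. The component name here is made up:

```tsx
import { Platform, Text } from 'react-native';

// Resolved per platform at bundle time: the iOS server bundle bakes
// in the ios branch, the Android bundle the android branch, etc.
export function Greeting() {
  return (
    <Text>
      {Platform.select({
        ios: 'Hello from the iOS server bundle',
        android: 'Hello from the Android server bundle',
        default: 'Hello from the web server bundle',
      })}
    </Text>
  );
}
```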
Now, when a client makes a request, an RSC payload is sent back, with Suspense used to keep the stream open. It contains the static representation of our root JSX, a URL to the client code this RSC needs, because we bundle-split the code so it can be fetched dynamically, and the currently rendered text element. As the stream progresses, more content is rendered and React updates the tree with the new visuals: so if it said "Once", and then "Once upon a time", the text element updates in place. Finally, the stream completes and React closes the connection.
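On the client side, the call could look something like this sketch; the file name and prompt are placeholders, and ask is the hypothetical action from earlier:

```tsx
// app/chat.tsx (hypothetical) -- the client half of the exchange.
'use client';

import { Suspense, useState, type ReactNode } from 'react';
import { Button, View } from 'react-native';
import { ask } from './actions';

export function Chat() {
  const [answer, setAnswer] = useState<ReactNode>(null);
  return (
    <View>
      <Button
        title="Ask"
        onPress={async () => {
          // Calling the imported action hits the generated endpoint;
          // it resolves to JSX whose suspended parts keep streaming in.
          setAnswer(await ask('Should native apps be server rendered?'));
        }}
      />
      <Suspense fallback={null}>{answer}</Suspense>
    </View>
  );
}
```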
And this is just one part of a much larger puzzle, but it's super powerful, because it means we can run any arbitrary JavaScript, so long as the native bindings are already on our target client. It's kind of like creating custom HTML live, by using 'use client' and 'use server' directives everywhere. And you notice there is no native code going over the wire here. It's just JSON, and it's newline-delimited, so it streams in. It's pretty sweet. And this is just the basics; we can scale this concept up to any native UI.
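To give a feel for it, here's a loose illustration of that newline-delimited payload. This is not the exact Flight wire format, which is a React implementation detail that changes between versions; it just shows the kinds of rows described above, a reference to bundle-split client code followed by serialized native elements that update as the stream progresses:

```
1:I["/client/Chat-abc123.js",["chunk-abc123"],"Chat"]
0:["$","Text",null,{"children":"Once"}]
0:["$","Text",null,{"children":"Once upon a time"}]
```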
So let's do something a bit more exciting. When React Native was first demoed, they showed a truly native movies app, so I thought it would be fun to pull movies into this prototype somehow. For inspiration, I looked at what the state-of-the-art AI apps had to offer. In ChatGPT with GPT-4, when you ask for movies, it renders a bulleted list of Markdown after pinging Bing. I checked with GPT-4o: same deal. Gemini does the same, but with some inline images and some live links. So they're all basically just sending down Markdown and formatting it with a Markdown renderer. That's their transport method.
And in this prototype, we can do a lot better. We can match any query that asks about movies, ping a movie database, serve an interactive native movie card, and stream it down to the client with things like UI menus, any native view that's available on that device. We can then recurse back with more information: we have a server action which fetches the cast and crew members for a movie. And we can even create calendar events, where a card pings for the calendar permission and opens up the calendar prompt.
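A sketch of what that first hop could look like, assuming TMDB as the movie database and a hypothetical MovieCard client component that owns the interactive bits (menus, the calendar prompt):

```tsx
// app/movie-actions.tsx (hypothetical path)
'use server';

// MovieCard would be a 'use client' component so it can own the
// interactive pieces: context menus, the calendar permission prompt.
import { MovieCard } from '../components/MovieCard';

const TMDB = 'https://api.themoviedb.org/3';

export async function searchMovies(query: string) {
  // Assumes a TMDB read token in the server environment; the URL
  // shape matches TMDB's /search/movie endpoint.
  const res = await fetch(
    `${TMDB}/search/movie?query=${encodeURIComponent(query)}`,
    { headers: { Authorization: `Bearer ${process.env.TMDB_TOKEN}` } }
  );
  const { results } = (await res.json()) as { results: { id: number }[] };
  // Stream interactive native cards down to the client; each card can
  // call further actions (cast and crew, calendar events) on demand.
  return results
    .slice(0, 3)
    .map((movie) => <MovieCard key={movie.id} movie={movie} />);
}
```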