Video Summary and Transcription
This talk explores the experience of shipping React Server Components in production for over a year in UploadThing, a Next.js app built on the App Router. It covers how the app is deployed, how nested layouts and data fetching work, and how centralizing server code improves maintainability and performance. It then digs into latency handling, the trade-offs of running on Edge versus Lambda, and the promise of partial pre-rendering, before turning to the pain points: webpack dev-server performance, hydration, and building full-stack packages. Overall, the talk weighs the benefits and challenges of working with Server Components in Next.js applications.
1. Introduction to UploadThing and Server Components
Hi, I'm Theo, an engineer who shipped UploadThing. Let's dive into the experience of shipping Server Components in production for over a year. We learned a lot, both the great and the terrible. Shout out to React Summit for the opportunity.
Hi, I'm Theo. You probably know me from YouTube, but believe it or not, I'm also an engineer. Yes, I do ship code. In fact, I shipped a really big project around this time last year. You might have heard of it. It's called UploadThing. It's a thing for uploading files. This is it. This is what it looks like. It's, in my opinion, the best way to add file uploading to your project, but that's not what we're here to talk about today as much as I would love to because we need to go deep on server components. Specifically, what has it been like shipping server components in production for a bit over a year now? Yes, we went all in on AppRouter. We even went all in on Edge with this project. There's a lot of cool things we learned, a lot of cool things that worked great, and of course, a bunch that worked terribly. I want to go in detail on all of these because normally when I would do this in a video, no one would watch it. But since I have you guys all held hostage at this conference, we can talk about these things in way more depth than I normally get to. Isn't that exciting? Huge shout out to React Summit for letting me do this, recording the video, sending it ahead of time because this is such a stress relief. Wish I could have been there in person, but hopefully if all goes well next year.
2. UploadThing Deployment and AppRouter
Let's dive into UploadThing, a traditional Next.js app deployed with the App Router. We broke the packages and the infrastructure into separate repositories so we could open source the packages. We've learned a lot through this process and will showcase some of the cool things we've discovered. While most of our stuff is still on Vercel, we're trying out other deployment options for certain components. We'll also discuss the App Router, which is crucial for using Server Components safely in production. If you need more background, check out my YouTube videos on the Pages Router in Next.js.
Anyways, let's dive in. I have a whole list of the good, bad, and ugly and I want to go through all these parts.
Okay, first things first: what is this app and how is it deployed? Well, UploadThing is a pretty traditional Next.js application using the App Router. We can take a look in our Vercel account, hop over to UploadThing, and you see nothing too exciting here. We were using FileThing as our original name, but UploadThing is what we ended up sticking with. It's hooked up to a GitHub repo and deploys automatically. The GitHub repo is where things start to get a bit more interesting, though, because it's a monorepo.
Originally, we had the packages for UploadThing as well as the infrastructure all in one repository. We broke them apart because we wanted to open source all of the packages. So if you're curious how the packages work, we're not talking about that too much here. That's all open source. You can feel free to dig into it. This is how our infrastructure works and we've learned so much through doing this and I can't wait to showcase some of the cool things that we've learned.
But as I said, pretty traditional: you attach this to Vercel and it deploys automatically once you tell it which of these packages the code is in, which for us is the infra/uploadthing-server package. You'll see here that we've been trying out other places to deploy more and more of our infra. We'll definitely talk about that as we go, because while most of our stuff is still entirely on Vercel, we are using other pieces for other things, like managing our S3 buckets and the redirect layer on top of them. That stuff we've been doing separately. First, it's worth acknowledging that this isn't just going to be about Server Components, because we're also using the App Router, which, to be frank, is the only real way to use Server Components safely in production right now. So yes, we're on App Router plus Server Components, and we're going to have to talk about the App Router a bunch for this to be a fair conversation. If you're not already familiar: Next.js replaced their old router, the Pages Router, where you just had random page.tsx files.
Well, they weren't really page.tsx; whatever the name of the file was became the URL. So if you had Theo.tsx, whatever you exported there would be served at /Theo. Now we're on the App Router, which is a different way of doing things. I don't want to go too in-depth on how things used to work, so if you need to learn about that, I have tons of videos on my YouTube channel about how the Pages Router worked and how it compares to this. I am expecting some amount of Next.js knowledge as we go through.
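To make the two conventions concrete, here is roughly how the same route looks under each router (paths illustrative):

```text
pages/Theo.tsx      ->  /Theo   (Pages Router: the file name becomes the URL)
app/theo/page.tsx   ->  /theo   (App Router: the folder is the route; the file is always page.tsx)
```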
3. Exploring Layouts and Nesting
This part explores the traditional aspects of the homepage layout in UploadThing, the root layout that applies to every page, and the nesting nature of layouts and how they work with the URL.
This is more to show what we've learned, not just how these things actually work. So here we have the homepage, page.tsx. Notice runtime is edge. We'll talk about that later. Don't you worry. But this is all pretty traditional stuff.
We have our home wrapper, which is a wrapper component that has a bunch of content that we want on the homepage and pages similar to it. We have our flex layouts and all that. We have the hero. We have the callout, which is the little bit of news on the homepage, all traditional stuff. Nothing too interesting here, except for the fact that none of this code ships in the JS, which is really cool that this is just shipping HTML.
What I was here to show is the layout, because the layout is code that runs around every single page on the site. Since this is the root layout, it applies to every single sub-page. So if you want a sub-page to do something different, you either need to route it separately from this layout or find ways to override these behaviors, because this is the thing that wraps everything. Generally, the stuff you put in here is things like your auth provider and your analytics provider (that's our PostHog provider here), the tRPC provider for the little bit of tRPC we're using in the app, which we'll also be talking about, and the Toaster for showing a little toast when you click something. All the things you want on every page. This is the root of the app, the thing that used to surround all your components in App.tsx.
We've all had that. But this is just the root layout, and as I mentioned before, there are layers to this. So if we go to the dashboard, you'll see we have these three sections; the one that matters for us right now is the app ID one. Here we have a layout, the dash layout, which mounts the side nav and then mounts the children. This layout gets applied underneath the previous one, and if one of these pages had a layout of its own, that would get applied as well. The nesting nature of layouts, and how they map onto the URL: that's the real magic here.
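A minimal sketch of that nesting follows; the file paths, component names, and markup here are illustrative, not UploadThing's actual code:

```tsx
// app/layout.tsx: the root layout, applied to every page on the site
export default function RootLayout({ children }: { children: React.ReactNode }) {
  return (
    <html lang="en">
      <body>
        {/* analytics provider, tRPC provider, toaster, etc. live here */}
        <Providers>{children}</Providers>
      </body>
    </html>
  );
}

// app/dashboard/[appId]/layout.tsx: nested inside the root layout
export default function DashLayout({ children }: { children: React.ReactNode }) {
  return (
    <div className="flex">
      <SideNav />
      <main>{children}</main>
    </div>
  );
}
```

When you visit /dashboard/some-app, Next.js renders RootLayout, then DashLayout inside it, then the page inside that.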
4. Different Layouts and Data Fetching
This pattern of using two different layouts allows different parts of the dashboard to have different behaviors and components. A special routing syntax lets us pull values out of the URL and pass them as parameters throughout our code. Data fetching becomes as simple as awaiting a database call inside a component, for example to check for an outdated SDK version. Returning null from a server component means nothing ships to the client for it, simplifying the code and improving performance.
And honestly, this pattern has been great. There are edge cases, like you see here: even though we had these multiple things under /dashboard, we wanted the dashboard route itself to behave differently from a given ID within it. When you go to my dashboard, we wanted all of these parts to behave differently from the top level here: show the picker so you could switch organizations and such, use a different top nav, change the things inside, add this component here, and so on.
In order to do that, we had to have two different layouts and make sure this one didn't apply when we wanted the other one to apply. To do that, you use a special syntax to pull things out of the routing. The parentheses create a route group that's invisible in the URL (it's effectively as if you just did the slash), and the brackets create a dynamic segment whose value you can now grab. So if I go to the page.tsx here, you can see I have appId being passed as a parameter, and now that I have it, I can use it for stuff. In this case, I have a link built from it, and I'm sure there are plenty of other places where we use it. Oh, the outdated-SDK component takes that appId as a prop.
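So the routing conventions from this section look roughly like this (an illustrative tree, not UploadThing's exact one):

```text
app/
  layout.tsx              root layout (wraps everything)
  (landing)/              route group: the parentheses are invisible in the URL
    page.tsx              -> /
  dashboard/
    [appId]/              dynamic segment: the value arrives as params.appId
      layout.tsx          dashboard layout (side nav, org picker, ...)
      page.tsx            -> /dashboard/:appId
```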
And this is how easy it is to do data fetching, which honestly we can just go into now. The fact that if I need to hit the database quickly, it's that simple, is hilarious. This outdated-SDK check looks at which version of the SDK you used in your most recent file upload to see if it's out of date. Since this is from an old tutorial, the stuff we were using is obviously out of date, so it's recommending we upgrade to the latest version. It does that by running this function: we get the SDK version for the app, which is just a database call directly, by the way, and then get the latest UploadThing version, which checks the npm registry. We could even cache this pretty trivially to make things even faster.
And then, if the version is outdated, which we know after comparing the two, we return this warning. Note the simple return null here: this is just a normal component, but since it runs on the server, we can call server stuff like this directly. And if it returns null, the client just gets nothing back. There is no penalty whatsoever; this JavaScript doesn't bloat the client, and to the user it effectively doesn't exist, which is magic. You could theoretically write super complex code here that only renders something when the logic says it should. These data fetching patterns have not gotten old. It still feels magical every time that you can just await and get the app info, which is just this database call. It's so cool that in a component, you can just await data.
And if you don't have the thing you want, you just return null, which makes everything so much simpler. We have found this simplicity absolutely scales. Not in the sense that you'll never deal with complexity again.
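Put together, the await-and-return-null pattern from this section looks something like this; the helper and component names are illustrative, not UploadThing's actual code:

```tsx
// An async server component: it can await server-side data directly.
export async function OutdatedSdkWarning({ appId }: { appId: string }) {
  const current = await getSdkVersionForApp(appId);    // direct database call
  const latest = await getLatestUploadthingVersion();  // npm registry lookup
  if (!isOutdated(current, latest)) return null;       // nothing reaches the client
  return <p>Your SDK ({current}) is out of date; the latest is {latest}.</p>;
}
```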
5. Centralizing Code and Optimizing Performance
By centralizing our code and using Drizzle for type safety, we can easily update and maintain our app. Complex logic and utilities can live on the server without affecting the client's bundle size. We also avoided loading background patterns from a CDN at runtime by downloading them ahead of time and embedding them in the markup.
But it scales in the sense that it's a 10x win, and that 10x win makes things feel way less complex as you continue building on top of them. You might have noticed this getAppInfo isn't a database call directly in our component; nobody actually recommends you do that. A SQL call inside a component next to an onClick is a really nice demo because it shows how tightly knit these things are, but nobody should actually be doing that.
Instead, what we're doing here is writing actual queries. We're using Drizzle for the most part, but we even dump some raw SQL in here for certain stuff around SSL certificates. It all lives here simply because we like having it centralized in one place; it makes it much less likely that we duplicate logic in ways that don't make sense. This way, if we use getAppInfo in 15 places, we can update it in this one place and it affects everything else. And since we're using Drizzle, which has a type-safe return, we immediately get a warning. If I delete the tier field here, we immediately get a type error, because something is expecting to know the tier, and I didn't even have to save the file. That's so cool: from the back end to the user interface, we have full type safety keeping us from making dumb mistakes. And that doesn't go away no matter how big your codebase gets, and this one has gotten very large over time. We can have really complex logic too, like a bytes-to-human-readable helper. Okay, this one's not super complex, but you get the point: it's a utility that just runs on our server, and as long as you don't import it on the client, you won't have to deal with any potential bundle size increases.
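As an example of the kind of server-only utility mentioned here, a minimal bytes-to-human-readable helper might look like this (a sketch, not UploadThing's actual implementation):

```typescript
const UNITS = ["B", "KB", "MB", "GB", "TB"];

// Convert a raw byte count into a short human-readable string.
export function bytesToHumanReadable(bytes: number): string {
  let value = bytes;
  let unit = 0;
  while (value >= 1024 && unit < UNITS.length - 1) {
    value /= 1024;
    unit++;
  }
  // Round to one decimal place; whole numbers render without ".0".
  const rounded = Math.round(value * 10) / 10;
  return `${rounded} ${UNITS[unit]}`;
}
```

Because this only ever runs on the server, it adds nothing to the client bundle.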
As a result, I think that's awesome. One more thing on bundle size, because there are a couple of these magical "oh shit" moments that happen once you go deep on Server Components, and one of my favorites by far is going to seem so simple and dumb, but it really made this stuff click for me. See all these patterns on this page, where we have a fancy little pattern and a different background color for each of the apps? Those are all coming from one file. See this file, generatePattern; this file is quite large. It contains all of these patterns, and each of them is one line for the pattern itself, but they're massive. This file is megabytes. You wonder where all those patterns came from? Hero Patterns. Dope. We literally just downloaded all of them and put them in one file because we didn't want them to pop in while loading from a CDN; we wanted them in the markup. So if you look here, you'll see in the actual markup we have the SVG. I have to dig a little deeper to see it, but you get the idea: the style is embedded here as data:image/svg+xml.
6. Server Components and Performance Optimization
We embed background patterns directly in the HTML, avoiding the need for complex SVG loader logic; RSCs eliminate SVG bundling concerns entirely while keeping a seamless user experience. Suspense allows non-blocking loading of code and data, improving performance, and combining it with server components simplifies logic so that only a few suspense boundaries are needed.
We embed it as an actual background style element, so it's in the HTML and there's no pop-in. I obviously don't want to ship this giant file, even though the logic here is really simple: given the pattern, a foreground color, and an opacity, we do some replacements on the fill to apply the new foreground color, then swap some characters so it's valid as a data URI instead of just being an SVG. You get the idea. This was Mark fixing some silly things way back when, but the getRandom part is super simple.
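The transformation described here is roughly the following. This is a sketch of the idea, with the function and parameter names invented for illustration; it is not the real generatePattern code:

```typescript
// Turn an inline SVG pattern into a CSS background value, substituting
// the foreground color and opacity so it can be embedded straight into
// the markup with no CDN fetch and no pop-in.
export function svgToCssBackground(svg: string, fgColor: string, opacity: number): string {
  const colored = svg.replace(
    /fill="[^"]*"/,
    `fill="${fgColor}" fill-opacity="${opacity}"`,
  );
  // Percent-encode the markup so it is valid inside a url(...) value.
  const encoded = encodeURIComponent(colored).replace(/'/g, "%27");
  return `url("data:image/svg+xml,${encoded}")`;
}
```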
This code never touches the client, ever. We could even assert that if we really wanted to by putting import "server-only" at the top; then, if we try importing this somewhere we shouldn't, we'll get an error that this code is being imported somewhere that runs on the client. That means this whole pile of stuff we stuck in here, these giant SVGs of which we have literally over a hundred, doesn't affect the performance or the bundle size at all. This really dumb, simple way of doing things is totally safe and works fine.
Now, this is where the magic really started to click for me. It's less that RSCs are an alternative to APIs, or just the thing you use on the server, and more that RSCs make a lot of the crazy patterns we had to build just go away. Think about all the time we've spent on SVG loader logic, trying to bundle SVGs in the best possible way so our users have a good experience. What if we didn't have to do any of that? What if that just was how it worked? Wouldn't that be great? That's what this has been doing for me in so many different places.
One more thing I love before we start getting into the iffy stuff: I want to find a good example of my favorite pattern, Suspense. Suspense is incredible because it lets certain things load without blocking everything else. In this case, Suspense keeps the code that manages PostHog page views from blocking page load. PostHog is our analytics provider (shout out to PostHog, they've been great to work with), and it's just not something we want to block page load on, because it has to load JS and run some logic. If we can stream that result in when it's ready, without making anything take longer, and in the end make everything else take less time, why wouldn't we do that?
That's a pretty simple example, but we have more complex ones too, like organization members, a component that has to fetch data from multiple places. Wouldn't it be nice if we didn't have to block the whole page on that? Or something like the files page: the files table has to fetch a ton of data, so not having it block the rest of the page means that when you navigate to it in the dashboard, the page loads immediately. If something just fetches data from your database and takes a little longer than everything else, throw it in a component with a Suspense boundary above it and it streams right in. Suspense has been a really nice experience overall, and I've been super happy with it, especially combined with some of the crazy abilities server components provide, like passing a server component as a prop to a client component so that it's nested inside it. This stuff has been very, very nice to work with and has let us simplify significant amounts of our logic. That all said, you might notice that despite the size of this codebase, we don't have too many suspense boundaries. That's because we tend to put them at the top level with something simple and obvious like a layout.
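In code, the pattern is just a boundary around the slow part; the component names here are illustrative:

```tsx
import { Suspense } from "react";

export default function FilesPage() {
  return (
    <>
      <PageHeader /> {/* static shell renders immediately */}
      <Suspense fallback={<TableSkeleton />}>
        {/* FilesTable is an async server component that awaits its data;
            its HTML streams in when ready without blocking the rest. */}
        <FilesTable />
      </Suspense>
    </>
  );
}
```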
7. Server Components and Latency Handling
We have found that putting one nice Suspense boundary at the heaviest load point, and then letting everything else simply await, has given us really fast-feeling apps overall. The site's performance is insane, especially in weird, not-great internet environments; its ability to handle latency is one of the coolest things. The number of round trips has gone down so much, and it shows: the experience users have in this app is incredible. We did have to make some compromises to get there, though.
We put them at the top level, with something simple and obvious like a layout, and then let the pages load everything in at once. Otherwise, if you overuse Suspense, you end up back in the same loading-spinner hell we've been trying to get out of. We have found that putting one nice Suspense at the heaviest load boundary and then letting everything else await its way to hell has made for really fast-feeling apps overall. And God, do they feel fast.
As I showed before, with things like the SVGs never needing to load separately, the performance of the site is insane, especially if you're in weird, not-great internet environments. I remember I was at another conference, GraphQL Conf, and somebody was talking about how modern React and Next.js apps just aren't as performant as even something on Rails or PHP. When he was on an airplane, he was trying to navigate around some random janky Next app. Spoiler: that random janky Next app was Vercel's dashboard. And he was having a really bad time, because when you're up in the air on really spotty Wi-Fi with insane amounts of latency, the experience wasn't great. And honestly, I kind of agree. The Vercel dashboard is one of the best examples of a not-great-to-use Next.js app. They've been making progress since, but it's rough.
That said, I dare you to go play with our apps, like UploadThing, when you're on airplane Wi-Fi, because I promise you, it flies. I'm on really good Wi-Fi right now and it's still moving that fast. I could go turn down the bandwidth here, but lower bandwidth alone won't properly represent that chaos, or how nice this feels. Its ability to handle latency is one of the coolest things, and the reason is a bunch of little things, not one big thing. You can just load the payload from the server instead of having to navigate, then load the new JS, then fetch from the server to load the new JSON, then render that JSON, realize you need other things, and go fetch those. The number of round trips has gone down so much, and that lowering has not changed over time: we still have three or fewer round trips to get the full page content. That's nuts, and it shows; the experience users have in this app is incredible. We did have to make some compromises to get there, though, and here's where we start getting into the bad for a bit. As much as I love server components, you might have noticed a few places throughout the app where we have this little export const runtime = "edge" bit. There's a reason for that. When you move to server components, initially all of your web pages have to go through the React and Next.js server.
8. Edge, Lambda, and Partial Pre-rendering
For users, getting a response from your server instead of the CDN means a delay: the server needs to spin up, process the request, and generate an HTML response, and cold starts on Lambda can add significant time to homepage loading. To address this, we went all in on Edge and were the first production app to deploy the App Router and Server Components on the edge. Although having two runtimes brought pain points, running on the edge improved initial render times. Still, we are excited to adopt partial pre-rendering in the future to improve performance further and eliminate the need to run Edge and Lambda simultaneously.
For that to work, the user doesn't get a response from the CDN anymore. They have to get the response from your server. So if I send a request to a Next.js server, I don't get a CDN cache response immediately. I have to wait for that server to spin up, acknowledge my request, process the request, generate an HTML response, and then send it to me finally, which just takes a lot more time than hitting a cache from a CDN.
Since this was also on Lambda, we were eating cold starts all the time, and cold starts hurt, especially if you're using a heavy database client. Drizzle is not too bad, but Prisma could add literal seconds, like one to two full seconds, to your cold starts. Miserable. We didn't want the homepage to take up to two seconds to load; we wanted it to be instantaneous. Since we didn't have the benefit of some new stuff we'll talk about in a second, we went all in on Edge, and we were the first production app deploying the App Router and Server Components on the edge. I know that because they were making a lot of changes for us last minute during our launch. That's had a lot of pain points. Having two runtimes in your app with entirely different expectations of how the world works resulted in a lot of annoying things; for example, we use UUID instead of CUID because I couldn't get the cuid2 package working on Edge and on normal Lambda at the same time. But at least putting it on the edge let our initial render times be faster, and they still are. If you go to our homepage, it's not hitting a CDN to this day. I'll sign out to show the homepage properly. I'm hard-refreshing with Command-Shift-R, and it's just immediate. You can't really make it load slowly, because it can generate a super simple response on the edge in almost no time.
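For reference, opting a route into the Edge runtime in the App Router is a one-line export from the page or route file:

```typescript
// In an App Router page.tsx or route.ts file; the default is "nodejs",
// which runs as a Lambda on Vercel.
export const runtime = "edge";
```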
But wouldn't it be nice if it was literally no time? What if, using the same Suspense stuff I just showed, we could load the shell instantly from a CDN and stream the rest in? The way I used to describe what I wanted: just cache the first chunk that gets sent through the stream, put that on a CDN, send it immediately, and then send the rest later. I've been asking for this for well over a year, possibly even two at this point. Thankfully, Vercel listened, and during Next.js Conf last year they announced partial pre-rendering, which is literally just this. It takes the static part that's the same for all users, the shell or whatever you decide, and as long as there's a suspense boundary before the first user-specific data, it can cache that top layer. We've not set it up here yet because we're happy with the performance of our current solution, and it introduces all sorts of changes we'd have to make because we built a lot of custom shit here. But in the very near future, I'm excited to adopt partial pre-rendering and finally be able to drop Edge, because running both Edge and Lambda at the same time has been rough, especially for us, since we're using the AWS SDK quite heavily.
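At the time of this talk, partial pre-rendering sat behind an experimental flag in the Next.js config (flag name as of Next.js 14; check the current Next.js docs before relying on it):

```typescript
// next.config.mjs
const nextConfig = {
  experimental: {
    // Serve the static shell from the CDN and stream the dynamic
    // parts (below the nearest Suspense boundary) in afterwards.
    ppr: true,
  },
};
export default nextConfig;
```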
9. Performance, tRPC, and Webpack
If you use Server Components without partial pre-rendering, performance may not be satisfactory; it is essential to render most of the content statically, use the Edge runtime, or cache via CDN. We had issues with server actions and switched back to tRPC, which has been reliable and valuable. tRPC is a type-safe RPC layer for TypeScript, allowing you to call backend functions almost directly from the frontend, and even after adopting Server Components it can still be used alongside them. Not everything is easy to replace, though: we are facing challenges with webpack dev performance while still awaiting Turbopack, its promised Rust rewrite.
Hopefully you're getting the idea here. We were not happy with the performance of server components without partial pre-rendering, and went to Edge for that reason. If you're using this stuff and expecting a fast home page, please find some way to render most of it statically. If you don't, it will perform like shit and you will feel bad. Something here is effectively necessary, either the Edge runtime or caching some part of the page via CDN. Otherwise, you're going to see that dreaded blank white page, which is not how we should expect our apps to perform and behave.
Speaking of random things you might have noticed as I was scrolling: I briefly mentioned tRPC. We were just not happy with server actions when we tried to consume them. They had lots of weird, unexpected security-ish side effects (mostly addressed since), and they were annoying to auth, manage, and bind. They've gotten way simpler overall, but we didn't want to die on that hill too hard too early, so we quickly gutted what we had built around server actions and moved back to our beloved tRPC, which has continued to behave great for us. If you don't know tRPC, it's the type-safe RPC layer for TypeScript: you write a function on the backend and call it pretty much directly from your frontend. Really handy stuff, and part of the T3 Stack; if you haven't heard of that, go watch my YouTube videos. tRPC has continued to be incredibly helpful and valuable to us. We still lean on it for a lot of our actions behavior, as well as for some of our suspenseful data loading, like our infinite-scroll tables. It's been great to work with. Still love tRPC, still my reliable go-to and fallback. You don't have to feel like you lose something like tRPC just because you're adopting server components; you can use them together, interchangeably. Totally fine. Not everything could be so easily replaced, though.
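If you haven't seen tRPC, the shape is roughly this. This is a sketch assuming the @trpc/server and zod packages, with getAppInfo standing in for any server-side function; see the tRPC docs for real setup:

```typescript
import { initTRPC } from "@trpc/server";
import { z } from "zod";

const t = initTRPC.create();

export const appRouter = t.router({
  // Define the backend procedure once...
  appInfo: t.procedure
    .input(z.object({ appId: z.string() }))
    .query(({ input }) => getAppInfo(input.appId)),
});

// ...and the client calls it with full end-to-end type safety:
//   const info = await trpc.appInfo.query({ appId: "abc123" });
export type AppRouter = typeof appRouter;
```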
One of the more painful things we've been dealing with is dev server performance when using webpack. We've kind of been stuck on webpack for a while because we've been sitting patiently waiting for Turbopack, the promised, wonderful rewrite of webpack in Rust that has been told and foretold for however long now. Turbopack still isn't ready. It's getting close.
10. Turbopack, Webpack, and Hydration
Turbopack is improving the dev experience for Next.js, especially on older machines, after webpack's dev server performance caused a lot of suffering. Hydration has also been a pain point, particularly when creating components in the installed package. To solve this, we developed the NextSSR plugin to render the client component identically on both server and client and prevent hydration errors.
It's getting very close. Hopefully even by next week, we'll see Turbopack properly "100% compatible" and ready to go for dev. We're not there yet, and especially for people on older machines, the dev experience for Next.js was miserable for a while; the App Router just isn't particularly fast at generating new pages. Now, with Turbopack, it's way better.
Given how atrocious webpack performance was, the fact that Vercel kind of just waited for Turbopack was frustrating; it would have been nice to do more fixes and find ways to make things better in the interim. They did a little, but not quite enough, and we have definitely been suffering with atrocious dev server performance, to the point where if you're not rocking a newer MacBook, not even an M1 feels like it'll cut it right now. I'm on my M2 Max and it's felt mostly fine here, but God, the one time I tried running this codebase on my Windows machine, it was painful.
Next: hydration. It always sucks, and there were a couple of places where it sucked particularly hard. Not in the app itself, but specifically when we wanted to create components in the package that you install. When you use UploadThing, you get an upload button component that runs the UploadThing code to upload the user's file, but it validates and authenticates everything on your server, not on our servers, on yours, because we think it's very important that you authenticate every single user before they have permission to upload. On your server, you also choose what kinds of content are allowed: can they upload pictures, what size of picture, how many. And we show all of that in a little label below the button, because it's really nice to see "images up to 4MB" underneath your upload button. We wanted this to render without a loading state or pop-in, but whenever we tried things like the use hook with a passed promise, it would just infinitely re-render and break terribly on the client.
So we needed a way to make a client component that can render on the server and on the client with the state baked in. We made what we call the NextSSR plugin, and what it does is literally dump all of the UploadThing config data as a JSON blob inside a script tag in your page, to make sure we have the data we need on both server and client to render correctly and prevent hydration errors. It actually works beautifully, but the fact that we had to come up with a clever hack like this is obnoxious.
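The idea behind that hack can be sketched in a few lines. The script tag id and the config shape here are invented for illustration, not the real NextSSR plugin API:

```typescript
interface RouteConfig {
  maxFileSize: string;
  maxFileCount: number;
}

// Server side: render the config as an inline JSON blob in the page.
export function renderConfigScript(config: Record<string, RouteConfig>): string {
  // Escape "<" so the data can't break out of the script tag.
  const json = JSON.stringify(config).replace(/</g, "\\u003c");
  return `<script id="__UT_CONFIG" type="application/json">${json}</script>`;
}

// Client side: read the blob back before hydration, so the server and
// client render identical markup and no hydration mismatch occurs.
export function parseConfigScript(html: string): Record<string, RouteConfig> | null {
  const match = html.match(/<script id="__UT_CONFIG"[^>]*>(.*?)<\/script>/s);
  return match && match[1] ? JSON.parse(match[1]) : null;
}
```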
11. Building Full Stack Packages with React and Next
Building full stack packages with React and Next can be challenging. There is a need for better interfacing between the client and the server. Companies like us have put in a lot of effort to make code work ideally in both environments. Although there are pain points, they are outweighed by the benefits of working with Server Components.
And that's the other biggest pain point I've had overall, though it's less a pain point and more a plea. This stuff needs a better way to interface between the client and the server. It has been incredibly painful. Not when building apps; when building apps, it's literal f***ing magic. But once you start building packages for it, where you want to interweave the server and the client in a package that is then consumed, it does not work anywhere near as well.
There are lots of changes, both big and small, that could be made on the React and Next.js side to make it easier to build a full-stack package, but we're still getting there. The amount of work companies like us, companies like Payload, companies like Clerk have put in to make code that works well in both environments has been really painful. Imagine if a package couldn't provide both components and hooks: you try to ship a package that uses hooks in its components, or God forbid components inside its hooks, and React just puts its hands up and says sorry. That's kind of how it feels building packages that use react-server and react-client right now. This should change in the future, but right now it has been pain. If you want to see that pain, go look at the open source UploadThing code. It is full of it.
That said, we're never going back. This stuff is the best it's ever been; I am the happiest I've ever been writing React, and I really do highly recommend you check out Server Components if you haven't yet. They're changing how I think about writing software. There are obviously no guarantees, and when you invest in something this futuristic, you might end up losing; we might not go with this long term. But right now, it just feels great to work with. If you want to learn something new and aren't scared of it possibly not being the next big thing, I couldn't recommend Server Components highly enough. And honestly, I'd be really surprised if this isn't how we end up writing software going forward. Until next time, peace nerds!