Your Frontend’s Best Friend - How to Ship Fast in 2025


Most software projects fail, but shipping fast improves your chances of success. That’s why focusing on iteration velocity is so important and you should design your application architecture accordingly. Full stack frameworks like Next and Remix give you full control over the web experience with access to a dedicated web server. With Next and Remix, you expose custom endpoints for your UI (server actions, Remix actions and loaders) and use special-purpose payloads (e.g., via RSC or turbo-stream) tailored for the web frontend. But what if you need to add another frontend to your project? With LLM applications growing in popularity and the Model Context Protocol (MCP) gaining adoption, it’s becoming more likely that you’ll need to serve both third-party LLM applications and your existing frontend app. How can we best serve LLM apps with our existing full stack architecture? Let’s talk Backend for Frontend, full stack app architectures, and MCP!

This talk was presented at React Summit 2025; check out the latest edition of this React conference.

FAQ

Iteration velocity is crucial because it lets you ship more features and fix more bugs with the same resources. Faster iteration means a quicker feedback loop, which significantly improves a project's chances of success.

Full stack web frameworks like Next.js and Remix improve development by providing a closely integrated environment where tailored endpoints are created for specific routes, making code sharing easier and increasing development speed and efficiency.

The BFF pattern allows each frontend to have a dedicated backend that handles requests specifically for that frontend, simplifying interactions with multiple backends and improving performance and maintainability.

TurboStream allows Remix applications to serialize promises, dates, and other JavaScript values, enabling streaming over HTTP and better handling of asynchronous data, which improves the frontend's responsiveness and user experience.

Observability is crucial as it involves monitoring and gaining insights into how people use your product, providing traces and bug reports to quickly identify and resolve issues, thus improving the reliability and performance of software projects.

MCP, or Model Context Protocol, enables large language models (LLMs) to integrate with third-party services and tools, allowing chat interfaces to provide more powerful functionalities by connecting with various services.

The MCP server adapter for Next.js applications allows developers to integrate MCP functionalities directly within their existing project architecture, facilitating the handling of requests for both traditional web frontends and MCP clients in a unified environment.

MCP servers face the challenge of being stateful and needing to support more complex workflows and higher-level concepts than traditional REST APIs, which require careful consideration of how tools and processes are exposed to LLMs.

Next.js and Remix applications can serve multiple front-ends by acting as backends tailored for specific front-end experiences, whether it's a React app or an MCP client, allowing them to handle different types of requests efficiently in a single deployment.

Code sharing allows developers to reuse code across different parts of a project or between projects, reducing duplication of effort, speeding up development, and ensuring consistency across applications.

Andre Landgraf
20 min
17 Jun, 2025

Video Summary and Transcription
The Talk delves into improving success rates in software projects by focusing on iteration velocity and observability. It discusses the advantages of full stack web frameworks like Next.js and Remix for efficient software development and easier code sharing, and explores how turbo-stream, the BFF pattern, and the Model Context Protocol (MCP) fit into full stack development. Vercel's MCP server adapter for Next.js applications (and Cloudflare's options for hosting MCP servers) are highlighted as ways to serve diverse front-end experiences from a single deployment while keeping code sharing simple.

1. Improving Success in Software Projects

Short description:

The speaker discusses the challenges and importance of improving success rates in software projects by focusing on iteration velocity and observability to solve engineering problems.

Hey, React Summit. I have a really exciting, cool, new side project idea, but I'm also a little scared because I don't actually have a good track record when it comes to side projects and most of them failed. And I'm pretty sure I'm not the only one. If you look through your GitHub, I'm pretty sure there's a very long list of abandoned side projects as well. And that's all fine, right? Some of them are just for learning purposes. Then it was great, right? You succeeded anyways by learning something new.

But sometimes you also have a really cool idea and you really believe in it, and then it still fails, right? And that's just because most software projects fail. But if something is really important to you, like this side project is to me, you really want to ask yourself, how can you improve your chances of success? And I found this great tweet by Guillermo Rauch, the CEO and founder of Vercel, who's saying iteration velocity and observability solve all known software engineering problems. No challenge too great, no bug too difficult when you can ship fast and get visibility.

So observability, obviously very important. You want to know when something goes wrong. You want to have traces and bug reports, but also you want to have analytics about how people are using your product and get insights from that. But I want to focus right now on the iteration velocity part. So if you ship fast, that means you can ship more features and fix more bugs with the same resources, right? And you can think of resources as whatever is constraining your project. For a startup, that's probably the funding, and for a side project, that could be time or motivation or a combination of these, right? So if you can ship faster, you can make more out of those resources. You have a quicker feedback loop, and usually that means you can increase the chances of success.

2. Streamlining Software Development

Short description:

Andre Landgraf, a developer advocate at Neon, discusses the benefits of using a full stack web framework like Next.js or Remix to streamline software development and improve code sharing.

And my name is Andre. I'm a developer advocate at Neon. And these days I build a lot of small applications, projects, and examples. And I want to make sure that whenever I build something, I build it fast and I build it the right way. And in general, I love thinking about software architectures and seeing how they evolve over time.

Today, the pattern I want to talk about is called backend for frontend, or, as we're calling it here, the frontend's best friend, and how to ship fast in 2025. In 2023, I already gave a talk called The Time to Go Full Stack Is Now, where I was basically arguing that you should use Remix or Next.js and abandon your client-side-only single-page applications and general-purpose REST APIs.

So instead of having client-side-only single-page applications, try out a full stack web framework and take advantage of the full stack of the web platform. What's the difference, right? First of all, with a full stack app, you're not building general-purpose REST API endpoints anymore. Instead, you only build tailored endpoints for the routes and pages of your site, and everything is much more closely coupled together, since it's all in one application that you deploy to one place. So instead of deploying to a CDN for the static files of the SPA and then to a Lambda function or something similar for the REST API, you have one deployment, maybe to Vercel, for your full Next.js app. And that also makes code sharing obviously way easier, right?

3. Efficient Frontend-Backend Integration

Short description:

Building tailored endpoints for better code sharing and faster deployment in full stack applications. Utilizing web server for backend logic and handling redirects efficiently. Exploring the Remix architecture and its approach to data loading and transport using Turbostream for improved frontend-backend integration.

That makes code sharing way easier, right? Obviously, you can do monorepo setups and whatnot, but by having one full stack application, you get type safety across the network for free. You have one app for both environments. So that already should make you faster. But most importantly, in my opinion, you now own the server, the web server that takes incoming document requests from the browser, right? There are a lot of things that we have traditionally done in React that are probably better suited to the web server environment. A good example is redirects: if there's an incoming GET request, you want to check whether someone is authenticated and, if not, redirect them to the login page. That's something that is really nicely located on your web server, the environment that handles incoming browser requests. So now you can handle backend logic in a dedicated web backend.
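As a minimal sketch (assuming a recent Remix setup and a deliberately naive cookie check for illustration), that redirect can live entirely in the loader on the web server:

```ts
// app/routes/dashboard.ts (hypothetical route) - the redirect happens on the web server
import type { LoaderFunctionArgs } from "@remix-run/node";
import { redirect } from "@remix-run/node";

export async function loader({ request }: LoaderFunctionArgs) {
  // Naive check for illustration: look for a session cookie on the incoming GET request.
  const cookie = request.headers.get("Cookie") ?? "";
  if (!cookie.includes("session=")) {
    // The browser receives a 302 straight from the server; no client-side
    // JavaScript has to run before the user lands on /login.
    throw redirect("/login");
  }
  return null;
}
```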

Here's the OG example from the Remix website. If you go to remix.run, this example I don't think has changed since Remix was first released, and React Router framework mode still looks very similar to this day. What we see here is one Remix route with a loader function, an action function, and one route component, and that's how the Remix routing convention looks. The loader fetches data on the server, and then you have access to that in the React component with the useLoaderData hook. And then in that component, you can submit actions. Those go to the action function, which handles the mutations (POST, PUT, DELETE) for that route. And that action can then redirect or return data back. So this is the Remix architecture. And what's very interesting about this is that we obviously don't have a POST project REST endpoint, a DELETE project endpoint, and a GET projects REST endpoint. Instead, we have one loader that returns all the data needed for this route.
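For reference, here's a rough sketch of such a route (not the actual example from the Remix site; it uses an in-memory array instead of a database, and assumes a recent Remix version where loaders can return plain objects):

```tsx
// app/routes/projects.tsx - one route, one tailored loader and action, one component
import type { ActionFunctionArgs } from "@remix-run/node";
import { redirect } from "@remix-run/node";
import { Form, useLoaderData } from "@remix-run/react";

// Placeholder data layer so the sketch stays self-contained.
const projects: { id: number; name: string }[] = [{ id: 1, name: "Side project" }];

export async function loader() {
  // Tailored endpoint: return everything this page needs in one payload.
  return { projects };
}

export async function action({ request }: ActionFunctionArgs) {
  const formData = await request.formData();
  projects.push({ id: projects.length + 1, name: String(formData.get("name")) });
  return redirect("/projects");
}

export default function Projects() {
  const { projects } = useLoaderData<typeof loader>();
  return (
    <main>
      <ul>
        {projects.map((p) => (
          <li key={p.id}>{p.name}</li>
        ))}
      </ul>
      <Form method="post">
        <input name="name" placeholder="New project" />
        <button type="submit">Create</button>
      </Form>
    </main>
  );
}
```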

That data may include projects, but the loader could additionally return the user, right? So it's already aggregating data. But more importantly, what has changed since Remix first came out is that the loader is not actually returning JSON data anymore. It's now returning a different format using a transport called turbo-stream. And turbo-stream was developed to be able to serialize promises, but also Date objects and other JavaScript values, and then deserialize them again on the client. With JSON, obviously, if you pass in a Date object, it turns into a string, because JSON can't handle Date objects, and then you have to deserialize it again on the client if you want to turn it back into a Date object. And obviously, there's no concept of promises in JSON. That's something that Remix has now built to couple the client side and the backend side even more closely together.
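A small sketch of what that enables, assuming a recent Remix / React Router version with single fetch (turbo-stream) enabled; the Date and the promise below cross the network as-is:

```tsx
// app/routes/stats.tsx - streaming a promise over turbo-stream (hypothetical data)
import { Suspense } from "react";
import { Await, useLoaderData } from "@remix-run/react";

async function loadSlowStats() {
  // Stand-in for a slow downstream call.
  await new Promise((resolve) => setTimeout(resolve, 2000));
  return { events: 42 };
}

export async function loader() {
  return {
    user: { name: "Andre" },              // plain data, resolved on the server
    memberSince: new Date("2023-01-01"),  // Dates are serialized and revived as Dates
    slowStats: loadSlowStats(),           // NOT awaited: the promise streams to the client
  };
}

export default function Stats() {
  const { user, memberSince, slowStats } = useLoaderData<typeof loader>();
  return (
    <main>
      <h1>Hi {user.name}</h1>
      <p>Member since {memberSince.toLocaleDateString()}</p>
      <Suspense fallback={<p>Crunching numbers…</p>}>
        <Await resolve={slowStats}>{(stats) => <p>{stats.events} events</p>}</Await>
      </Suspense>
    </main>
  );
}
```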

4. Evolving Full Stack Development

Short description:

Integrating frontend and backend tightly with new protocols and patterns like TurboStream and BFF. Exploring the evolving landscape of full stack web frameworks and considerations for future application development. Introduction to Model Context Protocol (MCP) for integrating large language models with third-party services and tools.

So with turbo-stream, which streams over HTTP, you can now pass down promises and await or defer them on the client, showing a Suspense fallback, some kind of loading spinner, while the data is still being fetched on the server, right? You can see how Remix, and also Next.js with React Server Components for that matter, take it one step further by building transports and protocols on top of React to tightly couple the frontend and backend, right? So the full stack frameworks are becoming even more tailored in that way. I mentioned React Server Components, right? React Server Components is its own protocol with its own payloads that it sends to the client, also not just regular JSON, but something that React on the client understands and knows how to handle for the different React Server Components use cases. And here I have to mention BFF. BFF obviously is a very old pattern that has nothing to do with React Server Components or Remix, but it's a pattern where you say one frontend should have one dedicated server that takes all its requests and handles them in a very specific manner for that frontend, and abstracts away all the other backends and downstream databases, microservices, and whatever craziness exists within your company. So the BFF pattern comes from the microservices landscape, but we can definitely say that Next.js and Remix are implementing the BFF pattern, right? If you're deploying a Next.js application, it is a full stack app that serves a client, and for that client, the Next.js backend is a backend specific to the Next.js frontend. That kind of makes it the frontend's best friend. Cool. So all of this is, I guess, table stakes. I mean, React Server Components is still very new and we're all trying to figure it out. And obviously Remix, aka React Router, and Next.js are still changing, right? Turbo-stream is a fairly new change in Remix; it was JSON before. But still, I think most people have accepted that the DX of full stack web frameworks is amazing, and it's a really great way to ship fast when you want to build for the web. But that's also just where things start, right? Even if you have a great full stack web framework already in production, say I have a new side project and I'm starting with Remix, a lot of the adventure still just starts. We still have to think about the requirements of our application, what we actually want to build: do we want offline support, multiplayer features? Should it feel native? There are a lot of expectations for applications in 2025, right? So our job isn't done. If anything, it's just getting started. And this is kind of the state I was in with my side project. This time I decided to go with Next.js. I generated the first prototype with v0, and v0 is just so good at building Next.js sites, and I wanted to try React Server Components. And then shadcn and Tailwind, right? So I have this really cool stack, and I'm iterating on my project. It's all just CRUD at the end of the day. Really happy, though. And then MCP comes around the corner, and I get confused. Am I still building the right thing? So what is MCP? MCP is the Model Context Protocol, and it enables large language models, LLMs, to integrate with third-party services and tools.
So if you have some kind of chat interface that is connected with a large language model, we call that an MCP client in the MCP protocol. So let's say Claude Desktop or Cursor can be our MCP client. And then we want to connect that chat interface with a lot of different third-party services. Those services can provide the LLM with context, but also provide tools that the LLM can execute. For instance, let's say we have a to-do MCP server. In the chat, we can say, I'm working on refactoring this thing, please write me a to-do. Or, hey Cursor, what are my to-dos for the day, right? You immediately see how integrating all these services makes the chat interface more powerful. At the end of the day, you want to connect it with all your tools. What I'm showcasing here are just some examples, right? Say there's a Slack MCP, an email MCP. What I mean by that is that every service you want to utilize should expose its own MCP server.
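A minimal sketch of such a to-do MCP server, assuming the official TypeScript SDK (`@modelcontextprotocol/sdk`) and an in-memory list instead of a real database (the SDK is evolving quickly, so double-check its current API):

```ts
// todo-mcp-server.ts - a hypothetical to-do MCP server exposed over stdio
import { McpServer } from "@modelcontextprotocol/sdk/server/mcp.js";
import { StdioServerTransport } from "@modelcontextprotocol/sdk/server/stdio.js";
import { z } from "zod";

const todos: { id: number; title: string }[] = []; // stand-in for a real database

const server = new McpServer({ name: "todos", version: "1.0.0" });

server.tool(
  "create_todo",
  "Create a to-do for the current user",
  { title: z.string() },
  async ({ title }) => {
    const todo = { id: todos.length + 1, title };
    todos.push(todo);
    return { content: [{ type: "text", text: `Created to-do #${todo.id}: ${todo.title}` }] };
  }
);

server.tool(
  "list_todos",
  "List the current to-dos",
  { limit: z.number().int().positive().optional() },
  async ({ limit }) => ({
    content: [
      {
        type: "text",
        text:
          todos.slice(0, limit ?? todos.length).map((t) => `- ${t.title}`).join("\n") ||
          "No to-dos yet.",
      },
    ],
  })
);

// Run locally; an MCP client (Claude Desktop, Cursor, ...) spawns this process
// and talks to it over standard I/O.
await server.connect(new StdioServerTransport());
```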

5. Integrating MCP Server in Tech Stack

Short description:

Addressing the integration of an MCP server into a tech stack for side projects while considering various communication and deployment options.

So for my side project, if I have some CRUD capabilities, I may want to also create and host an MCP server so that I can connect it to the MCP clients of my choice and utilize my side project that way. But it also means I have to deploy one more thing. The orange things here, by the way, are the transports. Your MCP server can communicate with applications in different ways. You can have your MCP server running locally, in which case it communicates via standard I/O. But you can also deploy your MCP server remotely, and then the transport layer is either server-sent events or streamable HTTP. And all of this is also very new and changing a ton.
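For the local case, here's a sketch of the client side of that stdio transport, again assuming the official TypeScript SDK (the server file name is the hypothetical one from the sketch above):

```ts
// mcp-client.ts - connecting to a local MCP server over standard I/O
import { Client } from "@modelcontextprotocol/sdk/client/index.js";
import { StdioClientTransport } from "@modelcontextprotocol/sdk/client/stdio.js";

// The client spawns the server as a child process and speaks MCP over stdin/stdout.
const transport = new StdioClientTransport({
  command: "node",
  args: ["todo-mcp-server.js"], // hypothetical compiled server from the sketch above
});

const client = new Client({ name: "demo-client", version: "1.0.0" });
await client.connect(transport);

console.log(await client.listTools());
console.log(await client.callTool({ name: "list_todos", arguments: {} }));
```

For a remotely deployed server, you'd swap this for the SDK's streamable HTTP (or the older SSE) client transport instead.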

If you're new to the MCP protocol, I can recommend checking out Kent C. Dodds' site. He's doing a lot of content about MCP right now. And I think MCP is here to stay, and we have to figure it out, so I can only recommend you do so. What is my problem, though? For my side project, I was really happy with my little tech stack, right? I said I was using Next.js, shadcn, and Tailwind. So I have this application running, I'm connecting it to my Neon database, and now I also want to have an MCP server. How does that fit into this architecture? I just told everybody to get rid of their general-purpose REST APIs, and it's really cool to have this tailored backend for your frontend. But with LLM applications, we kind of have a new type of frontend, right? Not a browser, but now some MCP clients. So we have to create and deploy an MCP server and then try to keep it as tightly coupled with the rest of our code base as possible, so we can still share code and make sure that we ship fast.

6. Optimizing MCP Server Development

Short description:

Avoiding complexity by considering the specificity required for MCP servers and the need for tailored tools in MCP server development.

What I want to avoid is having too many different environments. I don't want a dedicated REST API, then my Remix or Next.js app in front of that, and then an MCP server, with both consuming the REST API. I want to keep it simple, not add more complexity. The problem is also that MCP is stateful. So if you say, okay, let's actually do this, let's have a REST API and just expose it as an MCP server, the problem is that, again, MCP expects your server to be very specific. It has to be stateful, it has to support streamable HTTP or server-sent events, and the things you want to expose shouldn't be as atomic as CRUD endpoints. If you have a to-do MCP server, you can probably expose create-todo, list-todo, and delete-todo tools. But if your MCP server becomes more complicated, you really have to think about workflows and higher-level concepts.

Just to give an example, take the Neon MCP server; Neon is a serverless Postgres provider. Obviously, when you have a Postgres service running somewhere, you want to expose tools like queryMyDatabase or listMyDatabases. But some of the tools we're exposing are also much higher level. For instance, we have a prepare database migration tool that does several things in one tool call: it creates a new branch and returns instructions to the LLM about what to do with this branch and how we recommend the agent perform the actual database migration. And then we recommend that once the agent confirms everything is good on the branch, it calls the complete database migration tool. So these two tool calls have to be chained, right? There is some kind of process here. So it's stateful.
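To illustrate the shape of such chained, stateful workflow tools, here is a hypothetical sketch inspired by that idea (it is not Neon's actual implementation; the branch creation is faked and the state lives in memory):

```ts
// db-workflow-mcp.ts - hypothetical chained workflow tools instead of atomic CRUD
import { randomUUID } from "node:crypto";
import { McpServer } from "@modelcontextprotocol/sdk/server/mcp.js";
import { StdioServerTransport } from "@modelcontextprotocol/sdk/server/stdio.js";
import { z } from "zod";

const server = new McpServer({ name: "db-workflows", version: "1.0.0" });

// State shared between the two tool calls - this is what makes the server stateful.
const pendingMigrations = new Map<string, { branchId: string; sql: string }>();

server.tool(
  "prepare_database_migration",
  "Apply a migration on a temporary branch and tell the agent how to proceed",
  { sql: z.string() },
  async ({ sql }) => {
    const branchId = `branch-${Date.now()}`; // pretend we created a database branch here
    const migrationId = randomUUID();
    pendingMigrations.set(migrationId, { branchId, sql });
    return {
      content: [
        {
          type: "text",
          text:
            `Applied the migration on ${branchId}. Verify the schema there, then call ` +
            `complete_database_migration with migrationId=${migrationId}.`,
        },
      ],
    };
  }
);

server.tool(
  "complete_database_migration",
  "Promote a previously prepared migration to the main branch",
  { migrationId: z.string() },
  async ({ migrationId }) => {
    const pending = pendingMigrations.get(migrationId);
    if (!pending) {
      return {
        content: [{ type: "text", text: "Unknown migrationId. Call prepare_database_migration first." }],
      };
    }
    pendingMigrations.delete(migrationId);
    return { content: [{ type: "text", text: `Merged ${pending.branchId} into main.` }] };
  }
);

await server.connect(new StdioServerTransport());
```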

So what I'm trying to communicate here is that if you're building an MCP server, you have to think deeply about the consuming end, which is a large language model. You have to provide tools that aren't generic REST APIs again, but something that's custom tailored toward that LLM, what that LLM expects and can handle and perform well with. And that's very similar to what we just talked about with Next.js and Remix, right? Turbo-stream and React Server Components are specific protocols and transports for our front-end experience. And very similarly here, we have the MCP protocol and very custom tools and use cases specifically tailored toward this new type of front-end.

7. Innovative MCP Server Integration

Short description:

Vercel and Cloudflare offer solutions for creating MCP servers within Next.js. The MCP server adapter seamlessly integrates into existing projects, enabling a single Next.js app to serve multiple front-ends without the need for a separate MCP server deployment. This approach simplifies code sharing and transforms Next.js into a versatile backend for various front-end experiences.

And luckily, I'm not the only one thinking about this. Just three days ago, Vercel released an adapter for creating MCP servers as part of an existing Next.js application (I recorded this three days later, and this screenshot is from today, so the adapter is actually only three days old now). And obviously, Vercel is not the only one; Cloudflare also has a cool way of deploying MCP servers already. So there are different solutions coming our way. But what I really like about this MCP server adapter by Vercel is that it just fits into my existing side project architecture.

So I can now turn my Next.js application into a system that serves two different front-ends, right? Using that adapter, I have a new route, /api/mcp, that I can point LLM agents towards. That route (or set of routes) now handles MCP requests, while all the other traditional routes of my Next.js application serve the Next.js front-end. What's obviously really cool about this approach is that I don't have a dedicated MCP server that I have to deploy or publish somewhere else. It's still one Next.js app that I can deploy to one place.
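A sketch of what that route can look like with the Vercel adapter; since the package is only days old at this point, treat the exact package name (`@vercel/mcp-adapter`), API, and file location as assumptions and check the current docs:

```ts
// app/api/[transport]/route.ts - the same Next.js app now also answers MCP requests
import { createMcpHandler } from "@vercel/mcp-adapter";
import { z } from "zod";

const handler = createMcpHandler((server) => {
  server.tool(
    "list_todos",
    "List the current user's to-dos",
    { limit: z.number().int().positive().default(10) },
    async ({ limit }) => {
      // In the real app this would reuse the exact same data layer
      // that the loaders / server actions for the React frontend use.
      const todos = [{ id: 1, title: "Ship the MCP route" }].slice(0, limit);
      return { content: [{ type: "text", text: todos.map((t) => `- ${t.title}`).join("\n") }] };
    }
  );
});

// MCP clients talk to /api/mcp (streamable HTTP), while every other route
// keeps serving the regular Next.js frontend.
export { handler as GET, handler as POST, handler as DELETE };
```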

And since it's one project, I don't even have to bother with monorepo setups. Everything is inside one project, so code sharing is super easy. And then the funny thing here, and you tell me if you like this framing: MCP becomes kind of another backend for a specific front-end, right? So if you think about Next.js being a backend for our React app, it is now also a backend for our different large language model applications. It becomes kind of a multi-backend system. So anyway, this is how I think about it and what I ended up with. I think it's just very interesting how this is all developing right now. Again, this may all change tomorrow, right?
