In this hands-on workshop, you’ll learn how to integrate AI directly into your application using TanStack AI. We’ll build a working AI chat feature end-to-end, starting from server setup and finishing with a streaming client, tool calling, human-in-the-loop workflows, and real debugging using TanStack DevTools.
Through guided exercises, you’ll learn how to set up TanStack AI on the server, connect a client application to a streaming AI backend, build a functional chat interface, create your first AI tools, and implement approval flows so humans can stay in control when tools are invoked.
By the end of the session, you’ll understand the core building blocks of AI-powered applications and walk away with a solid foundation for adding intelligent chat and tool capabilities to your own apps.
Workshop outcomes
When you're finished with this workshop, you will:
- Understand how to set up TanStack AI on the server and wire it to a provider (like OpenAI)
- Know how to connect a client app to a streaming AI endpoint and handle incremental updates cleanly
- Build a functional chat UI with a solid state model for messages, streaming tokens, and tool results
- Create and use your first AI tools with clear inputs, outputs, and predictable behavior
- Build human-in-the-loop workflows with tool approvals so your app can ask for confirmation before executing sensitive actions
- Debug and inspect your AI app using TanStack DevTools, including tool calls, responses, and streaming behavior
What you'll learn
TanStack AI gives you the primitives to build real AI features, not just a demo prompt box. This workshop focuses on wiring everything together properly, from server-side streaming to client UX, then layering in tools, approvals, and debugging so the final result is something you can confidently evolve into production features.
You’ll learn the following through these exercises:
- Server setup - Configure TanStack AI on the server, connect to your model provider, and expose endpoints that support streaming chat
- Streaming and SSE - Implement and consume streaming responses over Server-Sent Events (SSE), understand the lifecycle of a stream, and build UI that stays responsive while tokens arrive
- Chat UI and state - Build a chat interface that handles message history, partial responses, loading states, and tool outputs in a clean way
- Tools - Define tools, validate inputs, return structured outputs, and integrate tool results back into the conversation flow
- Human in the loop - Add approval steps for tool execution, implement “approve/deny” flows, and keep users in control when actions matter
- Debugging with TanStack DevTools - Inspect requests, responses, tool calls, timing, and streaming behavior so you can troubleshoot fast and iterate safely
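To make the streaming topic concrete: SSE delivers the model's tokens as a text stream of `data:` lines separated by blank lines. The parser below is a minimal sketch of that wire format, not TanStack AI's own client (which also handles `event:`, `id:`, and `retry:` fields and partial chunks); it just shows the token-by-token shape you'll be consuming.

```typescript
// Minimal parser for the Server-Sent Events wire format: each event is a
// block of "data: ..." lines terminated by a blank line. This is a sketch
// of the format only; a production client must also buffer chunks that
// split mid-event.
function parseSseChunk(chunk: string): string[] {
  const events: string[] = [];
  for (const block of chunk.split("\n\n")) {
    const dataLines = block
      .split("\n")
      .filter((line) => line.startsWith("data:"))
      .map((line) => line.slice(5).trimStart());
    // Multi-line data fields are rejoined with newlines, per the SSE spec.
    if (dataLines.length > 0) events.push(dataLines.join("\n"));
  }
  return events;
}
```

During the workshop you'll see these events arrive incrementally and feed them into your chat state so the UI updates token by token.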
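A tool, at its core, is a named function with a validated input schema and a structured result the model can read back into the conversation. The shape below is a generic sketch of that idea, not the TanStack AI tool API; the `getWeather` tool and its canned response are hypothetical, and the workshop uses the library's own definitions.

```typescript
// Generic tool shape: validate the model-supplied arguments before
// executing, and always return a structured, serializable result.
type ToolResult = { ok: true; value: unknown } | { ok: false; error: string };

interface Tool<Input> {
  name: string;
  description: string;
  parse(raw: unknown): Input | null; // validate arguments from the model
  execute(input: Input): Promise<ToolResult>;
}

// Hypothetical example tool; a real one would call a weather API.
const getWeather: Tool<{ city: string }> = {
  name: "get_weather",
  description: "Look up current weather for a city",
  parse(raw) {
    if (typeof raw === "object" && raw !== null && typeof (raw as any).city === "string") {
      return { city: (raw as any).city };
    }
    return null; // invalid arguments from the model
  },
  async execute({ city }) {
    return { ok: true, value: { city, tempC: 18, sky: "cloudy" } };
  },
};

// Run a tool call end-to-end: invalid arguments become a structured error
// the model can recover from, rather than an exception.
async function runTool<I>(tool: Tool<I>, rawArgs: unknown): Promise<ToolResult> {
  const input = tool.parse(rawArgs);
  if (input === null) return { ok: false, error: `invalid arguments for ${tool.name}` };
  return tool.execute(input);
}
```

The key design point carries over regardless of library: models can emit malformed arguments, so validation and structured errors belong in the tool layer, not in your UI.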
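The human-in-the-loop pattern can also be sketched in a few lines: when the model requests a sensitive tool, the call is parked as a pending request, and the chat loop only proceeds once the UI resolves it with an approve or deny decision. This is a generic sketch of the pattern, not TanStack AI's own approval API.

```typescript
// A minimal approval gate: each pending tool call is keyed by its call id,
// and the UI later settles it with a decision.
type Decision = "approved" | "denied";

class ApprovalGate {
  private pending = new Map<string, (d: Decision) => void>();

  // Called from the chat loop when the model requests a sensitive tool.
  // The returned promise resolves only after a human decides.
  request(callId: string): Promise<Decision> {
    return new Promise((resolve) => this.pending.set(callId, resolve));
  }

  // Called from the UI when the user clicks approve or deny.
  decide(callId: string, decision: Decision): void {
    const resolve = this.pending.get(callId);
    if (resolve) {
      this.pending.delete(callId);
      resolve(decision);
    }
  }
}
```

In the workshop you'll wire this kind of gate into the chat flow so the tool executes only on "approved" and the model receives a refusal message on "denied".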
Prerequisites
This workshop assumes you can build and run a React + TypeScript app locally and you are comfortable working with a basic server setup.
- Basic understanding of SSE and streaming is required (we’ll use streaming heavily throughout the workshop)
- Experience with React is required (components, state, props, rendering lists)
- Basic TypeScript knowledge is required (we’ll rely on types for tools and structured outputs)
- You will need an OpenAI API key with available credits to use during the workshop exercises
This workshop was presented at React Summit 2026; check out the latest edition of this React conference.