Local, on-device LLMs unlock private-by-default, low-latency AI experiences that work offline, making them ideal for mobile. In this talk, I’ll show how to run LLMs directly inside React Native apps using an AI SDK that provides a clean abstraction layer for building AI applications.
Join us as we explore the creation of react-native-ai, a library that enables local LLM execution. We’ll dive into the provider architecture and demonstrate how we integrated it with the MLC LLM Engine and Apple’s Foundation Models on mobile devices.
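To give a flavor of what the provider abstraction enables, here is a minimal sketch of calling a local model through the AI SDK’s streaming API. The `mlc` import and the model identifier are assumptions for illustration; the actual export names in react-native-ai may differ:

```ts
import { streamText } from 'ai';
// Hypothetical import: the exact provider export from react-native-ai may differ.
import { mlc } from 'react-native-ai';

async function askLocalModel(prompt: string): Promise<string> {
  // The model runs fully on-device via the MLC LLM Engine, so no
  // request leaves the phone and tokens stream back with low latency.
  const { textStream } = streamText({
    model: mlc('Llama-3.2-3B-Instruct'), // model id is an assumption
    prompt,
  });

  // Collect the streamed tokens into a single response string.
  let answer = '';
  for await (const chunk of textStream) {
    answer += chunk;
  }
  return answer;
}
```

Because the local provider implements the same interface as any hosted provider, the surrounding application code stays identical whether inference happens on-device or in the cloud.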
This talk was presented at React Advanced 2025. Check out the latest edition of this React conference.