This workshop introduces running open-source LLMs on-premise for better data control, privacy, and cost efficiency. It covers Ollama for simplified LLM management (downloading and running models fully offline) and FastAPI for serving them locally through a fast, easy-to-build API. Together, they offer stronger data security, simpler regulatory compliance, and more customization than cloud-based solutions.
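As a minimal sketch of this pattern, the snippet below wraps a locally running Ollama instance in a FastAPI endpoint. It assumes Ollama is listening on its default port 11434 and that a model such as `llama3` has already been pulled (`ollama pull llama3`); the `/generate` route name and the `Prompt` model are illustrative, not part of the workshop material.

```python
import httpx
from fastapi import FastAPI, HTTPException
from pydantic import BaseModel

app = FastAPI()

# Ollama's default local HTTP endpoint for one-shot generation.
OLLAMA_URL = "http://localhost:11434/api/generate"

class Prompt(BaseModel):
    model: str = "llama3"  # any model already pulled locally
    prompt: str

@app.post("/generate")
async def generate(req: Prompt):
    # Forward the prompt to the local Ollama server; no data leaves the machine.
    async with httpx.AsyncClient(timeout=120.0) as client:
        resp = await client.post(
            OLLAMA_URL,
            json={"model": req.model, "prompt": req.prompt, "stream": False},
        )
    if resp.status_code != 200:
        raise HTTPException(status_code=resp.status_code, detail=resp.text)
    # With stream=False, Ollama returns the full completion under "response".
    return {"response": resp.json()["response"]}
```

Run it with `uvicorn main:app` and query it with a plain POST, e.g. `curl -X POST http://localhost:8000/generate -H "Content-Type: application/json" -d '{"prompt": "Hello"}'`. Everything stays on the host: FastAPI handles the API surface, Ollama handles the model.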
This workshop was presented at TechLead Conf London 2025: Adopting AI in Orgs Edition. Check out the latest edition of the conference.