In generative AI, the largest large language models (LLMs) often dominate the headlines, hailed as the best solutions for the most complex and diverse tasks. While they certainly have their place, are they the best option for every enterprise use case?
Smaller language models are gaining traction for their ability to deliver high performance with lower cost and resource requirements. These models are quicker, easier to fine-tune, and better suited for targeted business needs, making them an attractive alternative for many organizations.
In this session, we will:
- Explore the technical structure and content of LLMs.
- Discuss how smaller, purpose-built models can be more efficient and effective for enterprise tasks, including how model optimization techniques can boost performance even further.
- Demonstrate how smaller LLMs can provide faster, more cost-effective solutions while still meeting the demands of specialized use cases.
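To make the "model optimization" point above concrete, here is a minimal, framework-free sketch of one common technique: symmetric post-training int8 weight quantization. The function names and the toy weight values are illustrative assumptions, not code from the talk; a production pipeline would rely on a runtime's built-in quantizer rather than hand-rolled code like this.

```python
# Illustrative sketch: symmetric per-tensor int8 quantization.
# Storing weights as int8 plus one float scale cuts memory roughly 4x
# versus float32, which is one way smaller deployments save resources.

def quantize_int8(weights):
    """Map float weights to int8 values plus a per-tensor scale."""
    max_abs = max(abs(w) for w in weights)
    scale = max_abs / 127 if max_abs else 1.0  # 127 = int8 positive range
    q = [round(w / scale) for w in weights]
    return q, scale

def dequantize_int8(q, scale):
    """Recover approximate float weights from int8 values."""
    return [v * scale for v in q]

# Toy example (hypothetical weights, chosen only for illustration)
weights = [0.42, -1.27, 0.05, 0.9]
q, scale = quantize_int8(weights)
approx = dequantize_int8(q, scale)
```

The trade-off is a small rounding error per weight (bounded by half the scale), which in practice costs little accuracy while shrinking the model's memory and bandwidth footprint.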
This talk was presented at AI Coding Summit 2026; check out the latest edition of this tech conference.