Learnings from deploying LLMs to production
SaaS Engineering talk by Asha Vishwanathan from Verloop
Imagine you're a software engineer who's just stumbled upon a Swiss Army knife, specifically designed for language. This is the world of Large Language Models (LLMs). Just as a versatile tool can transform a block of wood into a masterpiece, LLMs can morph simple data inputs into a spectrum of linguistic outputs, from writing code to composing poetry.
The above paragraph was written by an LLM.
You’ve probably already used ChatGPT, Bing Chat, or Copilot to help you write code snippets, documentation, and more. If you’re now thinking of incorporating LLMs into your product features, you should definitely watch this talk by Asha.
In this talk, Asha shares her experience incorporating LLMs into product features: tackling latency issues, hallucinations, token limits, open-source vs. closed-source models, reliability and rate limits, and testing, among other things. She also covers how to run quick POCs, how to structure prompts, and RAG vs. fine-tuning.
Go on, give it a watch and please leave comments!