A free course teaching how to design, train, and deploy a production-ready real-time financial advisor LLM system using RAG and LLMOps.
Hands-on LLMs is a free, open-source educational course that teaches developers and ML engineers how to build a production-ready real-time financial advisor using Large Language Models. It provides a complete, practical project covering dataset generation, model fine-tuning with QLoRA, building real-time RAG pipelines, and deployment using modern LLMOps tools.
Machine learning engineers, data scientists, and developers who want to gain practical, end-to-end experience in building and deploying real-world LLM applications with RAG and LLMOps.
It offers a unique, hands-on learning experience with a fully implemented, complex project (a financial advisor) instead of isolated tutorials. The course integrates a wide array of production tools and architectures, providing a realistic view of building scalable LLM systems.
🦖 Learn about LLMs, LLMOps, and vector DBs for free by designing, training, and deploying a real-time financial advisor LLM system ~ source code + video & reading materials
Teaches a scalable three-pipeline design (training, streaming, inference) that mirrors real-world LLM systems, as detailed in the building blocks section, providing a blueprint for enterprise applications.
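The three-pipeline split can be sketched in plain Python. This is an illustrative outline only — the function names, the in-memory "vector store", and the stub embeddings are assumptions, not the course's actual modules:

```python
# Illustrative sketch of the three-pipeline (streaming / training / inference)
# architecture; all names and the list-backed "vector store" are assumptions.

def streaming_pipeline(news_items, vector_store):
    """Ingest raw financial news and write embedded documents to the store."""
    for item in news_items:
        vector_store.append({"text": item, "embedding": [float(len(item))]})

def training_pipeline(qa_pairs):
    """Fine-tune a model on Q&A pairs; here it just returns a stub 'model'."""
    return {"adapter": "qlora-stub", "num_examples": len(qa_pairs)}

def inference_pipeline(question, model, vector_store):
    """RAG inference: retrieve context, then 'generate' an answer."""
    context = [doc["text"] for doc in vector_store]
    return f"[{model['adapter']}] {question} | context: {len(context)} docs"

store = []
streaming_pipeline(["Fed holds rates", "Tech stocks rally"], store)
model = training_pipeline([("Q", "A")])
answer = inference_pipeline("Should I buy bonds?", model, store)
print(answer)
```

The point of the split is that each pipeline can be deployed, scaled, and scheduled independently, communicating only through shared stores (feature/vector store, model registry).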
Offers a complete project spanning dataset generation with GPT-3.5 through deployment behind a Gradio UI, including all source code and step-by-step guides, emphasizing practical implementation over theory.
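The dataset-generation step amounts to few-shot prompting of a teacher model. A minimal sketch is below — the real course calls GPT-3.5, which is stubbed out here so the example runs offline, and all names and strings are illustrative:

```python
# Sketch of few-shot Q&A dataset generation; `call_llm` stands in for an
# actual GPT-3.5 API call. Examples and prompt wording are made up.

FEW_SHOT = [
    {"question": "Is it a good time to buy index funds?",
     "answer": "Broad index funds suit long horizons; timing matters less."},
]

def build_prompt(topic, examples):
    lines = ["Generate a financial Q&A pair in the style of the examples."]
    for ex in examples:
        lines.append(f"Q: {ex['question']}\nA: {ex['answer']}")
    lines.append(f"Topic: {topic}\nQ:")
    return "\n\n".join(lines)

def call_llm(prompt):
    # Placeholder for the real chat-completion request.
    return "What are bond ladders? | They stagger maturities to manage rate risk."

dataset = []
for topic in ["bonds", "ETFs"]:
    q, a = call_llm(build_prompt(topic, FEW_SHOT)).split(" | ")
    dataset.append({"question": q, "answer": a})
print(len(dataset))
```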
Integrates production-ready tools like Comet ML for LLMOps, Qdrant for vector search, and Beam for serverless GPU deployment, giving learners experience with current industry standards.
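At its core, the vector search a database like Qdrant provides is nearest-neighbour ranking by cosine similarity. A stdlib sketch of that operation (the course itself uses the qdrant-client API, not this code):

```python
import math

# Minimal cosine-similarity search illustrating what a vector DB such as
# Qdrant does; vectors and payloads here are toy values.

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

collection = [
    {"id": 1, "vector": [1.0, 0.0], "payload": "rate hike news"},
    {"id": 2, "vector": [0.0, 1.0], "payload": "earnings report"},
]

def search(query_vector, top_k=1):
    ranked = sorted(collection,
                    key=lambda p: cosine(query_vector, p["vector"]),
                    reverse=True)
    return ranked[:top_k]

hits = search([0.9, 0.1])
print(hits[0]["payload"])  # → rate hike news
```

A production vector DB adds approximate-nearest-neighbour indexing, filtering, and persistence on top of this basic ranking.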
Demonstrates building a streaming pipeline with Bytewax to ingest financial news for up-to-date knowledge retrieval, a key feature for dynamic applications like the financial advisor.
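Conceptually, the streaming pipeline chains ingest → clean → chunk → embed steps over a continuous feed. The generator sketch below shows those stages in plain Python; the course implements them as a Bytewax dataflow, and the stub embedding here is an assumption:

```python
# Conceptual sketch of the streaming stages (ingest -> clean -> chunk -> embed);
# the course expresses these as Bytewax dataflow steps over a live news feed.

def ingest():
    yield from ["Fed raises rates by 25bp.  ",
                "  Oil prices climb on supply fears."]

def clean(stream):
    for doc in stream:
        yield doc.strip()

def chunk(stream, size=4):
    for doc in stream:
        words = doc.split()
        for i in range(0, len(words), size):
            yield " ".join(words[i:i + size])

def embed(stream):
    for text in stream:
        # Stub embedding; the real pipeline uses a sentence-transformer model.
        yield {"text": text, "embedding": [float(len(text))]}

records = list(embed(chunk(clean(ingest()))))
print(len(records))  # → 4
```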
Covers QLoRA for parameter-efficient fine-tuning of open-source LLMs like Falcon 7B on custom datasets, teaching cost-effective model adaptation methods.
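The efficiency of LoRA-style adaptation comes from the low-rank update W' = W + (α/r)·B·A: only the small B and A matrices are trained while W stays frozen (QLoRA additionally 4-bit-quantizes W). A tiny pure-Python illustration with made-up shapes and values:

```python
# Toy illustration of LoRA's update W' = W + (alpha / r) * B @ A.
# Shapes and numbers are invented; real fine-tuning uses the peft library.

d, r, alpha = 4, 1, 2  # full dim, LoRA rank, scaling factor

W = [[1.0 if i == j else 0.0 for j in range(d)] for i in range(d)]  # frozen
B = [[0.5] for _ in range(d)]   # d x r, trainable
A = [[0.1, 0.2, 0.3, 0.4]]      # r x d, trainable

scale = alpha / r
W_adapted = [
    [W[i][j] + scale * sum(B[i][k] * A[k][j] for k in range(r))
     for j in range(d)]
    for i in range(d)
]

full_params = d * d          # parameters if W itself were trained
lora_params = d * r + r * d  # trainable parameters under LoRA
print(full_params, lora_params)
```

With realistic dimensions (d in the thousands, r around 8–64), the trainable parameter count drops by orders of magnitude, which is what makes fine-tuning a 7B model on a single GPU feasible.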
The course has been archived and replaced with a remastered version, so some tools or code may be obsolete, and learners might encounter compatibility issues with newer libraries.
Requires setting up and managing accounts for Alpaca, Qdrant, Comet ML, Beam, and AWS, which is time-consuming and can be a barrier for those unfamiliar with cloud services.
Relies on external freemium services that may incur costs beyond free tiers (e.g., Beam's GPU hours), and changes in these services could break the course workflow.
Assumes prior knowledge of ML, Python, and cloud deployment, and demands either capable local hardware or cloud GPUs, making it less accessible to beginners without guidance.