🚀 Welcome to the “Advanced Generative AI Workflows” Course!
In this hands-on course, we’ll go beyond just using LLMs — you’ll learn to build powerful GenAI applications using cutting-edge techniques like Retrieval-Augmented Generation (RAG) and Multi-Agent Systems, and deploy them to production environments like AWS.
🧠 But we’re not just building — we’re also understanding.
In the first week, we’ll dive deep into the internal workings of conversational AI systems. You’ll explore the high-level concepts behind transformers, encoders, decoders, pretraining, supervised fine-tuning (SFT), and reinforcement learning — so you not only build better applications, but also understand how to improve them.
📐 While we won’t go deep into the math (that’s covered in the LLM Internals course), you’ll come out of this course confident enough to explain how these systems work, and to evaluate and improve GenAI apps using the right metrics and testing strategies.
🛠️ What You’ll Learn:
- How to build and deploy GenAI apps using RAG and Multi-Agent Systems
- High-level understanding of conversational AI internals
- How to design, test, and evaluate GenAI applications
- How to deploy applications on AWS
- Best practices for making LLM-powered systems robust and scalable
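To give a flavor of the first item, here is a minimal sketch of the RAG idea: retrieve the documents most relevant to a question, then augment the prompt with them before calling an LLM. The toy corpus, the word-overlap scoring (a stand-in for a real vector store), and the prompt format are all illustrative assumptions, not the course's actual implementation.

```python
def retrieve(query, corpus, k=2):
    """Rank documents by word overlap with the query.

    Illustrative stand-in for an embedding-based vector store.
    """
    q_words = set(query.lower().split())
    scored = sorted(
        corpus,
        key=lambda doc: len(q_words & set(doc.lower().split())),
        reverse=True,
    )
    return scored[:k]


def build_prompt(query, docs):
    """Augment the user's question with retrieved context for the LLM."""
    context = "\n".join(f"- {d}" for d in docs)
    return f"Answer using the context below.\nContext:\n{context}\nQuestion: {query}"


# Toy corpus (hypothetical content, for illustration only)
corpus = [
    "RAG augments an LLM prompt with retrieved documents.",
    "Multi-agent systems coordinate several LLM-driven agents.",
    "AWS offers managed services for deploying applications.",
]

question = "How does RAG use retrieved documents?"
docs = retrieve(question, corpus)
prompt = build_prompt(question, docs)
```

In a production system the overlap scorer would be replaced by embeddings and a vector database, and `prompt` would be sent to an LLM API; the control flow stays the same.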
🎯 Who is this course for?
Developers, ML practitioners, and AI enthusiasts who want to build and deploy real-world GenAI applications — while understanding how they work under the hood.
🔍 Prerequisites:
To get the most out of this course, you should be comfortable with:
- Basic Python programming
- Working with lists, tuples, and dictionaries
- Using functions and classes
- NumPy and Pandas
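As a rough self-check of that baseline, you should be able to read and write something like the following without difficulty. The `WordCounter` class and the data are hypothetical, chosen only to touch each prerequisite: dictionaries, functions, classes, NumPy, and Pandas.

```python
import numpy as np
import pandas as pd


class WordCounter:
    """Tiny example class: tracks word counts in a dictionary."""

    def __init__(self):
        self.counts = {}

    def add(self, word):
        self.counts[word] = self.counts.get(word, 0) + 1


counter = WordCounter()
for word in ["rag", "agent", "rag"]:
    counter.add(word)

# Basic NumPy: array creation and aggregation
arr = np.array([1, 2, 3])
total = arr.sum()

# Basic Pandas: build a small DataFrame from the counts
df = pd.DataFrame({"word": list(counter.counts), "count": list(counter.counts.values())})
```

If each line here feels familiar, you have the background this course assumes.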