About This Course
This course takes you beyond basic chatbot development and dives deep into building powerful AI agents capable of reasoning, decision-making, and executing complex, multi-step tasks using modern tools and frameworks. You will learn how to design intelligent systems that interact with data, APIs, and users seamlessly. Whether you're a developer, AI enthusiast, or entrepreneur, this series equips...
What you'll learn
- Deploy agents as scalable APIs and containerized services on cloud platforms (Hugging Face, Replit, Docker, Google Cloud Run).
- Evaluate, secure, and optimise agent workflows using guardrails, eval suites, and observability tools.
- Gain hands-on experience with MCP (Model Context Protocol) for tool standardisation and observability.
- Design and deploy autonomous and multi-agent systems (CrewAI, Smol Developer, LangGraph).
- Compare and implement different agent architectures (ReAct, Plan-and-Execute, multi-tool vs. single-tool).
- Apply memory and retrieval techniques (RAG, vector DBs, embeddings) to ground answers in documents.
- Understand LLM APIs and build structured pipelines (chains) for conversational and task-oriented applications.
Course Curriculum
- Session 0: Introduction to the Course (10:07)
This module consists of one theory lecture and one hands-on session.
- Lecture 1: LangChain and LLM APIs Fundamentals (15:17)
- Focus: Build your first LLM chain with minimal setup.
- Key Concepts: API key management, Groq, ChatOpenAI, PromptTemplate, LLMChain, ConversationBufferMemory.
- Hands-On: Simple FAQ chatbot that remembers context.
- Case Study: Product-support FAQ assistant.
Hands-on Module 1 (21:46)
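The chain pattern from Lecture 1 can be sketched without any framework: a prompt template string, a stub in place of the real ChatOpenAI/Groq call, and a plain list standing in for ConversationBufferMemory (an illustrative sketch of the pattern, not LangChain's actual API):

```python
# Minimal sketch of PromptTemplate + LLMChain + ConversationBufferMemory,
# with a stub in place of a real LLM call (no API key needed).

PROMPT = "You are a support bot.\n{history}\nUser: {question}\nBot:"

def stub_llm(prompt: str) -> str:
    # Placeholder for a real ChatOpenAI / Groq completion call.
    return "Echo: " + prompt.rsplit("User: ", 1)[-1].removesuffix("\nBot:")

class BufferMemoryChain:
    def __init__(self, llm):
        self.llm = llm
        self.history: list[str] = []          # the "buffer" memory

    def run(self, question: str) -> str:
        prompt = PROMPT.format(history="\n".join(self.history), question=question)
        answer = self.llm(prompt)
        # Remember the turn so the next prompt carries the context.
        self.history += [f"User: {question}", f"Bot: {answer}"]
        return answer

chain = BufferMemoryChain(stub_llm)
print(chain.run("How do I reset my password?"))  # Echo: How do I reset my password?
```

Because the memory is replayed into every prompt, a second `run()` call sees the first question in its context, which is exactly the behaviour the hands-on FAQ bot relies on.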
- Session 2: Memory and Retrieval Basics in LangChain
- Focus: Add long-term context and ground answers with documents.
- Key Concepts: SummaryMemory, VectorStore RetrieverMemory, embeddings + FAISS, RAG workflow fundamentals.
- Hands-On: Extend the FAQ bot to a mini-RAG over docs.
- Case Study: Internal knowledge-base assistant for an SME.
Hands-on Module 2 (28:54)
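The retrieval half of the RAG workflow can be sketched in plain Python; here bag-of-words cosine similarity stands in for real embeddings and FAISS, and the documents and queries are made up for illustration:

```python
# Minimal sketch of the retrieval half of RAG: score documents against a query
# and stuff the best match into the prompt. A real setup would use embeddings
# + FAISS; a bag-of-words overlap stands in for vector similarity here.
from collections import Counter
import math

DOCS = {
    "refunds": "Refunds are processed within 5 business days of approval.",
    "shipping": "Standard shipping takes 3 to 7 business days worldwide.",
}

def vectorize(text: str) -> Counter:
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def retrieve(query: str) -> str:
    # Pick the document key whose "vector" is closest to the query's.
    return max(DOCS, key=lambda k: cosine(vectorize(query), vectorize(DOCS[k])))

def build_prompt(query: str) -> str:
    context = DOCS[retrieve(query)]
    return f"Answer using only this context:\n{context}\nQuestion: {query}"

print(retrieve("how long do refunds take"))  # refunds
```

Swapping `vectorize` for an embedding model and `DOCS` for a FAISS index gives the real mini-RAG built in this session's hands-on.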
- Session 3: Agent Architectures and Reasoning Engines (19:46)
- Focus: Compare the core reasoning engines and architectures for building agents.
- Key Concepts: Agent types (ReAct, Plan-and-Execute, Self-Ask), single-tool vs. multi-tool agents, trade-offs in complexity and reliability.
- Hands-On: Implement a simple task with both a ReAct and a Plan-and-Execute agent to compare their outputs and traces.
- Case Study: Selecting the right agent architecture for different business needs (e.g., a simple data-lookup agent vs. a complex research assistant).
Hands-on Module 3 (12:51)
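The architectural contrast from this session can be sketched with stub agents: ReAct interleaves thought, action, and observation step by step, while Plan-and-Execute writes the full plan up front and then runs it. The "LLM" steps below are canned; only the shape of the control flow is the point:

```python
# Minimal sketch contrasting the two agent control flows with a single stub tool.

TOOLS = {"lookup_price": lambda item: {"widget": 9.99}.get(item, 0.0)}

def react_agent(goal: str) -> list[str]:
    # ReAct: thought -> action -> observation, decided one step at a time.
    trace = [f"Thought: I need the price for the goal '{goal}'"]
    obs = TOOLS["lookup_price"]("widget")
    trace.append(f"Action: lookup_price(widget) -> {obs}")
    trace.append(f"Answer: {obs}")
    return trace

def plan_and_execute_agent(goal: str) -> list[str]:
    # Plan-and-Execute: write the whole plan first, then run each step.
    plan = ["lookup_price(widget)", "report answer"]
    trace = [f"Plan: {plan}"]
    obs = TOOLS["lookup_price"]("widget")
    trace.append(f"Executed step 1 -> {obs}")
    trace.append(f"Answer: {obs}")
    return trace
```

Comparing the two traces makes the trade-off concrete: ReAct can adapt after each observation, while Plan-and-Execute commits to its plan up front but is easier to audit.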
- Session 4: Thinking Like an Agent (11:49)
- Focus: Foundations of agent reasoning & tool use.
- Key Concepts: ReAct, Plan-and-Execute, function-calling, tool-design patterns.
- Hands-On: ReAct agent that plans a weekend trip (flight API stub, weather, maps).
- Case Study: Kayak-style concierge.
Hands-On Session 4 (14:04)
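A single ReAct-style step with stubbed trip-planning tools might look like the sketch below; the tool names, canned responses, and keyword-based "reasoning" are all illustrative stand-ins for an LLM function-calling step:

```python
# Minimal sketch of ReAct-style tool use with stubbed trip-planning tools
# (flight/weather APIs replaced by canned data).

TOOLS = {
    "flights": lambda city: f"Cheapest flight to {city}: $120",
    "weather": lambda city: f"Forecast for {city}: sunny, 24C",
}

def decide_tool(question: str) -> str:
    # Stub "reasoning" step: a real agent would ask the LLM which tool to call.
    return "weather" if "weather" in question.lower() else "flights"

def react_step(question: str, city: str) -> dict:
    tool = decide_tool(question)
    observation = TOOLS[tool](city)
    return {"thought": f"use {tool}", "action": tool, "observation": observation}

step = react_step("What's the weather like?", "Lisbon")
print(step["observation"])  # Forecast for Lisbon: sunny, 24C
```

The hands-on trip planner chains several such steps, feeding each observation back into the next round of reasoning.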
- Session 5: Model Context Protocol (MCP) Fundamentals (17:51)
- Focus: Standardised tool & memory interfaces with MCP.
- Key Concepts: MCP server anatomy (stdio, HTTP-SSE), list_tools(), wrapping MCP tools in LangChain.
- Hands-On: Spin up mcp-sandbox, expose calculator & weather tools, and call them from an LLM chain.
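The shape of an MCP tool catalogue can be sketched as follows; real MCP speaks JSON-RPC over stdio or HTTP-SSE, so this models only the list_tools()/call-tool surface, with a made-up calculator tool:

```python
# Minimal sketch of what an MCP-style server exposes: a list_tools() call that
# returns name/description/input-schema records, and a call_tool() dispatcher.
import json

TOOLS = {
    "calculator": {
        "description": "Add two numbers.",
        "input_schema": {"type": "object", "properties": {"a": {"type": "number"}, "b": {"type": "number"}}},
        "fn": lambda args: args["a"] + args["b"],
    },
}

def list_tools() -> str:
    # Return the tool catalogue as JSON, omitting the callables themselves.
    return json.dumps([{"name": n, **{k: v for k, v in t.items() if k != "fn"}}
                       for n, t in TOOLS.items()])

def call_tool(name: str, args: dict):
    return TOOLS[name]["fn"](args)

print(call_tool("calculator", {"a": 2, "b": 3}))  # 5
```

Because every tool is described by the same name/description/schema record, any MCP-aware client can discover and call it without custom glue, which is the standardisation this session is about.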
- Session 6: LangChain Agents Deep Dive (13:32)
- Focus: Multi-step planning & observability (MCP-aware).
- Key Concepts: initialize_agent types, tool routing & fallback, LangSmith tracing.
- Hands-On: Upgrade Session 4 trip planner with LangSmith traces.
- Case Study: Expedia-style itinerary generator.
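Tool routing with fallback, one of this session's key concepts, reduces to a loop like the sketch below; the tools and their failure mode are invented, and tracing is modelled as a plain event list rather than LangSmith:

```python
# Minimal sketch of tool routing with a fallback: try the preferred tool,
# fall back to the next one if it raises.

def flaky_search(q: str) -> str:
    raise TimeoutError("search backend down")

def cached_search(q: str) -> str:
    return f"cached result for '{q}'"

def route(query: str, tools, trace: list) -> str:
    for tool in tools:
        try:
            result = tool(query)
            trace.append(f"{tool.__name__}: ok")
            return result
        except Exception as err:
            # Record the failure and fall through to the next tool.
            trace.append(f"{tool.__name__}: failed ({err})")
    raise RuntimeError("all tools failed")

trace: list[str] = []
print(route("hotels in Rome", [flaky_search, cached_search], trace))
```

The trace list is the observability hook: in the hands-on, LangSmith captures the same failure-then-fallback sequence as a proper trace tree.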
- Session 7: CrewAI Basics
- Focus: Build your first CrewAI multi-agent crew.
- Key Concepts: Agents, Tasks, Crews; short- vs long-term memory; flow-state export; event emitters.
- Hands-On: “Researcher + Writer” crew that drafts a blog post.
- Case Study: Automated article writer.
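The Researcher + Writer hand-off can be sketched with two stub agents run in sequence; CrewAI's Agent, Task, and Crew classes play these roles for real, so treat the names below as illustrative only:

```python
# Minimal sketch of a two-agent "crew": a researcher produces notes, a writer
# turns them into a draft, and the crew runs the hand-off in order.
from dataclasses import dataclass, field

@dataclass
class Agent:
    role: str
    act: callable          # stub for the agent's LLM-backed behaviour

@dataclass
class Crew:
    agents: list
    log: list = field(default_factory=list)

    def kickoff(self, topic: str) -> str:
        output = topic
        for agent in self.agents:   # sequential hand-off between agents
            output = agent.act(output)
            self.log.append(agent.role)
        return output

researcher = Agent("researcher", lambda t: f"notes on {t}: point A; point B")
writer = Agent("writer", lambda notes: f"Draft post based on [{notes}]")
crew = Crew([researcher, writer])
print(crew.kickoff("agentic AI"))
```

The `log` list mirrors CrewAI's execution order: each agent's output becomes the next agent's input, which is exactly the blog-drafting pipeline built in the hands-on.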
- Session 8: Building Smol Agents (14:30)
- Focus: Develop autonomous agents that scaffold an entire codebase from a single prompt.
- Key Concepts: The "Smol Developer" paradigm, prompt-driven development, code generation and debugging loops, project structure scaffolding.
- Hands-On: Use a Smol-like framework to generate a simple "Hello World" web application or a Python script based on a detailed prompt.
- Case Study: Rapid prototyping assistant for generating minimum viable products (MVPs).
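The prompt-to-codebase loop can be sketched as a planning step that maps a prompt to a file plan, plus a scaffolder that writes the files; the stub `plan_files` stands in for the model call a real Smol-style agent would make:

```python
# Minimal sketch of a Smol-Developer-style loop: a stub "LLM" turns a prompt
# into a file plan, then the scaffolder writes each file to disk.
from pathlib import Path
import tempfile

def plan_files(prompt: str) -> dict[str, str]:
    # Stub for the LLM planning step: prompt -> {relative path: contents}.
    return {
        "app.py": "print('hello world')\n",
        "README.md": f"# Generated from prompt: {prompt}\n",
    }

def scaffold(prompt: str, root: Path) -> list[str]:
    written = []
    for rel_path, contents in plan_files(prompt).items():
        (root / rel_path).write_text(contents)
        written.append(rel_path)
    return written

with tempfile.TemporaryDirectory() as d:
    print(scaffold("a hello-world web app", Path(d)))  # ['app.py', 'README.md']
```

A real run would call the model once per file and loop back over any errors, but the plan-then-write structure stays the same.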
- Session 9: CrewAI Advanced (9:04)
- Focus: Production-quality CrewAI workflows.
- Key Concepts: Conversation Crew, multimodal abilities, HITL (Human-in-the-loop) loops, knowledge-source tools.
- Hands-On: Customer-support crew with HITL escalation.
- Case Study: SaaS support bot.
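HITL escalation boils down to a confidence gate like the sketch below; the confidence heuristic, threshold, and queue are all illustrative, with a real crew supplying the answer and score from an LLM:

```python
# Minimal sketch of human-in-the-loop escalation: the bot answers when its
# (stubbed) confidence clears a threshold, otherwise the ticket is queued
# for a human agent.

HUMAN_QUEUE: list[str] = []

def bot_answer(ticket: str) -> tuple[str, float]:
    # Stub: a real crew would return the LLM answer plus a confidence score.
    known = "password" in ticket.lower()
    return ("Use the reset link on the login page.", 0.9) if known else ("", 0.2)

def handle(ticket: str, threshold: float = 0.7) -> str:
    answer, confidence = bot_answer(ticket)
    if confidence >= threshold:
        return answer
    HUMAN_QUEUE.append(ticket)          # escalate to a human agent
    return "Escalated to a human agent."

print(handle("I forgot my password"))
print(handle("My invoice is wrong"))
```

The threshold is the product decision: lower it and more tickets are answered automatically, raise it and more reach a human, which is the trade-off the SaaS support-bot case study explores.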
- Session 10: LangGraph Essentials (4:36)
- Focus: Graph-based agent orchestration, branching, streaming, and fault-tolerance.
- Key Concepts: Nodes, edges, state; create_react_agent; branch/merge paths; retries; streaming through state.
- Hands-On: DAG chaining: extraction → enrichment → summarisation agents.
- Case Study: Media-monitor news summariser.
Hands-On Session 10 (15:23)
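The extraction → enrichment → summarisation DAG can be sketched as nodes that transform a shared state dict, with edges fixing the run order; LangGraph adds typed state, branching, retries, and streaming on top of this basic shape (the node logic below is invented for illustration):

```python
# Minimal sketch of graph-based orchestration: each node takes the shared
# state dict and returns an updated copy; edges define the run order.

def extract(state: dict) -> dict:
    return {**state, "entities": state["text"].split()[:2]}

def enrich(state: dict) -> dict:
    return {**state, "entities": [e.upper() for e in state["entities"]]}

def summarise(state: dict) -> dict:
    return {**state, "summary": "Key entities: " + ", ".join(state["entities"])}

GRAPH = {"extract": "enrich", "enrich": "summarise", "summarise": None}  # linear edges
NODES = {"extract": extract, "enrich": enrich, "summarise": summarise}

def run(graph, nodes, state, start="extract"):
    node = start
    while node is not None:
        state = nodes[node](state)   # each node returns the updated state
        node = graph[node]
    return state

result = run(GRAPH, NODES, {"text": "acme corp buys widget inc"})
print(result["summary"])  # Key entities: ACME, CORP
```

Replacing the string edges with conditional functions gives branching, and wrapping each node call in a retry gives fault-tolerance: the two features this session layers onto the plain DAG.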
Our Alumni Are Placed At
- 50-70% Average Salary Hike
- 28 Lakhs Highest Salary
- 1000+ Career Transitions
- 200+ Hiring Partners
Showcase Your Learning with a Verified Certificate
- Verified by
- Downloadable & sharable
- Proof of practical learning
Alumni Testimonials
Frequently Asked Questions
Who is this course for?
This course is for developers, data scientists, AI engineers, and tech enthusiasts who want to learn how to build intelligent applications with LLMs, LangChain, CrewAI, Smol Agents, and LangGraph. Some prior programming experience (Python) is recommended.
Do I need prior experience with AI agents?
Not necessarily. The course starts with fundamentals (LLM APIs, prompts, LangChain basics) and gradually builds up to advanced agent systems and deployment. Familiarity with Python and APIs will help you follow along smoothly.
Which tools and frameworks will I use?
You’ll work with:
• LangChain (chains, memory, agents)
• CrewAI (multi-agent workflows)
• Smol Agents (autonomous code-generating agents)
• LangGraph (graph-based orchestration)
• MCP (Model Context Protocol) for tool/memory integration
• Deployment tools: FastAPI, Docker, Google Cloud Run, Hugging Face Spaces
Is this course theory-heavy?
No — this is a hands-on, project-driven course. Each session includes a practical exercise and a case study (e.g., product-support chatbot, internal knowledge-base assistant, SaaS support bot, MVP generator).
What will I be able to build?
By the end of the course, you’ll be able to design, build, and deploy:
• Context-aware chatbots
• RAG-based assistants
• Multi-agent crews for research/writing
• Autonomous code-generating agents
• Scalable SaaS-grade AI applications
How is the course structured?
The course runs across 14 sessions, each focusing on a theme:
• Sessions 1–4: LangChain fundamentals & agents
• Sessions 5–6: MCP + advanced LangChain agents
• Sessions 7–9: CrewAI & Smol agents
• Sessions 10–11: LangGraph & memory/data layers
• Sessions 12–14: Evaluation, guardrails & deployment
Do I need accounts on any cloud platforms?
Yes, for deployment labs you’ll need accounts for Hugging Face, Google Cloud, and/or Replit. Free tiers are sufficient for practice.
What career roles does this course prepare me for?
Completing this course prepares you for roles such as:
• AI/ML Engineer
• LLM Application Developer
• AI Solutions Architect
• Agentic AI Product Builder
Prerequisites
1. Python for AI (Required)
Basic understanding of Python programming (variables, functions, loops, classes).
Experience with common Python libraries such as requests, json, and pandas.
Familiarity with virtual environments and package managers (pip/conda).
2. API Basics (Recommended)
Understanding how to call APIs and work with API keys.
Knowledge of HTTP methods (GET/POST) is helpful.
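For reference, an authenticated GET request can be set up with only the standard library; the URL and key below are placeholders, and the request is constructed but never sent:

```python
# Minimal sketch of an authenticated GET request using only the standard
# library. The endpoint and key are placeholders, not a real API.
import urllib.request

API_KEY = "sk-placeholder"   # never hard-code real keys; load from env instead
req = urllib.request.Request(
    "https://api.example.com/v1/models",
    headers={"Authorization": f"Bearer {API_KEY}"},
    method="GET",
)
print(req.get_method(), req.get_header("Authorization"))
```

Calling `urllib.request.urlopen(req)` would send the request; swapping `method="GET"` for `"POST"` and adding a `data=` body covers the other method used throughout the course.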



