AI Models & Agents Learning Roadmap
From beginner to advanced with hands-on courses and developer resources
Scope: LLMs → agents → MCP (Model Context Protocol) → Google Agent-to-Agent (A2A) → evals & safety.
All links are free to access (some may require free sign-up).
LLM (Large Language Model)
Transformer-based next-token predictor; gives you language understanding/generation. See a clear visual intro by 3Blue1Brown.
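The mechanics of next-token prediction can be sketched in a few lines: the model scores every vocabulary item, softmax turns the scores into probabilities, and generation just repeats that step. The bigram table below is invented purely for illustration; real LLMs condition on the whole context via attention.

```python
import math

# Toy next-token prediction: the "model" is a bigram table mapping the
# previous token to logits (scores) over a tiny vocabulary. Softmax turns
# logits into probabilities; generation repeats the predict-append step.
# The table is made up for illustration; a bigram has no real grammar.
VOCAB = ["the", "cat", "sat", "on", "mat", "."]
BIGRAM_LOGITS = {
    "the": [0.0, 2.0, 0.1, 0.1, 1.5, 0.1],
    "cat": [0.1, 0.0, 2.5, 0.2, 0.1, 0.3],
    "sat": [0.2, 0.1, 0.0, 2.5, 0.1, 0.3],
    "on":  [0.5, 0.2, 0.1, 0.0, 2.5, 0.1],
    "mat": [0.1, 0.1, 0.1, 0.1, 0.0, 2.5],
}

def softmax(logits):
    m = max(logits)                            # subtract max for stability
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

def generate(token, max_new_tokens):
    out = [token]
    for _ in range(max_new_tokens):
        probs = softmax(BIGRAM_LOGITS[out[-1]])
        nxt = VOCAB[probs.index(max(probs))]   # greedy decoding (argmax)
        out.append(nxt)
        if nxt == ".":                         # stop token
            break
    return out

print(" ".join(generate("the", 10)))  # prints: the cat sat on mat .
```

Swapping argmax for sampling from `probs` gives the temperature-style variety you see in real chat models.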
Agent
An LLM wrapped with tools, memory, planning, and feedback loops so it can act, not just answer.
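That wrap-the-LLM-in-a-loop idea can be sketched as follows. `fake_llm`, the message format, and the tool registry are all illustrative stand-ins, not any specific framework's API.

```python
# Minimal sketch of the agent loop: the model either requests a tool call
# or emits a final answer; the loop executes tools and feeds the results
# back as context (the "memory"). All names here are illustrative.

def calculator(expression: str) -> str:
    """A sample tool the agent can invoke (demo only)."""
    return str(eval(expression, {"__builtins__": {}}))

TOOLS = {"calculator": calculator}

def fake_llm(messages):
    """Scripted stand-in for a real model: asks for a tool once, then answers."""
    if not any(m["role"] == "tool" for m in messages):
        return {"type": "tool_call", "tool": "calculator", "input": "6 * 7"}
    result = [m for m in messages if m["role"] == "tool"][-1]["content"]
    return {"type": "final", "content": f"The answer is {result}."}

def run_agent(user_msg, max_steps=5):
    messages = [{"role": "user", "content": user_msg}]
    for _ in range(max_steps):                             # feedback loop
        action = fake_llm(messages)
        if action["type"] == "final":
            return action["content"]
        output = TOOLS[action["tool"]](action["input"])    # act via a tool
        messages.append({"role": "tool", "content": output})  # remember result
    return "step limit reached"

print(run_agent("What is 6 * 7?"))  # prints: The answer is 42.
```

Everything an agent framework adds (planning, retries, tracing) hangs off this same loop.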
MCP (Model Context Protocol)
An open protocol that standardizes how AI apps connect to tools and data (think “USB-C for tools & context”).
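Concretely, MCP messages are JSON-RPC 2.0: a client can ask a server which tools it exposes, then invoke one. The tool name and payload below are made up, but the `tools/list` / `tools/call` shape follows the spec.

```python
import json

# MCP runs over JSON-RPC 2.0. A client discovers tools, then calls one;
# the server's reply pairs the request id and returns content blocks.
# Tool name, arguments, and result text are illustrative.
list_request = {"jsonrpc": "2.0", "id": 1, "method": "tools/list"}

call_request = {
    "jsonrpc": "2.0",
    "id": 2,
    "method": "tools/call",
    "params": {"name": "get_weather", "arguments": {"city": "Paris"}},
}

call_response = {
    "jsonrpc": "2.0",
    "id": 2,  # matches the request id
    "result": {"content": [{"type": "text", "text": "18°C, cloudy"}]},
}

print(json.dumps(call_request, indent=2))
```

The "USB-C" point is that any MCP client can drive any MCP server through these same few methods, regardless of who built either side.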
A2A (Agent-to-Agent)
An open protocol so heterogeneous agents can discover, negotiate, and collaborate across orgs, stacks, and clouds.
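Discovery in A2A starts with an "Agent Card": a JSON document an agent publishes at a well-known URL so peers can learn what it offers and how to reach it. The sketch below is abbreviated and illustrative, not the full schema.

```python
import json

# An illustrative, abbreviated A2A Agent Card: metadata another agent can
# fetch to decide whether and how to collaborate. Field values are made up.
agent_card = {
    "name": "purchasing-concierge",
    "description": "Negotiates and places orders on behalf of a user.",
    "url": "https://agents.example.com/purchasing",  # hypothetical endpoint
    "capabilities": {"streaming": True},
    "skills": [
        {"id": "find_product", "description": "Search supplier catalogs."},
        {"id": "place_order", "description": "Place an order with a supplier."},
    ],
}

print(json.dumps(agent_card, indent=2))
```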
Transformers, the tech behind LLMs
Video: Visual, intuition-first walkthrough of attention and Transformers by 3Blue1Brown.

The Illustrated Transformer
Article: Friendly diagrams explaining attention, the encoder/decoder structure, and how the original Transformer works.

Stanford CS25 — Transformers United
Course: Survey talks on Transformer applications across domains; great for context and inspiration.

Hugging Face LLM Course (Chapter 1)
Interactive: Foundations of Transformers and the Hugging Face ecosystem; run examples, fine-tune models, and share them on the Hub.

Model Context Protocol — Introduction
Docs: What MCP is, why it exists, and its core concepts (servers, clients, resources, tools).

Google Agent-to-Agent (A2A) — Overview
Blog: What A2A solves (agent discovery, negotiation, secure collaboration) and where it fits in Google's agent stack.

Hugging Face AI Agents Course
Hands-on: Agent concepts plus frameworks (LangGraph, LlamaIndex, smolagents), with a capstone where you ship an agent.

Hugging Face MCP Course
Hands-on: Build MCP servers and clients, then compose them into an end-to-end app.

Building Systems with the ChatGPT API
DeepLearning.AI: Patterns for tool-calling, routing, and multi-step workflows.
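The routing pattern that course covers boils down to: classify the request, then dispatch to a specialized handler. A keyword rule stands in for the LLM classifier here; all names are illustrative.

```python
# Routing sketch: classify a query, then dispatch to the matching handler.
# Real systems use an LLM (or classifier model) for route(); a keyword
# rule stands in here, and the handlers are placeholder lambdas.

def route(query: str) -> str:
    q = query.lower()
    if any(w in q for w in ("refund", "charge", "invoice")):
        return "billing"
    if any(w in q for w in ("error", "crash", "bug")):
        return "technical"
    return "general"

HANDLERS = {
    "billing": lambda q: f"[billing] escalating: {q}",
    "technical": lambda q: f"[technical] collecting logs for: {q}",
    "general": lambda q: f"[general] answering: {q}",
}

def handle(query: str) -> str:
    return HANDLERS[route(query)](query)

print(handle("I was double-charged for a refund"))
```

Multi-step workflows are the same idea chained: each handler's output can become the next router's input.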
Evaluating AI Agents
DeepLearning.AI: How to trace, test, and iterate on agent performance with structured evals.
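At its core, a structured eval is a loop over (input, expected) cases with a score at the end. A rough stdlib sketch, with a scripted stand-in for the agent under test:

```python
# Structured-eval sketch: run the system under test over a small dataset
# and report per-case results plus an aggregate pass rate. The lookup
# table below stands in for a real agent; all names are illustrative.

def agent_under_test(question: str) -> str:
    return {"capital of France?": "Paris", "2 + 2?": "4"}.get(question, "?")

EVAL_SET = [
    {"input": "capital of France?", "expected": "Paris"},
    {"input": "2 + 2?", "expected": "4"},
    {"input": "capital of Japan?", "expected": "Tokyo"},
]

def run_evals(fn, cases):
    results = []
    for case in cases:
        got = fn(case["input"])
        results.append({**case, "got": got, "pass": got == case["expected"]})
    score = sum(r["pass"] for r in results) / len(results)
    return score, results

score, results = run_evals(agent_under_test, EVAL_SET)
print(f"pass rate: {score:.0%}")  # prints: pass rate: 67%
for r in results:
    print(r["pass"], r["input"], "->", r["got"])
```

Real eval harnesses add tracing and LLM-as-judge scoring, but the exact-match loop above is the skeleton they share.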
LlamaIndex — Starter Tutorial
RAG: Start with a minimal app, then add RAG; clear examples for agent and retrieval pipelines.

Haystack — First RAG Pipeline
RAG: Build a retrieval-augmented QA pipeline; great for seeing the “plumbing.”
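The "plumbing" both RAG tutorials walk through reduces to embed, retrieve, generate. A toy stdlib sketch, with bag-of-words cosine similarity standing in for learned embeddings and a template standing in for the LLM:

```python
from collections import Counter
import math

# Toy RAG plumbing: "embed" documents, retrieve the closest one to the
# query, then hand the retrieved context to the generator. Real pipelines
# use learned embeddings, a vector store, and an LLM; everything here is
# an illustrative stand-in.
DOCS = [
    "MCP standardizes how AI apps connect to tools and data.",
    "A2A lets heterogeneous agents discover and collaborate.",
    "Transformers predict the next token with attention.",
]

def embed(text):
    return Counter(text.lower().split())        # bag-of-words "embedding"

def cosine(a, b):
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query, docs):
    q = embed(query)
    return max(docs, key=lambda d: cosine(q, embed(d)))

def answer(query):
    context = retrieve(query, DOCS)             # retrieval step
    return f"Context: {context}\nAnswer based on the context above."

print(answer("how do agents collaborate?"))     # retrieves the A2A doc
```

LlamaIndex and Haystack swap each stand-in for a production component, but the data flow is exactly this.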
A2A Protocol — Purchasing Concierge
Codelab: Deploy multiple agents on Cloud Run/Agent Engine and watch them collaborate via A2A.

Research Papers

Survey: LLM-based Autonomous Agents
arXiv: Architecture patterns (planning, tools, memory, reflection) and applications.

Survey: LLM-based Multi-Agent Systems
arXiv: Collaboration mechanisms and environments; recent advances and open problems.

Academic Content
Stanford CS224N — Reasoning & Agents
Video: A modern view on reasoning in LLMs and its implications for agent design.

Berkeley MOOC: LLM Agents
Course: A full agentic-computing syllabus; the lectures are public.
Protocol Specifications
MCP — Official Spec & SDKs
Protocol: The canonical spec plus official Python and TypeScript SDKs.
A2A — Protocol & Repos
Protocol: An open protocol for inter-agent collaboration, with docs, SDKs, and samples.
Expert Insights
Understanding Reasoning LLMs
Blog: Training- and inference-time strategies for stronger reasoning, by Sebastian Raschka.

More Topic Areas
AI Coding Tools · LangGraph & Orchestration · RAG & SDKs · Evaluation & Safety · AWS Resources
Suggested Learning Path
1. Watch the 3Blue1Brown video and skim The Illustrated Transformer to cement intuition.
2. Do the HF LLM Course (Chapter 1), then jump into the HF Agents and HF MCP courses for hands-on work.
3. Build a tiny RAG app (LlamaIndex or Haystack), then wrap it with LangGraph.
4. Explore MCP for tools and data, and the A2A codelab to see cross-agent collaboration.
5. Add evals and guardrails before you scale or deploy.
💡 Pro Tip
Everything above is free to access. Some platforms (Hugging Face, DeepLearning.AI, Google Cloud) may ask you to create a free account to save progress or run hosted examples.