
From Concepts to Production with ADK
AI Engineering Bootcamp

Aki Wijesundara
Instructor · Snapdrum
Seven sections — ~90 minutes total
| # | Section | Time | Type |
|---|---|---|---|
| 1 | What Are AI Agents, Really? | 10 min | Lecture |
| 2 | From Single Agent to Multi-Agent | 15 min | Lecture |
| 3 | Introducing ADK — Your Build Tool | 10 min | Lecture + Demo |
| 4 | MCP — Giving Agents Tools | 15 min | Lecture + Demo |
| 5 | A2A — Agents Talking to Agents | 15 min | Lecture + Demo |
| 6 | Full System: Putting It All Together | 15 min | Lecture + Demo |
| 7 | Homework & Next Steps | 10 min | Assignment |
A chatbot does input → output. An agent does think → act → observe → repeat.
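That loop can be sketched in plain Python. This is purely illustrative — `llm_decide` and `tools` are hypothetical stand-ins for a real model and toolset, not ADK API:

```python
def run_agent(task, llm_decide, tools, max_steps=5):
    """Minimal think -> act -> observe -> repeat loop (illustrative only)."""
    observations = []
    for _ in range(max_steps):
        action = llm_decide(task, observations)           # think
        if action["type"] == "final":
            return action["answer"]                       # done
        result = tools[action["tool"]](**action["args"])  # act
        observations.append(result)                       # observe, then repeat
    return None  # gave up after max_steps
```

The key difference from a chatbot: the model's output can be an *action* that feeds new observations back into the next step, not just a reply.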
System prompt — what to do, constraints, personality
GPT, Gemini, Claude, Llama — the reasoning engine
Functions: search, query DB, send email, call APIs
Short-term (session) and long-term (across sessions)
Multi-step tasks, ambiguous requests, external systems, unknown paths
Simple prompt/response, deterministic workflow, latency-critical
One agent with 15+ tools breaks. Split by domain — each agent gets a focused job, short instructions, and limited tools.
Root reads intent, delegates to specialist. We build this today.
Fixed order: A → B → C. Content gen, data processing.
Simultaneous agents, merge results. Research, aggregation.
Produce, review, loop until quality met. Writing, code gen.
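The three orchestration patterns can be sketched framework-free. Here the "agents" are plain callables for illustration — ADK ships dedicated workflow agents for these shapes, so treat this only as a mental model:

```python
def sequential(agents, task):
    """A -> B -> C: each agent's output feeds the next."""
    for agent in agents:
        task = agent(task)
    return task

def parallel(agents, task):
    """Run every agent on the same input, collect results to merge."""
    return [agent(task) for agent in agents]

def loop_until(producer, reviewer, task, max_rounds=3):
    """Produce, review, revise until the reviewer approves."""
    draft = producer(task)
    for _ in range(max_rounds):
        approved, feedback = reviewer(draft)
        if approved:
            return draft
        draft = producer(feedback)
    return draft  # best effort after max_rounds
```

Choosing among them is the design decision: fixed order when steps depend on each other, parallel when they don't, loop when quality must be verified.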
Agent Development Kit — Google's open-source framework for building and deploying AI agents
Python, TypeScript, Go, Java — multi-language from the start
Use Gemini, Claude, Ollama, LiteLLM, or any model
Run locally, on Cloud Run, GKE, or Vertex AI
Built-in support for both open protocols that matter for production agent systems
Designed to make agent development feel like software development — not prompt engineering with extra steps.
Comparing the major agent frameworks
| Framework | Strength | Trade-off |
|---|---|---|
| LangGraph | Graph-based orchestration, strong community | Steeper learning curve |
| CrewAI | Role-based multi-agent, easy to start | Less control over execution flow |
| AutoGen | Research-grade multi-agent conversations | Complex setup, Microsoft-centric |
| ADK | Multi-language, native MCP + A2A, built-in eval + dev UI | Newer ecosystem, still maturing |
ADK's differentiator: natively supports both MCP (tool standard) and A2A (agent communication standard).
Tools are plain Python functions. ADK wraps them automatically.
```python
from google.adk.agents import Agent

def lookup_customer(email: str) -> dict:
    """Look up customer account information by email."""
    return {"name": "Jane Doe", "plan": "Pro"}

def check_order_status(order_id: str) -> dict:
    """Check the current status of an order."""
    return {"order_id": order_id, "status": "shipped"}

support_agent = Agent(
    name="support_agent",
    model="gemini-2.5-flash",
    instruction="You are a helpful customer support agent.",
    tools=[lookup_customer, check_order_status],
)
```
The LLM reads the function names and docstrings to decide when to call them. No schema definitions needed.
The root agent reads the description of each sub-agent and decides who handles the query
```python
from google.adk.agents import Agent

billing_agent = Agent(
    name="billing_agent",
    model="gemini-2.5-flash",
    description="Handles billing: invoices, payments, refunds.",
    instruction="Help customers with billing issues.",
    tools=[lookup_invoice, process_refund],
)

technical_agent = Agent(
    name="technical_agent",
    model="gemini-2.5-flash",
    description="Handles technical issues: bugs, outages, how-to.",
    instruction="Help customers with technical problems.",
    tools=[search_knowledge_base, check_system_status],
)

root_agent = Agent(
    name="support_router",
    model="gemini-2.5-flash",
    instruction="Route customer queries to the right specialist.",
    sub_agents=[billing_agent, technical_agent],
)
```
No explicit routing logic needed. The LLM reads each sub-agent's `description` and delegates automatically.
Building the multi-agent customer support system from the diagram
In Demo 1, we hardcoded tools as Python functions. In reality, agents need databases, SaaS platforms, APIs. MCP is the standard.
One universal connector instead of a custom cable for every device.
Customer records, order data
Asana, Jira, Salesforce
Knowledge bases, CRMs
Documents, logs
Your agent discovers tools at runtime through a client-server protocol
"What tools do you have?"
"I have: query, insert, update"
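On the wire, that discovery is a JSON-RPC exchange. A simplified sketch of the `tools/list` call (fields abridged; see the MCP specification for the full schema):

```
→ client request:  {"jsonrpc": "2.0", "id": 1, "method": "tools/list"}
← server response: {"jsonrpc": "2.0", "id": 1, "result": {"tools": [
      {"name": "query",
       "description": "Run a SQL query",
       "inputSchema": {"type": "object",
                       "properties": {"sql": {"type": "string"}}}}]}}
```

Because each tool ships its own `inputSchema`, the client needs no prior knowledge of the server — it learns the contract at connect time.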
Your ADK agent connects to an MCP server, auto-discovers tools, and calls them transparently via McpToolset
Wrap your ADK tools as an MCP server. Any MCP client can use them: Claude Desktop, Cursor, other agents
No hardcoded database functions. The agent discovers tools from the MCP server automatically.
```python
from google.adk.agents import Agent
from google.adk.tools.mcp_tool import McpToolset, StdioConnectionParams
from mcp.client.stdio import StdioServerParameters

supabase_mcp = McpToolset(
    connection_params=StdioConnectionParams(
        server_params=StdioServerParameters(
            command="npx",
            args=["-y", "@supabase/mcp-server-supabase@latest",
                  "--access-token", TOKEN],  # your Supabase access token
        ),
    ),
)

billing_agent = Agent(
    model="gemini-2.5-flash",
    name="billing_agent_mcp",
    instruction="You are a billing specialist with real database access.",
    tools=[supabase_mcp],
)
```
The agent discovers every available tool from the MCP server automatically. Zero custom integration code.
Pre-built MCP servers in the ecosystem
Supabase, Spanner, AlloyDB, Postgres (MCP Toolbox)
Asana (30+ tools), Atlassian (Jira + Confluence)
BigQuery, Bigtable, Cloud API Registry
Imagen, Veo, Chirp 3 HD, Lyria (Genmedia MCP)
Local filesystem access, document reading
Any API via Apigee or your own MCP server
The MCP ecosystem is growing fast. If it doesn't exist, build your own with FastMCP.
You don't have to expose all tools. Use `tool_filter` to whitelist.
```python
McpToolset(
    connection_params=StdioConnectionParams(...),
    tool_filter=[
        "read_file",
        "list_directory",
        "search_files",
    ],  # only these tools are exposed to the agent
)
```
🚨 In production, always filter. Don't give an agent write access if it only needs to read.
The Billing Agent now connects to a real Supabase database via MCP
Tables: customers, orders, support_tickets — connected through `McpToolset`
MCP connects agents to tools. But what connects agents to other agents across systems? A2A is the open standard.
Giving an agent a toolkit
agent ↔ tool
Teammates in the same room
agent ↔ agent, same app
Calling a colleague at another office
agent ↔ agent, across network
A2A lets a Python agent talk to a Java agent, or your support system call a partner's shipping agent — without knowing how it's built.
Two steps: expose and consume
Make your agent available on the network
Gets an Agent Card + network endpoint
Use someone else's agent
Feels like a local sub-agent. ADK handles networking.
Every A2A agent has an Agent Card — a JSON file describing what it can do
An API spec, but for agents. ADK auto-generates Agent Cards when you use to_a2a().
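An abridged sketch of what an Agent Card might contain — field names follow the A2A specification, but check the spec for the exact current schema:

```
{
  "name": "shipping_status_agent",
  "description": "Looks up shipping status for orders.",
  "url": "http://localhost:8001",
  "version": "1.0.0",
  "capabilities": {"streaming": false},
  "skills": [
    {"id": "shipping_status",
     "name": "Shipping status lookup",
     "description": "Returns the current shipping status for an order ID."}
  ]
}
```

A consuming agent reads this card to learn what the remote agent can do and where to reach it — no knowledge of its internals required.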
Two sides of the same protocol
Expose — make your agent available over the network:
```python
from google.adk.agents import Agent
from google.adk.a2a.utils.agent_to_a2a import to_a2a

shipping_agent = Agent(
    name="shipping_status_agent",
    model="gemini-2.5-flash",
    instruction="Look up shipping status.",
    tools=[get_shipping_status],
)

app = to_a2a(shipping_agent, port=8001)
# uvicorn agent:app --port 8001
```
Consume — use it from another system:
```python
from google.adk.agents.remote_a2a_agent import RemoteA2aAgent

remote_shipping = RemoteA2aAgent(
    name="shipping_agent",
    agent_card="http://localhost:8001",
)

root_agent = Agent(
    name="customer_support",
    sub_agents=[billing, technical, remote_shipping],
)
```
The remote agent sits alongside local sub-agents. The LLM routes to it like any other — any framework, any language.
Where agent-to-agent communication makes sense
Order Agent ↔ Inventory Agent ↔ Shipping Agent ↔ Payment Agent — each an independent service, A2A as the communication layer.
Your support agent calls a partner's warranty verification agent. You don't know their tech stack.
Python orchestrator talks to a Java compliance agent and a Go data processing agent. A2A standardizes communication.
A financial data provider exposes real-time stock prices through an A2A agent. Your advisor agent consumes it.
A separate Shipping Agent that our support system talks to via A2A
Exposed with `to_a2a()`, consumed with `RemoteA2aAgent`
Three layers, one system
A decision guide for the three patterns
| You need to… | Use |
|---|---|
| Connect an agent to a database, API, or file system | MCP |
| Split a complex task into focused agents in one app | Multi-Agent |
| Call an agent running as a separate service | A2A |
| Build a predictable pipeline (step 1 → 2 → 3) | Workflow Agents |
| Let the LLM decide which agent handles a request | LLM Delegation |
| Make your agent usable by anyone, any framework | A2A (expose) |
Everything together in one demo
Before you ship: test. When you ship: pick the right target.
Is the final answer correct? Define test cases with expected outputs.
Did the agent call the right tools and route to the correct sub-agent?
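A trajectory test case pairs a query with the tool calls and answer you expect. A generic sketch of that shape — not necessarily ADK's exact eval-file schema, which you should take from the ADK eval docs:

```
{
  "query": "Where is order 42?",
  "expected_tool_use": [
    {"tool_name": "check_order_status", "tool_input": {"order_id": "42"}}
  ],
  "reference": "Order 42 has shipped."
}
```

Checking the tool trajectory, not just the final text, catches agents that reach the right answer by the wrong (and unreliable) path.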
| Target | Best for |
|---|---|
| ADK Dev UI | Local dev & debugging |
| CLI | Quick testing, CI/CD |
| Agent Engine | Managed production (Vertex AI) |
| Cloud Run | Containerized, custom infra |
| GKE | Kubernetes, multi-service |
All paths support A2A — agents can be exposed and consumed regardless of where they run.
Build a Multi-Agent Customer Support System with MCP and A2A
Create a Supabase project with customers, orders, support_tickets tables. Seed with 10+ records per table.
Root router + at least 2 specialist sub-agents. At least one connected to Supabase via MCP.
Separate service with check_return_eligibility + initiate_return. Exposed via to_a2a().
Connect Returns Agent via RemoteA2aAgent. Test 3 scenarios: billing (MCP), returns (A2A), escalation.
Everything you need to build with ADK, MCP, and A2A
How agents access tools and data
agent ↔ tool
How agents work together locally
agent ↔ agent, same app
How agents collaborate across boundaries
agent ↔ agent, across network
These three patterns compose. Start with the concepts, pick the right pattern, then implement with ADK.