Day 9 -- Kubernetes¶
Your trip planner runs in Docker Compose. Today you deploy it to Kubernetes -- the same agents, the same code, the same mesh. The only new file per agent is a Helm values file, and meshctl scaffold already created that on Day 1.
What we're building today¶
graph TB
subgraph k8s["Kubernetes — trip-planner namespace"]
direction TB
subgraph core["mcp-mesh-core (Helm)"]
PG[(postgres)]
REG[registry :8000]
RD[(redis)]
TM[tempo]
GR[grafana :3000]
end
subgraph agents["13 Agents (Helm)"]
GW[gateway :8080]
CH[chat-history]
PL[planner]
CP[claude-provider]
OP[openai-provider]
FA[flight-agent]
HA[hotel-agent]
WA[weather-agent]
PA[poi-agent]
UP[user-prefs]
BA[budget-analyst]
AA[adventure-advisor]
LP[logistics-planner]
end
end
U[User] -->|"port-forward\nor ingress"| GW
style U fill:#555,color:#fff
style k8s fill:#1a1a2e,color:#fff,stroke:#4a9eff
style core fill:#2d2d44,color:#fff,stroke:#666
style agents fill:#2d2d44,color:#fff,stroke:#666
style GW fill:#e67e22,color:#fff
style REG fill:#1abc9c,color:#fff
style PG fill:#336791,color:#fff
style RD fill:#d63031,color:#fff
style TM fill:#f39c12,color:#fff
style GR fill:#f39c12,color:#fff
style PL fill:#9b59b6,color:#fff
style CP fill:#9b59b6,color:#fff
style OP fill:#9b59b6,color:#fff
style BA fill:#f39c12,color:#fff
style AA fill:#f39c12,color:#fff
style LP fill:#f39c12,color:#fff
style FA fill:#4a9eff,color:#fff
style PA fill:#4a9eff,color:#fff
style UP fill:#1a8a4a,color:#fff
style WA fill:#1a8a4a,color:#fff
style HA fill:#1a8a4a,color:#fff
style CH fill:#1abc9c,color:#fff
One namespace. Two Helm charts (mcp-mesh-core for infrastructure, mcp-mesh-agent for each agent). Thirteen agents, a registry, a database, and a full observability stack. Same agents as Day 8 -- running in Kubernetes pods instead of Docker containers.
Today has five parts:
- The DDDI payoff -- same code, new platform
- Create the namespace and secrets -- one-time setup
- Deploy the registry and infrastructure -- one helm install for mcp-core
- Deploy the agents -- one helm install per agent
- Verify -- kubectl get pods, meshctl list, curl the gateway
The DDDI payoff¶
Open your Day 8 flight agent and your Day 9 flight agent side by side.
80c80
< description="TripPlanner flight search tool -- Day 8",
---
> description="TripPlanner flight search tool -- Day 9",
One line changed: the description string. The flight_search function -- its parameters, its return type, its stub data -- is identical. The imports are identical. The decorators are identical. The function you wrote on Day 1 and evolved through Day 8 runs on Kubernetes without a single code change.
Remember that helm-values.yaml file from Day 1 that you ignored?
# Helm values for flight-agent — scaffolded by meshctl, adapted for k8s
# Usage: helm install flight-agent oci://ghcr.io/dhyansraj/mcp-mesh/mcp-mesh-agent \
# --version 1.2.0 -n trip-planner -f values-flight-agent.yaml
image:
repository: trip-planner/flight-agent
tag: "latest"
pullPolicy: IfNotPresent # Use Never for minikube local builds
agent:
name: flight-agent
runtime: python
# K8s uses port 8080 by default (each pod has its own IP, no conflicts)
# The Helm chart sets MCP_MESH_HTTP_PORT=8080, overriding the decorator's http_port=9101
resources:
limits:
cpu: 500m
memory: 512Mi
requests:
cpu: 100m
memory: 128Mi
mesh:
enabled: true
registry:
host: "mcp-core-mcp-mesh-registry"
port: "8000"
That is the Kubernetes deployment manifest for your flight agent. The scaffold generated it on Day 1. It tells the Helm chart which image to pull, what to name the agent, and how many resources to give it. The chart handles the rest: Deployment, Service, health probes, environment variables, service account.
No env-specific config files. No sidecars. No wrapper code. The function you wrote on Day 1 runs here.
Prerequisites¶
- A Kubernetes cluster (minikube, kind, EKS, GKE, AKS)
- kubectl configured for your cluster
- Helm 3.8+ (OCI registry support)
- Agent images built and available to the cluster
For minikube, use minikube's Docker daemon so images are available locally without pushing to a registry:
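```shell
# Point this shell's docker CLI at minikube's Docker daemon,
# so images built here are visible to the cluster without a push
$ eval $(minikube docker-env)
```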
Part 1: Build agent images¶
Each agent has a Dockerfile (generated by meshctl scaffold) that uses the official mcpmesh/python-runtime base image. Build all thirteen agents:
$ cd day-09/python
$ for agent in flight-agent hotel-agent weather-agent poi-agent \
user-prefs-agent chat-history-agent claude-provider openai-provider \
planner-agent gateway budget-analyst adventure-advisor logistics-planner
do
echo "Building $agent..."
docker build -t "trip-planner/${agent}:latest" "$agent/"
done
Verify the images are available:
$ docker images --filter "reference=trip-planner/*" --format "table {{.Repository}}\t{{.Tag}}\t{{.Size}}"
REPOSITORY TAG SIZE
trip-planner/flight-agent latest 409MB
trip-planner/hotel-agent latest 409MB
trip-planner/weather-agent latest 409MB
trip-planner/poi-agent latest 409MB
trip-planner/user-prefs-agent latest 409MB
trip-planner/chat-history-agent latest 409MB
trip-planner/claude-provider latest 409MB
trip-planner/openai-provider latest 409MB
trip-planner/planner-agent latest 409MB
trip-planner/gateway latest 409MB
trip-planner/budget-analyst latest 409MB
trip-planner/adventure-advisor latest 409MB
trip-planner/logistics-planner latest 409MB
Cloud clusters
For EKS, GKE, or AKS, push images to your container registry instead:
docker buildx build --platform linux/amd64 \
-t your-registry/flight-agent:v1.0.0 --push flight-agent/
Then update image.repository in each values file.
Part 2: Create the namespace and secrets¶
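Create the namespace first -- every release and resource in this chapter targets it:

```shell
$ kubectl create namespace trip-planner
```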
LLM agents need API keys. Create a Kubernetes Secret:
$ kubectl -n trip-planner create secret generic llm-keys \
--from-literal=ANTHROPIC_API_KEY=$ANTHROPIC_API_KEY \
--from-literal=OPENAI_API_KEY=$OPENAI_API_KEY
The Helm values files for LLM agents reference this secret by name:
# Helm values for claude-provider — scaffolded by meshctl, adapted for k8s
# Added: API key from Kubernetes Secret
image:
repository: trip-planner/claude-provider
tag: "latest"
pullPolicy: IfNotPresent # Use Never for minikube local builds
agent:
name: claude-provider
runtime: python
resources:
limits:
cpu: 500m
memory: 512Mi
requests:
cpu: 100m
memory: 128Mi
mesh:
enabled: true
registry:
host: "mcp-core-mcp-mesh-registry"
port: "8000"
# API key injected from the llm-keys Secret created in the namespace
env:
- name: ANTHROPIC_API_KEY
valueFrom:
secretKeyRef:
name: llm-keys
key: ANTHROPIC_API_KEY
optional: true
The secretKeyRef mounts the key as an environment variable inside the pod. The agent code reads ANTHROPIC_API_KEY from the environment -- the same way it did locally. No code change needed.
Part 3: Deploy the registry¶
The mcp-mesh-core chart deploys the registry, PostgreSQL, Redis, Tempo, and Grafana as a single Helm release:
$ helm install mcp-core oci://ghcr.io/dhyansraj/mcp-mesh/mcp-mesh-core \
--version 1.2.0 \
-n trip-planner \
-f helm/values-core.yaml \
--wait --timeout 5m
Wait for the registry to become available:
$ kubectl wait --for=condition=available \
deployment/mcp-core-mcp-mesh-registry \
-n trip-planner --timeout=120s
Part 4: Deploy the agents¶
Each agent gets its own helm install using the mcp-mesh-agent chart and the values file from helm/:
$ AGENTS=(
flight-agent hotel-agent weather-agent poi-agent user-prefs-agent
chat-history-agent claude-provider openai-provider planner-agent
gateway budget-analyst adventure-advisor logistics-planner
)
$ for agent in "${AGENTS[@]}"; do
echo "Installing $agent..."
helm install "$agent" \
oci://ghcr.io/dhyansraj/mcp-mesh/mcp-mesh-agent \
--version 1.2.0 \
-n trip-planner \
-f "helm/values-${agent}.yaml"
done
Installing flight-agent...
Installing hotel-agent...
Installing weather-agent...
Installing poi-agent...
Installing user-prefs-agent...
Installing chat-history-agent...
Installing claude-provider...
Installing openai-provider...
Installing planner-agent...
Installing gateway...
Installing budget-analyst...
Installing adventure-advisor...
Installing logistics-planner...
minikube image pull
If you built images with eval $(minikube docker-env), add --set image.pullPolicy=Never to each helm install so Kubernetes uses the local images instead of trying to pull from a registry.
Port strategy¶
On Day 8, each agent had a unique port (9101, 9102, ...) because all containers shared the host network. In Kubernetes, each pod has its own IP address, so every agent listens on port 8080. The Helm chart sets MCP_MESH_HTTP_PORT=8080 as an environment variable, which overrides the http_port in the @mesh.agent decorator. Your code does not change.
Part 5: Verify¶
Check pods¶
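List everything in the namespace:

```shell
$ kubectl -n trip-planner get pods
```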
NAME READY STATUS AGE
adventure-advisor-mcp-mesh-agent-b5fcb5d9-tw48r 1/1 Running 30s
budget-analyst-mcp-mesh-agent-6cdfc8c5c5-bmr9d 1/1 Running 30s
chat-history-agent-mcp-mesh-agent-57b497ffc9-6dgd4 1/1 Running 30s
claude-provider-mcp-mesh-agent-55756498b9-9sndc 1/1 Running 30s
flight-agent-mcp-mesh-agent-5df865b559-jc6cx 1/1 Running 30s
gateway-mcp-mesh-agent-79cbcf7d88-wxng4 1/1 Running 30s
hotel-agent-mcp-mesh-agent-94d8f8b8-dnfh8 1/1 Running 30s
logistics-planner-mcp-mesh-agent-5db8d9555-ndjff 1/1 Running 30s
mcp-core-mcp-mesh-grafana-6d7b9f68d6-rhbqx 1/1 Running 6m
mcp-core-mcp-mesh-postgres-0 1/1 Running 6m
mcp-core-mcp-mesh-redis-7df8848cb7-bdlqs 1/1 Running 6m
mcp-core-mcp-mesh-registry-8448c85b75-4p9h7 1/1 Running 6m
mcp-core-mcp-mesh-tempo-5d8d4cbb49-gmqpd 1/1 Running 6m
openai-provider-mcp-mesh-agent-7cfd4b55bb-stqwr 1/1 Running 30s
planner-agent-mcp-mesh-agent-54876f44f4-6cp87 1/1 Running 30s
poi-agent-mcp-mesh-agent-b7fcf4864-gmslk 1/1 Running 30s
user-prefs-agent-mcp-mesh-agent-c4746c7c8-vz5bh 1/1 Running 30s
weather-agent-mcp-mesh-agent-875b6477c-wvrkv 1/1 Running 30s
Eighteen pods: five infrastructure, thirteen agents. All 1/1 Running.
Check services¶
Every agent has a ClusterIP service on port 8080. The gateway has a NodePort service so you can reach it from outside the cluster.
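List them:

```shell
$ kubectl -n trip-planner get svc
```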
Check agent registration¶
Port-forward the registry and use meshctl list:
$ kubectl -n trip-planner port-forward svc/mcp-core-mcp-mesh-registry 8000:8000 &
$ meshctl list --registry-url http://localhost:8000
Registry: running (http://localhost:8000) - 13 healthy
NAME RUNTIME TYPE STATUS DEPS ENDPOINT
adventure-advisor-491aeceb Python Agent healthy 0/0 adventure-advisor-mcp-mesh-agent.trip-planner:8080
budget-analyst-bbde0bf2 Python Agent healthy 0/0 budget-analyst-mcp-mesh-agent.trip-planner:8080
chat-history-agent-e6fe4291 Python Agent healthy 0/0 chat-history-agent-mcp-mesh-agent.trip-planner:8080
claude-provider-de41d665 Python Agent healthy 0/0 claude-provider-mcp-mesh-agent.trip-planner:8080
flight-agent-b5a0bfb6 Python Agent healthy 1/1 flight-agent-mcp-mesh-agent.trip-planner:8080
gateway-api-b7080b01 Python API healthy 1/1 gateway-mcp-mesh-agent.trip-planner:8080
hotel-agent-db0a6b18 Python Agent healthy 0/0 hotel-agent-mcp-mesh-agent.trip-planner:8080
logistics-planner-5fd4a0e7 Python Agent healthy 0/0 logistics-planner-mcp-mesh-agent.trip-planner:8080
openai-provider-b32513de Python Agent healthy 0/0 openai-provider-mcp-mesh-agent.trip-planner:8080
planner-agent-9b662efc Python Agent healthy 5/5 planner-agent-mcp-mesh-agent.trip-planner:8080
poi-agent-2ccdd8e5 Python Agent healthy 1/1 poi-agent-mcp-mesh-agent.trip-planner:8080
user-prefs-agent-3bfc1af9 Python Agent healthy 0/0 user-prefs-agent-mcp-mesh-agent.trip-planner:8080
weather-agent-b8c26c65 Python Agent healthy 0/0 weather-agent-mcp-mesh-agent.trip-planner:8080
Thirteen agents, all healthy. The planner resolves all five dependencies (5/5). The gateway resolves its single dependency (1/1). Endpoints use Kubernetes DNS names -- <service>.<namespace>:<port> -- which resolve automatically within the cluster.
Call the gateway¶
Port-forward the gateway and send a request:
$ kubectl -n trip-planner port-forward svc/gateway-mcp-mesh-agent 8080:8080 &
$ curl -s http://localhost:8080/health
$ curl -s -X POST http://localhost:8080/plan \
-H "Content-Type: application/json" \
-H "X-Session-Id: k8s-test-1" \
-d '{"destination":"Kyoto","dates":"June 1-5, 2026","budget":"$2000"}' \
| python -m json.tool
The response includes the full trip plan with specialist insights -- the same output you saw on Day 7 and Day 8, now served from Kubernetes pods.
Call a tool directly¶
You can also call individual tools through the registry, the same way you did on Day 1:
$ meshctl call flight_search \
'{"origin":"SFO","destination":"NRT","date":"2026-06-01"}' \
--registry-url http://localhost:8000
{
"result": [
{
"carrier": "MH",
"flight": "MH007",
"origin": "SFO",
"destination": "NRT",
"date": "2026-06-01",
"depart": "09:15",
"arrive": "14:40",
"price_usd": 842
},
{
"carrier": "SQ",
"flight": "SQ017",
"origin": "SFO",
"destination": "NRT",
"date": "2026-06-01",
"depart": "11:50",
"arrive": "17:05",
"price_usd": 901
}
]
}
The same stub data. The same function. Running in a Kubernetes pod.
Optional: Ingress¶
Instead of port-forwarding, you can expose the gateway via Ingress. On minikube, enable the ingress addon:
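```shell
$ minikube addons enable ingress
```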
Apply the ingress manifest:
# Ingress for the TripPlanner gateway
# Requires: minikube addons enable ingress (or an ingress controller)
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: trip-planner-gateway
namespace: trip-planner
annotations:
nginx.ingress.kubernetes.io/ssl-redirect: "false"
spec:
ingressClassName: nginx
rules:
- host: trip-planner.local
http:
paths:
- path: /
pathType: Prefix
backend:
service:
name: gateway-mcp-mesh-agent
port:
number: 8080
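Apply it with kubectl (the filename here is an assumption -- use whatever path you saved the manifest to):

```shell
$ kubectl apply -f ingress.yaml
```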
Add the hostname to your /etc/hosts:
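On minikube, for example (the ingress IP comes from minikube ip; on a cloud cluster use your load balancer's address instead):

```shell
$ echo "$(minikube ip) trip-planner.local" | sudo tee -a /etc/hosts
```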
Then call the gateway via the ingress:
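```shell
$ curl -s http://trip-planner.local/health
```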
What changed from Day 8¶
| Aspect | Day 8 (Docker Compose) | Day 9 (Kubernetes) |
|---|---|---|
| Agent code | Identical | Identical |
| Orchestrator | docker compose up | helm install |
| Port strategy | Unique ports (9101, 9102...) | All agents on 8080 |
| Secrets | .env file | Kubernetes Secret |
| Networking | Docker bridge network | Kubernetes DNS |
| Health probes | Docker health checks | k8s liveness/readiness |
| Scaling | Manual (docker compose up --scale) | kubectl scale or HPA |
The agent code column is the important one. It says "Identical" twice.
Clean up¶
$ helm uninstall gateway -n trip-planner
$ helm uninstall planner-agent -n trip-planner
$ # ... (repeat for all agents, or use the teardown script)
$ # Or use the provided teardown script:
$ ./helm/teardown.sh
The teardown script uninstalls all Helm releases and deletes the namespace:
=== Uninstalling agents ===
Removed flight-agent
Removed hotel-agent
...
=== Uninstalling core ===
Removed mcp-core
=== Deleting namespace ===
namespace "trip-planner" deleted
=== Done ===
Troubleshooting¶
Image pull errors. On minikube, build images inside minikube's Docker daemon (eval $(minikube docker-env)) and set image.pullPolicy=Never in the Helm install. On cloud clusters, push images to your container registry and update image.repository in the values files.
Pod in CrashLoopBackOff. Check the logs:
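For example, for the flight agent (Deployment names follow the release-mcp-mesh-agent pattern visible in the pod list above):

```shell
$ kubectl -n trip-planner logs deployment/flight-agent-mcp-mesh-agent
```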
Common causes: missing secrets (the llm-keys Secret was not created), missing dependencies (Redis not ready before chat-history-agent starts), or import errors in agent code.
meshctl list shows no agents. Make sure the registry port-forward is running:
$ kubectl -n trip-planner port-forward svc/mcp-core-mcp-mesh-registry 8000:8000 &
$ meshctl list --registry-url http://localhost:8000
Gateway returns "capability unavailable". The planner or its dependencies have not registered yet. Wait 30 seconds for all agents to complete registration, then retry.
Ingress not working. Verify the ingress controller is running:
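On minikube the addon runs in the ingress-nginx namespace:

```shell
$ kubectl -n ingress-nginx get pods
```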
Check the ingress resource:
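```shell
$ kubectl -n trip-planner describe ingress trip-planner-gateway
```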
Recap¶
You deployed all thirteen trip planner agents to Kubernetes using two Helm charts: mcp-mesh-core for infrastructure and mcp-mesh-agent for each agent. The agent code is identical to Day 8. The only new files are the Helm values files -- and meshctl scaffold generated those on Day 1.
The DDDI pattern delivered on its promise: the function you wrote on Day 1 runs in Kubernetes without modification. The decorators handle registration. The Helm chart handles deployment. The registry handles discovery. Your code handles your business logic.
See also¶
- meshctl man deployment -- local, Docker, and Kubernetes deployment patterns
- meshctl man security -- TLS, entity trust, and certificate management for production clusters
- Kubernetes basics -- reference guide for Helm charts and common operations
Next up¶
Day 10 wraps up the tutorial -- a celebration of what you built, production readiness pointers, and open-ended challenges for where to go from here.