In blog post #10 of the My Journey to CCIE Automation series, I move from Docker Compose to Kubernetes. Using the Nautix platform as a starting point, I show how parts of the solution can be migrated to a local Kubernetes environment with kind, and how this maps directly to the requirements in CCIE Automation Blueprint 4.3: declarative deployments, secrets management, ingress, health checks, and lifecycle operations.
![My journey to CCIE Automation #10: From Docker Compose to Kubernetes](https://sicra.no/hs-fs/hubfs/two_guys_working_on_a_computer.jpg?width=1024&height=576&name=two_guys_working_on_a_computer.jpg)
(This article was originally published on Bluetree.no. Following the merger of Sicra and Bluetree, content from Bluetree has now been migrated to Sicra.)
[My journey to CCIE Automation #10] From Docker Compose to Kubernetes is part of a series documenting my CCIE Automation journey. In the previous post, I focused on secure coding practices in Python. In this post, I take the next step by migrating parts of the platform from Docker Compose to Kubernetes.
Up until now, Nautix has been running using Docker Compose. That worked well in the early stages, but for Blueprint 4.3 – Package and deploy a solution by using Kubernetes, it was time to take the next step.
In this post, I’ll show how I migrated parts of Nautix to Kubernetes, running locally on my development laptop, and how that maps directly to the CCIE Automation blueprint.
Docker Compose is great for local development, but Kubernetes gives you:
Declarative deployments
Built-in health checking and self-healing
Native secrets and configuration management
Service discovery and load balancing
A consistent operational model across environments
For local development, I chose kind (Kubernetes IN Docker).
Why kind?
Lightweight and fast
Runs entirely inside Docker
Perfect for labs and experimentation
Uses real Kubernetes APIs and tooling
Prerequisites
Docker Desktop (with WSL2 integration)
kubectl
kind
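A quick sanity check that the tooling is in place before building anything (the exact versions are just whatever happens to be installed):
docker version
kubectl version --client
kind version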
I created the cluster with explicit port mappings so the ingress controller could be reached from my laptop:
# kind-nautix.yaml
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
- role: control-plane
  extraPortMappings:
    - containerPort: 80
      hostPort: 80
    - containerPort: 443
      hostPort: 443
Create the cluster:
kind create cluster --name nautix --config kind-nautix.yaml
Install ingress-nginx:
kubectl apply -f \
https://raw.githubusercontent.com/kubernetes/ingress-nginx/main/deploy/static/provider/kind/deploy.yaml
At this point, I had a fully functional Kubernetes cluster running locally.
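A couple of quick checks I find useful here (kind names the kubeconfig context kind-<cluster>, so kind-nautix in this case):
kubectl cluster-info --context kind-nautix
kubectl get nodes
kubectl get pods -n ingress-nginx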
First, I created a dedicated namespace:
apiVersion: v1
kind: Namespace
metadata:
  name: nautix
kubectl apply -f k8s/00-namespace.yaml
kubectl config set-context --current --namespace=nautix
Namespaces are a simple but powerful concept — they provide isolation and make it much easier to reason about resources.
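To double-check which namespace the current context now points at, something like this works:
kubectl config view --minify -o jsonpath='{..namespace}'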
In Docker Compose, secrets often end up in .env files.
In Kubernetes, secrets are first-class objects.
Instead of committing secrets to Git, I created them imperatively using kubectl:
kubectl create secret generic nautix-secrets \
--from-literal=VAULT_DEV_ROOT_TOKEN_ID=root-token \
-n nautix
This creates a native Kubernetes Secret that can be safely referenced by Deployments without storing sensitive values in source control.
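Worth remembering: Secret values are base64-encoded, not encrypted at rest by default. For lab purposes, the value can be read back like this:
kubectl -n nautix get secret nautix-secrets \
  -o jsonpath='{.data.VAULT_DEV_ROOT_TOKEN_ID}' | base64 -d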
The Inventory service is a stateless Flask API, making it a perfect candidate for a Deployment.
Inventory Deployment
apiVersion: apps/v1
kind: Deployment
metadata:
  name: inventory
  namespace: nautix
spec:
  replicas: 1
  selector:
    matchLabels:
      app: inventory
  template:
    metadata:
      labels:
        app: inventory
    spec:
      containers:
        - name: inventory
          image: nautix-inventory:dev
          ports:
            - containerPort: 8000
          readinessProbe:
            httpGet:
              path: /healthz
              port: 8000
            initialDelaySeconds: 3
          livenessProbe:
            httpGet:
              path: /healthz
              port: 8000
            initialDelaySeconds: 10
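One detail worth noting: nautix-inventory:dev is a locally built image, so the kind nodes cannot pull it from a registry. Assuming the image exists in the local Docker daemon, it can be loaded into the cluster like this:
kind load docker-image nautix-inventory:dev --name nautix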
Inventory Service
apiVersion: v1
kind: Service
metadata:
  name: inventory
  namespace: nautix
spec:
  selector:
    app: inventory
  ports:
    - port: 8000
  type: ClusterIP
The Deployment manages pod lifecycle and replicas, while the Service provides a stable network endpoint inside the cluster.
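Before wiring up ingress, a quick way to smoke-test the service from the laptop is a port-forward (assuming the Flask app answers on /healthz, as the probes above expect):
kubectl -n nautix port-forward svc/inventory 8000:8000
# in another terminal
curl http://localhost:8000/healthz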
Vault runs in dev mode for this lab, but I still wanted to demonstrate volumes and persistent storage.
PersistentVolumeClaim
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: vault-data
  namespace: nautix
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
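On kind, the default standard StorageClass typically uses WaitForFirstConsumer binding, so the claim may show as Pending until the Vault pod is scheduled; that is expected:
kubectl -n nautix get pvc vault-data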
Vault Deployment
apiVersion: apps/v1
kind: Deployment
metadata:
  name: vault
  namespace: nautix
spec:
  replicas: 1
  selector:
    matchLabels:
      app: vault
  template:
    metadata:
      labels:
        app: vault
    spec:
      containers:
        - name: vault
          image: vault:1.8.7
          ports:
            - containerPort: 8200
          env:
            - name: VAULT_DEV_LISTEN_ADDRESS
              value: "0.0.0.0:8200"
            - name: VAULT_DEV_ROOT_TOKEN_ID
              valueFrom:
                secretKeyRef:
                  name: nautix-secrets
                  key: VAULT_DEV_ROOT_TOKEN_ID
          volumeMounts:
            - name: data
              mountPath: /vault/file
      volumes:
        - name: data
          persistentVolumeClaim:
            claimName: vault-data
Vault Service
apiVersion: v1
kind: Service
metadata:
  name: vault
  namespace: nautix
spec:
  selector:
    app: vault
  ports:
    - port: 8200
  type: ClusterIP
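To confirm that Vault actually came up unsealed in dev mode, the CLI inside the pod can be used (VAULT_ADDR has to point at the plain-HTTP dev listener):
kubectl -n nautix exec deploy/vault -- \
  sh -c 'VAULT_ADDR=http://127.0.0.1:8200 vault status'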
To expose the services externally, I used host-based routing via Ingress:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: nautix
  namespace: nautix
spec:
  ingressClassName: nginx
  rules:
    - host: inventory.nautix.local
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: inventory
                port:
                  number: 8000
    - host: vault.nautix.local
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: vault
                port:
                  number: 8200
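For the host-based routing above to work from the laptop, both hostnames simply need to resolve to 127.0.0.1, where kind maps ports 80 and 443. A minimal hosts-file entry (path shown for Linux/WSL; on Windows it lives under C:\Windows\System32\drivers\etc\hosts):
# /etc/hosts
127.0.0.1 inventory.nautix.local
127.0.0.1 vault.nautix.local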
With host entries like these in place on my laptop, the two services become reachable at:
http://inventory.nautix.local
http://vault.nautix.local
Deploying the whole stack is a single command:
kubectl apply -f k8s/
Verify:
kubectl get pods
kubectl get svc
kubectl get ingress
At this point, both Inventory and Vault were running fully inside Kubernetes.
This is where Kubernetes really shines.
Scaling
kubectl scale deploy inventory --replicas=3
kubectl get pods -l app=inventory
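To watch the scale-out complete and then scale back down, something like this does the job:
kubectl rollout status deploy/inventory
kubectl scale deploy inventory --replicas=1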
Logs
kubectl logs deploy/inventory
kubectl logs -f deploy/inventory
Self-healing
kubectl delete pod <inventory-pod>
The pod is automatically recreated — no manual intervention required.
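Watching the ReplicaSet do its job in real time makes the point nicely:
kubectl get pods -l app=inventory -w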
Instead of Docker Compose's depends_on, Kubernetes relies on readiness and liveness probes:
Readiness controls whether traffic is sent to a pod
Liveness controls when a pod should be restarted
This makes the platform far more resilient by default.
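The probe configuration and any probe failures are easy to inspect, for example:
kubectl describe deploy inventory
kubectl get events -n nautix --sort-by=.lastTimestamp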
By migrating even a small part of Nautix to Kubernetes, I was able to demonstrate every requirement in Blueprint 4.3 using real workloads:
Declarative deployments
Secure secret handling
Persistent storage
Ingress routing
Health checks
Scaling and lifecycle management
Full control via kubectl
[My Journey to CCIE Automation #1] Intro + Building a Python CLI app
[My Journey to CCIE Automation #2] Building an Inventory REST API + Docker
[My Journey to CCIE Automation #3] Orchestration API + NETCONF
[My Journey to CCIE Automation #4] Automating Network Discovery and Reports with Python & Ansible
[My Journey to CCIE Automation #7] Exploring Model-Driven Telemetry for Real-Time Network Insights
[My Journey to CCIE Automation #8] Exploring ThousandEyes and Automating Enterprise Agent Deployment
[My Journey to CCIE Automation #9] Applying OWASP Secure Coding Practices



