[My journey to CCIE Automation #10] From Docker Compose to Kubernetes

In blog post #10 of the My journey to CCIE Automation series, I move from Docker Compose to Kubernetes. Using the Nautix platform as a starting point, I demonstrate how parts of the solution can be migrated to a local Kubernetes environment using kind, and how this maps directly to the requirements in CCIE Automation Blueprint 4.3 – covering declarative deployments, secrets management, ingress, health checks, and lifecycle operations.

Bjørnar Lintvedt, Senior Network Engineer

Senior Network Engineer with a focus on networking, security, and automation.

(This article was originally published on Bluetree.no. Following the merger of Sicra and Bluetree, content from Bluetree has now been migrated to Sicra.)

[My journey to CCIE Automation #10] From Docker Compose to Kubernetes is part of a series documenting my CCIE Automation journey. In the previous post, I focused on secure coding practices in Python. In this post, I take the next step by migrating parts of the platform from Docker Compose to Kubernetes.

Blog #10

Until now, Nautix has been running on Docker Compose. That worked well in the early stages, but for Blueprint 4.3 – Package and deploy a solution by using Kubernetes – it was time to take the next step.

In this post, I’ll show how I migrated parts of Nautix to Kubernetes, running locally on my development laptop, and how that maps directly to the CCIE Automation blueprint.

Why Kubernetes?

Docker Compose is great for local development, but Kubernetes gives you:

  • Declarative deployments

  • Built-in health checking and self-healing

  • Native secrets and configuration management

  • Service discovery and load balancing

  • A consistent operational model across environments

Local Kubernetes setup using kind

For local development, I chose kind (Kubernetes IN Docker).

Why kind?

  • Lightweight and fast

  • Runs entirely inside Docker

  • Perfect for labs and experimentation

  • Uses real Kubernetes APIs and tooling

Prerequisites

  • Docker Desktop (with WSL2 integration)

  • kubectl

  • kind 
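Before creating the cluster, it can be worth a quick check that the tooling is actually available on the machine, for example:

docker version
kubectl version --client
kind version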

Create a kind cluster with ingress support

I created the cluster with explicit port mappings so the ingress controller could be reached from my laptop:

# kind-nautix.yaml
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
- role: control-plane
  extraPortMappings:
  - containerPort: 80
    hostPort: 80
  - containerPort: 443
    hostPort: 443


Create the cluster:

kind create cluster --name nautix --config kind-nautix.yaml


Install ingress-nginx:

kubectl apply -f \
https://raw.githubusercontent.com/kubernetes/ingress-nginx/main/deploy/static/provider/kind/deploy.yaml

At this point, I had a fully functional Kubernetes cluster running locally.
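To confirm that the cluster and the ingress controller were ready, checks along these lines can be used (kind names the kubectl context kind-<cluster-name>, so kind-nautix here; the wait command follows the pattern commonly used for ingress-nginx on kind):

kubectl cluster-info --context kind-nautix
kubectl get nodes

# Wait for the ingress-nginx controller pod to report Ready
kubectl wait --namespace ingress-nginx \
  --for=condition=ready pod \
  --selector=app.kubernetes.io/component=controller \
  --timeout=90s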

Namespace: isolating Nautix

First, I created a dedicated namespace:

apiVersion: v1
kind: Namespace
metadata:
  name: nautix

Apply it and make it the default namespace for the current context:

kubectl apply -f k8s/00-namespace.yaml
kubectl config set-context --current --namespace=nautix

Namespaces are a simple but powerful concept — they provide isolation and make it much easier to reason about resources.
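With the namespace set as the default for the current context, checking what lives in it at any point is a single command (empty right after creation, populated once the workloads below are applied):

kubectl get all -n nautix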

Handling secrets

In Docker Compose, secrets often end up in .env files.
In Kubernetes, secrets are first-class objects.

Instead of committing secrets to Git, I created them imperatively using kubectl:

kubectl create secret generic nautix-secrets \
  --from-literal=VAULT_DEV_ROOT_TOKEN_ID=root-token \
  -n nautix

This creates a native Kubernetes Secret that can be safely referenced by Deployments without storing sensitive values in source control.
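To sanity-check the Secret without echoing it around, something like this works (the jsonpath expression pulls out the single key created above, which is stored base64-encoded in the Secret object):

kubectl get secret nautix-secrets -n nautix
# Decode one key for verification -- acceptable for a dev token, not for real secrets
kubectl get secret nautix-secrets -n nautix \
  -o jsonpath='{.data.VAULT_DEV_ROOT_TOKEN_ID}' | base64 -d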

Deploying the Inventory service

The Inventory service is a stateless Flask API, making it a perfect candidate for a Deployment.


Inventory Deployment

apiVersion: apps/v1
kind: Deployment
metadata:
  name: inventory
  namespace: nautix
spec:
  replicas: 1
  selector:
    matchLabels:
      app: inventory
  template:
    metadata:
      labels:
        app: inventory
    spec:
      containers:
      - name: inventory
        image: nautix-inventory:dev
        ports:
        - containerPort: 8000
        readinessProbe:
          httpGet:
            path: /healthz
            port: 8000
          initialDelaySeconds: 3
        livenessProbe:
          httpGet:
            path: /healthz
            port: 8000
          initialDelaySeconds: 10


Inventory Service

apiVersion: v1
kind: Service
metadata:
  name: inventory
  namespace: nautix
spec:
  selector:
    app: inventory
  ports:
  - port: 8000
  type: ClusterIP

The Deployment manages pod lifecycle and replicas, while the Service provides a stable network endpoint inside the cluster.
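One way to verify the Service from inside the cluster, before any Ingress exists, is a throwaway curl pod. A sketch of that check, using the public curlimages/curl image as one convenient option:

kubectl run curl-test -n nautix --rm -it --restart=Never \
  --image=curlimages/curl -- \
  curl -s http://inventory:8000/healthz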

Deploying Vault with persistent storage

Vault runs in dev mode for this lab, but I still wanted to demonstrate volumes and persistent storage.


PersistentVolumeClaim

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: vault-data
  namespace: nautix
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi


Vault Deployment

apiVersion: apps/v1
kind: Deployment
metadata:
  name: vault
  namespace: nautix
spec:
  replicas: 1
  selector:
    matchLabels:
      app: vault
  template:
    metadata:
      labels:
        app: vault
    spec:
      containers:
      - name: vault
        image: vault:1.8.7
        ports:
        - containerPort: 8200
        env:
        - name: VAULT_DEV_LISTEN_ADDRESS
          value: "0.0.0.0:8200"
        - name: VAULT_DEV_ROOT_TOKEN_ID
          valueFrom:
            secretKeyRef:
              name: nautix-secrets
              key: VAULT_DEV_ROOT_TOKEN_ID
        volumeMounts:
        - name: data
          mountPath: /vault/file
      volumes:
      - name: data
        persistentVolumeClaim:
          claimName: vault-data


Vault Service

apiVersion: v1
kind: Service
metadata:
  name: vault
  namespace: nautix
spec:
  selector:
    app: vault
  ports:
  - port: 8200
  type: ClusterIP
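After applying the Vault manifests, it's worth confirming that the PersistentVolumeClaim actually binds; kind ships with a default storage class that should provision the volume automatically:

kubectl get pvc -n nautix
kubectl get pods -l app=vault -n nautix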

Exposing services with Ingress

To expose the services externally, I used host-based routing via Ingress:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: nautix
  namespace: nautix
spec:
  ingressClassName: nginx
  rules:
  - host: inventory.nautix.local
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: inventory
            port:
              number: 8000
  - host: vault.nautix.local
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: vault
            port:
              number: 8200

With the right host entries on my laptop (see the sketch below), I could now reach:

  • http://inventory.nautix.local

  • http://vault.nautix.local
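Since the kind node publishes ports 80 and 443 on localhost, the host entries can simply point both names at 127.0.0.1, roughly like this:

# /etc/hosts (on Windows: C:\Windows\System32\drivers\etc\hosts)
127.0.0.1  inventory.nautix.local
127.0.0.1  vault.nautix.local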

Spinning everything up

All the manifests live in the k8s/ directory, so the whole stack can be applied with a single command:

kubectl apply -f k8s/


Verify:

kubectl get pods
kubectl get svc
kubectl get ingress

At this point, both Inventory and Vault were running fully inside Kubernetes.
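From there, a quick end-to-end check through the ingress might look like this (assuming the host entries above are in place; /v1/sys/health is Vault's standard health endpoint):

curl http://inventory.nautix.local/healthz
curl http://vault.nautix.local/v1/sys/health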

Managing pod lifecycle with kubectl

This is where Kubernetes really shines.


Scaling

kubectl scale deploy inventory --replicas=3
kubectl get pods -l app=inventory


Logs

kubectl logs deploy/inventory
kubectl logs -f deploy/inventory


Self-healing

kubectl delete pod <inventory-pod>

The pod is automatically recreated — no manual intervention required.
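Watching the pods while deleting one makes the ReplicaSet's behaviour visible: the deleted pod terminates and a replacement is scheduled within seconds.

# In one terminal: watch pod changes live
kubectl get pods -l app=inventory -w

# In another terminal: delete one of the pods
kubectl delete pod <inventory-pod>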

Health checks and self-healing

Instead of Docker Compose's depends_on, Kubernetes uses readiness and liveness probes.

  • Readiness controls whether traffic is sent to a pod

  • Liveness controls when a pod should be restarted

This makes the platform far more resilient by default.
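Probe configuration and any probe failures show up directly in the pod description and in the namespace events, which is usually the first place to look when a pod isn't receiving traffic or keeps restarting:

kubectl describe pod -l app=inventory -n nautix
kubectl get events -n nautix --sort-by=.lastTimestamp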

Closing thoughts

By migrating even a small part of Nautix to Kubernetes, I was able to demonstrate every requirement in Blueprint 4.3 using real workloads:

  • Declarative deployments

  • Secure secret handling

  • Persistent storage

  • Ingress routing

  • Health checks

  • Scaling and lifecycle management

  • Full control via kubectl

Useful Links

  • GitLab Repo – My CCIE Automation Code
  • Kubernetes documentation

Blog series

  • [My Journey to CCIE Automation #1] Intro + Building a Python CLI app

  • [My Journey to CCIE Automation #2] Building an Inventory REST API + Docker

  • [My Journey to CCIE Automation #3] Orchestration API + NETCONF

  • [My Journey to CCIE Automation #4] Automating Network Discovery and Reports with Python & Ansible

  • [My Journey to CCIE Automation #5] Building Network Pipelines for Reliable Changes with pyATS & GitLab CI

  • [My Journey to CCIE Automation #6] Automating Cisco ACI Deployments with Terraform, Vault, and GitLab CI

  • [My Journey to CCIE Automation #7] Exploring Model-Driven Telemetry for Real-Time Network Insights

  • [My Journey to CCIE Automation #8] Exploring ThousandEyes and Automating Enterprise Agent Deployment

  • [My Journey to CCIE Automation #9] Applying OWASP Secure Coding Practices
