17.09.2025

[My journey to CCIE Automation #5] Building network pipelines for reliable changes with pyATS and GitLab CI

In blog #5 of the CCIE Automation journey, the focus shifts to network verification and CI/CD. Using pyATS and GitLab CI, a pipeline is built to automate prechecks, deployments, and postchecks, enabling repeatable and reliable network changes.

Bjørnar Lintvedt, Senior Network Engineer

Senior Network Engineer focused on networking, security, and automation.

(This article was originally published on Bluetree.no. Following the merger of Sicra and Bluetree, content from Bluetree has now been migrated to Sicra.)

[My journey to CCIE Automation #5] Building network pipelines for reliable changes with pyATS and GitLab CI is part of an ongoing CCIE Automation series. In the previous post, I automated network discovery and reporting with Python and Ansible. This time, I focus on network verification and CI/CD using pyATS and GitLab CI.

Blog #5

After playing with Ansible in blog #4, I wanted to move into something closer to network verification and CI/CD integration.

That’s where pyATS and GitLab CI come in.

Why pyATS?

pyATS is Cisco’s Python framework for testing and validating network devices.
It allows you to:

  • Parse command outputs into structured data

  • Learn entire device features (like routing, interfaces, logging)

  • Write automated prechecks/postchecks

  • Integrate with CI pipelines

In other words: instead of reading CLI output, your code can understand the network state.
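To make that concrete, here is a small illustration of my own (not from the original repo): pyATS/Genie's `device.parse()` returns command output as nested dictionaries, so your code can work with keys and values instead of text. The dict shape below is a simplified assumption, not the exact Genie schema.

```python
# Sketch only: pyATS device.parse("show ...") returns nested dicts.
# The dict shape below is a simplified assumption, not the exact Genie schema.

def interface_oper_status(parsed: dict) -> dict:
    """Map interface name -> operational status from an (assumed) parsed structure."""
    return {name: data.get("oper_status", "unknown")
            for name, data in parsed.get("interfaces", {}).items()}

# With a live device you would do something like:
#   parsed = device.parse("show interfaces")   # requires a connected pyATS device
parsed = {
    "interfaces": {
        "GigabitEthernet1": {"oper_status": "up"},
        "GigabitEthernet2": {"oper_status": "down"},
    }
}
print(interface_oper_status(parsed))
# -> {'GigabitEthernet1': 'up', 'GigabitEthernet2': 'down'}
```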

Why GitLab CI?

GitLab CI lets us run pipelines whenever we push code or trigger jobs. Instead of manually running prechecks on a lab or production network, we let GitLab:

  1. Spin up a container with pyATS

  2. Connect to our devices (from inventory/testbed files)

  3. Run verification jobs

  4. Collect results and make them available as artifacts

This brings DevOps-style repeatability to network engineering.

This time's project

I built a GitLab CI pipeline with four stages:

  • Setup
    Gets all hosts from the Inventory API (a service created in an earlier post, see the GitLab repository) and generates a pyATS testbed file used by the later stages

  • Prechecks
    Verifies whether the syslog servers are already configured

  • Deploy
    Configures the syslog servers on the devices

  • Postchecks
    Verifies that all the syslog servers are configured
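For reference, the testbed file generated by the setup stage could look roughly like this. This is a minimal sketch of the pyATS testbed format with placeholder names and addresses; the real file is generated from the Inventory API and may differ.

```yaml
# Minimal pyATS testbed sketch (placeholder values, not from the actual repo)
devices:
  router1:
    os: iosxe
    connections:
      cli:
        protocol: ssh
        ip: 10.0.0.1
    credentials:
      default:
        username: "%ENV{DEVICE_USERNAME}"
        password: "%ENV{DEVICE_PASSWORD}"
```

The `%ENV{...}` markup lets pyATS read credentials from environment variables, which fits nicely with the `DEVICE_USERNAME`/`DEVICE_PASSWORD` CI variables used in the pipeline.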

The pipeline is triggered by creating a pipeline with the following variables:

[Screenshot: pipeline trigger variables]

The pipeline will then start to run the different stages:

[Screenshot: pipeline stages running]

For each stage you can open the logs and see in detail what is happening. As you can see, the precheck stage has a warning. By opening the precheck log you can see that some of the tests failed:

[Screenshot: precheck log with failed tests]

3 of the 4 syslog servers that will be deployed in the next stage are not in the current config on the network devices.

I also made the test results visible in the GitLab Web UI:

[Screenshot: test results in the GitLab Web UI]

Implementation

Here's the file and folder structure:

pipelines/jobs/add_syslog/
  • precheck_job.py – Run prechecks using pyATS
  • precheck_test_syslog.py – Test class verifying syslog server presence
  • deploy_job.py – Deployment wrapper
  • deploy_syslog.py – Actual syslog deployment logic
  • postcheck_job.py – Run postchecks
  • postcheck_test_syslog.py – Test verifying syslog server after deployment

Also updated:

  • .gitlab-ci.yml – Pipeline definition
  • docker-compose.yml – Environment for local dev/test

How it works

Let’s break it down step by step.

1. GitLab CI

The .gitlab-ci.yml ties it all together and defines how the pipeline is executed (I have removed some of the code to show only the relevant parts; see the repo for the full file):

stages:
  - setup
  - precheck
  - deploy
  - postcheck


# The pipeline should only run when created manually in the GitLab Web UI
workflow:
  rules:
    - if: $CI_PIPELINE_SOURCE == "web"
      when: always
    - when: never

...


# Set default image to run by gitlab runner with the required software

default:
  image: python:3.9-slim
  before_script:
    ...
    - apt-get update
    - apt-get install -y --no-install-recommends openssh-client iputils-ping
   ...
    - pip install --upgrade pip
    - pip install -r pipelines/requirements.txt


# Setup stage runs a script that gets devices from InventoryAPI and generates a testbed file for the next stages

setup:
  stage: setup
  tags: ["nautix"]
  variables:
    DEVICE_USERNAME: $DEVICE_USERNAME
    DEVICE_PASSWORD: $DEVICE_PASSWORD
    SYSLOG_SERVERS: $SYSLOG_SERVERS
  script:
    - python pipelines/get_devices_from_inventory.py --outfile build/testbed.yml
  artifacts:
    paths:
      - build/testbed.yml
    expire_in: 1 week


# Precheck stage allows tests to fail, as it only checks whether any of the syslog servers are configured

precheck:
  stage: precheck
  needs: ["setup"]
  tags: ["nautix"]
  allow_failure: true
  script:
    - mkdir -p build/reports
    - pyats run job pipelines/jobs/$JOB_NAME/precheck_job.py --testbed build/testbed.yml --xunit build/reports/precheck
  artifacts:
    when: always
    paths:
      - build/reports/
    reports:
      junit:
        - build/reports/precheck/*.xml
# Deploy stage will configure the devices in testbed file with the syslog servers provided

deploy:
  stage: deploy
  needs: 
    - job: setup
      artifacts: true
    - job: precheck
  tags: ["nautix"]
  script:
    - mkdir -p build/reports
    - pyats run job pipelines/jobs/$JOB_NAME/deploy_job.py --testbed build/testbed.yml --xunit build/reports/deploy
  artifacts:
    when: always
    paths:
      - build/reports/
    reports:
      junit:
        - build/reports/deploy/*.xml


# Postcheck stage will verify if the syslog servers are configured. It will mark the pipeline as failed if any of the syslog servers are missing

postcheck:
  stage: postcheck
  needs: 
    - job: setup
      artifacts: true
    - job: deploy
    - job: precheck
  tags: ["nautix"]
  script:
    - mkdir -p build/reports
    - pyats run job pipelines/jobs/$JOB_NAME/postcheck_job.py --testbed build/testbed.yml --xunit build/reports/postcheck
  artifacts:
    when: always
    paths:
      - build/reports/
    reports:
      junit:
        - build/reports/postcheck/*.xml

2. Precheck job

We first check whether the required syslog servers are already configured. There are two loops here: one at function level that iterates over the syslog servers provided, and an inner one that iterates over all the devices in the testbed file.

[Screenshot: precheck job code]
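Since the precheck code is only shown as a screenshot, here is a rough sketch of the idea, my own reconstruction rather than the repo's exact code. The core check can be isolated in a small pure function that an aetest testcase would feed with live device output:

```python
# Sketch of the precheck idea (my reconstruction, not the repo's exact code).
def missing_syslog_servers(running_config: str, required: list) -> list:
    """Return the required syslog servers not present as 'logging host <ip>' lines."""
    configured = {
        line.split()[-1]
        for line in running_config.splitlines()
        if line.strip().startswith("logging host")
    }
    return [srv for srv in required if srv not in configured]

# Inside the aetest testcase this would be fed live output, e.g.:
#   output = device.execute("show running-config | include logging host")
#   missing = missing_syslog_servers(output, syslog_servers)

config = "logging host 10.1.1.1\nlogging host 10.1.1.2\n"
print(missing_syslog_servers(config, ["10.1.1.1", "10.9.9.9"]))  # -> ['10.9.9.9']
```

In the precheck stage a non-empty result only produces a warning (the stage has `allow_failure: true`); in the postcheck stage the same check would fail the pipeline.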

3. Deploy job

This script connects to the device and pushes syslog configuration. The pyATS job has the same structure as the precheck, with Common Setup and Common Cleanup, but uses the "device.configure()" method to apply configuration.

Note: The deploy stage could easily be swapped for another configuration method, e.g. NETCONF or Ansible. GitLab CI simply runs "deploy_job.py", so as long as you replace its contents, the rest of the pipeline keeps working.

[Screenshot: deploy job code]
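The deploy code is also only shown as a screenshot, so here is the general shape as a sketch of my own, assuming IOS-style syslog syntax: build the config lines, then push them with pyATS's `device.configure()`.

```python
# Sketch only: building syslog config lines for IOS-style devices (assumed syntax).
def build_syslog_config(servers: list) -> list:
    """Return one 'logging host <ip>' line per required syslog server."""
    return [f"logging host {srv}" for srv in servers]

# In the pyATS job this would be pushed to a connected device with:
#   device.configure("\n".join(build_syslog_config(syslog_servers)))

print(build_syslog_config(["10.1.1.1", "10.1.1.2"]))
# -> ['logging host 10.1.1.1', 'logging host 10.1.1.2']
```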

4. Postcheck job

After deployment, we re-run the same test as in precheck.
If the syslog server is still missing, the pipeline fails.

This gives a full before–after validation loop.

[Screenshot: postcheck job code]

5. GitLab runner

When a pipeline is triggered, a GitLab runner picks up the task of executing the different stages. For the runner to be able to fetch devices from the Inventory API, I added a GitLab runner to my docker-compose file and ran it locally. Here is an extract from the docker-compose file.

[Screenshot: docker-compose extract with the gitlab-runner service]
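Since the docker-compose extract is only shown as an image, a minimal sketch of such a gitlab-runner service could look like this. The service name and image are assumptions; the config volume path matches the `config.toml` path used in the registration commands below.

```yaml
# Sketch of a gitlab-runner service for docker-compose (names are assumptions)
services:
  gitlab-runner:
    image: gitlab/gitlab-runner:latest
    container_name: gitlab-runner
    restart: unless-stopped
    volumes:
      - ./services/gitlab-runner/config:/etc/gitlab-runner
      - /var/run/docker.sock:/var/run/docker.sock
```

Mounting the Docker socket lets the runner (with the docker executor) start job containers on the same host, which is what makes the network_mode tweak below possible.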

To enable the GitLab runner to communicate with GitLab's cloud service (so that it can pick up pipeline tasks), I had to register it (see below). In addition, I needed a small "hack" so that the container the runner spins up to execute the stages could reach the Inventory API.

# Register gitlab runner (You need to have this repo in your own gitlab account)
# GitLab instance URL: https://gitlab.com or your self-hosted URL
# Registration token: from your project/group in GitLab → Settings → CI/CD → Runners
# Description: nautix
# Executor: docker
# Default Docker image: python:3.9-slim
docker exec -it gitlab-runner gitlab-runner register

# Allow gitlab runner CI job container to talk with the Nautix services
sed -i '/\[runners.docker\]/a \ network_mode = "ccie-automation_default"' services/gitlab-runner/config/config.toml

Why this matters

Instead of “hoping” changes were applied correctly, we now have:

  • Automated verification before changes
  • Automated deployment
  • Automated verification after changes
  • Repeatability inside GitLab

This is NetDevOps in action.

Service Interactions update

I've added the new use case to the Nautix diagram.

[Diagram: updated Nautix service interactions]

What's next

In blog #6 I will focus on Terraform:

Blueprint item 2.8 Use Terraform to statefully manage infrastructure, given support documentation
    2.8.a Loop control
    2.8.b Resource graphs
    2.8.c Use of variables
    2.8.d Resource retrieval
    2.8.e Resource provision
    2.8.f Management of the state of provisioned resources

Useful links

  • GitLab Repo – My CCIE Automation Code
  • pyATS documentation
  • Genie documentation
  • Gitlab CI documentation

Blog series

  • [My journey to CCIE Automation #1] Intro + building a Python CLI app

  • [My journey to CCIE Automation #2] Inventory REST API and microservices architecture

  • [My journey to CCIE Automation #3] Orchestration API and NETCONF

  • [My journey to CCIE Automation #4] Automating network discovery and reports with Python and Ansible

  • [My journey to CCIE Automation #5] Building network pipelines for reliable changes with pyATS and GitLab CI

  • [My journey to CCIE Automation #6] Automating Cisco ACI deployments with Terraform, Vault and GitLab CI

  • [My journey to CCIE Automation #7] Exploring Model-Driven Telemetry for real-time network insights

  • [My journey to CCIE Automation #8] Exploring ThousandEyes and automating Enterprise Agent deployment

  • [My journey to CCIE Automation #9] Applying OWASP Secure Coding Practices

  • [My journey to CCIE Automation #10] From Docker Compose to Kubernetes
