
How to Set Up a CI/CD Pipeline from Scratch in 2026: The Complete Developer Guide

The first CI/CD pipeline I built took three full days and broke production twice before it worked correctly. I still remember watching a deployment script silently overwrite a production database with test data. At the time, I thought CI/CD was an advanced concept reserved for large engineering teams. I was wrong. Today, a junior developer can set up a functional CI/CD pipeline in a single afternoon using GitHub Actions. The tools have matured dramatically. The concepts have not changed. This guide gives you both — the practical configuration you can copy today, and the deep understanding of why each piece exists, so you can adapt it when things inevitably go sideways.

Who Is This Guide For?

This guide is written for developers who are ready to stop deploying manually and start deploying automatically — correctly.

  • Solo developers and small teams still using FTP, manual SSH deployments, or "push to main and pray"
  • Backend and full-stack engineers who understand their application but have never configured a CI/CD pipeline
  • Engineering leads standardizing deployment processes across a growing team
  • DevOps engineers migrating from Jenkins or CircleCI to GitHub Actions in 2026

What Is CI/CD and Why Does It Matter in 2026?

CI/CD stands for Continuous Integration and Continuous Delivery. These are not buzzwords — they represent a fundamental shift in how software is delivered reliably at speed.

| Term | What It Means | What It Prevents |
| --- | --- | --- |
| Continuous Integration | Every commit triggers automated tests and build verification | Bugs that only appear when multiple developers' code merges |
| Continuous Delivery | Every passing build is prepared for release — a human approves the deploy | Long, risky manual release processes |
| Continuous Deployment | Every passing build deploys to production automatically | Deployment bottlenecks and release anxiety |
  • 200x more deploys per day (elite teams)
  • 2,604x faster incident recovery
  • 7x lower change failure rate

From the field: A client was deploying once every three weeks because deployments were so manual and risky that the team needed a full Friday afternoon to execute one. After implementing CI/CD, they were deploying 8-12 times per day. Not because they worked faster — but because each individual deployment became trivially small and safe.

The CI/CD Pipeline — Visual Overview

Before writing any configuration, understand what a complete pipeline looks like. Each stage has a specific job, and the order matters:

Code Push → Lint + Test → Build → Security → Staging → Production

Step 1: Choose Your CI/CD Tool

In 2026, GitHub Actions has won the majority share for teams already on GitHub. Here is the honest comparison:

| Tool | Best For | Free Tier | Learning Curve |
| --- | --- | --- | --- |
| GitHub Actions ⭐ | Most teams — GitHub-native, massive ecosystem | 2,000 min/month | Low |
| GitLab CI | Teams on GitLab, self-hosted requirements | 400 min/month | Medium |
| Jenkins | Large enterprises, complex custom pipelines | Self-hosted only | High |
| CircleCI | Fast builds, Docker-heavy workflows | 6,000 min/month | Medium |
| ArgoCD | Kubernetes GitOps deployments | Open source | High |

This guide uses GitHub Actions for all examples. If you are on GitLab, the concepts are identical and the syntax is 80% similar.
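To make the comparison concrete, here is a rough sketch of what the kind of CI stage this guide builds looks like in GitLab CI. This is illustrative only — the image, paths, and thresholds assume a typical Python project:

```yaml
# .gitlab-ci.yml — sketch of an equivalent CI stage on GitLab
stages:
  - test

variables:
  PIP_CACHE_DIR: "$CI_PROJECT_DIR/.cache/pip"

test:
  stage: test
  image: python:3.12
  cache:
    paths:
      - .cache/pip   # reuse downloaded wheels between runs
  script:
    - pip install -r requirements.txt -r requirements-dev.txt
    - flake8 . --count --select=E9,F63,F7,F82 --show-source
    - pytest tests/ -v --cov=app --cov-fail-under=80
```

The shape is the same: a trigger, an isolated runner image, cached dependencies, and a script that fails the pipeline on any non-zero exit.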

Step 2: Your First GitHub Actions Pipeline

GitHub Actions pipelines live in .github/workflows/ in your repository. Let us build one from scratch — starting with the essentials.

The Minimum Viable Pipeline — CI Only

.github/workflows/ci.yml GitHub Actions · YAML
name: CI Pipeline

on:
  push:
    branches: [ main, develop ]
  pull_request:
    branches: [ main ]

jobs:
  test:
    name: Lint and Test
    runs-on: ubuntu-latest
    strategy:
      matrix:
        python-version: ['3.11', '3.12']

    steps:
      - name: Checkout code
        uses: actions/checkout@v4

      - name: Set up Python ${{ matrix.python-version }}
        uses: actions/setup-python@v5
        with:
          python-version: ${{ matrix.python-version }}
          cache: 'pip'

      - name: Install dependencies
        run: |
          python -m pip install --upgrade pip
          pip install -r requirements.txt
          pip install -r requirements-dev.txt

      - name: Lint with flake8
        run: |
          flake8 . --count --select=E9,F63,F7,F82 --show-source
          flake8 . --count --max-complexity=10 --max-line-length=120

      - name: Run tests with coverage
        run: |
          pytest tests/ -v --cov=app --cov-report=xml --cov-fail-under=80

      - name: Upload coverage report
        uses: codecov/codecov-action@v4
        with:
          file: ./coverage.xml

This workflow tests against two Python versions simultaneously using a matrix strategy — catching version-specific bugs before they reach production. I have seen this matrix catch real bugs on Python 3.12 that passed silently on 3.11.

Step 3: Adding Docker Build and Push

Once tests pass, the next stage builds a Docker image and pushes it to a registry. This image becomes the deployable artifact — the exact same binary that runs in every environment.

.github/workflows/ci.yml — build job (add this to the same workflow file as the test job; needs: test only resolves jobs within a single workflow) GitHub Actions · Docker
  build:
    name: Build and Push Docker Image
    runs-on: ubuntu-latest
    needs: test
    if: github.ref == 'refs/heads/main'

    steps:
      - uses: actions/checkout@v4

      - name: Set up Docker Buildx
        uses: docker/setup-buildx-action@v3

      - name: Log in to GitHub Container Registry
        uses: docker/login-action@v3
        with:
          registry: ghcr.io
          username: ${{ github.actor }}
          password: ${{ secrets.GITHUB_TOKEN }}

      - name: Extract Docker metadata
        id: meta
        uses: docker/metadata-action@v5
        with:
          images: ghcr.io/${{ github.repository }}
          # format=long tags with the full commit SHA; the default short SHA
          # would not match the sha-${{ github.sha }} reference used on deploy
          tags: |
            type=sha,prefix=sha-,format=long
            type=raw,value=latest

      - name: Build and push
        uses: docker/build-push-action@v5
        with:
          context: .
          push: true
          tags: ${{ steps.meta.outputs.tags }}
          cache-from: type=gha
          cache-to: type=gha,mode=max

Layer Caching: cache-from: type=gha stores Docker layer cache in GitHub Actions. On a typical Python app, this reduces build time from 4-5 minutes to under 60 seconds on subsequent runs.
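Layer caching only pays off if the Dockerfile is ordered so that expensive layers change rarely. A sketch for a typical Python app — the base image, port, and start command are assumptions, not taken from the original pipeline:

```dockerfile
FROM python:3.12-slim
WORKDIR /app

# Dependencies first: this layer is rebuilt only when requirements.txt changes
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

# Application code last: day-to-day edits reuse every cached layer above
COPY . .

EXPOSE 8000
CMD ["gunicorn", "--bind", "0.0.0.0:8000", "app:app"]
```

If you copy the whole source tree before installing dependencies, every commit invalidates the install layer and the cache buys you almost nothing.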

Step 4: Security Scanning

In 2026, shipping a Docker image without security scanning is the equivalent of deploying without tests. Supply chain attacks have become one of the most common attack vectors, and a single pipeline step automatically catches the majority of known vulnerable dependencies and base-image CVEs before they ship.

.github/workflows/ci.yml — security job (same workflow file as the build job; needs: build requires it) GitHub Actions · Trivy
  security:
    name: Security Scan
    runs-on: ubuntu-latest
    needs: build

    steps:
      - uses: actions/checkout@v4

      - name: Audit Python dependencies
        run: |
          pip install pip-audit
          pip-audit -r requirements.txt

      - name: Scan Docker image with Trivy
        # Tracking master works, but pinning to a released tag is safer
        uses: aquasecurity/trivy-action@master
        with:
          image-ref: 'ghcr.io/${{ github.repository }}:latest'
          format: 'sarif'
          output: 'trivy-results.sarif'
          severity: 'CRITICAL,HIGH'
          exit-code: '1'

      - name: Upload results to GitHub Security tab
        uses: github/codeql-action/upload-sarif@v3
        with:
          sarif_file: 'trivy-results.sarif'

First-run tip: Set exit-code: '0' on legacy codebases to audit first without failing the pipeline. Once the backlog is resolved, tighten to '1' to enforce going forward.
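On legacy codebases you may also need to accept specific findings while the backlog is worked through. Trivy reads a .trivyignore file from the scan's working directory; the CVE ID below is a placeholder, not a real advisory:

```text
# .trivyignore: one accepted finding per line
# Document why each entry is safe to ignore, and revisit on a schedule
CVE-2099-0001
```

Treat this file as technical debt with an owner, not a permanent allowlist.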

Step 5: Automated Deployment with Environment Gates

The deployment stage is where most teams make their most expensive mistakes. The key insight I learned after breaking production twice: staging and production must be identical environments, and promotion between them must be explicit.

.github/workflows/ci.yml — deploy jobs (same workflow file; needs: references the jobs above) GitHub Actions · Multi-Environment
  deploy-staging:
    name: Deploy to Staging
    runs-on: ubuntu-latest
    needs: [test, build, security]
    environment:
      name: staging
      url: https://staging.bioquro.com

    steps:
      - name: Deploy to staging
        uses: appleboy/ssh-action@v1
        with:
          host: ${{ secrets.STAGING_HOST }}
          username: ${{ secrets.STAGING_USER }}
          key: ${{ secrets.STAGING_SSH_KEY }}
          script: |
            docker pull ghcr.io/${{ github.repository }}:sha-${{ github.sha }}
            docker stop app-staging || true
            docker rm app-staging || true
            docker run -d --name app-staging --restart unless-stopped \
              -p 8000:8000 \
              -e DATABASE_URL=${{ secrets.STAGING_DB_URL }} \
              ghcr.io/${{ github.repository }}:sha-${{ github.sha }}

  deploy-production:
    name: Deploy to Production
    runs-on: ubuntu-latest
    needs: deploy-staging
    environment:
      name: production        # Requires manual approval in GitHub Environments
      url: https://bioquro.com

    steps:
      - name: Deploy to production (blue-green)
        uses: appleboy/ssh-action@v1
        with:
          host: ${{ secrets.PROD_HOST }}
          username: ${{ secrets.PROD_USER }}
          key: ${{ secrets.PROD_SSH_KEY }}
          script: |
            docker pull ghcr.io/${{ github.repository }}:sha-${{ github.sha }}
            docker run -d --name app-green --restart unless-stopped \
              -p 8001:8000 \
              -e DATABASE_URL=${{ secrets.PROD_DB_URL }} \
              ghcr.io/${{ github.repository }}:sha-${{ github.sha }}
            sleep 10
            # Verify the new container before switching; clean it up on failure
            curl -f http://localhost:8001/health || (docker rm -f app-green; exit 1)
            # Remove the old container first — rename fails if the name is taken
            docker stop app-blue || true
            docker rm app-blue || true
            docker rename app-green app-blue
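A fixed sleep 10 before the health check races against slow startups. A bounded retry helper is more robust — this is a sketch, not part of the pipeline above; the function name and parameters are mine:

```shell
# wait_healthy CMD TRIES DELAY: run CMD up to TRIES times, DELAY seconds
# apart, returning 0 as soon as it succeeds and 1 if it never does.
wait_healthy() {
  cmd=$1; tries=$2; delay=$3
  i=1
  while [ "$i" -le "$tries" ]; do
    if sh -c "$cmd" >/dev/null 2>&1; then
      return 0
    fi
    i=$((i + 1))
    sleep "$delay"
  done
  return 1
}

# Illustrative use in the deploy script (not executed here):
#   wait_healthy "curl -fsS http://localhost:8001/health" 10 3 \
#     || (docker rm -f app-green; exit 1)
```

Ten tries three seconds apart gives a slow app thirty seconds to come up, while a healthy one passes on the first attempt.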

Step 6: Managing Secrets Correctly

  1. Use GitHub Environments for secret scoping: Staging secrets must be completely inaccessible to production jobs. GitHub Environments enforce this at the platform level.

  2. Never echo secrets in logs: GitHub Actions masks registered secrets, but custom scripts can accidentally expose them. Always validate your logs after the first run.

  3. Rotate secrets on a schedule: SSH deployment keys and database credentials used by CI should rotate every 90 days.

  4. Use OIDC for cloud deployments: Instead of storing long-lived AWS or GCP credentials, use OpenID Connect to grant GitHub Actions temporary credentials. Zero stored secrets, zero rotation burden.
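As an illustration of point 4, a deploy job can exchange a GitHub OIDC token for short-lived AWS credentials using the official aws-actions/configure-aws-credentials action. The role ARN below is hypothetical and the IAM role must be configured to trust GitHub's OIDC provider:

```yaml
  deploy:
    runs-on: ubuntu-latest
    permissions:
      id-token: write   # lets the job request an OIDC token
      contents: read
    steps:
      - name: Configure AWS credentials via OIDC
        uses: aws-actions/configure-aws-credentials@v4
        with:
          role-to-assume: arn:aws:iam::123456789012:role/github-deploy  # hypothetical
          aws-region: us-east-1
      # Later steps use temporary credentials; nothing is stored in secrets
```

GCP and Azure offer equivalent workload identity federation, so the same pattern applies across clouds.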

Complete Pipeline Summary

| Stage | Trigger | Blocks Deploy If |
| --- | --- | --- |
| Lint + Test | Every push + PR | Any test fails or coverage < 80% |
| Docker Build | Push to main only | Build fails |
| Security Scan | Push to main only | CRITICAL/HIGH CVE found |
| Deploy Staging | Push to main only | Health check fails |
| Deploy Production | Manual approval | Reviewer rejects |

Production Readiness Checklist

  • Tests run on every push and pull request — no exceptions
  • Code coverage minimum enforced (80% recommended)
  • Docker image tagged with commit SHA — never mutable tags in production
  • Security scan on every build — dependency and container vulnerabilities
  • Staging environment mirrors production exactly
  • Health check validates new deployment before traffic switch
  • Production deployment requires manual approval gate
  • Rollback procedure documented and tested — not theorized
  • All secrets in GitHub Environments — none in workflow YAML files
  • Pipeline runtime under 10 minutes — slow pipelines get bypassed
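"Rollback procedure documented and tested" deserves a concrete shape. A minimal sketch, assuming the SHA-tagged images from the build step and the container names used above — the function names and repository path are illustrative:

```shell
# image_ref REPO SHA: print the immutable image reference the pipeline built.
image_ref() {
  printf 'ghcr.io/%s:sha-%s\n' "$1" "$2"
}

# rollback REPO SHA: swap the running container for a known-good image.
# Not executed here; requires Docker on the target host.
rollback() {
  image=$(image_ref "$1" "$2")
  docker pull "$image" || return 1
  docker rm -f app-blue 2>/dev/null || true
  docker run -d --name app-blue --restart unless-stopped \
    -p 8000:8000 "$image"
}

# Usage (illustrative): rollback your-org/your-app 1a2b3c4d
```

Because every production image is tagged with its commit SHA, rolling back is just deploying an older tag — no rebuild, no git revert under pressure.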

Frequently Asked Questions

What is a CI/CD pipeline and why do I need one?

A CI/CD pipeline automates testing, building, and deploying code on every commit. CI catches bugs early with automated tests on every push. CD automates getting tested code to production. Teams with CI/CD deploy far more frequently with dramatically lower failure rates than teams deploying manually — because each change is smaller, tested, and reversible.

What is the best CI/CD tool in 2026?

GitHub Actions is the dominant choice in 2026 for most development teams — deeply integrated with GitHub, massive action marketplace, and a generous free tier. For Kubernetes-native GitOps deployments, ArgoCD combined with GitHub Actions is the most popular production combination. Jenkins remains relevant for large enterprises with complex existing pipelines.

How long does it take to set up a CI/CD pipeline?

A basic CI pipeline is operational in 2-4 hours with GitHub Actions. A production-grade pipeline with multiple environments, security scanning, Docker builds, and rollback capabilities takes 1-3 days to configure and validate. The investment pays back within the first week — a single prevented production incident saves more time than the entire setup took.

What is the difference between Continuous Delivery and Continuous Deployment?

Continuous Delivery prepares every passing build for release but requires human approval for production deployment. Continuous Deployment goes further — every passing build deploys to production automatically with no human intervention. Most teams start with Continuous Delivery and graduate to full Continuous Deployment as their test suite matures and confidence in automation grows.

What does your current deployment process look like?

Still deploying manually, or stuck on a specific pipeline blocker? Leave a comment with your stack and challenge — I respond to every technical question personally.


Tahar Maqawil

Senior Application Developer · DevOps Engineer · Bioquro

10+ years building CI/CD pipelines for production software systems — from the first broken pipeline that overwrote a production database, to multi-environment GitOps workflows handling hundreds of deployments per week. I write at Bioquro to share the lessons that documentation never covers.
