How to Set Up a CI/CD Pipeline from Scratch in 2026: The Complete Developer Guide
The first CI/CD pipeline I built took three full days and broke production twice before it worked correctly. I still remember watching a deployment script silently overwrite a production database with test data. At the time, I thought CI/CD was an advanced concept reserved for large engineering teams. I was wrong. Today, a junior developer can set up a functional CI/CD pipeline in a single afternoon using GitHub Actions. The tools have matured dramatically. The concepts have not changed. This guide gives you both — the practical configuration you can copy today, and the deep understanding of why each piece exists, so you can adapt it when things inevitably go sideways.
Who Is This Guide For?
This guide is written for developers who are ready to stop deploying manually and start deploying automatically — correctly.
- Solo developers and small teams still using FTP, manual SSH deployments, or "push to main and pray"
- Backend and full-stack engineers who understand their application but have never configured a CI/CD pipeline
- Engineering leads standardizing deployment processes across a growing team
- DevOps engineers migrating from Jenkins or CircleCI to GitHub Actions in 2026
What Is CI/CD and Why Does It Matter in 2026?
CI/CD stands for Continuous Integration and Continuous Delivery. These are not buzzwords — they represent a fundamental shift in how software is delivered reliably at speed.
| Term | What It Means | What It Prevents |
|---|---|---|
| Continuous Integration | Every commit triggers automated tests and build verification | Bugs that only appear when multiple developers' code merges |
| Continuous Delivery | Every passing build is prepared for release — human approves deploy | Long, risky manual release processes |
| Continuous Deployment | Every passing build deploys to production automatically | Deployment bottlenecks and release anxiety |
From the field: A client was deploying once every three weeks because deployments were so manual and risky that the team needed a full Friday afternoon to execute one. After implementing CI/CD, they were deploying 8-12 times per day. Not because they worked faster — but because each individual deployment became trivially small and safe.
The CI/CD Pipeline — Visual Overview
Before writing any configuration, understand what a complete pipeline looks like. Each stage has a specific job, and the order matters: commit → lint and test → build image → security scan → deploy to staging → manual approval → deploy to production. Every stage must pass before the next one runs.
Step 1: Choose Your CI/CD Tool
In 2026, GitHub Actions has won the majority share for teams already on GitHub. Here is the honest comparison:
| Tool | Best For | Free Tier | Learning Curve |
|---|---|---|---|
| GitHub Actions ⭐ | Most teams — GitHub-native, massive ecosystem | 2,000 min/month | Low |
| GitLab CI | Teams on GitLab, self-hosted requirements | 400 min/month | Medium |
| Jenkins | Large enterprises, complex custom pipelines | Self-hosted only | High |
| CircleCI | Fast builds, Docker-heavy workflows | 6,000 min/month | Medium |
| ArgoCD | Kubernetes GitOps deployments | Open source | High |
This guide uses GitHub Actions for all examples. If you are on GitLab, the concepts are identical and the syntax is 80% similar.
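For readers on GitLab, here is a hedged sketch of the same lint-and-test stage in GitLab CI syntax. The file layout (requirements.txt, requirements-dev.txt, tests/) is assumed to match the GitHub Actions examples below; adapt paths to your project.

```yaml
# .gitlab-ci.yml -- minimal sketch of the equivalent test stage
stages:
  - test

test:
  stage: test
  image: python:3.12
  cache:
    paths:
      - .cache/pip   # speed up repeat runs by caching pip downloads
  script:
    - pip install -r requirements.txt -r requirements-dev.txt
    - flake8 . --count --select=E9,F63,F7,F82 --show-source
    - pytest tests/ --cov=app --cov-fail-under=80
```

The mapping is mostly one-to-one: GitHub's `jobs` become GitLab `stages`, and `steps` become `script` lines.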
Step 2: Your First GitHub Actions Pipeline
GitHub Actions pipelines live in .github/workflows/ in your repository. Let us build one from scratch — starting with the essentials.
The Minimum Viable Pipeline — CI Only
```yaml
name: CI Pipeline

on:
  push:
    branches: [ main, develop ]
  pull_request:
    branches: [ main ]

jobs:
  test:
    name: Lint and Test
    runs-on: ubuntu-latest
    strategy:
      matrix:
        python-version: ['3.11', '3.12']
    steps:
      - name: Checkout code
        uses: actions/checkout@v4

      - name: Set up Python ${{ matrix.python-version }}
        uses: actions/setup-python@v5
        with:
          python-version: ${{ matrix.python-version }}
          cache: 'pip'

      - name: Install dependencies
        run: |
          python -m pip install --upgrade pip
          pip install -r requirements.txt
          pip install -r requirements-dev.txt

      - name: Lint with flake8
        run: |
          flake8 . --count --select=E9,F63,F7,F82 --show-source
          flake8 . --count --max-complexity=10 --max-line-length=120

      - name: Run tests with coverage
        run: |
          pytest tests/ -v --cov=app --cov-report=xml --cov-fail-under=80

      - name: Upload coverage report
        uses: codecov/codecov-action@v4
        with:
          files: ./coverage.xml
```
This workflow tests against two Python versions simultaneously using a matrix strategy — catching version-specific bugs before they reach production. I have seen this matrix catch real bugs on Python 3.12 that passed silently on 3.11.
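To make that claim concrete, here is one real 3.11-to-3.12 difference the matrix can surface. The snippet only probes for the module; whether the shim from setuptools is present depends on your environment.

```python
import importlib.util
import sys

# distutils was removed from the standard library in Python 3.12 (PEP 632).
# Code that imports it can pass silently on 3.11 and fail on 3.12 -- exactly
# the class of bug a version matrix catches before production.
has_distutils = importlib.util.find_spec("distutils") is not None
print(f"Python {sys.version_info.major}.{sys.version_info.minor}: "
      f"distutils importable: {has_distutils}")
```

A dependency pinned for 3.11 that still imports distutils will fail the 3.12 leg of the matrix while the 3.11 leg stays green.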
Step 3: Adding Docker Build and Push
Once tests pass, the next stage builds a Docker image and pushes it to a registry. This image becomes the deployable artifact — the exact same binary that runs in every environment.
```yaml
  build:
    name: Build and Push Docker Image
    runs-on: ubuntu-latest
    needs: test
    if: github.ref == 'refs/heads/main'
    steps:
      - uses: actions/checkout@v4

      - name: Set up Docker Buildx
        uses: docker/setup-buildx-action@v3

      - name: Log in to GitHub Container Registry
        uses: docker/login-action@v3
        with:
          registry: ghcr.io
          username: ${{ github.actor }}
          password: ${{ secrets.GITHUB_TOKEN }}

      - name: Extract Docker metadata
        id: meta
        uses: docker/metadata-action@v5
        with:
          images: ghcr.io/${{ github.repository }}
          # format=long emits the full 40-character SHA, so the tag matches
          # the sha-<github.sha> references used by the deploy jobs below
          tags: |
            type=sha,prefix=sha-,format=long
            type=raw,value=latest

      - name: Build and push
        uses: docker/build-push-action@v5
        with:
          context: .
          push: true
          tags: ${{ steps.meta.outputs.tags }}
          cache-from: type=gha
          cache-to: type=gha,mode=max
```
Layer Caching: cache-from: type=gha stores Docker layer cache in GitHub Actions. On a typical Python app, this reduces build time from 4-5 minutes to under 60 seconds on subsequent runs.
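Layer caching only pays off if the Dockerfile is ordered so the slow layers change rarely. A hedged sketch for the Python app assumed in these examples (the app/ directory and module entry point are placeholders for your own layout):

```dockerfile
FROM python:3.12-slim

WORKDIR /srv

# 1. Dependency layer -- copied and installed first, so it is rebuilt
#    only when requirements.txt changes, not on every code edit
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

# 2. Source layer -- invalidated on every commit, but cheap to rebuild
COPY app/ ./app/

EXPOSE 8000
CMD ["python", "-m", "app"]
```

If you copy the whole source tree before installing dependencies, every commit invalidates the pip layer and you lose most of the caching benefit.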
Step 4: Security Scanning
In 2026, shipping a Docker image without security scanning is the equivalent of deploying without tests. Supply chain attacks have become one of the most common attack vectors, and a single automated pipeline step catches most known dependency and image vulnerabilities before they ship.
```yaml
  security:
    name: Security Scan
    runs-on: ubuntu-latest
    needs: build
    steps:
      - uses: actions/checkout@v4

      - name: Audit Python dependencies
        # pip-audit exits nonzero when any known vulnerability is found
        run: |
          pip install pip-audit
          pip-audit -r requirements.txt

      - name: Scan Docker image with Trivy
        # @master tracks the latest commit; pin to a tagged release for reproducible builds
        uses: aquasecurity/trivy-action@master
        with:
          image-ref: 'ghcr.io/${{ github.repository }}:latest'
          format: 'sarif'
          output: 'trivy-results.sarif'
          severity: 'CRITICAL,HIGH'
          exit-code: '1'

      - name: Upload results to GitHub Security tab
        # always() ensures findings reach the Security tab even when the scan fails the job
        if: always()
        uses: github/codeql-action/upload-sarif@v3
        with:
          sarif_file: 'trivy-results.sarif'
```
First-run tip: Set exit-code: '0' on legacy codebases to audit first without failing the pipeline. Once the backlog is resolved, tighten to '1' to enforce going forward.
Step 5: Automated Deployment with Environment Gates
The deployment stage is where most teams make their most expensive mistakes. The key insight I learned after breaking production twice: staging and production must be identical environments, and promotion between them must be explicit.
```yaml
  deploy-staging:
    name: Deploy to Staging
    runs-on: ubuntu-latest
    needs: [test, build, security]
    environment:
      name: staging
      url: https://staging.bioquro.com
    steps:
      - name: Deploy to staging
        uses: appleboy/ssh-action@v1
        with:
          host: ${{ secrets.STAGING_HOST }}
          username: ${{ secrets.STAGING_USER }}
          key: ${{ secrets.STAGING_SSH_KEY }}
          script: |
            docker pull ghcr.io/${{ github.repository }}:sha-${{ github.sha }}
            docker stop app-staging || true
            docker rm app-staging || true
            docker run -d --name app-staging --restart unless-stopped \
              -p 8000:8000 \
              -e DATABASE_URL=${{ secrets.STAGING_DB_URL }} \
              ghcr.io/${{ github.repository }}:sha-${{ github.sha }}

  deploy-production:
    name: Deploy to Production
    runs-on: ubuntu-latest
    needs: deploy-staging
    environment:
      name: production  # Requires manual approval in GitHub Environments
      url: https://bioquro.com
    steps:
      - name: Deploy to production (blue-green)
        uses: appleboy/ssh-action@v1
        with:
          host: ${{ secrets.PROD_HOST }}
          username: ${{ secrets.PROD_USER }}
          key: ${{ secrets.PROD_SSH_KEY }}
          script: |
            set -e  # abort on the first failing command
            docker pull ghcr.io/${{ github.repository }}:sha-${{ github.sha }}
            docker run -d --name app-green --restart unless-stopped \
              -p 8001:8000 \
              -e DATABASE_URL=${{ secrets.PROD_DB_URL }} \
              ghcr.io/${{ github.repository }}:sha-${{ github.sha }}
            sleep 10
            # Verify the new container before touching the old one; clean up on failure
            curl -f http://localhost:8001/health || (docker stop app-green && docker rm app-green && exit 1)
            # Retire and remove the old container so the rename below cannot collide
            docker stop app-blue || true
            docker rm app-blue || true
            docker rename app-green app-blue
            # Note: your reverse proxy must now route traffic to the new container's port
```
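The production script above assumes the app exposes a /health endpoint. A minimal sketch using only the Python standard library follows; a real app would register the route in its own framework, and the handler and port names here are illustrative.

```python
import json
from http.server import BaseHTTPRequestHandler, HTTPServer


class HealthHandler(BaseHTTPRequestHandler):
    """Answers the /health probe the deploy script curls before switching traffic."""

    def do_GET(self):
        if self.path == "/health":
            body = json.dumps({"status": "ok"}).encode()
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.send_header("Content-Length", str(len(body)))
            self.end_headers()
            self.wfile.write(body)
        else:
            self.send_error(404)

    def log_message(self, *args):
        # Keep container logs quiet for routine probes
        pass


def serve(port=8000):
    # Bind on all interfaces so the container's port mapping reaches it
    HTTPServer(("", port), HealthHandler).serve_forever()
```

Keep the check cheap and dependency-aware: if the endpoint also pings the database, a broken connection string fails the deploy before any traffic moves.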
Step 6: Managing Secrets Correctly
Use GitHub Environments for secret scoping: Staging secrets must be completely inaccessible to production jobs. GitHub Environments enforce this at the platform level.
Never echo secrets in logs: GitHub Actions masks registered secrets, but custom scripts can accidentally expose them. Always review your logs after the first run.
Rotate secrets on a schedule: SSH deployment keys and database credentials used by CI should rotate every 90 days.
Use OIDC for cloud deployments: Instead of storing long-lived AWS or GCP credentials, use OpenID Connect to grant GitHub Actions temporary credentials. Zero stored secrets, zero rotation burden.
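As an illustration, here is a hedged sketch of an OIDC-based AWS login step. The role ARN and region are placeholders you would replace with your own; the role's trust policy must allow your repository's OIDC claims.

```yaml
permissions:
  id-token: write   # allow the job to request an OIDC token
  contents: read

steps:
  - name: Configure AWS credentials via OIDC
    uses: aws-actions/configure-aws-credentials@v4
    with:
      role-to-assume: arn:aws:iam::123456789012:role/github-deploy  # placeholder ARN
      aws-region: us-east-1
```

The job receives short-lived credentials scoped to that role, so there is nothing long-lived to store or rotate.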
Complete Pipeline Summary
| Stage | Trigger | Blocks Deploy If |
|---|---|---|
| Lint + Test | Every push + PR | Any test fails or coverage < 80% |
| Docker Build | Push to main only | Build fails |
| Security Scan | Push to main only | CRITICAL/HIGH CVE found |
| Deploy Staging | Push to main only | Health check fails |
| Deploy Production | Manual approval | Reviewer rejects |
Production Readiness Checklist
- ✅ Tests run on every push and pull request — no exceptions
- ✅ Code coverage minimum enforced (80% recommended)
- ✅ Docker image tagged with commit SHA — never mutable tags in production
- ✅ Security scan on every build — dependency and container vulnerabilities
- ✅ Staging environment mirrors production exactly
- ✅ Health check validates new deployment before traffic switch
- ✅ Production deployment requires manual approval gate
- ✅ Rollback procedure documented and tested — not theorized
- ✅ All secrets in GitHub Environments — none in workflow YAML files
- ✅ Pipeline runtime under 10 minutes — slow pipelines get bypassed
Frequently Asked Questions
What is a CI/CD pipeline and why does it matter?
A CI/CD pipeline automates testing, building, and deploying code on every commit. CI catches bugs early with automated tests on every push. CD automates getting tested code to production. Teams with CI/CD deploy far more frequently with dramatically lower failure rates than teams deploying manually — because each change is smaller, tested, and reversible.
Which CI/CD tool should I choose in 2026?
GitHub Actions is the dominant choice in 2026 for most development teams — deeply integrated with GitHub, massive action marketplace, and a generous free tier. For Kubernetes-native GitOps deployments, ArgoCD combined with GitHub Actions is the most popular production combination. Jenkins remains relevant for large enterprises with complex existing pipelines.
How long does it take to set up a CI/CD pipeline?
A basic CI pipeline is operational in 2-4 hours with GitHub Actions. A production-grade pipeline with multiple environments, security scanning, Docker builds, and rollback capabilities takes 1-3 days to configure and validate. The investment pays back within the first week — a single prevented production incident saves more time than the entire setup took.
What is the difference between Continuous Delivery and Continuous Deployment?
Continuous Delivery prepares every passing build for release but requires human approval for production deployment. Continuous Deployment goes further — every passing build deploys to production automatically with no human intervention. Most teams start with Continuous Delivery and graduate to full Continuous Deployment as their test suite matures and confidence in automation grows.
What does your current deployment process look like?
Still deploying manually, or stuck on a specific pipeline blocker? Leave a comment with your stack and challenge — I respond to every technical question personally.
