How to use Docker in a CI/CD pipeline?

Docker in CI/CD is the workflow that turns code commits into deployed containers. The standard pattern is well-established: build, test, scan, tag, push, deploy. The differences between teams are mostly in cache strategy, registry choice, and signing.

Theory

TL;DR

The pipeline shape:

  1. Checkout code.
  2. Set up Docker (BuildKit, plus multi-arch buildx if needed).
  3. Build the image, with cache from previous builds.
  4. Test inside the image (multi-stage --target test).
  5. Scan for CVEs (Trivy, Grype, Snyk).
  6. Tag with commit SHA + branch + semver.
  7. Push to registry (Docker Hub, ECR, GHCR).
  8. Sign with Cosign.
  9. Deploy by referencing the new tag (or digest).
  • Critical principle: build once, deploy everywhere. Same image goes from CI → staging → prod. Do not rebuild for prod.
  • Cache backend matters: without --cache-from, every CI run starts cold.
  • Tag by SHA for reproducibility; promote by changing what the deploy points at, not by rebuilding.

A complete GitHub Actions example

yaml
# .github/workflows/ci.yml
name: build-test-deploy

on:
  push:
    branches: [main]
  pull_request:

jobs:
  build:
    runs-on: ubuntu-latest
    permissions:
      contents: read
      packages: write   # for GHCR
      id-token: write   # for Sigstore OIDC
    steps:
      - uses: actions/checkout@v4
      - uses: docker/setup-buildx-action@v3
      - uses: docker/login-action@v3
        with:
          registry: ghcr.io
          username: ${{ github.actor }}
          password: ${{ secrets.GITHUB_TOKEN }}
      - name: Build and push
        id: build
        uses: docker/build-push-action@v5
        with:
          context: .
          push: ${{ github.event_name != 'pull_request' }}
          # load into the local daemon on PRs so the test step can run the image
          load: ${{ github.event_name == 'pull_request' }}
          tags: |
            ghcr.io/${{ github.repository }}:${{ github.sha }}
            ghcr.io/${{ github.repository }}:latest
          labels: |
            org.opencontainers.image.source=${{ github.event.repository.html_url }}
            org.opencontainers.image.revision=${{ github.sha }}
          cache-from: type=gha
          cache-to: type=gha,mode=max
      - name: Run tests
        run: |
          docker run --rm ghcr.io/${{ github.repository }}:${{ github.sha }} npm test
      - name: Scan for vulnerabilities
        uses: aquasecurity/trivy-action@master
        with:
          image-ref: ghcr.io/${{ github.repository }}:${{ github.sha }}
          exit-code: '1'
          severity: 'HIGH,CRITICAL'
      - uses: sigstore/cosign-installer@v3
      - name: Sign image
        if: github.event_name != 'pull_request'
        run: cosign sign --yes ghcr.io/${{ github.repository }}@${{ steps.build.outputs.digest }}

This covers build, test, scan, sign, push in one workflow. Tag by SHA, promote later.

Cache strategies

Without cache, every CI run re-downloads base images and re-runs every step. Three common backends:

GitHub Actions cache

yaml
cache-from: type=gha
cache-to: type=gha,mode=max

Built into GitHub Actions. Free up to 10 GB per repository. Works for repo-scoped builds.

Registry cache

yaml
cache-from: type=registry,ref=ghcr.io/myorg/myapp:cache
cache-to: type=registry,ref=ghcr.io/myorg/myapp:cache,mode=max

Cache stored in your own registry. Works across CI providers, across repos. The most portable option.

S3/inline

yaml
cache-from: type=s3,region=us-east-1,bucket=mybucket
cache-to: type=s3,region=us-east-1,bucket=mybucket

For self-hosted runners or AWS-native pipelines. (The inline variant, --cache-to type=inline, embeds cache metadata in the image itself: zero extra infrastructure, but it only supports min mode, so it caches less.)

With good caching, repeated builds with no source changes complete in seconds.
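
These cache flags are not specific to GitHub Actions; they pass straight through to docker buildx build on any runner. A CLI sketch of the registry backend (image names and $GIT_SHA are placeholders):

bash
docker buildx build \
  --cache-from type=registry,ref=ghcr.io/myorg/myapp:cache \
  --cache-to type=registry,ref=ghcr.io/myorg/myapp:cache,mode=max \
  --tag ghcr.io/myorg/myapp:${GIT_SHA} \
  --push .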

Tagging strategy

ghcr.io/myorg/api:abc123def   # commit SHA — reproducibility anchor
ghcr.io/myorg/api:1.2.3       # semver — for production deploys
ghcr.io/myorg/api:1.2         # semver minor — auto-pulls latest patch
ghcr.io/myorg/api:1           # semver major — auto-pulls latest in line
ghcr.io/myorg/api:latest      # tip of main — for non-production users
ghcr.io/myorg/api:pr-1234     # PR builds — for review apps

Production deploys reference the SHA tag (or the @digest form). Semver tags are for humans and downstream consumers.
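
On GitHub Actions, docker/metadata-action can generate this tag set instead of hand-rolling it. A sketch (the image name is a placeholder; semver tags only fire on git tag pushes):

yaml
- uses: docker/metadata-action@v5
  id: meta
  with:
    images: ghcr.io/myorg/api
    tags: |
      type=sha                                  # sha-<short commit hash>
      type=semver,pattern={{version}}           # 1.2.3 (on git tag v1.2.3)
      type=semver,pattern={{major}}.{{minor}}   # 1.2
      type=ref,event=pr                         # pr-1234
- uses: docker/build-push-action@v5
  with:
    tags: ${{ steps.meta.outputs.tags }}
    labels: ${{ steps.meta.outputs.labels }}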

Build once, promote

CI:      build myorg/api:abc123def → push
Staging: deploy myorg/api:abc123def
Prod:    deploy SAME myorg/api:abc123def

Do NOT have a separate "prod build". The image you tested in staging is the image that goes to prod. If you rebuild for prod, you have not actually tested what you ship.

Multi-stage Dockerfile for CI

dockerfile
FROM node:22-alpine AS deps
WORKDIR /app
COPY package*.json ./
RUN npm ci

FROM deps AS test
COPY . .
RUN npm test

FROM deps AS build
COPY . .
RUN npm run build

FROM node:22-alpine AS runtime
WORKDIR /app
COPY --from=build /app/dist /app/dist
COPY --from=build /app/node_modules /app/node_modules
USER node
CMD ["node", "dist/server.js"]

CI runs docker build --target test to validate. The runtime stage is what gets pushed for deploy. Same Dockerfile, multiple use cases.
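
From the CLI that looks like two builds sharing one cache ($GIT_SHA is a placeholder); the deps layers are reused across both:

bash
# Gate on the test stage first; the job fails if npm test fails
docker build --target test -t myapp:test .
# Then build the runtime stage that actually ships
docker build --target runtime -t myapp:${GIT_SHA} .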

Vulnerability scanning

yaml
# Trivy in GitHub Actions
- uses: aquasecurity/trivy-action@master
  with:
    image-ref: myorg/api:${{ github.sha }}
    severity: 'HIGH,CRITICAL'
    exit-code: '1'
    ignore-unfixed: true

dockerfile
# Or as a Dockerfile build stage (catches at build time)
FROM aquasec/trivy:0.59.0 AS scan
COPY --from=build / /scan-target
RUN trivy filesystem --severity HIGH,CRITICAL --exit-code 1 /scan-target

Fail the build on HIGH/CRITICAL CVEs. Allow exceptions via a .trivyignore.
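
The ignore file is a plain list of CVE IDs, one per line, with # comment lines. A sketch (the IDs are placeholders):

# .trivyignore
# Accepted risk: dev-only dependency (placeholder ID)
CVE-2023-00000
# No fixed version upstream yet (placeholder ID)
CVE-2024-00000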

Common mistakes

Rebuilding for each environment

yaml
# WRONG
- name: Build for staging
  run: docker build -t myapp:staging .
- name: Build for prod
  run: docker build -t myapp:prod .

Two builds, two opportunities for drift. The image you tested is not the image you deploy.

yaml
# RIGHT
- name: Build once
  run: docker build -t myapp:${{ github.sha }} .
- name: Tag for staging
  run: docker tag myapp:${{ github.sha }} myapp:staging

Build once, tag many.

Forgetting the cache and wondering why CI is slow

Without cache-from, every job starts from a cold base image pull. Add cache and watch builds drop from 8 minutes to 90 seconds.

Using latest in production deploys

yaml
# WRONG: prod might pull a different image than what was tested
deploy:
  image: myorg/api:latest

# RIGHT: pin to SHA or digest
deploy:
  image: myorg/api:abc123def

Mutable tags = surprise rollouts.

Embedding secrets in build args

dockerfile
# WRONG: BUILD_TOKEN visible in image history
ARG BUILD_TOKEN
RUN curl -H "Auth: $BUILD_TOKEN" ...

Use BuildKit secret mounts: RUN --mount=type=secret,id=token .... Secret stays out of image history.
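
A minimal sketch of the fixed version; the secret id and download URL are placeholders:

dockerfile
# syntax=docker/dockerfile:1
FROM alpine:3.21
RUN apk add --no-cache curl
# The secret is mounted at /run/secrets/token for this RUN step only;
# it never becomes a layer and never appears in `docker history`
RUN --mount=type=secret,id=token \
    curl -fsS -H "Authorization: Bearer $(cat /run/secrets/token)" \
    -o /tmp/asset https://example.com/private-asset

Build with docker build --secret id=token,src=./token.txt . — in GitHub Actions, build-push-action exposes the same mechanism via its secrets input.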

Not testing inside the image

If tests run on the host (npm test outside Docker), you are not testing what you ship. Test in the same image that will deploy: docker build --target test . or docker run --rm myapp:test npm test.
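
On GitHub Actions, the test stage can be a dedicated step via the action's target input. A sketch:

yaml
- name: Run tests in the image
  uses: docker/build-push-action@v5
  with:
    context: .
    target: test   # stops at the test stage; RUN npm test gates the job
    push: false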

Real-world variations

GitLab CI

yaml
build:
  stage: build
  image: docker:27
  services:
    - docker:27-dind
  variables:
    DOCKER_TLS_CERTDIR: "/certs"  # enable TLS between client and the dind service
  script:
    - docker build --cache-from $CI_REGISTRY_IMAGE:cache -t $CI_REGISTRY_IMAGE:$CI_COMMIT_SHA .
    - docker push $CI_REGISTRY_IMAGE:$CI_COMMIT_SHA

GitLab's docker-in-docker (DinD) pattern. Built-in registry per project.

Jenkins

groovy
pipeline {
    agent any
    stages {
        stage('Build') {
            steps {
                // Double quotes so Groovy interpolates env.GIT_COMMIT
                sh "docker buildx build --cache-from type=registry,ref=myreg/cache --tag myreg/api:${env.GIT_COMMIT} ."
            }
        }
    }
}

Jenkins has a long history and a heavy plugin ecosystem; declarative pipelines have become the norm.

CircleCI

yaml
jobs:
  build:
    docker:
      - image: cimg/base:current
    steps:
      - checkout
      - setup_remote_docker
      - run: docker buildx build --tag myorg/api:${CIRCLE_SHA1} .

CircleCI's setup_remote_docker provisions a separate Docker daemon for builds.

Follow-up questions

Q: Should I run tests inside the image or on the host?


A: Inside the image. The test environment must match production exactly — same OS, same library versions, same paths. Multi-stage build with a test target is the canonical pattern.

Q: What is docker buildx and why use it in CI?


A: buildx is the BuildKit-aware Docker CLI extension. It enables multi-arch builds, advanced cache backends, secret mounts, and much faster builds. CI should use buildx (via docker/setup-buildx-action) by default.
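
Outside the action, the one-time manual setup on a runner is roughly:

bash
# Create a BuildKit builder and make it the default
docker buildx create --name ci-builder --use
# Start it and list supported platforms
docker buildx inspect --bootstrap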

Q: How do I handle secrets like NPM tokens in CI builds?


A: BuildKit secret mounts: RUN --mount=type=secret,id=npmrc cp /run/secrets/npmrc ~/.npmrc && npm ci. Pass via CI: --secret id=npmrc,src=$HOME/.npmrc. The secret never lands in any layer.
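
In the Dockerfile, mounting the secret directly at the path npm reads avoids even the temporary copy. A sketch:

dockerfile
# syntax=docker/dockerfile:1
FROM node:22-alpine
WORKDIR /app
COPY package*.json ./
# .npmrc exists only for the duration of this RUN; no layer contains it
RUN --mount=type=secret,id=npmrc,target=/root/.npmrc npm ci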

Q: Should I push pull-request builds to a registry?


A: Yes — to a separate tag (pr-1234). This lets reviewers run the actual built image and enables review apps. Auto-prune old PR tags via a cron job or registry policy.

Q: (Senior) How do you validate that the image deployed in prod is bit-for-bit what was tested in CI?


A: Pin to digest, not tag. CI captures the digest from the push output (or the build action's digest output) and writes it into the deploy manifest. Production references myreg/api@sha256:abc.... Tag mutation cannot affect this. Combined with Cosign verification at admission, you get cryptographic certainty: the bytes serving production are the bytes that passed CI.
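
With keyless signing, the admission-time check might look like this (the identity regexp depends on your repo and workflow paths):

bash
# Verify the signature was produced by this repo's CI via GitHub OIDC
cosign verify \
  --certificate-identity-regexp 'https://github.com/myorg/api/\.github/workflows/.*' \
  --certificate-oidc-issuer https://token.actions.githubusercontent.com \
  myreg/api@sha256:abc...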

Examples

yaml
name: ci-cd

on:
  push: { branches: [main] }
  pull_request:

env:
  REGISTRY: ghcr.io
  IMAGE: ${{ github.repository }}

jobs:
  build:
    runs-on: ubuntu-latest
    permissions:
      contents: read
      packages: write
      id-token: write  # OIDC for Sigstore
    outputs:
      digest: ${{ steps.build.outputs.digest }}
    steps:
      - uses: actions/checkout@v4
      - uses: docker/setup-buildx-action@v3
      - uses: docker/login-action@v3
        with:
          registry: ${{ env.REGISTRY }}
          username: ${{ github.actor }}
          password: ${{ secrets.GITHUB_TOKEN }}
      - name: Build & push
        id: build
        uses: docker/build-push-action@v5
        with:
          push: true
          tags: |
            ${{ env.REGISTRY }}/${{ env.IMAGE }}:${{ github.sha }}
            ${{ env.REGISTRY }}/${{ env.IMAGE }}:latest
          cache-from: type=gha
          cache-to: type=gha,mode=max
          provenance: true
          sbom: true
      - name: Test
        run: docker run --rm ${{ env.REGISTRY }}/${{ env.IMAGE }}:${{ github.sha }} npm test
      - name: Scan
        uses: aquasecurity/trivy-action@master
        with:
          image-ref: ${{ env.REGISTRY }}/${{ env.IMAGE }}:${{ github.sha }}
          severity: HIGH,CRITICAL
          exit-code: '1'
      - uses: sigstore/cosign-installer@v3
      - name: Sign
        run: cosign sign --yes ${{ env.REGISTRY }}/${{ env.IMAGE }}@${{ steps.build.outputs.digest }}

  deploy-staging:
    needs: build
    if: github.ref == 'refs/heads/main'
    runs-on: ubuntu-latest
    steps:
      - run: |
          # Update staging to use the new digest
          kubectl set image deploy/api api=${{ env.REGISTRY }}/${{ env.IMAGE }}@${{ needs.build.outputs.digest }}

Build → test → scan → sign → push → staging deploy by digest. Single source of truth.

Promotion via image retag

bash
# CI built and pushed myreg/api:abc123def  (commit SHA)

# Staging gets it
kubectl set image deploy/api api=myreg/api:abc123def

# After verification, promote to prod
docker pull myreg/api:abc123def
docker tag myreg/api:abc123def myreg/api:1.2.3
docker tag myreg/api:abc123def myreg/api:prod-stable
docker push myreg/api:1.2.3
docker push myreg/api:prod-stable

# Prod uses the same digest, just different tags
kubectl set image deploy/api api=myreg/api:1.2.3

The SAME image (same digest) goes from staging to prod. The promotion is just relabeling.
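
For large images the pull/retag/push round-trip is wasteful; buildx can add the tags server-side against the same digest:

bash
# Point new tags at the existing manifest without moving any layers
docker buildx imagetools create --tag myreg/api:1.2.3 myreg/api:abc123def
docker buildx imagetools create --tag myreg/api:prod-stable myreg/api:abc123def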

Multi-arch build for hybrid x86/ARM clusters

yaml
- uses: docker/build-push-action@v5
  with:
    platforms: linux/amd64,linux/arm64
    push: true
    tags: myorg/api:${{ github.sha }}
    cache-from: type=gha
    cache-to: type=gha,mode=max

One CI step builds for both architectures. Consumers pull whichever matches their CPU. Important for clusters mixing Graviton and x86 nodes.
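
To confirm both platforms actually landed in the pushed manifest list:

bash
# Shows the per-platform manifests behind the tag
docker buildx imagetools inspect myorg/api:abc123def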
