
How to build multi-platform Docker images (ARM + AMD64)?

Multi-platform Docker images are images built once and tagged under one name, but containing layers compiled for several CPU architectures (e.g. linux/amd64 and linux/arm64). The daemon transparently pulls the right variant at runtime. This matters because Apple Silicon Macs, AWS Graviton instances, and Raspberry Pi all run on ARM, while most laptops and CI runners are x86-64.

Theory

TL;DR

  • Use docker buildx build --platform linux/amd64,linux/arm64 -t name --push .
  • Output is a manifest list: one tag pointing to N per-arch images.
  • Buildx by default uses QEMU emulation for non-native arches. Slow but works.
  • For speed, set up native builders per architecture (a real ARM box).
  • The classic local image store cannot hold multi-arch output; --push to a registry, or use --output type=oci for an OCI layout. (Docker's containerd image store removes this limitation.)
  • Common pain points: native deps that break under emulation, CGO_ENABLED=0 for Go, prebuilt wheels for Python.

Why multi-platform

A single CPU architecture used to be the norm. Today:

  • Apple Silicon (M1/M2/M3) developers run ARM locally.
  • AWS Graviton, Ampere, Oracle ARM offer 30-40% better price/performance.
  • Raspberry Pi, IoT, edge need ARM-32 or ARM-64.
  • Server farms still mostly x86-64.

If your image only ships linux/amd64, an Apple Silicon dev pulling it either runs it under transparent QEMU emulation (slow) or hits a "no matching manifest" error, depending on daemon configuration. Multi-arch fixes this with one tag.

Manifest list (a.k.a. fat manifest)

The registry stores one extra manifest per tag that points to per-arch images:

json
{
  "manifests": [
    {
      "platform": { "architecture": "amd64", "os": "linux" },
      "digest": "sha256:abc..."
    },
    {
      "platform": { "architecture": "arm64", "os": "linux" },
      "digest": "sha256:def..."
    }
  ]
}

When you docker pull myorg/app:1.0 on an ARM Mac, the daemon reads the manifest list, picks the arm64 digest, pulls the right layers. Same tag, same command, different bytes.
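The selection step can be sketched in shell. The arch_for_machine helper below is hypothetical (not part of Docker); it maps uname -m output to the "architecture" value the daemon matches against the manifest list:

```shell
#!/bin/sh
# Hypothetical helper (not a Docker API): map `uname -m` output to the
# "architecture" field the daemon matches against the manifest list.
arch_for_machine() {
  case "$1" in
    x86_64)        echo "amd64"  ;;
    aarch64|arm64) echo "arm64"  ;;
    armv7l)        echo "arm/v7" ;;
    *)             echo "unsupported machine: $1" >&2; return 1 ;;
  esac
}

arch_for_machine x86_64   # prints: amd64
arch_for_machine aarch64  # prints: arm64
```

On an ARM Mac, uname -m reports arm64, so the daemon would follow the sha256:def... digest from the manifest list above.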

How buildx builds it

Buildkit (the engine behind buildx) compiles each architecture in a separate context. Two options:

  1. Single builder + QEMU. Buildkit registers binfmt_misc to translate non-native binaries via QEMU. The host runs RUN instructions for each platform via emulation. Works everywhere, slow for compiled languages.
  2. Multiple native builders. Buildkit farms out the arm64 work to a real ARM machine, the amd64 work to an x86 machine. Fast, requires infrastructure.

Examples

Quick start: emulated multi-arch

bash
# Enable QEMU once per host (Docker Desktop does this automatically)
docker run --privileged --rm tonistiigi/binfmt --install all

# Create a buildx builder that uses the docker-container driver
docker buildx create --name multi --driver docker-container --use
docker buildx inspect --bootstrap

# Build for two platforms, push to registry
docker buildx build \
  --platform linux/amd64,linux/arm64 \
  -t myorg/app:1.0 \
  --push \
  .

The --push is required because multi-arch images cannot live in the classic local Docker image store, which holds a single architecture per tag. (Docker configured with the containerd image store can --load multi-arch results.) The build pushes both arches and the manifest list in one step.
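If you cannot push during the build, the OCI exporter writes the multi-arch result to a tarball instead; a sketch (image and file names illustrative):

```bash
# Export the multi-arch build as an OCI archive instead of pushing
docker buildx build \
  --platform linux/amd64,linux/arm64 \
  --output type=oci,dest=app-oci.tar \
  .

# Later, upload it with a tool that understands OCI layouts, e.g. skopeo
skopeo copy oci-archive:app-oci.tar docker://myorg/app:1.0
```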

Verify:

bash
docker buildx imagetools inspect myorg/app:1.0
# Manifest:  docker.io/myorg/app:1.0@sha256:...
# MediaType: application/vnd.oci.image.index.v1+json
# Manifests:
#   linux/amd64
#   linux/arm64

CI-friendly: GitHub Actions

yaml
name: build
on: [push]
jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: docker/setup-qemu-action@v3
      - uses: docker/setup-buildx-action@v3
      - uses: docker/login-action@v3
        with:
          username: ${{ secrets.DOCKERHUB_USER }}
          password: ${{ secrets.DOCKERHUB_TOKEN }}
      - uses: docker/build-push-action@v5
        with:
          platforms: linux/amd64,linux/arm64
          push: true
          tags: myorg/app:${{ github.sha }}
          cache-from: type=gha
          cache-to: type=gha,mode=max

setup-qemu-action registers binfmt; cache-from: type=gha reuses GitHub Actions cache across runs (huge speedup).
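Outside GitHub Actions, the registry cache backend gives the same cross-run reuse; a sketch (the buildcache tag name is illustrative):

```bash
# Persist build cache in the registry so any runner can reuse it
docker buildx build \
  --platform linux/amd64,linux/arm64 \
  --cache-from type=registry,ref=myorg/app:buildcache \
  --cache-to type=registry,ref=myorg/app:buildcache,mode=max \
  -t myorg/app:1.0 --push .
```

mode=max caches intermediate layers too, not just the final ones, which matters most for multi-stage builds.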

Native ARM builder (production)

Emulation can take 10x longer than native. For real workloads, register a separate ARM machine:

bash
# On the x86 "orchestrator" host
docker buildx create \
  --name multi-native \
  --node x86 \
  --platform linux/amd64
docker buildx create \
  --append \
  --name multi-native \
  --node arm \
  --platform linux/arm64 \
  ssh://user@arm-host
docker buildx use multi-native
docker buildx inspect --bootstrap

# Build: each arch runs natively on its node
docker buildx build --platform linux/amd64,linux/arm64 -t myorg/app:1.0 --push .

Now linux/amd64 work runs on the local x86 machine and linux/arm64 work runs on the remote ARM box; buildx merges results into one manifest list.
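You can confirm which node covers which platform with docker buildx ls (output abbreviated and illustrative):

```bash
docker buildx ls
# NAME/NODE       DRIVER/ENDPOINT        STATUS    PLATFORMS
# multi-native    docker-container
#   x86           unix:///var/run/...    running   linux/amd64*
#   arm           ssh://user@arm-host    running   linux/arm64*
```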

Building only the local platform during dev

bash
# When iterating, you do not need both arches
docker buildx build --platform local -t myorg/app:dev --load .

--load brings the result into the local image store (impossible with multi-arch). Use --load for dev, --push for release.

Common pitfalls

Native dependencies break under emulation

A Python package that compiles C extensions can fail during an emulated build, e.g. when the target-arch base image lacks the development headers the build script expects, and even successful compiles are painfully slow under QEMU:

fatal error: Python.h: No such file or directory

Fix: use prebuilt wheels (PyPI manylinux), or build natively per arch.
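One way to force wheels and fail fast when none exist for the target arch (the requirements file is illustrative):

```bash
# In the Dockerfile's RUN step (or locally): refuse source builds entirely
pip install --only-binary=:all: -r requirements.txt
# If this fails on one arch, you know you need a native builder or proper
# build headers, instead of a silently slow QEMU compile.
```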

Go: CGO_ENABLED and cross-compilation

Go cross-compiles natively without emulation:

dockerfile
FROM --platform=$BUILDPLATFORM golang:1.22 AS build
ARG TARGETOS TARGETARCH
COPY . /src
WORKDIR /src
RUN CGO_ENABLED=0 GOOS=$TARGETOS GOARCH=$TARGETARCH go build -o app

FROM gcr.io/distroless/static
COPY --from=build /src/app /app
ENTRYPOINT ["/app"]

$BUILDPLATFORM is the host arch (fast); $TARGETPLATFORM is the output arch. Go compiles without emulation; only the final layer is per-arch.
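The same cross-compilation works outside Docker, which is a quick way to sanity-check the pattern before wiring it into a Dockerfile (requires a local Go toolchain):

```bash
# Cross-compile for linux/arm64 on any host, no emulation involved
GOOS=linux GOARCH=arm64 CGO_ENABLED=0 go build -o app-arm64 .

# Inspect the result; expect something like:
#   app-arm64: ELF 64-bit LSB executable, ARM aarch64, statically linked, ...
file app-arm64
```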

Different package managers per arch

dockerfile
FROM alpine:3.18
ARG TARGETARCH
# alpine repos auto-resolve per arch; no extra work needed
RUN apk add --no-cache libstdc++

But for distros without auto-resolution, you might need:

dockerfile
ARG TARGETARCH
RUN case "$TARGETARCH" in \
      amd64) URL=https://example.com/amd64.tar.gz ;; \
      arm64) URL=https://example.com/arm64.tar.gz ;; \
    esac \
    && wget -O - "$URL" | tar xz

docker manifest vs buildx

The older docker manifest command can stitch existing per-arch images into a manifest list manually. Buildx automates this and is the modern path. Prefer buildx unless you have legacy tooling.
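For reference, the manual flow with docker manifest looks like this (the per-arch tags are assumed to already exist in the registry):

```bash
# Stitch two already-pushed single-arch images into one manifest list
docker manifest create myorg/app:1.0 \
  myorg/app:1.0-amd64 \
  myorg/app:1.0-arm64
docker manifest push myorg/app:1.0
```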

Real-world usage

  • Public images on Docker Hub: ship multi-arch (Postgres, Redis, nginx all do).
  • Internal apps: at minimum amd64 + arm64 if any dev runs Apple Silicon or any prod runs Graviton.
  • CI matrix: use buildx + cache-from: type=gha to avoid rebuilding both arches every push.
  • Edge deployments: include linux/arm/v7 for Raspberry Pi class devices.

Follow-up questions

Q: Can I run a multi-arch image without buildx?


A: Yes, the daemon handles pull-time selection automatically. You only need buildx to produce multi-arch images.

Q: How much slower is QEMU emulation?


A: For compiled languages (Rust, C++), 5-10x slower than native. For interpreted (Python, Node.js install), 2-3x. For Go (cross-compile, no emulation), nearly free.

Q: What is --platform=$BUILDPLATFORM?


A: It says "run this stage on the host arch, regardless of the target." Use it for build steps that produce arch-specific output (Go cross-compile). The final stage uses $TARGETPLATFORM to assemble per-arch images.

Q: (Senior) How do you debug a multi-arch build that fails only for arm64?


A: Build that platform alone with docker buildx build --platform linux/arm64 --load ., then docker run --rm -it --platform linux/arm64 myorg/app:dev sh to poke around the image under emulation. Re-run the build with --progress=plain for verbose step-by-step logs. If the error is exec format error, you copied an amd64 binary into the arm64 stage (often a missing --platform=$BUILDPLATFORM on a build stage).
