How to transfer Docker images between hosts without a registry?
Transferring Docker images without a registry is the answer when you have an image on host A and need it on host B but cannot or do not want to push to a registry. The docker save/load pair is the canonical solution.
Theory
TL;DR
- docker save = export an IMAGE (with all its layers and metadata) to a tar file or stdout.
- docker load = import an image tar back into another daemon.
- Different from docker export/import, which work on containers and produce a flattened single-layer image without metadata.
- Use cases: air-gapped environments, USB transfer between dev machines, image promotion between secure networks, offline demos.
- For routine cross-host transfer, a registry is much better (deduplication, auth, pull-by-digest).
Quick example
# Source host: package the image
$ docker save myapp:1.0 -o myapp.tar
$ ls -lh myapp.tar
-rw------- 1 me me 256M ... myapp.tar
# Compress for faster transfer
$ docker save myapp:1.0 | gzip > myapp.tar.gz
# Transfer (any way you like)
$ scp myapp.tar.gz user@dest:/tmp/
# Destination host: load it
$ docker load -i /tmp/myapp.tar.gz # docker load accepts gzip/bzip2/xz-compressed tars directly
# OR
$ gunzip -c /tmp/myapp.tar.gz | docker load
Loaded image: myapp:1.0
$ docker run --rm myapp:1.0

The image is now on the destination host as if it had been pulled from a registry, with the same name, tag, and layers.
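An optional integrity check around the transfer catches truncated or corrupted copies; a minimal sketch using sha256sum (the stand-in archive here is illustrative, not a real image tar):

```shell
# Stand-in for the real myapp.tar.gz produced by docker save | gzip
printf 'image-bytes' > myapp.tar.gz
# Source host: record the checksum and ship it alongside the archive
sha256sum myapp.tar.gz > myapp.tar.gz.sha256
# Destination host: verify before running docker load
sha256sum -c myapp.tar.gz.sha256
```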
One-liner over SSH
No intermediate file:
$ docker save myapp:1.0 | gzip | ssh user@dest 'gunzip | docker load'

Streams over the wire, no temp file. Works great for one-off transfers.
save vs export
This trips people up.
| | docker save | docker export |
|---|---|---|
| Operates on | image | container |
| Layers | preserved (each layer in tar) | flattened to one layer |
| Image metadata | preserved (CMD, ENV, history, etc.) | lost |
| Reverse with | docker load | docker import |
| Use case | sharing images | snapshotting filesystem only |
# IMAGE save → load (recommended)
docker save myimg -o image.tar
docker load -i image.tar
# CONTAINER export → import (filesystem only)
docker export <container> -o fs.tar
docker import fs.tar imported:1.0 # creates a new image with no metadata

99% of the time you want save/load. export/import is for legacy or specialty cases (rebuilding an image's filesystem from scratch).
Multiple images in one tar
docker save myapp:1.0 myapp:1.1 nginx:1.27 -o multi.tar
docker load -i multi.tar

Useful for shipping a whole stack to an air-gapped environment in one file.
Common mistakes
Confusing save with export
# WRONG: container export, loses CMD/ENV/etc.
$ docker export api > api.tar
$ docker import api.tar api:fresh
$ docker run api:fresh # error: no command specified
# RIGHT: image save, preserves everything
$ docker save myimg:1.0 > myimg.tar
$ docker load < myimg.tar
$ docker run myimg:1.0 # works as before

Saving to stdout without redirection
# WRONG: writing the tar to your terminal
$ docker save myimg
# recent Docker CLIs refuse ("cowardly refusing to save to a terminal");
# older ones dump raw binary and garble the terminal
# RIGHT
$ docker save myimg -o myimg.tar
# OR
$ docker save myimg > myimg.tar

Always -o file or > file. The default is stdout, which is harmless if you redirect, terminal-breaking if you do not.
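Recent Docker CLIs guard against this by checking whether stdout is a terminal before emitting binary data. The same guard is easy to sketch in plain shell; emit_tar here is a hypothetical stand-in for docker save:

```shell
emit_tar() {
  # [ -t 1 ] is true when stdout is attached to a terminal
  if [ -t 1 ]; then
    echo "refusing to write a binary tar to a terminal; redirect or use -o" >&2
    return 1
  fi
  printf 'tar-bytes'   # stand-in for the actual image tar stream
}
emit_tar > out.tar     # stdout is redirected to a file, so this succeeds
cat out.tar
```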
Forgetting that the tar is huge
A save tar is roughly the size of the image (multi-stage with 25MB final → 25MB tar; 1GB image → 1GB tar). Compress for transfer:
docker save myimg | zstd > myimg.tar.zst # zstd is faster than gzip
docker save myimg | xz > myimg.tar.xz # xz is smaller, slower

Image name not preserved on import
# `docker import` does NOT preserve names
$ docker export api > fs.tar
$ docker import fs.tar
sha256:abc123... # untagged
# need to tag manually

docker load preserves names; docker import does not. Another reason to use save/load.
When to use save/load (and when not)
Use save/load when:
- Air-gapped or offline transfer (no network between hosts).
- One-off transfer between two known machines.
- Small team that has not set up a registry yet.
- Backup/archive of specific image versions.
- Initial bootstrap of a registry (load images first, then push).
Use a registry instead when:
- Multiple consumers need the image (each does docker pull).
- You want pull-by-digest, signing, vulnerability scanning, RBAC.
- The transfer is part of a CI/CD pipeline.
- You care about deduplication across versions (registries store layers once).
Real-world usage
- Air-gapped enterprise environments: all images flow as save tars between security zones.
- Customer on-premise deployments: ship the application as a tar that the customer docker loads on their network.
- Disaster recovery: save critical images to backup storage as tars; load them if the registry is lost.
- Embedded / edge devices: preload images via docker load from a USB stick during manufacturing.
- Image promotion across security boundaries: save in dev, scan offline, load to prod through a one-way diode.
Follow-up questions
Q: What is the difference between save and pull?
A: pull fetches an image from a registry over the network. save exports a local image to a tar file. They move images in opposite directions, through different mediums.
Q: Will the image's history be preserved?
A: With docker save/load, yes — full image history including all layers and metadata. With docker export/import, no — flattened to a single layer.
Q: Does the destination host need to have the same Docker version?
A: OCI-compliant tarballs are interoperable across modern Docker versions. Very old daemons may have format issues, but for any Docker from the past several years it works.
Q: What does the tar actually contain?
A: A directory per layer (each containing a tar of that layer's filesystem changes), a manifest.json, a config blob, and repository/tag information. It is essentially a complete Docker/OCI image layout bundled into a single tar archive.
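The layout is easy to poke at with plain tar. This sketch fabricates a minimal stand-in (file names and the deadbeef layer directory are illustrative, not a real saved image):

```shell
# Fabricate a stand-in for the structure docker save produces
mkdir -p img/deadbeef
printf '[{"Config":"config.json","RepoTags":["myapp:1.0"]}]' > img/manifest.json
printf '{}' > img/config.json
: > img/deadbeef/layer.tar          # each layer dir holds a tar of filesystem changes
tar -cf myapp-image.tar -C img .
tar -tf myapp-image.tar              # same command lists the contents of a real save tar
```

On a real save tar, `tar -xOf image.tar manifest.json` prints the manifest without unpacking anything.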
Q: (Senior) How do you script multi-image transfer for an air-gapped deployment?
A: Build a manifest of needed images (grep image: compose.yaml | awk ...), pull each on the connected side, save them all to one tar (docker save img1 img2 img3 -o stack.tar), zstd-compress for transfer, ship across the gap, load on the air-gapped side. Add a digest verification step on the receiving side: docker images --digests | tee received.txt compared against the source's digest list, to catch tampered or partial transfers.
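The receiving-side digest check from that answer can be sketched like this (file names and digest values are illustrative; on real hosts each list would come from docker images --digests):

```shell
# Digest list recorded on the connected (source) side
cat > source-digests.txt <<'EOF'
myorg/api:1.2.3 sha256:aaaa
nginx:1.27 sha256:bbbb
EOF
# Digest list collected on the air-gapped side after docker load
cat > received-digests.txt <<'EOF'
myorg/api:1.2.3 sha256:aaaa
nginx:1.27 sha256:bbbb
EOF
# Any difference indicates a tampered or partial transfer
if diff -u source-digests.txt received-digests.txt; then
  echo "digests match"
else
  echo "MISMATCH: tampered or partial transfer" >&2
  exit 1
fi
```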
Examples
Air-gapped deployment of a Compose stack
# On connected build machine
$ docker compose -f compose.yaml pull # ensure all images present
$ images=$(grep image: compose.yaml | awk '{print $2}')
$ docker save $images | gzip > stack-2026-04-30.tar.gz # all images in one compressed file
# 1.2 GB ... transfer through whatever channel allowed
# On air-gapped target
$ docker load -i stack-2026-04-30.tar.gz
Loaded image: nginx:1.27-alpine
Loaded image: postgres:16
Loaded image: myorg/api:1.2.3
$ docker compose up -d

One tar, one load, and the whole stack runs offline.
Stream over SSH
$ docker save myapp:1.0 | ssh user@destination 'docker load'
Loaded image: myapp:1.0

No temporary file. Useful when disk space is tight on either side.
Compare with registry approach
# save/load: simple, no setup, slow for repeated transfers
docker save myapp:1.0 | ssh dest 'docker load'
# 30 seconds, full image transferred
# Registry: dedup, faster on repeated transfers
docker push myreg.example.com/myapp:1.0
# On dest:
docker pull myreg.example.com/myapp:1.0
# 5 seconds, only changed layers transferred

For everyday workflows, registries win on speed and tooling. For one-off transfers, save/load wins on simplicity.
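The speed gap is simple arithmetic; a back-of-envelope sketch (the sizes and link speed are illustrative assumptions):

```shell
size_full=$((1024*1024*1024))   # 1 GiB: save/load ships the full tar every time
size_delta=$((50*1024*1024))    # 50 MiB: a registry pull fetches only changed layers
rate=$((100*1000*1000/8))       # bytes/s on a 100 Mbit/s link
echo "full transfer: ~$((size_full / rate)) s, changed layers only: ~$((size_delta / rate)) s"
```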
Short Answer
Use docker save on the source host to write the image (all layers plus metadata) to a tar, move the file by any means (scp, USB, an SSH pipe), and docker load it on the destination. Compress with gzip or zstd for transfer. Do not confuse this with docker export/import, which flatten a container's filesystem and drop metadata. For repeated or multi-consumer transfers, prefer a registry.