What are dangling Docker images and how to remove them?
Dangling images accumulate naturally in any Docker setup that builds images frequently. They are not corruption or a bug — they are the byproduct of how tags work. Cleaning them up is routine disk hygiene.
Theory
TL;DR
- A dangling image is an image with no tag and no other image referencing it as a parent. It is reachable only by digest.
- They appear most often after `docker build` reuses a tag: the previous image holding that tag becomes dangling.
- Visible in `docker images` as rows where REPOSITORY and TAG are both `<none>`.
- Safe to delete: nothing references them; nothing breaks if they vanish.
- `docker image prune` removes ALL dangling images. `docker image prune -a` is more aggressive: it also removes unused-but-tagged images.
Quick example
```
# Build the same tag twice → first image becomes dangling
$ docker build -t myapp:1.0 .
$ # ... edit Dockerfile ...
$ docker build -t myapp:1.0 .

$ docker images
REPOSITORY   TAG      IMAGE ID       CREATED          SIZE
myapp        1.0      4f06b3e2c0c1   2 minutes ago    180MB
<none>       <none>   8a3f2d1c9b8e   10 minutes ago   180MB   ← DANGLING
nginx        1.27     a3b4c5d6e7f8   3 weeks ago      54MB

# List only dangling
$ docker images -f dangling=true
REPOSITORY   TAG      IMAGE ID       CREATED          SIZE
<none>       <none>   8a3f2d1c9b8e   10 minutes ago   180MB

# Clean them up
$ docker image prune -f
Deleted Images:
deleted: sha256:8a3f2d1c9b8e...

Total reclaimed space: 180MB
```

The second build replaced `myapp:1.0`'s tag pointer; the old image is still on disk but tag-less. `prune` removes it.
Dangling vs unused
This is the distinction that trips people up.
| | Dangling | Unused |
|---|---|---|
| Has a tag? | No (`<none>:<none>`) | Yes |
| Referenced by a container? | No | No |
| Removed by `docker image prune` | Yes | No |
| Removed by `docker image prune -a` | Yes | Yes |

Dangling = untagged + nothing references it. Unused = tagged, but no current container is using it.

A tagged image you pulled three months ago and never started a container from is unused but not dangling. Plain `prune` will not touch it; `prune -a` will.
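To see the two sets side by side on a live host, here is a rough sketch. The first command is the standard filter; the second pipeline is an illustration of my own, not an official Docker subcommand, and assumes bash (for process substitution) plus GNU `comm` and `xargs -r`:

```
# Dangling images: untagged, directly filterable
docker images --filter dangling=true

# Images no container was created from (this set includes both dangling
# and unused-but-tagged images): compare all image IDs against the image
# IDs every container (running or stopped) was created from.
comm -23 \
  <(docker images --no-trunc --format '{{.ID}}' | sort -u) \
  <(docker ps -aq | xargs -r docker inspect --format '{{.Image}}' | sort -u)
```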
How dangling images appear
The most common ways:
- Tag reuse on build. Run `docker build -t myapp:1.0` twice; the first build's image loses its tag.
- Re-pull with the same tag. `docker pull nginx:latest` after upstream pushes a new `latest`; the old image is now untagged (demonstrated below).
- Multi-stage builds. Each intermediate stage produces an untagged image. With BuildKit's default settings these are cached invisibly; with the legacy builder they show up as dangling.
- Failed builds. A `docker build` that errors midway leaves intermediate untagged images.

BuildKit (the default builder since Docker Engine 23.0) is much better at not leaving these around. The legacy builder produced more dangling images per build.
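A minimal illustration of the re-pull case, assuming upstream has actually published a newer `nginx:latest` since your last pull:

```
docker pull nginx:latest          # the tag moves to the new image
docker images nginx               # the new image now owns the tag
docker images -f dangling=true    # the previously pulled image shows up here
```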
The cleanup commands, in order of aggression
```
# Just dangling images (safe, what you usually want)
docker image prune
docker image prune -f        # skip confirmation

# Dangling AND any tagged image with no container
docker image prune -a

# Everything: stopped containers, dangling images, unused networks, build cache
docker system prune

# Same plus volumes (DESTRUCTIVE: wipes named volumes too)
docker system prune -a --volumes

# Filtered
docker image prune -a --filter 'until=24h'   # only images older than 24h
```

Daily-ops habit: `docker system prune -f` weekly on dev machines reclaims gigabytes.

Production care: never run `docker system prune -a --volumes` blindly; the `--volumes` flag deletes named volumes too, which can be your databases.
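Before pruning, you can preview the candidates. The dangling filter lists exactly the set a plain `docker image prune` would remove:

```
# What plain `docker image prune` would delete
docker images --filter dangling=true

# Per-image and per-volume detail (sizes, container counts)
# for judging what a prune -a would sweep
docker system df -v
```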
Common mistakes
Confusing prune with prune -a
```
# Removes only untagged dangling images
$ docker image prune

# Removes ALL images that no container is using (including tagged ones you might want)
$ docker image prune -a
```

If you have `nginx:1.27`, `node:22`, `postgres:16` pulled but no container running them right now, plain `prune` keeps them; `prune -a` deletes them. The surprise comes the first time you re-pull a 200MB image because `prune -a` swept it.
Running system prune --volumes on prod
```
# DESTRUCTIVE: also deletes named volumes (databases!)
$ docker system prune -af --volumes
```

`--volumes` removes any volume not currently mounted by a running container. If your DB is briefly down for a deploy, its volume is unattached and gets nuked. Never use `--volumes` on prod without explicit verification.
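A cautious pre-flight for anything involving `--volumes`: list what exists and what Docker currently considers unreferenced.

```
docker volume ls                      # everything, including named data volumes
docker volume ls -f dangling=true     # volumes no container currently references
# If a database volume appears in the second list because its container is
# stopped for a deploy, a --volumes prune at that moment would delete it.
```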
Thinking dangling = corrupt
```
$ docker images
REPOSITORY   TAG      IMAGE ID       SIZE
<none>       <none>   8a3f2d1c9b8e   180MB
```

Dangling images are not corrupt; they are perfectly valid OCI images that simply lost their tag. You can `docker run <image-id>` against them just fine. They are removable, not broken.
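You can prove this to yourself with the dangling image from the listing above (the ID is illustrative, substitute your own; the `echo` assumes the image contains a shell):

```
docker run --rm 8a3f2d1c9b8e echo "still perfectly runnable"
docker tag 8a3f2d1c9b8e myapp:rollback   # re-tagging also rescues it from prune
```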
Pruning on a CI runner with active builds
A `docker image prune -a` mid-build can race with the build cache and slow subsequent builds. Schedule cleanup between builds, not during (see the lock sketch below).
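One hedged way to enforce that, assuming the runner's build wrapper takes the same lock (`flock` is from util-linux; the lock path is hypothetical):

```
# Skip cleanup entirely if a build currently holds the lock
flock -n /var/lock/ci-build.lock docker image prune -af \
  || echo "build in progress, skipping prune"
```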
Real-world usage
- Dev machines: weekly `docker system prune -f` (and `--volumes` only when you are sure nothing important is unmounted).
- CI runners: a cleanup hook at the end of each job: `docker container prune -f && docker image prune -f`.
- Production hosts: scheduled `docker image prune -af --filter 'until=168h'` (one week) via cron; keeps recent images, removes old ones.
- Disk-pressure response: `docker system df` first to see where space is going, then a targeted prune (sketched below).
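A disk-pressure responder might look like the following sketch. The threshold, path, and retention windows are illustrative, and `df --output` assumes GNU coreutils:

```
#!/bin/sh
# Prune only when the filesystem backing /var/lib/docker is over 85% full
set -eu
usage=$(df --output=pcent /var/lib/docker | tail -1 | tr -dc '0-9')
if [ "$usage" -gt 85 ]; then
  docker system df                                # log where the space went
  docker image prune -af --filter 'until=168h'    # images older than a week
  docker builder prune -f --filter 'until=72h'    # build cache older than 3 days
fi
```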
Follow-up questions
Q: What is the difference between `docker image prune` and `docker rmi`?
A: `prune` is bulk and filterable; `rmi <id>` targets a specific image (or list of images). Use `prune` when you do not care which exact images go; use `rmi` when you want to delete a particular one.
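Side by side, under the assumption that `myapp:1.0` is the image you want gone:

```
# Targeted: remove one specific image by tag or ID
docker rmi myapp:1.0

# Bulk: remove whole categories, optionally filtered
docker image prune -f --filter 'until=24h'
```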
Q: How do I see disk usage by Docker?
A: `docker system df` shows totals (images, containers, volumes, build cache) and reclaimable space. `docker system df -v` is verbose, with per-image and per-volume sizes.
Q: Can I keep dangling images for cache reasons?
A: Generally no: they are not used as build cache (BuildKit's cache lives elsewhere). The only time you might keep them is if you are using the legacy builder and a specific intermediate is reused. With modern BuildKit, just prune.
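BuildKit's cache can be inspected and pruned on its own (`docker buildx du` requires the buildx plugin):

```
docker buildx du            # per-entry build cache usage
docker builder prune -f     # clear build cache without touching images
```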
Q: Why does prune not remove my image even though no container is running?
A: Because the image has a tag. Plain `prune` only touches dangling (untagged) images. Add `-a` to also delete unused tagged ones.
Q: (Senior) How would you set up automatic Docker cleanup on a production server?
A: A daily systemd timer or cron job that runs `docker container prune -f --filter 'until=24h' && docker image prune -af --filter 'until=168h' && docker builder prune -f --filter 'until=72h'`. Avoid `--volumes`. Monitor disk with `docker system df` before and after to verify the cleanup is doing what you expect. For large clusters, this lives in the host's config management (Ansible/Chef), not per-container.
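As a concrete sketch, the script a daily systemd timer or cron entry would invoke (retention windows copied from the answer above; adjust for your deploy cadence):

```
#!/bin/sh
# Daily Docker cleanup for a production host; never touches volumes.
set -eu
docker container prune -f --filter 'until=24h'
docker image prune -af --filter 'until=168h'
docker builder prune -f --filter 'until=72h'
docker system df    # log the after-state so monitoring can verify the effect
```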
Examples
Cleanup loop for a CI runner
```
#!/bin/sh
# Runs as a post-job hook (.github/workflows/cleanup.yml or Jenkins postBuild)
set -e
docker container prune -f --filter 'until=1h'
docker image prune -f
docker builder prune -f --filter 'until=24h'
# We keep volumes; they hold inter-job caches if any.
```

A tiny cleanup runs after each job; over a week it keeps the runner's /var/lib/docker from blowing up.
Inspecting before pruning
```
$ docker system df
TYPE            TOTAL   ACTIVE   SIZE      RECLAIMABLE
Images          47      12       8.421GB   6.3GB (74%)
Containers      18      4        412MB     287MB (69%)
Local Volumes   9       5        3.2GB     180MB (5%)
Build Cache     128     0        2.1GB     2.1GB (100%)

$ docker image prune -af
Total reclaimed space: 6.3GB

$ docker builder prune -f
Total reclaimed space: 2.1GB
```

The numbers tell you where the space is going. The image prune freed 6.3GB; the build cache another 2.1GB. Volumes were left alone (only 180MB reclaimable anyway, and they likely hold real data).
Bulk delete with a filter
```
$ docker images -f dangling=true -q | xargs -r docker rmi

# Or simply:
$ docker image prune -f
```

The two forms are equivalent. `prune` is the modern one-liner; the `xargs` form predates `prune` and still works.