Where are Docker volumes stored on the host?

Docker volumes live somewhere on disk — but "somewhere" depends on your OS, on whether you are using volumes or bind mounts, and on Docker's storage configuration. Knowing the answer helps with backups, debugging permissions, and disaster recovery.

Theory

TL;DR

  • Linux: /var/lib/docker/volumes/<volume-name>/_data/ (root-owned, readable with sudo).
  • macOS / Windows: same path, but inside the Docker Desktop Linux VM. Not directly accessible from Finder / Explorer.
  • Find a specific volume's path: docker volume inspect <name> --format '{{.Mountpoint}}'.
  • The location is configurable via the daemon's data-root option.
  • Never edit files directly while a container holds them open — let Docker mediate.

Quick example

bash
# Create a volume and write to it via a container
$ docker volume create demo
$ docker run --rm -v demo:/data alpine sh -c 'echo hello > /data/note.txt'

# Where does the file live?
$ docker volume inspect demo --format '{{.Mountpoint}}'
/var/lib/docker/volumes/demo/_data

# On Linux, sudo reveals it
$ sudo cat /var/lib/docker/volumes/demo/_data/note.txt
hello

# On macOS / Windows: that path is in the VM. From the host, you do not see it.
# Read it via a container instead:
$ docker run --rm -v demo:/data alpine cat /data/note.txt
hello

The Mountpoint field is accurate on every platform; only host-shell visibility differs.

Why Mac / Windows hide the volume path

Docker Desktop on macOS and Windows runs the entire Docker engine inside a Linux VM (because containers need a Linux kernel). Volumes are stored on the VM's filesystem, not on the host's. The host filesystem mostly just sees the VM image file (Docker.raw on Mac, ext4.vhdx on WSL).

Result: ls /var/lib/docker/volumes/... does not work on the Mac/Windows host. You have to either:

  • Read the data via a temporary container (docker run --rm -v vol:/data alpine cat /data/...)
  • Or use bind mounts instead of volumes when host-side editing matters (Compose dev workflows do this)
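The bind-mount route can be sketched like this, assuming Docker is installed and using the public nginx:alpine image (the container name, port, and paths are illustrative):

```shell
# Serve ./site from a container while editing it on the host.
# Bind mounts require an absolute host path, hence "$PWD".
mkdir -p site
echo '<h1>hello</h1>' > site/index.html
docker run --rm -d --name dev-web -p 8080:80 \
  -v "$PWD/site":/usr/share/nginx/html:ro \
  nginx:alpine
# Edits to site/index.html on the host are visible in the container immediately.
```

Because the files live in a directory you own, no sudo and no VM detour are needed.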

Anatomy of /var/lib/docker/volumes/

/var/lib/docker/volumes/
├── metadata.db          # bolt DB of volume metadata
├── pgdata/
│   └── _data/           # actual volume contents
│       ├── PG_VERSION
│       ├── base/
│       └── ...
├── webdata/
│   └── _data/
│       └── index.html
└── 7f8a3e2d.../         # anonymous volume (hash name)
    └── _data/

Each volume = one directory. Inside, _data/ is the actual mounted root. The wrapper directory holds Docker's metadata.

Changing the storage location

If /var/lib/docker is on the wrong disk, point the daemon elsewhere via /etc/docker/daemon.json:

json
{
  "data-root": "/mnt/big-disk/docker"
}

Restart the daemon (systemctl restart docker) and it stores everything (images, containers, volumes) under the new path. Migrating existing data is a separate operation: stop Docker, copy /var/lib/docker to the new path, start Docker.
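Those migration steps, sketched end to end (the disk paths are examples; run this while nothing depends on the daemon):

```shell
# 1. Stop the daemon so nothing writes under the old data-root.
sudo systemctl stop docker

# 2. Copy everything, preserving ownership, permissions, hardlinks, xattrs.
sudo rsync -aHX /var/lib/docker/ /mnt/big-disk/docker/

# 3. Point the daemon at the new path.
echo '{ "data-root": "/mnt/big-disk/docker" }' | sudo tee /etc/docker/daemon.json

# 4. Restart and verify the daemon reports the new root.
sudo systemctl start docker
docker info --format '{{.DockerRootDir}}'
```

Keep the old directory around until you have verified containers and volumes work from the new location.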

Common mistakes

Editing volume files on Linux while the container is running

bash
$ sudo vi /var/lib/docker/volumes/pgdata/_data/postgresql.conf
# Postgres has those files open. Your edit may corrupt the on-disk state.

Always stop the container first, edit, restart. Better: use docker exec and edit through the container, or mount a config file as a separate read-only bind mount and avoid touching the volume.
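One way to make the read-only bind-mount approach concrete, with illustrative names (container pg, image postgres:16):

```shell
# Config lives in the project directory, not inside the volume.
cat > postgresql.conf <<'EOF'
max_connections = 50
EOF

# Data stays in the volume; config arrives as a read-only bind mount,
# so there is never a reason to edit anything under /var/lib/docker.
docker run -d --name pg \
  -v pgdata:/var/lib/postgresql/data \
  -v "$PWD/postgresql.conf":/etc/postgresql/postgresql.conf:ro \
  postgres:16 -c config_file=/etc/postgresql/postgresql.conf
```

To change the config, edit the host file and restart the container; the volume is never touched directly.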

Trying to find the volume on macOS

bash
$ ls /var/lib/docker/volumes
ls: /var/lib/docker/volumes: No such file or directory

The VM has it; the host does not. Use docker run --rm -v <name>:/data alpine ls /data to read its contents from a container.

Deleting /var/lib/docker/volumes/<name> directly

bash
$ sudo rm -rf /var/lib/docker/volumes/myvol

Bypasses Docker's metadata. The volume might still appear in docker volume ls but be broken. Use docker volume rm myvol instead.

Backing up the volume by copying the host path while in use

The volume might have files mid-write. Better: dump via a container.

bash
# Live backup via container; consistent if your app is quiescent
docker run --rm -v pgdata:/data -v $PWD:/backup alpine \
  tar czf /backup/pgdata.tar.gz -C /data .

For databases specifically, prefer the DB's native dump (pg_dump) — file-level copy of a live DB risks corruption.
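A hedged sketch of the native-dump route, assuming a running container named pg with a database app owned by user postgres (all names illustrative):

```shell
# Logical backup: consistent even while the database serves traffic.
docker exec pg pg_dump -U postgres app | gzip > "app-$(date +%F).sql.gz"

# Restore into an empty database later:
# gunzip -c app-2026-04-30.sql.gz | docker exec -i pg psql -U postgres app
```

Unlike a file-level tar of the volume, pg_dump produces a transactionally consistent snapshot without stopping the container.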

Real-world usage

  • Backups: docker run --rm -v <vol>:/data -v $PWD:/backup alpine tar czf /backup/<vol>.tar.gz -C /data . is the cross-platform pattern.
  • Migration to a bigger disk: stop docker, copy /var/lib/docker to new disk, set data-root in daemon.json, restart.
  • Troubleshooting permissions: docker volume inspect to find Mountpoint, sudo ls -la <Mountpoint> to check ownership (Linux), then fix with chown from inside a container with appropriate --user.
  • Disk-full alerts on Linux: du -sh /var/lib/docker/volumes/*/_data | sort -h shows which volume is the elephant.

Follow-up questions

Q: What is the difference between volume and bind-mount storage location?


A: Volumes live in Docker-managed paths under data-root. Bind mounts use whatever host path you specify in -v. Volumes are Docker's responsibility; bind mounts are yours.

Q: Can I share the same /var/lib/docker between two Docker installs?


A: No, never. Two daemons writing to the same data-root corrupts each other. Each Docker install needs its own data-root.

Q: Where is the volume on Docker Desktop for Mac specifically?


A: Inside ~/Library/Containers/com.docker.docker/Data/vms/0/data/Docker.raw (the VM disk image). The path inside the VM is the same /var/lib/docker/volumes/.... To peek inside, use the Docker Desktop terminal or docker run --rm -v <vol>:/data alpine ls /data.

Q: What are anonymous volumes' names?


A: A SHA-like hash, e.g., 7f8a3e2d6c5a.... They appear when an image's VOLUME directive runs without -v <name>:. Find them with docker volume ls -f dangling=true.

Q: (Senior) How do you migrate volumes from one host to another?


A: The portable approach: docker run --rm -v old-volume:/data -v $PWD:/out alpine tar czf /out/data.tar.gz -C /data . on the old host. Move the tarball. On the new host: docker volume create new-volume && docker run --rm -v new-volume:/data -v $PWD:/in alpine tar xzf /in/data.tar.gz -C /data. Direct copy of /var/lib/docker/volumes/<name>/ between hosts works too if both are Linux with the same Docker version, but tar+restore is what you ship in a runbook.

Examples

Inspecting a volume

bash
$ docker volume inspect pgdata
[
    {
        "CreatedAt": "2026-04-15T10:30:00Z",
        "Driver": "local",
        "Labels": {},
        "Mountpoint": "/var/lib/docker/volumes/pgdata/_data",
        "Name": "pgdata",
        "Options": null,
        "Scope": "local"
    }
]

Mountpoint is the answer. Driver local means "a directory under data-root". Other drivers (NFS, AWS EFS) point elsewhere.

Reading volume contents from any OS

bash
$ docker run --rm -v pgdata:/data alpine ls -la /data
total 156
drwx------   19 70   70    4096 ...
-rw-------    1 70   70       3 ... PG_VERSION
drwx------    5 70   70    4096 ... base
-rw-------    1 70   70    4760 ... pg_hba.conf
...

This launches a tiny container, mounts the volume, and runs ls. It works identically on Linux, Mac, and Windows, making it the cross-platform answer to "what is in this volume?".

Backup and restore loop

bash
# Backup
$ docker run --rm \
    -v pgdata:/data \
    -v $PWD:/backup \
    alpine \
    tar czf /backup/pgdata-$(date +%F).tar.gz -C /data .

# Restore (into a fresh empty volume)
$ docker volume create pgdata-restored
$ docker run --rm \
    -v pgdata-restored:/data \
    -v $PWD:/backup \
    alpine \
    tar xzf /backup/pgdata-2026-04-30.tar.gz -C /data

Clean, OS-independent backup pattern. For databases, prefer pg_dump / mysqldump for application-level consistency, but the tar pattern is fine for static volumes (configs, uploads).

