How does Docker bridge networking work?
Docker bridge networking is the default mode for single-host container communication. Underneath, it is a Linux software bridge plus a pair of virtual interfaces per container, all wired together with iptables NAT rules. Knowing this stack is what lets you debug "why can't my containers talk?" in 30 seconds instead of 30 minutes.
Theory
TL;DR
- Docker creates a Linux software bridge on the host (default name `docker0`; user-defined bridges get a `br-<id>` name).
- For each container, Docker creates a veth pair: one end inside the container's network namespace as `eth0`, the other end attached to the bridge.
- All containers on the same bridge are on a private subnet (e.g., `172.17.0.0/16`) and can reach each other directly on any port.
- The host can reach containers via their bridge IP (but not by container name from the host).
- iptables MASQUERADE rules NAT outgoing container traffic to the host's IP. iptables DNAT rules implement `-p`.
- User-defined bridges add embedded DNS by container name. The default `docker0` lacks this.
Quick example
$ docker network create mynet
$ docker run -d --name web --network mynet nginx
# On the host, see the bridge
$ ip link show | grep -E 'br-|docker0'
3: docker0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 ...
7: br-1234abcd: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 ...
# See which veth is attached
$ brctl show br-1234abcd # or: bridge link
bridge name bridge id STP enabled interfaces
br-1234abcd 8000.0242a3f9d2b8 no vethabcd123
# Inside the container
$ docker exec web ip a show eth0
5: eth0@if6: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500
    inet 172.18.0.2/24 brd 172.18.0.255 scope global eth0

The bridge `br-1234abcd` on the host, the `vethabcd123` end, and the matching `eth0@if6` inside the container: three views of the same wire.
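The `@if6` suffix encodes the peer's interface index, so you can confirm the pairing from the host side too. A quick check (the index numbers here are illustrative):

$ ip link show vethabcd123
6: vethabcd123@if5: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 ... master br-1234abcd state UP ...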
The veth pair architecture
Host network namespace          Container network namespace
+--------------------+          +----------------------+
|                    |          |                      |
|  docker0 / br-X ===|==vethXXX==eth0                  |
|  (Linux bridge)    |          |  container process   |
|                    |          |                      |
+--------------------+          +----------------------+
         |
         v  iptables MASQUERADE for outbound
         v  iptables DNAT for -p <host>:<container>
+--------------------+
|  host eth0 / wlan0 |  <->  outside network
+--------------------+

A veth is essentially a virtual cable. One end lives in the host (attached to the bridge); the other end lives in the container's namespace (renamed to `eth0`). Packets that enter one end come out the other.
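For intuition, here is roughly the same wiring done by hand with iproute2. This is a sketch, not what Docker literally executes: the namespace name `demo`, the interface names, and the address are all made up, and it assumes the default `docker0` bridge on `172.17.0.0/16` exists.

$ sudo ip netns add demo                           # stand-in for a container's netns
$ sudo ip link add veth-host type veth peer name veth-ctr
$ sudo ip link set veth-host master docker0 up     # host end onto the bridge
$ sudo ip link set veth-ctr netns demo             # other end into the namespace
$ sudo ip netns exec demo ip link set veth-ctr name eth0
$ sudo ip netns exec demo ip addr add 172.17.0.99/16 dev eth0
$ sudo ip netns exec demo ip link set eth0 up
$ sudo ip netns exec demo ping -c1 172.17.0.1      # the bridge itself answers as gateway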
How container-to-container traffic flows
Container A (172.18.0.2)        Container B (172.18.0.3)
        |                               |
        | eth0                          | eth0
        | vethA                         | vethB
        +---------------+ +-------------+
                        v v
                br-1234abcd (bridge)

A sends a packet to B's IP. The kernel's bridge logic forwards the frame from vethA to vethB at L2. No NAT is involved, and no routing: both containers sit on the same subnet, so the host never touches the packet at L3.
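You can verify the no-NAT claim by sniffing the bridge while one container pings the other; both container IPs appear unmodified. A sketch using the names and addresses from the diagram:

$ sudo tcpdump -i br-1234abcd -n icmp
... IP 172.18.0.2 > 172.18.0.3: ICMP echo request, id 1, seq 1, length 64
... IP 172.18.0.3 > 172.18.0.2: ICMP echo reply, id 1, seq 1, length 64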
On a user-defined bridge, A can also reach B by name (db, api) — Docker injects 127.0.0.11 (its embedded DNS) into the container's /etc/resolv.conf, and that DNS knows about all container names on the same bridge.
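The resolver is visible from inside any container on a user-defined bridge (using the `web` container from the quick example; exact options vary by Docker version):

$ docker exec web cat /etc/resolv.conf
nameserver 127.0.0.11
options ndots:0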
How outbound traffic works
Container wants to reach https://api.github.com. The packet:
- Leaves the container's `eth0` (172.18.0.2 → 140.82.x.x).
- Arrives at the bridge.
- Is routed to the host's outbound interface.
- An iptables `MASQUERADE` rule rewrites the source IP to the host's IP.
- Is sent out to the internet.
Return traffic uses conntrack to find its way back to the container.
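The rule responsible lives in the nat table's POSTROUTING chain. One way to see it (the counters and exact match text vary by Docker version and subnet):

$ sudo iptables -t nat -L POSTROUTING -n -v | grep MASQUERADE
  132 9864 MASQUERADE  all  --  *      !docker0  172.17.0.0/16        0.0.0.0/0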
How -p (port publishing) works
With docker run -p 8080:80 nginx:
- An iptables DNAT rule maps `host:8080 → 172.17.0.2:80`.
- An external request hits `host:8080`.
- iptables rewrites the destination to the container.
- The bridge forwards to the container.
- nginx replies; the reverse path applies.
A docker-proxy userspace process is also started as a fallback for some IPv6 / loopback cases. Most traffic takes the iptables path.
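The fallback process is easy to spot once a port is published (the arguments shown are typical; the binary path differs by distro):

$ ps -eo args | grep '[d]ocker-proxy'
/usr/bin/docker-proxy -proto tcp -host-ip 0.0.0.0 -host-port 8080 -container-ip 172.17.0.2 -container-port 80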
Default bridge vs user-defined bridges
|  | Default bridge | User-defined bridge |
|---|---|---|
| Created by | Docker on install | `docker network create <name>` |
| Name | `bridge` | whatever you choose |
| Linux interface | `docker0` | `br-<random-id>` |
| DNS by container name | No (only legacy `--link`) | Yes (embedded resolver) |
| Isolation between projects | None (every container lands here by default) | Each network is isolated |
| Auto cleanup | Persistent | Can `docker network rm` when unused |
| Recommended for | almost nothing new | everything |
If your container does not specify --network, it lands on the default bridge. Always specify a user-defined bridge for any non-trivial use.
Common mistakes
Trying to reach a container by name from the host
$ curl http://web # host trying to resolve container name
curl: (6) Could not resolve host: web

The embedded DNS only serves containers on the same bridge, not the host. From the host, use the published port (`localhost:8080`) or the container's bridge IP (`172.18.0.2`).
Two containers on different bridges, expecting them to talk
$ docker network create net-a && docker network create net-b
$ docker run -d --name api --network net-a myapp
$ docker run -d --name db --network net-b postgres:16
$ docker exec api ping db   # fails

Bridges are isolated. Either put both containers on the same network, or use `docker network connect net-b api` to attach `api` to both.
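The second fix in action (the resolved address is illustrative):

$ docker network connect net-b api
$ docker exec api ping -c1 db
PING db (172.19.0.2): 56 data bytes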
Forgetting that docker0 containers have no DNS
A fresh install plus `docker run -d --name x ...` lands `x` on `docker0`. From another container on that bridge, `ping x` fails to resolve. The modern advice: always create a user-defined network first.
iptables flush breaks Docker networking
Docker manages its own iptables rules in a DOCKER chain. Running iptables -F or some firewall managers (UFW with default rules) can wipe Docker's rules and break port publishing. Restart Docker (systemctl restart docker) to recreate them.
Inspecting and debugging
# What bridges does Docker manage?
$ docker network ls --filter driver=bridge
# What is on a specific network?
$ docker network inspect mynet
# Container's IP
$ docker inspect web --format '{{.NetworkSettings.Networks.mynet.IPAddress}}'
# Live traffic on the bridge
$ sudo tcpdump -i br-1234abcd -n
# iptables rules Docker has set up
$ sudo iptables -t nat -L DOCKER -n -v

Real-world usage
- Compose: auto-creates a user-defined bridge per project (`<projectname>_default`). Service-to-service traffic happens here, with DNS by service name.
- Single-host production: an explicit `docker network create appnet`, all containers attached, only the entry point published with `-p`.
- Multiple isolated stacks on one host: one bridge per stack. Postgres on `appnet1` cannot accidentally be reached from a container on `appnet2`.
- Reverse-proxy patterns: Traefik or nginx-proxy attaches to multiple bridges to route traffic across stacks.
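A quick way to see Compose's per-project bridge (the project name `myproj` is an assumption):

$ docker compose up -d     # run inside a project directory
$ docker network ls --filter name=myproj
NETWORK ID     NAME             DRIVER    SCOPE
a1b2c3d4e5f6   myproj_default   bridge    local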
Follow-up questions
Q: Why does my container have IP 172.17.x.x when I expected 172.18.x.x?
A: It is on the default bridge (172.17.0.0/16), not on a user-defined bridge. Specify --network <yourname> at run time.
Q: How do I find a container's IP from the host?
A: docker inspect <name> --format '{{range .NetworkSettings.Networks}}{{.IPAddress}} {{end}}'. A container can be on multiple networks; the format above prints all IPs.
Q: Why is traffic from my container slow?
A: Bridge networking has overhead vs host mode (NAT, extra hop through the bridge, possibly docker-proxy). For raw throughput, --network host is fastest. For typical web/API workloads, the overhead is negligible.
Q: Can I customize the bridge subnet?
A: Yes, on creation: docker network create --subnet 10.0.0.0/24 --gateway 10.0.0.1 mynet. Useful when the default 172.17/16 collides with VPN or office subnets.
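To verify what a network actually got, query its IPAM config (this Go-template path matches the standard `docker network inspect` output):

$ docker network inspect mynet --format '{{range .IPAM.Config}}{{.Subnet}} gw {{.Gateway}}{{end}}'
10.0.0.0/24 gw 10.0.0.1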
Q: (Senior) How would you debug a packet that arrives at the host but never reaches the container?
A: Trace the iptables path. sudo iptables -t nat -L DOCKER -n -v --line-numbers to see the DNAT rules for your published port. Then sudo iptables -L FORWARD -n -v to confirm the forward chain accepts container-bound traffic. Run sudo tcpdump -i any -n port 80 to see where the packet stops. Common culprits: a host firewall (UFW, firewalld) inserting deny rules above Docker's, or a docker-proxy issue on IPv6.
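Collected into a paste-ready sequence. Port 80 is an assumption; `DOCKER-USER` is the chain firewall managers are supposed to hook into, so conflicts often surface there:

$ sudo iptables -t nat -L DOCKER -n -v --line-numbers   # is the DNAT rule present and matching?
$ sudo iptables -L DOCKER-USER -n -v                    # anything dropping traffic early?
$ sudo iptables -L FORWARD -n -v                        # FORWARD policy and Docker chains intact?
$ sudo tcpdump -i any -n port 80                        # watch where the packet disappears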
Examples
Tracing a -p mapping end-to-end
$ docker run -d --name web -p 8080:80 nginx
# 1. iptables rule that does the DNAT
$ sudo iptables -t nat -L DOCKER -n | grep 8080
DNAT       tcp  --  0.0.0.0/0            0.0.0.0/0            tcp dpt:8080 to:172.17.0.2:80
# 2. Container's IP
$ docker inspect web --format '{{.NetworkSettings.IPAddress}}'
172.17.0.2
# 3. From host, reach it directly via bridge IP (no -p needed for this)
$ curl http://172.17.0.2/
# 4. Or via the published mapping
$ curl http://localhost:8080/

One packet, two valid paths in. The DNAT rule is the bridge between `localhost:8080` and `172.17.0.2:80`.
Two-container app on a user-defined bridge
$ docker network create appnet
$ docker run -d --name db --network appnet \
-e POSTGRES_PASSWORD=devpass postgres:16
$ docker run -d --name api --network appnet \
-e DATABASE_URL=postgres://postgres:devpass@db:5432/app \
myapp
$ docker exec api nslookup db
Server: 127.0.0.11
Name: db
Address 1: 172.18.0.2 db.appnet

`db` resolves by name only because both containers are on the user-defined `appnet`. On the default bridge, the same setup would fail.