<!--[metadata]>
+++
title = "Work with network commands"
description = "How to work with docker networks"
keywords = ["commands, Usage, network, docker, cluster"]
[menu.main]
parent = "smn_networking"
weight=-4
+++
<![end-metadata]-->

# Work with network commands

This article provides examples of the network subcommands you can use to interact with Docker networks and the containers in them. The commands are available through the Docker Engine CLI. These commands are:

* `docker network create`
* `docker network connect`
* `docker network ls`
* `docker network rm`
* `docker network disconnect`
* `docker network inspect`

While not required, it is a good idea to read [Understanding Docker
network](dockernetworks.md) before trying the examples in this section. The
examples in this article rely on a `bridge` network so that you can try them
immediately. If you would prefer to experiment with an `overlay` network, see
[Getting started with multi-host networks](get-started-overlay.md) instead.

## Create networks

Docker Engine creates a `bridge` network automatically when you install Engine.
This network corresponds to the `docker0` bridge that Engine has traditionally
relied on. In addition to this network, you can create your own `bridge` or `overlay` network.

A `bridge` network resides on a single host running an instance of Docker Engine. An `overlay` network can span multiple hosts, each running its own Engine. If you run `docker network create` and supply only a network name, it creates a bridge network for you.
```bash
$ docker network create simple-network
69568e6336d8c96bbf57869030919f7c69524f71183b44d80948bd3927c87f6a
$ docker network inspect simple-network
[
    {
        "Name": "simple-network",
        "Id": "69568e6336d8c96bbf57869030919f7c69524f71183b44d80948bd3927c87f6a",
        "Scope": "local",
        "Driver": "bridge",
        "IPAM": {
            "Driver": "default",
            "Config": [
                {
                    "Subnet": "172.22.0.0/16",
                    "Gateway": "172.22.0.1/16"
                }
            ]
        },
        "Containers": {},
        "Options": {}
    }
]
```

Unlike `bridge` networks, `overlay` networks require some pre-existing conditions
before you can create one. These conditions are:

* Access to a key-value store. Engine supports Consul, Etcd, and ZooKeeper (Distributed store) key-value stores.
* A cluster of hosts with connectivity to the key-value store.
* A properly configured Engine `daemon` on each host in the swarm.

The `docker daemon` options that support the `overlay` network are:

* `--cluster-store`
* `--cluster-store-opt`
* `--cluster-advertise`

It is also a good idea, though not required, to install Docker Swarm
to manage the cluster. Swarm provides sophisticated discovery and server
management that can assist your implementation.

When you create a network, Engine creates a non-overlapping subnetwork for the
network by default. You can override this default and specify a subnetwork
directly using the `--subnet` option. On a `bridge` network you can only
specify a single subnet. An `overlay` network supports multiple subnets.

> **Note**: It is highly recommended to use the `--subnet` option when creating
> a network. If `--subnet` is not specified, the docker daemon automatically
> chooses and assigns a subnet for the network, and it could overlap with another subnet
> in your infrastructure that is not managed by docker.
> Such overlaps can cause
> connectivity issues or failures when containers are connected to that network.

In addition to the `--subnet` option, you can also specify the `--gateway`, `--ip-range`, and `--aux-address` options.

```bash
$ docker network create -d overlay \
  --subnet=192.168.0.0/16 --subnet=192.170.0.0/16 \
  --gateway=192.168.0.100 --gateway=192.170.0.100 \
  --ip-range=192.168.1.0/24 \
  --aux-address a=192.168.1.5 --aux-address b=192.168.1.6 \
  --aux-address a=192.170.1.5 --aux-address b=192.170.1.6 \
  my-multihost-network
```

Be sure that your subnetworks do not overlap. If they do, the network creation fails and Engine returns an error.

When creating a custom network, the default network driver (i.e. `bridge`) has additional options that can be passed.
The following are those options and the equivalent docker daemon flags used for the docker0 bridge:

| Option                                           | Equivalent  | Description                                           |
|--------------------------------------------------|-------------|-------------------------------------------------------|
| `com.docker.network.bridge.name`                 | -           | bridge name to be used when creating the Linux bridge |
| `com.docker.network.bridge.enable_ip_masquerade` | `--ip-masq` | Enable IP masquerading                                |
| `com.docker.network.bridge.enable_icc`           | `--icc`     | Enable or Disable Inter Container Connectivity        |
| `com.docker.network.bridge.host_binding_ipv4`    | `--ip`      | Default IP when binding container ports               |
| `com.docker.network.mtu`                         | `--mtu`     | Set the containers network MTU                        |

The following arguments can be passed to `docker network create` for any network driver.
| Argument     | Equivalent | Description                              |
|--------------|------------|------------------------------------------|
| `--internal` | -          | Restricts external access to the network |
| `--ipv6`     | `--ipv6`   | Enable IPv6 networking                   |

For example, let's use the `-o` or `--opt` option to specify an IP address binding when publishing ports:

```bash
$ docker network create -o "com.docker.network.bridge.host_binding_ipv4"="172.23.0.1" my-network
b1a086897963e6a2e7fc6868962e55e746bee8ad0c97b54a5831054b5f62672a
$ docker network inspect my-network
[
    {
        "Name": "my-network",
        "Id": "b1a086897963e6a2e7fc6868962e55e746bee8ad0c97b54a5831054b5f62672a",
        "Scope": "local",
        "Driver": "bridge",
        "IPAM": {
            "Driver": "default",
            "Options": {},
            "Config": [
                {
                    "Subnet": "172.23.0.0/16",
                    "Gateway": "172.23.0.1/16"
                }
            ]
        },
        "Containers": {},
        "Options": {
            "com.docker.network.bridge.host_binding_ipv4": "172.23.0.1"
        }
    }
]
$ docker run -d -P --name redis --net my-network redis
bafb0c808c53104b2c90346f284bda33a69beadcab4fc83ab8f2c5a4410cd129
$ docker ps
CONTAINER ID        IMAGE               COMMAND                  CREATED             STATUS              PORTS                        NAMES
bafb0c808c53        redis               "/entrypoint.sh redis"   4 seconds ago       Up 3 seconds        172.23.0.1:32770->6379/tcp   redis
```

## Connect containers

You can connect containers dynamically to one or more networks. These networks
can be backed by the same or different network drivers. Once connected, the
containers can communicate using another container's IP address or name.

For `overlay` networks or custom plugins that support multi-host
connectivity, containers connected to the same multi-host network but launched
from different hosts can also communicate in this way.
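The examples that follow create networks with explicit `--subnet` ranges. As noted earlier, those ranges must not overlap with each other or with subnets already in use in your infrastructure. A minimal way to sanity-check candidate ranges before running `docker network create` is Python's stdlib `ipaddress` module; the names and CIDRs below are only illustrations (`docker0`'s default range can vary):

```python
import ipaddress

# Candidate subnets: Docker's usual docker0 default plus the ranges
# used in this walkthrough (illustrative values, not queried from Docker).
subnets = {
    "docker0":     "172.17.0.0/16",
    "isolated_nw": "172.25.0.0/16",
    "local_alias": "172.26.0.0/24",
}

def overlapping_pairs(cidrs):
    """Return every pair of named subnets whose address ranges overlap."""
    nets = {name: ipaddress.ip_network(cidr) for name, cidr in cidrs.items()}
    names = sorted(nets)
    return [(a, b)
            for i, a in enumerate(names)
            for b in names[i + 1:]
            if nets[a].overlaps(nets[b])]

print(overlapping_pairs(subnets))  # [] means the ranges are safe to use together
```

An empty list means the ranges can coexist; any pair reported here would make `docker network create` fail, or worse, silently shadow traffic to an existing subnet.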
Create two containers for this example:

```bash
$ docker run -itd --name=container1 busybox
18c062ef45ac0c026ee48a83afa39d25635ee5f02b58de4abc8f467bcaa28731

$ docker run -itd --name=container2 busybox
498eaaaf328e1018042c04b2de04036fc04719a6e39a097a4f4866043a2c2152
```

Then create an isolated `bridge` network to test with.

```bash
$ docker network create -d bridge --subnet 172.25.0.0/16 isolated_nw
06a62f1c73c4e3107c0f555b7a5f163309827bfbbf999840166065a8f35455a8
```

Connect `container2` to the network and then `inspect` the network to verify the connection:

```
$ docker network connect isolated_nw container2
$ docker network inspect isolated_nw
[
    {
        "Name": "isolated_nw",
        "Id": "06a62f1c73c4e3107c0f555b7a5f163309827bfbbf999840166065a8f35455a8",
        "Scope": "local",
        "Driver": "bridge",
        "IPAM": {
            "Driver": "default",
            "Config": [
                {
                    "Subnet": "172.25.0.0/16",
                    "Gateway": "172.25.0.1/16"
                }
            ]
        },
        "Containers": {
            "90e1f3ec71caf82ae776a827e0712a68a110a3f175954e5bd4222fd142ac9428": {
                "Name": "container2",
                "EndpointID": "11cedac1810e864d6b1589d92da12af66203879ab89f4ccd8c8fdaa9b1c48b1d",
                "MacAddress": "02:42:ac:19:00:02",
                "IPv4Address": "172.25.0.2/16",
                "IPv6Address": ""
            }
        },
        "Options": {}
    }
]
```

You can see that Engine automatically assigns an IP address to `container2`.
Because you specified a `--subnet` when creating the network, Engine picked
an address from that subnet. Now, start a third container and connect it to
the network on launch using the `docker run` command's `--net` option:

```bash
$ docker run --net=isolated_nw --ip=172.25.3.3 -itd --name=container3 busybox
467a7863c3f0277ef8e661b38427737f28099b61fa55622d6c30fb288d88c551
```

As you can see, you were able to specify the IP address for your container.
As long as the network to which the container is connecting was created with
a user-specified subnet, you can select the IPv4 and/or IPv6 address(es)
for your container when executing the `docker run` and `docker network connect` commands.
The selected IP address is part of the container's networking configuration and is
preserved across container reload. The feature is only available on user-defined networks,
because they guarantee that their subnet configuration does not change across daemon reload.

Now, inspect the network resources used by `container3`.

```bash
$ docker inspect --format='{{json .NetworkSettings.Networks}}' container3
{"isolated_nw":{"IPAMConfig":{"IPv4Address":"172.25.3.3"},"NetworkID":"1196a4c5af43a21ae38ef34515b6af19236a3fc48122cf585e3f3054d509679b",
"EndpointID":"dffc7ec2915af58cc827d995e6ebdc897342be0420123277103c40ae35579103","Gateway":"172.25.0.1","IPAddress":"172.25.3.3","IPPrefixLen":16,"IPv6Gateway":"","GlobalIPv6Address":"","GlobalIPv6PrefixLen":0,"MacAddress":"02:42:ac:19:03:03"}}
```

Repeat this command for `container2`. If you have Python installed, you can pretty-print the output.
```bash
$ docker inspect --format='{{json .NetworkSettings.Networks}}' container2 | python -m json.tool
{
    "bridge": {
        "NetworkID":"7ea29fc1412292a2d7bba362f9253545fecdfa8ce9a6e37dd10ba8bee7129812",
        "EndpointID": "0099f9efb5a3727f6a554f176b1e96fca34cae773da68b3b6a26d046c12cb365",
        "Gateway": "172.17.0.1",
        "GlobalIPv6Address": "",
        "GlobalIPv6PrefixLen": 0,
        "IPAMConfig": null,
        "IPAddress": "172.17.0.3",
        "IPPrefixLen": 16,
        "IPv6Gateway": "",
        "MacAddress": "02:42:ac:11:00:03"
    },
    "isolated_nw": {
        "NetworkID":"1196a4c5af43a21ae38ef34515b6af19236a3fc48122cf585e3f3054d509679b",
        "EndpointID": "11cedac1810e864d6b1589d92da12af66203879ab89f4ccd8c8fdaa9b1c48b1d",
        "Gateway": "172.25.0.1",
        "GlobalIPv6Address": "",
        "GlobalIPv6PrefixLen": 0,
        "IPAMConfig": null,
        "IPAddress": "172.25.0.2",
        "IPPrefixLen": 16,
        "IPv6Gateway": "",
        "MacAddress": "02:42:ac:19:00:02"
    }
}
```

You should find that `container2` belongs to two networks: the `bridge` network,
which it joined by default when you launched it, and the `isolated_nw`, which you
later connected it to.

In the case of `container3`, you connected it through `docker run` to the
`isolated_nw`, so that container is not connected to `bridge`.

Use the `docker attach` command to connect to the running `container2` and
examine its networking stack:

```bash
$ docker attach container2
```

If you look at the container's network stack you should see two Ethernet interfaces, one for the default bridge network and one for the `isolated_nw` network.
```bash
/ # ifconfig
eth0      Link encap:Ethernet  HWaddr 02:42:AC:11:00:03
          inet addr:172.17.0.3  Bcast:0.0.0.0  Mask:255.255.0.0
          inet6 addr: fe80::42:acff:fe11:3/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:9001  Metric:1
          RX packets:8 errors:0 dropped:0 overruns:0 frame:0
          TX packets:8 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:648 (648.0 B)  TX bytes:648 (648.0 B)

eth1      Link encap:Ethernet  HWaddr 02:42:AC:19:00:02
          inet addr:172.25.0.2  Bcast:0.0.0.0  Mask:255.255.0.0
          inet6 addr: fe80::42:acff:fe19:2/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:8 errors:0 dropped:0 overruns:0 frame:0
          TX packets:8 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:648 (648.0 B)  TX bytes:648 (648.0 B)

lo        Link encap:Local Loopback
          inet addr:127.0.0.1  Mask:255.0.0.0
          inet6 addr: ::1/128 Scope:Host
          UP LOOPBACK RUNNING  MTU:65536  Metric:1
          RX packets:0 errors:0 dropped:0 overruns:0 frame:0
          TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:0 (0.0 B)  TX bytes:0 (0.0 B)
```

On the user-defined `isolated_nw`, the Docker embedded DNS server enables name resolution for the other containers in the network. Inside of `container2` it is possible to ping `container3` by name.

```bash
/ # ping -w 4 container3
PING container3 (172.25.3.3): 56 data bytes
64 bytes from 172.25.3.3: seq=0 ttl=64 time=0.070 ms
64 bytes from 172.25.3.3: seq=1 ttl=64 time=0.080 ms
64 bytes from 172.25.3.3: seq=2 ttl=64 time=0.080 ms
64 bytes from 172.25.3.3: seq=3 ttl=64 time=0.097 ms

--- container3 ping statistics ---
4 packets transmitted, 4 packets received, 0% packet loss
round-trip min/avg/max = 0.070/0.081/0.097 ms
```

This isn't the case for the default `bridge` network.
Both `container1` and `container2` are connected to the default bridge network. Docker does not support automatic service discovery on this network. For this reason, pinging `container1` by name fails as you would expect based on the `/etc/hosts` file:

```bash
/ # ping -w 4 container1
ping: bad address 'container1'
```

A ping using the `container1` IP address does succeed though:

```bash
/ # ping -w 4 172.17.0.2
PING 172.17.0.2 (172.17.0.2): 56 data bytes
64 bytes from 172.17.0.2: seq=0 ttl=64 time=0.095 ms
64 bytes from 172.17.0.2: seq=1 ttl=64 time=0.075 ms
64 bytes from 172.17.0.2: seq=2 ttl=64 time=0.072 ms
64 bytes from 172.17.0.2: seq=3 ttl=64 time=0.101 ms

--- 172.17.0.2 ping statistics ---
4 packets transmitted, 4 packets received, 0% packet loss
round-trip min/avg/max = 0.072/0.085/0.101 ms
```

If you wanted, you could connect `container1` to `container2` with the `docker
run --link` command, and that would enable the two containers to interact by name
as well as IP.

Detach from `container2` and leave it running using `CTRL-p CTRL-q`.

In this example, `container2` is attached to both networks and so can talk to
`container1` and `container3`. But `container3` and `container1` are not on the
same network and cannot communicate. Test this now by attaching to
`container3` and attempting to ping `container1` by IP address.

```bash
$ docker attach container3
/ # ping 172.17.0.2
PING 172.17.0.2 (172.17.0.2): 56 data bytes
^C
--- 172.17.0.2 ping statistics ---
10 packets transmitted, 0 packets received, 100% packet loss

```

You can connect both running and non-running containers to a network. However,
`docker network inspect` only displays information on running containers.
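The `docker inspect --format='{{json .NetworkSettings.Networks}}'` output used throughout this section is plain JSON, so a few lines of Python can post-process it rather than just pretty-printing with `json.tool`. A minimal sketch; the `summarize` helper is hypothetical, and the sample is abridged from the `container2` output shown earlier:

```python
import json

# Abridged from:
#   docker inspect --format='{{json .NetworkSettings.Networks}}' container2
sample = '''{
  "bridge":      {"IPAddress": "172.17.0.3", "IPPrefixLen": 16, "Gateway": "172.17.0.1"},
  "isolated_nw": {"IPAddress": "172.25.0.2", "IPPrefixLen": 16, "Gateway": "172.25.0.1"}
}'''

def summarize(networks_json):
    """Map each network name to the container's CIDR address on that network."""
    networks = json.loads(networks_json)
    return {name: "{}/{}".format(net["IPAddress"], net["IPPrefixLen"])
            for name, net in networks.items()}

print(summarize(sample))
# {'bridge': '172.17.0.3/16', 'isolated_nw': '172.25.0.2/16'}
```

In practice you would pipe the real `docker inspect` output into such a script; the two-network result mirrors what the walkthrough showed for `container2`.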
### Linking containers in user-defined networks

In the above example, `container2` was able to resolve `container3`'s name automatically
in the user-defined network `isolated_nw`, but name resolution did not succeed
automatically in the default `bridge` network. This is expected in order to maintain
backward compatibility with [legacy links](default_network/dockerlinks.md).

A `legacy link` provided four major functionalities to the default `bridge` network:

* name resolution
* a name alias for the linked container, using `--link=CONTAINER-NAME:ALIAS`
* secured container connectivity (in isolation via `--icc=false`)
* environment variable injection

Comparing these four functionalities with non-default user-defined networks such as
`isolated_nw` in this example, without any additional configuration, `docker network` provides:

* automatic name resolution using DNS
* an automatically secured, isolated environment for the containers in a network
* the ability to dynamically attach to and detach from multiple networks
* support for the `--link` option, to provide a name alias for the linked container

Continuing with the above example, create another container, `container4`, in `isolated_nw`
with `--link` to provide additional name resolution, using an alias, for another container in
the same network.

```bash
$ docker run --net=isolated_nw -itd --name=container4 --link container5:c5 busybox
01b5df970834b77a9eadbaff39051f237957bd35c4c56f11193e0594cfd5117c
```

With the help of `--link`, `container4` will be able to reach `container5` using the
aliased name `c5` as well.

Please note that while creating `container4`, we linked to a container named `container5`
which has not been created yet. That is one of the differences in behavior between the
*legacy link* in the default `bridge` network and the new *link* functionality in user-defined
networks.
The *legacy link* is static in nature: it hard-binds the container with the
alias and does not tolerate linked container restarts. The new *link* functionality
in user-defined networks is dynamic in nature and supports linked container restarts,
including tolerating IP address changes on the linked container.

Now let us launch another container named `container5`, linking `container4` with the alias `c4`.

```bash
$ docker run --net=isolated_nw -itd --name=container5 --link container4:c4 busybox
72eccf2208336f31e9e33ba327734125af00d1e1d2657878e2ee8154fbb23c7a
```

As expected, `container4` will be able to reach `container5` by both its container name and
its alias `c5`, and `container5` will be able to reach `container4` by its container name and
its alias `c4`.

```bash
$ docker attach container4
/ # ping -w 4 c5
PING c5 (172.25.0.5): 56 data bytes
64 bytes from 172.25.0.5: seq=0 ttl=64 time=0.070 ms
64 bytes from 172.25.0.5: seq=1 ttl=64 time=0.080 ms
64 bytes from 172.25.0.5: seq=2 ttl=64 time=0.080 ms
64 bytes from 172.25.0.5: seq=3 ttl=64 time=0.097 ms

--- c5 ping statistics ---
4 packets transmitted, 4 packets received, 0% packet loss
round-trip min/avg/max = 0.070/0.081/0.097 ms

/ # ping -w 4 container5
PING container5 (172.25.0.5): 56 data bytes
64 bytes from 172.25.0.5: seq=0 ttl=64 time=0.070 ms
64 bytes from 172.25.0.5: seq=1 ttl=64 time=0.080 ms
64 bytes from 172.25.0.5: seq=2 ttl=64 time=0.080 ms
64 bytes from 172.25.0.5: seq=3 ttl=64 time=0.097 ms

--- container5 ping statistics ---
4 packets transmitted, 4 packets received, 0% packet loss
round-trip min/avg/max = 0.070/0.081/0.097 ms
```

```bash
$ docker attach container5
/ # ping -w 4 c4
PING c4 (172.25.0.4): 56 data bytes
64 bytes from 172.25.0.4: seq=0 ttl=64 time=0.065 ms
64 bytes from 172.25.0.4: seq=1 ttl=64 time=0.070 ms
64 bytes from 172.25.0.4: seq=2 ttl=64 time=0.067 ms
64 bytes from 172.25.0.4: seq=3 ttl=64 time=0.082 ms

--- c4 ping statistics ---
4 packets transmitted, 4 packets received, 0% packet loss
round-trip min/avg/max = 0.065/0.070/0.082 ms

/ # ping -w 4 container4
PING container4 (172.25.0.4): 56 data bytes
64 bytes from 172.25.0.4: seq=0 ttl=64 time=0.065 ms
64 bytes from 172.25.0.4: seq=1 ttl=64 time=0.070 ms
64 bytes from 172.25.0.4: seq=2 ttl=64 time=0.067 ms
64 bytes from 172.25.0.4: seq=3 ttl=64 time=0.082 ms

--- container4 ping statistics ---
4 packets transmitted, 4 packets received, 0% packet loss
round-trip min/avg/max = 0.065/0.070/0.082 ms
```

Similar to the legacy link functionality, the new link alias is localized to a container,
and the aliased name has no meaning outside of the container using the `--link`.

Also, it is important to note that if a container belongs to multiple networks, the
linked alias is scoped within a given network. Hence the containers can be linked to
different aliases in different networks.
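The scoping rule above can be pictured as each network keeping its own private name table for the linking container. The following toy model only illustrates that lookup behavior, it is not Docker's implementation, and every name in it is hypothetical:

```python
# Toy model of per-network link aliases (illustration only, NOT Docker's
# implementation): the same peer container can carry a different alias
# in each network the linking container is attached to.
link_aliases = {
    "net_a": {"web": "container_x"},        # hypothetical networks, aliases,
    "net_b": {"frontend": "container_x"},   # and container names
}

def resolve(network, alias):
    """Look up an alias inside one network's scope; None if not defined there."""
    return link_aliases.get(network, {}).get(alias)

print(resolve("net_a", "web"))  # container_x
print(resolve("net_b", "web"))  # None: the alias is scoped to net_a only
```

The next steps in the walkthrough demonstrate exactly this with real containers on a second network.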
Extending the example, let us create another network named `local_alias`:

```bash
$ docker network create -d bridge --subnet 172.26.0.0/24 local_alias
76b7dc932e037589e6553f59f76008e5b76fa069638cd39776b890607f567aaa
```

Then connect `container4` and `container5` to the new network `local_alias`:

```
$ docker network connect --link container5:foo local_alias container4
$ docker network connect --link container4:bar local_alias container5
```

```bash
$ docker attach container4

/ # ping -w 4 foo
PING foo (172.26.0.3): 56 data bytes
64 bytes from 172.26.0.3: seq=0 ttl=64 time=0.070 ms
64 bytes from 172.26.0.3: seq=1 ttl=64 time=0.080 ms
64 bytes from 172.26.0.3: seq=2 ttl=64 time=0.080 ms
64 bytes from 172.26.0.3: seq=3 ttl=64 time=0.097 ms

--- foo ping statistics ---
4 packets transmitted, 4 packets received, 0% packet loss
round-trip min/avg/max = 0.070/0.081/0.097 ms

/ # ping -w 4 c5
PING c5 (172.25.0.5): 56 data bytes
64 bytes from 172.25.0.5: seq=0 ttl=64 time=0.070 ms
64 bytes from 172.25.0.5: seq=1 ttl=64 time=0.080 ms
64 bytes from 172.25.0.5: seq=2 ttl=64 time=0.080 ms
64 bytes from 172.25.0.5: seq=3 ttl=64 time=0.097 ms

--- c5 ping statistics ---
4 packets transmitted, 4 packets received, 0% packet loss
round-trip min/avg/max = 0.070/0.081/0.097 ms
```

Note that the ping succeeds for both aliases, but on different networks.
Let us conclude this section by disconnecting `container5` from the `isolated_nw`
and observing the results.

```
$ docker network disconnect isolated_nw container5

$ docker attach container4

/ # ping -w 4 c5
ping: bad address 'c5'

/ # ping -w 4 foo
PING foo (172.26.0.3): 56 data bytes
64 bytes from 172.26.0.3: seq=0 ttl=64 time=0.070 ms
64 bytes from 172.26.0.3: seq=1 ttl=64 time=0.080 ms
64 bytes from 172.26.0.3: seq=2 ttl=64 time=0.080 ms
64 bytes from 172.26.0.3: seq=3 ttl=64 time=0.097 ms

--- foo ping statistics ---
4 packets transmitted, 4 packets received, 0% packet loss
round-trip min/avg/max = 0.070/0.081/0.097 ms

```

In conclusion, the new link functionality in user-defined networks provides all the
benefits of legacy links while avoiding most of the well-known issues with *legacy links*.

One notable missing functionality compared to *legacy links* is the injection of
environment variables. Though very useful, environment variable injection is static
in nature and must be done when the container is started. One cannot inject
environment variables into a running container without significant effort, so it
is not compatible with `docker network`, which provides a dynamic way to connect
containers to, and disconnect them from, a network.

### Network-scoped alias

While *links* provide private name resolution that is localized within a container,
a network-scoped alias provides a way for a container to be discovered by an
alternate name by any other container within the scope of a particular network.
Unlike the *link* alias, which is defined by the consumer of a service, the
network-scoped alias is defined by the container that is offering the service
to the network.

Continuing with the above example, create another container in `isolated_nw` with a
network alias.
```bash
$ docker run --net=isolated_nw -itd --name=container6 --net-alias app busybox
8ebe6767c1e0361f27433090060b33200aac054a68476c3be87ef4005eb1df17
```

```bash
$ docker attach container4
/ # ping -w 4 app
PING app (172.25.0.6): 56 data bytes
64 bytes from 172.25.0.6: seq=0 ttl=64 time=0.070 ms
64 bytes from 172.25.0.6: seq=1 ttl=64 time=0.080 ms
64 bytes from 172.25.0.6: seq=2 ttl=64 time=0.080 ms
64 bytes from 172.25.0.6: seq=3 ttl=64 time=0.097 ms

--- app ping statistics ---
4 packets transmitted, 4 packets received, 0% packet loss
round-trip min/avg/max = 0.070/0.081/0.097 ms

/ # ping -w 4 container6
PING container6 (172.25.0.6): 56 data bytes
64 bytes from 172.25.0.6: seq=0 ttl=64 time=0.070 ms
64 bytes from 172.25.0.6: seq=1 ttl=64 time=0.080 ms
64 bytes from 172.25.0.6: seq=2 ttl=64 time=0.080 ms
64 bytes from 172.25.0.6: seq=3 ttl=64 time=0.097 ms

--- container6 ping statistics ---
4 packets transmitted, 4 packets received, 0% packet loss
round-trip min/avg/max = 0.070/0.081/0.097 ms
```

Now let us connect `container6` to the `local_alias` network with a different network-scoped
alias.

```
$ docker network connect --alias scoped-app local_alias container6
```

`container6` in this example is now aliased as `app` in network `isolated_nw` and
as `scoped-app` in network `local_alias`.

Let's try to reach these aliases from `container4` (which is connected to both these networks)
and `container5` (which is connected only to `isolated_nw`).
```bash
$ docker attach container4

/ # ping -w 4 scoped-app
PING scoped-app (172.26.0.5): 56 data bytes
64 bytes from 172.26.0.5: seq=0 ttl=64 time=0.070 ms
64 bytes from 172.26.0.5: seq=1 ttl=64 time=0.080 ms
64 bytes from 172.26.0.5: seq=2 ttl=64 time=0.080 ms
64 bytes from 172.26.0.5: seq=3 ttl=64 time=0.097 ms

--- scoped-app ping statistics ---
4 packets transmitted, 4 packets received, 0% packet loss
round-trip min/avg/max = 0.070/0.081/0.097 ms

$ docker attach container5

/ # ping -w 4 scoped-app
ping: bad address 'scoped-app'

```

As you can see, the alias is scoped to the network it is defined on, and hence only
those containers that are connected to that network can access the alias.

In addition to the above features, multiple containers can share the same network-scoped
alias within the same network. For example, let's launch `container7` in `isolated_nw` with
the same alias as `container6`:

```bash
$ docker run --net=isolated_nw -itd --name=container7 --net-alias app busybox
3138c678c123b8799f4c7cc6a0cecc595acbdfa8bf81f621834103cd4f504554
```

When multiple containers share the same alias, name resolution of that alias resolves
to one of the containers (typically the first container that is aliased). When the container
that backs the alias goes down or is disconnected from the network, the next container that
backs the alias will be resolved.

Let us ping the alias `app` from `container4` and bring down `container6` to verify that
`container7` is resolving the `app` alias.
```bash
$ docker attach container4
/ # ping -w 4 app
PING app (172.25.0.6): 56 data bytes
64 bytes from 172.25.0.6: seq=0 ttl=64 time=0.070 ms
64 bytes from 172.25.0.6: seq=1 ttl=64 time=0.080 ms
64 bytes from 172.25.0.6: seq=2 ttl=64 time=0.080 ms
64 bytes from 172.25.0.6: seq=3 ttl=64 time=0.097 ms

--- app ping statistics ---
4 packets transmitted, 4 packets received, 0% packet loss
round-trip min/avg/max = 0.070/0.081/0.097 ms

$ docker stop container6

$ docker attach container4
/ # ping -w 4 app
PING app (172.25.0.7): 56 data bytes
64 bytes from 172.25.0.7: seq=0 ttl=64 time=0.095 ms
64 bytes from 172.25.0.7: seq=1 ttl=64 time=0.075 ms
64 bytes from 172.25.0.7: seq=2 ttl=64 time=0.072 ms
64 bytes from 172.25.0.7: seq=3 ttl=64 time=0.101 ms

--- app ping statistics ---
4 packets transmitted, 4 packets received, 0% packet loss
round-trip min/avg/max = 0.072/0.085/0.101 ms

```

## Disconnecting containers

You can disconnect a container from a network using the `docker network
disconnect` command.
```
$ docker network disconnect isolated_nw container2

$ docker inspect --format='{{json .NetworkSettings.Networks}}' container2 | python -m json.tool
{
    "bridge": {
        "NetworkID":"7ea29fc1412292a2d7bba362f9253545fecdfa8ce9a6e37dd10ba8bee7129812",
        "EndpointID": "9e4575f7f61c0f9d69317b7a4b92eefc133347836dd83ef65deffa16b9985dc0",
        "Gateway": "172.17.0.1",
        "GlobalIPv6Address": "",
        "GlobalIPv6PrefixLen": 0,
        "IPAddress": "172.17.0.3",
        "IPPrefixLen": 16,
        "IPv6Gateway": "",
        "MacAddress": "02:42:ac:11:00:03"
    }
}


$ docker network inspect isolated_nw
[
    {
        "Name": "isolated_nw",
        "Id": "06a62f1c73c4e3107c0f555b7a5f163309827bfbbf999840166065a8f35455a8",
        "Scope": "local",
        "Driver": "bridge",
        "IPAM": {
            "Driver": "default",
            "Config": [
                {
                    "Subnet": "172.25.0.0/16",
                    "Gateway": "172.25.0.1/16"
                }
            ]
        },
        "Containers": {
            "467a7863c3f0277ef8e661b38427737f28099b61fa55622d6c30fb288d88c551": {
                "Name": "container3",
                "EndpointID": "dffc7ec2915af58cc827d995e6ebdc897342be0420123277103c40ae35579103",
                "MacAddress": "02:42:ac:19:03:03",
                "IPv4Address": "172.25.3.3/16",
                "IPv6Address": ""
            }
        },
        "Options": {}
    }
]
```

Once a container is disconnected from a network, it can no longer communicate with
other containers connected to that network. In this example, `container2` can no longer talk to `container3` on the `isolated_nw` network.
```
$ docker attach container2

/ # ifconfig
eth0      Link encap:Ethernet  HWaddr 02:42:AC:11:00:03
          inet addr:172.17.0.3  Bcast:0.0.0.0  Mask:255.255.0.0
          inet6 addr: fe80::42:acff:fe11:3/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:9001  Metric:1
          RX packets:8 errors:0 dropped:0 overruns:0 frame:0
          TX packets:8 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:648 (648.0 B)  TX bytes:648 (648.0 B)

lo        Link encap:Local Loopback
          inet addr:127.0.0.1  Mask:255.0.0.0
          inet6 addr: ::1/128 Scope:Host
          UP LOOPBACK RUNNING  MTU:65536  Metric:1
          RX packets:0 errors:0 dropped:0 overruns:0 frame:0
          TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:0 (0.0 B)  TX bytes:0 (0.0 B)

/ # ping container3
PING container3 (172.25.3.3): 56 data bytes
^C
--- container3 ping statistics ---
2 packets transmitted, 0 packets received, 100% packet loss
```

`container2` still has full connectivity to the bridge network:

```bash
/ # ping container1
PING container1 (172.17.0.2): 56 data bytes
64 bytes from 172.17.0.2: seq=0 ttl=64 time=0.119 ms
64 bytes from 172.17.0.2: seq=1 ttl=64 time=0.174 ms
^C
--- container1 ping statistics ---
2 packets transmitted, 2 packets received, 0% packet loss
round-trip min/avg/max = 0.119/0.146/0.174 ms
/ #
```

There are certain scenarios, such as ungraceful docker daemon restarts in a multi-host network,
where the daemon is unable to clean up stale connectivity endpoints. Such stale endpoints
may cause an error, `container already connected to network`, when a new container with
the same name as the stale endpoint is connected to that network. In order to clean up
these stale endpoints, first remove the container and force-disconnect
(`docker network disconnect -f`) the endpoint from the network.
Once the endpoint is
cleaned up, the container can be connected to the network.

```
$ docker run -d --name redis_db --net multihost redis
ERROR: Cannot start container bc0b19c089978f7845633027aa3435624ca3d12dd4f4f764b61eac4c0610f32e: container already connected to network multihost

$ docker rm -f redis_db
$ docker network disconnect -f multihost redis_db

$ docker run -d --name redis_db --net multihost redis
7d986da974aeea5e9f7aca7e510bdb216d58682faa83a9040c2f2adc0544795a
```

## Remove a network

When all the containers in a network are stopped or disconnected, you can remove the network.

```bash
$ docker network disconnect isolated_nw container3
```

```bash
$ docker network inspect isolated_nw
[
    {
        "Name": "isolated_nw",
        "Id": "06a62f1c73c4e3107c0f555b7a5f163309827bfbbf999840166065a8f35455a8",
        "Scope": "local",
        "Driver": "bridge",
        "IPAM": {
            "Driver": "default",
            "Config": [
                {
                    "Subnet": "172.25.0.0/16",
                    "Gateway": "172.25.0.1/16"
                }
            ]
        },
        "Containers": {},
        "Options": {}
    }
]

$ docker network rm isolated_nw
```

List all your networks to verify that `isolated_nw` was removed:

```
$ docker network ls
NETWORK ID          NAME                DRIVER
72314fa53006        host                host
f7ab26d71dbd        bridge              bridge
0f32e83e61ac        none                null
```

## Related information

* [network create](../../reference/commandline/network_create.md)
* [network inspect](../../reference/commandline/network_inspect.md)
* [network connect](../../reference/commandline/network_connect.md)
* [network disconnect](../../reference/commandline/network_disconnect.md)
* [network ls](../../reference/commandline/network_ls.md)
* [network rm](../../reference/commandline/network_rm.md)