<!--[metadata]>
+++
title = "Work with network commands"
description = "How to work with docker networks"
keywords = ["commands, Usage, network, docker, cluster"]
[menu.main]
parent = "smn_networking"
weight=-4
+++
<![end-metadata]-->

# Work with network commands

This article provides examples of the network subcommands you can use to
interact with Docker networks and the containers in them. The commands are
available through the Docker Engine CLI. These commands are:

* `docker network create`
* `docker network connect`
* `docker network ls`
* `docker network rm`
* `docker network disconnect`
* `docker network inspect`

While not required, it is a good idea to read [Understanding Docker
network](index.md) before trying the examples in this section. The examples in
this section rely on a `bridge` network so that you can try them immediately.
If you would prefer to experiment with an `overlay` network, see
[Getting started with multi-host networks](get-started-overlay.md) instead.

## Create networks

Docker Engine creates a `bridge` network automatically when you install Engine.
This network corresponds to the `docker0` bridge that Engine has traditionally
relied on. In addition to this network, you can create your own `bridge` or
`overlay` network.

A `bridge` network resides on a single host running an instance of Docker
Engine. An `overlay` network can span multiple hosts running their own engines.
If you run `docker network create` and supply only a network name, it creates a
`bridge` network for you.

```bash
$ docker network create simple-network

69568e6336d8c96bbf57869030919f7c69524f71183b44d80948bd3927c87f6a

$ docker network inspect simple-network
[
    {
        "Name": "simple-network",
        "Id": "69568e6336d8c96bbf57869030919f7c69524f71183b44d80948bd3927c87f6a",
        "Scope": "local",
        "Driver": "bridge",
        "IPAM": {
            "Driver": "default",
            "Config": [
                {
                    "Subnet": "172.22.0.0/16",
                    "Gateway": "172.22.0.1"
                }
            ]
        },
        "Containers": {},
        "Options": {},
        "Labels": {}
    }
]
```

Unlike `bridge` networks, `overlay` networks require some pre-existing
conditions before you can create one. These conditions are:

* Access to a key-value store. Engine supports the Consul, Etcd, and ZooKeeper distributed key-value stores.
* A cluster of hosts with connectivity to the key-value store.
* A properly configured Engine `daemon` on each host in the cluster.

The `dockerd` options that support the `overlay` network are:

* `--cluster-store`
* `--cluster-store-opt`
* `--cluster-advertise`

It is also a good idea, though not required, to install Docker Swarm to manage
the cluster. Swarm provides sophisticated discovery and server management that
can assist your implementation.

When you create a network, Engine creates a non-overlapping subnetwork for the
network by default. You can override this default and specify a subnetwork
directly using the `--subnet` option. On a `bridge` network you can only
specify a single subnet. An `overlay` network supports multiple subnets.

> **Note**: It is highly recommended to use the `--subnet` option when creating
> a network. If `--subnet` is not specified, the Docker daemon automatically
> chooses and assigns a subnet for the network, and it could overlap with
> another subnet in your infrastructure that is not managed by Docker.
> Such overlaps can cause connectivity issues or failures when containers are
> connected to that network.

In addition to the `--subnet` option, you can also specify the `--gateway`,
`--ip-range`, and `--aux-address` options.

```bash
$ docker network create -d overlay \
  --subnet=192.168.0.0/16 \
  --subnet=192.170.0.0/16 \
  --gateway=192.168.0.100 \
  --gateway=192.170.0.100 \
  --ip-range=192.168.1.0/24 \
  --aux-address="my-router=192.168.1.5" --aux-address="my-switch=192.168.1.6" \
  --aux-address="my-printer=192.170.1.5" --aux-address="my-nas=192.170.1.6" \
  my-multihost-network
```

Be sure that your subnetworks do not overlap. If they do, the network creation
fails and Engine returns an error.

When creating a custom network, the default network driver (i.e. `bridge`) has
additional options that can be passed. The following are those options and the
equivalent `dockerd` flags used for the `docker0` bridge:

| Option                                           | Equivalent  | Description                                           |
|--------------------------------------------------|-------------|-------------------------------------------------------|
| `com.docker.network.bridge.name`                 | -           | Bridge name to be used when creating the Linux bridge |
| `com.docker.network.bridge.enable_ip_masquerade` | `--ip-masq` | Enable IP masquerading                                |
| `com.docker.network.bridge.enable_icc`           | `--icc`     | Enable or disable inter-container connectivity        |
| `com.docker.network.bridge.host_binding_ipv4`    | `--ip`      | Default IP when binding container ports               |
| `com.docker.network.driver.mtu`                  | `--mtu`     | Set the containers' network MTU                       |

The following arguments can be passed to `docker network create` for any network driver.

| Argument     | Equivalent | Description                              |
|--------------|------------|------------------------------------------|
| `--internal` | -          | Restricts external access to the network |
| `--ipv6`     | `--ipv6`   | Enable IPv6 networking                   |

For example, let's use the `-o` (or `--opt`) option to specify an IP address binding when publishing ports:

```bash
$ docker network create -o "com.docker.network.bridge.host_binding_ipv4"="172.23.0.1" my-network

b1a086897963e6a2e7fc6868962e55e746bee8ad0c97b54a5831054b5f62672a

$ docker network inspect my-network

[
    {
        "Name": "my-network",
        "Id": "b1a086897963e6a2e7fc6868962e55e746bee8ad0c97b54a5831054b5f62672a",
        "Scope": "local",
        "Driver": "bridge",
        "IPAM": {
            "Driver": "default",
            "Options": {},
            "Config": [
                {
                    "Subnet": "172.23.0.0/16",
                    "Gateway": "172.23.0.1"
                }
            ]
        },
        "Containers": {},
        "Options": {
            "com.docker.network.bridge.host_binding_ipv4": "172.23.0.1"
        },
        "Labels": {}
    }
]

$ docker run -d -P --name redis --network my-network redis

bafb0c808c53104b2c90346f284bda33a69beadcab4fc83ab8f2c5a4410cd129

$ docker ps

CONTAINER ID        IMAGE               COMMAND                  CREATED             STATUS              PORTS                        NAMES
bafb0c808c53        redis               "/entrypoint.sh redis"   4 seconds ago       Up 3 seconds        172.23.0.1:32770->6379/tcp   redis
```

## Connect containers

You can connect containers dynamically to one or more networks. These networks
can be backed by the same or different network drivers. Once connected, the
containers can communicate using another container's IP address or name.

For `overlay` networks or custom plugins that support multi-host
connectivity, containers connected to the same multi-host network but launched
from different hosts can also communicate in this way.

Create two containers for this example:

```bash
$ docker run -itd --name=container1 busybox

18c062ef45ac0c026ee48a83afa39d25635ee5f02b58de4abc8f467bcaa28731

$ docker run -itd --name=container2 busybox

498eaaaf328e1018042c04b2de04036fc04719a6e39a097a4f4866043a2c2152
```

Then create an isolated `bridge` network to test with.

```bash
$ docker network create -d bridge --subnet 172.25.0.0/16 isolated_nw

06a62f1c73c4e3107c0f555b7a5f163309827bfbbf999840166065a8f35455a8
```

Connect `container2` to the network and then `inspect` the network to verify
the connection:

```bash
$ docker network connect isolated_nw container2

$ docker network inspect isolated_nw

[
    {
        "Name": "isolated_nw",
        "Id": "06a62f1c73c4e3107c0f555b7a5f163309827bfbbf999840166065a8f35455a8",
        "Scope": "local",
        "Driver": "bridge",
        "IPAM": {
            "Driver": "default",
            "Config": [
                {
                    "Subnet": "172.25.0.0/16",
                    "Gateway": "172.25.0.1"
                }
            ]
        },
        "Containers": {
            "90e1f3ec71caf82ae776a827e0712a68a110a3f175954e5bd4222fd142ac9428": {
                "Name": "container2",
                "EndpointID": "11cedac1810e864d6b1589d92da12af66203879ab89f4ccd8c8fdaa9b1c48b1d",
                "MacAddress": "02:42:ac:19:00:02",
                "IPv4Address": "172.25.0.2/16",
                "IPv6Address": ""
            }
        },
        "Options": {},
        "Labels": {}
    }
]
```

You can see that the Engine automatically assigns an IP address to `container2`.
Because you specified a `--subnet` when creating the network, Engine picked
an address from that same subnet.
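If you script around this, you can read the assigned address directly with a Go
template instead of scanning the full `inspect` output. A minimal sketch; `-f`
is the short form of `--format`, and the address shown matches the `inspect`
output above but will differ in your environment:

```shell
# Read container2's address on isolated_nw via a Go template
# (address will vary with your environment).
$ docker inspect -f '{{(index .NetworkSettings.Networks "isolated_nw").IPAddress}}' container2
172.25.0.2
```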
Now, start a third container and connect it to the network on launch using the
`docker run` command's `--network` option:

```bash
$ docker run --network=isolated_nw --ip=172.25.3.3 -itd --name=container3 busybox

467a7863c3f0277ef8e661b38427737f28099b61fa55622d6c30fb288d88c551
```

As you can see, you were able to specify the IP address for your container. As
long as the network to which the container is connecting was created with a
user-specified subnet, you can select the IPv4 and/or IPv6 address(es) for your
container when executing the `docker run` and `docker network connect` commands,
by passing the `--ip` flag for IPv4 and the `--ip6` flag for IPv6. The selected
IP address is part of the container's networking configuration and is preserved
across container reloads. The feature is only available on user-defined
networks, because they guarantee that their subnet configuration does not
change across daemon reloads.

Now, inspect the network resources used by `container3`.

```bash
$ docker inspect --format='{{json .NetworkSettings.Networks}}' container3

{"isolated_nw":{"IPAMConfig":{"IPv4Address":"172.25.3.3"},"NetworkID":"1196a4c5af43a21ae38ef34515b6af19236a3fc48122cf585e3f3054d509679b",
"EndpointID":"dffc7ec2915af58cc827d995e6ebdc897342be0420123277103c40ae35579103","Gateway":"172.25.0.1","IPAddress":"172.25.3.3","IPPrefixLen":16,"IPv6Gateway":"","GlobalIPv6Address":"","GlobalIPv6PrefixLen":0,"MacAddress":"02:42:ac:19:03:03"}}
```

Repeat this command for `container2`. If you have Python installed, you can
pretty-print the output.

```bash
$ docker inspect --format='{{json .NetworkSettings.Networks}}' container2 | python -m json.tool

{
    "bridge": {
        "NetworkID": "7ea29fc1412292a2d7bba362f9253545fecdfa8ce9a6e37dd10ba8bee7129812",
        "EndpointID": "0099f9efb5a3727f6a554f176b1e96fca34cae773da68b3b6a26d046c12cb365",
        "Gateway": "172.17.0.1",
        "GlobalIPv6Address": "",
        "GlobalIPv6PrefixLen": 0,
        "IPAMConfig": null,
        "IPAddress": "172.17.0.3",
        "IPPrefixLen": 16,
        "IPv6Gateway": "",
        "MacAddress": "02:42:ac:11:00:03"
    },
    "isolated_nw": {
        "NetworkID": "1196a4c5af43a21ae38ef34515b6af19236a3fc48122cf585e3f3054d509679b",
        "EndpointID": "11cedac1810e864d6b1589d92da12af66203879ab89f4ccd8c8fdaa9b1c48b1d",
        "Gateway": "172.25.0.1",
        "GlobalIPv6Address": "",
        "GlobalIPv6PrefixLen": 0,
        "IPAMConfig": null,
        "IPAddress": "172.25.0.2",
        "IPPrefixLen": 16,
        "IPv6Gateway": "",
        "MacAddress": "02:42:ac:19:00:02"
    }
}
```

You should find that `container2` belongs to two networks: the `bridge` network,
which it joined by default when you launched it, and the `isolated_nw` network,
which you later connected it to.



In the case of `container3`, you connected it through `docker run` to
`isolated_nw`, so that container is not connected to `bridge`.

Use the `docker attach` command to connect to the running `container2` and
examine its networking stack:

```bash
$ docker attach container2
```

If you look at the container's network stack you should see two Ethernet
interfaces, one for the default bridge network and one for the `isolated_nw`
network.

```bash
/ # ifconfig
eth0      Link encap:Ethernet  HWaddr 02:42:AC:11:00:03
          inet addr:172.17.0.3  Bcast:0.0.0.0  Mask:255.255.0.0
          inet6 addr: fe80::42:acff:fe11:3/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:9001  Metric:1
          RX packets:8 errors:0 dropped:0 overruns:0 frame:0
          TX packets:8 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:648 (648.0 B)  TX bytes:648 (648.0 B)

eth1      Link encap:Ethernet  HWaddr 02:42:AC:15:00:02
          inet addr:172.25.0.2  Bcast:0.0.0.0  Mask:255.255.0.0
          inet6 addr: fe80::42:acff:fe19:2/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:8 errors:0 dropped:0 overruns:0 frame:0
          TX packets:8 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:648 (648.0 B)  TX bytes:648 (648.0 B)

lo        Link encap:Local Loopback
          inet addr:127.0.0.1  Mask:255.0.0.0
          inet6 addr: ::1/128 Scope:Host
          UP LOOPBACK RUNNING  MTU:65536  Metric:1
          RX packets:0 errors:0 dropped:0 overruns:0 frame:0
          TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:0 (0.0 B)  TX bytes:0 (0.0 B)
```

On the user-defined `isolated_nw` network, the Docker embedded DNS server
enables name resolution for other containers in the network. Inside
`container2` it is possible to ping `container3` by name.

```bash
/ # ping -w 4 container3
PING container3 (172.25.3.3): 56 data bytes
64 bytes from 172.25.3.3: seq=0 ttl=64 time=0.070 ms
64 bytes from 172.25.3.3: seq=1 ttl=64 time=0.080 ms
64 bytes from 172.25.3.3: seq=2 ttl=64 time=0.080 ms
64 bytes from 172.25.3.3: seq=3 ttl=64 time=0.097 ms

--- container3 ping statistics ---
4 packets transmitted, 4 packets received, 0% packet loss
round-trip min/avg/max = 0.070/0.081/0.097 ms
```

This isn't the case for the default `bridge` network.
Both `container2` and
`container1` are connected to the default `bridge` network. Docker does not
support automatic service discovery on this network. For this reason, pinging
`container1` by name fails, as you would expect based on the `/etc/hosts` file:

```bash
/ # ping -w 4 container1
ping: bad address 'container1'
```

A ping using the `container1` IP address does succeed, though:

```bash
/ # ping -w 4 172.17.0.2
PING 172.17.0.2 (172.17.0.2): 56 data bytes
64 bytes from 172.17.0.2: seq=0 ttl=64 time=0.095 ms
64 bytes from 172.17.0.2: seq=1 ttl=64 time=0.075 ms
64 bytes from 172.17.0.2: seq=2 ttl=64 time=0.072 ms
64 bytes from 172.17.0.2: seq=3 ttl=64 time=0.101 ms

--- 172.17.0.2 ping statistics ---
4 packets transmitted, 4 packets received, 0% packet loss
round-trip min/avg/max = 0.072/0.085/0.101 ms
```

If you wanted, you could connect `container1` to `container2` with the `docker
run --link` command, and that would enable the two containers to interact by
name as well as by IP.

Detach from `container2` and leave it running using `CTRL-p CTRL-q`.

In this example, `container2` is attached to both networks and so can talk to
`container1` and `container3`. But `container3` and `container1` are not in the
same network and cannot communicate. Test this now by attaching to
`container3` and attempting to ping `container1` by IP address.

```bash
$ docker attach container3

/ # ping 172.17.0.2
PING 172.17.0.2 (172.17.0.2): 56 data bytes
^C
--- 172.17.0.2 ping statistics ---
10 packets transmitted, 0 packets received, 100% packet loss
```

You can connect both running and non-running containers to a network. However,
`docker network inspect` only displays information on running containers.
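A quick way to see this is to connect a container that has been created but not
yet started. The container name here (`container8`) is only for illustration:

```shell
# Create (but do not start) a container, then connect it to the network.
# It will not appear in `docker network inspect isolated_nw` output
# until it is started.
$ docker create -it --name=container8 busybox
$ docker network connect isolated_nw container8
$ docker start container8
```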

### Linking containers in user-defined networks

In the above example, `container2` was able to resolve `container3`'s name
automatically in the user-defined network `isolated_nw`, but the name
resolution did not succeed automatically in the default `bridge` network. This
is expected in order to maintain backward compatibility with [legacy
links](default_network/dockerlinks.md).

The *legacy link* provided four major functionalities to the default `bridge`
network:

* name resolution
* a name alias for the linked container, using `--link=CONTAINER-NAME:ALIAS`
* secured container connectivity (isolation via `--icc=false`)
* environment variable injection

Comparing these four functionalities with non-default user-defined networks
such as `isolated_nw` in this example, without any additional configuration,
`docker network` provides:

* automatic name resolution using DNS
* an automatically secured, isolated environment for the containers in a network
* the ability to dynamically attach to and detach from multiple networks
* support for the `--link` option, to provide a name alias for the linked container

Continuing with the above example, create another container, `container4`, in
`isolated_nw` with `--link` to provide additional name resolution using an
alias for other containers in the same network.

```bash
$ docker run --network=isolated_nw -itd --name=container4 --link container5:c5 busybox

01b5df970834b77a9eadbaff39051f237957bd35c4c56f11193e0594cfd5117c
```

With the help of `--link`, `container4` will be able to reach `container5`
using the aliased name `c5` as well.

Note that while creating `container4`, we linked to a container named
`container5` which is not created yet. That is one of the differences in
behavior between the *legacy link* in the default `bridge` network and the new
*link* functionality in user-defined networks.
The *legacy link* is static in nature: it hard-binds the container to the
alias and does not tolerate linked container restarts. The new *link*
functionality in user-defined networks is dynamic in nature and supports
linked container restarts, including tolerating IP address changes on the
linked container.

Now let us launch another container, named `container5`, linking `container4`
to the alias `c4`.

```bash
$ docker run --network=isolated_nw -itd --name=container5 --link container4:c4 busybox

72eccf2208336f31e9e33ba327734125af00d1e1d2657878e2ee8154fbb23c7a
```

As expected, `container4` will be able to reach `container5` by both its
container name and its alias `c5`, and `container5` will be able to reach
`container4` by its container name and its alias `c4`.

```bash
$ docker attach container4

/ # ping -w 4 c5
PING c5 (172.25.0.5): 56 data bytes
64 bytes from 172.25.0.5: seq=0 ttl=64 time=0.070 ms
64 bytes from 172.25.0.5: seq=1 ttl=64 time=0.080 ms
64 bytes from 172.25.0.5: seq=2 ttl=64 time=0.080 ms
64 bytes from 172.25.0.5: seq=3 ttl=64 time=0.097 ms

--- c5 ping statistics ---
4 packets transmitted, 4 packets received, 0% packet loss
round-trip min/avg/max = 0.070/0.081/0.097 ms

/ # ping -w 4 container5
PING container5 (172.25.0.5): 56 data bytes
64 bytes from 172.25.0.5: seq=0 ttl=64 time=0.070 ms
64 bytes from 172.25.0.5: seq=1 ttl=64 time=0.080 ms
64 bytes from 172.25.0.5: seq=2 ttl=64 time=0.080 ms
64 bytes from 172.25.0.5: seq=3 ttl=64 time=0.097 ms

--- container5 ping statistics ---
4 packets transmitted, 4 packets received, 0% packet loss
round-trip min/avg/max = 0.070/0.081/0.097 ms
```

```bash
$ docker attach container5

/ # ping -w 4 c4
PING c4 (172.25.0.4): 56 data bytes
64 bytes from 172.25.0.4: seq=0 ttl=64 time=0.065 ms
64 bytes from 172.25.0.4: seq=1 ttl=64 time=0.070 ms
64 bytes from 172.25.0.4: seq=2 ttl=64 time=0.067 ms
64 bytes from 172.25.0.4: seq=3 ttl=64 time=0.082 ms

--- c4 ping statistics ---
4 packets transmitted, 4 packets received, 0% packet loss
round-trip min/avg/max = 0.065/0.070/0.082 ms

/ # ping -w 4 container4
PING container4 (172.25.0.4): 56 data bytes
64 bytes from 172.25.0.4: seq=0 ttl=64 time=0.065 ms
64 bytes from 172.25.0.4: seq=1 ttl=64 time=0.070 ms
64 bytes from 172.25.0.4: seq=2 ttl=64 time=0.067 ms
64 bytes from 172.25.0.4: seq=3 ttl=64 time=0.082 ms

--- container4 ping statistics ---
4 packets transmitted, 4 packets received, 0% packet loss
round-trip min/avg/max = 0.065/0.070/0.082 ms
```

Similar to the legacy link functionality, the new link alias is localized to a
container, and the aliased name has no meaning outside of the container using
the `--link`.

Also, it is important to note that if a container belongs to multiple networks,
the linked alias is scoped within a given network. Hence, containers can be
linked to different aliases in different networks.
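For instance, the `c5` alias that `container4` defined is not visible from
`container3`, even though both are attached to `isolated_nw`. A sketch of that
check (the successful ping's output is omitted since the timings depend on
your setup):

```shell
$ docker attach container3

# The alias is private to container4, so it does not resolve here...
/ # ping -w 4 c5
ping: bad address 'c5'

# ...while the embedded DNS still resolves the container name itself.
/ # ping -w 4 container5
```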

Extending the example, let us create another network named `local_alias`:

```bash
$ docker network create -d bridge --subnet 172.26.0.0/24 local_alias
76b7dc932e037589e6553f59f76008e5b76fa069638cd39776b890607f567aaa
```

Then let us connect `container4` and `container5` to the new network
`local_alias`:

```bash
$ docker network connect --link container5:foo local_alias container4
$ docker network connect --link container4:bar local_alias container5
```

```bash
$ docker attach container4

/ # ping -w 4 foo
PING foo (172.26.0.3): 56 data bytes
64 bytes from 172.26.0.3: seq=0 ttl=64 time=0.070 ms
64 bytes from 172.26.0.3: seq=1 ttl=64 time=0.080 ms
64 bytes from 172.26.0.3: seq=2 ttl=64 time=0.080 ms
64 bytes from 172.26.0.3: seq=3 ttl=64 time=0.097 ms

--- foo ping statistics ---
4 packets transmitted, 4 packets received, 0% packet loss
round-trip min/avg/max = 0.070/0.081/0.097 ms

/ # ping -w 4 c5
PING c5 (172.25.0.5): 56 data bytes
64 bytes from 172.25.0.5: seq=0 ttl=64 time=0.070 ms
64 bytes from 172.25.0.5: seq=1 ttl=64 time=0.080 ms
64 bytes from 172.25.0.5: seq=2 ttl=64 time=0.080 ms
64 bytes from 172.25.0.5: seq=3 ttl=64 time=0.097 ms

--- c5 ping statistics ---
4 packets transmitted, 4 packets received, 0% packet loss
round-trip min/avg/max = 0.070/0.081/0.097 ms
```

Note that the ping succeeds for both aliases, but on different networks.
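The same check works from the other side: `container5` should reach
`container4` as `c4` on `isolated_nw` and as `bar` on `local_alias`. Commands
only; the replies will vary with your setup:

```shell
$ docker attach container5

# `c4` was defined on isolated_nw, `bar` on local_alias; both point at
# container4, each alias resolving on its own network.
/ # ping -w 4 c4
/ # ping -w 4 bar
```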
Let us conclude this section by disconnecting `container5` from `isolated_nw`
and observing the results:

```bash
$ docker network disconnect isolated_nw container5

$ docker attach container4

/ # ping -w 4 c5
ping: bad address 'c5'

/ # ping -w 4 foo
PING foo (172.26.0.3): 56 data bytes
64 bytes from 172.26.0.3: seq=0 ttl=64 time=0.070 ms
64 bytes from 172.26.0.3: seq=1 ttl=64 time=0.080 ms
64 bytes from 172.26.0.3: seq=2 ttl=64 time=0.080 ms
64 bytes from 172.26.0.3: seq=3 ttl=64 time=0.097 ms

--- foo ping statistics ---
4 packets transmitted, 4 packets received, 0% packet loss
round-trip min/avg/max = 0.070/0.081/0.097 ms
```

In conclusion, the new link functionality in user-defined networks provides all
the benefits of legacy links while avoiding most of the well-known issues with
*legacy links*.

One notable missing functionality compared to *legacy links* is the injection
of environment variables. Though very useful, environment variable injection is
static in nature and must happen when the container is started. One cannot
inject environment variables into a running container without significant
effort, so it is not compatible with `docker network`, which provides a dynamic
way to connect containers to, and disconnect them from, a network.

### Network-scoped alias

While *links* provide private name resolution that is localized within a
container, the network-scoped alias provides a way for a container to be
discovered by an alternate name by any other container within the scope of a
particular network. Unlike the *link* alias, which is defined by the consumer
of a service, the network-scoped alias is defined by the container that is
offering the service to the network.

Continuing with the above example, create another container in `isolated_nw`
with a network alias.

```bash
$ docker run --network=isolated_nw -itd --name=container6 --network-alias app busybox

8ebe6767c1e0361f27433090060b33200aac054a68476c3be87ef4005eb1df17
```

```bash
$ docker attach container4

/ # ping -w 4 app
PING app (172.25.0.6): 56 data bytes
64 bytes from 172.25.0.6: seq=0 ttl=64 time=0.070 ms
64 bytes from 172.25.0.6: seq=1 ttl=64 time=0.080 ms
64 bytes from 172.25.0.6: seq=2 ttl=64 time=0.080 ms
64 bytes from 172.25.0.6: seq=3 ttl=64 time=0.097 ms

--- app ping statistics ---
4 packets transmitted, 4 packets received, 0% packet loss
round-trip min/avg/max = 0.070/0.081/0.097 ms

/ # ping -w 4 container6
PING container6 (172.25.0.6): 56 data bytes
64 bytes from 172.25.0.6: seq=0 ttl=64 time=0.070 ms
64 bytes from 172.25.0.6: seq=1 ttl=64 time=0.080 ms
64 bytes from 172.25.0.6: seq=2 ttl=64 time=0.080 ms
64 bytes from 172.25.0.6: seq=3 ttl=64 time=0.097 ms

--- container6 ping statistics ---
4 packets transmitted, 4 packets received, 0% packet loss
round-trip min/avg/max = 0.070/0.081/0.097 ms
```

Now let us connect `container6` to the `local_alias` network with a different
network-scoped alias.

```bash
$ docker network connect --alias scoped-app local_alias container6
```

`container6` in this example is now aliased as `app` in network `isolated_nw`
and as `scoped-app` in network `local_alias`.

Let's try to reach these aliases from `container4` (which is connected to both
these networks) and `container5` (which is connected only to `isolated_nw`).

```bash
$ docker attach container4

/ # ping -w 4 scoped-app
PING scoped-app (172.26.0.5): 56 data bytes
64 bytes from 172.26.0.5: seq=0 ttl=64 time=0.070 ms
64 bytes from 172.26.0.5: seq=1 ttl=64 time=0.080 ms
64 bytes from 172.26.0.5: seq=2 ttl=64 time=0.080 ms
64 bytes from 172.26.0.5: seq=3 ttl=64 time=0.097 ms

--- scoped-app ping statistics ---
4 packets transmitted, 4 packets received, 0% packet loss
round-trip min/avg/max = 0.070/0.081/0.097 ms

$ docker attach container5

/ # ping -w 4 scoped-app
ping: bad address 'scoped-app'

```

As you can see, the alias is scoped to the network it is defined on, and hence
only those containers that are connected to that network can access the alias.

In addition to the above features, multiple containers can share the same
network-scoped alias within the same network. For example, let's launch
`container7` in `isolated_nw` with the same alias as `container6`:

```bash
$ docker run --network=isolated_nw -itd --name=container7 --network-alias app busybox

3138c678c123b8799f4c7cc6a0cecc595acbdfa8bf81f621834103cd4f504554
```

When multiple containers share the same alias, name resolution of that alias
resolves to one of the containers (typically the first container that was
aliased). When the container that backs the alias goes down or is disconnected
from the network, the next container that backs the alias is resolved.

Let us ping the alias `app` from `container4` and bring down `container6` to
verify that `container7` now resolves the `app` alias.

```bash
$ docker attach container4

/ # ping -w 4 app
PING app (172.25.0.6): 56 data bytes
64 bytes from 172.25.0.6: seq=0 ttl=64 time=0.070 ms
64 bytes from 172.25.0.6: seq=1 ttl=64 time=0.080 ms
64 bytes from 172.25.0.6: seq=2 ttl=64 time=0.080 ms
64 bytes from 172.25.0.6: seq=3 ttl=64 time=0.097 ms

--- app ping statistics ---
4 packets transmitted, 4 packets received, 0% packet loss
round-trip min/avg/max = 0.070/0.081/0.097 ms

$ docker stop container6

$ docker attach container4

/ # ping -w 4 app
PING app (172.25.0.7): 56 data bytes
64 bytes from 172.25.0.7: seq=0 ttl=64 time=0.095 ms
64 bytes from 172.25.0.7: seq=1 ttl=64 time=0.075 ms
64 bytes from 172.25.0.7: seq=2 ttl=64 time=0.072 ms
64 bytes from 172.25.0.7: seq=3 ttl=64 time=0.101 ms

--- app ping statistics ---
4 packets transmitted, 4 packets received, 0% packet loss
round-trip min/avg/max = 0.072/0.085/0.101 ms

```

## Disconnecting containers

You can disconnect a container from a network using the `docker network
disconnect` command.

```bash
$ docker network disconnect isolated_nw container2

$ docker inspect --format='{{json .NetworkSettings.Networks}}' container2 | python -m json.tool

{
    "bridge": {
        "NetworkID": "7ea29fc1412292a2d7bba362f9253545fecdfa8ce9a6e37dd10ba8bee7129812",
        "EndpointID": "9e4575f7f61c0f9d69317b7a4b92eefc133347836dd83ef65deffa16b9985dc0",
        "Gateway": "172.17.0.1",
        "GlobalIPv6Address": "",
        "GlobalIPv6PrefixLen": 0,
        "IPAddress": "172.17.0.3",
        "IPPrefixLen": 16,
        "IPv6Gateway": "",
        "MacAddress": "02:42:ac:11:00:03"
    }
}


$ docker network inspect isolated_nw

[
    {
        "Name": "isolated_nw",
        "Id": "06a62f1c73c4e3107c0f555b7a5f163309827bfbbf999840166065a8f35455a8",
        "Scope": "local",
        "Driver": "bridge",
        "IPAM": {
            "Driver": "default",
            "Config": [
                {
                    "Subnet": "172.25.0.0/16",
                    "Gateway": "172.25.0.1"
                }
            ]
        },
        "Containers": {
            "467a7863c3f0277ef8e661b38427737f28099b61fa55622d6c30fb288d88c551": {
                "Name": "container3",
                "EndpointID": "dffc7ec2915af58cc827d995e6ebdc897342be0420123277103c40ae35579103",
                "MacAddress": "02:42:ac:19:03:03",
                "IPv4Address": "172.25.3.3/16",
                "IPv6Address": ""
            }
        },
        "Options": {},
        "Labels": {}
    }
]
```

Once a container is disconnected from a network, it cannot communicate with
other containers connected to that network. In this example, `container2` can
no longer talk to `container3` on the `isolated_nw` network.

```bash
$ docker attach container2

/ # ifconfig
eth0      Link encap:Ethernet  HWaddr 02:42:AC:11:00:03
          inet addr:172.17.0.3  Bcast:0.0.0.0  Mask:255.255.0.0
          inet6 addr: fe80::42:acff:fe11:3/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:9001  Metric:1
          RX packets:8 errors:0 dropped:0 overruns:0 frame:0
          TX packets:8 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:648 (648.0 B)  TX bytes:648 (648.0 B)

lo        Link encap:Local Loopback
          inet addr:127.0.0.1  Mask:255.0.0.0
          inet6 addr: ::1/128 Scope:Host
          UP LOOPBACK RUNNING  MTU:65536  Metric:1
          RX packets:0 errors:0 dropped:0 overruns:0 frame:0
          TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:0 (0.0 B)  TX bytes:0 (0.0 B)

/ # ping container3
PING container3 (172.25.3.3): 56 data bytes
^C
--- container3 ping statistics ---
2 packets transmitted, 0 packets received, 100% packet loss
```

`container2` still has full connectivity to the `bridge` network:

```bash
/ # ping container1
PING container1 (172.17.0.2): 56 data bytes
64 bytes from 172.17.0.2: seq=0 ttl=64 time=0.119 ms
64 bytes from 172.17.0.2: seq=1 ttl=64 time=0.174 ms
^C
--- container1 ping statistics ---
2 packets transmitted, 2 packets received, 0% packet loss
round-trip min/avg/max = 0.119/0.146/0.174 ms
/ #
```

There are certain scenarios, such as ungraceful Docker daemon restarts in a
multi-host network, in which the daemon is unable to clean up stale
connectivity endpoints. Such stale endpoints may cause an error, `container
already connected to network`, when a new container with the same name as the
stale endpoint is connected to that network. To clean up these stale endpoints,
first remove the container and force-disconnect (`docker network disconnect
-f`) the endpoint from the network.
Once the endpoint is cleaned up, the container can be connected to the network.

```bash
$ docker run -d --name redis_db --network multihost redis

ERROR: Cannot start container bc0b19c089978f7845633027aa3435624ca3d12dd4f4f764b61eac4c0610f32e: container already connected to network multihost

$ docker rm -f redis_db

$ docker network disconnect -f multihost redis_db

$ docker run -d --name redis_db --network multihost redis

7d986da974aeea5e9f7aca7e510bdb216d58682faa83a9040c2f2adc0544795a
```

## Remove a network

When all the containers in a network are stopped or disconnected, you can
remove the network.

```bash
$ docker network disconnect isolated_nw container3
```

```bash
$ docker network inspect isolated_nw

[
    {
        "Name": "isolated_nw",
        "Id": "06a62f1c73c4e3107c0f555b7a5f163309827bfbbf999840166065a8f35455a8",
        "Scope": "local",
        "Driver": "bridge",
        "IPAM": {
            "Driver": "default",
            "Config": [
                {
                    "Subnet": "172.25.0.0/16",
                    "Gateway": "172.25.0.1"
                }
            ]
        },
        "Containers": {},
        "Options": {},
        "Labels": {}
    }
]

$ docker network rm isolated_nw
```

List all your networks to verify that `isolated_nw` was removed:

```bash
$ docker network ls

NETWORK ID          NAME                DRIVER
72314fa53006        host                host
f7ab26d71dbd        bridge              bridge
0f32e83e61ac        none                null
```

## Related information

* [network create](../../reference/commandline/network_create.md)
* [network inspect](../../reference/commandline/network_inspect.md)
* [network connect](../../reference/commandline/network_connect.md)
* [network disconnect](../../reference/commandline/network_disconnect.md)
* [network ls](../../reference/commandline/network_ls.md)
* [network rm](../../reference/commandline/network_rm.md)