<!--[metadata]>
+++
title = "Work with network commands"
description = "How to work with docker networks"
keywords = ["commands, Usage, network, docker, cluster"]
[menu.main]
parent = "smn_networking"
weight=-4
+++
<![end-metadata]-->

# Work with network commands

This article provides examples of the network subcommands you can use to interact with Docker networks and the containers in them. The commands are available through the Docker Engine CLI. These commands are:

* `docker network create`
* `docker network connect`
* `docker network ls`
* `docker network rm`
* `docker network disconnect`
* `docker network inspect`

While not required, it is a good idea to read [Understanding Docker
network](dockernetworks.md) before trying the examples in this section. The
examples here rely on a `bridge` network so that you can try them
immediately. If you would prefer to experiment with an `overlay` network see
the [Getting started with multi-host networks](get-started-overlay.md) instead.

## Create networks

Docker Engine creates a `bridge` network automatically when you install Engine.
This network corresponds to the `docker0` bridge that Engine has traditionally
relied on. In addition to this network, you can create your own `bridge` or `overlay` network.

A `bridge` network resides on a single host running an instance of Docker Engine. An `overlay` network can span multiple hosts running their own engines. If you run `docker network create` and supply only a network name, it creates a bridge network for you.

```bash
$ docker network create simple-network
69568e6336d8c96bbf57869030919f7c69524f71183b44d80948bd3927c87f6a
$ docker network inspect simple-network
[
    {
        "Name": "simple-network",
        "Id": "69568e6336d8c96bbf57869030919f7c69524f71183b44d80948bd3927c87f6a",
        "Scope": "local",
        "Driver": "bridge",
        "IPAM": {
            "Driver": "default",
            "Config": [
                {
                    "Subnet": "172.22.0.0/16",
                    "Gateway": "172.22.0.1/16"
                }
            ]
        },
        "Containers": {},
        "Options": {}
    }
]
```

Unlike `bridge` networks, `overlay` networks require some pre-existing conditions
before you can create one. These conditions are:

* Access to a key-value store. Engine supports Consul, Etcd, and ZooKeeper (distributed store) key-value stores.
* A cluster of hosts with connectivity to the key-value store.
* A properly configured Engine `daemon` on each host in the swarm.

The `docker daemon` options that support the `overlay` network are:

* `--cluster-store`
* `--cluster-store-opt`
* `--cluster-advertise`

It is also a good idea, though not required, that you install Docker Swarm
to manage the cluster. Swarm provides sophisticated discovery and server
management that can assist your implementation.

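To make these flags concrete, here is a sketch of how the daemon on each host might be started. This is an illustrative config fragment only: the Consul address, port, and interface name below are hypothetical placeholders, not values from this document.

```shell
# Hypothetical values: substitute your own key-value store address and the
# interface (or IP address) other hosts should use to reach this daemon.
docker daemon \
  --cluster-store=consul://192.168.1.10:8500 \
  --cluster-advertise=eth0:2376
```

Every daemon in the cluster must point at the same key-value store for an `overlay` network to span the hosts.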
When you create a network, Engine creates a non-overlapping subnetwork for the
network by default. You can override this default and specify a subnetwork
directly using the `--subnet` option. On a `bridge` network you can only
create a single subnet. An `overlay` network supports multiple subnets.

In addition to the `--subnet` option, you can also specify the `--gateway`, `--ip-range`, and `--aux-address` options.

```bash
$ docker network create -d overlay \
  --subnet=192.168.0.0/16 --subnet=192.170.0.0/16 \
  --gateway=192.168.0.100 --gateway=192.170.0.100 \
  --ip-range=192.168.1.0/24 \
  --aux-address a=192.168.1.5 --aux-address b=192.168.1.6 \
  --aux-address a=192.170.1.5 --aux-address b=192.170.1.6 \
  my-multihost-network
```

Be sure that your subnetworks do not overlap. If they do, network creation fails and Engine returns an error.

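If you want to sanity-check candidate subnets before creating a network, the range comparison can be done with ordinary shell arithmetic. The following is a minimal sketch, not part of Docker: the helper names are made up, and it assumes each network address is properly aligned (no host bits set).

```shell
# ip_to_int: convert a dotted-quad IPv4 address to a 32-bit integer.
ip_to_int() {
  IFS=. read -r a b c d <<EOF
$1
EOF
  echo $(( (a << 24) | (b << 16) | (c << 8) | d ))
}

# cidr_overlap: print "overlap" if two CIDR blocks share addresses, else "ok".
# Assumes the network address of each block has no host bits set.
cidr_overlap() {
  start1=$(ip_to_int "${1%/*}"); end1=$(( start1 + (1 << (32 - ${1#*/})) - 1 ))
  start2=$(ip_to_int "${2%/*}"); end2=$(( start2 + (1 << (32 - ${2#*/})) - 1 ))
  if [ "$start1" -le "$end2" ] && [ "$start2" -le "$end1" ]; then
    echo overlap
  else
    echo ok
  fi
}

cidr_overlap 192.168.0.0/16 192.170.0.0/16   # prints "ok"
cidr_overlap 172.22.0.0/16 172.22.1.0/24     # prints "overlap"
```

The two example calls mirror the subnets used elsewhere on this page: the two `/16` blocks from the overlay example do not overlap, while a `/24` carved out of an existing `/16` does.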
When creating a custom network, the default network driver (i.e. `bridge`) has additional options that can be passed.
The following are those options and the equivalent docker daemon flags used for the docker0 bridge:

| Option                                           | Equivalent  | Description                                           |
|--------------------------------------------------|-------------|-------------------------------------------------------|
| `com.docker.network.bridge.name`                 | -           | bridge name to be used when creating the Linux bridge |
| `com.docker.network.bridge.enable_ip_masquerade` | `--ip-masq` | Enable IP masquerading                                |
| `com.docker.network.bridge.enable_icc`           | `--icc`     | Enable or disable inter-container connectivity        |
| `com.docker.network.bridge.host_binding_ipv4`    | `--ip`      | Default IP when binding container ports               |
| `com.docker.network.mtu`                         | `--mtu`     | Set the container network MTU                         |
| `com.docker.network.enable_ipv6`                 | `--ipv6`    | Enable IPv6 networking                                |

For example, let's use the `-o` or `--opt` option to specify an IP address binding when publishing ports:

```bash
$ docker network create -o "com.docker.network.bridge.host_binding_ipv4"="172.23.0.1" my-network
b1a086897963e6a2e7fc6868962e55e746bee8ad0c97b54a5831054b5f62672a
$ docker network inspect my-network
[
    {
        "Name": "my-network",
        "Id": "b1a086897963e6a2e7fc6868962e55e746bee8ad0c97b54a5831054b5f62672a",
        "Scope": "local",
        "Driver": "bridge",
        "IPAM": {
            "Driver": "default",
            "Options": {},
            "Config": [
                {
                    "Subnet": "172.23.0.0/16",
                    "Gateway": "172.23.0.1/16"
                }
            ]
        },
        "Containers": {},
        "Options": {
            "com.docker.network.bridge.host_binding_ipv4": "172.23.0.1"
        }
    }
]
$ docker run -d -P --name redis --net my-network redis
bafb0c808c53104b2c90346f284bda33a69beadcab4fc83ab8f2c5a4410cd129
$ docker ps
CONTAINER ID        IMAGE               COMMAND                  CREATED             STATUS              PORTS                        NAMES
bafb0c808c53        redis               "/entrypoint.sh redis"   4 seconds ago       Up 3 seconds        172.23.0.1:32770->6379/tcp   redis
```

## Connect containers

You can connect containers dynamically to one or more networks. These networks
can be backed by the same or different network drivers. Once connected, the
containers can communicate using another container's IP address or name.

For `overlay` networks or custom plugins that support multi-host
connectivity, containers connected to the same multi-host network but launched
from different hosts can also communicate in this way.

Create two containers for this example:

```bash
$ docker run -itd --name=container1 busybox
18c062ef45ac0c026ee48a83afa39d25635ee5f02b58de4abc8f467bcaa28731

$ docker run -itd --name=container2 busybox
498eaaaf328e1018042c04b2de04036fc04719a6e39a097a4f4866043a2c2152
```

Then create an isolated `bridge` network to test with.

```bash
$ docker network create -d bridge --subnet 172.25.0.0/16 isolated_nw
06a62f1c73c4e3107c0f555b7a5f163309827bfbbf999840166065a8f35455a8
```

Connect `container2` to the network and then `inspect` the network to verify the connection:

```
$ docker network connect isolated_nw container2
$ docker network inspect isolated_nw
[
    {
        "Name": "isolated_nw",
        "Id": "06a62f1c73c4e3107c0f555b7a5f163309827bfbbf999840166065a8f35455a8",
        "Scope": "local",
        "Driver": "bridge",
        "IPAM": {
            "Driver": "default",
            "Config": [
                {
                    "Subnet": "172.25.0.0/16",
                    "Gateway": "172.25.0.1/16"
                }
            ]
        },
        "Containers": {
            "90e1f3ec71caf82ae776a827e0712a68a110a3f175954e5bd4222fd142ac9428": {
                "Name": "container2",
                "EndpointID": "11cedac1810e864d6b1589d92da12af66203879ab89f4ccd8c8fdaa9b1c48b1d",
                "MacAddress": "02:42:ac:19:00:02",
                "IPv4Address": "172.25.0.2/16",
                "IPv6Address": ""
            }
        },
        "Options": {}
    }
]
```

You can see that the Engine automatically assigns an IP address to `container2`.
Because you specified a `--subnet` when creating the network, Engine picked
an address from that same subnet. Now, start a third container and connect it to
the network on launch using the `docker run` command's `--net` option:

```bash
$ docker run --net=isolated_nw --ip=172.25.3.3 -itd --name=container3 busybox
467a7863c3f0277ef8e661b38427737f28099b61fa55622d6c30fb288d88c551
```

As you can see, you were able to specify the IP address for your container.
As long as the network to which the container is connecting was created with
a user-specified subnet, you can select the IPv4 and/or IPv6 address(es)
for your container when executing `docker run` and `docker network connect` commands.
The selected IP address is part of the container's networking configuration and is
preserved across container reloads. This feature is only available on user-defined networks,
because they guarantee that their subnet configuration does not change across daemon reloads.

Now, inspect the network resources used by `container3`.

```bash
$ docker inspect --format='{{json .NetworkSettings.Networks}}'  container3
{"isolated_nw":{"IPAMConfig":{"IPv4Address":"172.25.3.3"},"NetworkID":"1196a4c5af43a21ae38ef34515b6af19236a3fc48122cf585e3f3054d509679b",
"EndpointID":"dffc7ec2915af58cc827d995e6ebdc897342be0420123277103c40ae35579103","Gateway":"172.25.0.1","IPAddress":"172.25.3.3","IPPrefixLen":16,"IPv6Gateway":"","GlobalIPv6Address":"","GlobalIPv6PrefixLen":0,"MacAddress":"02:42:ac:19:03:03"}}
```
Repeat this command for `container2`. If you have Python installed, you can pretty-print the output.

```bash
$ docker inspect --format='{{json .NetworkSettings.Networks}}'  container2 | python -m json.tool
{
    "bridge": {
        "NetworkID":"7ea29fc1412292a2d7bba362f9253545fecdfa8ce9a6e37dd10ba8bee7129812",
        "EndpointID": "0099f9efb5a3727f6a554f176b1e96fca34cae773da68b3b6a26d046c12cb365",
        "Gateway": "172.17.0.1",
        "GlobalIPv6Address": "",
        "GlobalIPv6PrefixLen": 0,
        "IPAMConfig": null,
        "IPAddress": "172.17.0.3",
        "IPPrefixLen": 16,
        "IPv6Gateway": "",
        "MacAddress": "02:42:ac:11:00:03"
    },
    "isolated_nw": {
        "NetworkID":"1196a4c5af43a21ae38ef34515b6af19236a3fc48122cf585e3f3054d509679b",
        "EndpointID": "11cedac1810e864d6b1589d92da12af66203879ab89f4ccd8c8fdaa9b1c48b1d",
        "Gateway": "172.25.0.1",
        "GlobalIPv6Address": "",
        "GlobalIPv6PrefixLen": 0,
        "IPAMConfig": null,
        "IPAddress": "172.25.0.2",
        "IPPrefixLen": 16,
        "IPv6Gateway": "",
        "MacAddress": "02:42:ac:19:00:02"
    }
}
```

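When you only need one field from this output, you can post-process the JSON yourself. The following is a minimal sketch using Python from the shell; a saved sample file stands in for live `docker inspect` output, so no running daemon is required to try it:

```shell
# Save a sample of the inspect output shown above. In a live setup you would
# pipe `docker inspect --format='{{json .NetworkSettings.Networks}}' container2`
# straight into the extraction step instead of using a file.
cat > /tmp/networks.json <<'EOF'
{"isolated_nw": {"Gateway": "172.25.0.1", "IPAddress": "172.25.0.2", "IPPrefixLen": 16}}
EOF

# Pull out the container's address on the isolated_nw network.
python3 -c '
import json
nets = json.load(open("/tmp/networks.json"))
print(nets["isolated_nw"]["IPAddress"])
'
```

This prints `172.25.0.2`. For simple cases, `docker inspect --format` with a more specific Go template can extract the same field without any external tooling.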
You should find `container2` belongs to two networks: the default `bridge` network,
which it joined when you launched it, and `isolated_nw`, which you connected it to
later.

![](images/working.png)

In the case of `container3`, you connected it through `docker run` to
`isolated_nw`, so that container is not connected to `bridge`.

Use the `docker attach` command to connect to the running `container2` and
examine its networking stack:

```bash
$ docker attach container2
```

If you look at the container's network stack you should see two Ethernet interfaces, one for the default bridge network and one for the `isolated_nw` network.

```bash
/ # ifconfig
eth0      Link encap:Ethernet  HWaddr 02:42:AC:11:00:03
          inet addr:172.17.0.3  Bcast:0.0.0.0  Mask:255.255.0.0
          inet6 addr: fe80::42:acff:fe11:3/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:9001  Metric:1
          RX packets:8 errors:0 dropped:0 overruns:0 frame:0
          TX packets:8 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:648 (648.0 B)  TX bytes:648 (648.0 B)

eth1      Link encap:Ethernet  HWaddr 02:42:AC:15:00:02
          inet addr:172.25.0.2  Bcast:0.0.0.0  Mask:255.255.0.0
          inet6 addr: fe80::42:acff:fe19:2/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:8 errors:0 dropped:0 overruns:0 frame:0
          TX packets:8 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:648 (648.0 B)  TX bytes:648 (648.0 B)

lo        Link encap:Local Loopback
          inet addr:127.0.0.1  Mask:255.0.0.0
          inet6 addr: ::1/128 Scope:Host
          UP LOOPBACK RUNNING  MTU:65536  Metric:1
          RX packets:0 errors:0 dropped:0 overruns:0 frame:0
          TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:0 (0.0 B)  TX bytes:0 (0.0 B)
```

On the `isolated_nw`, which was user defined, the Docker embedded DNS server enables name resolution for other containers in the network. Inside of `container2` it is possible to ping `container3` by name.

```bash
/ # ping -w 4 container3
PING container3 (172.25.3.3): 56 data bytes
64 bytes from 172.25.3.3: seq=0 ttl=64 time=0.070 ms
64 bytes from 172.25.3.3: seq=1 ttl=64 time=0.080 ms
64 bytes from 172.25.3.3: seq=2 ttl=64 time=0.080 ms
64 bytes from 172.25.3.3: seq=3 ttl=64 time=0.097 ms

--- container3 ping statistics ---
4 packets transmitted, 4 packets received, 0% packet loss
round-trip min/avg/max = 0.070/0.081/0.097 ms
```

This isn't the case for the default `bridge` network. Both `container2` and `container1` are connected to the default bridge network. Docker does not support automatic service discovery on this network. For this reason, pinging `container1` by name fails, as you would expect based on the `/etc/hosts` file:

```bash
/ # ping -w 4 container1
ping: bad address 'container1'
```

A ping using the `container1` IP address does succeed though:

```bash
/ # ping -w 4 172.17.0.2
PING 172.17.0.2 (172.17.0.2): 56 data bytes
64 bytes from 172.17.0.2: seq=0 ttl=64 time=0.095 ms
64 bytes from 172.17.0.2: seq=1 ttl=64 time=0.075 ms
64 bytes from 172.17.0.2: seq=2 ttl=64 time=0.072 ms
64 bytes from 172.17.0.2: seq=3 ttl=64 time=0.101 ms

--- 172.17.0.2 ping statistics ---
4 packets transmitted, 4 packets received, 0% packet loss
round-trip min/avg/max = 0.072/0.085/0.101 ms
```

If you wanted, you could connect `container1` to `container2` with the `docker
run --link` command and that would enable the two containers to interact by name
as well as IP.

Detach from `container2` and leave it running using `CTRL-p CTRL-q`.

In this example, `container2` is attached to both networks and so can talk to
`container1` and `container3`. But `container3` and `container1` are not in the
same network and cannot communicate. Test this now by attaching to
`container3` and attempting to ping `container1` by IP address.

```bash
$ docker attach container3
/ # ping 172.17.0.2
PING 172.17.0.2 (172.17.0.2): 56 data bytes
^C
--- 172.17.0.2 ping statistics ---
10 packets transmitted, 0 packets received, 100% packet loss

```

You can connect both running and non-running containers to a network. However,
`docker network inspect` only displays information on running containers.

### Linking containers in user-defined networks

In the above example, `container2` was able to resolve `container3`'s name automatically
in the user-defined network `isolated_nw`, but name resolution did not succeed
automatically in the default `bridge` network. This is expected in order to maintain
backward compatibility with [legacy links](default_network/dockerlinks.md).

The `legacy link` provided four major functionalities to the default `bridge` network:

* name resolution
* name alias for the linked container using `--link=CONTAINER-NAME:ALIAS`
* secured container connectivity (in isolation via `--icc=false`)
* environment variable injection

In comparison with those four functionalities, user-defined networks such as
`isolated_nw` in this example provide the following, without any additional configuration:

* automatic name resolution using DNS
* automatic secured isolated environment for the containers in a network
* ability to dynamically attach to and detach from multiple networks
* support for the `--link` option to provide a name alias for the linked container

Continuing with the above example, create another container, `container4`, in `isolated_nw`
with `--link` to provide additional name resolution using an alias for other containers in
the same network.

```bash
$ docker run --net=isolated_nw -itd --name=container4 --link container5:c5 busybox
01b5df970834b77a9eadbaff39051f237957bd35c4c56f11193e0594cfd5117c
```

With the help of `--link`, `container4` will be able to reach `container5` using the
aliased name `c5` as well.

Please note that while creating `container4`, we linked to a container named `container5`
which has not been created yet. That is one of the differences in behavior between the
legacy link in the default `bridge` network and the new link functionality in user-defined
networks. The legacy link is static in nature: it hard-binds the container to the
alias and does not tolerate linked container restarts. The new link functionality
in user-defined networks is dynamic in nature and supports linked container restarts,
including tolerating IP address changes on the linked container.

Now let us launch another container named `container5`, linking it to `container4` with the alias `c4`.

```bash
$ docker run --net=isolated_nw -itd --name=container5 --link container4:c4 busybox
72eccf2208336f31e9e33ba327734125af00d1e1d2657878e2ee8154fbb23c7a
```

As expected, `container4` will be able to reach `container5` by both its container name and
its alias `c5`, and `container5` will be able to reach `container4` by its container name and
its alias `c4`.

```bash
$ docker attach container4
/ # ping -w 4 c5
PING c5 (172.25.0.5): 56 data bytes
64 bytes from 172.25.0.5: seq=0 ttl=64 time=0.070 ms
64 bytes from 172.25.0.5: seq=1 ttl=64 time=0.080 ms
64 bytes from 172.25.0.5: seq=2 ttl=64 time=0.080 ms
64 bytes from 172.25.0.5: seq=3 ttl=64 time=0.097 ms

--- c5 ping statistics ---
4 packets transmitted, 4 packets received, 0% packet loss
round-trip min/avg/max = 0.070/0.081/0.097 ms

/ # ping -w 4 container5
PING container5 (172.25.0.5): 56 data bytes
64 bytes from 172.25.0.5: seq=0 ttl=64 time=0.070 ms
64 bytes from 172.25.0.5: seq=1 ttl=64 time=0.080 ms
64 bytes from 172.25.0.5: seq=2 ttl=64 time=0.080 ms
64 bytes from 172.25.0.5: seq=3 ttl=64 time=0.097 ms

--- container5 ping statistics ---
4 packets transmitted, 4 packets received, 0% packet loss
round-trip min/avg/max = 0.070/0.081/0.097 ms
```

```bash
$ docker attach container5
/ # ping -w 4 c4
PING c4 (172.25.0.4): 56 data bytes
64 bytes from 172.25.0.4: seq=0 ttl=64 time=0.065 ms
64 bytes from 172.25.0.4: seq=1 ttl=64 time=0.070 ms
64 bytes from 172.25.0.4: seq=2 ttl=64 time=0.067 ms
64 bytes from 172.25.0.4: seq=3 ttl=64 time=0.082 ms

--- c4 ping statistics ---
4 packets transmitted, 4 packets received, 0% packet loss
round-trip min/avg/max = 0.065/0.070/0.082 ms

/ # ping -w 4 container4
PING container4 (172.25.0.4): 56 data bytes
64 bytes from 172.25.0.4: seq=0 ttl=64 time=0.065 ms
64 bytes from 172.25.0.4: seq=1 ttl=64 time=0.070 ms
64 bytes from 172.25.0.4: seq=2 ttl=64 time=0.067 ms
64 bytes from 172.25.0.4: seq=3 ttl=64 time=0.082 ms

--- container4 ping statistics ---
4 packets transmitted, 4 packets received, 0% packet loss
round-trip min/avg/max = 0.065/0.070/0.082 ms
```

Similar to the legacy link functionality, the new link alias is localized to a container,
and the aliased name has no meaning outside of the container using the `--link`.

Also, it is important to note that if a container belongs to multiple networks, the
linked alias is scoped within a given network. Hence the containers can be linked to
different aliases in different networks.

Extending the example, let us create another network named `local_alias`:

```bash
$ docker network create -d bridge --subnet 172.26.0.0/24 local_alias
76b7dc932e037589e6553f59f76008e5b76fa069638cd39776b890607f567aaa
```

Then let us connect `container4` and `container5` to the new network `local_alias`:

```
$ docker network connect --link container5:foo local_alias container4
$ docker network connect --link container4:bar local_alias container5
```

```bash
$ docker attach container4

/ # ping -w 4 foo
PING foo (172.26.0.3): 56 data bytes
64 bytes from 172.26.0.3: seq=0 ttl=64 time=0.070 ms
64 bytes from 172.26.0.3: seq=1 ttl=64 time=0.080 ms
64 bytes from 172.26.0.3: seq=2 ttl=64 time=0.080 ms
64 bytes from 172.26.0.3: seq=3 ttl=64 time=0.097 ms

--- foo ping statistics ---
4 packets transmitted, 4 packets received, 0% packet loss
round-trip min/avg/max = 0.070/0.081/0.097 ms

/ # ping -w 4 c5
PING c5 (172.25.0.5): 56 data bytes
64 bytes from 172.25.0.5: seq=0 ttl=64 time=0.070 ms
64 bytes from 172.25.0.5: seq=1 ttl=64 time=0.080 ms
64 bytes from 172.25.0.5: seq=2 ttl=64 time=0.080 ms
64 bytes from 172.25.0.5: seq=3 ttl=64 time=0.097 ms

--- c5 ping statistics ---
4 packets transmitted, 4 packets received, 0% packet loss
round-trip min/avg/max = 0.070/0.081/0.097 ms
```

Note that the ping succeeds for both aliases, but on different networks.
Let us conclude this section by disconnecting `container5` from `isolated_nw`
and observing the results.

```
$ docker network disconnect isolated_nw container5

$ docker attach container4

/ # ping -w 4 c5
ping: bad address 'c5'

/ # ping -w 4 foo
PING foo (172.26.0.3): 56 data bytes
64 bytes from 172.26.0.3: seq=0 ttl=64 time=0.070 ms
64 bytes from 172.26.0.3: seq=1 ttl=64 time=0.080 ms
64 bytes from 172.26.0.3: seq=2 ttl=64 time=0.080 ms
64 bytes from 172.26.0.3: seq=3 ttl=64 time=0.097 ms

--- foo ping statistics ---
4 packets transmitted, 4 packets received, 0% packet loss
round-trip min/avg/max = 0.070/0.081/0.097 ms

```

In conclusion, the new link functionality in user-defined networks provides all the
benefits of legacy links while avoiding most of the well-known issues with legacy links.

One notable functionality missing compared to legacy links is the injection of
environment variables. Though very useful, environment variable injection is static
in nature and must happen when the container is started. One cannot inject
environment variables into a running container without significant effort, and hence
it is not compatible with `docker network`, which provides a dynamic way to connect
containers to and disconnect them from a network.

### Network-scoped alias

While `links` provide private name resolution that is localized within a container,
the network-scoped alias provides a way for a container to be discovered by an
alternate name by any other container within the scope of a particular network.
Unlike the `link` alias, which is defined by the consumer of a service, the
network-scoped alias is defined by the container that is offering the service
to the network.

Continuing with the above example, create another container in `isolated_nw` with a
network alias.

```bash
$ docker run --net=isolated_nw -itd --name=container6 --net-alias app busybox
8ebe6767c1e0361f27433090060b33200aac054a68476c3be87ef4005eb1df17
```

```bash
$ docker attach container4
/ # ping -w 4 app
PING app (172.25.0.6): 56 data bytes
64 bytes from 172.25.0.6: seq=0 ttl=64 time=0.070 ms
64 bytes from 172.25.0.6: seq=1 ttl=64 time=0.080 ms
64 bytes from 172.25.0.6: seq=2 ttl=64 time=0.080 ms
64 bytes from 172.25.0.6: seq=3 ttl=64 time=0.097 ms

--- app ping statistics ---
4 packets transmitted, 4 packets received, 0% packet loss
round-trip min/avg/max = 0.070/0.081/0.097 ms

/ # ping -w 4 container6
PING container6 (172.25.0.6): 56 data bytes
64 bytes from 172.25.0.6: seq=0 ttl=64 time=0.070 ms
64 bytes from 172.25.0.6: seq=1 ttl=64 time=0.080 ms
64 bytes from 172.25.0.6: seq=2 ttl=64 time=0.080 ms
64 bytes from 172.25.0.6: seq=3 ttl=64 time=0.097 ms

--- container6 ping statistics ---
4 packets transmitted, 4 packets received, 0% packet loss
round-trip min/avg/max = 0.070/0.081/0.097 ms
```

Now let us connect `container6` to the `local_alias` network with a different network-scoped
alias.

```
$ docker network connect --alias scoped-app local_alias container6
```

`container6` in this example is now aliased as `app` in network `isolated_nw` and
as `scoped-app` in network `local_alias`.

Let's try to reach these aliases from `container4` (which is connected to both these networks)
and `container5` (which is connected only to `isolated_nw`).

```bash
$ docker attach container4

/ # ping -w 4 scoped-app
PING scoped-app (172.26.0.5): 56 data bytes
64 bytes from 172.26.0.5: seq=0 ttl=64 time=0.070 ms
64 bytes from 172.26.0.5: seq=1 ttl=64 time=0.080 ms
64 bytes from 172.26.0.5: seq=2 ttl=64 time=0.080 ms
64 bytes from 172.26.0.5: seq=3 ttl=64 time=0.097 ms

--- scoped-app ping statistics ---
4 packets transmitted, 4 packets received, 0% packet loss
round-trip min/avg/max = 0.070/0.081/0.097 ms

$ docker attach container5

/ # ping -w 4 scoped-app
ping: bad address 'scoped-app'

```

As you can see, the alias is scoped to the network it is defined on, and hence only
those containers that are connected to that network can access the alias.

In addition to the above features, multiple containers can share the same network-scoped
alias within the same network. For example, let's launch `container7` in `isolated_nw` with
the same alias as `container6`:

```bash
$ docker run --net=isolated_nw -itd --name=container7 --net-alias app busybox
3138c678c123b8799f4c7cc6a0cecc595acbdfa8bf81f621834103cd4f504554
```

When multiple containers share the same alias, requests for that alias resolve
to one of the containers (typically the first container that was given the alias). When the
container backing the alias goes down or is disconnected from the network, the next
container backing the alias is resolved.

Let us ping the alias `app` from `container4`, then bring down `container6` to verify that
`container7` now resolves the `app` alias.

```bash
$ docker attach container4
/ # ping -w 4 app
PING app (172.25.0.6): 56 data bytes
64 bytes from 172.25.0.6: seq=0 ttl=64 time=0.070 ms
64 bytes from 172.25.0.6: seq=1 ttl=64 time=0.080 ms
64 bytes from 172.25.0.6: seq=2 ttl=64 time=0.080 ms
64 bytes from 172.25.0.6: seq=3 ttl=64 time=0.097 ms

--- app ping statistics ---
4 packets transmitted, 4 packets received, 0% packet loss
round-trip min/avg/max = 0.070/0.081/0.097 ms

$ docker stop container6

$ docker attach container4
/ # ping -w 4 app
PING app (172.25.0.7): 56 data bytes
64 bytes from 172.25.0.7: seq=0 ttl=64 time=0.095 ms
64 bytes from 172.25.0.7: seq=1 ttl=64 time=0.075 ms
64 bytes from 172.25.0.7: seq=2 ttl=64 time=0.072 ms
64 bytes from 172.25.0.7: seq=3 ttl=64 time=0.101 ms

--- app ping statistics ---
4 packets transmitted, 4 packets received, 0% packet loss
round-trip min/avg/max = 0.072/0.085/0.101 ms

```

   681  ## Disconnecting containers
   682  
   683  You can disconnect a container from a network using the `docker network
   684  disconnect` command.
   685  
   686  ```
   687  $ docker network disconnect isolated_nw container2
   688  
$ docker inspect --format='{{json .NetworkSettings.Networks}}'  container2 | python -m json.tool
   690  {
   691      "bridge": {
   692          "NetworkID":"7ea29fc1412292a2d7bba362f9253545fecdfa8ce9a6e37dd10ba8bee7129812",
   693          "EndpointID": "9e4575f7f61c0f9d69317b7a4b92eefc133347836dd83ef65deffa16b9985dc0",
   694          "Gateway": "172.17.0.1",
   695          "GlobalIPv6Address": "",
   696          "GlobalIPv6PrefixLen": 0,
   697          "IPAddress": "172.17.0.3",
   698          "IPPrefixLen": 16,
   699          "IPv6Gateway": "",
   700          "MacAddress": "02:42:ac:11:00:03"
   701      }
   702  }
   703  
   704  
   705  $ docker network inspect isolated_nw
   706  [
   707      {
   708          "Name": "isolated_nw",
   709          "Id": "06a62f1c73c4e3107c0f555b7a5f163309827bfbbf999840166065a8f35455a8",
   710          "Scope": "local",
   711          "Driver": "bridge",
   712          "IPAM": {
   713              "Driver": "default",
   714              "Config": [
   715                  {
   716                      "Subnet": "172.21.0.0/16",
   717                      "Gateway": "172.21.0.1/16"
   718                  }
   719              ]
   720          },
   721          "Containers": {
   722              "467a7863c3f0277ef8e661b38427737f28099b61fa55622d6c30fb288d88c551": {
   723                  "Name": "container3",
   724                  "EndpointID": "dffc7ec2915af58cc827d995e6ebdc897342be0420123277103c40ae35579103",
   725                  "MacAddress": "02:42:ac:19:03:03",
   726                  "IPv4Address": "172.25.3.3/16",
   727                  "IPv6Address": ""
   728              }
   729          },
   730          "Options": {}
   731      }
   732  ]
   733  ```
   734  
   735  Once a container is disconnected from a network, it cannot communicate with
   736  other containers connected to that network. In this example, `container2` can no longer  talk to `container3` on the `isolated_nw` network.
   737  
   738  ```
   739  $ docker attach container2
   740  
   741  / # ifconfig
   742  eth0      Link encap:Ethernet  HWaddr 02:42:AC:11:00:03  
   743            inet addr:172.17.0.3  Bcast:0.0.0.0  Mask:255.255.0.0
   744            inet6 addr: fe80::42:acff:fe11:3/64 Scope:Link
   745            UP BROADCAST RUNNING MULTICAST  MTU:9001  Metric:1
   746            RX packets:8 errors:0 dropped:0 overruns:0 frame:0
   747            TX packets:8 errors:0 dropped:0 overruns:0 carrier:0
   748            collisions:0 txqueuelen:0
   749            RX bytes:648 (648.0 B)  TX bytes:648 (648.0 B)
   750  
   751  lo        Link encap:Local Loopback  
   752            inet addr:127.0.0.1  Mask:255.0.0.0
   753            inet6 addr: ::1/128 Scope:Host
   754            UP LOOPBACK RUNNING  MTU:65536  Metric:1
   755            RX packets:0 errors:0 dropped:0 overruns:0 frame:0
   756            TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
   757            collisions:0 txqueuelen:0
   758            RX bytes:0 (0.0 B)  TX bytes:0 (0.0 B)
   759  
   760  / # ping container3
   761  PING container3 (172.25.3.3): 56 data bytes
   762  ^C
   763  --- container3 ping statistics ---
   764  2 packets transmitted, 0 packets received, 100% packet loss
   765  ```
   766  
However, `container2` still has full connectivity to the default `bridge` network.
   768  
   769  ```bash
   770  / # ping container1
   771  PING container1 (172.17.0.2): 56 data bytes
   772  64 bytes from 172.17.0.2: seq=0 ttl=64 time=0.119 ms
   773  64 bytes from 172.17.0.2: seq=1 ttl=64 time=0.174 ms
   774  ^C
   775  --- container1 ping statistics ---
   776  2 packets transmitted, 2 packets received, 0% packet loss
   777  round-trip min/avg/max = 0.119/0.146/0.174 ms
   778  / #
   779  ```
   780  
   781  ## Remove a network
   782  
Once all the containers in a network are stopped or disconnected, you can remove the network.
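The Engine refuses to remove a network while containers are still connected to it. The transcript below is a sketch of that failure mode; the exact wording of the error may vary between Engine versions:

```bash
# Trying to remove a network that still has a connected container fails.
# (The exact error text may differ between Engine versions.)
$ docker network rm isolated_nw
Error response from daemon: network isolated_nw has active endpoints
```

Disconnect or stop the remaining containers first, then remove the network.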
   784  
   785  ```bash
   786  $ docker network disconnect isolated_nw container3
   787  ```
   788  
   789  ```bash
$ docker network inspect isolated_nw
   791  [
   792      {
   793          "Name": "isolated_nw",
   794          "Id": "06a62f1c73c4e3107c0f555b7a5f163309827bfbbf999840166065a8f35455a8",
   795          "Scope": "local",
   796          "Driver": "bridge",
   797          "IPAM": {
   798              "Driver": "default",
   799              "Config": [
   800                  {
   801                      "Subnet": "172.21.0.0/16",
   802                      "Gateway": "172.21.0.1/16"
   803                  }
   804              ]
   805          },
   806          "Containers": {},
   807          "Options": {}
   808      }
   809  ]
   810  
   811  $ docker network rm isolated_nw
   812  ```
   813  
List all your networks to verify that `isolated_nw` was removed:
   815  
   816  ```
   817  $ docker network ls
   818  NETWORK ID          NAME                DRIVER
   819  72314fa53006        host                host                
   820  f7ab26d71dbd        bridge              bridge              
   821  0f32e83e61ac        none                null  
   822  ```
   823  
   824  ## Related information
   825  
   826  * [network create](../../reference/commandline/network_create.md)
   827  * [network inspect](../../reference/commandline/network_inspect.md)
   828  * [network connect](../../reference/commandline/network_connect.md)
   829  * [network disconnect](../../reference/commandline/network_disconnect.md)
   830  * [network ls](../../reference/commandline/network_ls.md)
   831  * [network rm](../../reference/commandline/network_rm.md)