<!--[metadata]>
+++
title = "Work with network commands"
description = "How to work with docker networks"
keywords = ["commands, Usage, network, docker, cluster"]
[menu.main]
parent = "smn_networking"
weight=-4
+++
<![end-metadata]-->

# Work with network commands

This article provides examples of the network subcommands you can use to
interact with Docker networks and the containers in them. The commands are
available through the Docker Engine CLI. These commands are:

* `docker network create`
* `docker network connect`
* `docker network ls`
* `docker network rm`
* `docker network disconnect`
* `docker network inspect`

While not required, it is a good idea to read [Understanding Docker
networks](dockernetworks.md) before trying the examples in this section. The
examples in this section rely on a `bridge` network so that you can try them
immediately. If you would prefer to experiment with an `overlay` network, see
[Getting started with multi-host networks](get-started-overlay.md) instead.

## Create networks

Docker Engine creates a `bridge` network automatically when you install Engine.
This network corresponds to the `docker0` bridge that Engine has traditionally
relied on. In addition to this network, you can create your own `bridge` or
`overlay` network.

A `bridge` network resides on a single host running an instance of Docker
Engine. An `overlay` network can span multiple hosts running their own engines.
If you run `docker network create` and supply only a network name, it creates a
bridge network for you.

```bash
$ docker network create simple-network
69568e6336d8c96bbf57869030919f7c69524f71183b44d80948bd3927c87f6a
$ docker network inspect simple-network
[
    {
        "Name": "simple-network",
        "Id": "69568e6336d8c96bbf57869030919f7c69524f71183b44d80948bd3927c87f6a",
        "Scope": "local",
        "Driver": "bridge",
        "IPAM": {
            "Driver": "default",
            "Config": [
                {
                    "Subnet": "172.22.0.0/16",
                    "Gateway": "172.22.0.1/16"
                }
            ]
        },
        "Containers": {},
        "Options": {}
    }
]
```

Unlike `bridge` networks, `overlay` networks require some pre-existing conditions
before you can create one. These conditions are:

* Access to a key-value store. Engine supports Consul, Etcd, and ZooKeeper (Distributed store) key-value stores.
* A cluster of hosts with connectivity to the key-value store.
* A properly configured Engine `daemon` on each host in the swarm.

The `dockerd` options that support the `overlay` network are:

* `--cluster-store`
* `--cluster-store-opt`
* `--cluster-advertise`
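
For example, a minimal sketch of starting the daemon with these options, assuming a
Consul key-value store reachable at `192.168.1.10:8500` and `eth0` as the interface
to advertise (the address, port, and interface name are illustrative):

```bash
# Point the daemon at the shared key-value store and advertise this host
# to the other members of the cluster.
$ sudo dockerd \
    --cluster-store=consul://192.168.1.10:8500 \
    --cluster-advertise=eth0:2376
```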

It is also a good idea, though not required, to install Docker Swarm to manage
the cluster. Swarm provides sophisticated discovery and server management that
can assist your implementation.

When you create a network, Engine creates a non-overlapping subnetwork for the
network by default. You can override this default and specify a subnetwork
directly using the `--subnet` option. On a `bridge` network you can only
specify a single subnet. An `overlay` network supports multiple subnets.

> **Note** : It is highly recommended to use the `--subnet` option when creating
> a network. If the `--subnet` is not specified, the docker daemon automatically
> chooses and assigns a subnet for the network and it could overlap with another subnet
> in your infrastructure that is not managed by docker. Such overlaps can cause
> connectivity issues or failures when containers are connected to that network.

In addition to the `--subnet` option, you can also specify the `--gateway`,
`--ip-range`, and `--aux-address` options.

```bash
$ docker network create -d overlay \
  --subnet=192.168.0.0/16 \
  --subnet=192.170.0.0/16 \
  --gateway=192.168.0.100 \
  --gateway=192.170.0.100 \
  --ip-range=192.168.1.0/24 \
  --aux-address a=192.168.1.5 --aux-address b=192.168.1.6 \
  --aux-address a=192.170.1.5 --aux-address b=192.170.1.6 \
  my-multihost-network
```

Be sure that your subnetworks do not overlap. If they do, the network creation
fails and Engine returns an error.

When creating a custom network, the default network driver (i.e. `bridge`) accepts
additional options. The following are those options and the equivalent `docker`
daemon flags used for the `docker0` bridge:

| Option                                           | Equivalent  | Description                                            |
|--------------------------------------------------|-------------|--------------------------------------------------------|
| `com.docker.network.bridge.name`                 | -           | Bridge name to be used when creating the Linux bridge  |
| `com.docker.network.bridge.enable_ip_masquerade` | `--ip-masq` | Enable IP masquerading                                 |
| `com.docker.network.bridge.enable_icc`           | `--icc`     | Enable or disable inter-container connectivity         |
| `com.docker.network.bridge.host_binding_ipv4`    | `--ip`      | Default IP when binding container ports                |
| `com.docker.network.mtu`                         | `--mtu`     | Set the container network MTU                          |
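
For instance, a minimal sketch that sets a custom Linux bridge name and MTU through
these driver options (the network name, bridge name, and MTU value are illustrative):

```bash
$ docker network create \
  -o "com.docker.network.bridge.name"="docker-br0" \
  -o "com.docker.network.mtu"="1460" \
  my-mtu-network
```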

The following arguments can be passed to `docker network create` for any network driver.

| Argument     | Equivalent | Description                              |
|--------------|------------|------------------------------------------|
| `--internal` | -          | Restricts external access to the network |
| `--ipv6`     | `--ipv6`   | Enable IPv6 networking                   |
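
A hedged sketch of creating a network that has no external connectivity, using an
illustrative name and subnet (to enable IPv6 instead, pass `--ipv6` together with an
IPv6 `--subnet`):

```bash
$ docker network create --internal --subnet 172.28.0.0/24 internal-only-nw
```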

For example, let's use the `-o` or `--opt` option to specify an IP address binding
when publishing ports:

```bash
$ docker network create -o "com.docker.network.bridge.host_binding_ipv4"="172.23.0.1" my-network
b1a086897963e6a2e7fc6868962e55e746bee8ad0c97b54a5831054b5f62672a
$ docker network inspect my-network
[
    {
        "Name": "my-network",
        "Id": "b1a086897963e6a2e7fc6868962e55e746bee8ad0c97b54a5831054b5f62672a",
        "Scope": "local",
        "Driver": "bridge",
        "IPAM": {
            "Driver": "default",
            "Options": {},
            "Config": [
                {
                    "Subnet": "172.23.0.0/16",
                    "Gateway": "172.23.0.1/16"
                }
            ]
        },
        "Containers": {},
        "Options": {
            "com.docker.network.bridge.host_binding_ipv4": "172.23.0.1"
        }
    }
]
$ docker run -d -P --name redis --net my-network redis
bafb0c808c53104b2c90346f284bda33a69beadcab4fc83ab8f2c5a4410cd129
$ docker ps
CONTAINER ID        IMAGE               COMMAND                  CREATED             STATUS              PORTS                        NAMES
bafb0c808c53        redis               "/entrypoint.sh redis"   4 seconds ago       Up 3 seconds        172.23.0.1:32770->6379/tcp   redis
```
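
To confirm the binding, you can also query the published port directly. The host IP
should match the `host_binding_ipv4` option above; the ephemeral port will vary:

```bash
$ docker port redis 6379
172.23.0.1:32770
```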

## Connect containers

You can connect containers dynamically to one or more networks. These networks
can be backed by the same or different network drivers. Once connected, the
containers can communicate using another container's IP address or name.

For `overlay` networks or custom plugins that support multi-host
connectivity, containers connected to the same multi-host network but launched
from different hosts can also communicate in this way.

Create two containers for this example:

```bash
$ docker run -itd --name=container1 busybox
18c062ef45ac0c026ee48a83afa39d25635ee5f02b58de4abc8f467bcaa28731

$ docker run -itd --name=container2 busybox
498eaaaf328e1018042c04b2de04036fc04719a6e39a097a4f4866043a2c2152
```

Then create an isolated `bridge` network to test with.

```bash
$ docker network create -d bridge --subnet 172.25.0.0/16 isolated_nw
06a62f1c73c4e3107c0f555b7a5f163309827bfbbf999840166065a8f35455a8
```

Connect `container2` to the network and then `inspect` the network to verify
the connection:

```
$ docker network connect isolated_nw container2
$ docker network inspect isolated_nw
[
    {
        "Name": "isolated_nw",
        "Id": "06a62f1c73c4e3107c0f555b7a5f163309827bfbbf999840166065a8f35455a8",
        "Scope": "local",
        "Driver": "bridge",
        "IPAM": {
            "Driver": "default",
            "Config": [
                {
                    "Subnet": "172.25.0.0/16",
                    "Gateway": "172.25.0.1/16"
                }
            ]
        },
        "Containers": {
            "90e1f3ec71caf82ae776a827e0712a68a110a3f175954e5bd4222fd142ac9428": {
                "Name": "container2",
                "EndpointID": "11cedac1810e864d6b1589d92da12af66203879ab89f4ccd8c8fdaa9b1c48b1d",
                "MacAddress": "02:42:ac:19:00:02",
                "IPv4Address": "172.25.0.2/16",
                "IPv6Address": ""
            }
        },
        "Options": {}
    }
]
```

You can see that Engine automatically assigns an IP address to `container2`.
Because we specified a `--subnet` when creating the network, Engine picked
an address from that same subnet. Now, start a third container and connect it to
the network on launch using the `docker run` command's `--net` option:

```bash
$ docker run --net=isolated_nw --ip=172.25.3.3 -itd --name=container3 busybox
467a7863c3f0277ef8e661b38427737f28099b61fa55622d6c30fb288d88c551
```

As you can see, you were able to specify the IP address for your container. As
long as the network the container is connecting to was created with a
user-specified subnet, you can select the IPv4 and/or IPv6 address(es) for your
container when executing `docker run` and `docker network connect` commands, by
passing the `--ip` and `--ip6` flags for IPv4 and IPv6 respectively. The
selected IP address is part of the container networking configuration and is
preserved across container restarts. This feature is only available on
user-defined networks, because they guarantee that their subnet configuration
does not change across daemon reloads.
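
The same flags work when connecting an existing container to a network. A minimal
sketch, using a hypothetical network `demo_nw` and container `demo1` (names,
subnet, and address are illustrative):

```bash
$ docker network create --subnet 172.30.0.0/16 demo_nw
$ docker run -itd --name=demo1 busybox
$ docker network connect --ip 172.30.0.10 demo_nw demo1
```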

Now, inspect the network resources used by `container3`.

```bash
$ docker inspect --format='{{json .NetworkSettings.Networks}}'  container3
{"isolated_nw":{"IPAMConfig":{"IPv4Address":"172.25.3.3"},"NetworkID":"1196a4c5af43a21ae38ef34515b6af19236a3fc48122cf585e3f3054d509679b",
"EndpointID":"dffc7ec2915af58cc827d995e6ebdc897342be0420123277103c40ae35579103","Gateway":"172.25.0.1","IPAddress":"172.25.3.3","IPPrefixLen":16,"IPv6Gateway":"","GlobalIPv6Address":"","GlobalIPv6PrefixLen":0,"MacAddress":"02:42:ac:19:03:03"}}
```

Repeat this command for `container2`. If you have Python installed, you can pretty-print the output.

```bash
$ docker inspect --format='{{json .NetworkSettings.Networks}}'  container2 | python -m json.tool
{
    "bridge": {
        "NetworkID":"7ea29fc1412292a2d7bba362f9253545fecdfa8ce9a6e37dd10ba8bee7129812",
        "EndpointID": "0099f9efb5a3727f6a554f176b1e96fca34cae773da68b3b6a26d046c12cb365",
        "Gateway": "172.17.0.1",
        "GlobalIPv6Address": "",
        "GlobalIPv6PrefixLen": 0,
        "IPAMConfig": null,
        "IPAddress": "172.17.0.3",
        "IPPrefixLen": 16,
        "IPv6Gateway": "",
        "MacAddress": "02:42:ac:11:00:03"
    },
    "isolated_nw": {
        "NetworkID":"1196a4c5af43a21ae38ef34515b6af19236a3fc48122cf585e3f3054d509679b",
        "EndpointID": "11cedac1810e864d6b1589d92da12af66203879ab89f4ccd8c8fdaa9b1c48b1d",
        "Gateway": "172.25.0.1",
        "GlobalIPv6Address": "",
        "GlobalIPv6PrefixLen": 0,
        "IPAMConfig": null,
        "IPAddress": "172.25.0.2",
        "IPPrefixLen": 16,
        "IPv6Gateway": "",
        "MacAddress": "02:42:ac:19:00:02"
    }
}
```

You should find `container2` belongs to two networks: the default `bridge`
network, which it joined when you launched it, and the `isolated_nw` network,
which you connected it to later.

![](images/working.png)

In the case of `container3`, you connected it to `isolated_nw` through `docker
run`, so that container is not connected to the default `bridge` network.

Use the `docker attach` command to connect to the running `container2` and
examine its networking stack:

```bash
$ docker attach container2
```

If you look at the container's network stack, you should see two Ethernet
interfaces: one for the default `bridge` network and one for the `isolated_nw`
network.

```bash
/ # ifconfig
eth0      Link encap:Ethernet  HWaddr 02:42:AC:11:00:03
          inet addr:172.17.0.3  Bcast:0.0.0.0  Mask:255.255.0.0
          inet6 addr: fe80::42:acff:fe11:3/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:9001  Metric:1
          RX packets:8 errors:0 dropped:0 overruns:0 frame:0
          TX packets:8 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:648 (648.0 B)  TX bytes:648 (648.0 B)

eth1      Link encap:Ethernet  HWaddr 02:42:AC:19:00:02
          inet addr:172.25.0.2  Bcast:0.0.0.0  Mask:255.255.0.0
          inet6 addr: fe80::42:acff:fe19:2/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:8 errors:0 dropped:0 overruns:0 frame:0
          TX packets:8 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:648 (648.0 B)  TX bytes:648 (648.0 B)

lo        Link encap:Local Loopback
          inet addr:127.0.0.1  Mask:255.0.0.0
          inet6 addr: ::1/128 Scope:Host
          UP LOOPBACK RUNNING  MTU:65536  Metric:1
          RX packets:0 errors:0 dropped:0 overruns:0 frame:0
          TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:0 (0.0 B)  TX bytes:0 (0.0 B)
```

On the user-defined network `isolated_nw`, the Docker embedded DNS server
enables name resolution for the other containers in the network. Inside
`container2`, it is possible to ping `container3` by name.

```bash
/ # ping -w 4 container3
PING container3 (172.25.3.3): 56 data bytes
64 bytes from 172.25.3.3: seq=0 ttl=64 time=0.070 ms
64 bytes from 172.25.3.3: seq=1 ttl=64 time=0.080 ms
64 bytes from 172.25.3.3: seq=2 ttl=64 time=0.080 ms
64 bytes from 172.25.3.3: seq=3 ttl=64 time=0.097 ms

--- container3 ping statistics ---
4 packets transmitted, 4 packets received, 0% packet loss
round-trip min/avg/max = 0.070/0.081/0.097 ms
```

This isn't the case for the default `bridge` network. Both `container2` and
`container1` are connected to the default `bridge` network. Docker does not
support automatic service discovery on this network. For this reason, pinging
`container1` by name fails as you would expect based on the `/etc/hosts` file:

```bash
/ # ping -w 4 container1
ping: bad address 'container1'
```

A ping using the `container1` IP address does succeed though:

```bash
/ # ping -w 4 172.17.0.2
PING 172.17.0.2 (172.17.0.2): 56 data bytes
64 bytes from 172.17.0.2: seq=0 ttl=64 time=0.095 ms
64 bytes from 172.17.0.2: seq=1 ttl=64 time=0.075 ms
64 bytes from 172.17.0.2: seq=2 ttl=64 time=0.072 ms
64 bytes from 172.17.0.2: seq=3 ttl=64 time=0.101 ms

--- 172.17.0.2 ping statistics ---
4 packets transmitted, 4 packets received, 0% packet loss
round-trip min/avg/max = 0.072/0.085/0.101 ms
```

If you wanted, you could connect `container1` to `container2` with the `docker
run --link` command, which would enable the two containers to interact by name
as well as by IP address.

Detach from `container2` and leave it running using `CTRL-p CTRL-q`.

In this example, `container2` is attached to both networks and so can talk to
`container1` and `container3`. But `container3` and `container1` are not in the
same network and cannot communicate. Test this now by attaching to
`container3` and attempting to ping `container1` by IP address.

```bash
$ docker attach container3
/ # ping 172.17.0.2
PING 172.17.0.2 (172.17.0.2): 56 data bytes
^C
--- 172.17.0.2 ping statistics ---
10 packets transmitted, 0 packets received, 100% packet loss

```

You can connect both running and non-running containers to a network. However,
`docker network inspect` only displays information on running containers.
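
For example, a hedged sketch using a hypothetical created-but-not-started container:
it can be connected to `isolated_nw`, but it does not appear in the `Containers`
section of the inspect output until it is running.

```bash
$ docker create --name stopped-busybox busybox top
$ docker network connect isolated_nw stopped-busybox
$ docker network inspect isolated_nw
```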

### Linking containers in user-defined networks

In the above example, `container2` was able to resolve `container3`'s name
automatically in the user-defined network `isolated_nw`, but the name
resolution did not succeed automatically in the default `bridge` network. This
is expected in order to maintain backward compatibility with [legacy
links](default_network/dockerlinks.md).

The *legacy link* provided four major functionalities to the default `bridge`
network:

* name resolution
* name alias for the linked container using `--link=CONTAINER-NAME:ALIAS`
* secured container connectivity (in isolation via `--icc=false`)
* environment variable injection

Comparing these four functionalities with user-defined networks, such as
`isolated_nw` in this example, without any additional configuration,
`docker network` provides:

* automatic name resolution using DNS
* an automatically secured, isolated environment for the containers in a network
* the ability to dynamically attach to and detach from multiple networks
* support for the `--link` option to provide a name alias for the linked container

Continuing with the above example, create another container, `container4`, in
`isolated_nw` with `--link` to provide additional name resolution using aliases
for other containers in the same network.

```bash
$ docker run --net=isolated_nw -itd --name=container4 --link container5:c5 busybox
01b5df970834b77a9eadbaff39051f237957bd35c4c56f11193e0594cfd5117c
```

With the help of `--link`, `container4` will be able to reach `container5` using
the aliased name `c5` as well.

Please note that while creating `container4`, we linked to a container named
`container5` which has not been created yet. That is one of the differences in
behavior between the *legacy link* in the default `bridge` network and the new
*link* functionality in user-defined networks. The *legacy link* is static in
nature: it hard-binds the container with the alias and does not tolerate
linked container restarts. The new *link* functionality in user-defined
networks is dynamic in nature and supports linked container restarts, including
tolerating IP address changes on the linked container.

Now let us launch another container named `container5`, linking `container4`
with the alias `c4`.

```bash
$ docker run --net=isolated_nw -itd --name=container5 --link container4:c4 busybox
72eccf2208336f31e9e33ba327734125af00d1e1d2657878e2ee8154fbb23c7a
```

As expected, `container4` will be able to reach `container5` by both its
container name and its alias `c5`, and `container5` will be able to reach
`container4` by its container name and its alias `c4`.

```bash
$ docker attach container4
/ # ping -w 4 c5
PING c5 (172.25.0.5): 56 data bytes
64 bytes from 172.25.0.5: seq=0 ttl=64 time=0.070 ms
64 bytes from 172.25.0.5: seq=1 ttl=64 time=0.080 ms
64 bytes from 172.25.0.5: seq=2 ttl=64 time=0.080 ms
64 bytes from 172.25.0.5: seq=3 ttl=64 time=0.097 ms

--- c5 ping statistics ---
4 packets transmitted, 4 packets received, 0% packet loss
round-trip min/avg/max = 0.070/0.081/0.097 ms

/ # ping -w 4 container5
PING container5 (172.25.0.5): 56 data bytes
64 bytes from 172.25.0.5: seq=0 ttl=64 time=0.070 ms
64 bytes from 172.25.0.5: seq=1 ttl=64 time=0.080 ms
64 bytes from 172.25.0.5: seq=2 ttl=64 time=0.080 ms
64 bytes from 172.25.0.5: seq=3 ttl=64 time=0.097 ms

--- container5 ping statistics ---
4 packets transmitted, 4 packets received, 0% packet loss
round-trip min/avg/max = 0.070/0.081/0.097 ms
```

```bash
$ docker attach container5
/ # ping -w 4 c4
PING c4 (172.25.0.4): 56 data bytes
64 bytes from 172.25.0.4: seq=0 ttl=64 time=0.065 ms
64 bytes from 172.25.0.4: seq=1 ttl=64 time=0.070 ms
64 bytes from 172.25.0.4: seq=2 ttl=64 time=0.067 ms
64 bytes from 172.25.0.4: seq=3 ttl=64 time=0.082 ms

--- c4 ping statistics ---
4 packets transmitted, 4 packets received, 0% packet loss
round-trip min/avg/max = 0.065/0.070/0.082 ms

/ # ping -w 4 container4
PING container4 (172.25.0.4): 56 data bytes
64 bytes from 172.25.0.4: seq=0 ttl=64 time=0.065 ms
64 bytes from 172.25.0.4: seq=1 ttl=64 time=0.070 ms
64 bytes from 172.25.0.4: seq=2 ttl=64 time=0.067 ms
64 bytes from 172.25.0.4: seq=3 ttl=64 time=0.082 ms

--- container4 ping statistics ---
4 packets transmitted, 4 packets received, 0% packet loss
round-trip min/avg/max = 0.065/0.070/0.082 ms
```

Similar to the legacy link functionality, the new link alias is localized to a
container, and the aliased name has no meaning outside of the container using
the `--link`.

Also, it is important to note that if a container belongs to multiple networks,
the linked alias is scoped within a given network. Hence, containers can be
linked to different aliases in different networks.

Extending the example, let us create another network named `local_alias`:

```bash
$ docker network create -d bridge --subnet 172.26.0.0/24 local_alias
76b7dc932e037589e6553f59f76008e5b76fa069638cd39776b890607f567aaa
```

Then let us connect `container4` and `container5` to the new network `local_alias`:

```
$ docker network connect --link container5:foo local_alias container4
$ docker network connect --link container4:bar local_alias container5
```

```bash
$ docker attach container4

/ # ping -w 4 foo
PING foo (172.26.0.3): 56 data bytes
64 bytes from 172.26.0.3: seq=0 ttl=64 time=0.070 ms
64 bytes from 172.26.0.3: seq=1 ttl=64 time=0.080 ms
64 bytes from 172.26.0.3: seq=2 ttl=64 time=0.080 ms
64 bytes from 172.26.0.3: seq=3 ttl=64 time=0.097 ms

--- foo ping statistics ---
4 packets transmitted, 4 packets received, 0% packet loss
round-trip min/avg/max = 0.070/0.081/0.097 ms

/ # ping -w 4 c5
PING c5 (172.25.0.5): 56 data bytes
64 bytes from 172.25.0.5: seq=0 ttl=64 time=0.070 ms
64 bytes from 172.25.0.5: seq=1 ttl=64 time=0.080 ms
64 bytes from 172.25.0.5: seq=2 ttl=64 time=0.080 ms
64 bytes from 172.25.0.5: seq=3 ttl=64 time=0.097 ms

--- c5 ping statistics ---
4 packets transmitted, 4 packets received, 0% packet loss
round-trip min/avg/max = 0.070/0.081/0.097 ms
```

Note that the ping succeeds for both aliases, but on different networks. Let
us conclude this section by disconnecting `container5` from `isolated_nw`
and observing the results:

```
$ docker network disconnect isolated_nw container5

$ docker attach container4

/ # ping -w 4 c5
ping: bad address 'c5'

/ # ping -w 4 foo
PING foo (172.26.0.3): 56 data bytes
64 bytes from 172.26.0.3: seq=0 ttl=64 time=0.070 ms
64 bytes from 172.26.0.3: seq=1 ttl=64 time=0.080 ms
64 bytes from 172.26.0.3: seq=2 ttl=64 time=0.080 ms
64 bytes from 172.26.0.3: seq=3 ttl=64 time=0.097 ms

--- foo ping statistics ---
4 packets transmitted, 4 packets received, 0% packet loss
round-trip min/avg/max = 0.070/0.081/0.097 ms

```

In conclusion, the new link functionality in user-defined networks provides all
the benefits of legacy links while avoiding most of the well-known issues with
*legacy links*.

One notable missing functionality compared to *legacy links* is the injection
of environment variables. Though very useful, environment variable injection is
static in nature: the variables must be injected when the container is started.
One cannot inject environment variables into a running container without
significant effort, and hence it is not compatible with `docker network`, which
provides a dynamic way to connect/disconnect containers to/from a network.
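
For reference, a minimal sketch of the environment variable injection that *legacy
links* perform on the default `bridge` network (container names are illustrative,
and the exact values depend on your setup):

```bash
$ docker run -d --name legacy-redis redis
$ docker run --rm --link legacy-redis:redis busybox env | grep REDIS
# Expect variables such as REDIS_NAME, REDIS_PORT, and
# REDIS_PORT_6379_TCP_ADDR describing the linked container.
```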

### Network-scoped alias

While *links* provide private name resolution that is localized within a
container, the network-scoped alias provides a way for a container to be
discovered by an alternate name by any other container within the scope of a
particular network. Unlike the *link* alias, which is defined by the consumer
of a service, the network-scoped alias is defined by the container that is
offering the service to the network.

Continuing with the above example, create another container in `isolated_nw`
with a network alias.

```bash
$ docker run --net=isolated_nw -itd --name=container6 --net-alias app busybox
8ebe6767c1e0361f27433090060b33200aac054a68476c3be87ef4005eb1df17
```

```bash
$ docker attach container4
/ # ping -w 4 app
PING app (172.25.0.6): 56 data bytes
64 bytes from 172.25.0.6: seq=0 ttl=64 time=0.070 ms
64 bytes from 172.25.0.6: seq=1 ttl=64 time=0.080 ms
64 bytes from 172.25.0.6: seq=2 ttl=64 time=0.080 ms
64 bytes from 172.25.0.6: seq=3 ttl=64 time=0.097 ms

--- app ping statistics ---
4 packets transmitted, 4 packets received, 0% packet loss
round-trip min/avg/max = 0.070/0.081/0.097 ms

/ # ping -w 4 container6
PING container6 (172.25.0.6): 56 data bytes
64 bytes from 172.25.0.6: seq=0 ttl=64 time=0.070 ms
64 bytes from 172.25.0.6: seq=1 ttl=64 time=0.080 ms
64 bytes from 172.25.0.6: seq=2 ttl=64 time=0.080 ms
64 bytes from 172.25.0.6: seq=3 ttl=64 time=0.097 ms

--- container6 ping statistics ---
4 packets transmitted, 4 packets received, 0% packet loss
round-trip min/avg/max = 0.070/0.081/0.097 ms
```

Now let us connect `container6` to the `local_alias` network with a different
network-scoped alias.

```bash
$ docker network connect --alias scoped-app local_alias container6
```

`container6` in this example is now aliased as `app` in network `isolated_nw`
and as `scoped-app` in network `local_alias`.

Let's try to reach these aliases from `container4` (which is connected to both
these networks) and `container5` (which is connected only to `isolated_nw`).

```bash
$ docker attach container4

/ # ping -w 4 scoped-app
PING scoped-app (172.26.0.5): 56 data bytes
64 bytes from 172.26.0.5: seq=0 ttl=64 time=0.070 ms
64 bytes from 172.26.0.5: seq=1 ttl=64 time=0.080 ms
64 bytes from 172.26.0.5: seq=2 ttl=64 time=0.080 ms
64 bytes from 172.26.0.5: seq=3 ttl=64 time=0.097 ms

--- scoped-app ping statistics ---
4 packets transmitted, 4 packets received, 0% packet loss
round-trip min/avg/max = 0.070/0.081/0.097 ms

$ docker attach container5

/ # ping -w 4 scoped-app
ping: bad address 'scoped-app'

```

As you can see, the alias is scoped to the network it is defined on, and hence
only those containers that are connected to that network can access the alias.

In addition to the above features, multiple containers can share the same
network-scoped alias within the same network. For example, let's launch
`container7` in `isolated_nw` with the same alias as `container6`:

```bash
$ docker run --net=isolated_nw -itd --name=container7 --net-alias app busybox
3138c678c123b8799f4c7cc6a0cecc595acbdfa8bf81f621834103cd4f504554
```

When multiple containers share the same alias, name resolution for that alias
happens against one of the containers (typically the first container that is
aliased). When the container that backs the alias goes down or is disconnected
from the network, the next container that backs the alias will be resolved.

Let us ping the alias `app` from `container4` and bring down `container6` to
verify that `container7` is resolving the `app` alias.

```bash
$ docker attach container4
/ # ping -w 4 app
PING app (172.25.0.6): 56 data bytes
64 bytes from 172.25.0.6: seq=0 ttl=64 time=0.070 ms
64 bytes from 172.25.0.6: seq=1 ttl=64 time=0.080 ms
64 bytes from 172.25.0.6: seq=2 ttl=64 time=0.080 ms
64 bytes from 172.25.0.6: seq=3 ttl=64 time=0.097 ms

--- app ping statistics ---
4 packets transmitted, 4 packets received, 0% packet loss
round-trip min/avg/max = 0.070/0.081/0.097 ms

$ docker stop container6

$ docker attach container4
/ # ping -w 4 app
PING app (172.25.0.7): 56 data bytes
64 bytes from 172.25.0.7: seq=0 ttl=64 time=0.095 ms
64 bytes from 172.25.0.7: seq=1 ttl=64 time=0.075 ms
64 bytes from 172.25.0.7: seq=2 ttl=64 time=0.072 ms
64 bytes from 172.25.0.7: seq=3 ttl=64 time=0.101 ms

--- app ping statistics ---
4 packets transmitted, 4 packets received, 0% packet loss
round-trip min/avg/max = 0.072/0.085/0.101 ms

```

## Disconnecting containers

You can disconnect a container from a network using the `docker network
disconnect` command.

```bash
$ docker network disconnect isolated_nw container2

$ docker inspect --format='{{json .NetworkSettings.Networks}}'  container2 | python -m json.tool
{
    "bridge": {
        "NetworkID":"7ea29fc1412292a2d7bba362f9253545fecdfa8ce9a6e37dd10ba8bee7129812",
        "EndpointID": "9e4575f7f61c0f9d69317b7a4b92eefc133347836dd83ef65deffa16b9985dc0",
        "Gateway": "172.17.0.1",
        "GlobalIPv6Address": "",
        "GlobalIPv6PrefixLen": 0,
        "IPAddress": "172.17.0.3",
        "IPPrefixLen": 16,
        "IPv6Gateway": "",
        "MacAddress": "02:42:ac:11:00:03"
    }
}


$ docker network inspect isolated_nw
[
    {
        "Name": "isolated_nw",
        "Id": "06a62f1c73c4e3107c0f555b7a5f163309827bfbbf999840166065a8f35455a8",
        "Scope": "local",
        "Driver": "bridge",
        "IPAM": {
            "Driver": "default",
            "Config": [
                {
                    "Subnet": "172.25.0.0/16",
                    "Gateway": "172.25.0.1/16"
                }
            ]
        },
        "Containers": {
            "467a7863c3f0277ef8e661b38427737f28099b61fa55622d6c30fb288d88c551": {
                "Name": "container3",
                "EndpointID": "dffc7ec2915af58cc827d995e6ebdc897342be0420123277103c40ae35579103",
                "MacAddress": "02:42:ac:19:03:03",
                "IPv4Address": "172.25.3.3/16",
                "IPv6Address": ""
            }
        },
        "Options": {}
    }
]
```

Once a container is disconnected from a network, it cannot communicate with
other containers connected to that network. In this example, `container2` can
no longer talk to `container3` on the `isolated_nw` network.

```bash
$ docker attach container2

/ # ifconfig
eth0      Link encap:Ethernet  HWaddr 02:42:AC:11:00:03
          inet addr:172.17.0.3  Bcast:0.0.0.0  Mask:255.255.0.0
          inet6 addr: fe80::42:acff:fe11:3/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:9001  Metric:1
          RX packets:8 errors:0 dropped:0 overruns:0 frame:0
          TX packets:8 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:648 (648.0 B)  TX bytes:648 (648.0 B)

lo        Link encap:Local Loopback
          inet addr:127.0.0.1  Mask:255.0.0.0
          inet6 addr: ::1/128 Scope:Host
          UP LOOPBACK RUNNING  MTU:65536  Metric:1
          RX packets:0 errors:0 dropped:0 overruns:0 frame:0
          TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:0 (0.0 B)  TX bytes:0 (0.0 B)

/ # ping container3
PING container3 (172.25.3.3): 56 data bytes
^C
--- container3 ping statistics ---
2 packets transmitted, 0 packets received, 100% packet loss
```

`container2` still has full connectivity to the default `bridge` network:

```bash
/ # ping container1
PING container1 (172.17.0.2): 56 data bytes
64 bytes from 172.17.0.2: seq=0 ttl=64 time=0.119 ms
64 bytes from 172.17.0.2: seq=1 ttl=64 time=0.174 ms
^C
--- container1 ping statistics ---
2 packets transmitted, 2 packets received, 0% packet loss
round-trip min/avg/max = 0.119/0.146/0.174 ms
/ #
```

There are certain scenarios, such as ungraceful Docker daemon restarts in a
multi-host network, where the daemon is unable to clean up stale connectivity
endpoints. Such stale endpoints may cause an error `container already connected
to network` when a new container is connected to that network with the same
name as the stale endpoint. To clean up these stale endpoints, first remove the
container and force-disconnect (`docker network disconnect -f`) the endpoint
from the network. Once the endpoint is cleaned up, the container can be
connected to the network.

```bash
$ docker run -d --name redis_db --net multihost redis
ERROR: Cannot start container bc0b19c089978f7845633027aa3435624ca3d12dd4f4f764b61eac4c0610f32e: container already connected to network multihost

$ docker rm -f redis_db
$ docker network disconnect -f multihost redis_db

$ docker run -d --name redis_db --net multihost redis
7d986da974aeea5e9f7aca7e510bdb216d58682faa83a9040c2f2adc0544795a
```

## Remove a network

When all the containers in a network are stopped or disconnected, you can
remove a network.

```bash
$ docker network disconnect isolated_nw container3
```

```bash
$ docker network inspect isolated_nw
[
    {
        "Name": "isolated_nw",
        "Id": "06a62f1c73c4e3107c0f555b7a5f163309827bfbbf999840166065a8f35455a8",
        "Scope": "local",
        "Driver": "bridge",
        "IPAM": {
            "Driver": "default",
            "Config": [
                {
                    "Subnet": "172.25.0.0/16",
                    "Gateway": "172.25.0.1/16"
                }
            ]
        },
        "Containers": {},
        "Options": {}
    }
]

$ docker network rm isolated_nw
```

List all your networks to verify that `isolated_nw` was removed:

```bash
$ docker network ls
NETWORK ID          NAME                DRIVER
72314fa53006        host                host
f7ab26d71dbd        bridge              bridge
0f32e83e61ac        none                null
```
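
If you have been following along, you may also want to remove the other example
networks created in this guide. A hedged sketch; each `docker network rm` fails
with an error if the network still has running containers attached, so stop or
disconnect those containers first:

```bash
$ docker network rm simple-network
$ docker network rm my-network
$ docker network rm local_alias
```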

## Related information

* [network create](../../reference/commandline/network_create.md)
* [network inspect](../../reference/commandline/network_inspect.md)
* [network connect](../../reference/commandline/network_connect.md)
* [network disconnect](../../reference/commandline/network_disconnect.md)
* [network ls](../../reference/commandline/network_ls.md)
* [network rm](../../reference/commandline/network_rm.md)