
page_title: Network Configuration
page_description: Docker networking
page_keywords: network, networking, bridge, docker, documentation

# Network Configuration

## TL;DR

When Docker starts, it creates a virtual interface named `docker0` on
the host machine.  It randomly chooses an address and subnet from the
private range defined by [RFC 1918](http://tools.ietf.org/html/rfc1918)
that is not in use on the host machine, and assigns it to `docker0`.
Docker made the choice `172.17.42.1/16` when I started it a few minutes
ago, for example — a 16-bit netmask providing 65,534 addresses for the
host machine and its containers. Each container's MAC address is
generated from the IP address allocated to it, to avoid ARP collisions,
using the range `02:42:ac:11:00:00` to `02:42:ac:11:ff:ff`.

> **Note:**
> This document discusses advanced networking configuration
> and options for Docker. In most cases you won't need this information.
> If you're looking to get started with a simpler explanation of Docker
> networking and an introduction to the concept of container linking,
> see the [Docker User Guide](/userguide/dockerlinks/).

But `docker0` is no ordinary interface.  It is a virtual *Ethernet
bridge* that automatically forwards packets between any other network
interfaces that are attached to it.  This lets containers communicate
both with the host machine and with each other.  Every time Docker
creates a container, it creates a pair of “peer” interfaces that are
like opposite ends of a pipe — a packet sent on one will be received on
the other.  It gives one of the peers to the container to become its
`eth0` interface and keeps the other peer, with a unique name like
`vethAQI2QT`, out in the namespace of the host machine.  By binding
every `veth*` interface to the `docker0` bridge, Docker creates a
virtual subnet shared between the host machine and every Docker
container.
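
You can observe this pairing from the host side. These commands are
illustrative: the `veth*` names and addresses will differ on your
machine, and `brctl` comes from the bridge-utils package.

```shell
# Show the docker0 bridge and its address on the host.
ip addr show docker0

# List the veth* peer interfaces currently bound to the bridge.
brctl show docker0
```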

The remaining sections of this document explain all of the ways that you
can use Docker options and — in advanced cases — raw Linux networking
commands to tweak, supplement, or entirely replace Docker's default
networking configuration.

## Quick Guide to the Options

Here is a quick list of the networking-related Docker command-line
options, in case it helps you find the section below that you are
looking for.

Some networking command-line options can only be supplied to the Docker
server when it starts up, and cannot be changed once it is running:

 *  `-b BRIDGE` or `--bridge=BRIDGE` — see
    [Building your own bridge](#bridge-building)

 *  `--bip=CIDR` — see
    [Customizing docker0](#docker0)

 *  `--fixed-cidr` — see
    [Customizing docker0](#docker0)

 *  `--fixed-cidr-v6` — see
    [IPv6](#ipv6)

 *  `-H SOCKET...` or `--host=SOCKET...` —
    This might sound like it would affect container networking,
    but it actually faces in the other direction:
    it tells the Docker server over what channels
    it should be willing to receive commands
    like “run container” and “stop container.”

 *  `--icc=true|false` — see
    [Communication between containers](#between-containers)

 *  `--ip=IP_ADDRESS` — see
    [Binding container ports](#binding-ports)

 *  `--ipv6=true|false` — see
    [IPv6](#ipv6)

 *  `--ip-forward=true|false` — see
    [Communication between containers and the wider world](#the-world)

 *  `--iptables=true|false` — see
    [Communication between containers](#between-containers)

 *  `--mtu=BYTES` — see
    [Customizing docker0](#docker0)

There are two networking options that can be supplied either at startup
or when `docker run` is invoked.  When provided at startup, they set the
default value that `docker run` will later use if the options are not
specified:

 *  `--dns=IP_ADDRESS...` — see
    [Configuring DNS](#dns)

 *  `--dns-search=DOMAIN...` — see
    [Configuring DNS](#dns)

Finally, several networking options can only be provided when calling
`docker run` because they specify something specific to one container:

 *  `-h HOSTNAME` or `--hostname=HOSTNAME` — see
    [Configuring DNS](#dns) and
    [How Docker networks a container](#container-networking)

 *  `--link=CONTAINER_NAME_or_ID:ALIAS` — see
    [Configuring DNS](#dns) and
    [Communication between containers](#between-containers)

 *  `--net=bridge|none|container:NAME_or_ID|host` — see
    [How Docker networks a container](#container-networking)

 *  `--mac-address=MACADDRESS...` — see
    [How Docker networks a container](#container-networking)

 *  `-p SPEC` or `--publish=SPEC` — see
    [Binding container ports](#binding-ports)

 *  `-P` or `--publish-all=true|false` — see
    [Binding container ports](#binding-ports)

The following sections tackle all of the above topics in an order that
moves roughly from simplest to most complex.

## Configuring DNS

<a name="dns"></a>

How can Docker supply each container with a hostname and DNS
configuration, without having to build a custom image with the hostname
written inside?  Its trick is to overlay three crucial `/etc` files
inside the container with virtual files where it can write fresh
information.  You can see this by running `mount` inside a container:

    $ mount
    ...
    /dev/disk/by-uuid/1fec...ebdf on /etc/hostname type ext4 ...
    /dev/disk/by-uuid/1fec...ebdf on /etc/hosts type ext4 ...
    /dev/disk/by-uuid/1fec...ebdf on /etc/resolv.conf type ext4 ...
    ...

This arrangement allows Docker to do clever things like keep
`resolv.conf` up to date across all containers when the host machine
receives new configuration over DHCP later.  The exact details of how
Docker maintains these files inside the container can change from one
Docker version to the next, so you should leave the files themselves
alone and use the following Docker options instead.

Four different options affect container domain name services.

 *  `-h HOSTNAME` or `--hostname=HOSTNAME` — sets the hostname by which
    the container knows itself.  This is written into `/etc/hostname`,
    into `/etc/hosts` as the name of the container's host-facing IP
    address, and is the name that `/bin/bash` inside the container will
    display inside its prompt.  But the hostname is not easy to see from
    outside the container.  It will not appear in `docker ps` nor in the
    `/etc/hosts` file of any other container.

 *  `--link=CONTAINER_NAME_or_ID:ALIAS` — using this option as you `run` a
    container gives the new container's `/etc/hosts` an extra entry
    named `ALIAS` that points to the IP address of the container identified by
    `CONTAINER_NAME_or_ID`.  This lets processes inside the new container
    connect to the hostname `ALIAS` without having to know its IP.  The
    `--link=` option is discussed in more detail below, in the section
    [Communication between containers](#between-containers). Because
    Docker may assign a different IP address to the linked containers
    on restart, Docker updates the `ALIAS` entry in the `/etc/hosts` file
    of the recipient containers.

 *  `--dns=IP_ADDRESS...` — sets the IP addresses added as `nameserver`
    lines to the container's `/etc/resolv.conf` file.  Processes in the
    container, when confronted with a hostname not in `/etc/hosts`, will
    connect to these IP addresses on port 53 looking for name resolution
    services.

 *  `--dns-search=DOMAIN...` — sets the domain names that are searched
    when a bare unqualified hostname is used inside of the container, by
    writing `search` lines into the container's `/etc/resolv.conf`.
    When a container process attempts to access `host` and the search
    domain `example.com` is set, for instance, the DNS logic will not
    only look up `host` but also `host.example.com`.
    Use `--dns-search=.` if you don't wish to set the search domain.
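
For example, both DNS options can be given on one `docker run`; this
sketch assumes an `ubuntu` image is available and simply prints the
file Docker generated:

```shell
# Start a throwaway container with a custom resolver and search
# domain, and show the /etc/resolv.conf Docker wrote for it.
docker run --rm --dns=8.8.8.8 --dns-search=example.com ubuntu \
    cat /etc/resolv.conf
```

The output should contain a `nameserver 8.8.8.8` line and a
`search example.com` line.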

Regarding DNS settings, in the absence of either the `--dns=IP_ADDRESS...`
or the `--dns-search=DOMAIN...` option, Docker makes each container's
`/etc/resolv.conf` look like the `/etc/resolv.conf` of the host machine (where
the `docker` daemon runs).  When creating the container's `/etc/resolv.conf`,
the daemon filters out all localhost IP address `nameserver` entries from
the host's original file.

Filtering is necessary because all localhost addresses on the host are
unreachable from the container's network.  After this filtering, if there
are no more `nameserver` entries left in the container's `/etc/resolv.conf`
file, the daemon adds public Google DNS nameservers
(8.8.8.8 and 8.8.4.4) to the container's DNS configuration.  If IPv6 is
enabled on the daemon, the public IPv6 Google DNS nameservers will also
be added (2001:4860:4860::8888 and 2001:4860:4860::8844).

> **Note**:
> If you need access to a host's localhost resolver, you must modify your
> DNS service on the host to listen on a non-localhost address that is
> reachable from within the container.

You might wonder what happens when the host machine's
`/etc/resolv.conf` file changes.  The `docker` daemon has a file change
notifier active which will watch for changes to the host DNS configuration.

> **Note**:
> The file change notifier relies on the Linux kernel's inotify feature.
> Because this feature is currently incompatible with the overlay filesystem
> driver, a Docker daemon using "overlay" will not be able to take advantage
> of the `/etc/resolv.conf` auto-update feature.

When the host file changes, all stopped containers whose `resolv.conf`
matches the host's will be updated immediately to the newest host
configuration.  Containers which are running when the host configuration
changes will need to be stopped and started to pick up the host changes,
because there is no facility to ensure atomic writes of the `resolv.conf`
file while the container is running.  If the container's `resolv.conf` has
been edited since it was started with the default configuration, no
replacement will be attempted, as that would overwrite the changes performed
by the container.  If the options (`--dns` or `--dns-search`) have been used
to modify the default host configuration, then the replacement with an
updated host's `/etc/resolv.conf` will not happen either.

> **Note**:
> For containers which were created prior to the implementation of
> the `/etc/resolv.conf` update feature in Docker 1.5.0: those
> containers will **not** receive updates when the host `resolv.conf`
> file changes. Only containers created with Docker 1.5.0 and above
> will utilize this auto-update feature.

## Communication between containers and the wider world

<a name="the-world"></a>

Whether a container can talk to the world is governed by two factors.

1.  Is the host machine willing to forward IP packets?  This is governed
    by the `ip_forward` system parameter.  Packets can only pass between
    containers if this parameter is `1`.  Usually you will simply leave
    the Docker server at its default setting `--ip-forward=true` and
    Docker will set `ip_forward` to `1` for you when the server
    starts up. To check the setting or turn it on manually:

        $ sysctl net.ipv4.conf.all.forwarding
        net.ipv4.conf.all.forwarding = 0
        $ sysctl net.ipv4.conf.all.forwarding=1
        $ sysctl net.ipv4.conf.all.forwarding
        net.ipv4.conf.all.forwarding = 1

    Many using Docker will want `ip_forward` to be on, to at
    least make communication *possible* between containers and
    the wider world.

    `ip_forward` may also be needed for inter-container communication
    if you are in a multiple-bridge setup.

2.  Do your `iptables` allow this particular connection? Docker will
    never make changes to your system `iptables` rules if you set
    `--iptables=false` when the daemon starts.  Otherwise the Docker
    server will append forwarding rules to the `DOCKER` filter chain.

Docker will not delete or modify any pre-existing rules from the `DOCKER`
filter chain. This allows the user to create in advance any rules required
to further restrict access to the containers.

Docker's forward rules permit all external source IPs by default. To allow
only a specific IP or network to access the containers, insert a negated
rule at the top of the `DOCKER` filter chain. For example, to restrict
external access such that *only* source IP 8.8.8.8 can access the
containers, the following rule could be added:

    $ iptables -I DOCKER -i ext_if ! -s 8.8.8.8 -j DROP

## Communication between containers

<a name="between-containers"></a>

Whether two containers can communicate is governed, at the operating
system level, by two factors.

1.  Does the network topology even connect the containers' network
    interfaces?  By default Docker will attach all containers to a
    single `docker0` bridge, providing a path for packets to travel
    between them.  See the later sections of this document for other
    possible topologies.

2.  Do your `iptables` allow this particular connection? Docker will never
    make changes to your system `iptables` rules if you set
    `--iptables=false` when the daemon starts.  Otherwise the Docker server
    will add a default rule to the `FORWARD` chain with a blanket `ACCEPT`
    policy if you retain the default `--icc=true`, or else will set the
    policy to `DROP` if `--icc=false`.

It is a strategic question whether to leave `--icc=true` or change it to
`--icc=false` (on Ubuntu, by editing the `DOCKER_OPTS` variable in
`/etc/default/docker` and restarting the Docker server) so that
`iptables` will protect other containers — and the main host — from
having arbitrary ports probed or accessed by a container that gets
compromised.
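
On Ubuntu, that change could look like the following line in
`/etc/default/docker` (a sketch; merge it with any options you already
pass to the daemon):

```shell
# /etc/default/docker
# Forbid inter-container communication; Docker keeps managing iptables.
DOCKER_OPTS="--icc=false --iptables=true"
```

Restart the Docker server afterwards so the new policy takes effect.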

If you choose the most secure setting of `--icc=false`, then how can
containers communicate in those cases where you *want* them to provide
each other services?

The answer is the `--link=CONTAINER_NAME_or_ID:ALIAS` option, which was
mentioned in the previous section because of its effect upon name
services.  If the Docker daemon is running with both `--icc=false` and
`--iptables=true` then, when it sees `docker run` invoked with the
`--link=` option, the Docker server will insert a pair of `iptables`
`ACCEPT` rules so that the new container can connect to the ports
exposed by the other container — the ports that it mentioned in the
`EXPOSE` lines of its `Dockerfile`.  Docker has more documentation on
this subject — see the [linking Docker containers](/userguide/dockerlinks)
page for further details.

> **Note**:
> The value `CONTAINER_NAME` in `--link=` must either be an
> auto-assigned Docker name like `stupefied_pare` or else the name you
> assigned with `--name=` when you ran `docker run`.  It cannot be a
> hostname, which Docker will not recognize in the context of the
> `--link=` option.

You can run the `iptables` command on your Docker host to see whether
the `FORWARD` chain has a default policy of `ACCEPT` or `DROP`:

    # When --icc=false, you should see a DROP rule:

    $ sudo iptables -L -n
    ...
    Chain FORWARD (policy ACCEPT)
    target     prot opt source               destination
    DOCKER     all  --  0.0.0.0/0            0.0.0.0/0
    DROP       all  --  0.0.0.0/0            0.0.0.0/0
    ...

    # When a --link= has been created under --icc=false,
    # you should see port-specific ACCEPT rules overriding
    # the subsequent DROP rule for all other packets:

    $ sudo iptables -L -n
    ...
    Chain FORWARD (policy ACCEPT)
    target     prot opt source               destination
    DOCKER     all  --  0.0.0.0/0            0.0.0.0/0
    DROP       all  --  0.0.0.0/0            0.0.0.0/0

    Chain DOCKER (1 references)
    target     prot opt source               destination
    ACCEPT     tcp  --  172.17.0.2           172.17.0.3           tcp spt:80
    ACCEPT     tcp  --  172.17.0.3           172.17.0.2           tcp dpt:80

> **Note**:
> Docker is careful that its host-wide `iptables` rules fully expose
> containers to each other's raw IP addresses, so connections from one
> container to another should always appear to be originating from the
> first container's own IP address.

## Binding container ports to the host

<a name="binding-ports"></a>

By default Docker containers can make connections to the outside world,
but the outside world cannot connect to containers.  Each outgoing
connection will appear to originate from one of the host machine's own
IP addresses thanks to an `iptables` masquerading rule on the host
machine that the Docker server creates when it starts:

    # You can see that the Docker server creates a
    # masquerade rule that lets containers connect
    # to IP addresses in the outside world:

    $ sudo iptables -t nat -L -n
    ...
    Chain POSTROUTING (policy ACCEPT)
    target     prot opt source               destination
    MASQUERADE  all  --  172.17.0.0/16       !172.17.0.0/16
    ...

But if you want containers to accept incoming connections, you will need
to provide special options when invoking `docker run`.  These options
are covered in more detail in the [Docker User Guide](/userguide/dockerlinks)
page.  There are two approaches.

First, you can supply `-P` or `--publish-all=true|false` to `docker run`, which
is a blanket operation that identifies every port with an `EXPOSE` line in the
image's `Dockerfile` or an `--expose <port>` command-line flag and maps it to a
host port somewhere within an *ephemeral port range*. The `docker port` command
can then be used to inspect the created mappings. The *ephemeral port range* is
configured by the `/proc/sys/net/ipv4/ip_local_port_range` kernel parameter, and
typically ranges from 32768 to 61000.
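
You can read the range in effect on your host straight from that kernel
parameter; it prints the lower and upper bound of the range:

```shell
# Print the ephemeral port range the kernel allocates from.
cat /proc/sys/net/ipv4/ip_local_port_range
```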

A mapping can be specified explicitly using the `-p SPEC` or `--publish=SPEC`
option.  It lets you specify which port on the Docker host (which can be any
port at all, not just one within the *ephemeral port range*) you want mapped
to which port in the container.
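
As a sketch, an explicit mapping and its inspection could look like this
(`my_image` is a placeholder for an image that listens on port 80, and
`web` is a container name you choose with `--name`):

```shell
# Map host port 8080 to container port 80 for this one container.
docker run -d --name web -p 8080:80 my_image

# Ask Docker what the mapping for container port 80 is.
docker port web 80
```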

Either way, you should be able to peek at what Docker has accomplished
in your network stack by examining your NAT tables.

    # What your NAT rules might look like when Docker
    # is finished setting up a -P forward:

    $ iptables -t nat -L -n
    ...
    Chain DOCKER (2 references)
    target     prot opt source               destination
    DNAT       tcp  --  0.0.0.0/0            0.0.0.0/0            tcp dpt:49153 to:172.17.0.2:80

    # What your NAT rules might look like when Docker
    # is finished setting up a -p 80:80 forward:

    Chain DOCKER (2 references)
    target     prot opt source               destination
    DNAT       tcp  --  0.0.0.0/0            0.0.0.0/0            tcp dpt:80 to:172.17.0.2:80

You can see that Docker has exposed these container ports on `0.0.0.0`,
the wildcard IP address that will match any possible incoming port on
the host machine.  If you want to be more restrictive and only allow
container services to be contacted through a specific external interface
on the host machine, you have two choices.  When you invoke `docker run`
you can use either `-p IP:host_port:container_port` or `-p IP::port` to
specify the external interface for one particular binding.

Or if you always want Docker port forwards to bind to one specific IP
address, you can edit your system-wide Docker server settings (on
Ubuntu, by editing `DOCKER_OPTS` in `/etc/default/docker`) and add the
option `--ip=IP_ADDRESS`.  Remember to restart your Docker server after
editing this setting.
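
As a sketch of the first choice, restricting one binding to the loopback
interface looks like this (`my_image` is again a placeholder):

```shell
# Only connections arriving on 127.0.0.1:8080 reach container port 80.
docker run -d -p 127.0.0.1:8080:80 my_image
```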

Again, this topic is covered without all of these low-level networking
details in the [Docker User Guide](/userguide/dockerlinks/) document if you
would like to use that as your port redirection reference instead.

## IPv6

<a name="ipv6"></a>

As we are [running out of IPv4 addresses](http://en.wikipedia.org/wiki/IPv4_address_exhaustion),
the IETF has standardized an IPv4 successor, [Internet Protocol Version 6](http://en.wikipedia.org/wiki/IPv6),
in [RFC 2460](https://www.ietf.org/rfc/rfc2460.txt). Both protocols, IPv4 and
IPv6, reside on layer 3 of the [OSI model](http://en.wikipedia.org/wiki/OSI_model).

### IPv6 with Docker

By default, the Docker server configures the container network for IPv4 only.
You can enable IPv4/IPv6 dualstack support by running the Docker daemon with the
`--ipv6` flag. Docker will set up the bridge `docker0` with the IPv6
[link-local address](http://en.wikipedia.org/wiki/Link-local_address) `fe80::1`.

By default, containers that are created will only get a link-local IPv6 address.
To assign globally routable IPv6 addresses to your containers you have to
specify an IPv6 subnet to pick the addresses from. Set the IPv6 subnet via the
`--fixed-cidr-v6` parameter when starting the Docker daemon:

    docker -d --ipv6 --fixed-cidr-v6="2001:db8:1::/64"

The subnet for Docker containers should at least have a size of `/80`. This way
an IPv6 address can end with the container's MAC address and you prevent NDP
neighbor cache invalidation issues in the Docker layer.

With the `--fixed-cidr-v6` parameter set, Docker will add a new route to the
routing table, and IPv6 forwarding will be enabled (you may prevent this by
starting the Docker daemon with `--ip-forward=false`):

    $ ip -6 route add 2001:db8:1::/64 dev docker0
    $ sysctl net.ipv6.conf.default.forwarding=1
    $ sysctl net.ipv6.conf.all.forwarding=1

All traffic to the subnet `2001:db8:1::/64` will now be routed
via the `docker0` interface.

Be aware that IPv6 forwarding may interfere with your existing IPv6
configuration: If you are using Router Advertisements to get IPv6 settings for
your host's interfaces, you should set `accept_ra` to `2`. Otherwise, enabling
IPv6 forwarding will result in rejecting Router Advertisements. E.g., if you
want to configure `eth0` via Router Advertisements you should set:

    $ sysctl net.ipv6.conf.eth0.accept_ra=2

![](/article-img/ipv6_basic_host_config.svg)

Every new container will get an IPv6 address from the defined subnet, and a
default route will be added via the gateway `fe80::1` on `eth0`:

    docker run -it ubuntu bash -c "ip -6 addr show dev eth0; ip -6 route show"

    15: eth0: <BROADCAST,UP,LOWER_UP> mtu 1500
       inet6 2001:db8:1:0:0:242:ac11:3/64 scope global
          valid_lft forever preferred_lft forever
       inet6 fe80::42:acff:fe11:3/64 scope link
          valid_lft forever preferred_lft forever

    2001:db8:1::/64 dev eth0  proto kernel  metric 256
    fe80::/64 dev eth0  proto kernel  metric 256
    default via fe80::1 dev eth0  metric 1024

In this example the Docker container is assigned a link-local address with the
network suffix `/64` (here: `fe80::42:acff:fe11:3/64`) and a globally routable
IPv6 address (here: `2001:db8:1:0:0:242:ac11:3/64`). The container will create
connections to addresses outside of the `2001:db8:1::/64` network via the
link-local gateway at `fe80::1` on `eth0`.

Often servers or virtual machines get a `/64` IPv6 subnet assigned (e.g.
`2001:db8:23:42::/64`). In this case you can split it up further and provide
Docker a `/80` subnet while using a separate `/80` subnet for other
applications on the host:

![](/article-img/ipv6_slash64_subnet_config.svg)

In this setup the subnet `2001:db8:23:42::/80` with a range from `2001:db8:23:42:0:0:0:0`
to `2001:db8:23:42:0:ffff:ffff:ffff` is attached to `eth0`, with the host listening
at `2001:db8:23:42::1`. The subnet `2001:db8:23:42:1::/80` with an address range from
`2001:db8:23:42:1:0:0:0` to `2001:db8:23:42:1:ffff:ffff:ffff` is attached to
`docker0` and will be used by containers.
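
Continuing with the addresses above, the daemon invocation for this
split could be (adapt the prefix to your own assignment):

```shell
# Hand Docker the second /80 carved out of the host's /64.
docker -d --ipv6 --fixed-cidr-v6="2001:db8:23:42:1::/80"
```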

#### Using NDP proxying

If your Docker host is only part of an IPv6 subnet but does not have an IPv6
subnet of its own assigned, you can use NDP proxying to connect your containers
via IPv6 to the internet.
For example your host has the IPv6 address `2001:db8::c001`, is part of the
subnet `2001:db8::/64` and your IaaS provider allows you to configure the IPv6
addresses `2001:db8::c000` to `2001:db8::c00f`:

    $ ip -6 addr show
    1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536
        inet6 ::1/128 scope host
           valid_lft forever preferred_lft forever
    2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qlen 1000
        inet6 2001:db8::c001/64 scope global
           valid_lft forever preferred_lft forever
        inet6 fe80::601:3fff:fea1:9c01/64 scope link
           valid_lft forever preferred_lft forever

Let's split up the configurable address range into two subnets
`2001:db8::c000/125` and `2001:db8::c008/125`. The first one can be used by the
host itself, the latter by Docker:

    docker -d --ipv6 --fixed-cidr-v6 2001:db8::c008/125

Note that the Docker subnet is within the subnet managed by your router that
is connected to `eth0`. This means all devices (containers) with addresses
from the Docker subnet are expected to be found within the router subnet.
Therefore the router thinks it can talk to these containers directly.

![](/article-img/ipv6_ndp_proxying.svg)

As soon as the router wants to send an IPv6 packet to the first container, it
will transmit a neighbor solicitation request, asking who has
`2001:db8::c009`. But it will get no answer, because no one on this subnet has
this address: the container with this address is hidden behind the Docker host.
The Docker host therefore has to listen for neighbor solicitation requests for
the container address and answer that it is the device responsible for the
address. This is done by a kernel feature called NDP proxying. You can
enable it by executing

    $ sysctl net.ipv6.conf.eth0.proxy_ndp=1

Now you can add the container's IPv6 address to the NDP proxy table:

    $ ip -6 neigh add proxy 2001:db8::c009 dev eth0

This command tells the kernel to answer incoming neighbor solicitation requests
regarding the IPv6 address `2001:db8::c009` on the device `eth0`. As a
consequence, all traffic to this IPv6 address will go to the Docker host,
which will forward it according to its routing table via the `docker0`
device to the container network:

    $ ip -6 route show
    2001:db8::c008/125 dev docker0  metric 1
    2001:db8::/64 dev eth0  proto kernel  metric 256

You have to execute the `ip -6 neigh add proxy ...` command for every IPv6
address in your Docker subnet. Unfortunately there is no functionality for
adding a whole subnet by executing one command.
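
Since each address must be proxied individually, a short loop over the
Docker-side addresses from the example above can save some typing. This
is a sketch: run it as root, and adjust the addresses and interface to
your own setup.

```shell
# Add an NDP proxy entry on eth0 for each address in the
# 2001:db8::c008/125 range that a container may receive.
for suffix in 9 a b c d e f; do
    ip -6 neigh add proxy "2001:db8::c00${suffix}" dev eth0
done
```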

### Docker IPv6 Cluster

#### Switched Network Environment

Using routable IPv6 addresses allows you to realize communication between
containers on different hosts. Let's have a look at a simple Docker IPv6 cluster
example:

![](/article-img/ipv6_switched_network_example.svg)

The Docker hosts are in the `2001:db8:0::/64` subnet. Host1 is configured
to provide addresses from the `2001:db8:1::/64` subnet to its containers. It
has three routes configured:

- Route all traffic to `2001:db8:0::/64` via `eth0`
- Route all traffic to `2001:db8:1::/64` via `docker0`
- Route all traffic to `2001:db8:2::/64` via Host2 with IP `2001:db8::2`
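
The first two routes are typically present already (the kernel adds the
on-link route for `eth0`, and Docker adds the `docker0` route); only the
third needs to be configured by hand. A sketch with the addresses from
the diagram:

```shell
# On Host1: reach Host2's container subnet via Host2's host address.
ip -6 route add 2001:db8:2::/64 via 2001:db8::2 dev eth0
```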
   597  
   598  Host1 also acts as a router on OSI layer 3. When one of the network clients
   599  tries to contact a target that is specified in Host1's routing table Host1 will
   600  forward the traffic accordingly. It acts as a router for all networks it knows:
   601  `2001:db8::/64`, `2001:db8:1::/64` and `2001:db8:2::/64`.
   602  
   603  On Host2 we have nearly the same configuration. Host2's containers will get
   604  IPv6 addresses from `2001:db8:2::/64`. Host2 has three routes configured:
   605  
   606  - Route all traffic to `2001:db8:0::/64` via `eth0`
   607  - Route all traffic to `2001:db8:2::/64` via `docker0`
   608  - Route all traffic to `2001:db8:1::/64` via Host1 with IP `2001:db8:0::1`
   609  
   610  The difference from Host1 is that the network `2001:db8:2::/64` is directly
   611  attached to Host2 via its `docker0` interface, whereas Host2 reaches
   612  `2001:db8:1::/64` via Host1's IPv6 address `2001:db8::1`.
   613  
   614  This way every container is able to contact every other container. The
   615  containers `Container1-*` share the same subnet and contact each other directly.
   616  The traffic between `Container1-*` and `Container2-*` will be routed via Host1
   617  and Host2 because those containers do not share the same subnet.
   618  
   619  In a switched environment every host has to know all routes to every subnet. You
   620  always have to update the hosts' routing tables whenever you add a host to, or
   621  remove one from, the cluster.
   622  
   623  Every configuration in the diagram that is shown below the dashed line is
   624  handled by Docker: The `docker0` bridge IP address configuration, the route to
   625  the Docker subnet on the host, the container IP addresses and the routes on the
   626  containers. The configuration above the line is up to the user and can be
   627  adapted to the individual environment.
   628  
   629  #### Routed Network Environment
   630  
   631  In a routed network environment you replace the layer 2 switch with a layer 3
   632  router. Now the hosts just have to know their default gateway (the router) and
   633  the route to their own containers (managed by Docker). The router holds all
   634  routing information about the Docker subnets. When you add a host to or remove
   635  one from this environment, you just have to update the routing table in the
   636  router - not on every host.
   637  
   638  ![](/article-img/ipv6_routed_network_example.svg)
   639  
   640  In this scenario containers of the same host can communicate directly with each
   641  other. The traffic between containers on different hosts will be routed via
   642  their hosts and the router. For example, a packet from `Container1-1` to
   643  `Container2-1` will be routed through `Host1`, `Router` and `Host2` until it
   644  arrives at `Container2-1`.
   645  
   646  To keep the IPv6 addresses short in this example, a `/48` network is assigned to
   647  every host. Each host uses one `/64` subnet of this network for its own services
   648  and another for Docker. When adding a third host, you would add a route for the
   649  subnet `2001:db8:3::/48` in the router and configure Docker on Host3 with
   650  `--fixed-cidr-v6=2001:db8:3:1::/64`.
   651  
   652  Remember that the subnet for Docker containers should have a size of at least
   653  `/80`. This way an IPv6 address can end with the container's MAC address and you
   654  prevent NDP neighbor cache invalidation issues in the Docker layer. So if you
   655  have a `/64` for your whole environment, use `/68` subnets for the hosts and
   656  `/80` for the containers. This way you can use 16 hosts with 4096 `/80` subnets
   657  each.
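
You can check that arithmetic in the shell: splitting a `/64` into `/68` subnets yields 4 extra prefix bits, and each `/68` contains 12 more bits' worth of `/80` subnets:

```shell
# /68 host subnets that fit in one /64 environment:
echo $(( 1 << (68 - 64) ))    # prints 16
# /80 container subnets that fit in one /68 host subnet:
echo $(( 1 << (80 - 68) ))    # prints 4096
```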
   658  
   659  Every configuration in the diagram that is visualized below the dashed line is
   660  handled by Docker: The `docker0` bridge IP address configuration, the route to
   661  the Docker subnet on the host, the container IP addresses and the routes on the
   662  containers. The configuration above the line is up to the user and can be
   663  adapted to the individual environment.
   664  
   665  ## Customizing docker0
   666  
   667  <a name="docker0"></a>
   668  
   669  By default, the Docker server creates and configures the host system's
   670  `docker0` interface as an *Ethernet bridge* inside the Linux kernel that
   671  can pass packets back and forth between other physical or virtual
   672  network interfaces so that they behave as a single Ethernet network.
   673  
   674  Docker configures `docker0` with an IP address, a netmask, and an IP
   675  allocation range, so that the host machine can both receive and send packets
   676  to containers connected to the bridge. Docker also gives `docker0` an MTU — the
   677  *maximum transmission unit*, or largest packet length that the interface will
   678  allow — of either 1,500 bytes or else a more specific value copied from
   679  the Docker host's interface that supports its default route.  These
   680  options are configurable at server startup:
   681  
   682   *  `--bip=CIDR` — supply a specific IP address and netmask for the
   683      `docker0` bridge, using standard CIDR notation like
   684      `192.168.1.5/24`.
   685  
   686   *  `--fixed-cidr=CIDR` — restrict the IP range from the `docker0` subnet,
   687      using standard CIDR notation like `172.167.1.0/28`. This range must be
   688      an IPv4 range for fixed IPs (ex: `10.20.0.0/16`) and must be a subset
   689      of the bridge IP range (`docker0` or the bridge set using `--bridge`).
   690      For example, with `--fixed-cidr=192.168.1.0/25`, IPs for your containers
   691      will be chosen from the first half of the `192.168.1.0/24` subnet.
   692  
   693   *  `--mtu=BYTES` — override the maximum packet length on `docker0`.
   694  
   695  On Ubuntu you would add these to the `DOCKER_OPTS` setting in
   696  `/etc/default/docker` on your Docker host and then restart the Docker
   697  service.
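
For example, a hypothetical `/etc/default/docker` entry that combines all three options could look like this (the values are purely illustrative, not recommendations):

```shell
DOCKER_OPTS="--bip=192.168.1.5/24 --fixed-cidr=192.168.1.0/25 --mtu=1460"
```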
   698  
   699  Once you have one or more containers up and running, you can confirm
   700  that Docker has properly connected them to the `docker0` bridge by
   701  running the `brctl` command on the host machine and looking at the
   702  `interfaces` column of the output.  Here is a host with two different
   703  containers connected:
   704  
   705      # Display bridge info
   706  
   707      $ sudo brctl show
   708      bridge name     bridge id               STP enabled     interfaces
   709      docker0         8000.3a1d7362b4ee       no              veth65f9
   710                                                              vethdda6
   711  
   712  If the `brctl` command is not installed on your Docker host, then on
   713  Ubuntu you should be able to run `sudo apt-get install bridge-utils` to
   714  install it.
   715  
   716  Finally, the `docker0` Ethernet bridge settings are used every time you
   717  create a new container.  Docker selects a free IP address from the range
   718  available on the bridge each time you `docker run` a new container, and
   719  configures the container's `eth0` interface with that IP address and the
   720  bridge's netmask.  The Docker host's own IP address on the bridge is
   721  used as the default gateway by which each container reaches the rest of
   722  the Internet.
   723  
   724      # The network, as seen from a container
   725  
   726      $ sudo docker run -i -t --rm base /bin/bash
   727  
   728      $$ ip addr show eth0
   729      24: eth0: <BROADCAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
   730          link/ether 32:6f:e0:35:57:91 brd ff:ff:ff:ff:ff:ff
   731          inet 172.17.0.3/16 scope global eth0
   732             valid_lft forever preferred_lft forever
   733          inet6 fe80::306f:e0ff:fe35:5791/64 scope link
   734             valid_lft forever preferred_lft forever
   735  
   736      $$ ip route
   737      default via 172.17.42.1 dev eth0
   738      172.17.0.0/16 dev eth0  proto kernel  scope link  src 172.17.0.3
   739  
   740      $$ exit
   741  
   742  Remember that the Docker host will not be willing to forward container
   743  packets out on to the Internet unless its `ip_forward` system setting is
   744  `1` — see the section above on [Communication between
   745  containers](#between-containers) for details.
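
You can read the live value of that setting from the host's `/proc` filesystem:

```shell
# 1 means the host forwards container packets; 0 means it does not.
cat /proc/sys/net/ipv4/ip_forward
# Enable it (as root) with:
#   sysctl -w net.ipv4.ip_forward=1
```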
   746  
   747  ## Building your own bridge
   748  
   749  <a name="bridge-building"></a>
   750  
   751  If you want to take Docker out of the business of creating its own
   752  Ethernet bridge entirely, you can set up your own bridge before starting
   753  Docker and use `-b BRIDGE` or `--bridge=BRIDGE` to tell Docker to use
   754  your bridge instead.  If you already have Docker up and running with its
   755  old `docker0` still configured, you will probably want to begin by
   756  stopping the service and removing the interface:
   757  
   758      # Stopping Docker and removing docker0
   759  
   760      $ sudo service docker stop
   761      $ sudo ip link set dev docker0 down
   762      $ sudo brctl delbr docker0
   763      $ sudo iptables -t nat -F POSTROUTING
   764  
   765  Then, before starting the Docker service, create your own bridge and
   766  give it whatever configuration you want.  Here we will create a simple
   767  enough bridge that we really could just have used the options in the
   768  previous section to customize `docker0`, but it will be enough to
   769  illustrate the technique.
   770  
   771      # Create our own bridge
   772  
   773      $ sudo brctl addbr bridge0
   774      $ sudo ip addr add 192.168.5.1/24 dev bridge0
   775      $ sudo ip link set dev bridge0 up
   776  
   777      # Confirming that our bridge is up and running
   778  
   779      $ ip addr show bridge0
   780      4: bridge0: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state UP group default
   781          link/ether 66:38:d0:0d:76:18 brd ff:ff:ff:ff:ff:ff
   782          inet 192.168.5.1/24 scope global bridge0
   783             valid_lft forever preferred_lft forever
   784  
   785      # Tell Docker about it and restart (on Ubuntu)
   786  
   787      $ echo 'DOCKER_OPTS="-b=bridge0"' | sudo tee -a /etc/default/docker
   788      $ sudo service docker start
   789  
   790      # Confirming new outgoing NAT masquerade is set up
   791  
   792      $ sudo iptables -t nat -L -n
   793      ...
   794      Chain POSTROUTING (policy ACCEPT)
   795      target     prot opt source               destination
   796      MASQUERADE  all  --  192.168.5.0/24      0.0.0.0/0
   797  
   798  
   799  The result should be that the Docker server starts successfully and is
   800  now prepared to bind containers to the new bridge.  After pausing to
   801  verify the bridge's configuration, try creating a container — you will
   802  see that its IP address is in your new IP address range, which Docker
   803  will have auto-detected.
   804  
   805  Just as we learned in the previous section, you can use the `brctl show`
   806  command to see Docker add and remove interfaces from the bridge as you
   807  start and stop containers, and can run `ip addr` and `ip route` inside a
   808  container to see that it has been given an address in the bridge's IP
   809  address range and has been told to use the Docker host's IP address on
   810  the bridge as its default gateway to the rest of the Internet.
   811  
   812  ## How Docker networks a container
   813  
   814  <a name="container-networking"></a>
   815  
   816  While Docker is under active development and continues to tweak and
   817  improve its network configuration logic, the shell commands in this
   818  section are rough equivalents to the steps that Docker takes when
   819  configuring networking for each new container.
   820  
   821  Let's review a few basics.
   822  
   823  To communicate using the Internet Protocol (IP), a machine needs access
   824  to at least one network interface at which packets can be sent and
   825  received, and a routing table that defines the range of IP addresses
   826  reachable through that interface.  Network interfaces do not have to be
   827  physical devices.  In fact, the `lo` loopback interface available on
   828  every Linux machine (and inside each Docker container) is entirely
   829  virtual — the Linux kernel simply copies loopback packets directly from
   830  the sender's memory into the receiver's memory.
   831  
   832  Docker uses special virtual interfaces to let containers communicate
   833  with the host machine — pairs of virtual interfaces called “peers” that
   834  are linked inside of the host machine's kernel so that packets can
   835  travel between them.  They are simple to create, as we will see in a
   836  moment.
   837  
   838  The steps with which Docker configures a container are:
   839  
   840  1.  Create a pair of peer virtual interfaces.
   841  
   842  2.  Give one of them a unique name like `veth65f9`, keep it inside of
   843      the main Docker host, and bind it to `docker0` or whatever bridge
   844      Docker is supposed to be using.
   845  
   846  3.  Toss the other interface over the wall into the new container (which
   847      will already have been provided with an `lo` interface) and rename
   848      it to the much prettier name `eth0` since, inside of the container's
   849      separate and unique network interface namespace, there are no
   850      physical interfaces with which this name could collide.
   851  
   852  4.  Set the interface's MAC address according to the `--mac-address`
   853      parameter or generate a random one.
   854  
   855  5.  Give the container's `eth0` a new IP address from within the
   856      bridge's range of network addresses, and set its default route to
   857      the IP address that the Docker host owns on the bridge. The MAC
   858      address is generated from the IP address unless otherwise specified.
   859      This prevents ARP cache invalidation problems, when a new container
   860      comes up with an IP used in the past by another container with another
   861      MAC.
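
As a sketch of the default MAC generation in that last step (consistent with the `02:42:ac:11:*` range mentioned at the top of this document), the address is the fixed prefix `02:42` followed by the container IP's four octets in hexadecimal:

```shell
# Derive the default MAC from a container IP: the fixed prefix
# 02:42 followed by the IP's four octets in hex.
ip=172.17.0.3
printf '02:42:%02x:%02x:%02x:%02x\n' $(echo "$ip" | tr '.' ' ')
# prints 02:42:ac:11:00:03
```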
   862  
   863  With these steps complete, the container now possesses an `eth0`
   864  (virtual) network card and will find itself able to communicate with
   865  other containers and the rest of the Internet.
   866  
   867  You can opt out of the above process for a particular container by
   868  giving the `--net=` option to `docker run`, which takes four possible
   869  values.
   870  
   871   *  `--net=bridge` — The default action, that connects the container to
   872      the Docker bridge as described above.
   873  
   874   *  `--net=host` — Tells Docker to skip placing the container inside of
   875      a separate network stack.  In essence, this choice tells Docker to
   876      **not containerize the container's networking**!  While container
   877      processes will still be confined to their own filesystem and process
   878      list and resource limits, a quick `ip addr` command will show you
   879      that, network-wise, they live “outside” in the main Docker host and
   880      have full access to its network interfaces.  Note that this does
   881      **not** let the container reconfigure the host network stack — that
   882      would require `--privileged=true` — but it does let container
   883      processes open low-numbered ports like any other root process.
   884      It also allows the container to access local network services
   885      like D-bus.  This can lead to processes in the container being
   886      able to do unexpected things like
   887      [restart your computer](https://github.com/docker/docker/issues/6401).
   888      You should use this option with caution.
   889  
   890   *  `--net=container:NAME_or_ID` — Tells Docker to put this container's
   891      processes inside of the network stack that has already been created
   892      inside of another container.  The new container's processes will be
   893      confined to their own filesystem and process list and resource
   894      limits, but will share the same IP address and port numbers as the
   895      first container, and processes on the two containers will be able to
   896      connect to each other over the loopback interface.
   897  
   898   *  `--net=none` — Tells Docker to put the container inside of its own
   899      network stack but not to take any steps to configure its network,
   900      leaving you free to build any of the custom configurations explored
   901      in the last few sections of this document.
   902  
   903  To get an idea of the steps that are necessary if you use `--net=none`
   904  as described in that last bullet point, here are the commands that you
   905  would run to reach roughly the same configuration as if you had let
   906  Docker do all of the configuration:
   907  
   908      # At one shell, start a container and
   909      # leave its shell idle and running
   910  
   911      $ sudo docker run -i -t --rm --net=none base /bin/bash
   912      root@63f36fc01b5f:/#
   913  
   914      # At another shell, learn the container process ID
   915      # and create its namespace entry in /var/run/netns/
   916      # for the "ip netns" command we will be using below
   917  
   918      $ sudo docker inspect -f '{{.State.Pid}}' 63f36fc01b5f
   919      2778
   920      $ pid=2778
   921      $ sudo mkdir -p /var/run/netns
   922      $ sudo ln -s /proc/$pid/ns/net /var/run/netns/$pid
   923  
   924      # Check the bridge's IP address and netmask
   925  
   926      $ ip addr show docker0
   927      21: docker0: ...
   928      inet 172.17.42.1/16 scope global docker0
   929      ...
   930  
   931      # Create a pair of "peer" interfaces A and B,
   932      # bind the A end to the bridge, and bring it up
   933  
   934      $ sudo ip link add A type veth peer name B
   935      $ sudo brctl addif docker0 A
   936      $ sudo ip link set A up
   937  
   938      # Place B inside the container's network namespace,
   939      # rename to eth0, and activate it with a free IP
   940  
   941      $ sudo ip link set B netns $pid
   942      $ sudo ip netns exec $pid ip link set dev B name eth0
   943      $ sudo ip netns exec $pid ip link set eth0 address 12:34:56:78:9a:bc
   944      $ sudo ip netns exec $pid ip link set eth0 up
   945      $ sudo ip netns exec $pid ip addr add 172.17.42.99/16 dev eth0
   946      $ sudo ip netns exec $pid ip route add default via 172.17.42.1
   947  
   948  At this point your container should be able to perform networking
   949  operations as usual.
   950  
   951  When you finally exit the shell and Docker cleans up the container, the
   952  network namespace is destroyed along with our virtual `eth0` — whose
   953  destruction in turn destroys interface `A` out in the Docker host and
   954  automatically un-registers it from the `docker0` bridge.  So everything
   955  gets cleaned up without our having to run any extra commands!  Well,
   956  almost everything:
   957  
   958      # Clean up dangling symlinks in /var/run/netns
   959  
   960      find -L /var/run/netns -type l -delete
   961  
   962  Also note that while the script above used the modern `ip` command instead
   963  of the old deprecated wrappers like `ifconfig` and `route`, those older
   964  commands would also have worked inside of our container.  The `ip addr`
   965  command can be typed as `ip a` if you are in a hurry.
   966  
   967  Finally, note the importance of the `ip netns exec` command, which let
   968  us reach inside and configure a network namespace as root.  The same
   969  commands would not have worked if run inside of the container, because
   970  part of safe containerization is that Docker strips container processes
   971  of the right to configure their own networks.  Using `ip netns exec` is
   972  what let us finish up the configuration without having to take the
   973  dangerous step of running the container itself with `--privileged=true`.
   974  
   975  ## Tools and Examples
   976  
   977  Before diving into the following sections on custom network topologies,
   978  you might be interested in glancing at a few external tools or examples
   979  of the same kinds of configuration.  Here are two:
   980  
   981   *  Jérôme Petazzoni has created a `pipework` shell script to help you
   982      connect together containers in arbitrarily complex scenarios:
   983      <https://github.com/jpetazzo/pipework>
   984  
   985   *  Brandon Rhodes has created a whole network topology of Docker
   986      containers for the next edition of Foundations of Python Network
   987      Programming that includes routing, NAT'd firewalls, and servers that
   988      offer HTTP, SMTP, POP, IMAP, Telnet, SSH, and FTP:
   989      <https://github.com/brandon-rhodes/fopnp/tree/m/playground>
   990  
   991  Both tools use networking commands very much like the ones you saw in
   992  the previous section, and will see in the following sections.
   993  
   994  ## Building a point-to-point connection
   995  
   996  <a name="point-to-point"></a>
   997  
   998  By default, Docker attaches all containers to the virtual subnet
   999  implemented by `docker0`.  You can create containers that are each
  1000  connected to some different virtual subnet by creating your own bridge
  1001  as shown in [Building your own bridge](#bridge-building), starting each
  1002  container with `docker run --net=none`, and then attaching the
  1003  containers to your bridge with the shell commands shown in [How Docker
  1004  networks a container](#container-networking).
  1005  
  1006  But sometimes you want two particular containers to be able to
  1007  communicate directly without the added complexity of both being bound to
  1008  a host-wide Ethernet bridge.
  1009  
  1010  The solution is simple: when you create your pair of peer interfaces,
  1011  simply throw *both* of them into containers, and configure them as
  1012  classic point-to-point links.  The two containers will then be able to
  1013  communicate directly (provided you manage to tell each container the
  1014  other's IP address, of course).  You might adjust the instructions of
  1015  the previous section to go something like this:
  1016  
  1017      # Start up two containers in two terminal windows
  1018  
  1019      $ sudo docker run -i -t --rm --net=none base /bin/bash
  1020      root@1f1f4c1f931a:/#
  1021  
  1022      $ sudo docker run -i -t --rm --net=none base /bin/bash
  1023      root@12e343489d2f:/#
  1024  
  1025      # Learn the container process IDs
  1026      # and create their namespace entries
  1027  
  1028      $ sudo docker inspect -f '{{.State.Pid}}' 1f1f4c1f931a
  1029      2989
  1030      $ sudo docker inspect -f '{{.State.Pid}}' 12e343489d2f
  1031      3004
  1032      $ sudo mkdir -p /var/run/netns
  1033      $ sudo ln -s /proc/2989/ns/net /var/run/netns/2989
  1034      $ sudo ln -s /proc/3004/ns/net /var/run/netns/3004
  1035  
  1036      # Create the "peer" interfaces and hand them out
  1037  
  1038      $ sudo ip link add A type veth peer name B
  1039  
  1040      $ sudo ip link set A netns 2989
  1041      $ sudo ip netns exec 2989 ip addr add 10.1.1.1/32 dev A
  1042      $ sudo ip netns exec 2989 ip link set A up
  1043      $ sudo ip netns exec 2989 ip route add 10.1.1.2/32 dev A
  1044  
  1045      $ sudo ip link set B netns 3004
  1046      $ sudo ip netns exec 3004 ip addr add 10.1.1.2/32 dev B
  1047      $ sudo ip netns exec 3004 ip link set B up
  1048      $ sudo ip netns exec 3004 ip route add 10.1.1.1/32 dev B
  1049  
  1050  The two containers should now be able to ping each other and make
  1051  connections successfully.  Point-to-point links like this do not depend
  1052  on a subnet or a netmask, but on the bare assertion made by `ip route`
  1053  that some other single IP address is connected to a particular network
  1054  interface.
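
Using the example PIDs from above, the connectivity check would look like this (printed as a dry run; remove the `echo` to actually run it on the Docker host):

```shell
# Dry run of the connectivity check (2989 and 3004 are the
# example PIDs from above); drop 'echo' to run for real.
echo sudo ip netns exec 2989 ping -c 1 10.1.1.2
echo sudo ip netns exec 3004 ping -c 1 10.1.1.1
```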
  1055  
  1056  Note that point-to-point links can be safely combined with other kinds
  1057  of network connectivity — there is no need to start the containers with
  1058  `--net=none` if you want point-to-point links to be an addition to the
  1059  container's normal networking instead of a replacement.
  1060  
  1061  A final permutation of this pattern is to create the point-to-point link
  1062  between the Docker host and one container, which would allow the host to
  1063  communicate with that one container on some single IP address and thus
  1064  communicate “out-of-band” of the bridge that connects the other, more
  1065  usual containers.  But unless you have very specific networking needs
  1066  that drive you to such a solution, it is probably far preferable to use
  1067  `--icc=false` to lock down inter-container communication, as we explored
  1068  earlier.
  1069  
  1070  ## Editing networking config files
  1071  
  1072  Starting with Docker v1.2.0, you can edit `/etc/hosts`, `/etc/hostname`
  1073  and `/etc/resolv.conf` in a running container. This is useful if you need
  1074  to install BIND or other services that might override one of those files.
  1075  
  1076  Note, however, that changes to these files will not be saved by
  1077  `docker commit`, nor will they be saved during `docker run`.
  1078  That means they won't be saved in the image, nor will they persist when a
  1079  container is restarted; they will only "stick" in a running container.