<!--[metadata]>
+++
title = "Get started with multi-host networking"
description = "Use overlay for multi-host networking"
keywords = ["Examples, Usage, network, docker, documentation, user guide, multihost, cluster"]
[menu.main]
parent = "smn_networking"
weight=-3
+++
<![end-metadata]-->

# Get started with multi-host networking

This article uses an example to explain the basics of creating a multi-host
network. Docker Engine supports multi-host networking out-of-the-box through
the `overlay` network driver. Unlike `bridge` networks, overlay networks
require some pre-existing conditions before you can create one. These
conditions are:

* Access to a key-value store. Docker supports Consul, Etcd, and ZooKeeper key-value stores.
* A cluster of hosts with connectivity to the key-value store.
* A properly configured Engine `daemon` on each host in the cluster.
* Hosts within the cluster must have unique hostnames because the key-value store uses the hostnames to identify cluster members.

Though Docker Machine and Docker Swarm are not mandatory for Docker
multi-host networking, this example uses them to illustrate how they are
integrated. You'll use Machine to create both the key-value store server and
the host cluster. This example creates a Swarm cluster.

## Prerequisites

Before you begin, make sure you have a system on your network with the latest
version of Docker Engine and Docker Machine installed. The example also relies
on VirtualBox. If you installed Docker Toolbox on Mac or Windows, you already
have all of these installed.

If you have not already done so, make sure you upgrade Docker Engine and
Docker Machine to the latest versions.

## Step 1: Set up a key-value store

An overlay network requires a key-value store. The key-value store holds
information about the network state, including discovery, networks,
endpoints, IP addresses, and more. Docker supports Consul, Etcd, and
ZooKeeper key-value stores. This example uses Consul.

1. Log into a system prepared with the prerequisite Docker Engine, Docker Machine, and VirtualBox software.

2. Provision a VirtualBox machine called `mh-keystore`.

        $ docker-machine create -d virtualbox mh-keystore

    When you provision a new machine, the process adds Docker Engine to the
    host. This means that rather than installing Consul manually, you can
    create an instance using the [consul image from Docker
    Hub](https://hub.docker.com/r/progrium/consul/). You'll do this in the
    next step.

3. Set your local environment to the `mh-keystore` machine.

        $ eval "$(docker-machine env mh-keystore)"

4. Start a `progrium/consul` container running on the `mh-keystore` machine.

        $ docker run -d \
            -p "8500:8500" \
            -h "consul" \
            progrium/consul -server -bootstrap

    This starts a container from the `progrium/consul` image on the
    `mh-keystore` machine. The server is named `consul` and listens on port
    `8500`.

5. Run the `docker ps` command to see the `consul` container.

        $ docker ps
        CONTAINER ID        IMAGE               COMMAND                  CREATED             STATUS              PORTS                                                                            NAMES
        4d51392253b3        progrium/consul     "/bin/start -server -"   25 minutes ago      Up 25 minutes       53/tcp, 53/udp, 8300-8302/tcp, 0.0.0.0:8500->8500/tcp, 8400/tcp, 8301-8302/udp   admiring_panini
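    Optionally, you can confirm that the key-value store is answering
    requests before you build on it. Consul exposes an HTTP API on the port
    you published, so a quick status query from your local shell works as a
    sanity check; the address in the response is internal to the Consul
    container and will differ on your setup.

        $ curl "http://$(docker-machine ip mh-keystore):8500/v1/status/leader"
        "172.17.0.2:8300"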
Keep your terminal open and move on to the next step.

## Step 2: Create a Swarm cluster

In this step, you use `docker-machine` to provision the hosts for your
network. At this point, you won't actually create the network. You'll create
several machines in VirtualBox. One of the machines will act as the Swarm
master; you'll create that first. As you create each host, you'll pass the
Engine on that machine the options that are needed by the `overlay` network
driver.

1. Create a Swarm master.

        $ docker-machine create \
            -d virtualbox \
            --swarm --swarm-master \
            --swarm-discovery="consul://$(docker-machine ip mh-keystore):8500" \
            --engine-opt="cluster-store=consul://$(docker-machine ip mh-keystore):8500" \
            --engine-opt="cluster-advertise=eth1:2376" \
            mhs-demo0

    At creation time, you supply the Engine `daemon` with the
    `--cluster-store` option. This option tells the Engine the location of
    the key-value store for the `overlay` network. The bash expansion
    `$(docker-machine ip mh-keystore)` resolves to the IP address of the
    Consul server you created in Step 1. The `--cluster-advertise` option
    advertises the machine on the network.

2. Create another host and add it to the Swarm cluster.

        $ docker-machine create -d virtualbox \
            --swarm \
            --swarm-discovery="consul://$(docker-machine ip mh-keystore):8500" \
            --engine-opt="cluster-store=consul://$(docker-machine ip mh-keystore):8500" \
            --engine-opt="cluster-advertise=eth1:2376" \
            mhs-demo1

3. List your machines to confirm they are all up and running.

        $ docker-machine ls
        NAME          ACTIVE   DRIVER       STATE     URL                         SWARM
        default       -        virtualbox   Running   tcp://192.168.99.100:2376
        mh-keystore   *        virtualbox   Running   tcp://192.168.99.103:2376
        mhs-demo0     -        virtualbox   Running   tcp://192.168.99.104:2376   mhs-demo0 (master)
        mhs-demo1     -        virtualbox   Running   tcp://192.168.99.105:2376   mhs-demo0

At this point you have a set of hosts running on your network. You are ready
to create a multi-host network for containers using these hosts.
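If you want to confirm that the `cluster-store` and `cluster-advertise`
options were recorded for a host, you can search its Machine configuration.
This is a minimal sanity check, not an official workflow; the exact JSON
layout depends on your Docker Machine version, and the IP address shown here
is from this example's setup.

    $ docker-machine inspect mhs-demo0 | grep cluster
        "cluster-store=consul://192.168.99.103:8500",
        "cluster-advertise=eth1:2376",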
Leave your terminal open and go on to the next step.

## Step 3: Create the overlay network

To create an overlay network:

1. Set your docker environment to the Swarm master.

        $ eval $(docker-machine env --swarm mhs-demo0)

    Using the `--swarm` flag with `docker-machine` restricts the `docker`
    commands to Swarm information alone.

2. Use the `docker info` command to view the Swarm.

        $ docker info
        Containers: 3
        Images: 2
        Role: primary
        Strategy: spread
        Filters: affinity, health, constraint, port, dependency
        Nodes: 2
         mhs-demo0: 192.168.99.104:2376
          └ Containers: 2
          └ Reserved CPUs: 0 / 1
          └ Reserved Memory: 0 B / 1.021 GiB
          └ Labels: executiondriver=native-0.2, kernelversion=4.1.10-boot2docker, operatingsystem=Boot2Docker 1.9.0-rc1 (TCL 6.4); master : 4187d2c - Wed Oct 14 14:00:28 UTC 2015, provider=virtualbox, storagedriver=aufs
         mhs-demo1: 192.168.99.105:2376
          └ Containers: 1
          └ Reserved CPUs: 0 / 1
          └ Reserved Memory: 0 B / 1.021 GiB
          └ Labels: executiondriver=native-0.2, kernelversion=4.1.10-boot2docker, operatingsystem=Boot2Docker 1.9.0-rc1 (TCL 6.4); master : 4187d2c - Wed Oct 14 14:00:28 UTC 2015, provider=virtualbox, storagedriver=aufs
        CPUs: 2
        Total Memory: 2.043 GiB
        Name: 30438ece0915

    From this information, you can see that the Swarm is running three
    containers and two images across its two nodes.

3. Create your `overlay` network.

        $ docker network create --driver overlay --subnet=10.0.9.0/24 my-net

    You only need to create the network on a single host in the cluster. In
    this case, you used the Swarm master, but you could easily have run the
    command on any host in the cluster.

    > **Note**: It is highly recommended to use the `--subnet` option when
    > creating a network. If `--subnet` is not specified, the Docker daemon
    > automatically chooses and assigns a subnet for the network, and it
    > could overlap with another subnet in your infrastructure that is not
    > managed by Docker. Such overlaps can cause connectivity issues or
    > failures when containers are connected to that network.

4. Check that the network is running:

        $ docker network ls
        NETWORK ID          NAME                DRIVER
        412c2496d0eb        mhs-demo1/host      host
        dd51763e6dd2        mhs-demo0/bridge    bridge
        6b07d0be843f        my-net              overlay
        b4234109bd9b        mhs-demo0/none      null
        1aeead6dd890        mhs-demo0/host      host
        d0bb78cbe7bd        mhs-demo1/bridge    bridge
        1c0eb8f69ebb        mhs-demo1/none      null

    Because you are in the Swarm master environment, you see all the networks
    on all the Swarm agents: the default networks on each Engine and the
    single overlay network. Notice that each `NETWORK ID` is unique.

5. Switch to each Swarm agent in turn and list the networks.

        $ eval $(docker-machine env mhs-demo0)
        $ docker network ls
        NETWORK ID          NAME                DRIVER
        6b07d0be843f        my-net              overlay
        dd51763e6dd2        bridge              bridge
        b4234109bd9b        none                null
        1aeead6dd890        host                host

        $ eval $(docker-machine env mhs-demo1)
        $ docker network ls
        NETWORK ID          NAME                DRIVER
        d0bb78cbe7bd        bridge              bridge
        1c0eb8f69ebb        none                null
        412c2496d0eb        host                host
        6b07d0be843f        my-net              overlay

    Both agents report they have the `my-net` network with the
    `6b07d0be843f` ID. You now have a multi-host container network running!
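You can also inspect the new network to confirm the driver and the subnet
you assigned. The abbreviated output below is a sketch of what
`docker network inspect` returns; the exact fields and their order vary by
Engine version, and the full network ID is truncated here.

    $ docker network inspect my-net
    [
        {
            "Name": "my-net",
            "Id": "6b07d0be843f...",
            "Scope": "global",
            "Driver": "overlay",
            "IPAM": {
                "Driver": "default",
                "Config": [
                    {
                        "Subnet": "10.0.9.0/24"
                    }
                ]
            },
            "Containers": {},
            "Options": {}
        }
    ]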
## Step 4: Run an application on your network

Once your network is created, you can start a container on any of the hosts
and it automatically becomes part of the network.

1. Point your environment to the Swarm master.

        $ eval $(docker-machine env --swarm mhs-demo0)

2. Start an Nginx web server on the `mhs-demo0` instance.

        $ docker run -itd --name=web --net=my-net --env="constraint:node==mhs-demo0" nginx

3. Run a BusyBox instance on the `mhs-demo1` instance and get the contents of the Nginx server's home page.

        $ docker run -it --rm --net=my-net --env="constraint:node==mhs-demo1" busybox wget -O- http://web
        Unable to find image 'busybox:latest' locally
        latest: Pulling from library/busybox
        ab2b8a86ca6c: Pull complete
        2c5ac3f849df: Pull complete
        Digest: sha256:5551dbdfc48d66734d0f01cafee0952cb6e8eeecd1e2492240bf2fd9640c2279
        Status: Downloaded newer image for busybox:latest
        Connecting to web (10.0.0.2:80)
        <!DOCTYPE html>
        <html>
        <head>
        <title>Welcome to nginx!</title>
        <style>
        body {
            width: 35em;
            margin: 0 auto;
            font-family: Tahoma, Verdana, Arial, sans-serif;
        }
        </style>
        </head>
        <body>
        <h1>Welcome to nginx!</h1>
        <p>If you see this page, the nginx web server is successfully installed and
        working. Further configuration is required.</p>

        <p>For online documentation and support please refer to
        <a href="http://nginx.org/">nginx.org</a>.<br/>
        Commercial support is available at
        <a href="http://nginx.com/">nginx.com</a>.</p>

        <p><em>Thank you for using nginx.</em></p>
        </body>
        </html>
        -                    100% |*******************************|   612   0:00:00 ETA

## Step 5: Check external connectivity

As you've seen, Docker's built-in overlay network driver provides
out-of-the-box connectivity between the containers on multiple hosts within
the same network. Additionally, containers connected to the multi-host
network are automatically connected to the `docker_gwbridge` network. This
network allows the containers to have external connectivity outside of their
cluster.

1. Change your environment to the Swarm agent.

        $ eval $(docker-machine env mhs-demo1)

2. View the `docker_gwbridge` network by listing the networks.

        $ docker network ls
        NETWORK ID          NAME                DRIVER
        6b07d0be843f        my-net              overlay
        d0bb78cbe7bd        bridge              bridge
        1c0eb8f69ebb        none                null
        412c2496d0eb        host                host
        e1dbd5dff8be        docker_gwbridge     bridge

3. Repeat steps 1 and 2 on the Swarm master.

        $ eval $(docker-machine env mhs-demo0)
        $ docker network ls
        NETWORK ID          NAME                DRIVER
        6b07d0be843f        my-net              overlay
        dd51763e6dd2        bridge              bridge
        b4234109bd9b        none                null
        1aeead6dd890        host                host
        97102a22e8d2        docker_gwbridge     bridge

4. Check the Nginx container's network interfaces.

        $ docker exec web ip addr
        1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default
            link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
            inet 127.0.0.1/8 scope host lo
               valid_lft forever preferred_lft forever
            inet6 ::1/128 scope host
               valid_lft forever preferred_lft forever
        22: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue state UP group default
            link/ether 02:42:0a:00:09:03 brd ff:ff:ff:ff:ff:ff
            inet 10.0.9.3/24 scope global eth0
               valid_lft forever preferred_lft forever
            inet6 fe80::42:aff:fe00:903/64 scope link
               valid_lft forever preferred_lft forever
        24: eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default
            link/ether 02:42:ac:12:00:02 brd ff:ff:ff:ff:ff:ff
            inet 172.18.0.2/16 scope global eth1
               valid_lft forever preferred_lft forever
            inet6 fe80::42:acff:fe12:2/64 scope link
               valid_lft forever preferred_lft forever

    The `eth0` interface represents the container interface that is connected
    to the `my-net` overlay network, while the `eth1` interface represents
    the container interface that is connected to the `docker_gwbridge`
    network.
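To see this external connectivity in action, you can start a short-lived
container on the multi-host network and reach an address outside the cluster.
The sketch below assumes your VirtualBox hosts have outbound internet access,
and it pings a public IP address so the check does not depend on DNS. The
traffic enters and leaves the container through the `docker_gwbridge`
interface (`eth1`) you inspected above.

    $ eval $(docker-machine env --swarm mhs-demo0)
    $ docker run -it --rm --net=my-net --env="constraint:node==mhs-demo1" busybox ping -c 3 8.8.8.8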
## Step 6: Extra credit with Docker Compose

Refer to the networking feature introduced with the [Compose V2
format](https://docs.docker.com/compose/networking/) and try the multi-host
networking scenario in the Swarm cluster used above.

## Related information

* [Understand Docker container networks](dockernetworks.md)
* [Work with network commands](work-with-networks.md)
* [Docker Swarm overview](https://docs.docker.com/swarm)
* [Docker Machine overview](https://docs.docker.com/machine)