<!--[metadata]>
+++
title = "Get started with multi-host networking"
description = "Use overlay for multi-host networking"
keywords = ["Examples, Usage, network, docker, documentation, user guide, multihost, cluster"]
[menu.main]
parent = "smn_networking"
weight=-3
+++
<![end-metadata]-->

# Get started with multi-host networking

This article uses an example to explain the basics of creating a multi-host
network. Docker Engine supports multi-host networking out-of-the-box through the
`overlay` network driver. Unlike `bridge` networks, overlay networks require
some pre-existing conditions before you can create one. These conditions are:

* A host with a 3.16 kernel version or higher.
* Access to a key-value store. Docker supports Consul, Etcd, and ZooKeeper key-value stores.
* A cluster of hosts with connectivity to the key-value store.
* A properly configured Engine `daemon` on each host in the cluster.

Though Docker Machine and Docker Swarm are not mandatory to experience Docker
multi-host networking, this example uses them to illustrate how they are
integrated. You'll use Machine to create both the key-value store
server and the host cluster; the hosts you create form a Swarm cluster.

## Prerequisites

Before you begin, make sure you have a system on your network with the latest
version of Docker Engine and Docker Machine installed. The example also relies
on VirtualBox. If you installed Docker Toolbox on Mac or Windows, you already
have all of these installed.

If you have not already done so, make sure you upgrade Docker Engine and Docker
Machine to the latest versions.


## Step 1: Set up a key-value store

An overlay network requires a key-value store. The key-value store holds
information about the network state, which includes discovery, networks,
endpoints, IP addresses, and more. Docker supports Consul, Etcd, and ZooKeeper
key-value stores. This example uses Consul.

1. Log into a system prepared with the prerequisite Docker Engine, Docker Machine, and VirtualBox software.

2. Provision a VirtualBox machine called `mh-keystore`.

        $ docker-machine create -d virtualbox mh-keystore

    When you provision a new machine, the process adds Docker Engine to the
    host. This means rather than installing Consul manually, you can create an
    instance using the [consul image from Docker
    Hub](https://hub.docker.com/r/progrium/consul/). You'll do this in the next step.

3. Start a `progrium/consul` container running on the `mh-keystore` machine.

        $ docker $(docker-machine config mh-keystore) run -d \
            -p "8500:8500" \
            -h "consul" \
            progrium/consul -server -bootstrap

    A bash expansion `$(docker-machine config mh-keystore)` is used to pass the
    connection configuration to the `docker run` command. The client starts a
    container from the `progrium/consul` image on the `mh-keystore` machine. The
    server is called `consul` and is listening on port `8500`.

4. Set your local environment to the `mh-keystore` machine.

        $ eval "$(docker-machine env mh-keystore)"

5. Run the `docker ps` command to see the `consul` container.

        $ docker ps
        CONTAINER ID        IMAGE               COMMAND                  CREATED             STATUS              PORTS                                                                            NAMES
        4d51392253b3        progrium/consul     "/bin/start -server -"   25 minutes ago      Up 25 minutes       53/tcp, 53/udp, 8300-8302/tcp, 0.0.0.0:8500->8500/tcp, 8400/tcp, 8301-8302/udp   admiring_panini

Keep your terminal open and move on to the next step.
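Optionally, you can confirm the store is reachable from your local system before
moving on. This quick check uses Consul's standard HTTP status API; the address
in the response is illustrative and will vary with your setup:

    $ curl http://$(docker-machine ip mh-keystore):8500/v1/status/leader
    "172.17.0.2:8300"

Any non-error response here means the hosts you create in the next step should
also be able to reach the store on port `8500`.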
## Step 2: Create a Swarm cluster

In this step, you use `docker-machine` to provision the hosts for your network.
At this point, you won't actually create the network. You'll create several
machines in VirtualBox. One of the machines will act as the Swarm master;
you'll create that first. As you create each host, you'll pass the Engine on
that machine options that are needed by the `overlay` network driver.

1. Create a Swarm master.

        $ docker-machine create \
            -d virtualbox \
            --swarm --swarm-master \
            --swarm-discovery="consul://$(docker-machine ip mh-keystore):8500" \
            --engine-opt="cluster-store=consul://$(docker-machine ip mh-keystore):8500" \
            --engine-opt="cluster-advertise=eth1:2376" \
            mhs-demo0

    At creation time, you supply the Engine `daemon` with the `--cluster-store`
    option. This option tells the Engine the location of the key-value store for
    the `overlay` network. The bash expansion `$(docker-machine ip mh-keystore)`
    resolves to the IP address of the Consul server you created in Step 1. The
    `--cluster-advertise` option advertises the machine on the network.

2. Create another host and add it to the Swarm cluster.

        $ docker-machine create -d virtualbox \
            --swarm \
            --swarm-discovery="consul://$(docker-machine ip mh-keystore):8500" \
            --engine-opt="cluster-store=consul://$(docker-machine ip mh-keystore):8500" \
            --engine-opt="cluster-advertise=eth1:2376" \
            mhs-demo1

3. List your machines to confirm they are all up and running.

        $ docker-machine ls
        NAME          ACTIVE   DRIVER       STATE     URL                         SWARM
        default       -        virtualbox   Running   tcp://192.168.99.100:2376
        mh-keystore   *        virtualbox   Running   tcp://192.168.99.103:2376
        mhs-demo0     -        virtualbox   Running   tcp://192.168.99.104:2376   mhs-demo0 (master)
        mhs-demo1     -        virtualbox   Running   tcp://192.168.99.105:2376   mhs-demo0

At this point you have a set of hosts running on your network. You are ready to create a multi-host network for containers using these hosts.

Leave your terminal open and go on to the next step.
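Docker Machine passes the `--engine-opt` values through to the Engine for you.
For reference, if you were preparing a host yourself rather than with Machine,
you would start the daemon with the equivalent flags directly. A minimal sketch,
assuming your key-value store is reachable at the placeholder address
`<consul-ip>` and that the hosts communicate over their `eth1` interface:

    $ docker daemon \
        --cluster-store=consul://<consul-ip>:8500 \
        --cluster-advertise=eth1:2376

Here `--cluster-store` points the daemon at the shared key-value store, and
`--cluster-advertise` names the interface and port that other Engines in the
cluster use to reach this daemon.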
## Step 3: Create the overlay network

To create an overlay network:

1. Set your docker environment to the Swarm master.

        $ eval $(docker-machine env --swarm mhs-demo0)

    Using the `--swarm` flag with `docker-machine` restricts the `docker`
    commands to Swarm information alone.

2. Use the `docker info` command to view the Swarm.

        $ docker info
        Containers: 3
        Images: 2
        Role: primary
        Strategy: spread
        Filters: affinity, health, constraint, port, dependency
        Nodes: 2
         mhs-demo0: 192.168.99.104:2376
          └ Containers: 2
          └ Reserved CPUs: 0 / 1
          └ Reserved Memory: 0 B / 1.021 GiB
          └ Labels: executiondriver=native-0.2, kernelversion=4.1.10-boot2docker, operatingsystem=Boot2Docker 1.9.0-rc1 (TCL 6.4); master : 4187d2c - Wed Oct 14 14:00:28 UTC 2015, provider=virtualbox, storagedriver=aufs
         mhs-demo1: 192.168.99.105:2376
          └ Containers: 1
          └ Reserved CPUs: 0 / 1
          └ Reserved Memory: 0 B / 1.021 GiB
          └ Labels: executiondriver=native-0.2, kernelversion=4.1.10-boot2docker, operatingsystem=Boot2Docker 1.9.0-rc1 (TCL 6.4); master : 4187d2c - Wed Oct 14 14:00:28 UTC 2015, provider=virtualbox, storagedriver=aufs
        CPUs: 2
        Total Memory: 2.043 GiB
        Name: 30438ece0915

    From this information, you can see that you are running three containers
    and two images on the Master.

3. Create your `overlay` network.

        $ docker network create --driver overlay my-net

    You only need to create the network on a single host in the cluster. In
    this case, you used the Swarm master but you could easily have run it on
    any host in the cluster.

4. Check that the network is running:

        $ docker network ls
        NETWORK ID          NAME                DRIVER
        412c2496d0eb        mhs-demo1/host      host
        dd51763e6dd2        mhs-demo0/bridge    bridge
        6b07d0be843f        my-net              overlay
        b4234109bd9b        mhs-demo0/none      null
        1aeead6dd890        mhs-demo0/host      host
        d0bb78cbe7bd        mhs-demo1/bridge    bridge
        1c0eb8f69ebb        mhs-demo1/none      null

    As you are in the Swarm master environment, you see all the networks on all
    the Swarm agents: the default networks on each engine and the single overlay
    network. Notice that each `NETWORK ID` is unique.

5. Switch to each Swarm agent in turn and list the networks.

        $ eval $(docker-machine env mhs-demo0)
        $ docker network ls
        NETWORK ID          NAME                DRIVER
        6b07d0be843f        my-net              overlay
        dd51763e6dd2        bridge              bridge
        b4234109bd9b        none                null
        1aeead6dd890        host                host
        $ eval $(docker-machine env mhs-demo1)
        $ docker network ls
        NETWORK ID          NAME                DRIVER
        d0bb78cbe7bd        bridge              bridge
        1c0eb8f69ebb        none                null
        412c2496d0eb        host                host
        6b07d0be843f        my-net              overlay

    Both agents report they have the `my-net` network with the `6b07d0be843f` ID.

You now have a multi-host container network running!
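You can also inspect the new network from any host in the cluster to see its
driver and scope. The output below is abridged and illustrative; the ID will
differ on your system and the exact fields vary with your Engine version:

    $ docker network inspect my-net
    [
        {
            "Name": "my-net",
            "Id": "6b07d0be843f...",
            "Scope": "global",
            "Driver": "overlay",
            ...
        }
    ]

Because the network's state lives in the key-value store, the same network is
visible from every Engine in the cluster.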
## Step 4: Run an application on your network

Once your network is created, you can start a container on any of the hosts and it automatically is part of the network.

1. Point your environment to the Swarm master.

        $ eval $(docker-machine env --swarm mhs-demo0)

2. Start an Nginx web server on the `mhs-demo0` instance.

        $ docker run -itd --name=web --net=my-net --env="constraint:node==mhs-demo0" nginx

3. Run a BusyBox instance on the `mhs-demo1` instance and get the contents of the Nginx server's home page.

        $ docker run -it --rm --net=my-net --env="constraint:node==mhs-demo1" busybox wget -O- http://web
        Unable to find image 'busybox:latest' locally
        latest: Pulling from library/busybox
        ab2b8a86ca6c: Pull complete
        2c5ac3f849df: Pull complete
        Digest: sha256:5551dbdfc48d66734d0f01cafee0952cb6e8eeecd1e2492240bf2fd9640c2279
        Status: Downloaded newer image for busybox:latest
        Connecting to web (10.0.0.2:80)
        <!DOCTYPE html>
        <html>
        <head>
        <title>Welcome to nginx!</title>
        <style>
        body {
            width: 35em;
            margin: 0 auto;
            font-family: Tahoma, Verdana, Arial, sans-serif;
        }
        </style>
        </head>
        <body>
        <h1>Welcome to nginx!</h1>
        <p>If you see this page, the nginx web server is successfully installed and
        working. Further configuration is required.</p>
        <p>For online documentation and support please refer to
        <a href="http://nginx.org/">nginx.org</a>.<br/>
        Commercial support is available at
        <a href="http://nginx.com/">nginx.com</a>.</p>
        <p><em>Thank you for using nginx.</em></p>
        </body>
        </html>
        -                    100% |*******************************|   612   0:00:00 ETA

## Step 5: Check external connectivity

As you've seen, Docker's built-in overlay network driver provides out-of-the-box
connectivity between the containers on multiple hosts within the same network.
Additionally, containers connected to the multi-host network are automatically
connected to the `docker_gwbridge` network. This network allows the containers
to have external connectivity outside of their cluster.

1. Change your environment to the Swarm agent.

        $ eval $(docker-machine env mhs-demo1)

2. View the `docker_gwbridge` network by listing the networks.

        $ docker network ls
        NETWORK ID          NAME                DRIVER
        6b07d0be843f        my-net              overlay
        d0bb78cbe7bd        bridge              bridge
        1c0eb8f69ebb        none                null
        412c2496d0eb        host                host
        e1dbd5dff8be        docker_gwbridge     bridge

3. Repeat steps 1 and 2 on the Swarm master.

        $ eval $(docker-machine env mhs-demo0)
        $ docker network ls
        NETWORK ID          NAME                DRIVER
        6b07d0be843f        my-net              overlay
        dd51763e6dd2        bridge              bridge
        b4234109bd9b        none                null
        1aeead6dd890        host                host
        97102a22e8d2        docker_gwbridge     bridge

4. Check the Nginx container's network interfaces.

        $ docker exec web ip addr
        1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default
            link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
            inet 127.0.0.1/8 scope host lo
               valid_lft forever preferred_lft forever
            inet6 ::1/128 scope host
               valid_lft forever preferred_lft forever
        22: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue state UP group default
            link/ether 02:42:0a:00:09:03 brd ff:ff:ff:ff:ff:ff
            inet 10.0.9.3/24 scope global eth0
               valid_lft forever preferred_lft forever
            inet6 fe80::42:aff:fe00:903/64 scope link
               valid_lft forever preferred_lft forever
        24: eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default
            link/ether 02:42:ac:12:00:02 brd ff:ff:ff:ff:ff:ff
            inet 172.18.0.2/16 scope global eth1
               valid_lft forever preferred_lft forever
            inet6 fe80::42:acff:fe12:2/64 scope link
               valid_lft forever preferred_lft forever

    The `eth0` interface represents the container interface that is connected
    to the `my-net` overlay network, while the `eth1` interface represents the
    container interface that is connected to the `docker_gwbridge` network.
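To see that external traffic actually leaves through `docker_gwbridge`, you can
check the container's routing table and try to reach an outside address. This is
an optional check: the routes shown below are illustrative (they assume the
addresses above), and the ping assumes your VirtualBox VMs have outbound network
access.

    $ docker exec web ip route
    default via 172.18.0.1 dev eth1
    10.0.9.0/24 dev eth0  proto kernel  scope link  src 10.0.9.3
    172.18.0.0/16 dev eth1  proto kernel  scope link  src 172.18.0.2

    $ docker run -it --rm --net=my-net busybox ping -c 3 8.8.8.8

The default route points to `172.18.0.1`, the gateway address on the
`docker_gwbridge` network, so any traffic that is not destined for the overlay
leaves the container through `eth1`.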
## Step 6: Extra credit with Docker Compose

You can try starting a second network on your existing Swarm cluster using Docker Compose.

1. If you haven't already, install Docker Compose.

2. Change your environment to the Swarm master.

        $ eval $(docker-machine env --swarm mhs-demo0)

3. Create a `docker-compose.yml` file.

4. Add the following content to the file.

        web:
          image: bfirsh/compose-mongodb-demo
          environment:
            - "MONGO_HOST=counter_mongo_1"
            - "constraint:node==mhs-demo0"
          ports:
            - "80:5000"
        mongo:
          image: mongo

5. Save and close the file.

6. Start the application with Compose.

        $ docker-compose --x-networking --project-name=counter up -d

7. Get the Swarm master's IP address.

        $ docker-machine ip mhs-demo0

8. Enter the IP address into your web browser.

    Upon success, the browser should display the web application.

## Related information

* [Understand Docker container networks](dockernetworks.md)
* [Work with network commands](work-with-networks.md)
* [Docker Swarm overview](https://docs.docker.com/swarm)
* [Docker Machine overview](https://docs.docker.com/machine)