<!--[metadata]>
+++
title = "Get started with multi-host networking"
description = "Use overlay for multi-host networking"
keywords = ["Examples, Usage, network, docker, documentation, user guide, multihost, cluster"]
[menu.main]
parent = "smn_networking"
weight=-3
+++
<![end-metadata]-->

# Get started with multi-host networking

This article uses an example to explain the basics of creating a multi-host
network. Docker Engine supports multi-host networking out-of-the-box through
the `overlay` network driver. Unlike `bridge` networks, overlay networks
require some pre-existing conditions before you can create one. These
conditions are:

* Access to a key-value store. Docker supports Consul, Etcd, and ZooKeeper distributed key-value stores.
* A cluster of hosts with connectivity to the key-value store.
* A properly configured Engine `daemon` on each host in the cluster.
* Hosts within the cluster must have unique hostnames because the key-value store uses the hostnames to identify cluster members.

Though Docker Machine and Docker Swarm are not mandatory to experience Docker
multi-host networking, this example uses them to illustrate how they are
integrated. You'll use Machine to create both the key-value store server and
the host cluster, and the cluster you create is a Swarm cluster.

## Prerequisites

Before you begin, make sure you have a system on your network with the latest
version of Docker Engine and Docker Machine installed. The example also relies
on VirtualBox. If you installed Docker Toolbox on Mac or Windows, you have all
of these installed already.

If you have not already done so, upgrade Docker Engine and Docker Machine to
the latest versions.

## Step 1: Set up a key-value store

An overlay network requires a key-value store. The key-value store holds
information about the network state, including discovery, networks, endpoints,
IP addresses, and more. Docker supports Consul, Etcd, and ZooKeeper key-value
stores. This example uses Consul.

1. Log into a system prepared with the prerequisite Docker Engine, Docker Machine, and VirtualBox software.

2. Provision a VirtualBox machine called `mh-keystore`.

        $ docker-machine create -d virtualbox mh-keystore

    When you provision a new machine, the process adds Docker Engine to the
    host. This means that rather than installing Consul manually, you can
    create an instance using the [consul image from Docker
    Hub](https://hub.docker.com/r/progrium/consul/). You'll do this in the
    next step.

3. Set your local environment to the `mh-keystore` machine.

        $ eval "$(docker-machine env mh-keystore)"

4. Start a `progrium/consul` container running on the `mh-keystore` machine.

        $ docker run -d \
            -p "8500:8500" \
            -h "consul" \
            progrium/consul -server -bootstrap

    This command starts a container from the `progrium/consul` image on the
    `mh-keystore` machine. The server is named `consul` and listens on port
    `8500`.

5. Run the `docker ps` command to see the `consul` container.

        $ docker ps

        CONTAINER ID        IMAGE               COMMAND                  CREATED             STATUS              PORTS                                                                            NAMES
        4d51392253b3        progrium/consul     "/bin/start -server -"   25 minutes ago      Up 25 minutes       53/tcp, 53/udp, 8300-8302/tcp, 0.0.0.0:8500->8500/tcp, 8400/tcp, 8301-8302/udp   admiring_panini

    Keep your terminal open and move on to the next step.
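If you want to verify that the key-value store is reachable before you build
the cluster, one optional check (not part of the original walkthrough) is to
query Consul's standard HTTP status endpoint through the port you just
published:

    # Ask Consul for its current leader; a non-empty address such as
    # "172.17.0.2:8300" means the key-value store is up and answering.
    $ curl "http://$(docker-machine ip mh-keystore):8500/v1/status/leader"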
## Step 2: Create a Swarm cluster

In this step, you use `docker-machine` to provision the hosts for your network.
At this point, you won't actually create the network. You'll create several
machines in VirtualBox. One of the machines will act as the Swarm master;
you'll create that first. As you create each host, you'll pass the Engine on
that machine the options that the `overlay` network driver requires.

1. Create a Swarm master.

        $ docker-machine create \
            -d virtualbox \
            --swarm --swarm-master \
            --swarm-discovery="consul://$(docker-machine ip mh-keystore):8500" \
            --engine-opt="cluster-store=consul://$(docker-machine ip mh-keystore):8500" \
            --engine-opt="cluster-advertise=eth1:2376" \
            mhs-demo0

    At creation time, you supply the Engine `daemon` with the `--cluster-store`
    option. This option tells the Engine the location of the key-value store
    for the `overlay` network. The bash expansion `$(docker-machine ip
    mh-keystore)` resolves to the IP address of the Consul server you created
    in Step 1. The `--cluster-advertise` option advertises the machine on the
    network.

2. Create another host and add it to the Swarm cluster.

        $ docker-machine create -d virtualbox \
            --swarm \
            --swarm-discovery="consul://$(docker-machine ip mh-keystore):8500" \
            --engine-opt="cluster-store=consul://$(docker-machine ip mh-keystore):8500" \
            --engine-opt="cluster-advertise=eth1:2376" \
            mhs-demo1

3. List your machines to confirm they are all up and running.

        $ docker-machine ls

        NAME          ACTIVE   DRIVER       STATE     URL                         SWARM
        default       -        virtualbox   Running   tcp://192.168.99.100:2376
        mh-keystore   *        virtualbox   Running   tcp://192.168.99.103:2376
        mhs-demo0     -        virtualbox   Running   tcp://192.168.99.104:2376   mhs-demo0 (master)
        mhs-demo1     -        virtualbox   Running   tcp://192.168.99.105:2376   mhs-demo0

At this point you have a set of hosts running on your network. You are ready
to create a multi-host network for containers using these hosts.

Leave your terminal open and go on to the next step.
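You can also confirm that each Engine daemon actually received the cluster
options. As an optional sanity check that is not part of the original
walkthrough, look at the daemon's command line over SSH:

    # The daemon's process arguments should include the --cluster-store and
    # --cluster-advertise values you passed through --engine-opt.
    $ docker-machine ssh mhs-demo0 "ps aux | grep cluster-store"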
## Step 3: Create the overlay network

To create an overlay network:

1. Set your docker environment to the Swarm master.

        $ eval $(docker-machine env --swarm mhs-demo0)

    Using the `--swarm` flag with `docker-machine` restricts the `docker`
    commands to Swarm information alone.

2. Use the `docker info` command to view the Swarm.

        $ docker info

        Containers: 3
        Images: 2
        Role: primary
        Strategy: spread
        Filters: affinity, health, constraint, port, dependency
        Nodes: 2
         mhs-demo0: 192.168.99.104:2376
          └ Containers: 2
          └ Reserved CPUs: 0 / 1
          └ Reserved Memory: 0 B / 1.021 GiB
          └ Labels: executiondriver=native-0.2, kernelversion=4.1.10-boot2docker, operatingsystem=Boot2Docker 1.9.0 (TCL 6.4); master : 4187d2c - Wed Oct 14 14:00:28 UTC 2015, provider=virtualbox, storagedriver=aufs
         mhs-demo1: 192.168.99.105:2376
          └ Containers: 1
          └ Reserved CPUs: 0 / 1
          └ Reserved Memory: 0 B / 1.021 GiB
          └ Labels: executiondriver=native-0.2, kernelversion=4.1.10-boot2docker, operatingsystem=Boot2Docker 1.9.0 (TCL 6.4); master : 4187d2c - Wed Oct 14 14:00:28 UTC 2015, provider=virtualbox, storagedriver=aufs
        CPUs: 2
        Total Memory: 2.043 GiB
        Name: 30438ece0915

    From this information, you can see that you are running three containers
    and two images on the master.

3. Create your `overlay` network.

        $ docker network create --driver overlay --subnet=10.0.9.0/24 my-net

    You only need to create the network on a single host in the cluster. In
    this case, you used the Swarm master, but you could easily have run the
    command on any host in the cluster.

    > **Note**: It is highly recommended to use the `--subnet` option when
    > creating a network. If `--subnet` is not specified, the Docker daemon
    > automatically chooses and assigns a subnet for the network, and it could
    > overlap with another subnet in your infrastructure that is not managed
    > by Docker. Such overlaps can cause connectivity issues or failures when
    > containers are connected to that network.

4. Check that the network is running:

        $ docker network ls

        NETWORK ID          NAME                DRIVER
        412c2496d0eb        mhs-demo1/host      host
        dd51763e6dd2        mhs-demo0/bridge    bridge
        6b07d0be843f        my-net              overlay
        b4234109bd9b        mhs-demo0/none      null
        1aeead6dd890        mhs-demo0/host      host
        d0bb78cbe7bd        mhs-demo1/bridge    bridge
        1c0eb8f69ebb        mhs-demo1/none      null

    As you are in the Swarm master environment, you see all the networks on
    all the Swarm agents: the default networks on each engine and the single
    overlay network. Notice that each `NETWORK ID` is unique.

5. Switch to each Swarm agent in turn and list the networks.

        $ eval $(docker-machine env mhs-demo0)

        $ docker network ls

        NETWORK ID          NAME                DRIVER
        6b07d0be843f        my-net              overlay
        dd51763e6dd2        bridge              bridge
        b4234109bd9b        none                null
        1aeead6dd890        host                host

        $ eval $(docker-machine env mhs-demo1)

        $ docker network ls

        NETWORK ID          NAME                DRIVER
        d0bb78cbe7bd        bridge              bridge
        1c0eb8f69ebb        none                null
        412c2496d0eb        host                host
        6b07d0be843f        my-net              overlay

    Both agents report they have the `my-net` network with the `6b07d0be843f`
    ID. You now have a multi-host container network running!
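If you want to double-check the network's configuration before starting
containers, you can optionally inspect it from either host. This step is not
part of the original walkthrough; `docker network inspect` reports, among
other details, the `overlay` driver and the `10.0.9.0/24` subnet you chose
above:

    $ docker network inspect my-net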
## Step 4: Run an application on your network

Once your network is created, you can start a container on any of the hosts
and it is automatically part of the network.

1. Point your environment to the Swarm master.

        $ eval $(docker-machine env --swarm mhs-demo0)

2. Start an Nginx web server on the `mhs-demo0` instance.

        $ docker run -itd --name=web --network=my-net --env="constraint:node==mhs-demo0" nginx

3. Run a BusyBox instance on the `mhs-demo1` instance and get the contents of the Nginx server's home page.

        $ docker run -it --rm --network=my-net --env="constraint:node==mhs-demo1" busybox wget -O- http://web

        Unable to find image 'busybox:latest' locally
        latest: Pulling from library/busybox
        ab2b8a86ca6c: Pull complete
        2c5ac3f849df: Pull complete
        Digest: sha256:5551dbdfc48d66734d0f01cafee0952cb6e8eeecd1e2492240bf2fd9640c2279
        Status: Downloaded newer image for busybox:latest
        Connecting to web (10.0.0.2:80)
        <!DOCTYPE html>
        <html>
        <head>
        <title>Welcome to nginx!</title>
        <style>
        body {
            width: 35em;
            margin: 0 auto;
            font-family: Tahoma, Verdana, Arial, sans-serif;
        }
        </style>
        </head>
        <body>
        <h1>Welcome to nginx!</h1>
        <p>If you see this page, the nginx web server is successfully installed and
        working. Further configuration is required.</p>

        <p>For online documentation and support please refer to
        <a href="http://nginx.org/">nginx.org</a>.<br/>
        Commercial support is available at
        <a href="http://nginx.com/">nginx.com</a>.</p>

        <p><em>Thank you for using nginx.</em></p>
        </body>
        </html>
        -                    100% |*******************************|   612   0:00:00 ETA

## Step 5: Check external connectivity

As you've seen, Docker's built-in overlay network driver provides out-of-the-box
connectivity between the containers on multiple hosts within the same network.
Additionally, containers connected to the multi-host network are automatically
connected to the `docker_gwbridge` network. This network allows the containers
to have external connectivity outside of their cluster.

1. Change your environment to the Swarm agent.

        $ eval $(docker-machine env mhs-demo1)

2. View the `docker_gwbridge` network by listing the networks.

        $ docker network ls

        NETWORK ID          NAME                DRIVER
        6b07d0be843f        my-net              overlay
        dd51763e6dd2        bridge              bridge
        b4234109bd9b        none                null
        1aeead6dd890        host                host
        e1dbd5dff8be        docker_gwbridge     bridge

3. Repeat steps 1 and 2 on the Swarm master.

        $ eval $(docker-machine env mhs-demo0)

        $ docker network ls

        NETWORK ID          NAME                DRIVER
        6b07d0be843f        my-net              overlay
        d0bb78cbe7bd        bridge              bridge
        1c0eb8f69ebb        none                null
        412c2496d0eb        host                host
        97102a22e8d2        docker_gwbridge     bridge

4. Check the Nginx container's network interfaces.

        $ docker exec web ip addr

        1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default
            link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
            inet 127.0.0.1/8 scope host lo
               valid_lft forever preferred_lft forever
            inet6 ::1/128 scope host
               valid_lft forever preferred_lft forever
        22: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue state UP group default
            link/ether 02:42:0a:00:09:03 brd ff:ff:ff:ff:ff:ff
            inet 10.0.9.3/24 scope global eth0
               valid_lft forever preferred_lft forever
            inet6 fe80::42:aff:fe00:903/64 scope link
               valid_lft forever preferred_lft forever
        24: eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default
            link/ether 02:42:ac:12:00:02 brd ff:ff:ff:ff:ff:ff
            inet 172.18.0.2/16 scope global eth1
               valid_lft forever preferred_lft forever
            inet6 fe80::42:acff:fe12:2/64 scope link
               valid_lft forever preferred_lft forever

    The `eth0` interface represents the container interface that is connected
    to the `my-net` overlay network, while the `eth1` interface represents the
    container interface that is connected to the `docker_gwbridge` network.
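To see this external connectivity in action, you can run one more optional
check that is not part of the original walkthrough: start a disposable
container on `my-net` and reach an address outside the cluster. Its traffic
leaves the host through the `eth1` interface on the `docker_gwbridge` network:

    # Ping an outside address from a throwaway BusyBox container attached to
    # the overlay network; replies arrive via docker_gwbridge (eth1).
    $ docker run --rm --network=my-net busybox ping -c 3 8.8.8.8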
## Step 6: Extra credit with Docker Compose

Refer to the networking feature introduced in the
[Compose V2 format](https://docs.docker.com/compose/networking/) and execute
the multi-host networking scenario in the Swarm cluster used above.

## Related information

* [Understand Docker container networks](dockernetworks.md)
* [Work with network commands](work-with-networks.md)
* [Docker Swarm overview](https://docs.docker.com/swarm)
* [Docker Machine overview](https://docs.docker.com/machine)