# Vagrant Setup to Test the Overlay Driver

This document describes how to use Vagrant to start a three-node setup for testing Docker networking.

## Pre-requisites

This was tested on:

- Vagrant 1.7.2
- VirtualBox 4.3.26

## Machine Setup

The Vagrantfile provided will start three virtual machines. One acts as a Consul server, and the other two act as Docker hosts.
The experimental version of Docker is installed.

- `consul-server` is the Consul server node, based on Ubuntu 14.04, with IP 192.168.33.10
- `net-1` is the first Docker host, based on Ubuntu 14.10, with IP 192.168.33.11
- `net-2` is the second Docker host, based on Ubuntu 14.10, with IP 192.168.33.12

## Getting Started

Clone this repo, change to the `docs` directory and let Vagrant do the work.

    $ vagrant up
    $ vagrant status
    Current machine states:

    consul-server             running (virtualbox)
    net-1                     running (virtualbox)
    net-2                     running (virtualbox)

You are now ready to SSH to the Docker hosts and start containers.

    $ vagrant ssh net-1
    vagrant@net-1:~$ docker version
    Client version: 1.8.0-dev
    ...<snip>...

Check that Docker networking is functional by listing the default networks:

    vagrant@net-1:~$ docker network ls
    NETWORK ID          NAME                TYPE
    4275f8b3a821        none                null
    80eba28ed4a7        host                host
    64322973b4aa        bridge              bridge

No services have been published so far, so `docker service ls` returns an empty list:

    $ docker service ls
    SERVICE ID          NAME                NETWORK             CONTAINER

Start a container and check the content of `/etc/hosts`.

    $ docker run -it --rm ubuntu:14.04 bash
    root@df479e660658:/# cat /etc/hosts
    172.21.0.3	df479e660658
    127.0.0.1	localhost
    ::1	localhost ip6-localhost ip6-loopback
    fe00::0	ip6-localnet
    ff00::0	ip6-mcastprefix
    ff02::1	ip6-allnodes
    ff02::2	ip6-allrouters
    172.21.0.3	distracted_bohr
    172.21.0.3	distracted_bohr.multihost

In a separate terminal on `net-1`, list the networks again. You will see that the _multihost_ overlay now appears.
The overlay network _multihost_ is your default network. This was set up by the Docker daemon during the Vagrant provisioning. Check `/etc/default/docker` to see the options that were set.

    vagrant@net-1:~$ docker network ls
    NETWORK ID          NAME                TYPE
    4275f8b3a821        none                null
    80eba28ed4a7        host                host
    64322973b4aa        bridge              bridge
    b5c9f05f1f8f        multihost           overlay

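To see exactly which daemon flags the provisioning wrote, you can inspect that file directly on the host. This is a sketch: the precise flag names depend on what the Vagrant provisioner put in `DOCKER_OPTS`, so expect some variation.

```shell
# Show the daemon configuration written during provisioning.
cat /etc/default/docker
# Narrow to the DOCKER_OPTS line, which carries the daemon flags
# (the default-network and key-value store settings live here).
grep DOCKER_OPTS /etc/default/docker
```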
Now, in a separate terminal, SSH to `net-2` and check the networks and services. The networks will be the same, and the default network will also be _multihost_ of type overlay. But the service list will show the container started on `net-1`:

    $ vagrant ssh net-2
    vagrant@net-2:~$ docker service ls
    SERVICE ID          NAME                NETWORK             CONTAINER
    b00f2bfd81ac        distracted_bohr     multihost           df479e660658

Start a container on `net-2` and check the `/etc/hosts`.

    vagrant@net-2:~$ docker run -ti --rm ubuntu:14.04 bash
    root@2ac726b4ce60:/# cat /etc/hosts
    172.21.0.4	2ac726b4ce60
    127.0.0.1	localhost
    ::1	localhost ip6-localhost ip6-loopback
    fe00::0	ip6-localnet
    ff00::0	ip6-mcastprefix
    ff02::1	ip6-allnodes
    ff02::2	ip6-allrouters
    172.21.0.3	distracted_bohr
    172.21.0.3	distracted_bohr.multihost
    172.21.0.4	modest_curie
    172.21.0.4	modest_curie.multihost

You will see not only the container that you just started on `net-2` but also the container that you started earlier on `net-1`.
And of course you will be able to ping each container.

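Since both containers appear in each other's `/etc/hosts`, they can reach one another by name. For instance, from inside the container running on `net-2` (the names below are the auto-generated ones from the run above; yours will differ):

```shell
# Ping the container on net-1 by the short name Docker wrote into /etc/hosts.
ping -c 3 distracted_bohr
# The fully qualified <name>.<network> entry resolves as well.
ping -c 3 distracted_bohr.multihost
```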
## Creating a Non-Default Overlay Network

In the previous test we started containers with the regular options `-ti --rm`, and these containers got placed automatically in the default network, which was set to be the _multihost_ network of type overlay.

But you can also create your own overlay network and start containers in it. Let's create a new overlay network.
On one of your Docker hosts, `net-1` or `net-2`, do:

    $ docker network create -d overlay foobar
    8805e22ad6e29cd7abb95597c91420fdcac54f33fcdd6fbca6dd4ec9710dd6a4
    $ docker network ls
    NETWORK ID          NAME                TYPE
    a77e16a1e394        host                host
    684a4bb4c471        bridge              bridge
    8805e22ad6e2        foobar              overlay
    b5c9f05f1f8f        multihost           overlay
    67d5a33a2e54        none                null

The second host will automatically see this network as well. To start a container on this new network, simply use the `--publish-service` option of `docker run` like so:

    $ docker run -it --rm --publish-service=bar.foobar.overlay ubuntu:14.04 bash

Note that you could directly start a container with a new overlay using the `--publish-service` option, and it will create the network automatically.

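As a sketch of that shortcut: the network named in `--publish-service` does not have to exist beforehand. Here `baz` and `qux` are hypothetical names chosen for illustration:

```shell
# The <service>.<network>.<driver> triple creates the 'qux' overlay network
# on the fly if it does not exist yet (service and network names are hypothetical).
docker run -it --rm --publish-service=baz.qux.overlay ubuntu:14.04 bash
# Back on the host, 'qux' now shows up alongside the other networks.
docker network ls
```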
Check the Docker services now:

    $ docker service ls
    SERVICE ID          NAME                NETWORK             CONTAINER
    b1ffdbfb1ac6        bar                 foobar              6635a3822135

Repeat the getting-started steps by starting another container in this new overlay on the other host, check the `/etc/hosts` file, and try to ping each container.

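Those steps can be sketched as follows, run from the other Docker host (the service name `bar2` is a hypothetical choice; `bar` is the service published from the first host above):

```shell
# Join the same 'foobar' overlay with a new service from the other host.
docker run -it --rm --publish-service=bar2.foobar.overlay ubuntu:14.04 bash
# Inside the container: both services should be listed with their overlay IPs.
cat /etc/hosts
# And the service published on the first host is reachable by name.
ping -c 3 bar
```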
## A look at the interfaces

This new Docker multihost networking is made possible via VXLAN tunnels and the use of network namespaces.
Check the [design](design.md) documentation for all the details. But to explore these concepts a bit, nothing beats an example.

With a running container in one overlay, check the network namespace:

    $ docker inspect -f '{{ .NetworkSettings.SandboxKey}}' 6635a3822135
    /var/run/docker/netns/6635a3822135

This is a non-default location for network namespaces, which might confuse things a bit. So let's become root, head over to the directory that contains the network namespaces of the containers, and check the interfaces:

    $ sudo su
    root@net-2:/home/vagrant# cd /var/run/docker/
    root@net-2:/var/run/docker# ls netns
    6635a3822135
    8805e22ad6e2

To be able to check the interfaces in those network namespaces with the `ip` command, create a symlink named `netns` that points to `/var/run/docker/netns`:

    root@net-2:/var/run# ln -s /var/run/docker/netns netns
    root@net-2:/var/run# ip netns show
    6635a3822135
    8805e22ad6e2

The two namespace IDs returned are that of the running container on this host and that of the overlay network the container is in.
Let's check the interfaces in the container:

    root@net-2:/var/run/docker# ip netns exec 6635a3822135 ip addr show eth0
    15: eth0: <BROADCAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default 
        link/ether 02:42:b3:91:22:c3 brd ff:ff:ff:ff:ff:ff
        inet 172.21.0.5/16 scope global eth0
           valid_lft forever preferred_lft forever
        inet6 fe80::42:b3ff:fe91:22c3/64 scope link 
           valid_lft forever preferred_lft forever

Indeed, we get back the network interface of our running container: same MAC address, same IP.
If we check the links in the overlay namespace, we see our vxlan interface and the VXLAN ID being used.

    root@net-2:/var/run/docker# ip netns exec 8805e22ad6e2 ip -d link show
    ...<snip>...
    14: vxlan1: <BROADCAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master br0 state UNKNOWN mode DEFAULT group default 
        link/ether 7a:af:20:ee:e3:81 brd ff:ff:ff:ff:ff:ff promiscuity 1 
        vxlan id 256 srcport 32768 61000 dstport 8472 proxy l2miss l3miss ageing 300 
        bridge_slave 
    16: veth2: <BROADCAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast master br0 state UP mode DEFAULT group default qlen 1000
        link/ether 46:b1:e2:5c:48:a8 brd ff:ff:ff:ff:ff:ff promiscuity 1 
        veth 
        bridge_slave  

If you sniff packets on these interfaces you will see the traffic between your containers.
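For example, assuming `tcpdump` is installed on the host (this is an assumption; it may not be in the base box), you can capture traffic while pinging from one container to the other. The namespace and interface names below are the ones found above, and `eth1` is assumed to be the Vagrant private-network interface; yours may differ.

```shell
# Capture a few packets on the vxlan interface inside the overlay namespace.
ip netns exec 8805e22ad6e2 tcpdump -c 10 -i vxlan1
# Alternatively, capture on the host's private-network interface to see the
# outer VXLAN/UDP encapsulation (dstport 8472, per the link details above).
tcpdump -c 10 -i eth1 'udp port 8472'
```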