:title: Adding/Removing Hosts
:description: Considerations for adding or removing Deis hosts.

.. _add_remove_host:

Adding/Removing Hosts
=====================

Most Deis components handle new machines just fine. Care has to be taken when removing machines from
the cluster, however, since the deis-store components act as the backing store for all the
stateful data Deis needs to function properly.

Note that these instructions follow the Ceph documentation for `removing monitors`_ and `removing OSDs`_.
Should these instructions differ significantly from the Ceph documentation, the Ceph documentation
should be followed, and a PR to update this documentation would be much appreciated.

Since Ceph uses the Paxos algorithm, it is important to always have enough monitors in the cluster
to be able to achieve a majority (shown here as majority:total): 1:1, 2:3, 3:4, 3:5, 4:6, and so on.
It is always preferable to add a new node to the cluster before removing an old one, if possible.

This documentation assumes a running three-node Deis cluster.
We will add a fourth machine to the cluster, then remove the first machine.

Inspecting health
-----------------

Before we begin, we should check the state of the Ceph cluster to be sure it's healthy.
We can do this by logging into any machine in the cluster, entering a store container, and then querying Ceph:

.. code-block:: console

    core@deis-1 ~ $ nse deis-store-monitor
    root@deis-1:/# ceph -s
        cluster 20038e38-4108-4e79-95d4-291d0eef2949
         health HEALTH_OK
         monmap e3: 3 mons at {deis-1=172.17.8.100:6789/0,deis-2=172.17.8.101:6789/0,deis-3=172.17.8.102:6789/0}, election epoch 16, quorum 0,1,2 deis-1,deis-2,deis-3
         mdsmap e10: 1/1/1 up {0=deis-2=up:active}, 2 up:standby
         osdmap e36: 3 osds: 3 up, 3 in
          pgmap v2096: 1344 pgs, 12 pools, 369 MB data, 448 objects
                24198 MB used, 23659 MB / 49206 MB avail
                1344 active+clean

We see from the ``pgmap`` that we have 1344 placement groups, all of which are ``active+clean``. This is good!
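
If you only need the summary, ``ceph health`` prints just the status line:

.. code-block:: console

    root@deis-1:/# ceph health
    HEALTH_OK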

Adding a node
-------------

To add a new node to your Deis cluster, simply provision a new CoreOS machine with the same
etcd discovery URL specified in the cloud-config file. When the new machine comes up, it will join the etcd cluster.
You can confirm this with ``fleetctl list-machines``.
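
The output should now list four machines (the machine IDs shown here are illustrative):

.. code-block:: console

    core@deis-1 ~ $ fleetctl list-machines
    MACHINE         IP              METADATA
    29db5da1...     172.17.8.100    -
    7c22fd2e...     172.17.8.101    -
    9a62be21...     172.17.8.102    -
    f6f8ee33...     172.17.8.103    -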

Since the store components are global units, they will be automatically started on the new node.
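
You can verify this with ``fleetctl list-units``; the new machine should be running its own copy
of each global store unit (the machine ID and unit states here are illustrative):

.. code-block:: console

    core@deis-1 ~ $ fleetctl list-units | grep 172.17.8.103
    deis-store-daemon.service       f6f8ee33.../172.17.8.103    active    running
    deis-store-metadata.service     f6f8ee33.../172.17.8.103    active    running
    deis-store-monitor.service      f6f8ee33.../172.17.8.103    active    running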

Once the new machine is running, we can inspect the Ceph cluster health again:

.. code-block:: console

    root@deis-1:/# ceph -s
        cluster 20038e38-4108-4e79-95d4-291d0eef2949
         health HEALTH_WARN 4 pgs recovering; 7 pgs recovery_wait; 31 pgs stuck unclean; recovery 325/1353 objects degraded (24.021%); clock skew detected on mon.deis-4
         monmap e4: 4 mons at {deis-1=172.17.8.100:6789/0,deis-2=172.17.8.101:6789/0,deis-3=172.17.8.102:6789/0,deis-4=172.17.8.103:6789/0}, election epoch 20, quorum 0,1,2,3 deis-1,deis-2,deis-3,deis-4
         mdsmap e11: 1/1/1 up {0=deis-2=up:active}, 3 up:standby
         osdmap e40: 4 osds: 4 up, 4 in
          pgmap v2172: 1344 pgs, 12 pools, 370 MB data, 451 objects
                29751 MB used, 34319 MB / 65608 MB avail
                325/1353 objects degraded (24.021%)
                  88 active
                   7 active+recovery_wait
                1245 active+clean
                   4 active+recovering
      recovery io 2302 kB/s, 2 objects/s
      client io 204 B/s wr, 0 op/s

Note that we are in a ``HEALTH_WARN`` state while placement groups recover: Ceph is copying data
to our new node. We can query the status periodically until recovery completes. Then, we should
see something like:

.. code-block:: console

    root@deis-1:/# ceph -s
        cluster 20038e38-4108-4e79-95d4-291d0eef2949
         health HEALTH_OK
         monmap e4: 4 mons at {deis-1=172.17.8.100:6789/0,deis-2=172.17.8.101:6789/0,deis-3=172.17.8.102:6789/0,deis-4=172.17.8.103:6789/0}, election epoch 20, quorum 0,1,2,3 deis-1,deis-2,deis-3,deis-4
         mdsmap e11: 1/1/1 up {0=deis-2=up:active}, 3 up:standby
         osdmap e40: 4 osds: 4 up, 4 in
          pgmap v2216: 1344 pgs, 12 pools, 372 MB data, 453 objects
                29749 MB used, 34324 MB / 65608 MB avail
                    1344 active+clean
      client io 409 B/s wr, 0 op/s

We're back to a ``HEALTH_OK`` state. Note the following:

.. code-block:: console

    monmap e4: 4 mons at {deis-1=172.17.8.100:6789/0,deis-2=172.17.8.101:6789/0,deis-3=172.17.8.102:6789/0,deis-4=172.17.8.103:6789/0}, election epoch 20, quorum 0,1,2,3 deis-1,deis-2,deis-3,deis-4
    mdsmap e11: 1/1/1 up {0=deis-2=up:active}, 3 up:standby
    osdmap e40: 4 osds: 4 up, 4 in

We now have 4 monitors, 4 OSDs, and 4 metadata servers. Hooray!
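
We can also confirm this by querying the monitor quorum directly from the same store container
(output abbreviated here):

.. code-block:: console

    root@deis-1:/# ceph quorum_status
    {"election_epoch":20,"quorum":[0,1,2,3],"quorum_names":["deis-1","deis-2","deis-3","deis-4"],...}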

.. note::

    If you have applied the `custom firewall script`_ to your cluster, you will have to run the
    script again and reboot your nodes so that iptables removes the duplicate rules.

Removing a node
---------------

When removing a node from the cluster that runs a deis-store component, you'll need to tell Ceph
that the store services on this host will be leaving the cluster.
In this example we're going to remove the first node in our cluster, deis-1.
That machine has an IP address of ``172.17.8.100``.

Removing an OSD
~~~~~~~~~~~~~~~

Before we can tell Ceph to remove an OSD, we need the OSD ID. We can get this from etcd:

.. code-block:: console

    core@deis-2 ~ $ etcdctl get /deis/store/osds/172.17.8.100
    2

.. note::

    In some cases, we may not know the IP or hostname of the machine we want to remove.
    In these cases, we can use ``ceph osd tree`` to see the current state of the cluster.
    This will list all the OSDs in the cluster and report which ones are down.
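
For reference, ``ceph osd tree`` output looks something like the following (the IDs, weights,
and hostnames shown here are illustrative):

.. code-block:: console

    root@deis-2:/# ceph osd tree
    # id    weight  type name       up/down reweight
    -1      4       root default
    -2      1               host deis-1
    2       1                       osd.2   up      1
    -3      1               host deis-2
    0       1                       osd.0   up      1
    -4      1               host deis-3
    1       1                       osd.1   up      1
    -5      1               host deis-4
    3       1                       osd.3   up      1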

Now that we have the OSD's ID, let's remove it. We'll need a shell in any store container
on any host in the cluster (except the one we're removing). In this example, we're on ``deis-2``.

.. code-block:: console

    core@deis-2 ~ $ nse deis-store-monitor
    root@deis-2:/# ceph osd out 2
    marked out osd.2.

This instructs Ceph to start relocating placement groups on that OSD to another host. We can watch this with ``ceph -w``:

.. code-block:: console

    root@deis-2:/# ceph -w
        cluster 20038e38-4108-4e79-95d4-291d0eef2949
         health HEALTH_WARN 4 pgs recovery_wait; 151 pgs stuck unclean; recovery 654/1365 objects degraded (47.912%); clock skew detected on mon.deis-4
         monmap e4: 4 mons at {deis-1=172.17.8.100:6789/0,deis-2=172.17.8.101:6789/0,deis-3=172.17.8.102:6789/0,deis-4=172.17.8.103:6789/0}, election epoch 20, quorum 0,1,2,3 deis-1,deis-2,deis-3,deis-4
         mdsmap e11: 1/1/1 up {0=deis-2=up:active}, 3 up:standby
         osdmap e42: 4 osds: 4 up, 3 in
         pgmap v2259: 1344 pgs, 12 pools, 373 MB data, 455 objects
                23295 MB used, 24762 MB / 49206 MB avail
                654/1365 objects degraded (47.912%)
                 151 active
                   4 active+recovery_wait
                1189 active+clean
      recovery io 1417 kB/s, 1 objects/s
      client io 113 B/s wr, 0 op/s

    2014-11-04 06:45:07.940731 mon.0 [INF] pgmap v2260: 1344 pgs: 142 active, 3 active+recovery_wait, 1199 active+clean; 373 MB data, 23301 MB used, 24757 MB / 49206 MB avail; 619/1365 objects degraded (45.348%); 1724 kB/s, 0 keys/s, 1 objects/s recovering
    2014-11-04 06:45:17.948788 mon.0 [INF] pgmap v2261: 1344 pgs: 141 active, 4 active+recovery_wait, 1199 active+clean; 373 MB data, 23301 MB used, 24757 MB / 49206 MB avail; 82 B/s rd, 0 op/s; 619/1365 objects degraded (45.348%); 843 kB/s, 0 keys/s, 0 objects/s recovering
    2014-11-04 06:45:18.962420 mon.0 [INF] pgmap v2262: 1344 pgs: 140 active, 5 active+recovery_wait, 1199 active+clean; 373 MB data, 23318 MB used, 24740 MB / 49206 MB avail; 371 B/s rd, 0 B/s wr, 0 op/s; 618/1365 objects degraded (45.275%); 0 B/s, 0 keys/s, 0 objects/s recovering
    2014-11-04 06:45:23.347089 mon.0 [INF] pgmap v2263: 1344 pgs: 130 active, 5 active+recovery_wait, 1209 active+clean; 373 MB data, 23331 MB used, 24727 MB / 49206 MB avail; 379 B/s rd, 0 B/s wr, 0 op/s; 572/1365 objects degraded (41.905%); 2323 kB/s, 0 keys/s, 4 objects/s recovering
    2014-11-04 06:45:37.970125 mon.0 [INF] pgmap v2264: 1344 pgs: 129 active, 4 active+recovery_wait, 1211 active+clean; 373 MB data, 23336 MB used, 24722 MB / 49206 MB avail; 568/1365 objects degraded (41.612%); 659 kB/s, 2 keys/s, 1 objects/s recovering
    2014-11-04 06:45:40.006110 mon.0 [INF] pgmap v2265: 1344 pgs: 129 active, 4 active+recovery_wait, 1211 active+clean; 373 MB data, 23336 MB used, 24722 MB / 49206 MB avail; 568/1365 objects degraded (41.612%); 11 B/s, 3 keys/s, 0 objects/s recovering
    2014-11-04 06:45:43.034215 mon.0 [INF] pgmap v2266: 1344 pgs: 129 active, 4 active+recovery_wait, 1211 active+clean; 373 MB data, 23344 MB used, 24714 MB / 49206 MB avail; 1010 B/s wr, 0 op/s; 568/1365 objects degraded (41.612%)
    2014-11-04 06:45:44.048059 mon.0 [INF] pgmap v2267: 1344 pgs: 129 active, 4 active+recovery_wait, 1211 active+clean; 373 MB data, 23344 MB used, 24714 MB / 49206 MB avail; 1766 B/s wr, 0 op/s; 568/1365 objects degraded (41.612%)
    2014-11-04 06:45:48.366555 mon.0 [INF] pgmap v2268: 1344 pgs: 129 active, 4 active+recovery_wait, 1211 active+clean; 373 MB data, 23345 MB used, 24713 MB / 49206 MB avail; 576 B/s wr, 0 op/s; 568/1365 objects degraded (41.612%)

Eventually, the cluster will return to a clean state and will once again report ``HEALTH_OK``.
Then, we can stop the daemon. Since the store units are global units, we can't target a specific
one to stop. Instead, we log into the host machine and instruct Docker to stop the container.

Reminder: make sure you're logged into the machine you're removing from the cluster!

.. code-block:: console

    core@deis-1 ~ $ docker stop deis-store-daemon
    deis-store-daemon
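
Since ``docker ps`` only lists running containers, an empty result confirms the daemon has stopped:

.. code-block:: console

    core@deis-1 ~ $ docker ps | grep deis-store-daemon
    core@deis-1 ~ $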

Back inside a store container on ``deis-2``, we can finally remove the OSD:

.. code-block:: console

    core@deis-2 ~ $ nse deis-store-monitor
    root@deis-2:/# ceph osd crush remove osd.2
    removed item id 2 name 'osd.2' from crush map
    root@deis-2:/# ceph auth del osd.2
    updated
    root@deis-2:/# ceph osd rm 2
    removed osd.2

For cleanup, we should remove the OSD entry from etcd:

.. code-block:: console

    core@deis-2 ~ $ etcdctl rm /deis/store/osds/172.17.8.100

That's it! If we inspect the health, we see that there are now three OSDs again, and all of our placement groups are ``active+clean``.

.. code-block:: console

    core@deis-2 ~ $ nse deis-store-monitor
    root@deis-2:/# ceph -s
        cluster 20038e38-4108-4e79-95d4-291d0eef2949
         health HEALTH_OK
         monmap e4: 4 mons at {deis-1=172.17.8.100:6789/0,deis-2=172.17.8.101:6789/0,deis-3=172.17.8.102:6789/0,deis-4=172.17.8.103:6789/0}, election epoch 20, quorum 0,1,2,3 deis-1,deis-2,deis-3,deis-4
         mdsmap e11: 1/1/1 up {0=deis-2=up:active}, 3 up:standby
         osdmap e46: 3 osds: 3 up, 3 in
          pgmap v2338: 1344 pgs, 12 pools, 375 MB data, 458 objects
                23596 MB used, 24465 MB / 49206 MB avail
                    1344 active+clean
      client io 326 B/s wr, 0 op/s

Removing a monitor
~~~~~~~~~~~~~~~~~~

Removing a monitor is much easier. First, we remove the etcd entry so that Ceph clients no longer
use this monitor when connecting:

.. code-block:: console

    $ etcdctl rm /deis/store/hosts/172.17.8.100

Within 5 seconds, confd will run on all store clients and remove the monitor from the ``ceph.conf`` configuration file.
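
To verify, you can check that the removed monitor's address is no longer rendered into
``ceph.conf`` on the remaining hosts. An empty ``grep`` result is what we want here (the exact
file contents depend on the confd template, so treat this as a sketch):

.. code-block:: console

    core@deis-2 ~ $ nse deis-store-monitor
    root@deis-2:/# grep 172.17.8.100 /etc/ceph/ceph.conf
    root@deis-2:/#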

Next, we stop the container:

.. code-block:: console

    core@deis-1 ~ $ docker stop deis-store-monitor
    deis-store-monitor

Back on another host, we can again enter a store container and then remove this monitor:

.. code-block:: console

    core@deis-2 ~ $ nse deis-store-monitor
    root@deis-2:/# ceph mon remove deis-1
    removed mon.deis-1 at 172.17.8.100:6789/0, there are now 3 monitors
    2014-11-04 06:57:59.712934 7f04bc942700  0 monclient: hunting for new mon
    2014-11-04 06:57:59.712934 7f04bc942700  0 monclient: hunting for new mon

Note that fault messages may follow - this is normal when a Ceph client is temporarily unable to
communicate with a monitor. The important line is ``removed mon.deis-1 at 172.17.8.100:6789/0, there are now 3 monitors``.
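
``ceph mon stat`` offers a quick one-line confirmation of the new monitor map (the epoch and
election numbers will vary):

.. code-block:: console

    root@deis-2:/# ceph mon stat
    e5: 3 mons at {deis-2=172.17.8.101:6789/0,deis-3=172.17.8.102:6789/0,deis-4=172.17.8.103:6789/0}, election epoch 26, quorum 0,1,2 deis-2,deis-3,deis-4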

Finally, let's check the health of the cluster:

.. code-block:: console

    root@deis-2:/# ceph -s
        cluster 20038e38-4108-4e79-95d4-291d0eef2949
         health HEALTH_OK
         monmap e5: 3 mons at {deis-2=172.17.8.101:6789/0,deis-3=172.17.8.102:6789/0,deis-4=172.17.8.103:6789/0}, election epoch 26, quorum 0,1,2 deis-2,deis-3,deis-4
         mdsmap e17: 1/1/1 up {0=deis-4=up:active}, 3 up:standby
         osdmap e47: 3 osds: 3 up, 3 in
          pgmap v2359: 1344 pgs, 12 pools, 375 MB data, 458 objects
                23605 MB used, 24455 MB / 49206 MB avail
                    1344 active+clean
      client io 816 B/s wr, 0 op/s

We're done!

Removing a metadata server
~~~~~~~~~~~~~~~~~~~~~~~~~~

Like the daemon, we'll just stop the Docker container for the metadata service.

Reminder: make sure you're logged into the machine you're removing from the cluster!

.. code-block:: console

    core@deis-1 ~ $ docker stop deis-store-metadata
    deis-store-metadata

This is actually all that's necessary. Ceph provides a ``ceph mds rm`` command, but provides little
documentation for it. See: http://docs.ceph.com/docs/giant/rados/operations/control/#mds-subsystem
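
If you'd like to confirm that the stopped metadata server has dropped out of the cluster,
``ceph mds stat`` prints a one-line summary; after a short delay, the standby count should drop
by one (the epoch number here is illustrative):

.. code-block:: console

    root@deis-2:/# ceph mds stat
    e18: 1/1/1 up {0=deis-4=up:active}, 2 up:standby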

Removing the host from etcd
~~~~~~~~~~~~~~~~~~~~~~~~~~~

The etcd cluster still has an entry for the host we've removed, so we'll need to remove this entry.
This can be achieved by making a request to the etcd API. See `remove machines`_ for details.
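
As a sketch, assuming the etcd 0.4 admin API is listening on port 7001 (list the machines first
to find the name of the entry to delete, and adjust the endpoint to your deployment; see the
linked documentation for details):

.. code-block:: console

    core@deis-2 ~ $ curl -L http://127.0.0.1:7001/v2/admin/machines
    core@deis-2 ~ $ curl -L -XDELETE http://127.0.0.1:7001/v2/admin/machines/<name>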

.. _`custom firewall script`: https://github.com/deis/deis/blob/master/contrib/util/custom-firewall.sh
.. _`remove machines`: https://coreos.com/docs/distributed-configuration/etcd-api/#remove-machines
.. _`removing monitors`: http://ceph.com/docs/giant/rados/operations/add-or-rm-mons/#removing-monitors
.. _`removing OSDs`: http://docs.ceph.com/docs/giant/rados/operations/add-or-rm-osds/#removing-osds-manual