:title: Adding/Removing Hosts
:description: Considerations for adding or removing Deis hosts.

.. _add_remove_host:

Adding/Removing Hosts
=====================

Most Deis components handle new machines just fine. Care has to be taken when removing machines from
the cluster, however, since the deis-store components act as the backing store for all the
stateful data Deis needs to function properly.

Note that these instructions follow the Ceph documentation for `removing monitors`_ and `removing OSDs`_.
Should these instructions differ significantly from the Ceph documentation, the Ceph documentation
should be followed, and a PR to update this documentation would be much appreciated.

Since Ceph uses the Paxos algorithm, it is important to always have enough monitors in the cluster
to be able to achieve a majority: 1 of 1, 2 of 3, 3 of 4, 3 of 5, 4 of 6, etc. It is always preferable
to add a new node to the cluster before removing an old one, if possible.

This documentation assumes a running three-node Deis cluster.
We will add a fourth machine to the cluster, then remove the first machine.

Inspecting health
-----------------

Before we begin, we should check the state of the Ceph cluster to be sure it's healthy.
We can do this by logging into any machine in the cluster, entering a store container, and then querying Ceph:

.. code-block:: console

    core@deis-1 ~ $ nse deis-store-monitor
    root@deis-1:/# ceph -s
        cluster 20038e38-4108-4e79-95d4-291d0eef2949
         health HEALTH_OK
         monmap e3: 3 mons at {deis-1=172.17.8.100:6789/0,deis-2=172.17.8.101:6789/0,deis-3=172.17.8.102:6789/0}, election epoch 16, quorum 0,1,2 deis-1,deis-2,deis-3
         mdsmap e10: 1/1/1 up {0=deis-2=up:active}, 2 up:standby
         osdmap e36: 3 osds: 3 up, 3 in
          pgmap v2096: 1344 pgs, 12 pools, 369 MB data, 448 objects
                24198 MB used, 23659 MB / 49206 MB avail
                    1344 active+clean

We see from the ``pgmap`` that we have 1344 placement groups, all of which are ``active+clean``. This is good!

Adding a node
-------------

To add a new node to your Deis cluster, simply provision a new CoreOS machine with the same
etcd discovery URL specified in the cloud-config file. When the new machine comes up, it will join
the etcd cluster. You can confirm this with ``fleetctl list-machines``.

Since the store components are global units, they will be automatically started on the new node.
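For example, from any host in the cluster we should now see four machines, and the global store
units should already be scheduled on the newcomer. The machine IDs and the exact column layout
below are illustrative:

.. code-block:: console

    core@deis-4 ~ $ fleetctl list-machines
    MACHINE         IP              METADATA
    3cc9e7f8...     172.17.8.100    -
    62c06a24...     172.17.8.101    -
    a480ab4a...     172.17.8.102    -
    f412724c...     172.17.8.103    -
    core@deis-4 ~ $ fleetctl list-units | grep deis-store-daemon
    deis-store-daemon.service       f412724c.../172.17.8.103    active    running
    ...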
Once the new machine is running, we can inspect the Ceph cluster health again:

.. code-block:: console

    root@deis-1:/# ceph -s
        cluster 20038e38-4108-4e79-95d4-291d0eef2949
         health HEALTH_WARN 4 pgs recovering; 7 pgs recovery_wait; 31 pgs stuck unclean; recovery 325/1353 objects degraded (24.021%); clock skew detected on mon.deis-4
         monmap e4: 4 mons at {deis-1=172.17.8.100:6789/0,deis-2=172.17.8.101:6789/0,deis-3=172.17.8.102:6789/0,deis-4=172.17.8.103:6789/0}, election epoch 20, quorum 0,1,2,3 deis-1,deis-2,deis-3,deis-4
         mdsmap e11: 1/1/1 up {0=deis-2=up:active}, 3 up:standby
         osdmap e40: 4 osds: 4 up, 4 in
          pgmap v2172: 1344 pgs, 12 pools, 370 MB data, 451 objects
                29751 MB used, 34319 MB / 65608 MB avail
                325/1353 objects degraded (24.021%)
                      88 active
                       7 active+recovery_wait
                    1245 active+clean
                       4 active+recovering
      recovery io 2302 kB/s, 2 objects/s
      client io 204 B/s wr, 0 op/s

Note that we are in a ``HEALTH_WARN`` state, and we have placement groups recovering. Ceph is
copying data to our new node. We can keep querying the status until recovery completes, at which
point we should see something like:

.. code-block:: console

    root@deis-1:/# ceph -s
        cluster 20038e38-4108-4e79-95d4-291d0eef2949
         health HEALTH_OK
         monmap e4: 4 mons at {deis-1=172.17.8.100:6789/0,deis-2=172.17.8.101:6789/0,deis-3=172.17.8.102:6789/0,deis-4=172.17.8.103:6789/0}, election epoch 20, quorum 0,1,2,3 deis-1,deis-2,deis-3,deis-4
         mdsmap e11: 1/1/1 up {0=deis-2=up:active}, 3 up:standby
         osdmap e40: 4 osds: 4 up, 4 in
          pgmap v2216: 1344 pgs, 12 pools, 372 MB data, 453 objects
                29749 MB used, 34324 MB / 65608 MB avail
                    1344 active+clean
      client io 409 B/s wr, 0 op/s

We're back to ``HEALTH_OK``, and note the following:

.. code-block:: console

     monmap e4: 4 mons at {deis-1=172.17.8.100:6789/0,deis-2=172.17.8.101:6789/0,deis-3=172.17.8.102:6789/0,deis-4=172.17.8.103:6789/0}, election epoch 20, quorum 0,1,2,3 deis-1,deis-2,deis-3,deis-4
     mdsmap e11: 1/1/1 up {0=deis-2=up:active}, 3 up:standby
     osdmap e40: 4 osds: 4 up, 4 in

We have 4 monitors, OSDs, and metadata servers. Hooray!

Removing a node
---------------

When removing a node that runs deis-store components from the cluster, you'll need to tell Ceph
that the store services on this host are leaving the cluster.
In this example we're going to remove the first node in our cluster, deis-1.
That machine has an IP address of ``172.17.8.100``.

Removing an OSD
~~~~~~~~~~~~~~~

Before we can tell Ceph to remove an OSD, we need the OSD ID. We can get this from etcd:

.. code-block:: console

    core@deis-2 ~ $ etcdctl get /deis/store/osds/172.17.8.100
    2

Note: In some cases, we may not know the IP or hostname of the machine we want to remove.
In these cases, we can use ``ceph osd tree`` to see the current state of the cluster.
This will list all the OSDs in the cluster, and report which ones are down.

Now that we have the OSD's ID, let's remove it. We'll need a shell in any store container
on any host in the cluster (except the one we're removing). In this example, we're on ``deis-2``.

.. code-block:: console

    core@deis-2 ~ $ nse deis-store-monitor
    root@deis-2:/# ceph osd out 2
    marked out osd.2.
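As mentioned in the note above, ``ceph osd tree`` is also a quick way to check what just happened:
the marked-out OSD should show as ``up`` with a reweight of ``0``. The tree below is only a sketch
for this example cluster; the OSD-to-host assignments, weights, and column layout vary with your
deployment and Ceph release:

.. code-block:: console

    root@deis-2:/# ceph osd tree
    # id    weight  type name       up/down reweight
    -1      4       root default
    -2      1               host deis-1
    2       1                       osd.2   up      0
    -3      1               host deis-2
    0       1                       osd.0   up      1
    -4      1               host deis-3
    1       1                       osd.1   up      1
    -5      1               host deis-4
    3       1                       osd.3   up      1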
Marking the OSD out instructs Ceph to start relocating its placement groups to other hosts. We can
watch this with ``ceph -w``:

.. code-block:: console

    root@deis-2:/# ceph -w
        cluster 20038e38-4108-4e79-95d4-291d0eef2949
         health HEALTH_WARN 4 pgs recovery_wait; 151 pgs stuck unclean; recovery 654/1365 objects degraded (47.912%); clock skew detected on mon.deis-4
         monmap e4: 4 mons at {deis-1=172.17.8.100:6789/0,deis-2=172.17.8.101:6789/0,deis-3=172.17.8.102:6789/0,deis-4=172.17.8.103:6789/0}, election epoch 20, quorum 0,1,2,3 deis-1,deis-2,deis-3,deis-4
         mdsmap e11: 1/1/1 up {0=deis-2=up:active}, 3 up:standby
         osdmap e42: 4 osds: 4 up, 3 in
          pgmap v2259: 1344 pgs, 12 pools, 373 MB data, 455 objects
                23295 MB used, 24762 MB / 49206 MB avail
                654/1365 objects degraded (47.912%)
                     151 active
                       4 active+recovery_wait
                    1189 active+clean
      recovery io 1417 kB/s, 1 objects/s
      client io 113 B/s wr, 0 op/s


    2014-11-04 06:45:07.940731 mon.0 [INF] pgmap v2260: 1344 pgs: 142 active, 3 active+recovery_wait, 1199 active+clean; 373 MB data, 23301 MB used, 24757 MB / 49206 MB avail; 619/1365 objects degraded (45.348%); 1724 kB/s, 0 keys/s, 1 objects/s recovering
    2014-11-04 06:45:17.948788 mon.0 [INF] pgmap v2261: 1344 pgs: 141 active, 4 active+recovery_wait, 1199 active+clean; 373 MB data, 23301 MB used, 24757 MB / 49206 MB avail; 82 B/s rd, 0 op/s; 619/1365 objects degraded (45.348%); 843 kB/s, 0 keys/s, 0 objects/s recovering
    2014-11-04 06:45:18.962420 mon.0 [INF] pgmap v2262: 1344 pgs: 140 active, 5 active+recovery_wait, 1199 active+clean; 373 MB data, 23318 MB used, 24740 MB / 49206 MB avail; 371 B/s rd, 0 B/s wr, 0 op/s; 618/1365 objects degraded (45.275%); 0 B/s, 0 keys/s, 0 objects/s recovering
    2014-11-04 06:45:23.347089 mon.0 [INF] pgmap v2263: 1344 pgs: 130 active, 5 active+recovery_wait, 1209 active+clean; 373 MB data, 23331 MB used, 24727 MB / 49206 MB avail; 379 B/s rd, 0 B/s wr, 0 op/s; 572/1365 objects degraded (41.905%); 2323 kB/s, 0 keys/s, 4 objects/s recovering
    2014-11-04 06:45:37.970125 mon.0 [INF] pgmap v2264: 1344 pgs: 129 active, 4 active+recovery_wait, 1211 active+clean; 373 MB data, 23336 MB used, 24722 MB / 49206 MB avail; 568/1365 objects degraded (41.612%); 659 kB/s, 2 keys/s, 1 objects/s recovering
    2014-11-04 06:45:40.006110 mon.0 [INF] pgmap v2265: 1344 pgs: 129 active, 4 active+recovery_wait, 1211 active+clean; 373 MB data, 23336 MB used, 24722 MB / 49206 MB avail; 568/1365 objects degraded (41.612%); 11 B/s, 3 keys/s, 0 objects/s recovering
    2014-11-04 06:45:43.034215 mon.0 [INF] pgmap v2266: 1344 pgs: 129 active, 4 active+recovery_wait, 1211 active+clean; 373 MB data, 23344 MB used, 24714 MB / 49206 MB avail; 1010 B/s wr, 0 op/s; 568/1365 objects degraded (41.612%)
    2014-11-04 06:45:44.048059 mon.0 [INF] pgmap v2267: 1344 pgs: 129 active, 4 active+recovery_wait, 1211 active+clean; 373 MB data, 23344 MB used, 24714 MB / 49206 MB avail; 1766 B/s wr, 0 op/s; 568/1365 objects degraded (41.612%)
    2014-11-04 06:45:48.366555 mon.0 [INF] pgmap v2268: 1344 pgs: 129 active, 4 active+recovery_wait, 1211 active+clean; 373 MB data, 23345 MB used, 24713 MB / 49206 MB avail; 576 B/s wr, 0 op/s; 568/1365 objects degraded (41.612%)

Eventually, the cluster will return to a clean state and will once again report ``HEALTH_OK``.
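If you would rather not watch the full ``ceph -w`` stream, polling ``ceph health`` from inside the
store container works just as well. This is only a convenience sketch:

.. code-block:: console

    root@deis-2:/# until ceph health | grep -q HEALTH_OK; do sleep 10; done; ceph health
    HEALTH_OK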
Once the cluster is healthy again, we can stop the daemon. Since the store units are global units,
we can't target a specific one to stop. Instead, we log into the host machine and instruct Docker
to stop the container.

Reminder: make sure you're logged into the machine you're removing from the cluster!

.. code-block:: console

    core@deis-1 ~ $ docker stop deis-store-daemon
    deis-store-daemon

Back inside a store container on ``deis-2``, we can finally remove the OSD:

.. code-block:: console

    core@deis-2 ~ $ nse deis-store-monitor
    root@deis-2:/# ceph osd crush remove osd.2
    removed item id 2 name 'osd.2' from crush map
    root@deis-2:/# ceph auth del osd.2
    updated
    root@deis-2:/# ceph osd rm 2
    removed osd.2

For cleanup, we should remove the OSD entry from etcd:

.. code-block:: console

    core@deis-2 ~ $ etcdctl rm /deis/store/osds/172.17.8.100

That's it! If we inspect the health, we see that there are now 3 OSDs again, and all of our
placement groups are ``active+clean``.

.. code-block:: console

    core@deis-2 ~ $ nse deis-store-monitor
    root@deis-2:/# ceph -s
        cluster 20038e38-4108-4e79-95d4-291d0eef2949
         health HEALTH_OK
         monmap e4: 4 mons at {deis-1=172.17.8.100:6789/0,deis-2=172.17.8.101:6789/0,deis-3=172.17.8.102:6789/0,deis-4=172.17.8.103:6789/0}, election epoch 20, quorum 0,1,2,3 deis-1,deis-2,deis-3,deis-4
         mdsmap e11: 1/1/1 up {0=deis-2=up:active}, 3 up:standby
         osdmap e46: 3 osds: 3 up, 3 in
          pgmap v2338: 1344 pgs, 12 pools, 375 MB data, 458 objects
                23596 MB used, 24465 MB / 49206 MB avail
                    1344 active+clean
      client io 326 B/s wr, 0 op/s

Removing a monitor
~~~~~~~~~~~~~~~~~~

Removing a monitor is much easier. First, we remove the etcd entry so Ceph clients no longer use
this monitor for connecting:

.. code-block:: console

    $ etcdctl rm /deis/store/hosts/172.17.8.100

Within 5 seconds, confd will run on all store clients and remove the monitor from the ``ceph.conf``
configuration file.

Next, we stop the container:

.. code-block:: console

    core@deis-1 ~ $ docker stop deis-store-monitor
    deis-store-monitor

Back on another host, we can again enter a store container and then remove this monitor:

.. code-block:: console

    core@deis-2 ~ $ nse deis-store-monitor
    root@deis-2:/# ceph mon remove deis-1
    removed mon.deis-1 at 172.17.8.100:6789/0, there are now 3 monitors
    2014-11-04 06:57:59.712934 7f04bc942700  0 monclient: hunting for new mon
    2014-11-04 06:57:59.712934 7f04bc942700  0 monclient: hunting for new mon

Note that faults may be reported afterwards - this is normal when a Ceph client is unable to
communicate with a monitor. The important line is ``removed mon.deis-1 at 172.17.8.100:6789/0,
there are now 3 monitors``.

Finally, let's check the health of the cluster:

.. code-block:: console

    root@deis-2:/# ceph -s
        cluster 20038e38-4108-4e79-95d4-291d0eef2949
         health HEALTH_OK
         monmap e5: 3 mons at {deis-2=172.17.8.101:6789/0,deis-3=172.17.8.102:6789/0,deis-4=172.17.8.103:6789/0}, election epoch 26, quorum 0,1,2 deis-2,deis-3,deis-4
         mdsmap e17: 1/1/1 up {0=deis-4=up:active}, 3 up:standby
         osdmap e47: 3 osds: 3 up, 3 in
          pgmap v2359: 1344 pgs, 12 pools, 375 MB data, 458 objects
                23605 MB used, 24455 MB / 49206 MB avail
                    1344 active+clean
      client io 816 B/s wr, 0 op/s
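You can also confirm that the monitor's etcd entry really is gone, so confd will not re-add it to
any client's ``ceph.conf``. The listing below assumes the example IPs used throughout this page:

.. code-block:: console

    core@deis-2 ~ $ etcdctl ls /deis/store/hosts
    /deis/store/hosts/172.17.8.101
    /deis/store/hosts/172.17.8.102
    /deis/store/hosts/172.17.8.103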
We're done!

Removing a metadata server
~~~~~~~~~~~~~~~~~~~~~~~~~~

Like the daemon, we'll just stop the Docker container for the metadata service.

Reminder: make sure you're logged into the machine you're removing from the cluster!

.. code-block:: console

    core@deis-1 ~ $ docker stop deis-store-metadata
    deis-store-metadata

This is actually all that's necessary. Ceph provides a ``ceph mds rm`` command, but has no
documentation for it. See: http://docs.ceph.com/docs/giant/rados/operations/control/#mds-subsystem

Removing the host from etcd
~~~~~~~~~~~~~~~~~~~~~~~~~~~

The etcd cluster still has an entry for the host we've removed, so we'll need to remove this entry.
This can be achieved by making a request to the etcd API. See `remove machines`_ for details.

.. _`remove machines`: https://coreos.com/docs/distributed-configuration/etcd-api/#remove-machines
.. _`removing monitors`: http://ceph.com/docs/giant/rados/operations/add-or-rm-mons/#removing-monitors
.. _`removing OSDs`: http://docs.ceph.com/docs/giant/rados/operations/add-or-rm-osds/#removing-osds-manual
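As a rough illustration of that last step: with the etcd 0.4.x releases CoreOS shipped at the time,
machine removal went through the admin endpoint on the peer port. The first request lists the
current machines and their names; the second removes the one we just retired. The port, path, and
placeholder machine name below are assumptions - follow `remove machines`_ for the authoritative
procedure:

.. code-block:: console

    core@deis-2 ~ $ curl -L http://127.0.0.1:7001/v2/admin/machines
    core@deis-2 ~ $ curl -L -XDELETE http://127.0.0.1:7001/v2/admin/machines/<name-of-removed-machine>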