
:title: Node Failover in Deis
:description: Describes how Deis nodes failover

.. _failover:

Failover
========

Three Node Cluster
------------------

Losing One of Three Nodes
^^^^^^^^^^^^^^^^^^^^^^^^^

Losing one of three nodes will have the following effects:

- Ceph will enter a health warn state but will continue to function.
- Anything scheduled on the downed node will be rescheduled to the other two nodes.
  If your remaining nodes don't have the resources to run the new units, this could
  take down the entire platform.
- When you scale up to three nodes again, Ceph and Etcd will still think one member is down.
  You will need to manually remove the downed node from Ceph and Etcd.
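
The manual removal mentioned above might look like the following, assuming the downed node
hosted the Ceph OSD ``osd.2`` and the monitor ``deis-3``, and that the cluster uses etcd2's
member API. All names and IDs here are hypothetical; substitute your own from
``ceph osd tree`` and ``etcdctl member list``:

.. code-block:: console

    # Remove the dead OSD from Ceph (the ID 2 is an example)
    $ ceph osd out 2
    $ ceph osd crush remove osd.2
    $ ceph auth del osd.2
    $ ceph osd rm 2

    # Remove the dead monitor by hostname
    $ ceph mon remove deis-3

    # Remove the dead etcd member by its member ID
    $ etcdctl member remove 91bc3c398fb3c146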
    23  
    24  Losing Two of Three Nodes
    25  ^^^^^^^^^^^^^^^^^^^^^^^^^
    26  
    27  Losing two of three nodes will have the following effects:
    28  
    29  - Ceph will enter a degraded state and go into read-only mode.
- Etcd will lose quorum and go into read-only mode.
- Anything scheduled on the downed nodes will be rescheduled to the remaining node.
  If your remaining node doesn't have the resources to run the new units, this could
  take down the entire platform.
- When you scale up to three nodes again, Ceph and Etcd will still think two members are down.
  You will need to manually remove the downed nodes from Ceph and Etcd.
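
Before removing members, you can confirm what the cluster thinks is down from a surviving
node (a sketch; exact output varies by Ceph and etcd version):

.. code-block:: console

    # Ceph health summary; expect HEALTH_WARN or HEALTH_ERR with down OSDs listed
    $ ceph -s

    # etcd member health; unreachable peers are reported as unhealthy
    $ etcdctl cluster-health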
    36  
    37  Larger Clusters
    38  ---------------
    39  
    40  If you have more than three nodes, Deis can tolerate node failure without issue.
    41  Here are a few things to keep in mind:
    42  
- You have to manually remove downed nodes from Ceph and Etcd. Both assume a downed node might
  still be functioning but merely out of communication with the rest of the cluster. If you don't
  remove downed nodes, they can eventually outnumber running nodes, at which point Ceph and Etcd
  go into read-only mode to prevent a split-brain cluster.
- Ceph on Deis stores three replicas of all data. If a node goes down, Ceph doesn't re-replicate the
  data that was stored on that node because it expects the node to come back. Manually removing the
  node will resolve this.
- You should use the preseed script to automatically download the control plane and data plane
  components on every node. This way, if a unit is rescheduled (as when a node goes down), it only
  has to be started, not downloaded, reducing failover time from minutes to seconds. See
  :ref:`preseeding_continers` for further details.
- If the database is rescheduled, it has to go through a recovery process wherever it is rescheduled,
  causing controller downtime (generally less than a minute).
- User apps should be scaled to reside on multiple hosts. That way, if one node goes down, your app
  will continue to function without downtime.
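
As an example of the last point, scaling an app's processes to three containers lets the
scheduler spread them across hosts (a sketch using the Deis v1 client; the process type
``web`` is whatever your app's Procfile defines):

.. code-block:: console

    $ deis scale web=3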