High Availability (HA)
======================

High Availability, in general terms, means that we have 3 or more (up to 7)
State Machines, each of which can be used as the master.

This is an overview of how it works:

### Mongo

_Mongo_ is always started in [replicaset mode](http://docs.mongodb.org/manual/replication/).

If not in HA, this behaves as if it were a single mongodb; in practical
terms there is no difference from a regular setup.

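Even a single controller therefore runs its mongod as a one-member replica set. As a quick illustration (a sketch using the `gopkg.in/mgo.v2` driver; the address is a placeholder, and a real controller also requires TLS and credentials), the server's `isMaster` document reports the replica set name and which member is currently the primary:

```go
package main

import (
	"fmt"
	"log"

	"gopkg.in/mgo.v2"
	"gopkg.in/mgo.v2/bson"
)

func main() {
	// Placeholder address; a real controller's mongod listens on its own
	// port and requires TLS and credentials.
	session, err := mgo.Dial("localhost:37017")
	if err != nil {
		log.Fatal(err)
	}
	defer session.Close()

	// isMaster reports the replica set name and whether this member is the
	// current primary, even when the set has only one member.
	var status bson.M
	if err := session.Run("isMaster", &status); err != nil {
		log.Fatal(err)
	}
	fmt.Println("replica set:", status["setName"], "primary:", status["ismaster"])
}
```
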
### Voting

A voting member of the replicaset is one that has a say in which member is the master.

A non-voting member is just a storage backup.

Currently we don't support non-voting members; instead, a member being
non-voting means that the controller in question is going to be removed
entirely.

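For illustration, voting is configured per member in mongo's replica set configuration, which is stored in `local.system.replset`; a sketch (again using `gopkg.in/mgo.v2`, with a placeholder address) that prints each member's vote count might look like this:

```go
package main

import (
	"fmt"
	"log"

	"gopkg.in/mgo.v2"
	"gopkg.in/mgo.v2/bson"
)

func main() {
	session, err := mgo.Dial("localhost:37017") // placeholder address
	if err != nil {
		log.Fatal(err)
	}
	defer session.Close()

	// The replica set configuration lives in the local database; member
	// documents may carry a "votes" field (0 marks a non-voting member;
	// when absent the default is 1).
	var config struct {
		Members []bson.M `bson:"members"`
	}
	if err := session.DB("local").C("system.replset").Find(nil).One(&config); err != nil {
		log.Fatal(err)
	}
	for _, m := range config.Members {
		votes, ok := m["votes"]
		if !ok {
			votes = 1
		}
		fmt.Println(m["host"], "votes:", votes)
	}
}
```
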
### Ensure availability

There is an `enable-ha` command for juju. It takes `-n` (the minimum number
of state machines) as an optional parameter; if it is not provided, it
defaults to 3.

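For example, to ask for five state machines:

```
juju enable-ha -n 5
```
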
The number needs to be odd in order to prevent ties during voting.

The number cannot be larger than seven (making the current possibilities 3,
5 and 7) due to a limitation of mongodb, which cannot have more than 7
replica set voting members.

Currently the number can be increased but not decreased (support for
decreasing it is planned). When the number is increased, Juju will bring up
as many machines as necessary to meet the requirement; when a smaller number
is requested, nothing will happen, since the rule is to have _"at least that
many"_.

At present there is no way to reduce the number of machines. You can kill
enough machines by hand to get down to the number you need, but this is
risky and **not recommended**. If you kill fewer than half of the machines
(leaving half+1 remaining), running `enable-ha` again will add more machines
to replace the dead ones. If you kill more than that, there is no way to
recover, as there are not enough voting machines left.

The EnableHA API call will report the changes that it made to the model,
which will shortly be reflected in reality.

### The API

There is an API server running on every State Machine. These servers talk to
all the peers, but queries and updates are addressed to the mongo master
instance.

Unit and machine agents connect to any of the API servers by trying to
connect to all the addresses concurrently, but not simultaneously: each
address is tried in turn after a short delay. After a successful connection,
the connected address is stored; it will be tried first when next connecting.

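A minimal sketch of that dialing strategy (illustrative only, not Juju's actual connection code; the addresses, port and stagger interval are made up):

```go
package main

import (
	"fmt"
	"net"
	"time"
)

// connectAny dials every address, starting one attempt after another with a
// short delay between them, and returns the first connection that succeeds.
func connectAny(addrs []string, stagger time.Duration) (net.Conn, error) {
	type result struct {
		conn net.Conn
		err  error
	}
	results := make(chan result, len(addrs))
	for i, addr := range addrs {
		go func(delay time.Duration, addr string) {
			time.Sleep(delay)
			conn, err := net.DialTimeout("tcp", addr, 5*time.Second)
			results <- result{conn, err}
		}(time.Duration(i)*stagger, addr)
	}
	var lastErr error
	for range addrs {
		r := <-results
		if r.err == nil {
			// A fuller version would close any other connections that
			// complete later, and remember this address so it can be
			// tried first next time.
			return r.conn, nil
		}
		lastErr = r.err
	}
	return nil, lastErr
}

func main() {
	conn, err := connectAny([]string{"10.0.0.1:17070", "10.0.0.2:17070"}, 50*time.Millisecond)
	if err != nil {
		fmt.Println("connection failed:", err)
		return
	}
	fmt.Println("connected to", conn.RemoteAddr())
	conn.Close()
}
```
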
### The peergrouper worker

It looks at the current state, decides what the peer group members should
look like, and continually tries to maintain those members.

The reason for its existence is that it can often take a while for mongo to
allow a peer group change, so we can't change it directly in the EnableHA
API call.

Its worker loop continually watches:

 1. The current set of controllers
 2. The addresses of the current controllers
 3. The status of the current mongo peer group

It feeds all that information into `desiredPeerGroup`, which provides the
peer group that we want, and then continually tries to set that peer group
in mongo until it succeeds.

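A rough sketch of that loop (illustrative only; the watcher channels, the getter functions and `setReplicaSet` are hypothetical stand-ins for the real state watchers and mongo calls):

```go
package peergroupersketch

import (
	"log"
	"time"
)

// peerGroup is a stand-in for the set of replica set members we want.
type peerGroup struct {
	members []string
}

// desiredPeerGroup stands in for the real calculation: it combines the
// current controllers, their addresses and the live replica set into the
// member list we want, and reports whether anything needs to change.
func desiredPeerGroup(controllers, addrs []string, current peerGroup) (peerGroup, bool) {
	desired := peerGroup{members: addrs}
	return desired, len(desired.members) != len(current.members)
}

func loop(
	controllersChanged, addressesChanged, replicaSetChanged <-chan struct{},
	controllers func() []string,
	addresses func() []string,
	replicaSet func() peerGroup,
	setReplicaSet func(peerGroup) error,
) {
	for {
		// Wake up whenever anything relevant changes.
		select {
		case <-controllersChanged:
		case <-addressesChanged:
		case <-replicaSetChanged:
		}
		desired, changed := desiredPeerGroup(controllers(), addresses(), replicaSet())
		if !changed {
			continue
		}
		// Mongo may refuse a peer group change for a while, so keep
		// retrying until it accepts the new members.
		for setReplicaSet(desired) != nil {
			log.Println("cannot set replica set yet; retrying")
			time.Sleep(time.Second)
		}
	}
}
```
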
**NOTE:** There is one situation which currently doesn't work: if you've
only got one controller, you can't switch to another one.

### The Singleton Workers

**Note:** This section reflects the current behavior of these workers but
should by no means be taken as an example to follow since most (if not all)
should run concurrently and are going to change in the near future.

The following workers require only a single instance to be running
at any one moment:

 * The environment provisioner
 * The firewaller
 * The charm revision updater
 * The state cleaner
 * The transaction resumer
 * The minunits worker

When a machine agent connects to the state, it decides whether it is running
on the same instance as the mongo master; if so, it runs the singleton
workers, otherwise it doesn't run them.

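A sketch of that decision (the `isMongoMaster` check and the worker-starting function are hypothetical stand-ins for the real agent code):

```go
package singletonsketch

// maybeStartSingletonWorkers runs the singleton workers only on the machine
// that is currently co-located with the mongo master.
func maybeStartSingletonWorkers(
	machineID string,
	isMongoMaster func(machineID string) (bool, error),
	startSingletonWorkers func() error,
) error {
	master, err := isMongoMaster(machineID)
	if err != nil {
		return err
	}
	if !master {
		// Not on the master: simply don't start the provisioner,
		// firewaller, cleaner, etc. They will be started after a
		// reconnection if this machine becomes the master.
		return nil
	}
	return startSingletonWorkers()
}
```
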
Because we are using `mgo.Strong` consistency semantics, it is guaranteed
that our mongo connection will be dropped when the master changes, which
means that the machine agent will then reconnect to the state and choose
again whether to run the singleton workers.

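For reference, this is roughly how a session ends up in strong-consistency mode with the `mgo` driver (a sketch; the address is a placeholder and the real agent adds TLS, credentials and reconnection on error):

```go
package main

import (
	"log"

	"gopkg.in/mgo.v2"
)

func main() {
	session, err := mgo.Dial("localhost:37017") // placeholder address
	if err != nil {
		log.Fatal(err)
	}
	defer session.Close()

	// Strong mode pins reads and writes to the replica set primary over a
	// single connection, so a change of master surfaces as a dropped
	// connection rather than queries quietly going to another member.
	session.SetMode(mgo.Strong, true)
}
```
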
It also means that we can never accidentally have two
singleton workers performing operations at the same time.