---
layout: post
title: HA
permalink: /docs/ha
redirect_from:
 - /ha.md/
 - /docs/ha.md/
---

## Table of Contents
- [Highly Available Control Plane](#highly-available-control-plane)
    - [Bootstrap](#bootstrap)
    - [Election](#election)
    - [Non-electable gateways](#non-electable-gateways)
    - [Metasync](#metasync)

## Highly Available Control Plane

An AIStore cluster survives the loss of any storage target and of any gateway, including the primary gateway (leader). New gateways and targets can join at any time, including while a new leader is being elected. Each node joining a running cluster gets updated with the most current cluster-level metadata.
Failover – that is, the election of a new leader – is carried out automatically upon failure of the current/previous leader. Failback, on the other hand – that is, the administrative selection of the leading (likely, an originally designated) gateway – is done manually via the AIStore [REST API](http_api.md).

It is, therefore, recommended that an AIStore cluster be deployed with multiple proxies, aka gateways (the two terms are used interchangeably throughout the source code and this document).

When there are multiple proxies, only one of them acts as the primary while all the rest are non-primaries. The primary's responsibility is to serialize updates of the cluster-level metadata (which is also versioned and immutable).

Further:

- Each proxy/gateway stores a local copy of the cluster map (Smap)
- Each Smap instance is immutable and versioned; the versioning is monotonic (increasing)
- Only the current primary (leader) proxy distributes Smap updates to all other clustered nodes
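
To illustrate these invariants, below is a minimal sketch (not the actual AIStore types) of an immutable, monotonically versioned cluster map and the acceptance rule a node could apply to a received update:

```go
// Minimal sketch (not the actual AIStore types): an immutable,
// monotonically versioned cluster map and the acceptance rule
// that every node applies to a received update.
package main

import "fmt"

type Smap struct {
	Version int64             // monotonically increasing
	Primary string            // ID of the current primary proxy
	Proxies map[string]string // proxy ID => URL
	Targets map[string]string // target ID => URL
}

// clone returns the next version; existing instances are never mutated.
func (m *Smap) clone() *Smap {
	next := &Smap{
		Version: m.Version + 1,
		Primary: m.Primary,
		Proxies: make(map[string]string, len(m.Proxies)),
		Targets: make(map[string]string, len(m.Targets)),
	}
	for id, url := range m.Proxies {
		next.Proxies[id] = url
	}
	for id, url := range m.Targets {
		next.Targets[id] = url
	}
	return next
}

// accept: a node adopts a received Smap only if it is strictly newer.
func accept(local, received *Smap) bool {
	return received.Version > local.Version
}

func main() {
	v1 := &Smap{Version: 1, Primary: "p1",
		Proxies: map[string]string{"p1": "http://p1:8080"},
		Targets: map[string]string{}}
	v2 := v1.clone()
	v2.Targets["t1"] = "http://t1:8081"
	fmt.Println(accept(v1, v2), accept(v2, v1)) // true false
}
```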

### Bootstrap

The proxy's bootstrap sequence executes the following three main steps:

- step 1: load a local copy of the cluster map (Smap) and try to use it to discover the current one;
- step 2: use the local configuration and the local Smap to discover the cluster-level metadata;
- step 3: use all of the above and, optionally, [`AIS_PRIMARY_EP`](environment-vars.md) to figure out whether this proxy must keep starting up as a _primary_;
  - otherwise, join as a non-primary (a.k.a. _secondary_).

The rules that determine whether a given starting-up proxy is the primary one in the cluster are simple. In fact, it's a single switch statement in the namesake function:

* [`determineRole`](https://github.com/NVIDIA/aistore/blob/main/ais/earlystart.go).
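
For illustration only, here is a hypothetical simplification of that decision; the function `determineRoleSketch` and its parameters are made up for this example, the real switch handles more cases, and the `Smap` type is reused from the sketch above:

```go
// Hypothetical simplification of the primary/non-primary decision at startup;
// the real switch (determineRole, ais/earlystart.go) handles more cases.
// Reuses the Smap type from the previous sketch.
func determineRoleSketch(envPrimaryEP, selfURL, selfID string, discovered *Smap, cfgPrimary bool) bool {
	switch {
	case envPrimaryEP != "":
		// AIS_PRIMARY_EP, when set, names the designated primary's endpoint
		return envPrimaryEP == selfURL
	case discovered != nil:
		// a discovered cluster map names the current primary
		return discovered.Primary == selfID
	default:
		// otherwise, fall back to the local configuration
		return cfgPrimary
	}
}
```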

Further, the (potentially) primary proxy executes more steps:

- (i)   initialize an empty Smap;
- (ii)  wait a configured time for other nodes to join;
- (iii) merge the Smap containing the newly joined nodes with the previously discovered Smap;
- (iv)  use the latter to rediscover cluster-wide metadata and resolve remaining conflicts, if any.

If, during any of these steps, the proxy finds out that it must join as a non-primary, it simply does so.
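
Below is a hypothetical sketch of steps (ii) and (iii) under the same assumptions as before: the `joinReq` type and `waitAndMerge` helper are invented for this example, and the `Smap` type (with its `clone` method) comes from the earlier sketch. It collects join requests for a configured interval and then folds them into the next Smap version:

```go
// Hypothetical sketch of steps (ii) and (iii): gather join requests for a
// configured interval, then produce the next Smap version with the new nodes.
// (Requires "time" in the imports of the earlier sketch.)
type joinReq struct {
	id, url  string
	isTarget bool
}

func waitAndMerge(base *Smap, joins <-chan joinReq, wait time.Duration) *Smap {
	next := base.clone() // next.Version == base.Version + 1
	deadline := time.After(wait)
	for {
		select {
		case j := <-joins:
			if j.isTarget {
				next.Targets[j.id] = j.url
			} else {
				next.Proxies[j.id] = j.url
			}
		case <-deadline:
			return next
		}
	}
}
```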

### Election

The primary proxy election process is as follows:

- A candidate to replace the current (failed) primary is selected;
- The candidate is notified that an election is commencing;
- After the candidate (proxy) confirms that the current primary proxy is down, it broadcasts vote requests to all other nodes;
- Each recipient node confirms whether the current primary is down and whether the candidate proxy has the highest random weight (HRW) according to the local Smap;
- If confirmed, the node responds with Yes; otherwise, No;
- If and when the candidate receives a majority of affirmative responses, it performs the commit phase of this two-phase process by distributing an updated cluster map to all nodes.
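
The HRW check can be illustrated with a minimal rendezvous-hashing sketch; this is not the actual implementation (AIStore uses its own hashing and, per the next section, never elects non-electable proxies):

```go
// Minimal HRW (rendezvous hashing) illustration, not the actual implementation:
// every node hashes each eligible proxy ID and votes Yes only if the candidate
// is the one with the highest weight.
package main

import (
	"fmt"
	"hash/fnv"
)

func weight(proxyID string) uint64 {
	h := fnv.New64a()
	h.Write([]byte(proxyID))
	return h.Sum64()
}

// hrwProxy returns the proxy ID with the highest random weight.
func hrwProxy(proxyIDs []string) (winner string) {
	var maxW uint64
	for _, id := range proxyIDs {
		if w := weight(id); winner == "" || w > maxW {
			maxW, winner = w, id
		}
	}
	return
}

func main() {
	// remaining electable proxies after the primary failed
	proxies := []string{"p2", "p3", "p4"}
	fmt.Println("expected new primary:", hrwProxy(proxies))
}
```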

### Non-electable gateways

An AIStore cluster can be *stretched* to collocate its redundant gateways with the compute nodes. Those non-electable local gateways ([AIStore configuration](/deploy/dev/local/aisnode_config.sh)) will only serve as access points but will never take on the responsibility of leading the cluster.

### Metasync

By design, AIStore has no centralized (SPOF) shared cluster-level metadata. The metadata consists of versioned objects: the cluster map, buckets (names and properties), and authentication tokens. In AIStore, these objects are consistently replicated across the entire cluster – the component responsible for this is called [metasync](/ais/metasync.go). AIStore metasync makes sure to keep cluster-level metadata in sync at all times.
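
To illustrate the idea, here is a simplified, hypothetical sketch; the `revs` and `node` types below are invented for this example, and the actual [metasync](/ais/metasync.go) does considerably more. The primary pushes versioned metadata objects, and a receiving node applies an update only if it is strictly newer than its local copy:

```go
// Simplified, hypothetical illustration of version-gated metadata replication;
// the actual metasync (ais/metasync.go) does considerably more.
package main

import (
	"fmt"
	"sync"
)

// revs is a stand-in for a replicated metadata object
// (cluster map, bucket metadata, tokens): a tag plus a version.
type revs struct {
	Tag     string // e.g. "smap", "bmd"
	Version int64
	Payload []byte
}

type node struct {
	mu      sync.Mutex
	current map[string]revs // latest accepted version per tag
}

// receive applies an update only if it is strictly newer than the local copy.
func (n *node) receive(r revs) bool {
	n.mu.Lock()
	defer n.mu.Unlock()
	if cur, ok := n.current[r.Tag]; ok && r.Version <= cur.Version {
		return false // stale or duplicate: ignore
	}
	n.current[r.Tag] = r
	return true
}

func main() {
	n := &node{current: make(map[string]revs)}
	fmt.Println(n.receive(revs{Tag: "smap", Version: 2})) // true
	fmt.Println(n.receive(revs{Tag: "smap", Version: 1})) // false: stale
}
```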