# raftexample

raftexample is an example usage of etcd's [raft library](../../raft). It provides a simple REST API for a key-value store cluster backed by the [Raft][raft] consensus algorithm.

[raft]: http://raftconsensus.github.io/

## Getting Started

### Running single node raftexample

First start a single-member cluster of raftexample:

```sh
raftexample --id 1 --cluster http://127.0.0.1:12379 --port 12380
```

Each raftexample process maintains a single raft instance and a key-value server.
The process's comma-separated list of peers (--cluster), its raft ID as an index into that peer list (--id), and the key-value server's HTTP port (--port) are passed on the command line.

Next, store a value ("hello") to a key ("my-key"):

```sh
curl -L http://127.0.0.1:12380/my-key -XPUT -d hello
```

Finally, retrieve the stored key:

```sh
curl -L http://127.0.0.1:12380/my-key
```

### Running a local cluster

First install [goreman](https://github.com/mattn/goreman), which manages Procfile-based applications.

The [Procfile script](./Procfile) will set up a local example cluster. Start it with:

```sh
goreman start
```

This will bring up three raftexample instances.

Now it's possible to write a key-value pair to any member of the cluster and likewise retrieve it from any member.

### Fault Tolerance

To test cluster recovery, first start a cluster and write a value "foo":

```sh
goreman start
curl -L http://127.0.0.1:12380/my-key -XPUT -d foo
```

Next, remove a node and replace the value with "bar" to check cluster availability:

```sh
goreman run stop raftexample2
curl -L http://127.0.0.1:12380/my-key -XPUT -d bar
curl -L http://127.0.0.1:32380/my-key
```

Finally, bring the node back up and verify it recovers with the updated value "bar":

```sh
goreman run start raftexample2
curl -L http://127.0.0.1:22380/my-key
```

### Dynamic cluster reconfiguration

Nodes can be added to or removed from a running cluster using requests to the REST API.

For example, suppose we have a 3-node cluster that was started with the commands:

```sh
raftexample --id 1 --cluster http://127.0.0.1:12379,http://127.0.0.1:22379,http://127.0.0.1:32379 --port 12380
raftexample --id 2 --cluster http://127.0.0.1:12379,http://127.0.0.1:22379,http://127.0.0.1:32379 --port 22380
raftexample --id 3 --cluster http://127.0.0.1:12379,http://127.0.0.1:22379,http://127.0.0.1:32379 --port 32380
```

A fourth node with ID 4 can be added by issuing a POST:

```sh
curl -L http://127.0.0.1:12380/4 -XPOST -d http://127.0.0.1:42379
```

Then the new node can be started as the others were, using the --join option:

```sh
raftexample --id 4 --cluster http://127.0.0.1:12379,http://127.0.0.1:22379,http://127.0.0.1:32379,http://127.0.0.1:42379 --port 42380 --join
```

The new node should join the cluster and be able to service key/value requests.

We can remove a node using a DELETE request:

```sh
curl -L http://127.0.0.1:12380/3 -XDELETE
```

Node 3 should shut itself down once the cluster has processed this request.

## Design

raftexample consists of three components: a raft-backed key-value store, a REST API server, and a raft consensus server based on etcd's raft implementation.

The raft-backed key-value store is a key-value map that holds all committed key-values.
The store bridges communication between the raft server and the REST server.
Key-value updates are issued through the store to the raft server.
The store updates its map once raft reports the updates are committed.
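
The sketch below illustrates that flow. It is not the raftexample source; the names (`kvstore`, `proposeC`, `commitC`) and the loopback "raft" goroutine in `main` are stand-ins that only show how a write is proposed first and applied to the map only after it comes back committed.

```go
package main

import (
	"fmt"
	"sync"
	"time"
)

type kv struct{ Key, Val string }

// kvstore holds the committed key-value state and bridges the REST
// server (Propose/Lookup) to the raft server (proposeC/commitC).
type kvstore struct {
	mu       sync.RWMutex
	store    map[string]string // committed key-value pairs only
	proposeC chan<- kv         // proposals flow out to the raft server
}

// Propose forwards an update to raft; the local map is not touched here.
func (s *kvstore) Propose(key, value string) { s.proposeC <- kv{key, value} }

// Lookup returns the committed value for key, if any.
func (s *kvstore) Lookup(key string) (string, bool) {
	s.mu.RLock()
	defer s.mu.RUnlock()
	v, ok := s.store[key]
	return v, ok
}

// readCommits applies updates only after raft reports them committed.
func (s *kvstore) readCommits(commitC <-chan kv) {
	for c := range commitC {
		s.mu.Lock()
		s.store[c.Key] = c.Val
		s.mu.Unlock()
	}
}

func main() {
	// Stand-in for the raft server: echo every proposal back as "committed".
	proposeC := make(chan kv)
	commitC := make(chan kv)
	go func() {
		for p := range proposeC {
			commitC <- p
		}
	}()

	s := &kvstore{store: map[string]string{}, proposeC: proposeC}
	go s.readCommits(commitC)

	s.Propose("my-key", "hello")
	time.Sleep(100 * time.Millisecond) // the commit path is asynchronous
	fmt.Println(s.Lookup("my-key"))
}
```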

The REST server exposes the current raft consensus by accessing the raft-backed key-value store.
A GET command looks up a key in the store and returns the value, if any.
A key-value PUT command issues an update proposal to the store.
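
A rough sketch of that mapping follows. It is not the actual raftexample handler: the `Store` interface and the `stubStore` are assumptions standing in for the raft-backed store described above.

```go
package main

import (
	"io"
	"log"
	"net/http"
	"sync"
)

// Store is the assumed interface to the raft-backed key-value store.
type Store interface {
	Lookup(key string) (string, bool)
	Propose(key, value string)
}

// httpKVAPI maps HTTP verbs onto the store: GET reads committed state,
// PUT proposes an update and returns before the update is committed.
type httpKVAPI struct{ store Store }

func (h *httpKVAPI) ServeHTTP(w http.ResponseWriter, r *http.Request) {
	key := r.URL.Path // e.g. "/my-key"
	switch r.Method {
	case http.MethodPut:
		value, err := io.ReadAll(r.Body)
		if err != nil {
			http.Error(w, "failed to read PUT body", http.StatusBadRequest)
			return
		}
		h.store.Propose(key, string(value))
		w.WriteHeader(http.StatusNoContent)
	case http.MethodGet:
		if value, ok := h.store.Lookup(key); ok {
			w.Write([]byte(value))
			return
		}
		http.Error(w, "key not found", http.StatusNotFound)
	default:
		http.Error(w, "method not allowed", http.StatusMethodNotAllowed)
	}
}

// stubStore stands in for the raft-backed store so the sketch runs on its own.
type stubStore struct {
	mu sync.Mutex
	m  map[string]string
}

func (s *stubStore) Lookup(k string) (string, bool) {
	s.mu.Lock()
	defer s.mu.Unlock()
	v, ok := s.m[k]
	return v, ok
}

func (s *stubStore) Propose(k, v string) {
	s.mu.Lock()
	defer s.mu.Unlock()
	s.m[k] = v // applied immediately; no raft involved in this stub
}

func main() {
	log.Fatal(http.ListenAndServe(":12380", &httpKVAPI{store: &stubStore{m: map[string]string{}}}))
}
```

With a real store wired in, the curl commands from the Getting Started section exercise exactly these two paths.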

The raft server participates in consensus with its cluster peers.
When the REST server submits a proposal, the raft server transmits the proposal to its peers.
When raft reaches a consensus, the server publishes all committed updates over a commit channel.
For raftexample, this commit channel is consumed by the key-value store.
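
The loop below sketches that role against the raft library's public API (the [raft library](../../raft) linked above), assuming the `go.etcd.io/etcd/raft` import path. It is a heavily trimmed illustration rather than the raftexample node: peer transport, the WAL, snapshots, and configuration changes are omitted, so it only makes sense as a single-member cluster.

```go
package main

import (
	"context"
	"fmt"
	"time"

	"go.etcd.io/etcd/raft"
	"go.etcd.io/etcd/raft/raftpb"
)

// serveRaft drives a single raft.Node: proposals come in on proposeC and
// committed entry data is published on commitC for the store to apply.
func serveRaft(proposeC <-chan string, commitC chan<- string) {
	storage := raft.NewMemoryStorage()
	node := raft.StartNode(&raft.Config{
		ID:              1,
		ElectionTick:    10,
		HeartbeatTick:   1,
		Storage:         storage,
		MaxSizePerMsg:   1024 * 1024,
		MaxInflightMsgs: 256,
	}, []raft.Peer{{ID: 1}})

	ticker := time.NewTicker(100 * time.Millisecond)
	defer ticker.Stop()

	for {
		select {
		case <-ticker.C:
			node.Tick() // drives raft's election and heartbeat timeouts
		case prop := <-proposeC:
			// Hand the proposal to raft; it is not applied until it
			// reappears below as a committed entry.
			node.Propose(context.TODO(), []byte(prop))
		case rd := <-node.Ready():
			storage.Append(rd.Entries) // persist new entries (memory-only here)
			// A multi-member node would also send rd.Messages to its peers.
			for _, entry := range rd.CommittedEntries {
				if entry.Type == raftpb.EntryNormal && len(entry.Data) > 0 {
					commitC <- string(entry.Data) // publish the committed update
				}
			}
			node.Advance()
		}
	}
}

func main() {
	proposeC := make(chan string)
	commitC := make(chan string)
	go serveRaft(proposeC, commitC)

	// Re-propose until the single-member cluster has elected itself;
	// proposals made before there is a leader are dropped.
	for {
		select {
		case proposeC <- "my-key=hello":
		case c := <-commitC:
			fmt.Println("committed:", c)
			return
		}
		time.Sleep(500 * time.Millisecond)
	}
}
```

The real raftexample node follows the same Ready/Advance cycle, but it also persists entries durably, sends the outgoing raft messages to its peers over HTTP, and handles snapshots.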