---
layout: "docs"
page_title: "Manual Bootstrapping"
sidebar_current: "docs-install-bootstrapping"
description: |-
  When deploying Consul to a datacenter for the first time, there is an initial
  bootstrapping that must be done. As of Consul 0.4, an automatic bootstrapping
  is available and is the recommended approach. However, older versions only
  support a manual bootstrap that is documented here.
---

# Manually Bootstrapping a Datacenter

When deploying Consul to a datacenter for the first time, there is an initial
bootstrapping that must be done. As of Consul 0.4, an
[automatic bootstrapping](/docs/guides/bootstrapping.html) is available and is
the recommended approach. However, older versions only support a manual
bootstrap that is documented here.

Generally, the first nodes that are started are the server nodes. Remember that
an agent can run in both client and server mode. Server nodes are responsible
for running the [consensus protocol](/docs/internals/consensus.html) and
storing the cluster state. The client nodes are mostly stateless and rely on the
server nodes, so they can be started easily.

Manual bootstrapping requires that the first server deployed in a new
datacenter provide the [`-bootstrap` configuration option](/docs/agent/options.html#_bootstrap).
This option allows the server to assert leadership of the cluster without
agreement from any other server. This is necessary because at this point, there
are no other servers running in the datacenter! Let's call this first server
`Node A`. When starting `Node A`, something like the following will be logged:

```text
2014/02/22 19:23:32 [INFO] consul: cluster leadership acquired
```

Once `Node A` is running, we can start the next set of servers. There is a
[deployment table](/docs/internals/consensus.html#toc_4) that covers various
options, but it is recommended to have 3 or 5 total servers per datacenter. A
single server deployment is _**highly**_ discouraged, as data loss is inevitable
in a failure scenario. We start the next servers **without** specifying
`-bootstrap`. This is critical, since only one server should ever be running in
bootstrap mode. Once `Node B` and `Node C` are started, you should see a
message to the effect of:

```text
[WARN] raft: EnableSingleNode disabled, and no known peers. Aborting election.
```

This indicates that the node is not in bootstrap mode and will not elect
itself as leader. We can now join these machines together. Since a join
operation is symmetric, it does not matter which node initiates it. From
`Node B` and `Node C` you can do the following:

```text
$ consul join <Node A Address>
Successfully joined cluster by contacting 1 nodes.
```

Alternatively, from `Node A` you can do the following:

```text
$ consul join <Node B Address> <Node C Address>
Successfully joined cluster by contacting 2 nodes.
```

Once the join is successful, `Node A` should output something like:

```text
[INFO] raft: Added peer 127.0.0.2:8300, starting replication
....
[INFO] raft: Added peer 127.0.0.3:8300, starting replication
```

As a sanity check, the `consul info` command is a useful tool.
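As a purely illustrative example, an abridged `raft` section from its output on
the leader might look something like the following; the index values are
placeholders and most fields are omitted:

```text
$ consul info
...
raft:
	last_log_index = 15
	num_peers = 2
	state = Leader
...
```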
In particular, verify that `raft.num_peers` is now 2; you can also view the
latest log index under `raft.last_log_index`. When running `consul info` on the
followers, you should see `raft.last_log_index` converge to the same value once
the leader begins replication. That value represents the last log entry that
has been stored on disk.

This indicates that `Node B` and `Node C` have been added as peers. At this
point, all three nodes see each other as peers, `Node A` is the leader, and
replication should be working.

The final step is to remove the `-bootstrap` flag. This is important since we
don't want the node to be able to make unilateral decisions in the case of a
failure of the other two nodes. To do this, we send a `SIGINT` to `Node A` so
that it performs a graceful leave. We then remove the `-bootstrap` flag and
restart the node. The node will need to rejoin the cluster, since a graceful
exit removes it from the cluster. Any transactions that took place while
`Node A` was offline will be replicated, and the node will catch up.

Now that the servers are all started and replicating to each other, all the
remaining clients can be joined. Clients are much easier, as they can be started
and perform a `join` against any existing node. All nodes participate in a
gossip protocol to perform basic discovery, so clients will automatically find
the servers and register themselves.

-> If you accidentally start another server with the flag set, do not fret.
Shut down the node and remove the `raft/` folder from the data directory. This
will remove the bad state caused by being in `-bootstrap` mode. Then restart the
node and join the cluster normally.
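As a rough sketch of that recovery, assuming a data directory of `/tmp/consul`
(a placeholder path) and that the misconfigured agent has already been stopped:

```text
# clear the bad Raft state left behind by bootstrap mode
$ rm -rf /tmp/consul/raft

# restart the agent without -bootstrap, using the same flags as the other servers
$ consul agent -server -data-dir=/tmp/consul

# then, from another terminal, rejoin the cluster
$ consul join <Node A Address>
```

The exact agent flags should match however the node was originally started,
minus `-bootstrap`.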