---
layout: "docs"
page_title: "Multiple Datacenters - Advanced Federation with Network Areas"
sidebar_current: "docs-guides-areas"
description: |-
  One of the key features of Consul is its support for multiple datacenters. The architecture of Consul is designed to promote low coupling of datacenters so that connectivity issues or failure of any datacenter does not impact the availability of Consul in other datacenters. This means each datacenter runs independently, each having a dedicated group of servers and a private LAN gossip pool.
---

# Multiple Datacenters
## Advanced Federation with Network Areas

[//]: # ( ~> The network area functionality described here is available only in )
[//]: # ( [Consul Enterprise](https://www.hashicorp.com/products/consul/) version 0.8.0 and later. )

<%= enterprise_alert :consul %>

One of the key features of Consul is its support for multiple datacenters.
The [architecture](/docs/internals/architecture.html) of Consul is designed to
promote low coupling of datacenters so that connectivity issues or
failure of any datacenter does not impact the availability of Consul in other
datacenters. This means each datacenter runs independently, each having a dedicated
group of servers and a private LAN [gossip pool](/docs/internals/gossip.html).

In general, data is not replicated between different Consul datacenters. When a
request is made for a resource in another datacenter, the local Consul servers forward
an RPC request to the remote Consul servers for that resource and return the results.
If the remote datacenter is not available, then those resources will also not be
available, but that won't otherwise affect the local datacenter. There are some special
situations where a limited subset of data can be replicated, such as with Consul's built-in
[ACL replication](/docs/guides/acl.html#outages-and-acl-replication) capability, or
external tools like [consul-replicate](https://github.com/hashicorp/consul-replicate).

This guide covers the advanced form of federating Consul clusters using the network
areas capability added in [Consul Enterprise](https://www.hashicorp.com/products/consul/)
version 0.8.0. For the basic form of federation available in the open source version
of Consul, please see the [Basic Federation Guide](/docs/guides/datacenters.html)
for more details.

## Network Areas

Consul's [Basic Federation](/docs/guides/datacenters.html) support relies on all
Consul servers in all datacenters having full mesh connectivity via server RPC
(8300/tcp) and Serf WAN (8302/tcp and 8302/udp). Securing this setup requires TLS
in combination with managing a gossip keyring. In very large Consul deployments,
maintaining a full mesh between all Consul servers and managing the keyring becomes
difficult.

Consul Enterprise version 0.8.0 added support for a new federation model based on
operator-created network areas. Network areas specify a relationship between a
pair of Consul datacenters. Operators create reciprocal areas on each side of the
relationship and then join them together, so a given Consul datacenter can participate
in many areas, even when some of the peer areas cannot contact each other. This
allows for more flexible relationships between Consul datacenters, such as hub/spoke
or more general tree structures. Traffic between areas is all performed via server
RPC (8300/tcp), so it can be secured with just TLS.

Currently, Consul will only route RPC requests to datacenters it is immediately adjacent
to via an area (or via the WAN), but future versions of Consul may add routing support.

The following can be used to manage network areas:

* [Network Areas HTTP Endpoint](/api/operator/area.html)
* [Network Areas CLI](/docs/commands/operator/area.html)
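For example, an area can be created through the HTTP endpoint instead of the CLI commands used later in this guide. The following is a minimal sketch that assumes a Consul server agent reachable on the default HTTP address `127.0.0.1:8500`; see the Network Areas HTTP Endpoint documentation linked above for the complete list of supported parameters:

```text
# Assumes the local agent's default HTTP address; the response contains
# the ID of the newly created area.
(dc1) $ curl \
    --request POST \
    --data '{"PeerDatacenter": "dc2"}' \
    http://127.0.0.1:8500/v1/operator/area
```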
## Network Areas and the WAN Gossip Pool

Network areas can be used alongside Consul's [Basic Federation](/docs/guides/datacenters.html)
model and the WAN gossip pool. This helps ease migration, and clusters like the
[primary datacenter](/docs/agent/options.html#primary_datacenter) are more easily managed via
the WAN because they need to be available to all Consul datacenters.

A peer datacenter can be connected via the WAN gossip pool and a network area at the
same time, and RPCs will be forwarded as long as servers are available in either.

## Getting Started

To get started, follow the [bootstrapping guide](/docs/guides/bootstrapping.html) to
start each datacenter. After bootstrapping, we should have two datacenters, which
we can refer to as `dc1` and `dc2`. Note that datacenter names are opaque to Consul;
they are simply labels that help human operators reason about the Consul clusters.

A compatible pair of areas must be created in each datacenter:

```text
(dc1) $ consul operator area create -peer-datacenter=dc2
Created area "cbd364ae-3710-1770-911b-7214e98016c0" with peer datacenter "dc2"!
```

```text
(dc2) $ consul operator area create -peer-datacenter=dc1
Created area "2aea3145-f1e3-cb1d-a775-67d15ddd89bf" with peer datacenter "dc1"!
```

Now you can query for the members of the area:

```text
(dc1) $ consul operator area members
Area                                  Node        Address         Status  Build         Protocol  DC   RTT
cbd364ae-3710-1770-911b-7214e98016c0  node-1.dc1  127.0.0.1:8300  alive   0.8.0_entrc1  2         dc1  0s
```

Consul will automatically make sure that all servers within the datacenter where
the area was created are joined to the area using the LAN information. We need to
join with at least one Consul server in the other datacenter to complete the area:

```text
(dc1) $ consul operator area join -peer-datacenter=dc2 127.0.0.2
Address    Joined  Error
127.0.0.2  true    (none)
```

With a successful join, we should now see the remote Consul servers as part of the
area's members:

```text
(dc1) $ consul operator area members
Area                                  Node        Address         Status  Build         Protocol  DC   RTT
cbd364ae-3710-1770-911b-7214e98016c0  node-1.dc1  127.0.0.1:8300  alive   0.8.0_entrc1  2         dc1  0s
cbd364ae-3710-1770-911b-7214e98016c0  node-2.dc2  127.0.0.2:8300  alive   0.8.0_entrc1  2         dc2  581.649µs
```
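The same membership information is also available from the HTTP endpoint. As a small sketch, again assuming the local agent's default HTTP address and reusing the area ID from the output above:

```text
# Returns the area's members as JSON.
(dc1) $ curl \
    http://127.0.0.1:8500/v1/operator/area/cbd364ae-3710-1770-911b-7214e98016c0/members
```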
Now we can route RPC commands in both directions. Here's a sample command to set a KV
entry in dc2 from dc1:

```text
(dc1) $ consul kv put -datacenter=dc2 hello world
Success! Data written to: hello
```

The DNS interface supports federation as well:

```text
(dc1) $ dig @127.0.0.1 -p 8600 consul.service.dc2.consul

; <<>> DiG 9.8.3-P1 <<>> @127.0.0.1 -p 8600 consul.service.dc2.consul
; (1 server found)
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 49069
;; flags: qr aa rd; QUERY: 1, ANSWER: 1, AUTHORITY: 0, ADDITIONAL: 0
;; WARNING: recursion requested but not available

;; QUESTION SECTION:
;consul.service.dc2.consul.    IN    A

;; ANSWER SECTION:
consul.service.dc2.consul. 0  IN    A    127.0.0.2

;; Query time: 3 msec
;; SERVER: 127.0.0.1#8600(127.0.0.1)
;; WHEN: Wed Mar 29 11:27:35 2017
;; MSG SIZE  rcvd: 59
```

There are a few networking requirements that must be satisfied for this to
work. Of course, all server nodes must be able to talk to each other via their server
RPC ports (8300/tcp). If service discovery is to be used across datacenters, the
network must be able to route traffic between IP addresses across regions as well.
Usually, this means that all datacenters must be connected using a VPN or other
tunneling mechanism. Consul does not handle VPN or NAT traversal for you.

The [`translate_wan_addrs`](/docs/agent/options.html#translate_wan_addrs) configuration
provides a basic address rewriting capability.
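For example, a server that should be reached from other datacenters on a different address than its LAN address could use a configuration fragment along these lines. This is only a sketch: the file name and the `advertise_addr_wan` value are placeholders, and the correct addresses depend entirely on your network topology; see the agent configuration documentation for details.

```text
# Hypothetical config fragment; place it in the agent's configuration directory.
$ cat server-wan.json
{
  "translate_wan_addrs": true,
  "advertise_addr_wan": "203.0.113.10"
}
```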