---
layout: "docs"
page_title: "Consul Architecture"
sidebar_current: "docs-internals-architecture"
description: |-
  Consul is a complex system that has many different moving parts. To help users and developers of Consul form a mental model of how it works, this page documents the system architecture.
---

# Consul Architecture

Consul is a complex system that has many different moving parts. To help
users and developers of Consul form a mental model of how it works, this
page documents the system architecture.

~> **Advanced Topic!** This page covers technical details of
the internals of Consul. You don't need to know these details to effectively
operate and use Consul. These details are documented here for those who wish
to learn about them without having to go spelunking through the source code.

## Glossary

Before describing the architecture, we provide a glossary of terms to help
clarify what is being discussed:

* Agent - An agent is the long-running daemon on every member of the Consul cluster.
It is started by running `consul agent`. The agent is able to run in either *client*
or *server* mode. Since all nodes must be running an agent, it is simpler to refer to
the node as being either a client or server, but there are other instances of the agent. All
agents can run the DNS or HTTP interfaces, and are responsible for running checks and
keeping services in sync.

* Client - A client is an agent that forwards all RPCs to a server. The client is relatively
stateless. The only background activity a client performs is taking part in the LAN gossip
pool. This has a minimal resource overhead and consumes only a small amount of network
bandwidth.

* Server - A server is an agent with an expanded set of responsibilities including
participating in the Raft quorum, maintaining cluster state, responding to RPC queries,
exchanging WAN gossip with other datacenters, and forwarding queries to leaders or
remote datacenters.

* Datacenter - While the definition of a datacenter seems obvious, there are subtle details
that must be considered. For example, in EC2, are multiple availability zones considered
to comprise a single datacenter? We define a datacenter to be a networking environment that is
private, low latency, and high bandwidth. This excludes communication that would traverse
the public internet, but for our purposes multiple availability zones within a single EC2
region would be considered part of a single datacenter.

* Consensus - When used in our documentation we use consensus to mean agreement upon
the elected leader as well as agreement on the ordering of transactions. Since these
transactions are applied to a
[finite-state machine](https://en.wikipedia.org/wiki/Finite-state_machine), our definition
of consensus implies the consistency of a replicated state machine. Consensus is described
in more detail on [Wikipedia](https://en.wikipedia.org/wiki/Consensus_(computer_science)),
and our implementation is described [here](/docs/internals/consensus.html).

* Gossip - Consul is built on top of [Serf](https://www.serf.io/), which provides a full
[gossip protocol](https://en.wikipedia.org/wiki/Gossip_protocol) that is used for multiple purposes.
Serf provides membership, failure detection, and event broadcast.
Our use of these
is described more in the [gossip documentation](/docs/internals/gossip.html). It is enough to know
that gossip involves random node-to-node communication, primarily over UDP.

* LAN Gossip - Refers to the LAN gossip pool which contains nodes that are all
located on the same local area network or datacenter.

* WAN Gossip - Refers to the WAN gossip pool which contains only servers. These
servers are primarily located in different datacenters and typically communicate
over the internet or wide area network.

* RPC - Remote Procedure Call. This is a request / response mechanism allowing a
client to make a request of a server.

## 10,000 foot view

From a 10,000 foot altitude the architecture of Consul looks like this:

<div class="center">
[![Consul Architecture](/assets/images/consul-arch.png)](/assets/images/consul-arch.png)
</div>

Let's break down this image and describe each piece. First of all, we can see
that there are two datacenters, labeled "one" and "two". Consul has first-class
support for [multiple datacenters](/docs/guides/datacenters.html) and
expects this to be the common case.

Within each datacenter, we have a mixture of clients and servers. It is expected
that there be three to five servers. This strikes a balance between
availability in the case of failure and performance, as consensus gets progressively
slower as more machines are added. However, there is no limit to the number of clients,
and they can easily scale into the thousands or tens of thousands.

All the nodes that are in a datacenter participate in a [gossip protocol](/docs/internals/gossip.html).
This means there is a gossip pool that contains all the nodes for a given datacenter. This serves
a few purposes: first, there is no need to configure clients with the addresses of servers;
discovery is done automatically. Second, the work of detecting node failures
is not placed on the servers but is distributed. This makes failure detection much more
scalable than naive heartbeating schemes. Third, the gossip pool is used as a messaging layer to notify
when important events such as leader election take place.

The servers in each datacenter are all part of a single Raft peer set. This means that
they work together to elect a single leader, a selected server which has extra duties. The leader
is responsible for processing all queries and transactions. Transactions must also be replicated to
all peers as part of the [consensus protocol](/docs/internals/consensus.html). Because of this
requirement, when a non-leader server receives an RPC request, it forwards it to the cluster leader.

The server nodes also operate as part of a WAN gossip pool. This pool is different from the LAN pool
as it is optimized for the higher latency of the internet and is expected to contain only
other Consul server nodes. The purpose of this pool is to allow datacenters to discover each
other in a low-touch manner. Bringing a new datacenter online is as easy as joining the existing
WAN gossip pool. Because the servers are all operating in this pool, it also enables cross-datacenter
requests. When a server receives a request for a different datacenter, it forwards it to a random
server in the correct datacenter. That server may then forward to the local leader.
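To make that forwarding path a bit more concrete, here is a minimal sketch using the
official Go API client (`github.com/hashicorp/consul/api`). It assumes a local agent
reachable at the default HTTP address and a second datacenter named "dc2"; the service
name "web" is purely illustrative. The local agent turns the HTTP call into an RPC to a
server in its own datacenter, which then forwards it to a server in the requested
datacenter.

```go
package main

import (
	"fmt"
	"log"

	"github.com/hashicorp/consul/api"
)

func main() {
	// Connect to the local agent's HTTP API (127.0.0.1:8500 by default).
	// The agent forwards the underlying RPC to a server in the local datacenter.
	client, err := api.NewClient(api.DefaultConfig())
	if err != nil {
		log.Fatal(err)
	}

	// Requesting another datacenter: the local server forwards the RPC to a
	// random server in "dc2", which may in turn forward it to that
	// datacenter's leader. "web" and "dc2" are illustrative names.
	services, _, err := client.Catalog().Service("web", "", &api.QueryOptions{
		Datacenter: "dc2",
	})
	if err != nil {
		log.Fatal(err)
	}

	for _, s := range services {
		fmt.Printf("%s -> %s:%d\n", s.Node, s.ServiceAddress, s.ServicePort)
	}
}
```

Note that nothing about "dc2" needs to be configured on the client agent; the WAN gossip
pool is what lets the local servers locate servers in the remote datacenter.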
The overall result is very low coupling between datacenters, but because of failure detection,
connection caching, and multiplexing, cross-datacenter requests are relatively fast and reliable.

In general, data is not replicated between different Consul datacenters. When a
request is made for a resource in another datacenter, the local Consul servers forward
an RPC request to the remote Consul servers for that resource and return the results.
If the remote datacenter is not available, then those resources will also not be
available, but that won't otherwise affect the local datacenter. There are some special
situations where a limited subset of data can be replicated, such as with Consul's built-in
[ACL replication](/docs/guides/acl.html#outages-and-acl-replication) capability, or
external tools like [consul-replicate](https://github.com/hashicorp/consul-replicate).

In some places, client agents may cache data from the servers to make it
available locally for performance and reliability. Examples include Connect
certificates and intentions, which allow the client agent to make local decisions
about inbound connection requests without a round trip to the servers. Some API
endpoints also support optional result caching. This helps reliability because
the local agent can continue to respond to some queries like service-discovery
or Connect authorization from cache even if the connection to the servers is
disrupted or the servers are temporarily unavailable.

## Getting in depth

At this point we've covered the high-level architecture of Consul, but there are many
more details for each of the subsystems. The [consensus protocol](/docs/internals/consensus.html) is
documented in detail, as is the [gossip protocol](/docs/internals/gossip.html). The [documentation](/docs/internals/security.html)
for the security model and protocols used is also available.

For other details, either consult the code, ask in IRC, or reach out to the mailing list.
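Finally, as a small illustration of the optional result caching described earlier, the
sketch below asks the local agent to answer a health query from its cache. It again uses
the Go API client (`github.com/hashicorp/consul/api`) and assumes its `QueryOptions`
struct exposes `UseCache` and `MaxAge` fields for agent-level caching in this version;
treat it as a sketch of the idea rather than a definitive recipe. The service name "web"
is again illustrative.

```go
package main

import (
	"fmt"
	"log"
	"time"

	"github.com/hashicorp/consul/api"
)

func main() {
	client, err := api.NewClient(api.DefaultConfig())
	if err != nil {
		log.Fatal(err)
	}

	// Ask the local agent to serve this query from its cache when possible.
	// If the servers are unreachable, a previously cached (possibly stale)
	// result can still be returned, which is the reliability benefit
	// described above. UseCache and MaxAge are assumed client fields.
	entries, _, err := client.Health().Service("web", "", true, &api.QueryOptions{
		UseCache: true,
		MaxAge:   30 * time.Second,
	})
	if err != nil {
		log.Fatal(err)
	}

	fmt.Printf("healthy instances of web: %d\n", len(entries))
}
```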