
---
layout: "docs"
page_title: "Nomad Client and Server Requirements"
sidebar_current: "docs-cluster-requirements"
description: |-
  Learn about the resource and network requirements for Nomad servers and
  clients, including hardware sizing, resource reservation, and network
  topology.
---

# Cluster Requirements

## Resources (RAM, CPU, etc.)

**Nomad servers** may need to run on large machine instances. We suggest 8+
cores, 32 GB+ of memory, 80 GB+ of disk, and significant network bandwidth. The
core count and network recommendations ensure high throughput, as Nomad relies
heavily on network communication and the servers manage all the nodes in the
region while performing scheduling. The memory and disk requirements exist
because Nomad stores all state in memory and writes two snapshots of this data
to disk. Thus disk should be at least twice the memory available to the server
when deploying a high-load cluster.

**Nomad clients** support reserving resources on the node that should not be
used by Nomad. This can be used to target a specific resource utilization per
node and to reserve resources for applications running outside of Nomad's
supervision, such as Consul and the operating system itself.

Please see the [reservation configuration](/docs/agent/configuration/client.html#reserved) for
more detail.

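For example, a client agent configuration might reserve resources along these
lines. The amounts below are illustrative, not recommendations; see the linked
documentation for the exact fields supported by your Nomad version:

```hcl
client {
  enabled = true

  # Resources set aside for the OS, Consul, and other non-Nomad
  # processes. Tune these values for your environment.
  reserved {
    cpu            = 500  # MHz
    memory         = 512  # MB
    disk           = 1024 # MB
    reserved_ports = "22,8500-8600"
  }
}
```

With this in place, the scheduler subtracts the reserved amounts from the
node's capacity before placing allocations on it.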
## Network Topology

**Nomad servers** are expected to have sub-10-millisecond network latencies
between each other to ensure liveness and high-throughput scheduling. Nomad
servers can be spread across multiple datacenters to achieve high availability,
provided they have low-latency connections between them.

For example, on AWS every region comprises multiple availability zones
connected by very low latency links, so every zone can be modeled as a Nomad
datacenter, and a single Nomad server in each zone can be joined with the
others to form a quorum and a region.

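Under that AWS layout, the server agent in each zone might be configured along
these lines. The region, datacenter, and expected server count are illustrative
values for a three-zone region, not required settings:

```hcl
# Server agent in availability zone us-east-1a; the agents in the
# other zones would differ only in their `datacenter` value.
region     = "us-east-1"
datacenter = "us-east-1a"

server {
  enabled = true

  # Wait for three servers (one per zone) before electing a leader.
  bootstrap_expect = 3
}
```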
Nomad servers use Raft for state replication. Raft is strongly consistent and
needs a quorum of servers to function, so we recommend running an odd number of
Nomad servers in a region, usually three to five. A cluster of three servers
can withstand the failure of one server, and a cluster of five can withstand
two. Adding more servers to the quorum increases the time needed to replicate
state and hence decreases throughput, so we don't recommend having more than
seven servers in a region.

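The failure-tolerance numbers above follow directly from Raft's majority
requirement. A small sketch (not part of Nomad) makes the arithmetic explicit:

```go
package main

import "fmt"

// quorum returns the number of servers that must agree for Raft to
// commit an entry: a strict majority of the cluster.
func quorum(servers int) int {
	return servers/2 + 1
}

// faultTolerance returns how many servers can fail while the
// remainder still forms a quorum.
func faultTolerance(servers int) int {
	return servers - quorum(servers)
}

func main() {
	for _, n := range []int{1, 3, 5, 7} {
		fmt.Printf("servers=%d quorum=%d tolerates=%d failure(s)\n",
			n, quorum(n), faultTolerance(n))
	}
}
```

Note that an even-sized cluster tolerates no more failures than the next
smaller odd size (four servers tolerate one failure, just like three), which is
why odd server counts are recommended.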
**Nomad clients** do not have the same latency requirements as servers, since
they do not participate in Raft. Clients can therefore have 100+ millisecond
latency to their servers, which allows a single set of Nomad servers to service
clients spread geographically across a continent, or even the world in the case
of a single "global" region with many datacenters.