---
layout: docs
page_title: Hardware Requirements
sidebar_title: Hardware Requirements
description: |-
  Learn about Nomad client and server requirements such as memory and CPU
  recommendations, network topologies, and more.
---

# Hardware Requirements

## Resources (RAM, CPU, etc.)

**Nomad servers** may need to be run on large machine instances. We suggest
having 4-8+ cores, 16-32 GB+ of memory, 40-80 GB+ of **fast** disk, and
significant network bandwidth. The core count and network recommendations
ensure high throughput, as Nomad relies heavily on network communication and
the servers manage all the nodes in the region and perform scheduling. The
memory and disk requirements stem from the fact that Nomad stores all state in
memory and writes two snapshots of this data to disk, which causes heavy I/O
in busy clusters with many writes. Disk should therefore be at least twice the
memory available to the server when deploying a high-load cluster. When
running on AWS, prefer NVMe or Provisioned IOPS SSD storage for the data
directory.

These recommendations are guidelines, and operators should always monitor
Nomad's resource usage to determine whether the machines are under- or
over-sized.

**Nomad clients** support reserving resources on the node that should not be
used by Nomad. This should be used to target a specific resource utilization
per node and to reserve resources for applications running outside of Nomad's
supervision, such as Consul and the operating system itself.

Please see the [reservation configuration](/docs/configuration/client#reserved)
for more detail.
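As an illustration, a minimal client configuration sketch that reserves
resources for the host might look like the following; the specific values are
placeholder examples, not recommendations:

```hcl
client {
  enabled = true

  # Resources set aside for the OS, Consul, and anything else running
  # outside of Nomad's supervision.
  reserved {
    cpu            = 500            # MHz withheld from scheduling
    memory         = 512            # MB withheld from scheduling
    disk           = 1024           # MB withheld from scheduling
    reserved_ports = "22,8500-8600" # ports Nomad will never allocate to tasks
  }
}
```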
## Network Topology

**Nomad servers** are expected to have sub-10-millisecond network latencies
between each other to ensure liveness and high-throughput scheduling. Nomad
servers can be spread across multiple datacenters to achieve high availability
if they have low-latency connections between them.

For example, on AWS every region comprises multiple availability zones with
very low latency links between them, so each zone can be modeled as a Nomad
datacenter, and each zone can run a single Nomad server; these servers can
then be connected to form a quorum and a region.

Nomad servers use Raft for state replication, and because Raft is highly
consistent it needs a quorum of servers to function, so we recommend running
an odd number of Nomad servers in a region. Running 3-5 servers per region is
typical: a cluster of three servers can withstand the failure of one server,
and a cluster of five can withstand two. Adding more servers to the quorum
increases the time needed to replicate state and therefore decreases
throughput, so we don't recommend having more than seven servers in a region.

**Nomad clients** do not have the same latency requirements as servers since
they do not participate in Raft. Clients can therefore have 100+ millisecond
latency to their servers. This allows a single set of Nomad servers to service
clients spread geographically across a continent, or even the world in the
case of a single "global" region with many datacenters.

## Ports Used

Nomad requires 3 different ports to work properly on servers and 2 on clients,
some on TCP, UDP, or both protocols. Below we document the requirements for
each port.

- HTTP API (Default 4646). This is used by clients and servers to serve the
  HTTP API. TCP only.

- RPC (Default 4647). This is used for internal RPC communication between
  client agents and servers, and for inter-server traffic. TCP only.

- Serf WAN (Default 4648). This is used by servers to gossip both over the LAN
  and WAN to other servers. It isn't required that Nomad clients can reach
  this address. TCP and UDP.

When tasks ask for dynamic ports, they are allocated out of the port range
between 20,000 and 32,000. This is well under the ephemeral port range
suggested by the [IANA](https://en.wikipedia.org/wiki/Ephemeral_port). If your
operating system's default ephemeral port range overlaps with Nomad's dynamic
port range, you should tune the OS to avoid this overlap.

On Linux this can be checked and set as follows:

```shell-session
$ cat /proc/sys/net/ipv4/ip_local_port_range
32768   60999
$ echo "49152 65535" > /proc/sys/net/ipv4/ip_local_port_range
```

## Bridge Networking and `iptables`

Nomad's task group networks and Consul Connect integration use bridge
networking and iptables to send traffic between containers. The Linux kernel
bridge module has three "tunables" that control whether traffic crossing the
bridge is processed by iptables. Some operating systems (Red Hat, CentOS, and
Fedora in particular) configure these tunables to optimize for VM workloads
where iptables rules might not be correctly configured for guest traffic.

These tunables can be set to allow iptables processing for the bridge network
as follows:

```shell-session
$ echo 1 > /proc/sys/net/bridge/bridge-nf-call-arptables
$ echo 1 > /proc/sys/net/bridge/bridge-nf-call-ip6tables
$ echo 1 > /proc/sys/net/bridge/bridge-nf-call-iptables
```

To preserve these settings on startup of a client node, add a file including
the following to `/etc/sysctl.d/` or remove the file your Linux distribution
puts in that directory.

```text
net.bridge.bridge-nf-call-arptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
```
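These tunables matter whenever a job opts into bridge networking at the task
group level. A minimal job sketch of such a group follows; the job, group,
task, port label, image, and target port are illustrative only:

```hcl
job "bridge-example" {
  datacenters = ["dc1"]

  group "web" {
    # Allocations in this group get their own network namespace attached to
    # Nomad's shared bridge; traffic crossing that bridge is only filtered
    # correctly when the bridge-nf-call-* tunables above are enabled.
    network {
      mode = "bridge"

      port "http" {
        to = 8080 # map the allocated host port to port 8080 inside the group
      }
    }

    task "app" {
      driver = "docker"

      config {
        image = "nginx:alpine"
      }
    }
  }
}
```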