---
layout: docs
page_title: Nomad Enterprise
sidebar_title: Nomad Enterprise
description: >-
  Nomad Enterprise adds operations, collaboration, and governance capabilities
  to Nomad.

  Features include Resource Quotas, Sentinel Policies, and Advanced Autopilot.
---

# Nomad Enterprise

Nomad Enterprise adds collaboration, operational, and governance capabilities
to Nomad. Nomad Enterprise is available as a base Platform package with an
optional Governance & Policy add-on module.

Please navigate the sub-sections for more information about each package and
its features in detail.

## Nomad Enterprise Licensing

Nomad Enterprise requires a license to run. When an Enterprise server first
starts, it is issued a temporary license that includes all features. This
temporary license is valid for 6 hours, which allows users to test enterprise
features and gives operators enough time to apply their enterprise license.

If a server is never given a valid license and the temporary license expires,
the server will shut down. If a valid (non-temporary) license expires, the
cluster will continue to function, but write operations to enterprise features
will be disabled.

~> **Note:** A Nomad Enterprise cluster cannot be downgraded to the open
source version of Nomad. Servers running the open source version of Nomad will
panic if they are joined to a Nomad Enterprise cluster. See issue [gh-9958]
for more details.

See the [License commands](/docs/commands/license) for more information on
interacting with the Enterprise License.
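As a sketch of the typical workflow, an operator can apply and inspect a license from the CLI. The license file name below is an arbitrary example, and command behavior may vary by release; verify with `nomad license -h` before relying on it:

```shell-session
# Apply an enterprise license from a file; "license.hclic" is a
# placeholder for the license file provided by HashiCorp.
$ nomad license put license.hclic

# Inspect the license currently in effect, including its expiration
# time and the feature set it enables.
$ nomad license get
```

Applying the license on any server propagates it to the rest of the cluster.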
## Nomad Enterprise Platform

Nomad Enterprise Platform enables operators to easily upgrade Nomad and
enhances performance and availability through Advanced Autopilot features such
as Automated Upgrades, Enhanced Read Scalability, and Redundancy Zones.

### Automated Upgrades

Automated Upgrades allows operators to deploy a complete cluster of new
servers and then simply wait for the upgrade to complete. As the new servers
join the cluster, server logic checks the version of each Nomad server node.
If the version is higher than the version on the current set of voters, it
will avoid promoting the new servers to voters until the number of new
servers matches the number of existing servers at the previous version. Once
the numbers match, Nomad will begin to promote new servers and demote old
ones.

See the [Autopilot - Upgrade Migrations](https://learn.hashicorp.com/tutorials/nomad/autopilot#upgrade-migrations)
documentation for a thorough overview.

### Automated Backups

Automated Backups allows operators to run the snapshot agent as a long-running
daemon process or in a one-shot mode from a batch job. The agent takes
snapshots of the state of the Nomad servers and saves them locally, or pushes
them to an optional remote storage service, such as Amazon S3.

This capability provides an enterprise solution for backing up and restoring
the state of Nomad servers within an environment in an automated manner. These
snapshots are atomic and point-in-time.

See the [Operator Snapshot agent](/docs/commands/operator/snapshot-agent)
documentation for a thorough overview.

### Enhanced Read Scalability

This feature enables an operator to introduce non-voting server nodes to a
Nomad cluster. Non-voting servers will receive the replication stream but will
not take part in quorum (required by the leader before log entries can be
committed).
Adding explicit non-voters will scale reads and scheduling without
impacting write latency.

See the [Autopilot - Read Scalability](https://learn.hashicorp.com/tutorials/nomad/autopilot#server-read-and-scheduling-scaling)
documentation for a thorough overview.

### Redundancy Zones

Redundancy Zones enables an operator to deploy a non-voting server as a hot
standby server on a per availability zone basis. For example, in an
environment with three availability zones an operator can run one voter and
one non-voter in each availability zone, for a total of six servers. If an
availability zone is completely lost, only one voter will be lost, so the
cluster remains available. If a voter is lost in an availability zone, Nomad
will promote the non-voter to a voter automatically, putting the hot standby
server into service quickly.

See the [Autopilot - Redundancy Zones](https://learn.hashicorp.com/tutorials/nomad/autopilot#redundancy-zones)
documentation for a thorough overview.

### Multiple Vault Namespaces

Multi-Vault Namespaces enables an operator to configure a single Nomad cluster
to support multiple Vault Namespaces for topology simplicity and fleet
consolidation when running Nomad and Vault together. Nomad will automatically
retrieve a Vault token based on a job's defined Vault Namespace and make it
available to the specified Nomad task.

See the [Vault Integration documentation](/docs/integrations/vault-integration#enterprise-configuration)
for more information.

## Governance & Policy

Governance & Policy features are part of an add-on module that enables an
organization to securely operate Nomad at scale across multiple teams through
features such as Audit Logging, Resource Quotas, and Sentinel Policies.
### Audit Logging

Secure clusters with enhanced risk management and operational traceability to
fulfill compliance requirements. This Enterprise feature provides
administrators with a complete set of records for all user-issued actions in
Nomad.

With Audit Logging, enterprises can proactively identify access anomalies,
ensure enforcement of their security policies, and diagnose cluster behavior
by viewing preceding user operations. Designed as an HTTP API based audit
logging system, each audit event is captured with relevant request and
response information in a JSON format that is easily digestible and familiar
to operators.

See the [Audit Logging documentation](/docs/configuration/audit) for a
thorough overview.

### Resource Quotas

Resource Quotas enable an operator to limit resource consumption across teams
or projects to reduce waste and align budgets. In Nomad Enterprise, operators
can define quota specifications and apply them to namespaces. When a quota is
attached to a namespace, the jobs within the namespace may not consume more
resources than the quota specification allows.

This allows operators to partition a shared cluster and ensure that no single
actor can consume all of the cluster's resources.

See the [Resource Quotas Guide](https://learn.hashicorp.com/tutorials/nomad/quotas)
for a thorough overview.

### Sentinel Policies

In Nomad Enterprise, operators can create Sentinel policies for fine-grained
policy enforcement. Sentinel policies build on top of the ACL system and allow
operators to define policies such as disallowing jobs to be submitted to
production on Fridays or only allowing users to run jobs that use
pre-authorized Docker images. Sentinel policies are defined as code, giving
operators considerable flexibility to meet compliance requirements.
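As an illustration of the pre-authorized-images idea, the following policy sketch restricts every task in a submitted job to the Docker driver. The policy and rule names are arbitrary:

```sentinel
# Fail job submission unless every task uses the "docker" driver.
main = rule { all_drivers_docker }

# Iterate over every task group and task in the submitted job.
all_drivers_docker = rule {
  all job.task_groups as tg {
    all tg.tasks as task {
      task.driver is "docker"
    }
  }
}
```

A policy like this is registered with `nomad sentinel apply`, after which the scheduler evaluates it on every job submission.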
See the [Sentinel Policies Guide](https://learn.hashicorp.com/tutorials/nomad/sentinel)
for a thorough overview.

## Multi-Cluster & Efficiency

Multi-Cluster & Efficiency features are part of an add-on module that enables
an organization to operate Nomad at scale across multiple clusters through
features such as Multiregion Deployments.

### Multiregion Deployments

[Multiregion Deployments] enable an operator to deploy a single job to multiple
federated regions. This includes the ability to control the order of rollouts
and how each region will behave in the event of a deployment failure.

### Dynamic Application Sizing

Dynamic Application Sizing enables organizations to optimize the resource
consumption of applications using sizing recommendations from Nomad. This
feature builds on Nomad [autoscaling capabilities] to remove the trial-and-error
routine of manually setting resource requirements. DAS comprises support for
vertical [scaling policies], a new API and UI for reviewing recommended job
changes, and a collection of Nomad Autoscaler plugins informed by best-practice
statistical measures.

## Try Nomad Enterprise

Click [here](https://www.hashicorp.com/go/nomad-enterprise) to set up a demo or
request a trial of Nomad Enterprise.

[multiregion deployments]: /docs/job-specification/multiregion
[autoscaling capabilities]: /docs/autoscaling
[scaling policies]: /docs/autoscaling/policy
[gh-9958]: https://github.com/hashicorp/nomad/issues/9958