---
layout: docs
page_title: Key Management
description: Learn about key management in Nomad.
---

# Key Management

Nomad servers maintain an encryption keyring used to encrypt [Variables][] and
sign task [workload identities][]. The servers store key metadata in raft, but
the encryption key material is stored in a separate file in the `keystore`
subdirectory of the Nomad [data directory][]. These files have the extension
`.nks.json`. The key material in each file is wrapped in a unique key encryption
key (KEK) that is not shared between servers.
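
For example, you can list the key files on one server's disk. The path and key
IDs below are illustrative; the exact location depends on your [data
directory][] configuration.

```shell-session
# Illustrative listing; file names follow the key IDs, and the path
# depends on where data_dir points on this server.
$ ls /opt/nomad/data/server/keystore/
4b1b9c53-29a9-ccf8-24b5-3d481b6b23ab.nks.json
f4a7862c-91ff-c04c-d529-8c5dc1dd24a6.nks.json
```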

## Key Rotation

Only one key in the keyring is "active" at any given time, and all encryption
and signing operations happen on the leader. Nomad automatically rotates the
active encryption key every 30 days. When a key is rotated, the existing keys
are marked as "inactive" but not deleted, so they can be used for decrypting
previously encrypted variables and verifying workload identities for existing
allocations.
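
You can inspect key states with the `keyring list` subcommand. The key IDs,
states, and timestamps shown here are illustrative:

```shell-session
$ nomad operator root keyring list
Key                                   State     Create Time
f4a7862c-91ff-c04c-d529-8c5dc1dd24a6  active    2023-01-01T00:00:00Z
4b1b9c53-29a9-ccf8-24b5-3d481b6b23ab  inactive  2022-12-02T00:00:00Z
```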
    28  
    29  If you believe key material has been compromised, you can execute [`nomad
    30  operator root keyring rotate -full`][]. A new "active" key will be created and
    31  "inactive" keys will be marked "rekeying". Nomad will asynchronously decrypt and
    32  re-encrypt all variables with the new key. As each key's variables are encrypted
    33  with the new key, the old key will marked as "deprecated".
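
For example, you can force a full rotation and then watch the old keys
progress through the states described above. The output is illustrative:

```shell-session
$ nomad operator root keyring rotate -full
Key                                   State   Create Time
b3b5e6a0-92e2-3d7f-80a1-5c9d4e2f1a3b  active  2023-01-15T00:00:00Z

# Re-run list to watch old keys move from "rekeying" to "deprecated"
# as their variables are re-encrypted in the background.
$ nomad operator root keyring list
```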
    34  
    35  ## Key Replication
    36  
    37  When a leader is elected, it creates the keyring if it does not already
    38  exist. When a key is added, the metadata will be replicated via raft. Each
    39  server runs a key replication process that watches for changes to the state
    40  store and will fetch the key material from the leader asynchronously, falling
    41  back to retrieving from other servers in the case where a key is written
    42  immediately before a leader election.
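
One way to check that each server can see the replicated key metadata is to
query the servers individually using the standard `-address` flag. The server
addresses in this sketch are hypothetical:

```shell-session
# Query each server in turn; all of them should report the same keys.
$ for addr in https://nomad-server-1:4646 https://nomad-server-2:4646 https://nomad-server-3:4646; do
    nomad operator root keyring list -address="$addr"
  done
```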
    43  
    44  ## Restoring the Keyring from Backup
    45  
    46  Key material is never stored in raft. This prevents an attacker with a backup of
    47  the state store from getting access to encrypted variables. It also allows the
    48  HashiCorp engineering and support organization to safely handle cluster
    49  snapshots you might provide without exposing any of your keys or variables.
    50  
    51  However, this means that to restore a cluster from snapshot you need to also
    52  provide the keystore directory with the `.nks.json` key files on at least one
    53  server. The `.nks.json` key files are unique per server, but only one server's
    54  key files are needed to recover the cluster. Operators should include these
    55  files as part of your organization's backup and recovery strategy for the
    56  cluster.
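
A minimal backup sketch, assuming a data directory of `/opt/nomad/data` (the
keystore path will vary with your configuration):

```shell-session
# Capture the raft state store.
$ nomad operator snapshot save backup.snap

# Back up one server's keystore files alongside the snapshot; only
# one server's keystore is needed to recover the keyring.
$ tar -czf keystore.tar.gz -C /opt/nomad/data/server keystore
```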

[variables]: /docs/concepts/variables
[workload identities]: /docs/concepts/workload-identity
[data directory]: /docs/configuration#data_dir
[`nomad operator root keyring rotate -full`]: /docs/commands/operator/root/keyring-rotate