     1  ---
     2  layout: docs
     3  page_title: Vault Integration and Retrieving Dynamic Secrets
     4  sidebar_title: Vault
     5  description: |-
     6    Learn how to deploy an application in Nomad and retrieve dynamic credentials
     7    by integrating with Vault.
     8  ---
     9  
    10  # Vault Integration
    11  
    12  Nomad integrates seamlessly with [Vault][vault] and allows your application to
    13  retrieve dynamic credentials for various tasks. In this guide, you will deploy a
    14  web application that needs to authenticate against [PostgreSQL][postgresql] to
    15  display data from a table to the user.
    16  
    17  ## Reference Material
    18  
    19  - [Vault Integration Documentation][vault-integration]
    20  - [Nomad Template Stanza Integration with Vault][nomad-template-vault]
    21  - [Secrets Task Directory][secrets-task-directory]
    22  
    23  ## Estimated Time to Complete
    24  
    25  20 minutes
    26  
    27  ## Challenge
    28  
    29  Think of a scenario where a Nomad operator needs to deploy an application that
    30  can quickly and safely retrieve dynamic credentials to authenticate against a
    31  database and return information.
    32  
    33  ## Solution
    34  
    35  Deploy Vault and configure the nodes in your Nomad cluster to integrate with it.
    36  Use the appropriate [templating syntax][nomad-template-vault] to retrieve
    37  credentials from Vault and then store those credentials in the
    38  [secrets][secrets-task-directory] task directory to be consumed by the Nomad
    39  task.
    40  
    41  ## Prerequisites
    42  
    43  To perform the tasks described in this guide, you need to have a Nomad
    44  environment with Consul and Vault installed. You can use this [repo][repo] to
    45  easily provision a sandbox environment. This guide will assume a cluster with
    46  one server node and three client nodes.
    47  
-> **Please Note:** This guide is for demo purposes and uses only a single
Nomad server with Vault installed alongside it. In a production cluster, 3 or 5
Nomad server nodes are recommended, along with a separate Vault cluster.
    51  
    52  ## Steps
    53  
    54  ### Step 1: Initialize Vault Server
    55  
Run the following command to initialize the Vault server and receive an
[unseal][seal] key and an initial root [token][token]. Be sure to note both the
unseal key and the initial root token, as you will need them in later steps.
    59  
    60  ```shell-session
    61  $ vault operator init -key-shares=1 -key-threshold=1
    62  ```
    63  
    64  The `vault operator init` command above creates a single Vault unseal key for
    65  convenience. For a production environment, it is recommended that you create at
    66  least five unseal key shares and securely distribute them to independent
    67  operators. The `vault operator init` command defaults to five key shares and a
    68  key threshold of three. If you provisioned more than one server, the others will
    69  become standby nodes but should still be unsealed.
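
For reference, an initialization that states those defaults explicitly would look like the following. This is shown only for illustration; do not run it in this demo environment, since Vault has already been initialized above.

```shell-session
$ vault operator init -key-shares=5 -key-threshold=3
```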
    70  
    71  ### Step 2: Unseal Vault
    72  
    73  Run the following command and then provide your unseal key to Vault.
    74  
    75  ```shell-session
    76  $ vault operator unseal
    77  ```
    78  
    79  The output of unsealing Vault will look similar to the following:
    80  
    81  ```text
    82  Key                    Value
    83  ---                    -----
    84  Seal Type              shamir
    85  Initialized            true
    86  Sealed                 false
    87  Total Shares           1
    88  Threshold              1
    89  Version                0.11.4
    90  Cluster Name           vault-cluster-d12535e5
    91  Cluster ID             49383931-c782-fdc6-443e-7681e7b15aca
    92  HA Enabled             true
    93  HA Cluster             n/a
    94  HA Mode                standby
    95  Active Node Address    <none>
    96  ```
    97  
    98  ### Step 3: Log in to Vault
    99  
   100  Use the [login][login] command to authenticate yourself against Vault using the
   101  initial root token you received earlier. You will need to authenticate to run
   102  the necessary commands to write policies, create roles, and configure a
   103  connection to your database.
   104  
   105  ```shell-session
   106  $ vault login <your initial root token>
   107  ```
   108  
   109  If your login is successful, you will see output similar to what is shown below:
   110  
   111  ```text
   112  Success! You are now authenticated. The token information displayed below
   113  is already stored in the token helper. You do NOT need to run "vault login"
   114  again. Future Vault requests will automatically use this token.
   115  ...
   116  ```
   117  
   118  ### Step 4: Write the Policy for the Nomad Server Token
   119  
   120  To use the Vault integration, you must provide a Vault token to your Nomad
   121  servers. Although you can provide your root token to easily get started, the
recommended approach is to use a token derived from a token [role][role]. This first
   123  requires writing a policy that you will attach to the token you provide to your
   124  Nomad servers. By using this approach, you can limit the set of
   125  [policies][policy] that tasks managed by Nomad can access.
   126  
   127  For this exercise, use the following policy for the token you will create for
   128  your Nomad server. Place this policy in a file named `nomad-server-policy.hcl`.
   129  
   130  ```hcl
   131  # Allow creating tokens under "nomad-cluster" token role. The token role name
   132  # should be updated if "nomad-cluster" is not used.
   133  path "auth/token/create/nomad-cluster" {
   134    capabilities = ["update"]
   135  }
   136  
   137  # Allow looking up "nomad-cluster" token role. The token role name should be
   138  # updated if "nomad-cluster" is not used.
   139  path "auth/token/roles/nomad-cluster" {
   140    capabilities = ["read"]
   141  }
   142  
# Allow looking up the token passed to Nomad to validate that the token has the
# proper capabilities. This is provided by the "default" policy.
   145  path "auth/token/lookup-self" {
   146    capabilities = ["read"]
   147  }
   148  
   149  # Allow looking up incoming tokens to validate they have permissions to access
   150  # the tokens they are requesting. This is only required if
   151  # `allow_unauthenticated` is set to false.
   152  path "auth/token/lookup" {
   153    capabilities = ["update"]
   154  }
   155  
   156  # Allow revoking tokens that should no longer exist. This allows revoking
   157  # tokens for dead tasks.
   158  path "auth/token/revoke-accessor" {
   159    capabilities = ["update"]
   160  }
   161  
   162  # Allow checking the capabilities of our own token. This is used to validate the
   163  # token upon startup.
   164  path "sys/capabilities-self" {
   165    capabilities = ["update"]
   166  }
   167  
   168  # Allow our own token to be renewed.
   169  path "auth/token/renew-self" {
   170    capabilities = ["update"]
   171  }
   172  ```
   173  
   174  You can now write a policy called `nomad-server` by running the following
   175  command:
   176  
   177  ```shell-session
   178  $ vault policy write nomad-server nomad-server-policy.hcl
   179  ```
   180  
   181  You should see the following output:
   182  
   183  ```text
   184  Success! Uploaded policy: nomad-server
   185  ```
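
If you want to double-check the policy you just uploaded, you can optionally read it back:

```shell-session
$ vault policy read nomad-server
```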
   186  
   187  You will generate the actual token in the next few steps.
   188  
   189  ### Step 5: Create a Token Role
   190  
   191  At this point, you must create a Vault token role that Nomad can use. The token
   192  role allows you to limit what Vault policies are accessible by jobs
   193  submitted to Nomad. We will use the following token role:
   194  
   195  ```json
   196  {
   197    "allowed_policies": "access-tables",
   198    "token_explicit_max_ttl": 0,
   199    "name": "nomad-cluster",
   200    "orphan": true,
   201    "token_period": 259200,
   202    "renewable": true
   203  }
   204  ```
   205  
Please note that the `access-tables` policy is listed under the
   207  `allowed_policies` key. We have not created this policy yet, but it will be used
   208  by our job to retrieve credentials to access the database. A job running in our
   209  Nomad cluster will only be allowed to use the `access-tables` policy.
   210  
   211  If you would like to allow all policies to be used by any job in the Nomad
   212  cluster except for the ones you specifically prohibit, then use the
   213  `disallowed_policies` key instead and simply list the policies that should not
   214  be granted. If you take this approach, be sure to include `nomad-server` in the
   215  disallowed policies group. An example of this is shown below:
   216  
   217  ```json
   218  {
   219    "disallowed_policies": "nomad-server",
   220    "token_explicit_max_ttl": 0,
   221    "name": "nomad-cluster",
   222    "orphan": true,
   223    "token_period": 259200,
   224    "renewable": true
   225  }
   226  ```
   227  
Save the token role configuration in a file named `nomad-cluster-role.json` and
create the token role named `nomad-cluster`.
   230  
   231  ```shell-session
   232  $ vault write /auth/token/roles/nomad-cluster @nomad-cluster-role.json
   233  ```
   234  
   235  You should see the following output:
   236  
   237  ```text
   238  Success! Data written to: auth/token/roles/nomad-cluster
   239  ```
   240  
   241  ### Step 6: Generate the Token for the Nomad Server
   242  
   243  Run the following command to create a token for your Nomad server:
   244  
   245  ```shell-session
   246  $ vault token create -policy nomad-server -period 72h -orphan
   247  ```
   248  
   249  The `-orphan` flag is included when generating the Nomad server token above to
   250  prevent revocation of the token when its parent expires. Vault typically creates
   251  tokens with a parent-child relationship. When an ancestor token is revoked, all
   252  of its descendant tokens and their associated leases are revoked as well.
   253  
   254  If everything works, you should see output similar to the following:
   255  
   256  ```text
   257  Key                  Value
   258  ---                  -----
   259  token                1gr0YoLyTBVZl5UqqvCfK9RJ
   260  token_accessor       5fz20DuDbxKgweJZt3cMynya
   261  token_duration       72h
   262  token_renewable      true
   263  token_policies       ["default" "nomad-server"]
   264  identity_policies    []
   265  policies             ["default" "nomad-server"]
   266  ```
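
Optionally, you can inspect the token to confirm it is an orphan token with a 72 hour period. Substitute the token value from your own output:

```shell-session
$ vault token lookup 1gr0YoLyTBVZl5UqqvCfK9RJ
```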
   267  
   268  ### Step 7: Edit the Nomad Server Configuration to Enable Vault Integration
   269  
   270  At this point, you are ready to edit the [vault stanza][vault-stanza] in the
   271  Nomad Server's configuration file located at `/etc/nomad.d/nomad.hcl`. Provide
   272  the token you generated in the previous step in the `vault` stanza of your Nomad
   273  server configuration. The token can also be provided as an environment variable
called `VAULT_TOKEN`. Be sure to specify the `nomad-cluster` token role in the
   275  [create_from_role][create-from-role] option. If using
   276  [Vault Namespaces](https://www.vaultproject.io/docs/enterprise/namespaces),
   277  modify both the client and server configuration to include the namespace;
   278  alternatively, it can be provided in the environment variable `VAULT_NAMESPACE`.
   279  After following these steps and enabling Vault, the `vault` stanza in your Nomad
   280  server configuration will be similar to what is shown below:
   281  
   282  ```hcl
   283  vault {
   284    enabled = true
   285    address = "http://active.vault.service.consul:8200"
   286    task_token_ttl = "1h"
   287    create_from_role = "nomad-cluster"
   288    token = "<your nomad server token>"
   289    namespace = "<vault namespace for the cluster>"
   290  }
   291  ```
   292  
Restart the Nomad server:
   294  
   295  ```shell-session
   296  $ sudo systemctl restart nomad
   297  ```
   298  
   299  NOTE: Nomad servers will renew the token automatically.
   300  
   301  Vault integration needs to be enabled on the client nodes as well, but this has
   302  been configured for you already in this environment. You will see the `vault`
   303  stanza in your Nomad clients' configuration (located at
   304  `/etc/nomad.d/nomad.hcl`) looks similar to the following:
   305  
   306  ```hcl
   307  vault {
   308    enabled = true
   309    address = "http://active.vault.service.consul:8200"
   310  }
   311  ```
   312  
   313  Please note that the Nomad clients do not need to be provided with a Vault
   314  token.
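
If you would like to confirm that a client can reach Vault, its fingerprinted attributes should include Vault information. One hedged way to check (the exact attribute names can vary by Nomad version) is:

```shell-session
$ nomad node status -verbose <your node ID> | grep vault
```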
   315  
   316  ### Step 8: Deploy Database
   317  
   318  The next few steps will involve configuring a connection between Vault and our
   319  database, so let's deploy one that we can connect to. Create a Nomad job called
   320  `db.nomad` with the following content:
   321  
   322  ```hcl
   323  job "postgres-nomad-demo" {
   324    datacenters = ["dc1"]
   325  
   326    group "db" {
   327  
   328      task "server" {
   329        driver = "docker"
   330  
   331        config {
   332          image = "hashicorp/postgres-nomad-demo:latest"
   333          port_map {
   334            db = 5432
   335          }
   336        }
      resources {
        network {
          port "db" {
            static = 5432
          }
        }
      }
   344  
   345        service {
   346          name = "database"
   347          port = "db"
   348  
   349          check {
   350            type     = "tcp"
   351            interval = "2s"
   352            timeout  = "2s"
   353          }
   354        }
   355      }
   356    }
   357  }
   358  ```
   359  
   360  Run the job as shown below:
   361  
   362  ```shell-session
   363  $ nomad run db.nomad
   364  ```
   365  
   366  Verify the job is running with the following command:
   367  
   368  ```shell-session
   369  $ nomad status postgres-nomad-demo
   370  ```
   371  
   372  The result of the status command will look similar to the output below:
   373  
   374  ```text
   375  ID            = postgres-nomad-demo
   376  Name          = postgres-nomad-demo
   377  Submit Date   = 2018-11-15T21:01:00Z
   378  Type          = service
   379  Priority      = 50
   380  Datacenters   = dc1
   381  Status        = running
   382  Periodic      = false
   383  Parameterized = false
   384  
   385  Summary
   386  Task Group  Queued  Starting  Running  Failed  Complete  Lost
   387  db          0       0         1        0       0         0
   388  
   389  Allocations
   390  ID        Node ID   Task Group  Version  Desired  Status   Created    Modified
   391  701e2699  5de1330c  db          0        run      running  1m56s ago  1m33s ago
   392  ```
   393  
   394  Now we can move on to configuring the connection between Vault and our database.
   395  
   396  ### Step 9: Enable the Database Secrets Engine
   397  
   398  We are using the database secrets engine for Vault in this exercise so that we
can generate dynamic credentials for our PostgreSQL database. Run the following
command to enable it:
   400  
   401  ```shell-session
   402  $ vault secrets enable database
   403  ```
   404  
   405  If the previous command was successful, you will see the following output:
   406  
   407  ```text
   408  Success! Enabled the database secrets engine at: database/
   409  ```
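
You can also list the enabled secrets engines to confirm that a `database/` mount now exists:

```shell-session
$ vault secrets list
```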
   410  
   411  ### Step 10: Configure the Database Secrets Engine
   412  
Create a file named `connection.json` and place the following information into
it:
   415  
   416  ```json
   417  {
   418    "plugin_name": "postgresql-database-plugin",
   419    "allowed_roles": "accessdb",
   420    "connection_url": "postgresql://{{username}}:{{password}}@database.service.consul:5432/postgres?sslmode=disable",
   421    "username": "postgres",
   422    "password": "postgres123"
   423  }
   424  ```
   425  
   426  The information above allows Vault to connect to our database and create users
   427  with specific privileges. We will specify the `accessdb` role soon. In a
   428  production setting, it is recommended to give Vault credentials with enough
privileges to generate database credentials dynamically and manage their
   430  lifecycle.
   431  
   432  Run the following command to configure the connection between the database
   433  secrets engine and our database:
   434  
   435  ```shell-session
   436  $ vault write database/config/postgresql @connection.json
   437  ```
   438  
   439  If the operation is successful, there will be no output.
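
You can optionally read the configuration back to confirm it was stored:

```shell-session
$ vault read database/config/postgresql
```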
   440  
   441  ### Step 11: Create a Vault Role to Manage Database Privileges
   442  
   443  Recall from the previous step that we specified `accessdb` in the
`allowed_roles` key of our connection information. Let's set up that role now.
Create a file called `accessdb.sql` with the following content:
   445  
```sql
   447  CREATE USER "{{name}}" WITH ENCRYPTED PASSWORD '{{password}}' VALID UNTIL
   448  '{{expiration}}';
   449  GRANT USAGE ON ALL SEQUENCES IN SCHEMA public TO "{{name}}";
   450  GRANT ALL PRIVILEGES ON ALL TABLES IN SCHEMA public TO "{{name}}";
   451  GRANT ALL ON SCHEMA public TO "{{name}}";
   452  ```
   453  
   454  The SQL above will be used in the [creation_statements][creation-statements]
   455  parameter of our next command to specify the privileges that the dynamic
   456  credentials being generated will possess. In our case, the dynamic database user
   457  will have broad privileges that include the ability to read from the tables that
   458  our application will need to access.
   459  
   460  Run the following command to create the role:
   461  
   462  ```shell-session
   463  $ vault write database/roles/accessdb db_name=postgresql \
   464  creation_statements=@accessdb.sql default_ttl=1h max_ttl=24h
   465  ```
   466  
   467  You should see the following output after running the previous command:
   468  
   469  ```text
   470  Success! Data written to: database/roles/accessdb
   471  ```
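
If you would like to review the role, including its TTLs and the creation statements it will use, you can read it back:

```shell-session
$ vault read database/roles/accessdb
```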
   472  
   473  ### Step 12: Generate PostgreSQL Credentials
   474  
   475  You should now be able to generate dynamic credentials to access your database.
   476  Run the following command to generate a set of credentials:
   477  
   478  ```shell-session
   479  $ vault read database/creds/accessdb
   480  ```
   481  
   482  The previous command should return output similar to what is shown below:
   483  
   484  ```text
   485  Key                Value
   486  ---                -----
   487  lease_id           database/creds/accessdb/3JozEMSMqw0vHHhvla15sKTW
   488  lease_duration     1h
   489  lease_renewable    true
   490  password           A1a-3pMGjpDXHZ2Qzuf7
   491  username           v-root-accessdb-5LA65urB4daA8KYy2xku-1542318363
   492  ```
   493  
   494  Congratulations! You have configured Vault's connection to your database and can
   495  now generate credentials with the previously specified privileges. Now we need
   496  to deploy our application and make sure that it will be able to communicate with
   497  Vault and obtain the credentials as well.
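
Because these credentials are tied to a Vault lease with a one hour default TTL, you can also manage their lifecycle directly if you ever need to. The commands below are optional illustrations; substitute the `lease_id` from your own output, and note that revoking the lease immediately invalidates that particular set of credentials:

```shell-session
$ vault lease renew database/creds/accessdb/3JozEMSMqw0vHHhvla15sKTW
```

```shell-session
$ vault lease revoke database/creds/accessdb/3JozEMSMqw0vHHhvla15sKTW
```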
   498  
   499  ### Step 13: Create the `access-tables` Policy for Your Nomad Job to Use
   500  
   501  Recall from [Step 5][step-5] that we specified a policy named `access-tables` in
   502  our `allowed_policies` section of the token role. We will create this policy now
   503  and give it the capability to read from the `database/creds/accessdb` endpoint
   504  (the same endpoint we read from in the previous step to generate credentials for
   505  our database). We will then specify this policy in our Nomad job which will
   506  allow it to retrieve credentials for itself to access the database.
   507  
   508  On the Nomad server (which is also running Vault), create a file named
   509  `access-tables-policy.hcl` with the following content:
   510  
   511  ```hcl
   512  path "database/creds/accessdb" {
   513    capabilities = ["read"]
   514  }
   515  ```
   516  
   517  Create the `access-tables` policy with the following command:
   518  
   519  ```shell-session
   520  $ vault policy write access-tables access-tables-policy.hcl
   521  ```
   522  
   523  You should see the following output:
   524  
   525  ```text
   526  Success! Uploaded policy: access-tables
   527  ```
   528  
   529  ### Step 14: Deploy Your Job with the Appropriate Policy and Templating
   530  
   531  Now we are ready to deploy our web application and give it the necessary policy
   532  and configuration to communicate with our database. Create a file called
   533  `web-app.nomad` and save the following content in it.
   534  
   535  ```hcl
   536  job "nomad-vault-demo" {
   537    datacenters = ["dc1"]
   538  
   539    group "demo" {
   540      task "server" {
   541  
   542        vault {
   543          policies = ["access-tables"]
   544        }
   545  
   546        driver = "docker"
   547        config {
   548          image = "hashicorp/nomad-vault-demo:latest"
   549          port_map {
   550            http = 8080
   551          }
   552  
   553          volumes = [
   554            "secrets/config.json:/etc/demo/config.json"
   555          ]
   556        }
   557  
   558        template {
   559          data = <<EOF
   560  {{ with secret "database/creds/accessdb" }}
   561    {
   562      "host": "database.service.consul",
   563      "port": 5432,
   564      "username": "{{ .Data.username }}",
   565      {{ /* Ensure password is a properly escaped JSON string. */ }}
   566      "password": {{ .Data.password | toJSON }},
   567      "db": "postgres"
   568    }
   569  {{ end }}
   570  EOF
   571          destination = "secrets/config.json"
   572        }
   573  
   574        resources {
   575          network {
   576            port "http" {}
   577          }
   578        }
   579  
   580        service {
   581          name = "nomad-vault-demo"
   582          port = "http"
   583  
   584          tags = [
   585            "urlprefix-/",
   586          ]
   587  
   588          check {
   589            type     = "tcp"
   590            interval = "2s"
   591            timeout  = "2s"
   592          }
   593        }
   594      }
   595    }
   596  }
   597  ```
   598  
   599  There are a few key points to note here:
   600  
   601  - We have specified the `access-tables` policy in the [vault][vault-jobspec]
   602    stanza of this job. The Nomad client will receive a token with this policy
   603    attached. Recall from the previous step that this policy will allow our
   604    application to read from the `database/creds/accessdb` endpoint in Vault and
   605    retrieve credentials.
   606  - We are using the [template][template] stanza's [vault
   607    integration][nomad-template-vault] to populate the JSON configuration file
   608    that our application needs. The underlying tool being used is [Consul
   609    Template][consul-template]. You can use Consul Template's documentation to
   610    learn more about the [syntax][consul-temp-syntax] needed to interact with
   611    Vault. Please note that although we have defined the template
   612    [inline][inline], we can use the template stanza [in conjunction with the
   613    artifact stanza][remote-template] to download an input template from a remote
   614    source such as an S3 bucket.
   615  - We are using the `toJSON` function to ensure the password is encoded as a JSON
   616    string. Any templated value which may contain special characters (like quotes
   617    or newlines) should be passed through the `toJSON` function.
- Finally, note that the [destination][destination] of our template is the
   619    [secrets/][secrets-task-directory] task directory. This ensures the data is
   620    not accessible with a command like [nomad alloc fs][nomad-alloc-fs] or
   621    filesystem APIs.
   622  
   623  Use the following command to run the job:
   624  
   625  ```shell-session
   626  $ nomad run web-app.nomad
   627  ```
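
Once the allocation is running, you can optionally confirm the template rendered correctly by inspecting the file from inside the task itself. This is a hedged example; obtain the allocation ID from `nomad status nomad-vault-demo`, and note that the path reflects the volume mount defined in the job above:

```shell-session
$ nomad alloc exec <your allocation ID> cat /etc/demo/config.json
```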
   628  
   629  ### Step 15: Confirm the Application is Accessing the Database
   630  
   631  At this point, you can visit your application at the path `/names` to confirm
   632  the appropriate data is being accessed from the database and displayed to you.
   633  There are several ways to do this.
   634  
   635  - Use the `dig` command to query the SRV record of your service and obtain the
  port it is using. Then `curl` your service at the appropriate port and the
  `/names` path.
   637  
   638  ```shell-session
   639  $ dig +short SRV nomad-vault-demo.service.consul
   640  1 1 30478 ip-172-31-58-230.node.dc1.consul.
   641  ```
   642  
   643  ```shell-session
   644  $ curl nomad-vault-demo.service.consul:30478/names
   645  <!DOCTYPE html>
   646  <html>
   647  <body>
   648  
   649  <h1> Welcome! </h1>
   650  <h2> If everything worked correctly, you should be able to see a list of names
   651  below </h2>
   652  
   653  <hr>
   654  
   655  
   656  <h4> John Doe </h4>
   657  
   658  <h4> Peter Parker </h4>
   659  
   660  <h4> Clifford Roosevelt </h4>
   661  
   662  <h4> Bruce Wayne </h4>
   663  
   664  <h4> Steven Clark </h4>
   665  
   666  <h4> Mary Jane </h4>
   667  
   668  
   669  </body>
   670  <html>
   671  ```
   672  
   673  - You can also deploy [fabio][fabio] and visit any Nomad client at its public IP
   674    address using a fixed port. The details of this method are beyond the scope of
   675    this guide, but you can refer to the [Load Balancing with Fabio][fabio-lb]
   676    guide for more information on this topic. Alternatively, you could use the
   677    `nomad` [alloc status][alloc-status] command along with the AWS console to
  determine the public IP and port your service is running on (remember to open
  the port in your AWS security group if you choose this method).
   680  
   681  [![Web Service][web-service]][web-service]
   682  
   683  [alloc-status]: /docs/commands/alloc/status
   684  [consul-template]: https://github.com/hashicorp/consul-template
   685  [consul-temp-syntax]: https://github.com/hashicorp/consul-template#secret
   686  [create-from-role]: /docs/configuration/vault#create_from_role
   687  [creation-statements]: https://www.vaultproject.io/api/secret/databases#creation_statements
   688  [destination]: /docs/job-specification/template#destination
   689  [fabio]: https://github.com/fabiolb/fabio
   690  [fabio-lb]: https://learn.hashicorp.com/nomad/load-balancing/fabio
   691  [inline]: /docs/job-specification/template#inline-template
   692  [login]: https://www.vaultproject.io/docs/commands/login
   693  [nomad-alloc-fs]: /docs/commands/alloc/fs
   694  [nomad-template-vault]: /docs/job-specification/template#vault-integration
   695  [policy]: https://www.vaultproject.io/docs/concepts/policies
   696  [postgresql]: https://www.postgresql.org/about/
   697  [remote-template]: /docs/job-specification/template#remote-template
   698  [repo]: https://github.com/hashicorp/nomad/tree/master/terraform
   699  [role]: https://www.vaultproject.io/docs/auth/token
   700  [seal]: https://www.vaultproject.io/docs/concepts/seal
   701  [secrets-task-directory]: /docs/runtime/environment#secrets
   702  [step-5]: /docs/integrations/vault-integration#step-5-create-a-token-role
   703  [template]: /docs/job-specification/template
   704  [token]: https://www.vaultproject.io/docs/concepts/tokens
   705  [vault]: https://www.vaultproject.io/
   706  [vault-integration]: /docs/vault-integration
   707  [vault-jobspec]: /docs/job-specification/vault
   708  [vault-stanza]: /docs/configuration/vault
   709  [web-service]: /img/nomad-demo-app.png