---
layout: "guides"
page_title: "Vault Integration and Retrieving Dynamic Secrets"
sidebar_current: "guides-integrations-vault"
description: |-
  Learn how to deploy an application in Nomad and retrieve dynamic credentials
  by integrating with Vault.
---

# Vault Integration

Nomad integrates seamlessly with [Vault][vault] and allows your application to
retrieve dynamic credentials for various tasks. In this guide, you will deploy a
web application that needs to authenticate against [PostgreSQL][postgresql] to
display data from a table to the user.

## Reference Material

- [Vault Integration Documentation][vault-integration]
- [Nomad Template Stanza Integration with Vault][nomad-template-vault]
- [Secrets Task Directory][secrets-task-directory]

## Estimated Time to Complete

20 minutes

## Challenge

Think of a scenario in which a Nomad operator needs to deploy an application
that can quickly and safely retrieve dynamic credentials to authenticate
against a database and return information.

## Solution

Deploy Vault and configure the nodes in your Nomad cluster to integrate with it.
Use the appropriate [templating syntax][nomad-template-vault] to retrieve
credentials from Vault and then store those credentials in the
[secrets][secrets-task-directory] task directory to be consumed by the Nomad
task.

## Prerequisites

To perform the tasks described in this guide, you need a Nomad environment with
Consul and Vault installed. You can use this [repo][repo] to easily provision a
sandbox environment. This guide assumes a cluster with one server node and
three client nodes.

-> **Please Note:** This guide is for demo purposes and uses only a single
Nomad server with Vault installed alongside it.
In a production cluster, 3 or 5
Nomad server nodes are recommended, along with a separate Vault cluster.

## Steps

### Step 1: Initialize Vault Server

Run the following command to initialize the Vault server and receive an
[unseal][seal] key and initial root [token][token]. Be sure to note the unseal
key and initial root token, as you will need these two pieces of information.

```shell
$ vault operator init -key-shares=1 -key-threshold=1
```

The `vault operator init` command above creates a single Vault unseal key for
convenience. For a production environment, it is recommended that you create at
least five unseal key shares and securely distribute them to independent
operators. The `vault operator init` command defaults to five key shares and a
key threshold of three. If you provisioned more than one server, the others will
become standby nodes but should still be unsealed.

### Step 2: Unseal Vault

Run the following command and then provide your unseal key to Vault.

```shell
$ vault operator unseal
```

The output of unsealing Vault will look similar to the following:

```shell
Key                    Value
---                    -----
Seal Type              shamir
Initialized            true
Sealed                 false
Total Shares           1
Threshold              1
Version                0.11.4
Cluster Name           vault-cluster-d12535e5
Cluster ID             49383931-c782-fdc6-443e-7681e7b15aca
HA Enabled             true
HA Cluster             n/a
HA Mode                standby
Active Node Address    <none>
```

### Step 3: Log in to Vault

Use the [login][login] command to authenticate yourself against Vault using the
initial root token you received earlier. You will need to authenticate to run
the necessary commands to write policies, create roles, and configure a
connection to your database.

```shell
$ vault login <your initial root token>
```

If your login is successful, you will see output similar to what is shown
below:

```shell
Success! You are now authenticated. The token information displayed below
is already stored in the token helper. You do NOT need to run "vault login"
again. Future Vault requests will automatically use this token.
...
```

### Step 4: Write the Policy for the Nomad Server Token

To use the Vault integration, you must provide a Vault token to your Nomad
servers. Although you can provide your root token to easily get started, the
recommended approach is to use a token [role][role] based token. This first
requires writing a policy that you will attach to the token you provide to your
Nomad servers. By using this approach, you can limit the set of
[policies][policy] that tasks managed by Nomad can access.

For this exercise, use the following policy for the token you will create for
your Nomad server. Place this policy in a file named `nomad-server-policy.hcl`.

```hcl
# Allow creating tokens under the "nomad-cluster" token role. The token role
# name should be updated if "nomad-cluster" is not used.
path "auth/token/create/nomad-cluster" {
  capabilities = ["update"]
}

# Allow looking up the "nomad-cluster" token role. The token role name should
# be updated if "nomad-cluster" is not used.
path "auth/token/roles/nomad-cluster" {
  capabilities = ["read"]
}

# Allow looking up the token passed to Nomad to validate the token has the
# proper capabilities. This is provided by the "default" policy.
path "auth/token/lookup-self" {
  capabilities = ["read"]
}

# Allow looking up incoming tokens to validate they have permissions to access
# the tokens they are requesting. This is only required if
# `allow_unauthenticated` is set to false.
path "auth/token/lookup" {
  capabilities = ["update"]
}

# Allow revoking tokens that should no longer exist. This allows revoking
# tokens for dead tasks.
path "auth/token/revoke-accessor" {
  capabilities = ["update"]
}

# Allow checking the capabilities of our own token. This is used to validate
# the token upon startup.
path "sys/capabilities-self" {
  capabilities = ["update"]
}

# Allow our own token to be renewed.
path "auth/token/renew-self" {
  capabilities = ["update"]
}
```

You can now write a policy called `nomad-server` by running the following
command:

```shell
$ vault policy write nomad-server nomad-server-policy.hcl
```

You should see the following output:

```shell
Success! Uploaded policy: nomad-server
```

You will generate the actual token in the next few steps.

### Step 5: Create a Token Role

At this point, you must create a Vault token role that Nomad can use. The token
role allows you to limit which Vault policies are accessible by jobs
submitted to Nomad. We will use the following token role:

```json
{
  "allowed_policies": "access-tables",
  "explicit_max_ttl": 0,
  "name": "nomad-cluster",
  "orphan": true,
  "period": 259200,
  "renewable": true
}
```

Please notice that the `access-tables` policy is listed under the
`allowed_policies` key. We have not created this policy yet, but it will be used
by our job to retrieve credentials to access the database. A job running in our
Nomad cluster will only be allowed to use the `access-tables` policy.

If you would like to allow all policies to be used by any job in the Nomad
cluster except for the ones you specifically prohibit, then use the
`disallowed_policies` key instead and simply list the policies that should not
be granted. If you take this approach, be sure to include `nomad-server` in the
disallowed policies group.
An example of this is shown below:

```json
{
  "disallowed_policies": "nomad-server",
  "explicit_max_ttl": 0,
  "name": "nomad-cluster",
  "orphan": true,
  "period": 259200,
  "renewable": true
}
```

Save the token role definition in a file named `nomad-cluster-role.json` and
create the token role named `nomad-cluster`.

```shell
$ vault write /auth/token/roles/nomad-cluster @nomad-cluster-role.json
```

You should see the following output:

```shell
Success! Data written to: auth/token/roles/nomad-cluster
```

### Step 6: Generate the Token for the Nomad Server

Run the following command to create a token for your Nomad server:

```shell
$ vault token create -policy nomad-server -period 72h -orphan
```

The `-orphan` flag is included when generating the Nomad server token above to
prevent revocation of the token when its parent expires. Vault typically creates
tokens with a parent-child relationship. When an ancestor token is revoked, all
of its descendant tokens and their associated leases are revoked as well.

If everything works, you should see output similar to the following:

```shell
Key                  Value
---                  -----
token                1gr0YoLyTBVZl5UqqvCfK9RJ
token_accessor       5fz20DuDbxKgweJZt3cMynya
token_duration       72h
token_renewable      true
token_policies       ["default" "nomad-server"]
identity_policies    []
policies             ["default" "nomad-server"]
```

### Step 7: Edit the Nomad Server Configuration to Enable Vault Integration

At this point, you are ready to edit the [vault stanza][vault-stanza] in the
Nomad server's configuration file located at `/etc/nomad.d/nomad.hcl`. Provide
the token you generated in the previous step in the `vault` stanza of your Nomad
server configuration. The token can also be provided as an environment variable
called `VAULT_TOKEN`.
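If you use the environment variable approach instead, note that the variable
must be set in the Nomad service's own environment; exporting it in an
interactive shell will not reach a systemd-managed process. One hedged sketch
using a systemd drop-in (the unit name and token value are illustrative):

```shell
# Open an editor to create a drop-in override for the nomad unit:
$ sudo systemctl edit nomad
# Add these lines in the editor, then save and exit:
#   [Service]
#   Environment=VAULT_TOKEN=<your nomad server token>
# Reload unit files and restart the server:
$ sudo systemctl daemon-reload
$ sudo systemctl restart nomad
```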
Be sure to specify the `nomad-cluster` token role in the
[create_from_role][create-from-role] option. If using
[Vault Namespaces](https://www.vaultproject.io/docs/enterprise/namespaces/index.html),
modify both the client and server configuration to include the namespace;
alternatively, it can be provided in the environment variable `VAULT_NAMESPACE`.
After following these steps and enabling Vault, the `vault` stanza in your Nomad
server configuration will be similar to what is shown below:

```hcl
vault {
  enabled          = true
  address          = "http://active.vault.service.consul:8200"
  task_token_ttl   = "1h"
  create_from_role = "nomad-cluster"
  token            = "<your nomad server token>"
  namespace        = "<vault namespace for the cluster>"
}
```

Restart the Nomad server:

```shell
$ sudo systemctl restart nomad
```

NOTE: Nomad servers will renew the token automatically.

Vault integration needs to be enabled on the client nodes as well, but this has
already been configured for you in this environment. You will see that the
`vault` stanza in your Nomad clients' configuration (located at
`/etc/nomad.d/nomad.hcl`) looks similar to the following:

```hcl
vault {
  enabled = true
  address = "http://active.vault.service.consul:8200"
}
```

Please note that the Nomad clients do not need to be provided with a Vault
token.

### Step 8: Deploy Database

The next few steps will involve configuring a connection between Vault and our
database, so let's deploy one that we can connect to.
Create a Nomad job called
`db.nomad` with the following content:

```hcl
job "postgres-nomad-demo" {
  datacenters = ["dc1"]

  group "db" {

    task "server" {
      driver = "docker"

      config {
        image = "hashicorp/postgres-nomad-demo:latest"

        port_map {
          db = 5432
        }
      }

      resources {
        network {
          port "db" {
            static = 5432
          }
        }
      }

      service {
        name = "database"
        port = "db"

        check {
          type     = "tcp"
          interval = "2s"
          timeout  = "2s"
        }
      }
    }
  }
}
```

Run the job as shown below:

```shell
$ nomad run db.nomad
```

Verify the job is running with the following command:

```shell
$ nomad status postgres-nomad-demo
```

The result of the status command will look similar to the output below:

```shell
ID            = postgres-nomad-demo
Name          = postgres-nomad-demo
Submit Date   = 2018-11-15T21:01:00Z
Type          = service
Priority      = 50
Datacenters   = dc1
Status        = running
Periodic      = false
Parameterized = false

Summary
Task Group  Queued  Starting  Running  Failed  Complete  Lost
db          0       0         1        0       0         0

Allocations
ID        Node ID   Task Group  Version  Desired  Status   Created    Modified
701e2699  5de1330c  db          0        run      running  1m56s ago  1m33s ago
```

Now we can move on to configuring the connection between Vault and our database.

### Step 9: Enable the Database Secrets Engine

We are using the database secrets engine for Vault in this exercise so that we
can generate dynamic credentials for our PostgreSQL database. Run the following
command to enable it:

```shell
$ vault secrets enable database
```

If the previous command was successful, you will see the following output:

```shell
Success! Enabled the database secrets engine at: database/
```

### Step 10: Configure the Database Secrets Engine

Create a file named `connection.json` and place the following information into
it:

```json
{
  "plugin_name": "postgresql-database-plugin",
  "allowed_roles": "accessdb",
  "connection_url": "postgresql://{{username}}:{{password}}@database.service.consul:5432/postgres?sslmode=disable",
  "username": "postgres",
  "password": "postgres123"
}
```

The information above allows Vault to connect to our database and create users
with specific privileges. We will specify the `accessdb` role soon. In a
production setting, it is recommended to give Vault credentials with enough
privileges to generate database credentials dynamically and manage their
lifecycle.

Run the following command to configure the connection between the database
secrets engine and our database:

```shell
$ vault write database/config/postgresql @connection.json
```

If the operation is successful, there will be no output.

### Step 11: Create a Vault Role to Manage Database Privileges

Recall from the previous step that we specified `accessdb` in the
`allowed_roles` key of our connection information. Let's set up that role now.
Create a file called `accessdb.sql` with the following content:

```sql
CREATE USER "{{name}}" WITH ENCRYPTED PASSWORD '{{password}}' VALID UNTIL
'{{expiration}}';
GRANT USAGE ON ALL SEQUENCES IN SCHEMA public TO "{{name}}";
GRANT ALL PRIVILEGES ON ALL TABLES IN SCHEMA public TO "{{name}}";
GRANT ALL ON SCHEMA public TO "{{name}}";
```

The SQL above will be used in the [creation_statements][creation-statements]
parameter of our next command to specify the privileges that the dynamic
credentials being generated will possess.
In our case, the dynamic database user
will have broad privileges that include the ability to read from the tables
that our application will need to access.

Run the following command to create the role:

```shell
$ vault write database/roles/accessdb db_name=postgresql \
    creation_statements=@accessdb.sql default_ttl=1h max_ttl=24h
```

You should see the following output after running the previous command:

```shell
Success! Data written to: database/roles/accessdb
```

### Step 12: Generate PostgreSQL Credentials

You should now be able to generate dynamic credentials to access your database.
Run the following command to generate a set of credentials:

```shell
$ vault read database/creds/accessdb
```

The previous command should return output similar to what is shown below:

```shell
Key                Value
---                -----
lease_id           database/creds/accessdb/3JozEMSMqw0vHHhvla15sKTW
lease_duration     1h
lease_renewable    true
password           A1a-3pMGjpDXHZ2Qzuf7
username           v-root-accessdb-5LA65urB4daA8KYy2xku-1542318363
```

Congratulations! You have configured Vault's connection to your database and can
now generate credentials with the previously specified privileges. Now we need
to deploy our application and make sure that it will be able to communicate with
Vault and obtain the credentials as well.

### Step 13: Create the `access-tables` Policy for Your Nomad Job to Use

Recall from [Step 5][step-5] that we specified a policy named `access-tables` in
the `allowed_policies` section of the token role. We will create this policy now
and give it the capability to read from the `database/creds/accessdb` endpoint
(the same endpoint we read from in the previous step to generate credentials for
our database).
We will then specify this policy in our Nomad job, which will
allow it to retrieve credentials for itself to access the database.

On the Nomad server (which is also running Vault), create a file named
`access-tables-policy.hcl` with the following content:

```hcl
path "database/creds/accessdb" {
  capabilities = ["read"]
}
```

Create the `access-tables` policy with the following command:

```shell
$ vault policy write access-tables access-tables-policy.hcl
```

You should see the following output:

```shell
Success! Uploaded policy: access-tables
```

### Step 14: Deploy Your Job with the Appropriate Policy and Templating

Now we are ready to deploy our web application and give it the necessary policy
and configuration to communicate with our database. Create a file called
`web-app.nomad` and save the following content in it.

```hcl
job "nomad-vault-demo" {
  datacenters = ["dc1"]

  group "demo" {
    task "server" {

      vault {
        policies = ["access-tables"]
      }

      driver = "docker"

      config {
        image = "hashicorp/nomad-vault-demo:latest"

        port_map {
          http = 8080
        }

        volumes = [
          "secrets/config.json:/etc/demo/config.json",
        ]
      }

      template {
        data = <<EOF
{{ with secret "database/creds/accessdb" }}
{
  "host": "database.service.consul",
  "port": 5432,
  "username": "{{ .Data.username }}",
  {{ /* Ensure password is a properly escaped JSON string. */ }}
  "password": {{ .Data.password | toJSON }},
  "db": "postgres"
}
{{ end }}
EOF

        destination = "secrets/config.json"
      }

      resources {
        network {
          port "http" {}
        }
      }

      service {
        name = "nomad-vault-demo"
        port = "http"

        tags = [
          "urlprefix-/",
        ]

        check {
          type     = "tcp"
          interval = "2s"
          timeout  = "2s"
        }
      }
    }
  }
}
```

There are a few key points to note here:

- We have specified the `access-tables` policy in the [vault][vault-jobspec]
  stanza of this job. The Nomad client will receive a token with this policy
  attached. Recall from the previous step that this policy will allow our
  application to read from the `database/creds/accessdb` endpoint in Vault and
  retrieve credentials.
- We are using the [template][template] stanza's [vault
  integration][nomad-template-vault] to populate the JSON configuration file
  that our application needs. The underlying tool being used is [Consul
  Template][consul-template]. You can use Consul Template's documentation to
  learn more about the [syntax][consul-temp-syntax] needed to interact with
  Vault. Please note that although we have defined the template
  [inline][inline], we can use the template stanza [in conjunction with the
  artifact stanza][remote-template] to download an input template from a remote
  source such as an S3 bucket.
- We are using the `toJSON` function to ensure the password is encoded as a
  JSON string. Any templated value that may contain special characters (such as
  quotes or newlines) should be passed through the `toJSON` function.
- Finally, note that the [destination][destination] of our template is the
  [secrets/][secrets-task-directory] task directory. This ensures the data is
  not accessible with a command like [nomad alloc fs][nomad-alloc-fs] or
  filesystem APIs.
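The `toJSON` point above can be illustrated outside of Nomad entirely. The
snippet below has nothing to do with Vault itself; it uses `python3` for JSON
encoding, and the password value is made up:

```shell
# A generated password that happens to contain a double quote:
password='A1a-3p"MGjpDX'

# Naive interpolation produces invalid JSON (the quote ends the string early):
printf '{"password": "%s"}\n' "$password"

# Encoding the value as a JSON string first (what toJSON does) keeps it valid:
escaped=$(python3 -c 'import json, sys; print(json.dumps(sys.argv[1]))' "$password")
printf '{"password": %s}\n' "$escaped"
```

The same reasoning applies to any templated value interpolated into a
structured file: escape it for the target format rather than pasting it in raw.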
Use the following command to run the job:

```shell
$ nomad run web-app.nomad
```

### Step 15: Confirm the Application is Accessing the Database

At this point, you can visit your application at the path `/names` to confirm
the appropriate data is being accessed from the database and displayed to you.
There are several ways to do this.

- Use the `dig` command to query the SRV record of your service and obtain the
  port it is using. Then `curl` your service at the appropriate port and the
  `/names` path.

```shell
$ dig +short SRV nomad-vault-demo.service.consul
1 1 30478 ip-172-31-58-230.node.dc1.consul.
```

```shell
$ curl nomad-vault-demo.service.consul:30478/names
<!DOCTYPE html>
<html>
<body>

<h1> Welcome! </h1>
<h2> If everything worked correctly, you should be able to see a list of names
below </h2>

<hr>


<h4> John Doe </h4>

<h4> Peter Parker </h4>

<h4> Clifford Roosevelt </h4>

<h4> Bruce Wayne </h4>

<h4> Steven Clark </h4>

<h4> Mary Jane </h4>


</body>
</html>
```

- You can also deploy [fabio][fabio] and visit any Nomad client at its public
  IP address using a fixed port. The details of this method are beyond the
  scope of this guide, but you can refer to the [Load Balancing with
  Fabio][fabio-lb] guide for more information on this topic. Alternatively, you
  could use the `nomad` [alloc status][alloc-status] command along with the AWS
  console to determine the public IP and port on which your service is running
  (remember to open the port in your AWS security group if you choose this
  method).
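- Another option is to ask Consul's HTTP API directly for the service's
  registered address and port. The command below is a sketch; it assumes the
  Consul API is reachable on its default port 8500 from the node you run it on,
  and `jq` is used only to trim the output:

```shell
$ curl -s http://localhost:8500/v1/catalog/service/nomad-vault-demo | \
    jq '.[0] | {ServiceAddress, ServicePort}'
```

  You can then `curl` the reported address and port at the `/names` path as in
  the first option.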
[![Web Service][web-service]][web-service]

[alloc-status]: /docs/commands/alloc/status.html
[consul-template]: https://github.com/hashicorp/consul-template
[consul-temp-syntax]: https://github.com/hashicorp/consul-template#secret
[create-from-role]: /docs/configuration/vault.html#create_from_role
[creation-statements]: https://www.vaultproject.io/api/secret/databases/index.html#creation_statements
[destination]: /docs/job-specification/template.html#destination
[fabio]: https://github.com/fabiolb/fabio
[fabio-job]: /guides/load-balancing/fabio.html#step-1-create-a-job-for-fabio
[fabio-lb]: /guides/load-balancing/fabio.html
[inline]: /docs/job-specification/template.html#inline-template
[login]: https://www.vaultproject.io/docs/commands/login.html
[nomad-alloc-fs]: /docs/commands/alloc/fs.html
[nomad-template-vault]: /docs/job-specification/template.html#vault-integration
[policy]: https://www.vaultproject.io/docs/concepts/policies.html
[postgresql]: https://www.postgresql.org/about/
[remote-template]: /docs/job-specification/template.html#remote-template
[repo]: https://github.com/hashicorp/nomad/tree/master/terraform
[role]: https://www.vaultproject.io/docs/auth/token.html
[seal]: https://www.vaultproject.io/docs/concepts/seal.html
[secrets-task-directory]: /docs/runtime/environment.html#secrets-
[step-5]: /guides/integrations/vault-integration/index.html#step-5-create-a-token-role
[template]: /docs/job-specification/template.html
[token]: https://www.vaultproject.io/docs/concepts/tokens.html
[vault]: https://www.vaultproject.io/
[vault-integration]: /docs/vault-integration/index.html
[vault-jobspec]: /docs/job-specification/vault.html
[vault-stanza]: /docs/configuration/vault.html
[web-service]: /assets/images/nomad-demo-app.png