# Provision a Nomad cluster on AWS with Packer & Terraform

Use this to easily provision a Nomad sandbox environment on AWS with
[Packer](https://packer.io) and [Terraform](https://terraform.io).
[Consul](https://www.consul.io/intro/index.html) and
[Vault](https://www.vaultproject.io/intro/index.html) are also installed
(colocated for convenience). The intention is to allow easy exploration of
Nomad and its integrations with the HashiCorp stack. This is *not* meant to be
a production-ready environment. A demonstration of [Nomad's Apache Spark
integration](examples/spark/README.md) is included.

## Setup

Clone this repo and (optionally) use [Vagrant](https://www.vagrantup.com/intro/index.html)
to bootstrap a local staging environment:

```bash
$ git clone git@github.com:hashicorp/nomad.git
$ cd nomad/terraform/aws
$ vagrant up && vagrant ssh
```

The Vagrant staging environment pre-installs Packer, Terraform, and Docker.

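Once inside the Vagrant box, you can sanity-check that the pre-installed tools
are on the `PATH` (reported versions will vary with the box image):

```bash
# Confirm the toolchain installed by the Vagrant provisioner is available
$ packer version
$ terraform version
$ docker --version
```
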
### Prerequisites

You will need the following:

- AWS account
- [API access keys](http://aws.amazon.com/developers/access-keys/)
- [SSH key pair](http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ec2-key-pairs.html)

Set environment variables for your AWS credentials:

```bash
$ export AWS_ACCESS_KEY_ID=[ACCESS_KEY_ID]
$ export AWS_SECRET_ACCESS_KEY=[SECRET_ACCESS_KEY]
```

## Provision a cluster

`cd` to an environment subdirectory:

```bash
$ cd env/us-east
```

Update `terraform.tfvars` with your SSH key name:

```hcl
region                  = "us-east-1"
ami                     = "ami-577d212c"
instance_type           = "t2.medium"
key_name                = "KEY_NAME"
server_count            = "3"
client_count            = "4"
```

Note that a pre-provisioned, publicly available AMI is used by default
(for the `us-east-1` region). To provision your own customized AMI with
[Packer](https://www.packer.io/intro/index.html), follow the instructions
[here](aws/packer/README.md). You will need to replace the AMI ID in
`terraform.tfvars` with your own. You can also modify the `region`,
`instance_type`, `server_count`, and `client_count`. At least one client and
one server are required.

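If you do build your own image, the Packer workflow is roughly the following
(a sketch only; the template name `packer.json` is an assumption here, so
defer to the linked Packer instructions for the exact steps):

```bash
# Validate the template, then bake the AMI (template name is an assumption)
$ packer validate packer.json
$ packer build packer.json
```
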
Provision the cluster:

```bash
$ terraform get
$ terraform plan
$ terraform apply
```

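Terraform prints any configured outputs (such as instance addresses) once
`apply` completes; you can re-display them at any point:

```bash
# Show the root module's outputs again without re-applying
$ terraform output
```
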
## Access the cluster

SSH to one of the servers using its public IP:

```bash
$ ssh -i /path/to/key ubuntu@PUBLIC_IP
```

Note that the AWS security group is configured by default to allow inbound
traffic on port 22 from any source. This is *not* recommended for production
deployments.

Run a few basic commands to verify that Consul and Nomad are up and running
properly:

```bash
$ consul members
$ nomad server-members
$ nomad node-status
```

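As a quick smoke test, you can also launch the example job that ships with
Nomad (`nomad init` writes a sample `example.nomad` job file to the working
directory):

```bash
# Generate the sample job specification and submit it
$ nomad init
$ nomad run example.nomad

# Check on the resulting allocations
$ nomad status example
```
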
Optionally, initialize and unseal Vault:

```bash
$ vault init -key-shares=1 -key-threshold=1
$ vault unseal
$ export VAULT_TOKEN=[INITIAL_ROOT_TOKEN]
```

The `vault init` command above creates a single
[Vault unseal key](https://www.vaultproject.io/docs/concepts/seal.html) for
convenience. For a production environment, it is recommended that you create at
least five unseal key shares and securely distribute them to independent
operators. The `vault init` command defaults to five key shares and a key
threshold of three. If you provisioned more than one server, the others will
become standby nodes (but should still be unsealed). You can query the active
and standby nodes independently:

```bash
$ dig active.vault.service.consul
$ dig active.vault.service.consul SRV
$ dig standby.vault.service.consul
```

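You can also check each server's seal and HA status directly on the node
itself:

```bash
# Reports seal status and, in HA deployments, standby vs. active mode
$ vault status
```
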
## Getting started with Nomad & the HashiCorp stack

See:

* [Getting Started with Nomad](https://www.nomadproject.io/intro/getting-started/jobs.html)
* [Consul integration](https://www.nomadproject.io/docs/service-discovery/index.html)
* [Vault integration](https://www.nomadproject.io/docs/vault-integration/index.html)
* [consul-template integration](https://www.nomadproject.io/docs/job-specification/template.html)

## Apache Spark integration

Nomad is well-suited for analytical workloads, given its performance
characteristics and first-class support for batch scheduling. Apache Spark is a
popular data processing engine/framework that has been architected to use
third-party schedulers. The Nomad ecosystem includes a [fork that natively
integrates Nomad with Spark](https://github.com/hashicorp/nomad-spark). A
detailed walkthrough of the integration is included [here](examples/spark/README.md).
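
For a flavor of what the integration looks like, job submission follows the
standard `spark-submit` pattern with a Nomad master. The sketch below is
illustrative only: the master URL scheme, the `spark.nomad.sparkDistribution`
property, and the artifact URLs are assumptions here, so treat the linked
walkthrough as authoritative.

```bash
# Illustrative only: submit SparkPi to a Nomad-backed Spark (nomad-spark fork).
# Flags, properties, and URLs below are assumptions; see
# examples/spark/README.md for exact usage.
$ spark-submit \
    --class org.apache.spark.examples.SparkPi \
    --master nomad \
    --conf spark.nomad.sparkDistribution=https://example.com/spark-dist.tgz \
    https://example.com/spark-examples.jar 100
```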