
# Consul Cluster

This folder contains a [Terraform](https://www.terraform.io/) module to deploy a
[Consul](https://www.consul.io/) cluster in [AWS](https://aws.amazon.com/) on top of an Auto Scaling Group. This module
is designed to deploy an [Amazon Machine Image (AMI)](http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/AMIs.html)
that has Consul installed via the [install-consul](https://github.com/hashicorp/terraform-aws-consul/tree/master/modules/install-consul) module in this repo.


## How do you use this module?

This folder defines a [Terraform module](https://www.terraform.io/docs/modules/usage.html), which you can use in your
code by adding a `module` configuration and setting its `source` parameter to the URL of this folder:

```hcl
module "consul_cluster" {
  # TODO: update this to the final URL
  # Use version v0.0.1 of the consul-cluster module
  source = "github.com/hashicorp/terraform-aws-consul//modules/consul-cluster?ref=v0.0.1"

  # Specify the ID of the Consul AMI. You should build this using the scripts in the install-consul module.
  ami_id = "ami-abcd1234"

  # Add this tag to each node in the cluster
  cluster_tag_key   = "consul-cluster"
  cluster_tag_value = "consul-cluster-example"

  # Configure and start Consul during boot. It will automatically form a cluster with all nodes that have the same tag.
  user_data = <<-EOF
              #!/bin/bash
              /opt/consul/bin/run-consul --server --cluster-tag-key consul-cluster
              EOF

  # ... See vars.tf for the other parameters you must define for the consul-cluster module
}
```

Note the following parameters:

* `source`: Use this parameter to specify the URL of the consul-cluster module. The double slash (`//`) is intentional
  and required. Terraform uses it to specify subfolders within a Git repo (see [module
  sources](https://www.terraform.io/docs/modules/sources.html)). The `ref` parameter specifies a specific Git tag in
  this repo. That way, instead of using the latest version of this module from the `master` branch, which
  will change every time you run Terraform, you're using a fixed version of the repo.

* `ami_id`: Use this parameter to specify the ID of a Consul [Amazon Machine Image
  (AMI)](http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/AMIs.html) to deploy on each server in the cluster. You
  should install Consul in this AMI using the scripts in the [install-consul](https://github.com/hashicorp/terraform-aws-consul/tree/master/modules/install-consul) module.

* `user_data`: Use this parameter to specify a [User
  Data](http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/user-data.html#user-data-shell-scripts) script that each
  server will run during boot. This is where you can use the [run-consul script](https://github.com/hashicorp/terraform-aws-consul/tree/master/modules/run-consul) to configure and
  run Consul. The `run-consul` script is one of the scripts installed by the [install-consul](https://github.com/hashicorp/terraform-aws-consul/tree/master/modules/install-consul)
  module.

You can find the other parameters in [vars.tf](vars.tf).
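
With the module block in place, you deploy it using the standard Terraform workflow; `terraform init` downloads the module code from the `source` URL:

```
> terraform init
> terraform plan
> terraform apply
```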

Check out the [consul-cluster example](https://github.com/hashicorp/terraform-aws-consul/tree/master/MAIN.md) for fully-working sample code.



## How do you connect to the Consul cluster?

### Using the HTTP API from your own computer

If you want to connect to the cluster from your own computer, the easiest way is to use the [HTTP
API](https://www.consul.io/docs/agent/http.html). Note that this only works if the Consul cluster is running in public
subnets and/or your default VPC (as in the [consul-cluster example](https://github.com/hashicorp/terraform-aws-consul/tree/master/MAIN.md)), which is OK for testing
and experimentation, but NOT recommended for production usage.

To use the HTTP API, you first need to get the public IP address of one of the Consul servers. You can find Consul
servers by using AWS tags. If you're running the [consul-cluster example](https://github.com/hashicorp/terraform-aws-consul/tree/master/MAIN.md), the
[consul-examples-helper.sh script](https://github.com/hashicorp/terraform-aws-consul/tree/master/examples/consul-examples-helper/consul-examples-helper.sh) will do the tag lookup
for you automatically (note: you must have the [AWS CLI](https://aws.amazon.com/cli/),
[jq](https://stedolan.github.io/jq/), and the [Consul agent](https://www.consul.io/) installed locally):

```
> ../consul-examples-helper/consul-examples-helper.sh

Your Consul servers are running at the following IP addresses:

34.200.218.123
34.205.127.138
34.201.165.11
```

You can use one of these IP addresses with the `members` command to see a list of cluster nodes:

```
> consul members -http-addr=11.22.33.44:8500

Node                 Address             Status  Type    Build  Protocol  DC
i-0051c3ea00e9691a0  172.31.35.148:8301  alive   client  0.8.0  2         us-east-1
i-00aea529cce1761d4  172.31.47.236:8301  alive   client  0.8.0  2         us-east-1
i-01bc94ccfa032d82d  172.31.27.193:8301  alive   client  0.8.0  2         us-east-1
i-04271e97808f15d63  172.31.25.174:8301  alive   server  0.8.0  2         us-east-1
i-0483b07abe49ea7ff  172.31.5.42:8301    alive   client  0.8.0  2         us-east-1
i-098fb1ebd5ca443bf  172.31.55.203:8301  alive   client  0.8.0  2         us-east-1
i-0eb961b6825f7871c  172.31.65.9:8301    alive   client  0.8.0  2         us-east-1
i-0ee6dcf715adbff5f  172.31.67.235:8301  alive   server  0.8.0  2         us-east-1
i-0fd0e63682a94b245  172.31.54.84:8301   alive   server  0.8.0  2         us-east-1
```

You can also try inserting a value:

```
> consul kv put -http-addr=11.22.33.44:8500 foo bar

Success! Data written to: foo
```

And reading that value back:

```
> consul kv get -http-addr=11.22.33.44:8500 foo

bar
```
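
The same key/value data is also reachable over the raw HTTP API, which is handy if you don't have the Consul binary installed locally. For example, with `curl` (the `?raw` query parameter tells Consul to return the bare value instead of the default JSON response):

```
> curl http://11.22.33.44:8500/v1/kv/foo?raw

bar
```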

Finally, you can try opening up the Consul UI in your browser at the URL `http://11.22.33.44:8500/ui/`.

![Consul UI](https://github.com/hashicorp/terraform-aws-consul/blob/master/_docs/consul-ui-screenshot.png?raw=true)

### Using the Consul agent on another EC2 Instance

The easiest way to run the [Consul agent](https://www.consul.io/docs/agent/basics.html) and have it connect to the Consul
cluster is to use the same EC2 tags the Consul servers use to discover each other during bootstrapping.

For example, imagine you deployed a Consul cluster in `us-east-1` as follows:

<!-- TODO: update this to the final URL -->

```hcl
module "consul_cluster" {
  source = "github.com/hashicorp/terraform-aws-consul//modules/consul-cluster?ref=v0.0.1"

  # Add this tag to each node in the cluster
  cluster_tag_key   = "consul-cluster"
  cluster_tag_value = "consul-cluster-example"

  # ... Other params omitted ...
}
```

Using the `retry-join-ec2-xxx` params, you can run a Consul agent on an EC2 Instance that connects to the cluster as follows:

```
consul agent -retry-join-ec2-tag-key=consul-cluster -retry-join-ec2-tag-value=consul-cluster-example -data-dir=/tmp/consul
```

Two important notes about this command:

1. By default, the Consul cluster nodes advertise their *private* IP addresses, so the command above only works from
   EC2 Instances inside the same VPC (or any VPC with proper peering connections and route table entries).
1. In order to look up the EC2 tags, the EC2 Instance where you're running this command must have an IAM role with
   the `ec2:DescribeInstances` permission.
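
A minimal sketch of granting that permission in Terraform, assuming you manage the client Instance's IAM role yourself (the resource and role names here are hypothetical, not part of this module):

```hcl
# Hypothetical example: allow a client Instance to look up EC2 tags for retry-join.
# The role "consul_client" is assumed to be defined elsewhere in your own code.
resource "aws_iam_role_policy" "describe_instances" {
  name = "describe-instances-for-retry-join"
  role = "${aws_iam_role.consul_client.id}"

  policy = <<-EOF
  {
    "Version": "2012-10-17",
    "Statement": [{
      "Effect": "Allow",
      "Action": "ec2:DescribeInstances",
      "Resource": "*"
    }]
  }
  EOF
}
```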



## What's included in this module?

This module creates the following architecture:

![Consul architecture](https://github.com/hashicorp/terraform-aws-consul/blob/master/_docs/architecture.png?raw=true)

This architecture consists of the following resources:

* [Auto Scaling Group](#auto-scaling-group)
* [EC2 Instance Tags](#ec2-instance-tags)
* [Security Group](#security-group)
* [IAM Role and Permissions](#iam-role-and-permissions)


### Auto Scaling Group

This module runs Consul on top of an [Auto Scaling Group (ASG)](https://aws.amazon.com/autoscaling/). Typically, you
should run the ASG with 3 or 5 EC2 Instances spread across multiple [Availability
Zones](http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/using-regions-availability-zones.html). Each of the EC2
Instances should be running an AMI that has Consul installed via the [install-consul](https://github.com/hashicorp/terraform-aws-consul/tree/master/modules/install-consul)
module. You pass in the ID of the AMI to run using the `ami_id` input parameter.


### EC2 Instance Tags

This module allows you to specify a tag to add to each EC2 Instance in the ASG. We recommend using this tag with the
[retry_join_ec2](https://www.consul.io/docs/agent/options.html?#retry_join_ec2) configuration to allow the EC2
Instances to find each other and automatically form a cluster.
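
In Consul's JSON configuration file, the equivalent of the `-retry-join-ec2-xxx` command-line flags looks roughly like this (a sketch for the Consul 0.8-era config format this module targets; the tag values match the example above):

```json
{
  "retry_join_ec2": {
    "tag_key": "consul-cluster",
    "tag_value": "consul-cluster-example"
  }
}
```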


### Security Group

Each EC2 Instance in the ASG has a Security Group that allows:

* All outbound requests
* All the inbound ports specified in the [Consul documentation](https://www.consul.io/docs/agent/options.html?#ports-used)

The Security Group ID is exported as an output variable if you need to add additional rules.
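
For example, here is a sketch of opening one extra port using that output. The output name `security_group_id` and the CIDR block are assumptions for illustration; check the module's outputs file for the actual name:

```hcl
# Hypothetical example: allow inbound HTTPS from an office network.
# The output name "security_group_id" is assumed; check the module's outputs.
resource "aws_security_group_rule" "extra_rule" {
  type              = "ingress"
  from_port         = 443
  to_port           = 443
  protocol          = "tcp"
  cidr_blocks       = ["203.0.113.0/24"]
  security_group_id = "${module.consul_cluster.security_group_id}"
}
```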

Check out the [Security section](#security) for more details.


### IAM Role and Permissions

Each EC2 Instance in the ASG has an [IAM Role](http://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles.html) attached.
We give this IAM Role a small set of IAM permissions that each EC2 Instance can use to automatically discover the other
Instances in its ASG and form a cluster with them. See the [run-consul required permissions
docs](https://github.com/hashicorp/terraform-aws-consul/tree/master/modules/run-consul#required-permissions) for details.

The IAM Role ARN is exported as an output variable if you need to add additional permissions.
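
For example, here is a sketch of attaching an extra managed policy to the cluster's role. The output name `iam_role_id` is an assumption for illustration; check the module's outputs file for the actual name:

```hcl
# Hypothetical example: attach an AWS-managed policy to the cluster's IAM Role.
# The output name "iam_role_id" is assumed; check the module's outputs.
resource "aws_iam_role_policy_attachment" "s3_read" {
  role       = "${module.consul_cluster.iam_role_id}"
  policy_arn = "arn:aws:iam::aws:policy/AmazonS3ReadOnlyAccess"
}
```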



## How do you roll out updates?

If you want to deploy a new version of Consul across the cluster, the best way to do that is to:

1. Build a new AMI.
1. Set the `ami_id` parameter to the ID of the new AMI.
1. Run `terraform apply`.

This updates the Launch Configuration of the ASG, so any new Instances in the ASG will have your new AMI, but it does
NOT actually deploy those new Instances. To make that happen, you should do the following:

1. Issue an API call to one of the old Instances in the ASG to have it leave gracefully. E.g.:

    ```
    curl -X PUT <OLD_INSTANCE_IP>:8500/v1/agent/leave
    ```

1. Once the Instance has left the cluster, terminate it:

    ```
    aws ec2 terminate-instances --instance-ids <OLD_INSTANCE_ID>
    ```

1. After a minute or two, the ASG should automatically launch a new Instance, with the new AMI, to replace the old one.

1. Wait for the new Instance to boot and join the cluster.

1. Repeat these steps for each of the other old Instances in the ASG.

We will add a script in the future to automate this process (PRs are welcome!).
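
Until then, the loop above can be sketched in bash roughly as follows. This is an untested sketch, not part of the module: the tag filter, function names, and the fixed `sleep` interval are all assumptions, and it requires the AWS CLI, `jq` being configured aside, `curl`:

```shell
#!/bin/bash
# Sketch: gracefully replace every Instance in a Consul cluster, one at a time.
# Assumes the AWS CLI and curl are installed and configured for your account.

# Replace a single Instance: ask its Consul agent to leave, then terminate it.
replace_instance() {
  local instance_id="$1"

  # Look up the Instance's private IP via the EC2 API
  local ip
  ip=$(aws ec2 describe-instances --instance-ids "$instance_id" \
    --query 'Reservations[0].Instances[0].PrivateIpAddress' --output text)

  # Tell the Consul agent on that Instance to leave gracefully
  curl -s -X PUT "http://$ip:8500/v1/agent/leave"

  # Terminate the Instance; the ASG will launch a replacement with the new AMI
  aws ec2 terminate-instances --instance-ids "$instance_id"
}

# Replace every running Instance carrying the cluster tag, pausing between
# each one so the replacement has time to boot and join the cluster.
replace_cluster() {
  local tag_value="$1"
  local ids
  ids=$(aws ec2 describe-instances \
    --filters "Name=tag:consul-cluster,Values=$tag_value" \
              "Name=instance-state-name,Values=running" \
    --query 'Reservations[].Instances[].InstanceId' --output text)

  for id in $ids; do
    replace_instance "$id"
    sleep 300  # crude wait; polling `consul members` would be more robust
  done
}

# Example usage (tag value from the example earlier in this README):
# replace_cluster "consul-cluster-example"
```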




## What happens if a node crashes?

There are two ways a Consul node may go down:

1. The Consul process may crash. In that case, `supervisor` should restart it automatically.
1. The EC2 Instance running Consul dies. In that case, the Auto Scaling Group should launch a replacement automatically.
   Note that in this case, since the Consul agent did not exit gracefully, and the replacement will have a different ID,
   you may have to manually clean out the old nodes using the [force-leave
   command](https://www.consul.io/docs/commands/force-leave.html). We may add a script to do this
   automatically in the future. For more info, see the [Consul Outage
   documentation](https://www.consul.io/docs/guides/outage.html).
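
For example, running on one of the surviving cluster nodes, you would pass `force-leave` the dead node's name as shown by `consul members`:

```
> consul force-leave i-0051c3ea00e9691a0
```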




## Security

Here are some of the main security considerations to keep in mind when using this module:

1. [Encryption in transit](#encryption-in-transit)
1. [Encryption at rest](#encryption-at-rest)
1. [Dedicated instances](#dedicated-instances)
1. [Security groups](#security-groups)
1. [SSH access](#ssh-access)


### Encryption in transit

Consul can encrypt all of its network traffic. For instructions on enabling network encryption, have a look at the
[How do you handle encryption documentation](https://github.com/hashicorp/terraform-aws-consul/tree/master/modules/run-consul#how-do-you-handle-encryption).


### Encryption at rest

The EC2 Instances in the cluster store all their data on the root EBS Volume. To enable encryption for the data at
rest, you must enable encryption in your Consul AMI. If you're creating the AMI using Packer (e.g. as shown in
the [consul-ami example](https://github.com/hashicorp/terraform-aws-consul/tree/master/examples/consul-ami)), you need to set the [encrypt_boot
parameter](https://www.packer.io/docs/builders/amazon-ebs.html#encrypt_boot) to `true`.
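
A minimal sketch of that setting in a Packer template (the other required `amazon-ebs` builder fields, such as `region`, `source_ami`, `instance_type`, and `ssh_username`, are omitted here):

```json
{
  "builders": [{
    "type": "amazon-ebs",
    "ami_name": "consul-example-{{timestamp}}",
    "encrypt_boot": true
  }]
}
```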


### Dedicated instances

If you wish to use dedicated instances, you can set the `tenancy` parameter to `"dedicated"` in this module.


### Security groups

This module attaches a security group to each EC2 Instance that allows inbound requests as follows:

* **Consul**: For all the [ports used by Consul](https://www.consul.io/docs/agent/options.html#ports), you can
  use the `allowed_inbound_cidr_blocks` parameter to control the list of
  [CIDR blocks](https://en.wikipedia.org/wiki/Classless_Inter-Domain_Routing) that will be allowed access, and the
  `allowed_inbound_security_group_ids` parameter to control the Security Groups that will be allowed access.

* **SSH**: For the SSH port (default: 22), you can use the `allowed_ssh_cidr_blocks` parameter to control the list of
  [CIDR blocks](https://en.wikipedia.org/wiki/Classless_Inter-Domain_Routing) that will be allowed access, and the
  `allowed_inbound_ssh_security_group_ids` parameter to control the list of source Security Groups that will be allowed access.

Note that all the ports mentioned above are configurable via the `xxx_port` variables (e.g. `server_rpc_port`). See
[vars.tf](vars.tf) for the full list.
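
Put together, these parameters might look like this in the module block (the CIDR values and port number are placeholders; the parameter names come from the list above):

```hcl
module "consul_cluster" {
  source = "github.com/hashicorp/terraform-aws-consul//modules/consul-cluster?ref=v0.0.1"

  # Placeholder values: restrict Consul and SSH access to your own networks
  allowed_inbound_cidr_blocks = ["10.0.0.0/16"]
  allowed_ssh_cidr_blocks     = ["10.0.0.0/16"]

  # Example of overriding one of the xxx_port variables
  server_rpc_port = 8300

  # ... Other params omitted ...
}
```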



### SSH access

You can associate an [EC2 Key Pair](http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ec2-key-pairs.html) with each
of the EC2 Instances in this cluster by specifying the Key Pair's name in the `ssh_key_name` variable. If you don't
want to associate a Key Pair with these servers, set `ssh_key_name` to an empty string.




## What's NOT included in this module?

This module does NOT handle the following items, which you may want to provide on your own:

* [Monitoring, alerting, log aggregation](#monitoring-alerting-log-aggregation)
* [VPCs, subnets, route tables](#vpcs-subnets-route-tables)
* [DNS entries](#dns-entries)


### Monitoring, alerting, log aggregation

This module does not include anything for monitoring, alerting, or log aggregation. All ASGs and EC2 Instances come
with limited [CloudWatch](https://aws.amazon.com/cloudwatch/) metrics built-in, but beyond that, you will have to
provide your own solutions.


### VPCs, subnets, route tables

This module assumes you've already created your network topology (VPC, subnets, route tables, etc). You will need to
pass in the relevant info about your network topology (e.g. `vpc_id`, `subnet_ids`) as input variables to this
module.
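
For example, here is a sketch of wiring up the default VPC via Terraform data sources. The input names `vpc_id` and `subnet_ids` come from the paragraph above; check vars.tf for the exact names and types your version expects:

```hcl
# Look up the default VPC and its subnets
data "aws_vpc" "default" {
  default = true
}

data "aws_subnet_ids" "default" {
  vpc_id = "${data.aws_vpc.default.id}"
}

module "consul_cluster" {
  source = "github.com/hashicorp/terraform-aws-consul//modules/consul-cluster?ref=v0.0.1"

  vpc_id     = "${data.aws_vpc.default.id}"
  subnet_ids = "${data.aws_subnet_ids.default.ids}"

  # ... Other params omitted ...
}
```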


### DNS entries

This module does not create any DNS entries for Consul (e.g. in Route 53).