---
layout: "backend-types"
page_title: "Backend Type: s3"
sidebar_current: "docs-backends-types-standard-s3"
description: |-
  Terraform can store state remotely in S3 and lock that state with DynamoDB.
---

# S3

**Kind: Standard (with locking via DynamoDB)**

Stores the state as a given key in a given bucket on
[Amazon S3](https://aws.amazon.com/s3/).
This backend also supports state locking and consistency checking via
[DynamoDB](https://aws.amazon.com/dynamodb/), which can be enabled by setting
the `dynamodb_table` field to an existing DynamoDB table name.

~> **Warning!** It is highly recommended that you enable
[Bucket Versioning](http://docs.aws.amazon.com/AmazonS3/latest/UG/enable-bucket-versioning.html)
on the S3 bucket to allow for state recovery in the case of accidental deletions and human error.
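
As a sketch, the state bucket itself could be provisioned (in a separate,
manually-bootstrapped configuration) with versioning enabled; the bucket name
here is a placeholder:

```hcl
resource "aws_s3_bucket" "terraform_state" {
  # Placeholder name; S3 bucket names must be globally unique.
  bucket = "mybucket"

  # Keep old state versions so state can be recovered after mistakes.
  versioning {
    enabled = true
  }
}
```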

## Example Configuration

```hcl
terraform {
  backend "s3" {
    bucket = "mybucket"
    key    = "path/to/my/key"
    region = "us-east-1"
  }
}
```

This assumes we have a bucket named `mybucket` already created. The
Terraform state is written to the key `path/to/my/key`.

Note that for the access credentials we recommend using a
[partial configuration](/docs/backends/config.html).
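
With a partial configuration, the backend block above stays free of
credentials, and they are supplied at initialization time instead; for
example (the values shown are placeholders):

```
$ terraform init \
    -backend-config="access_key=<your access key>" \
    -backend-config="secret_key=<your secret key>"
```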

### S3 Bucket Permissions

Terraform will need the following AWS IAM permissions on
the target backend bucket:

* `s3:ListBucket` on `arn:aws:s3:::mybucket`
* `s3:GetObject` on `arn:aws:s3:::mybucket/path/to/my/key`
* `s3:PutObject` on `arn:aws:s3:::mybucket/path/to/my/key`

These permissions are expressed in the following AWS IAM statement:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "s3:ListBucket",
      "Resource": "arn:aws:s3:::mybucket"
    },
    {
      "Effect": "Allow",
      "Action": ["s3:GetObject", "s3:PutObject"],
      "Resource": "arn:aws:s3:::mybucket/path/to/my/key"
    }
  ]
}
```

### DynamoDB Table Permissions

If you are using state locking, Terraform will need the following AWS IAM
permissions on the DynamoDB table (`arn:aws:dynamodb:*:*:table/mytable`):

* `dynamodb:GetItem`
* `dynamodb:PutItem`
* `dynamodb:DeleteItem`

These permissions are expressed in the following AWS IAM statement:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "dynamodb:GetItem",
        "dynamodb:PutItem",
        "dynamodb:DeleteItem"
      ],
      "Resource": "arn:aws:dynamodb:*:*:table/mytable"
    }
  ]
}
```
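
As a sketch, a suitable lock table could be created with Terraform itself;
the table name and capacity values below are illustrative:

```hcl
resource "aws_dynamodb_table" "terraform_lock" {
  name           = "mytable"
  read_capacity  = 1
  write_capacity = 1

  # The S3 backend requires a primary key named "LockID".
  hash_key = "LockID"

  attribute {
    name = "LockID"
    type = "S"
  }
}
```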

## Using the S3 remote state

To make use of the S3 remote state we can use the
[`terraform_remote_state` data
source](/docs/providers/terraform/d/remote_state.html).

```hcl
data "terraform_remote_state" "network" {
  backend = "s3"
  config {
    bucket = "terraform-state-prod"
    key    = "network/terraform.tfstate"
    region = "us-east-1"
  }
}
```

The `terraform_remote_state` data source will return all of the root module
outputs defined in the referenced remote state (but not any outputs from
nested modules unless they are explicitly output again in the root). An
example output might look like:

```
data.terraform_remote_state.network:
  id = 2016-10-29 01:57:59.780010914 +0000 UTC
  addresses.# = 2
  addresses.0 = 52.207.220.222
  addresses.1 = 54.196.78.166
  backend = s3
  config.% = 3
  config.bucket = terraform-state-prod
  config.key = network/terraform.tfstate
  config.region = us-east-1
  elb_address = web-elb-790251200.us-east-1.elb.amazonaws.com
  public_subnet_id = subnet-1e05dd33
```
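
These outputs can then be referenced from resources in the consuming
configuration; for example (the resource and AMI ID here are illustrative):

```hcl
resource "aws_instance" "web" {
  # Placeholder AMI ID.
  ami           = "ami-12345678"
  instance_type = "t2.micro"

  # Place the instance in the subnet exported by the "network" state.
  subnet_id = "${data.terraform_remote_state.network.public_subnet_id}"
}
```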

## Configuration variables

The following configuration options or environment variables are supported:

 * `bucket` - (Required) The name of the S3 bucket.
 * `key` - (Required) The path to the state file inside the bucket. When using
   a non-default [workspace](/docs/state/workspaces.html), the state path will
   be `/workspace_key_prefix/workspace_name/key`.
 * `region` / `AWS_DEFAULT_REGION` - (Optional) The region of the S3 bucket.
 * `endpoint` / `AWS_S3_ENDPOINT` - (Optional) A custom endpoint for the
   S3 API.
 * `encrypt` - (Optional) Whether to enable [server side
   encryption](https://docs.aws.amazon.com/AmazonS3/latest/dev/UsingServerSideEncryption.html)
   of the state file.
 * `acl` - (Optional) [Canned
   ACL](https://docs.aws.amazon.com/AmazonS3/latest/dev/acl-overview.html#canned-acl)
   to be applied to the state file.
 * `access_key` / `AWS_ACCESS_KEY_ID` - (Optional) AWS access key.
 * `secret_key` / `AWS_SECRET_ACCESS_KEY` - (Optional) AWS secret access key.
 * `kms_key_id` - (Optional) The ARN of a KMS Key to use for encrypting
   the state.
 * `lock_table` - (Optional, Deprecated) Use `dynamodb_table` instead.
 * `dynamodb_table` - (Optional) The name of a DynamoDB table to use for state
   locking and consistency. The table must have a primary key named `LockID`.
   If not present, locking will be disabled.
 * `profile` - (Optional) The AWS profile name as set in the
   shared credentials file.
 * `shared_credentials_file` - (Optional) The path to the
   shared credentials file. If this is not set and a profile is specified,
   `~/.aws/credentials` will be used.
 * `token` - (Optional) An MFA token. It can also be
   sourced from the `AWS_SESSION_TOKEN` environment variable.
 * `role_arn` - (Optional) The role to be assumed.
 * `assume_role_policy` - (Optional) The permissions applied when assuming a role.
 * `external_id` - (Optional) The external ID to use when assuming the role.
 * `session_name` - (Optional) The session name to use when assuming the role.
 * `workspace_key_prefix` - (Optional) The prefix applied to the state path
   inside the bucket. This is only relevant when using a non-default workspace,
   and defaults to `env:`.
 * `skip_credentials_validation` - (Optional) Skip the credentials validation via the STS API.
 * `skip_get_ec2_platforms` - (Optional) Skip getting the supported EC2 platforms.
 * `skip_region_validation` - (Optional) Skip validation of the provided region name.
 * `skip_requesting_account_id` - (Optional) Skip requesting the account ID.
 * `skip_metadata_api_check` - (Optional) Skip the AWS Metadata API check.
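
For example, a backend configuration that enables server-side encryption and
state locking might look like this (bucket, key, and table names are
placeholders):

```hcl
terraform {
  backend "s3" {
    bucket         = "mybucket"
    key            = "path/to/my/key"
    region         = "us-east-1"
    encrypt        = true
    dynamodb_table = "mytable"
  }
}
```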

## Multi-account AWS Architecture

A common architectural pattern is for an organization to use a number of
separate AWS accounts to isolate different teams and environments. For example,
a "staging" system will often be deployed into a separate AWS account from
its corresponding "production" system, to minimize the risk of the staging
environment affecting production infrastructure, whether via rate limiting,
misconfigured access controls, or other unintended interactions.

The S3 backend can be used in a number of different ways that make different
tradeoffs between convenience, security, and isolation in such an organization.
This section describes one such approach that aims to find a good compromise
between these tradeoffs, allowing use of
[Terraform's workspaces feature](/docs/state/workspaces.html) to switch
conveniently between multiple isolated deployments of the same configuration.

Use this section as a starting point for your approach, but note that
you will probably need to make adjustments for the unique standards and
regulations that apply to your organization. You will also need to make some
adjustments to this approach to account for _existing_ practices within your
organization, if for example other tools have previously been used to manage
infrastructure.

Terraform is an administrative tool that manages your infrastructure, and so
ideally the infrastructure that is used by Terraform should exist outside of
the infrastructure that Terraform manages. This can be achieved by creating a
separate _administrative_ AWS account which contains the user accounts used by
human operators and any infrastructure and tools used to manage the other
accounts. Isolating shared administrative tools from your main environments
has a number of advantages, such as avoiding accidentally damaging the
administrative infrastructure while changing the target infrastructure, and
reducing the risk that an attacker might abuse production infrastructure to
gain access to the (usually more privileged) administrative infrastructure.

### Administrative Account Setup

Your administrative AWS account will contain at least the following items:

* One or more [IAM users](http://docs.aws.amazon.com/IAM/latest/UserGuide/id_users.html)
  for system administrators that will log in to maintain infrastructure in
  the other accounts.
* Optionally, one or more [IAM groups](http://docs.aws.amazon.com/IAM/latest/UserGuide/id_groups.html)
  to differentiate between different groups of users that have different
  levels of access to the other AWS accounts.
* An [S3 bucket](http://docs.aws.amazon.com/AmazonS3/latest/dev/UsingBucket.html)
  that will contain the Terraform state files for each workspace.
* A [DynamoDB table](http://docs.aws.amazon.com/amazondynamodb/latest/developerguide/HowItWorks.CoreComponents.html#HowItWorks.CoreComponents.TablesItemsAttributes)
  that will be used for locking to prevent concurrent operations on a single
  workspace.

Provide the S3 bucket name and DynamoDB table name to Terraform within the
S3 backend configuration using the `bucket` and `dynamodb_table` arguments
respectively, and configure a suitable `workspace_key_prefix` to contain
the states of the various workspaces that will subsequently be created for
this configuration.
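
For example (all names here are placeholders):

```hcl
terraform {
  backend "s3" {
    bucket               = "myorg-terraform-states"
    key                  = "terraform.tfstate"
    region               = "us-east-1"
    dynamodb_table       = "terraform-locks"
    workspace_key_prefix = "myapp"
  }
}
```

With this configuration, the state for a non-default workspace such as
`staging` would be stored at the key `myapp/staging/terraform.tfstate`.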

### Environment Account Setup

For the sake of this section, the term "environment account" refers to one
of the accounts whose contents are managed by Terraform, separate from the
administrative account described above.

Your environment accounts will eventually contain your own product-specific
infrastructure. Along with this, each must contain one or more
[IAM roles](http://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles.html)
that grant sufficient access for Terraform to perform the desired management
tasks.

### Delegating Access

Each administrator will run Terraform using credentials for their IAM user
in the administrative account.
[IAM Role Delegation](http://docs.aws.amazon.com/IAM/latest/UserGuide/tutorial_cross-account-with-roles.html)
is used to grant these users access to the roles created in each environment
account.

Full details on role delegation are covered in the AWS documentation linked
above. The most important details are:

* Each role's _Assume Role Policy_ must grant access to the administrative AWS
  account, which creates a trust relationship with the administrative AWS
  account so that its users may assume the role.
* The users or groups within the administrative account must also have a
  policy that creates the converse relationship, allowing these users or groups
  to assume that role.
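
As a sketch, the first of these is an _Assume Role Policy_ (trust policy)
attached to each environment account role; `ADMIN-ACCOUNT-ID` is a
placeholder for the administrative account's ID:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::ADMIN-ACCOUNT-ID:root"
      },
      "Action": "sts:AssumeRole"
    }
  ]
}
```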

Since the purpose of the administrative account is only to host tools for
managing other accounts, it is useful to restrict the administrative account's
users to only the specific operations needed to assume the environment
account roles and access the Terraform state. By blocking all
other access, you remove the risk that user error will lead to staging or
production resources being created in the administrative account by mistake.
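
For example, a policy attached to the administrative users or groups might
grant only the ability to assume the environment roles (the role name
`Terraform` is illustrative, and the wildcard could be narrowed to specific
account IDs):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "sts:AssumeRole",
      "Resource": "arn:aws:iam::*:role/Terraform"
    }
  ]
}
```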

When configuring Terraform, use either environment variables or the standard
credentials file `~/.aws/credentials` to provide the administrator user's
IAM credentials within the administrative account to both the S3 backend _and_
to Terraform's AWS provider.

Use conditional configuration to pass a different `assume_role` value to
the AWS provider depending on the selected workspace. For example:

```hcl
variable "workspace_iam_roles" {
  default = {
    staging    = "arn:aws:iam::STAGING-ACCOUNT-ID:role/Terraform"
    production = "arn:aws:iam::PRODUCTION-ACCOUNT-ID:role/Terraform"
  }
}

provider "aws" {
  # No credentials explicitly set here because they come from either the
  # environment or the global credentials file.

  assume_role {
    role_arn = "${var.workspace_iam_roles[terraform.workspace]}"
  }
}
```

If workspace IAM roles are centrally managed and shared across many separate
Terraform configurations, the role ARNs could also be obtained via a data
source such as [`terraform_remote_state`](/docs/providers/terraform/d/remote_state.html)
to avoid repeating these values.

### Creating and Selecting Workspaces

With the necessary objects created and the backend configured, run
`terraform init` to initialize the backend and establish an initial workspace
called "default". This workspace will not be used, but is created automatically
by Terraform as a convenience for users who are not using the workspaces
feature.

Create a workspace corresponding to each key given in the `workspace_iam_roles`
variable value above:

```
$ terraform workspace new staging
Created and switched to workspace "staging"!

...

$ terraform workspace new production
Created and switched to workspace "production"!

...
```

Due to the `assume_role` setting in the AWS provider configuration, any
management operations for AWS resources will be performed via the configured
role in the appropriate environment AWS account. The backend operations, such
as reading and writing the state from S3, will be performed directly as the
administrator's own user within the administrative account.

```
$ terraform workspace select staging
$ terraform apply
...
```

### Running Terraform in Amazon EC2

Teams that make extensive use of Terraform for infrastructure management
often [run Terraform in automation](/guides/running-terraform-in-automation.html)
to ensure a consistent operating environment and to limit access to the
various secrets and other sensitive information that Terraform configurations
tend to require.

When running Terraform in an automation tool running on an Amazon EC2 instance,
consider running this instance in the administrative account and using an
[instance profile](http://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles_use_switch-role-ec2_instance-profiles.html)
in place of the various administrator IAM users suggested above. An IAM
instance profile can also be granted cross-account delegation access via
an IAM policy, giving this instance the access it needs to run Terraform.

To isolate access to different environment accounts, use a separate EC2
instance for each target account so that its access can be limited only to
the single account.

Similar approaches can be taken with equivalent features in other AWS compute
services, such as ECS.

### Protecting Access to Workspace State

In a simple implementation of the pattern described in the prior sections,
all users have access to read and write states for all workspaces. In many
cases it is desirable to apply more precise access constraints to the
Terraform state objects in S3, so that for example only trusted administrators
are allowed to modify the production state, or to control _reading_ of a state
that contains sensitive information.

Amazon S3 supports fine-grained access control on a per-object-path basis
using IAM policy. A full description of S3's access control mechanism is
beyond the scope of this guide, but an example IAM policy granting access
to only a single state object within an S3 bucket is shown below:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "s3:ListBucket",
      "Resource": "arn:aws:s3:::myorg-terraform-states"
    },
    {
      "Effect": "Allow",
      "Action": ["s3:GetObject", "s3:PutObject"],
      "Resource": "arn:aws:s3:::myorg-terraform-states/myapp/production/tfstate"
    }
  ]
}
```

It is not possible to apply such fine-grained access control to the DynamoDB
table used for locking, so it is possible for any user with Terraform access
to lock any workspace state, even if they do not have access to read or write
that state. If a malicious user has such access they could block attempts to
use Terraform against some or all of your workspaces as long as locking is
enabled in the backend configuration.