
---
layout: "language"
page_title: "Backend Type: s3"
sidebar_current: "docs-backends-types-standard-s3"
description: |-
  Terraform can store state remotely in S3 and lock that state with DynamoDB.
---

# S3

**Kind: Standard (with locking via DynamoDB)**

Stores the state as a given key in a given bucket on
[Amazon S3](https://aws.amazon.com/s3/).
This backend also supports state locking and consistency checking via
[DynamoDB](https://aws.amazon.com/dynamodb/), which can be enabled by setting
the `dynamodb_table` field to an existing DynamoDB table name.
A single DynamoDB table can be used to lock multiple remote state files. Terraform generates key names that include the values of the `bucket` and `key` arguments.

~> **Warning!** It is highly recommended that you enable
[Bucket Versioning](https://docs.aws.amazon.com/AmazonS3/latest/userguide/manage-versioning-examples.html)
on the S3 bucket to allow for state recovery in the case of accidental deletions and human error.

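As a hedged sketch of that recommendation, versioning can be enabled when the state bucket itself is managed with Terraform (the bucket name is illustrative; on AWS provider v4 and later, versioning is instead configured with a separate `aws_s3_bucket_versioning` resource):

```hcl
resource "aws_s3_bucket" "terraform_state" {
  bucket = "mybucket" # illustrative name

  # Keep old object versions so a corrupted or deleted state file
  # can be restored from an earlier version.
  versioning {
    enabled = true
  }
}
```
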
## Example Configuration

```hcl
terraform {
  backend "s3" {
    bucket = "mybucket"
    key    = "path/to/my/key"
    region = "us-east-1"
  }
}
```

This assumes that the bucket `mybucket` has already been created. The
Terraform state is written to the key `path/to/my/key`.

Note that for the access credentials we recommend using a
[partial configuration](/docs/language/settings/backends/configuration.html#partial-configuration).

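For example, one possible partial configuration leaves the settings out of the committed file and supplies them during initialization (the `backend.hcl` file name is illustrative):

```hcl
terraform {
  backend "s3" {
    # bucket, key, region, and credentials are intentionally omitted here;
    # they are supplied at init time, e.g.:
    #   terraform init -backend-config=backend.hcl
  }
}
```
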
### S3 Bucket Permissions

Terraform will need the following AWS IAM permissions on
the target backend bucket:

* `s3:ListBucket` on `arn:aws:s3:::mybucket`
* `s3:GetObject` on `arn:aws:s3:::mybucket/path/to/my/key`
* `s3:PutObject` on `arn:aws:s3:::mybucket/path/to/my/key`
* `s3:DeleteObject` on `arn:aws:s3:::mybucket/path/to/my/key`

These permissions correspond to the following AWS IAM policy statement:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "s3:ListBucket",
      "Resource": "arn:aws:s3:::mybucket"
    },
    {
      "Effect": "Allow",
      "Action": ["s3:GetObject", "s3:PutObject", "s3:DeleteObject"],
      "Resource": "arn:aws:s3:::mybucket/path/to/my/key"
    }
  ]
}
```

-> **Note:** AWS can control access to S3 buckets with either IAM policies
attached to users/groups/roles (like the example above) or resource policies
attached to bucket objects (which look similar but also require a `Principal` to
indicate which entity has those permissions). For more details, see Amazon's
documentation about
[S3 access control](https://docs.aws.amazon.com/AmazonS3/latest/userguide/s3-access-control.html).

### DynamoDB Table Permissions

If you are using state locking, Terraform will need the following AWS IAM
permissions on the DynamoDB table (`arn:aws:dynamodb:*:*:table/mytable`):

* `dynamodb:GetItem`
* `dynamodb:PutItem`
* `dynamodb:DeleteItem`

These permissions correspond to the following AWS IAM policy statement:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "dynamodb:GetItem",
        "dynamodb:PutItem",
        "dynamodb:DeleteItem"
      ],
      "Resource": "arn:aws:dynamodb:*:*:table/mytable"
    }
  ]
}
```

## Data Source Configuration

To make use of the S3 remote state in another configuration, use the
[`terraform_remote_state` data
source](/docs/language/state/remote-state-data.html).

```hcl
data "terraform_remote_state" "network" {
  backend = "s3"
  config = {
    bucket = "terraform-state-prod"
    key    = "network/terraform.tfstate"
    region = "us-east-1"
  }
}
```

The `terraform_remote_state` data source will return all of the root module
outputs defined in the referenced remote state (but not any outputs from
nested modules unless they are explicitly output again in the root). An
example output might look like:

```
data.terraform_remote_state.network:
  id = 2016-10-29 01:57:59.780010914 +0000 UTC
  addresses.# = 2
  addresses.0 = 52.207.220.222
  addresses.1 = 54.196.78.166
  backend = s3
  config.% = 3
  config.bucket = terraform-state-prod
  config.key = network/terraform.tfstate
  config.region = us-east-1
  elb_address = web-elb-790251200.us-east-1.elb.amazonaws.com
  public_subnet_id = subnet-1e05dd33
```

## Configuration

This backend requires the configuration of the AWS Region and S3 state storage. Other configuration, such as enabling DynamoDB state locking, is optional.

### Credentials and Shared Configuration

The following configuration is required:

* `region` - (Required) AWS Region of the S3 Bucket and DynamoDB Table (if used). This can also be sourced from the `AWS_DEFAULT_REGION` and `AWS_REGION` environment variables.

The following configuration is optional:

* `access_key` - (Optional) AWS access key. If configured, you must also configure `secret_key`. This can also be sourced from the `AWS_ACCESS_KEY_ID` environment variable, AWS shared credentials file (e.g. `~/.aws/credentials`), or AWS shared configuration file (e.g. `~/.aws/config`).
* `secret_key` - (Optional) AWS secret access key. If configured, you must also configure `access_key`. This can also be sourced from the `AWS_SECRET_ACCESS_KEY` environment variable, AWS shared credentials file (e.g. `~/.aws/credentials`), or AWS shared configuration file (e.g. `~/.aws/config`).
* `iam_endpoint` - (Optional) Custom endpoint for the AWS Identity and Access Management (IAM) API. This can also be sourced from the `AWS_IAM_ENDPOINT` environment variable.
* `max_retries` - (Optional) The maximum number of times an AWS API request is retried on retryable failure. Defaults to 5.
* `profile` - (Optional) Name of AWS profile in AWS shared credentials file (e.g. `~/.aws/credentials`) or AWS shared configuration file (e.g. `~/.aws/config`) to use for credentials and/or configuration. This can also be sourced from the `AWS_PROFILE` environment variable.
* `shared_credentials_file` - (Optional) Path to the AWS shared credentials file. Defaults to `~/.aws/credentials`.
* `skip_credentials_validation` - (Optional) Skip credentials validation via the STS API.
* `skip_region_validation` - (Optional) Skip validation of provided region name.
* `skip_metadata_api_check` - (Optional) Skip usage of EC2 Metadata API.
* `sts_endpoint` - (Optional) Custom endpoint for the AWS Security Token Service (STS) API. This can also be sourced from the `AWS_STS_ENDPOINT` environment variable.
* `token` - (Optional) Multi-Factor Authentication (MFA) token. This can also be sourced from the `AWS_SESSION_TOKEN` environment variable.

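For example, a sketch of a backend configuration that sources credentials from a named profile (the profile name is illustrative):

```hcl
terraform {
  backend "s3" {
    bucket  = "mybucket"
    key     = "path/to/my/key"
    region  = "us-east-1"

    # Credentials are read from ~/.aws/credentials for this profile.
    profile = "myorg-admin" # illustrative profile name
  }
}
```
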
#### Assume Role Configuration

The following configuration is optional:

* `assume_role_duration_seconds` - (Optional) Number of seconds to restrict the assume role session duration.
* `assume_role_policy` - (Optional) IAM policy in JSON format that further restricts the permissions of the IAM role being assumed.
* `assume_role_policy_arns` - (Optional) Set of Amazon Resource Names (ARNs) of IAM policies that further restrict the permissions of the IAM role being assumed.
* `assume_role_tags` - (Optional) Map of assume role session tags.
* `assume_role_transitive_tag_keys` - (Optional) Set of assume role session tag keys to pass to any subsequent sessions.
* `external_id` - (Optional) External identifier to use when assuming the role.
* `role_arn` - (Optional) Amazon Resource Name (ARN) of the IAM role to assume.
* `session_name` - (Optional) Session name to use when assuming the role.

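As an illustrative sketch of these arguments in use, the backend can assume a dedicated role before touching state (the role ARN and session name below are placeholders):

```hcl
terraform {
  backend "s3" {
    bucket       = "mybucket"
    key          = "path/to/my/key"
    region       = "us-east-1"

    # The backend assumes this role for all state operations.
    role_arn     = "arn:aws:iam::ACCOUNT-ID:role/TerraformState" # placeholder ARN
    session_name = "terraform-backend"
  }
}
```
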
### S3 State Storage

The following configuration is required:

* `bucket` - (Required) Name of the S3 Bucket.
* `key` - (Required) Path to the state file inside the S3 Bucket. When using a non-default [workspace](/docs/language/state/workspaces.html), the state path will be `/workspace_key_prefix/workspace_name/key` (see also the `workspace_key_prefix` configuration).

The following configuration is optional:

* `acl` - (Optional) [Canned ACL](https://docs.aws.amazon.com/AmazonS3/latest/userguide/acl-overview.html#canned-acl) to be applied to the state file.
* `encrypt` - (Optional) Enable [server side encryption](https://docs.aws.amazon.com/AmazonS3/latest/userguide/UsingServerSideEncryption.html) of the state file.
* `endpoint` - (Optional) Custom endpoint for the AWS S3 API. This can also be sourced from the `AWS_S3_ENDPOINT` environment variable.
* `force_path_style` - (Optional) Enable path-style S3 URLs (`https://<HOST>/<BUCKET>` instead of `https://<BUCKET>.<HOST>`).
* `kms_key_id` - (Optional) Amazon Resource Name (ARN) of a Key Management Service (KMS) Key to use for encrypting the state. Note that if this value is specified, Terraform will need `kms:Encrypt`, `kms:Decrypt` and `kms:GenerateDataKey` permissions on this KMS key.
* `sse_customer_key` - (Optional) The key to use for encrypting state with [Server-Side Encryption with Customer-Provided Keys (SSE-C)](https://docs.aws.amazon.com/AmazonS3/latest/userguide/ServerSideEncryptionCustomerKeys.html). This is the base64-encoded value of the key, which must decode to 256 bits. This can also be sourced from the `AWS_SSE_CUSTOMER_KEY` environment variable, which is recommended due to the sensitivity of the value. Setting it inside a Terraform file will cause it to be persisted to disk in `terraform.tfstate`.
* `workspace_key_prefix` - (Optional) Prefix applied to the state path inside the bucket. This is only relevant when using a non-default workspace. Defaults to `env:`.

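For example, a sketch of state encrypted with a customer-managed KMS key (the key ARN is a placeholder):

```hcl
terraform {
  backend "s3" {
    bucket     = "mybucket"
    key        = "path/to/my/key"
    region     = "us-east-1"

    # Server-side encryption of the state object; Terraform also needs
    # kms:Encrypt, kms:Decrypt, and kms:GenerateDataKey on this key.
    encrypt    = true
    kms_key_id = "arn:aws:kms:us-east-1:ACCOUNT-ID:key/KEY-ID" # placeholder ARN
  }
}
```
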
### DynamoDB State Locking

The following configuration is optional:

* `dynamodb_endpoint` - (Optional) Custom endpoint for the AWS DynamoDB API. This can also be sourced from the `AWS_DYNAMODB_ENDPOINT` environment variable.
* `dynamodb_table` - (Optional) Name of DynamoDB Table to use for state locking and consistency. The table must have a partition key named `LockID` with type of `String`. If not configured, state locking will be disabled.

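If the lock table is itself managed with Terraform (in a separate configuration), it might be sketched as follows; the table name is illustrative, but the `LockID` partition key name and `S` (String) type are required by the backend:

```hcl
resource "aws_dynamodb_table" "terraform_locks" {
  name         = "mytable" # illustrative name
  billing_mode = "PAY_PER_REQUEST"
  hash_key     = "LockID"

  # The S3 backend requires exactly this partition key.
  attribute {
    name = "LockID"
    type = "S"
  }
}
```
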
## Multi-account AWS Architecture

A common architectural pattern is for an organization to use a number of
separate AWS accounts to isolate different teams and environments. For example,
a "staging" system will often be deployed into a separate AWS account from
its corresponding "production" system, to minimize the risk of the staging
environment affecting production infrastructure, whether via rate limiting,
misconfigured access controls, or other unintended interactions.

The S3 backend can be used in a number of different ways that make different
tradeoffs between convenience, security, and isolation in such an organization.
This section describes one such approach that aims to find a good compromise
between these tradeoffs, allowing use of
[Terraform's workspaces feature](/docs/language/state/workspaces.html) to switch
conveniently between multiple isolated deployments of the same configuration.

Use this section as a starting-point for your approach, but note that
you will probably need to make adjustments for the unique standards and
regulations that apply to your organization. You will also need to make some
adjustments to this approach to account for _existing_ practices within your
organization, if for example other tools have previously been used to manage
infrastructure.

Terraform is an administrative tool that manages your infrastructure, and so
ideally the infrastructure that is used by Terraform should exist outside of
the infrastructure that Terraform manages. This can be achieved by creating a
separate _administrative_ AWS account which contains the user accounts used by
human operators and any infrastructure and tools used to manage the other
accounts. Isolating shared administrative tools from your main environments
has a number of advantages, such as avoiding accidentally damaging the
administrative infrastructure while changing the target infrastructure, and
reducing the risk that an attacker might abuse production infrastructure to
gain access to the (usually more privileged) administrative infrastructure.

### Administrative Account Setup

Your administrative AWS account will contain at least the following items:

* One or more [IAM users](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_users.html)
  for system administrators who will log in to maintain infrastructure in
  the other accounts.
* Optionally, one or more [IAM groups](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_groups.html)
  to differentiate between different groups of users that have different
  levels of access to the other AWS accounts.
* An [S3 bucket](https://docs.aws.amazon.com/AmazonS3/latest/userguide/UsingBucket.html)
  that will contain the Terraform state files for each workspace.
* A [DynamoDB table](https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/HowItWorks.CoreComponents.html#HowItWorks.CoreComponents.TablesItemsAttributes)
  that will be used for locking to prevent concurrent operations on a single
  workspace.

Provide the S3 bucket name and DynamoDB table name to Terraform within the
S3 backend configuration using the `bucket` and `dynamodb_table` arguments
respectively, and configure a suitable `workspace_key_prefix` to contain
the states of the various workspaces that will subsequently be created for
this configuration.

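Putting these together, a backend configuration for one such shared configuration might look like the following sketch (bucket, table, and prefix names are illustrative):

```hcl
terraform {
  backend "s3" {
    bucket               = "myorg-terraform-states" # illustrative
    key                  = "myapp/terraform.tfstate"
    region               = "us-east-1"
    dynamodb_table       = "myorg-terraform-locks"  # illustrative

    # Non-default workspace states will be stored at
    # myapp-env/<workspace_name>/myapp/terraform.tfstate
    workspace_key_prefix = "myapp-env"
  }
}
```
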
### Environment Account Setup

For the sake of this section, the term "environment account" refers to one
of the accounts whose contents are managed by Terraform, separate from the
administrative account described above.

Your environment accounts will eventually contain your own product-specific
infrastructure. Along with this, each must contain one or more
[IAM roles](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles.html)
that grant sufficient access for Terraform to perform the desired management
tasks.

### Delegating Access

Each Administrator will run Terraform using credentials for their IAM user
in the administrative account.
[IAM Role Delegation](https://docs.aws.amazon.com/IAM/latest/UserGuide/tutorial_cross-account-with-roles.html)
is used to grant these users access to the roles created in each environment
account.

Full details on role delegation are covered in the AWS documentation linked
above. The most important details are:

* Each role's _Assume Role Policy_ must grant access to the administrative AWS
  account, which creates a trust relationship with the administrative AWS
  account so that its users may assume the role.
* The users or groups within the administrative account must also have a
  policy that creates the converse relationship, allowing these users or groups
  to assume that role.

Since the purpose of the administrative account is only to host tools for
managing other accounts, it is useful to give the administrative accounts
restricted access only to the specific operations needed to assume the
environment account role and access the Terraform state. By blocking all
other access, you remove the risk that user error will lead to staging or
production resources being created in the administrative account by mistake.

When configuring Terraform, use either environment variables or the standard
credentials file `~/.aws/credentials` to provide the administrator user's
IAM credentials within the administrative account to both the S3 backend _and_
to Terraform's AWS provider.

Use conditional configuration to pass a different `assume_role` value to
the AWS provider depending on the selected workspace. For example:

```hcl
variable "workspace_iam_roles" {
  default = {
    staging    = "arn:aws:iam::STAGING-ACCOUNT-ID:role/Terraform"
    production = "arn:aws:iam::PRODUCTION-ACCOUNT-ID:role/Terraform"
  }
}

provider "aws" {
  # No credentials explicitly set here because they come from either the
  # environment or the global credentials file.

  assume_role {
    role_arn = var.workspace_iam_roles[terraform.workspace]
  }
}
```

If workspace IAM roles are centrally managed and shared across many separate
Terraform configurations, the role ARNs could also be obtained via a data
source such as [`terraform_remote_state`](/docs/language/state/remote-state-data.html)
to avoid repeating these values.

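A hedged sketch of that approach, assuming a hypothetical shared state whose root module exports a `workspace_iam_roles` output map:

```hcl
# Hypothetical shared state exporting a map of workspace name => role ARN.
data "terraform_remote_state" "iam" {
  backend = "s3"
  config = {
    bucket = "myorg-terraform-states" # illustrative
    key    = "shared-iam/terraform.tfstate"
    region = "us-east-1"
  }
}

provider "aws" {
  assume_role {
    role_arn = data.terraform_remote_state.iam.outputs.workspace_iam_roles[terraform.workspace]
  }
}
```
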
### Creating and Selecting Workspaces

With the necessary objects created and the backend configured, run
`terraform init` to initialize the backend and establish an initial workspace
called "default". This workspace will not be used, but is created automatically
by Terraform as a convenience for users who are not using the workspaces
feature.

Create a workspace corresponding to each key given in the `workspace_iam_roles`
variable value above:

```
$ terraform workspace new staging
Created and switched to workspace "staging"!

...

$ terraform workspace new production
Created and switched to workspace "production"!

...
```

Due to the `assume_role` setting in the AWS provider configuration, any
management operations for AWS resources will be performed via the configured
role in the appropriate environment AWS account. The backend operations, such
as reading and writing the state from S3, will be performed directly as the
administrator's own user within the administrative account.

```
$ terraform workspace select staging
$ terraform apply
...
```

### Running Terraform in Amazon EC2

Teams that make extensive use of Terraform for infrastructure management
often [run Terraform in automation](https://learn.hashicorp.com/tutorials/terraform/automate-terraform?in=terraform/automation&utm_source=WEBSITE&utm_medium=WEB_IO&utm_offer=ARTICLE_PAGE&utm_content=DOCS)
to ensure a consistent operating environment and to limit access to the
various secrets and other sensitive information that Terraform configurations
tend to require.

When running Terraform in an automation tool running on an Amazon EC2 instance,
consider running this instance in the administrative account and using an
[instance profile](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles_use_switch-role-ec2_instance-profiles.html)
in place of the various administrator IAM users suggested above. An IAM
instance profile can also be granted cross-account delegation access via
an IAM policy, giving this instance the access it needs to run Terraform.

To isolate access to different environment accounts, use a separate EC2
instance for each target account so that its access can be limited only to
the single account.

Similar approaches can be taken with equivalent features in other AWS compute
services, such as ECS.

### Protecting Access to Workspace State

In a simple implementation of the pattern described in the prior sections,
all users have access to read and write states for all workspaces. In many
cases it is desirable to apply more precise access constraints to the
Terraform state objects in S3, so that for example only trusted administrators
are allowed to modify the production state, or to control _reading_ of a state
that contains sensitive information.

Amazon S3 supports fine-grained access control on a per-object-path basis
using IAM policy. A full description of S3's access control mechanism is
beyond the scope of this guide, but an example IAM policy granting access
to only a single state object within an S3 bucket is shown below:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "s3:ListBucket",
      "Resource": "arn:aws:s3:::myorg-terraform-states"
    },
    {
      "Effect": "Allow",
      "Action": ["s3:GetObject", "s3:PutObject"],
      "Resource": "arn:aws:s3:::myorg-terraform-states/myapp/production/tfstate"
    }
  ]
}
```

It is not possible to apply such fine-grained access control to the DynamoDB
table used for locking, so any user with Terraform access can lock any
workspace state, even if they do not have access to read or write that state.
If a malicious user has such access, they could block attempts to use
Terraform against some or all of your workspaces as long as locking is
enabled in the backend configuration.

### Configuring Custom User-Agent Information

-> **Note:** This feature is optional and only available in Terraform v0.13.1+.

By default, the underlying AWS client used by the Terraform AWS Provider creates requests with User-Agent headers that include information about the Terraform and AWS Go SDK versions. To provide additional information in the User-Agent headers, set the `TF_APPEND_USER_AGENT` environment variable; its value is appended directly to HTTP requests. For example:

```sh
$ export TF_APPEND_USER_AGENT="JenkinsAgent/i-12345678 BuildID/1234 (Optional Extra Information)"
```