github.com/replicatedhq/ship@v0.55.0/integration/failing/init/concourse/expected/.ship/upstream/README.md (about)

     1  # Concourse Helm Chart
     2  
     3  [Concourse](https://concourse-ci.org/) is a simple and scalable CI system.
     4  
     5  ## TL;DR
     6  
     7  ```console
     8  $ helm install stable/concourse
     9  ```
    10  
    11  ## Introduction
    12  
    13  This chart bootstraps a [Concourse](https://concourse-ci.org/) deployment on a [Kubernetes](https://kubernetes.io) cluster using the [Helm](https://helm.sh) package manager.
    14  
    15  ## Prerequisites Details
    16  
    17  * Kubernetes 1.6+ (for `pod affinity` support)
    18  * PV support on underlying infrastructure (if persistence is required)
    19  
    20  ## Installing the Chart
    21  
    22  To install the chart with the release name `my-release`:
    23  
    24  ```console
    25  $ helm install --name my-release stable/concourse
    26  ```
    27  
    28  ## Uninstalling the Chart
    29  
    30  To uninstall/delete the `my-release` deployment:
    31  
    32  ```console
    33  $ helm delete my-release
    34  ```
    35  
    36  The command removes nearly all the Kubernetes components associated with the chart and deletes the release.
    37  
    38  ### Cleanup orphaned Persistent Volumes
    39  
    40  This chart uses `StatefulSets` for Concourse Workers. Deleting a `StatefulSet` does not delete associated Persistent Volumes.
    41  
    42  Do the following after deleting the chart release to clean up orphaned Persistent Volumes.
    43  
    44  ```console
    45  $ kubectl delete pvc -l app=${RELEASE_NAME}-worker
    46  ```
    47  
    48  ## Scaling the Chart
    49  
    50  Scaling should typically be managed via the `helm upgrade` command, but `StatefulSets` don't yet work with `helm upgrade`. Until they do, you can change the number of worker replicas with the `kubectl scale` command:
    51  
    52  ```console
    53  $ kubectl scale statefulset my-release-worker --replicas=3
    54  ```
    55  
    56  ### Restarting workers
    57  
    58  If a worker isn't taking on work, you can restart it with `kubectl delete pod`. This initiates a graceful shutdown by "retiring" the worker, so that Concourse doesn't go looking for old volumes on the new worker. The value `worker.terminationGracePeriodSeconds` can be used to set an upper limit on graceful shutdown time before the container is forcefully terminated. Check the output of `fly workers`; if a worker is `stalled`, you'll also need to run `fly prune-worker` to allow the new incarnation of the worker to start.
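
For example, the flow described above might look like this (the release name `my-release`, fly target `main`, and worker pod name are illustrative):

```console
$ kubectl delete pod my-release-worker-0
$ fly -t main workers
$ fly -t main prune-worker --worker my-release-worker-0
```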
    59  
    60  ### Worker Liveness Probe
    61  
    62  The worker's Liveness Probe will trigger a restart of the worker if it detects unrecoverable errors, by looking at the worker's logs. The set of strings used to identify such errors could change in the future, but can be tuned with `worker.fatalErrors`. See [values.yaml](values.yaml) for the defaults.
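
A sketch of tuning these patterns in a custom `values.yaml` (the strings below are illustrative examples, not the chart defaults):

```yaml
worker:
  ## Newline delimited strings which, when logged, trigger a worker restart.
  fatalErrors: |-
    guardian backend connection lost
    failed to create volume
```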
    63  
    64  ## Configuration
    65  
    66  The following table lists the configurable parameters of the Concourse chart and their default values.
    67  
    68  | Parameter               | Description                           | Default                                                    |
    69  | ----------------------- | ----------------------------------    | ---------------------------------------------------------- |
    70  | `image` | Concourse image | `concourse/concourse` |
    71  | `imageTag` | Concourse image version | `4.2.2` |
    72  | `imagePullPolicy` | Concourse image pull policy | `IfNotPresent` |
    73  | `imagePullSecrets` | Array of imagePullSecrets in the namespace for pulling images | `[]` |
    74  | `web.additionalAffinities` | Additional affinities to apply to web pods. E.g: node affinity | `{}` |
    75  | `web.additionalVolumeMounts` | VolumeMounts to be added to the web pods | `nil` |
    76  | `web.additionalVolumes` | Volumes to be added to the web pods | `nil` |
    77  | `web.annotations`| Concourse Web deployment annotations | `nil` |
    78  | `web.authSecretsPath` | Specify the mount directory of the web auth secrets | `/concourse-auth` |
    79  | `web.env` | Configure additional environment variables for the web containers | `[]` |
    80  | `web.ingress.annotations` | Concourse Web Ingress annotations | `{}` |
    81  | `web.ingress.enabled` | Enable Concourse Web Ingress | `false` |
    82  | `web.ingress.hosts` | Concourse Web Ingress Hostnames | `[]` |
    83  | `web.ingress.tls` | Concourse Web Ingress TLS configuration | `[]` |
    84  | `web.keysSecretsPath` | Specify the mount directory of the web keys secrets | `/concourse-keys` |
    85  | `web.livenessProbe` | Liveness Probe settings | `{"failureThreshold":5,"httpGet":{"path":"/api/v1/info","port":"atc"},"initialDelaySeconds":10,"periodSeconds":15,"timeoutSeconds":3}` |
    86  | `web.nameOverride` | Override the Concourse Web components name | `nil` |
    87  | `web.nodeSelector` | Node selector for web nodes | `{}` |
    88  | `web.postgresqlSecretsPath` | Specify the mount directory of the web postgresql secrets | `/concourse-postgresql` |
    89  | `web.readinessProbe` | Readiness Probe settings | `{"httpGet":{"path":"/api/v1/info","port":"atc"}}` |
    90  | `web.replicas` | Number of Concourse Web replicas | `1` |
    91  | `web.resources` | Concourse Web resource requests and limits | `{requests: {cpu: "100m", memory: "128Mi"}}` |
    92  | `web.service.annotations` | Concourse Web Service annotations | `nil` |
    93  | `web.service.atcNodePort` | Sets the nodePort for atc when using `NodePort` | `nil` |
    94  | `web.service.atcTlsNodePort` | Sets the nodePort for atc tls when using `NodePort` | `nil` |
    95  | `web.service.labels` | Additional concourse web service labels | `nil` |
    96  | `web.service.loadBalancerIP` | The IP to use when web.service.type is LoadBalancer | `nil` |
    97  | `web.service.loadBalancerSourceRanges` | Concourse Web Service Load Balancer Source IP ranges | `nil` |
    98  | `web.service.tsaNodePort` | Sets the nodePort for tsa when using `NodePort` | `nil` |
    99  | `web.service.type` | Concourse Web service type | `ClusterIP` |
   100  | `web.syslogSecretsPath` | Specify the mount directory of the web syslog secrets | `/concourse-syslog` |
   101  | `web.tolerations` | Tolerations for the web nodes | `[]` |
   102  | `web.vaultSecretsPath` | Specify the mount directory of the web vault secrets | `/concourse-vault` |
   103  | `worker.nameOverride` | Override the Concourse Worker components name | `nil` |
   104  | `worker.replicas` | Number of Concourse Worker replicas | `2` |
   105  | `worker.minAvailable` | Minimum number of workers available after an eviction | `1` |
   106  | `worker.resources` | Concourse Worker resource requests and limits | `{requests: {cpu: "100m", memory: "512Mi"}}` |
   107  | `worker.env` | Configure additional environment variables for the worker container(s) | `[]` |
   108  | `worker.annotations` | Annotations to be added to the worker pods | `{}` |
   109  | `worker.keysSecretsPath` | Specify the mount directory of the worker keys secrets | `/concourse-keys` |
   110  | `worker.additionalVolumeMounts` | VolumeMounts to be added to the worker pods | `nil` |
   111  | `worker.additionalVolumes` | Volumes to be added to the worker pods | `nil` |
   112  | `worker.additionalAffinities` | Additional affinities to apply to worker pods. E.g: node affinity | `{}` |
   113  | `worker.tolerations` | Tolerations for the worker nodes | `[]` |
   114  | `worker.terminationGracePeriodSeconds` | Upper bound for graceful shutdown to allow the worker to drain its tasks | `60` |
   115  | `worker.fatalErrors` | Newline delimited strings which, when logged, should trigger a restart of the worker | *See [values.yaml](values.yaml)* |
   116  | `worker.updateStrategy` | `OnDelete` or `RollingUpdate` (requires Kubernetes >= 1.7) | `RollingUpdate` |
   117  | `worker.podManagementPolicy` | `OrderedReady` or `Parallel` (requires Kubernetes >= 1.7) | `Parallel` |
   118  | `worker.hardAntiAffinity` | Should the workers be forced (as opposed to preferred) to be on different nodes? | `false` |
   119  | `worker.emptyDirSize` | When persistence is disabled, this value limits the size of the `emptyDir` volume | `nil` |
   120  | `persistence.enabled` | Enable Concourse persistence using Persistent Volume Claims | `true` |
   121  | `persistence.worker.storageClass` | Concourse Worker Persistent Volume Storage Class | `generic` |
   122  | `persistence.worker.accessMode` | Concourse Worker Persistent Volume Access Mode | `ReadWriteOnce` |
   123  | `persistence.worker.size` | Concourse Worker Persistent Volume Storage Size | `20Gi` |
   124  | `postgresql.enabled` | Enable PostgreSQL as a chart dependency | `true` |
   125  | `postgresql.postgresUser` | PostgreSQL User to create | `concourse` |
   126  | `postgresql.postgresPassword` | PostgreSQL Password for the new user | `concourse` |
   127  | `postgresql.postgresDatabase` | PostgreSQL Database to create | `concourse` |
   128  | `postgresql.persistence.enabled` | Enable PostgreSQL persistence using Persistent Volume Claims | `true` |
   129  | `rbac.create` | Enables creation of RBAC resources | `true` |
   130  | `rbac.apiVersion` | RBAC version | `v1beta1` |
   131  | `rbac.webServiceAccountName` | Name of the service account to use for web pods if `rbac.create` is `false` | `default` |
   132  | `rbac.workerServiceAccountName` | Name of the service account to use for workers if `rbac.create` is `false` | `default` |
   133  | `secrets.create` | Create the secret resource from the following values. *See [Secrets](#secrets)* | `true` |
   134  | `secrets.awsSsmAccessKey` | AWS Access Key ID for SSM access | `nil` |
   135  | `secrets.awsSsmSecretKey` | AWS Secret Access Key ID for SSM access | `nil` |
   136  | `secrets.awsSsmSessionToken` | AWS Session Token for SSM access | `nil` |
   137  | `secrets.cfCaCert` | CA certificate for cf auth provider | `nil` |
   138  | `secrets.cfClientId` | Client ID for cf auth provider | `nil` |
   139  | `secrets.cfClientSecret` | Client secret for cf auth provider | `nil` |
   140  | `secrets.encryptionKey` | current encryption key | `nil` |
   141  | `secrets.githubCaCert` | CA certificate for Enterprise Github OAuth | `nil` |
   142  | `secrets.githubClientId` | Application client ID for GitHub OAuth | `nil` |
   143  | `secrets.githubClientSecret` | Application client secret for GitHub OAuth | `nil` |
   144  | `secrets.gitlabClientId` | Application client ID for GitLab OAuth | `nil` |
   145  | `secrets.gitlabClientSecret` | Application client secret for GitLab OAuth | `nil` |
   146  | `secrets.hostKeyPub` | Concourse Host Public Key | *See [values.yaml](values.yaml)* |
   147  | `secrets.hostKey` | Concourse Host Private Key | *See [values.yaml](values.yaml)* |
   148  | `secrets.influxdbPassword` | Password used to authenticate with influxdb | `nil` |
   149  | `secrets.localUsers` | Create Concourse local users. Default username and password are `test:test` | *See [values.yaml](values.yaml)* |
   150  | `secrets.oauthCaCert` | CA certificate for Generic OAuth | `nil` |
   151  | `secrets.oauthClientId` | Application client ID for Generic OAuth | `nil` |
   152  | `secrets.oauthClientSecret` | Application client secret for Generic OAuth | `nil` |
   153  | `secrets.oidcCaCert` | CA certificate for OIDC Oauth | `nil` |
   154  | `secrets.oidcClientId` | Application client ID for OIDC OAuth | `nil` |
   155  | `secrets.oidcClientSecret` | Application client secret for OIDC OAuth | `nil` |
   156  | `secrets.oldEncryptionKey` | old encryption key, used for key rotation | `nil` |
   157  | `secrets.postgresqlCaCert` | PostgreSQL CA certificate | `nil` |
   158  | `secrets.postgresqlClientCert` | PostgreSQL Client certificate | `nil` |
   159  | `secrets.postgresqlClientKey` | PostgreSQL Client key | `nil` |
   160  | `secrets.postgresqlPassword` | PostgreSQL User Password | `nil` |
   161  | `secrets.postgresqlUser` | PostgreSQL User Name | `nil` |
   162  | `secrets.sessionSigningKey` | Concourse Session Signing Private Key | *See [values.yaml](values.yaml)* |
   163  | `secrets.syslogCaCert` | SSL certificate to verify Syslog server | `nil` |
   164  | `secrets.vaultAuthParam` | Parameter to pass when logging in via the auth backend | `nil` |
   165  | `secrets.vaultCaCert` | CA certificate used to verify the Vault server SSL cert | `nil` |
   166  | `secrets.vaultClientCert` | Vault Client Certificate | `nil` |
   167  | `secrets.vaultClientKey` | Vault Client Key | `nil` |
   168  | `secrets.vaultClientToken` | Vault periodic client token | `nil` |
   169  | `secrets.webTlsCert` | TLS certificate for the web component to terminate TLS connections | `nil` |
   170  | `secrets.webTlsKey` | An RSA private key, used to encrypt HTTPS traffic  | `nil` |
   171  | `secrets.workerKeyPub` | Concourse Worker Public Key | *See [values.yaml](values.yaml)* |
   172  | `secrets.workerKey` | Concourse Worker Private Key | *See [values.yaml](values.yaml)* |
   173  
   174  For configurable Concourse parameters, refer to the `concourse` section of [values.yaml](values.yaml). All parameters under this section map directly to flags of the `concourse` binary. For example, to configure the Concourse external URL, set `concourse.web.externalUrl`, which is equivalent to running `concourse web --external-url`. For sub-sections that have an `enabled` flag, set `enabled` to `true` to use the other parameters within that section.
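
As a sketch, the external URL case above would look like this in a custom `values.yaml` (the URL is a placeholder):

```yaml
concourse:
  web:
    ## Equivalent to `concourse web --external-url`.
    externalUrl: https://concourse.example.com
```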
   175  
   176  Specify each parameter using the `--set key=value[,key=value]` argument to `helm install`.
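
For example, to install with a custom number of workers:

```console
$ helm install --name my-release --set worker.replicas=3 stable/concourse
```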
   177  
   178  Alternatively, a YAML file that specifies the values for the parameters can be provided while installing the chart. For example,
   179  
   180  ```console
   181  $ helm install --name my-release -f values.yaml stable/concourse
   182  ```
   183  
   184  > **Tip**: You can use the default [values.yaml](values.yaml)
   185  
   186  ### Secrets
   187  
   188  For your convenience, this chart provides some default values for secrets, but it is recommended that you generate and manage these secrets outside the Helm chart. To do this, set `secrets.create` to `false`, create a file for each secret value, and turn them all into a Kubernetes secret. Be careful not to introduce trailing newline characters; following the steps below ensures none end up in your secrets. First, create the mandatory secret values:
   189  
   190  ```console
   191  mkdir concourse-secrets
   192  cd concourse-secrets
   193  ssh-keygen -t rsa -f host-key  -N ''
   194  mv host-key.pub host-key-pub
   195  ssh-keygen -t rsa -f worker-key  -N ''
   196  mv worker-key.pub worker-key-pub
   197  ssh-keygen -t rsa -f session-signing-key  -N ''
   198  rm session-signing-key.pub
   199  printf "%s:%s" "concourse" "$(openssl rand -base64 24)" > local-users
   200  ```
   201  
   202  You'll also need to create/copy secret values for optional features. See [templates/secrets.yaml](templates/secrets.yaml) for possible values. In the example below, we are not using the [PostgreSQL](#postgresql) chart dependency, and so we must set `postgresql-user` and `postgresql-password` secrets.
   203  
   204  ```console
   205  # copy a postgres user to clipboard and paste it to a file
   206  printf "%s" "$(pbpaste)" > postgresql-user
   207  # copy a postgres password to clipboard and paste it to a file
   208  printf "%s" "$(pbpaste)" > postgresql-password
   209  
   210  # copy Github client id and secrets to clipboard and paste to files
   211  printf "%s" "$(pbpaste)" > github-client-id
   212  printf "%s" "$(pbpaste)" > github-client-secret
   213  
   214  # set an encryption key for DB encryption at rest
   215  printf "%s" "$(openssl rand -base64 24)" > encryption-key
   216  ```
   217  
   218  Then create a secret called `[release-name]-concourse` from all the secret value files in the current folder:
   219  
   220  ```console
   221  kubectl create secret generic my-release-concourse --from-file=.
   222  ```
   223  
   224  Make sure you clean up after yourself: once the Kubernetes secret has been created, delete the local plaintext secret files.
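
For example (a minimal sketch, assuming the `concourse-secrets` folder created earlier sits under the current directory):

```shell
# Remove the local plaintext secret material once the Kubernetes secret exists.
rm -rf concourse-secrets
```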
   225  
   226  ### Persistence
   227  
   228  This chart mounts a Persistent Volume for each Concourse Worker. The volume is created using dynamic volume provisioning. If you want to disable it or change the persistence properties, update the `persistence` section of your custom `values.yaml` file:
   229  
   230  ```yaml
   231  ## Persistent Volume Storage configuration.
   232  ## ref: https://kubernetes.io/docs/user-guide/persistent-volumes
   233  ##
   234  persistence:
   235    ## Enable persistence using Persistent Volume Claims.
   236    ##
   237    enabled: true
   238  
   239    ## Worker Persistence configuration.
   240    ##
   241    worker:
   242      ## Persistent Volume Storage Class.
   243      ##
   244      storageClass: generic
   245  
   246      ## Persistent Volume Access Mode.
   247      ##
   248      accessMode: ReadWriteOnce
   249  
   250      ## Persistent Volume Storage Size.
   251      ##
   252      size: "20Gi"
   253  ```
   254  
   255  It is highly recommended to use Persistent Volumes for Concourse Workers; otherwise, the container images managed by the Worker are stored in an `emptyDir` volume on the node's disk. This interferes with Kubernetes image garbage collection, and the node's disk will fill up as a result. This is expected to be fixed in a future release of Kubernetes: https://github.com/kubernetes/kubernetes/pull/57020
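
If you do disable persistence, the `worker.emptyDirSize` value from the table above can cap the size of that `emptyDir` volume; a sketch of such a `values.yaml`:

```yaml
persistence:
  ## Disable Persistent Volume Claims for the workers.
  enabled: false

worker:
  ## Cap the emptyDir volume that replaces the Persistent Volume.
  emptyDirSize: 20Gi
```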
   256  
   257  ### Ingress TLS
   258  
   259  If your cluster allows automatic creation/retrieval of TLS certificates (e.g. [kube-lego](https://github.com/jetstack/kube-lego)), please refer to the documentation for that mechanism.
   260  
   261  To manually configure TLS, first create/retrieve a key & certificate pair for the address(es) you wish to protect. Then create a TLS secret in the namespace:
   262  
   263  ```console
   264  kubectl create secret tls concourse-web-tls --cert=path/to/tls.cert --key=path/to/tls.key
   265  ```
   266  
   267  Include the secret's name, along with the desired hostnames, in the `web.ingress.tls` section of your custom `values.yaml` file:
   268  
   269  ```yaml
   270  ## Configuration values for Concourse Web components.
   271  ##
   272  web:
   273    ## Ingress configuration.
   274    ## ref: https://kubernetes.io/docs/user-guide/ingress/
   275    ##
   276    ingress:
   277      ## Enable ingress.
   278      ##
   279      enabled: true
   280  
   281      ## Hostnames.
   282      ## Must be provided if Ingress is enabled.
   283      ##
   284      hosts:
   285        - concourse.domain.com
   286  
   287      ## TLS configuration.
   288      ## Secrets must be manually created in the namespace.
   289      ##
   290      tls:
   291        - secretName: concourse-web-tls
   292          hosts:
   293            - concourse.domain.com
   294  ```
   295  
   296  ### PostgreSQL
   297  
   298  By default, this chart uses a PostgreSQL database deployed as a chart dependency, with default values for username, password, and database name. These can be modified by setting the `postgresql.*` values.
   299  
   300  You can also bring your own PostgreSQL. To do so, set `postgresql.enabled` to false, and then configure Concourse's `postgres` values (`concourse.web.postgres.*`).
   301  
   302  Note that some values get set in the form of secrets, like `postgresql-user`, `postgresql-password`, and others (see [templates/secrets.yaml](templates/secrets.yaml) for possible values and the [secrets section](#secrets) on this README for guidance on how to set those secrets).
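
A sketch of an external-database setup, assuming the `concourse.web.postgres.*` keys mirror the `concourse web --postgres-*` flags (the host and database names are placeholders; the user and password belong in the `postgresql-user` and `postgresql-password` secrets):

```yaml
postgresql:
  ## Disable the PostgreSQL chart dependency.
  enabled: false

concourse:
  web:
    postgres:
      host: my-external-postgres.example.com
      database: concourse
```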
   303  
   304  
   305  ### Credential Management
   306  
   307  Pipelines usually need credentials to access external resources. Concourse supports the use of a [Credential Manager](https://concourse-ci.org/creds.html) so your pipelines can contain references to secrets instead of the actual secret values. Only one credential manager can be used at a time.
   308  
   309  #### Kubernetes Secrets
   310  
   311  By default, this chart uses Kubernetes Secrets as a credential manager. 
   312  
   313  For a given Concourse *team*, a pipeline looks for secrets in a namespace named `[namespacePrefix][teamName]`. The namespace prefix is the release name followed by a hyphen by default, and can be overridden with the value `concourse.web.kubernetes.namespacePrefix`. Each team listed under `concourse.web.kubernetes.teams` will have a namespace created for it, and the namespace remains after deletion of the release unless you set `concourse.web.kubernetes.keepNamespace` to `false`. By default, a namespace will be created for the `main` team.
   314  
   315  The service account used by Concourse must have `get` access to secrets in that namespace. When `rbac.create` is true, this access is granted for each team listed under `concourse.web.kubernetes.teams`.
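
A sketch of configuring teams in a custom `values.yaml` (the team names are illustrative):

```yaml
concourse:
  web:
    kubernetes:
      ## A namespace (and, when rbac.create is true, secret read access)
      ## is created for each team listed here.
      teams:
        - main
        - accounting-dev

      ## Delete the created namespaces along with the release.
      keepNamespace: false
```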
   316  
   317  Here are some examples of the lookup heuristics, given release name `concourse`:
   318  
   319  In team `accounting-dev`, pipeline `my-app`, the expression `((api-key))` resolves to:
   320  
   321  1. the secret value in namespace: `concourse-accounting-dev` secret: `my-app.api-key`, key: `value`
   322  2. and if not found, the value in namespace: `concourse-accounting-dev` secret: `api-key`, key: `value`
   323  
   324  In team `accounting-dev`, pipeline `my-app`, the expression `((common-secrets.api-key))` resolves to:
   325  
   326  1. the secret value in namespace: `concourse-accounting-dev` secret: `my-app.common-secrets`, key: `api-key`
   327  2. and if not found, the value in namespace: `concourse-accounting-dev` secret: `common-secrets`, key: `api-key`
   328  
   329  Be mindful of your team and pipeline names, to ensure they can be used in namespace and secret names, e.g. no underscores.
   330  
   331  To test, create a secret in namespace `concourse-main`:
   332  
   333  ```console
   334  kubectl create secret generic hello --from-literal 'value=Hello world!'
   335  ```
   336  
   337  Then `fly set-pipeline` with the following pipeline, and trigger it:
   338  
   339  ```yaml
   340  jobs:
   341  - name: hello-world
   342    plan:
   343    - task: say-hello
   344      config:
   345        platform: linux
   346        image_resource:
   347          type: docker-image
   348          source: {repository: alpine}
   349        params:
   350          HELLO: ((hello))
   351        run:
   352          path: /bin/sh
   353          args: ["-c", "echo $HELLO"]
   354  ```
   355  
   356  #### HashiCorp Vault
   357  
   358  To use Vault, set `concourse.web.kubernetes.enabled` to false, and set the following values:
   359  
   360  
   361  ```yaml
   362  ## Configuration values for the Credential Manager.
   363  ## ref: https://concourse-ci.org/creds.html
   364  ##
   365  concourse:
   366    web:
   367      vault:
   368        ## Use Hashicorp Vault for the Credential Manager.
   369        ##
   370        enabled: true
   371  
   372        ## URL pointing to the Vault server (e.g. http://vault:8200).
   373        ##
   374        # url:
   375  
   376        ## vault path under which to namespace credential lookup, defaults to /concourse.
   377        ##
   378        # pathPrefix:
   379  ```
   380  
   381  #### AWS Systems Manager Parameter Store (SSM)
   382  
   383  To use SSM, set `concourse.web.kubernetes.enabled` to false, and set `concourse.web.awsSsm.enabled` to true.
   384  
   385  For a given Concourse *team*, a pipeline looks for secrets in SSM using either `/concourse/{team}/{secret}` or `/concourse/{team}/{pipeline}/{secret}`; the patterns can be overridden using the `concourse.web.awsSsm.teamSecretTemplate` and `concourse.web.awsSsm.pipelineSecretTemplate` settings.
   386  
   387  Concourse requires AWS credentials which are able to read from SSM for this feature to function. Credentials can be set in the `secrets.awsSsm*` settings; if your cluster is running in a different AWS region, you may also need to set `concourse.web.awsSsm.region`.
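
As an illustration, a secret for team `main` could be written to Parameter Store like this (the parameter name follows the default `/concourse/{team}/{secret}` pattern; the value is a placeholder):

```console
$ aws ssm put-parameter --name /concourse/main/api-key --type SecureString --value 'some-secret-value'
```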
   388  
   389  The minimum IAM policy you need to use SSM with Concourse is:
   390  
   391  ```json
   392  {
   393    "Version": "2012-10-17",
   394    "Statement": [
   395      {
   396        "Action": "kms:Decrypt",
   397        "Resource": "<kms-key-arn>",
   398        "Effect": "Allow"
   399      },
   400      {
   401        "Action": "ssm:GetParameter*",
   402        "Resource": "<...arn...>:parameter/concourse/*",
   403        "Effect": "Allow"
   404      }
   405    ]
   406  }
   407  ```
   408  
   409  Where `<kms-key-arn>` is the ARN of the KMS key used to encrypt the secrets in Parameter Store, and the `<...arn...>` should be replaced with a correct ARN for your account and region's Parameter Store.
   410  
   411  #### AWS Secrets Manager
   412  
   413  To use Secrets Manager, set `concourse.web.kubernetes.enabled` to false, and set `concourse.web.awsSecretsManager.enabled` to true.
   414  
   415  For a given Concourse *team*, a pipeline looks for secrets in Secrets Manager using either `/concourse/{team}/{secret}` or `/concourse/{team}/{pipeline}/{secret}`; the patterns can be overridden using the `concourse.web.awsSecretsManager.teamSecretTemplate` and `concourse.web.awsSecretsManager.pipelineSecretTemplate` settings.
   416  
   417  Concourse requires AWS credentials which are able to read from Secrets Manager for this feature to function. Credentials can be set in the `secrets.awsSecretsmanager*` settings; if your cluster is running in a different AWS region, you may also need to set `concourse.web.awsSecretsManager.region`.
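
Similarly, a secret for team `main` could be created in Secrets Manager (the name follows the default `/concourse/{team}/{secret}` pattern; the value is a placeholder):

```console
$ aws secretsmanager create-secret --name /concourse/main/api-key --secret-string 'some-secret-value'
```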
   418  
   419  The minimum IAM policy you need to use Secrets Manager with Concourse is:
   420  
   421  ```json
   422  {
   423    "Version": "2012-10-17",
   424    "Statement": [
   425      {
   426        "Sid": "AllowAccessToSecretManagerParameters",
   427        "Effect": "Allow",
   428        "Action": [
   429          "secretsmanager:ListSecrets"
   430        ],
   431        "Resource": "*"
   432      },
   433      {
   434        "Sid": "AllowAccessGetSecret",
   435        "Effect": "Allow",
   436        "Action": [
   437          "secretsmanager:GetSecretValue",
   438          "secretsmanager:DescribeSecret"
   439        ],
   440        "Resource": [
   441          "arn:aws:secretsmanager:::secret:/concourse/*"
   442        ]
   443      }
   444    ]
   445  }
   446  ```