github.com/khulnasoft-lab/kube-bench@v0.2.1-0.20240330183753-9df52345ae58/cfg/gke-1.0/managedservices.yaml

     1  ---
     2  controls:
     3  version: "gke-1.0"
     4  id: 6
     5  text: "Managed Services"
     6  type: "managedservices"
     7  groups:
     8    - id: 6.1
     9      text: "Image Registry and Image Scanning"
    10      checks:
    11        - id: 6.1.1
    12          text: "Ensure Image Vulnerability Scanning using GCR Container Analysis
    13          or a third-party provider (Scored)"
    14          type: "manual"
    15          remediation: |
    16            Using Command Line:
    17  
    18              gcloud services enable containerscanning.googleapis.com
    19          scored: true
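                 # Illustrative note for 6.1.1 (not part of the CIS text): to verify whether the
                 # Container Scanning API is already enabled for the project, one could run, for example:
                 #
                 #   gcloud services list --enabled --filter="name:containerscanning.googleapis.com"
                 #
                 # An empty result suggests the API still needs to be enabled as described above.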
    20  
    21        - id: 6.1.2
    22          text: "Minimize user access to GCR (Scored)"
    23          type: "manual"
    24          remediation: |
    25            Using Command Line:
    26              To change roles at the GCR bucket level:
    27              Firstly, run the following if read permissions are required:
    28  
     29                gsutil iam ch [TYPE]:[EMAIL-ADDRESS]:objectViewer \
    30                gs://artifacts.[PROJECT_ID].appspot.com
    31  
    32              Then remove the excessively privileged role (Storage Admin / Storage Object Admin /
    33              Storage Object Creator) using:
    34  
     35                gsutil iam ch -d [TYPE]:[EMAIL-ADDRESS]:[ROLE] \
    36                gs://artifacts.[PROJECT_ID].appspot.com
    37  
    38              where:
    39                [TYPE] can be one of the following:
     40                      - user, if the [EMAIL-ADDRESS] is a Google account
     41                      - serviceAccount, if [EMAIL-ADDRESS] specifies a Service account
     42                [EMAIL-ADDRESS] can be one of the following:
     43                      - a Google account (for example, someone@example.com)
     44                      - a Cloud IAM service account
     45              To modify roles defined at the project level and subsequently inherited within the GCR
     46              bucket, or the Service Account User role, extract the IAM policy file, modify it accordingly,
     47              and apply it using:
    48  
    49                gcloud projects set-iam-policy [PROJECT_ID] [POLICY_FILE]
    50          scored: true
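                 # Illustrative note for 6.1.2 (not part of the CIS text): before changing roles it can
                 # help to review the current IAM bindings on the GCR bucket, for example with:
                 #
                 #   gsutil iam get gs://artifacts.[PROJECT_ID].appspot.com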
    51  
    52        - id: 6.1.3
    53          text: "Minimize cluster access to read-only for GCR (Scored)"
    54          type: "manual"
    55          remediation: |
    56            Using Command Line:
    57              For an account explicitly granted to the bucket. First, add read access to the Kubernetes
    58              Service Account
    59  
     60                gsutil iam ch [TYPE]:[EMAIL-ADDRESS]:objectViewer \
    61                gs://artifacts.[PROJECT_ID].appspot.com
    62  
    63                where:
    64                [TYPE] can be one of the following:
     65                        - user, if the [EMAIL-ADDRESS] is a Google account
     66                        - serviceAccount, if [EMAIL-ADDRESS] specifies a Service account
     67                [EMAIL-ADDRESS] can be one of the following:
     68                        - a Google account (for example, someone@example.com)
     69                        - a Cloud IAM service account
    70  
    71                Then remove the excessively privileged role (Storage Admin / Storage Object Admin /
    72                Storage Object Creator) using:
    73  
     74                  gsutil iam ch -d [TYPE]:[EMAIL-ADDRESS]:[ROLE] \
    75                  gs://artifacts.[PROJECT_ID].appspot.com
    76  
    77                For an account that inherits access to the GCR Bucket through Project level permissions,
    78                modify the Projects IAM policy file accordingly, then upload it using:
    79  
    80                  gcloud projects set-iam-policy [PROJECT_ID] [POLICY_FILE]
    81          scored: true
    82  
    83        - id: 6.1.4
    84          text: "Minimize Container Registries to only those approved (Not Scored)"
    85          type: "manual"
    86          remediation: |
    87            Using Command Line:
    88              First, update the cluster to enable Binary Authorization:
    89  
     90                gcloud container clusters update [CLUSTER_NAME] \
    91                  --enable-binauthz
    92  
    93              Create a Binary Authorization Policy using the Binary Authorization Policy Reference
    94              (https://cloud.google.com/binary-authorization/docs/policy-yaml-reference) for guidance.
    95              Import the policy file into Binary Authorization:
    96  
    97                gcloud container binauthz policy import [YAML_POLICY]
    98          scored: false
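                 # Illustrative sketch for 6.1.4 (not part of the CIS text): a minimal Binary Authorization
                 # policy file that only admits images from an approved registry might look like the
                 # following, where gcr.io/[PROJECT_ID]/* is a placeholder for your approved registry path:
                 #
                 #   admissionWhitelistPatterns:
                 #   - namePattern: gcr.io/[PROJECT_ID]/*
                 #   defaultAdmissionRule:
                 #     evaluationMode: ALWAYS_DENY
                 #     enforcementMode: ENFORCED_BLOCK_AND_AUDIT_LOG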
    99  
   100    - id: 6.2
   101      text: "Identity and Access Management (IAM)"
   102      checks:
   103        - id: 6.2.1
   104          text: "Ensure GKE clusters are not running using the Compute Engine
   105          default service account (Scored)"
   106          type: "manual"
   107          remediation: |
   108            Using Command Line:
   109              Firstly, create a minimally privileged service account:
   110  
   111                gcloud iam service-accounts create [SA_NAME] \
   112                  --display-name "GKE Node Service Account"
   113                export NODE_SA_EMAIL=`gcloud iam service-accounts list \
   114                  --format='value(email)' \
   115                  --filter='displayName:GKE Node Service Account'`
   116  
   117              Grant the following roles to the service account:
   118  
   119                export PROJECT_ID=`gcloud config get-value project`
   120                gcloud projects add-iam-policy-binding $PROJECT_ID \
   121                  --member serviceAccount:$NODE_SA_EMAIL \
   122                  --role roles/monitoring.metricWriter
   123                gcloud projects add-iam-policy-binding $PROJECT_ID \
   124                  --member serviceAccount:$NODE_SA_EMAIL \
   125                  --role roles/monitoring.viewer
   126                gcloud projects add-iam-policy-binding $PROJECT_ID \
   127                  --member serviceAccount:$NODE_SA_EMAIL \
   128                  --role roles/logging.logWriter
   129  
   130              To create a new Node pool using the Service account, run the following command:
   131  
   132                gcloud container node-pools create [NODE_POOL] \
   133                  --service-account=[SA_NAME]@[PROJECT_ID].iam.gserviceaccount.com \
   134                  --cluster=[CLUSTER_NAME] --zone [COMPUTE_ZONE]
   135  
   136              You will need to migrate your workloads to the new Node pool, and delete Node pools that
   137              use the default service account to complete the remediation.
   138          scored: true
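                 # Illustrative note for 6.2.1 (not part of the CIS text): to see which service account an
                 # existing Node pool uses, one could run, for example:
                 #
                 #   gcloud container node-pools describe [NODE_POOL] --cluster [CLUSTER_NAME] \
                 #     --zone [COMPUTE_ZONE] --format='value(config.serviceAccount)'
                 #
                 # A value of "default" indicates the Compute Engine default service account.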
   139  
   140        - id: 6.2.2
   141          text: "Prefer using dedicated GCP Service Accounts and Workload Identity (Not Scored)"
   142          type: "manual"
   143          remediation: |
   144            Using Command Line:
   145  
   146                gcloud beta container clusters update [CLUSTER_NAME] --zone [CLUSTER_ZONE] \
   147                  --identity-namespace=[PROJECT_ID].svc.id.goog
   148  
    149              Note that existing Node pools are unaffected. New Node pools default to
    150              --workload-metadata-from-node=GKE_METADATA_SERVER.
   151  
   152              Then, modify existing Node pools to enable GKE_METADATA_SERVER:
   153  
   154                gcloud beta container node-pools update [NODEPOOL_NAME] \
   155                  --cluster=[CLUSTER_NAME] --zone [CLUSTER_ZONE] \
   156                  --workload-metadata-from-node=GKE_METADATA_SERVER
   157  
   158              You may also need to modify workloads in order for them to use Workload Identity as
    159              described at https://cloud.google.com/kubernetes-engine/docs/how-to/workload-identity.
    160              Also consider the effects on the availability of your hosted workloads as Node pools are
    161              updated; it may be more appropriate to create new Node pools.
   162          scored: false
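                 # Illustrative sketch for 6.2.2 (not part of the CIS text): with Workload Identity enabled,
                 # a dedicated GCP service account is typically linked to a Kubernetes ServiceAccount roughly
                 # as follows ([GSA_NAME], [KSA_NAME] and [NAMESPACE] are placeholders):
                 #
                 #   gcloud iam service-accounts add-iam-policy-binding \
                 #     [GSA_NAME]@[PROJECT_ID].iam.gserviceaccount.com \
                 #     --role roles/iam.workloadIdentityUser \
                 #     --member "serviceAccount:[PROJECT_ID].svc.id.goog[[NAMESPACE]/[KSA_NAME]]"
                 #   kubectl annotate serviceaccount [KSA_NAME] --namespace [NAMESPACE] \
                 #     iam.gke.io/gcp-service-account=[GSA_NAME]@[PROJECT_ID].iam.gserviceaccount.com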
   163  
   164    - id: 6.3
   165      text: "Cloud Key Management Service (Cloud KMS)"
   166      checks:
   167        - id: 6.3.1
   168          text: "Ensure Kubernetes Secrets are encrypted using keys managed in Cloud KMS (Scored)"
   169          type: "manual"
   170          remediation: |
   171            Using Command Line:
   172              To create a key
   173  
   174              Create a key ring:
   175  
   176                gcloud kms keyrings create [RING_NAME] \
   177                  --location [LOCATION] \
   178                  --project [KEY_PROJECT_ID]
   179  
   180              Create a key:
   181  
   182                gcloud kms keys create [KEY_NAME] \
   183                  --location [LOCATION] \
   184                  --keyring [RING_NAME] \
   185                  --purpose encryption \
   186                  --project [KEY_PROJECT_ID]
   187  
   188              Grant the Kubernetes Engine Service Agent service account the Cloud KMS CryptoKey
   189              Encrypter/Decrypter role:
   190  
   191                gcloud kms keys add-iam-policy-binding [KEY_NAME] \
   192                  --location [LOCATION] \
   193                  --keyring [RING_NAME] \
   194                  --member serviceAccount:[SERVICE_ACCOUNT_NAME] \
   195                  --role roles/cloudkms.cryptoKeyEncrypterDecrypter \
   196                  --project [KEY_PROJECT_ID]
   197  
   198              To create a new cluster with Application-layer Secrets Encryption:
   199  
   200                gcloud container clusters create [CLUSTER_NAME] \
   201                  --cluster-version=latest \
   202                  --zone [ZONE] \
    203                  --database-encryption-key projects/[KEY_PROJECT_ID]/locations/[LOCATION]/keyRings/[RING_NAME]/cryptoKeys/[KEY_NAME] \
   204                  --project [CLUSTER_PROJECT_ID]
   205  
   206              To enable on an existing cluster:
   207  
   208                gcloud container clusters update [CLUSTER_NAME] \
   209                  --zone [ZONE] \
    210                  --database-encryption-key projects/[KEY_PROJECT_ID]/locations/[LOCATION]/keyRings/[RING_NAME]/cryptoKeys/[KEY_NAME] \
   211                  --project [CLUSTER_PROJECT_ID]
   212          scored: true
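                 # Illustrative note for 6.3.1 (not part of the CIS text): to confirm Application-layer
                 # Secrets Encryption on an existing cluster, one could inspect the databaseEncryption
                 # field, for example:
                 #
                 #   gcloud container clusters describe [CLUSTER_NAME] --zone [ZONE] \
                 #     --format json | jq '.databaseEncryption'
                 #
                 # A state of "ENCRYPTED" together with the expected key name indicates it is enabled.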
   213  
   214    - id: 6.4
   215      text: "Node Metadata"
   216      checks:
   217        - id: 6.4.1
   218          text: "Ensure legacy Compute Engine instance metadata APIs are Disabled (Scored)"
   219          type: "manual"
   220          remediation: |
   221            Using Command Line:
   222              To update an existing cluster, create a new Node pool with the legacy GCE metadata
   223              endpoint disabled:
   224  
   225                gcloud container node-pools create [POOL_NAME] \
   226                  --metadata disable-legacy-endpoints=true \
   227                  --cluster [CLUSTER_NAME] \
   228                  --zone [COMPUTE_ZONE]
   229  
   230              You will need to migrate workloads from any existing non-conforming Node pools, to the
   231              new Node pool, then delete non-conforming Node pools to complete the remediation.
   232          scored: true
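                 # Illustrative note for 6.4.1 (not part of the CIS text): to check whether a Node pool
                 # already has the legacy metadata endpoints disabled, one could run, for example:
                 #
                 #   gcloud container node-pools describe [POOL_NAME] --cluster [CLUSTER_NAME] \
                 #     --zone [COMPUTE_ZONE] --format json | jq '.config.metadata'
                 #
                 # The output should include "disable-legacy-endpoints": "true".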
   233  
   234        - id: 6.4.2
   235          text: "Ensure the GKE Metadata Server is Enabled (Not Scored)"
   236          type: "manual"
   237          remediation: |
   238            Using Command Line:
   239                gcloud beta container clusters update [CLUSTER_NAME] \
   240                  --identity-namespace=[PROJECT_ID].svc.id.goog
    241              Note that existing Node pools are unaffected. New Node pools default to
    242              --workload-metadata-from-node=GKE_METADATA_SERVER.
   243  
   244              To modify an existing Node pool to enable GKE Metadata Server:
   245  
   246                gcloud beta container node-pools update [NODEPOOL_NAME] \
   247                  --cluster=[CLUSTER_NAME] \
   248                  --workload-metadata-from-node=GKE_METADATA_SERVER
   249  
   250              You may also need to modify workloads in order for them to use Workload Identity as
    251              described at
    252              https://cloud.google.com/kubernetes-engine/docs/how-to/workload-identity.
   253          scored: false
   254  
   255    - id: 6.5
   256      text: "Node Configuration and Maintenance"
   257      checks:
   258        - id: 6.5.1
   259          text: "Ensure Container-Optimized OS (COS) is used for GKE node images (Scored)"
   260          type: "manual"
   261          remediation: |
   262            Using Command Line:
   263              To set the node image to cos for an existing cluster's Node pool:
   264  
    265                gcloud container clusters upgrade [CLUSTER_NAME] \
   266                  --image-type cos \
   267                  --zone [COMPUTE_ZONE] --node-pool [POOL_NAME]
   268          scored: true
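                 # Illustrative note for 6.5.1 (not part of the CIS text): the image type of an existing
                 # Node pool can be checked with, for example:
                 #
                 #   gcloud container node-pools describe [POOL_NAME] --cluster [CLUSTER_NAME] \
                 #     --zone [COMPUTE_ZONE] --format='value(config.imageType)'
                 #
                 # A value of COS indicates Container-Optimized OS.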
   269  
   270        - id: 6.5.2
   271          text: "Ensure Node Auto-Repair is enabled for GKE nodes (Scored)"
   272          type: "manual"
   273          remediation: |
   274            Using Command Line:
   275              To enable node auto-repair for an existing cluster with Node pool, run the following
   276              command:
   277  
   278                gcloud container node-pools update [POOL_NAME] \
   279                  --cluster [CLUSTER_NAME] --zone [COMPUTE_ZONE] \
   280                  --enable-autorepair
   281          scored: true
   282  
   283        - id: 6.5.3
   284          text: "Ensure Node Auto-Upgrade is enabled for GKE nodes (Scored)"
   285          type: "manual"
   286          remediation: |
   287            Using Command Line:
   288              To enable node auto-upgrade for an existing cluster's Node pool, run the following
   289              command:
   290  
   291                gcloud container node-pools update [NODE_POOL] \
   292                  --cluster [CLUSTER_NAME] --zone [COMPUTE_ZONE] \
   293                  --enable-autoupgrade
   294          scored: true
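                 # Illustrative note for 6.5.2 and 6.5.3 (not part of the CIS text): auto-repair and
                 # auto-upgrade can be verified on an existing Node pool with, for example:
                 #
                 #   gcloud container node-pools describe [NODE_POOL] --cluster [CLUSTER_NAME] \
                 #     --zone [COMPUTE_ZONE] --format json | jq '.management'
                 #
                 # Both "autoRepair" and "autoUpgrade" should be reported as true.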
   295  
   296        - id: 6.5.4
   297          text: "Automate GKE version management using Release Channels (Not Scored)"
   298          type: "manual"
   299          remediation: |
   300            Using Command Line:
   301              Create a new cluster by running the following command:
   302  
   303                gcloud beta container clusters create [CLUSTER_NAME] \
   304                  --zone [COMPUTE_ZONE] \
   305                  --release-channel [RELEASE_CHANNEL]
   306  
   307              where [RELEASE_CHANNEL] is stable or regular according to your needs.
   308          scored: false
   309  
    310        - id: 6.5.5
    311          text: "Ensure Shielded GKE Nodes are Enabled (Not Scored)"
    312          type: "manual"
    313          remediation: |
    314            Using Command Line:
    315              To migrate an existing cluster, you will need to specify the --enable-shielded-nodes flag
    316              on a cluster update command:
    317  
    318                gcloud beta container clusters update [CLUSTER_NAME] \
    319                  --zone [CLUSTER_ZONE] \
    320                  --enable-shielded-nodes
    321          scored: false
    322  
    323        - id: 6.5.6
    324          text: "Ensure Integrity Monitoring for Shielded GKE Nodes is Enabled (Not Scored)"
    325          type: "manual"
    326          remediation: |
    327            Using Command Line:
    328              To create a Node pool within the cluster with Integrity Monitoring enabled, run the
    329              following command:
    330  
    331                gcloud beta container node-pools create [NODEPOOL_NAME] \
    332                  --cluster [CLUSTER_NAME] --zone [COMPUTE_ZONE] \
    333                  --shielded-integrity-monitoring
    334  
    335              You will also need to migrate workloads from existing non-conforming Node pools to the
    336              newly created Node pool, then delete the non-conforming pools.
    337          scored: false
   338  
   339        - id: 6.5.7
   340          text: "Ensure Secure Boot for Shielded GKE Nodes is Enabled (Not Scored)"
   341          type: "manual"
   342          remediation: |
   343            Using Command Line:
   344              To create a Node pool within the cluster with Secure Boot enabled, run the following
   345              command:
   346  
   347                gcloud beta container node-pools create [NODEPOOL_NAME] \
   348                  --cluster [CLUSTER_NAME] --zone [COMPUTE_ZONE] \
   349                  --shielded-secure-boot
   350  
   351              You will also need to migrate workloads from existing non-conforming Node pools to the
   352              newly created Node pool, then delete the non-conforming pools.
   353          scored: false
   354  
   355    - id: 6.6
   356      text: "Cluster Networking"
   357      checks:
   358        - id: 6.6.1
   359          text: "Enable VPC Flow Logs and Intranode Visibility (Not Scored)"
   360          type: "manual"
   361          remediation: |
   362            Using Command Line:
   363              To enable intranode visibility on an existing cluster, run the following command:
   364  
   365                gcloud beta container clusters update [CLUSTER_NAME] \
   366                  --enable-intra-node-visibility
   367          scored: false
   368  
   369        - id: 6.6.2
   370          text: "Ensure use of VPC-native clusters (Scored)"
   371          type: "manual"
   372          remediation: |
   373            Using Command Line:
   374              To enable Alias IP on a new cluster, run the following command:
   375  
   376                gcloud container clusters create [CLUSTER_NAME] \
   377                  --zone [COMPUTE_ZONE] \
   378                  --enable-ip-alias
   379          scored: true
   380  
   381        - id: 6.6.3
   382          text: "Ensure Master Authorized Networks is Enabled (Scored)"
   383          type: "manual"
   384          remediation: |
   385            Using Command Line:
   386              To check Master Authorized Networks status for an existing cluster, run the following
    387              command:
   388  
   389                gcloud container clusters describe [CLUSTER_NAME] \
   390                  --zone [COMPUTE_ZONE] \
   391                  --format json | jq '.masterAuthorizedNetworksConfig'
   392  
   393              The output should return
   394  
   395                {
   396                  "enabled": true
   397                }
   398  
   399              if Master Authorized Networks is enabled.
   400  
    401              If Master Authorized Networks is disabled, the above command will return
    402              null ({}).
   403          scored: true
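                 # Illustrative note for 6.6.3 (not part of the CIS text): if it is disabled, Master
                 # Authorized Networks could be enabled with a command along these lines, where [CIDR]
                 # is a placeholder for the address range allowed to reach the master:
                 #
                 #   gcloud container clusters update [CLUSTER_NAME] --zone [COMPUTE_ZONE] \
                 #     --enable-master-authorized-networks --master-authorized-networks [CIDR]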
   404  
   405        - id: 6.6.4
   406          text: "Ensure clusters are created with Private Endpoint Enabled and Public Access Disabled (Scored)"
   407          type: "manual"
   408          remediation: |
   409            Using Command Line:
   410              Create a cluster with a Private Endpoint enabled and Public Access disabled by including
   411              the --enable-private-endpoint flag within the cluster create command:
   412  
   413                gcloud container clusters create [CLUSTER_NAME] \
   414                  --enable-private-endpoint
   415  
    416              Setting this flag also requires the setting of --enable-private-nodes, --enable-ip-alias
    417              and --master-ipv4-cidr=[MASTER_CIDR_RANGE].
   418          scored: true
   419  
   420        - id: 6.6.5
   421          text: "Ensure clusters are created with Private Nodes (Scored)"
   422          type: "manual"
   423          remediation: |
   424            Using Command Line:
   425              To create a cluster with Private Nodes enabled, include the --enable-private-nodes flag
   426              within the cluster create command:
   427  
   428                gcloud container clusters create [CLUSTER_NAME] \
   429                  --enable-private-nodes
   430  
    431              Setting this flag also requires the setting of --enable-ip-alias and
    432              --master-ipv4-cidr=[MASTER_CIDR_RANGE].
   433          scored: true
   434  
   435        - id: 6.6.6
   436          text: "Consider firewalling GKE worker nodes (Not Scored)"
   437          type: "manual"
   438          remediation: |
   439            Using Command Line:
   440              Use the following command to generate firewall rules, setting the variables as appropriate.
   441              You may want to use the target [TAG] and [SERVICE_ACCOUNT] previously identified.
   442  
    443                gcloud compute firewall-rules create [FIREWALL_RULE_NAME] \
   444                  --network [NETWORK] \
   445                  --priority [PRIORITY] \
   446                  --direction [DIRECTION] \
   447                  --action [ACTION] \
   448                  --target-tags [TAG] \
   449                  --target-service-accounts [SERVICE_ACCOUNT] \
    450                  --source-ranges [SOURCE_CIDR_RANGE] \
   451                  --source-tags [SOURCE_TAGS] \
   452                  --source-service-accounts=[SOURCE_SERVICE_ACCOUNT] \
   453                  --destination-ranges [DESTINATION_CIDR_RANGE] \
   454                  --rules [RULES]
   455          scored: false
   456  
   457        - id: 6.6.7
   458          text: "Ensure Network Policy is Enabled and set as appropriate (Not Scored)"
   459          type: "manual"
   460          remediation: |
   461            Using Command Line:
   462              To enable Network Policy for an existing cluster, firstly enable the Network Policy add-on:
   463  
   464                gcloud container clusters update [CLUSTER_NAME] \
   465                  --zone [COMPUTE_ZONE] \
   466                  --update-addons NetworkPolicy=ENABLED
   467  
   468              Then, enable Network Policy:
   469  
   470                gcloud container clusters update [CLUSTER_NAME] \
   471                  --zone [COMPUTE_ZONE] \
   472                  --enable-network-policy
   473          scored: false
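                 # Illustrative sketch for 6.6.7 (not part of the CIS text): once the add-on is enabled,
                 # a minimal default-deny ingress policy for a namespace could look like the following,
                 # with [NAMESPACE] as a placeholder:
                 #
                 #   apiVersion: networking.k8s.io/v1
                 #   kind: NetworkPolicy
                 #   metadata:
                 #     name: default-deny-ingress
                 #     namespace: [NAMESPACE]
                 #   spec:
                 #     podSelector: {}
                 #     policyTypes:
                 #     - Ingress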
   474  
   475        - id: 6.6.8
   476          text: "Ensure use of Google-managed SSL Certificates (Not Scored)"
   477          type: "manual"
   478          remediation: |
   479            If services of type:LoadBalancer are discovered, consider replacing the Service with an
   480            Ingress.
   481  
   482            To configure the Ingress and use Google-managed SSL certificates, follow the instructions
   483            as listed at https://cloud.google.com/kubernetes-engine/docs/how-to/managed-certs.
   484          scored: false
   485  
   486    - id: 6.7
   487      text: "Logging"
   488      checks:
   489        - id: 6.7.1
   490          text: "Ensure Stackdriver Kubernetes Logging and Monitoring is Enabled (Scored)"
   491          type: "manual"
   492          remediation: |
   493            Using Command Line:
   494  
   495              STACKDRIVER KUBERNETES ENGINE MONITORING SUPPORT (PREFERRED):
   496              To enable Stackdriver Kubernetes Engine Monitoring for an existing cluster, run the
   497              following command:
   498  
   499                gcloud container clusters update [CLUSTER_NAME] \
   500                  --zone [COMPUTE_ZONE] \
   501                  --enable-stackdriver-kubernetes
   502  
   503              LEGACY STACKDRIVER SUPPORT:
   504              Both Logging and Monitoring support must be enabled.
   505              To enable Legacy Stackdriver Logging for an existing cluster, run the following command:
   506  
   507                gcloud container clusters update [CLUSTER_NAME] --zone [COMPUTE_ZONE] \
   508                  --logging-service logging.googleapis.com
   509  
   510              To enable Legacy Stackdriver Monitoring for an existing cluster, run the following
   511              command:
   512  
   513                gcloud container clusters update [CLUSTER_NAME] --zone [COMPUTE_ZONE] \
   514                  --monitoring-service monitoring.googleapis.com
   515          scored: true
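                 # Illustrative note for 6.7.1 (not part of the CIS text): the logging and monitoring
                 # services currently configured for a cluster can be listed with, for example:
                 #
                 #   gcloud container clusters describe [CLUSTER_NAME] --zone [COMPUTE_ZONE] \
                 #     --format='value(loggingService,monitoringService)'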
   516  
   517        - id: 6.7.2
   518          text: "Enable Linux auditd logging (Not Scored)"
   519          type: "manual"
   520          remediation: |
   521            Using Command Line:
   522              Download the example manifests:
   523  
   524                curl https://raw.githubusercontent.com/GoogleCloudPlatform/k8s-node-tools/master/os-audit/cos-auditd-logging.yaml \
   525                  > cos-auditd-logging.yaml
   526  
   527              Edit the example manifests if needed. Then, deploy them:
   528  
   529                kubectl apply -f cos-auditd-logging.yaml
   530  
   531              Verify that the logging Pods have started. If you defined a different Namespace in your
   532              manifests, replace cos-auditd with the name of the namespace you're using:
   533  
   534                kubectl get pods --namespace=cos-auditd
   535          scored: false
   536  
   537    - id: 6.8
   538      text: "Authentication and Authorization"
   539      checks:
   540        - id: 6.8.1
   541          text: "Ensure Basic Authentication using static passwords is Disabled (Scored)"
   542          type: "manual"
   543          remediation: |
   544            Using Command Line:
   545              To update an existing cluster and disable Basic Authentication by removing the static
   546              password:
   547  
   548                gcloud container clusters update [CLUSTER_NAME] \
   549                  --no-enable-basic-auth
   550          scored: true
   551  
   552        - id: 6.8.2
   553          text: "Ensure authentication using Client Certificates is Disabled (Scored)"
   554          type: "manual"
   555          remediation: |
   556            Using Command Line:
   557              Create a new cluster without a Client Certificate:
   558  
   559                gcloud container clusters create [CLUSTER_NAME] \
   560                  --no-issue-client-certificate
   561          scored: true
   562  
   563        - id: 6.8.3
   564          text: "Manage Kubernetes RBAC users with Google Groups for GKE (Not Scored)"
   565          type: "manual"
   566          remediation: |
   567            Using Command Line:
    568              Follow the G Suite Groups instructions at
    569              https://cloud.google.com/kubernetes-engine/docs/how-to/role-based-access-control#google-groups-for-gke.
   570  
   571              Then, create a cluster with
   572  
   573                gcloud beta container clusters create my-cluster \
   574                  --security-group="gke-security-groups@[yourdomain.com]"
   575  
   576              Finally create Roles, ClusterRoles, RoleBindings, and ClusterRoleBindings that
   577              reference your G Suite Groups.
   578          scored: false
   579  
   580        - id: 6.8.4
   581          text: "Ensure Legacy Authorization (ABAC) is Disabled (Scored)"
   582          type: "manual"
   583          remediation: |
   584            Using Command Line:
   585              To disable Legacy Authorization for an existing cluster, run the following command:
   586  
   587                gcloud container clusters update [CLUSTER_NAME] \
   588                  --zone [COMPUTE_ZONE] \
   589                  --no-enable-legacy-authorization
   590          scored: true
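                 # Illustrative note for 6.8.4 (not part of the CIS text): whether legacy ABAC is still
                 # enabled can be checked with, for example:
                 #
                 #   gcloud container clusters describe [CLUSTER_NAME] --zone [COMPUTE_ZONE] \
                 #     --format json | jq '.legacyAbac'
                 #
                 # An empty object (or "enabled" absent or false) indicates ABAC is disabled.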
   591  
   592    - id: 6.9
   593      text: "Storage"
   594      checks:
   595        - id: 6.9.1
   596          text: "Enable Customer-Managed Encryption Keys (CMEK) for GKE Persistent Disks (PD) (Not Scored)"
   597          type: "manual"
   598          remediation: |
   599            Using Command Line:
   600              FOR NODE BOOT DISKS:
    601              Create a new node pool using customer-managed encryption keys for the node boot disk, where
    602              [DISK_TYPE] is either pd-standard or pd-ssd:
   603  
    604                gcloud beta container node-pools create [NODE_POOL_NAME] \
   605                  --disk-type [DISK_TYPE] \
   606                  --boot-disk-kms-key \
   607                  projects/[KEY_PROJECT_ID]/locations/[LOCATION]/keyRings/[RING_NAME]/cryptoKeys/[KEY_NAME]
   608  
    609              Create a cluster using customer-managed encryption keys for the node boot disk, where
    610              [DISK_TYPE] is either pd-standard or pd-ssd:
   611  
   612                gcloud beta container clusters create [CLUSTER_NAME] \
   613                  --disk-type [DISK_TYPE] \
   614                  --boot-disk-kms-key \
   615                  projects/[KEY_PROJECT_ID]/locations/[LOCATION]/keyRings/[RING_NAME]/cryptoKeys/[KEY_NAME]
   616  
   617              FOR ATTACHED DISKS:
    618              Follow the instructions detailed at
    619              https://cloud.google.com/kubernetes-engine/docs/how-to/using-cmek.
   620          scored: false
   621  
   622    - id: 6.10
   623      text: "Other Cluster Configurations"
   624      checks:
   625        - id: 6.10.1
   626          text: "Ensure Kubernetes Web UI is Disabled (Scored)"
   627          type: "manual"
   628          remediation: |
   629            Using Command Line:
   630              To disable the Kubernetes Dashboard on an existing cluster, run the following command:
   631  
   632                gcloud container clusters update [CLUSTER_NAME] \
   633                  --zone [ZONE] \
   634                  --update-addons=KubernetesDashboard=DISABLED
   635          scored: true
   636  
   637        - id: 6.10.2
   638          text: "Ensure that Alpha clusters are not used for production workloads (Scored)"
   639          type: "manual"
   640          remediation: |
   641            Using Command Line:
    642              Upon creating a new cluster:
   643  
   644                gcloud container clusters create [CLUSTER_NAME] \
   645                  --zone [COMPUTE_ZONE]
   646  
   647              Do not use the --enable-kubernetes-alpha argument.
   648          scored: true
   649  
   650        - id: 6.10.3
   651          text: "Ensure Pod Security Policy is Enabled and set as appropriate (Not Scored)"
   652          type: "manual"
   653          remediation: |
   654            Using Command Line:
   655              To enable Pod Security Policy for an existing cluster, run the following command:
   656  
   657                gcloud beta container clusters update [CLUSTER_NAME] \
   658                  --zone [COMPUTE_ZONE] \
   659                  --enable-pod-security-policy
   660          scored: false
   661  
   662        - id: 6.10.4
   663          text: "Consider GKE Sandbox for running untrusted workloads (Not Scored)"
   664          type: "manual"
   665          remediation: |
   666            Using Command Line:
   667              To enable GKE Sandbox on an existing cluster, a new Node pool must be created.
   668  
   669                gcloud container node-pools create [NODE_POOL_NAME] \
    670                  --zone=[COMPUTE_ZONE] \
   671                  --cluster=[CLUSTER_NAME] \
   672                  --image-type=cos_containerd \
   673                  --sandbox type=gvisor
   674          scored: false
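                 # Illustrative sketch for 6.10.4 (not part of the CIS text): workloads are then opted in
                 # to GKE Sandbox per Pod by selecting the gVisor runtime class, for example:
                 #
                 #   apiVersion: v1
                 #   kind: Pod
                 #   metadata:
                 #     name: sandboxed-example
                 #   spec:
                 #     runtimeClassName: gvisor
                 #     containers:
                 #     - name: app
                 #       image: [IMAGE]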
   675  
   676        - id: 6.10.5
   677          text: "Ensure use of Binary Authorization (Scored)"
   678          type: "manual"
   679          remediation: |
   680            Using Command Line:
   681              Firstly, update the cluster to enable Binary Authorization:
   682  
    683                gcloud container clusters update [CLUSTER_NAME] \
    684                  --zone [COMPUTE_ZONE] \
   685                  --enable-binauthz
   686  
   687              Create a Binary Authorization Policy using the Binary Authorization Policy Reference
   688              (https://cloud.google.com/binary-authorization/docs/policy-yaml-reference) for
   689              guidance.
   690  
   691              Import the policy file into Binary Authorization:
   692  
   693                gcloud container binauthz policy import [YAML_POLICY]
   694          scored: true
   695  
   696        - id: 6.10.6
   697          text: "Enable Cloud Security Command Center (Cloud SCC) (Not Scored)"
   698          type: "manual"
   699          remediation: |
   700            Using Command Line:
    701              Follow the instructions at
    702              https://cloud.google.com/security-command-center/docs/quickstart-scc-setup.
   703          scored: false