github.com/khulnasoft-lab/kube-bench@v0.2.1-0.20240330183753-9df52345ae58/cfg/gke-1.2.0/managedservices.yaml (about)

     1  ---
     2  controls:
     3  version: "gke-1.2.0"
     4  id: 5
     5  text: "Managed Services"
     6  type: "managedservices"
     7  groups:
     8    - id: 5.1
     9      text: "Image Registry and Image Scanning"
    10      checks:
    11        - id: 5.1.1
    12          text: "Ensure Image Vulnerability Scanning using GCR Container Analysis
    13          or a third-party provider (Manual)"
    14          type: "manual"
    15          remediation: |
    16            Using Command Line:
    17  
    18              gcloud services enable containerscanning.googleapis.com
    19          scored: false
    20  
    21        - id: 5.1.2
    22          text: "Minimize user access to GCR (Manual)"
    23          type: "manual"
    24          remediation: |
    25            Using Command Line:
    26              To change roles at the GCR bucket level:
    27              Firstly, run the following if read permissions are required:
    28  
    29                gsutil iam ch [TYPE]:[EMAIL-ADDRESS]:objectViewer
    30                gs://artifacts.[PROJECT_ID].appspot.com
    31  
    32              Then remove the excessively privileged role (Storage Admin / Storage Object Admin /
    33              Storage Object Creator) using:
    34  
    35                gsutil iam ch -d [TYPE]:[EMAIL-ADDRESS]:[ROLE]
    36                gs://artifacts.[PROJECT_ID].appspot.com
    37  
    38              where:
    39                [TYPE] can be one of the following:
    40                      o user, if the [EMAIL-ADDRESS] is a Google account
    41                      o serviceAccount, if [EMAIL-ADDRESS] specifies a Service account
    42                [EMAIL-ADDRESS] can be one of the following:
    43                      o a Google account (for example, someone@example.com)
    44                      o a Cloud IAM service account
    45                      To modify roles defined at the project level and subsequently inherited within the GCR
    46                      bucket, or the Service Account User role, extract the IAM policy file, modify it accordingly
    47              and apply it using:
    48  
    49                gcloud projects set-iam-policy [PROJECT_ID] [POLICY_FILE]
    50          scored: false
    51  
    52        - id: 5.1.3
    53          text: "Minimize cluster access to read-only for GCR (Manual)"
    54          type: "manual"
    55          remediation: |
    56            Using Command Line:
    57              For an account explicitly granted to the bucket. First, add read access to the Kubernetes
    58              Service Account
    59  
    60                gsutil iam ch [TYPE]:[EMAIL-ADDRESS]:objectViewer
    61                gs://artifacts.[PROJECT_ID].appspot.com
    62  
    63                where:
    64                [TYPE] can be one of the following:
    65                        o user, if the [EMAIL-ADDRESS] is a Google account
    66                        o serviceAccount, if [EMAIL-ADDRESS] specifies a Service account
    67                [EMAIL-ADDRESS] can be one of the following:
    68                        o a Google account (for example, someone@example.com)
    69                        o a Cloud IAM service account
    70  
    71                Then remove the excessively privileged role (Storage Admin / Storage Object Admin /
    72                Storage Object Creator) using:
    73  
    74                  gsutil iam ch -d [TYPE]:[EMAIL-ADDRESS]:[ROLE]
    75                  gs://artifacts.[PROJECT_ID].appspot.com
    76  
    77                For an account that inherits access to the GCR Bucket through Project level permissions,
    78                modify the Projects IAM policy file accordingly, then upload it using:
    79  
    80                  gcloud projects set-iam-policy [PROJECT_ID] [POLICY_FILE]
    81          scored: false
    82  
    83        - id: 5.1.4
    84          text: "Minimize Container Registries to only those approved (Manual)"
    85          type: "manual"
    86          remediation: |
    87            Using Command Line:
    88              First, update the cluster to enable Binary Authorization:
    89  
    90                gcloud container cluster update [CLUSTER_NAME] \
    91                  --enable-binauthz
    92  
    93              Create a Binary Authorization Policy using the Binary Authorization Policy Reference
    94              (https://cloud.google.com/binary-authorization/docs/policy-yaml-reference) for guidance.
    95              Import the policy file into Binary Authorization:
    96  
    97                gcloud container binauthz policy import [YAML_POLICY]
    98          scored: false
    99  
   100    - id: 5.2
   101      text: "Identity and Access Management (IAM)"
   102      checks:
   103        - id: 5.2.1
   104          text: "Ensure GKE clusters are not running using the Compute Engine
   105          default service account (Manual)"
   106          type: "manual"
   107          remediation: |
   108            Using Command Line:
   109              Firstly, create a minimally privileged service account:
   110  
   111                gcloud iam service-accounts create [SA_NAME] \
   112                  --display-name "GKE Node Service Account"
   113                export NODE_SA_EMAIL=`gcloud iam service-accounts list \
   114                  --format='value(email)' \
   115                  --filter='displayName:GKE Node Service Account'`
   116  
   117              Grant the following roles to the service account:
   118  
   119                export PROJECT_ID=`gcloud config get-value project`
   120                gcloud projects add-iam-policy-binding $PROJECT_ID \
   121                  --member serviceAccount:$NODE_SA_EMAIL \
   122                  --role roles/monitoring.metricWriter
   123                gcloud projects add-iam-policy-binding $PROJECT_ID \
   124                  --member serviceAccount:$NODE_SA_EMAIL \
   125                  --role roles/monitoring.viewer
   126                gcloud projects add-iam-policy-binding $PROJECT_ID \
   127                  --member serviceAccount:$NODE_SA_EMAIL \
   128                  --role roles/logging.logWriter
   129  
   130              To create a new Node pool using the Service account, run the following command:
   131  
   132                gcloud container node-pools create [NODE_POOL] \
   133                  --service-account=[SA_NAME]@[PROJECT_ID].iam.gserviceaccount.com \
   134                  --cluster=[CLUSTER_NAME] --zone [COMPUTE_ZONE]
   135  
   136              You will need to migrate your workloads to the new Node pool, and delete Node pools that
   137              use the default service account to complete the remediation.
   138          scored: false
   139  
   140        - id: 5.2.2
   141          text: "Prefer using dedicated GCP Service Accounts and Workload Identity (Manual)"
   142          type: "manual"
   143          remediation: |
   144            Using Command Line:
   145  
   146                gcloud beta container clusters update [CLUSTER_NAME] --zone [CLUSTER_ZONE] \
   147                  --identity-namespace=[PROJECT_ID].svc.id.goog
   148  
   149              Note that existing Node pools are unaffected. New Node pools default to --workload-
   150              metadata-from-node=GKE_METADATA_SERVER .
   151  
   152              Then, modify existing Node pools to enable GKE_METADATA_SERVER:
   153  
   154                gcloud beta container node-pools update [NODEPOOL_NAME] \
   155                  --cluster=[CLUSTER_NAME] --zone [CLUSTER_ZONE] \
   156                  --workload-metadata-from-node=GKE_METADATA_SERVER
   157  
   158              You may also need to modify workloads in order for them to use Workload Identity as
   159              described within https://cloud.google.com/kubernetes-engine/docs/how-to/workload-
   160              identity. Also consider the effects on the availability of your hosted workloads as Node
   161              pools are updated, it may be more appropriate to create new Node Pools.
   162          scored: false
   163  
   164    - id: 5.3
   165      text: "Cloud Key Management Service (Cloud KMS)"
   166      checks:
   167        - id: 5.3.1
   168          text: "Ensure Kubernetes Secrets are encrypted using keys managed in Cloud KMS (Manual)"
   169          type: "manual"
   170          remediation: |
   171            Using Command Line:
   172              To create a key
   173  
   174              Create a key ring:
   175  
   176                gcloud kms keyrings create [RING_NAME] \
   177                  --location [LOCATION] \
   178                  --project [KEY_PROJECT_ID]
   179  
   180              Create a key:
   181  
   182                gcloud kms keys create [KEY_NAME] \
   183                  --location [LOCATION] \
   184                  --keyring [RING_NAME] \
   185                  --purpose encryption \
   186                  --project [KEY_PROJECT_ID]
   187  
   188              Grant the Kubernetes Engine Service Agent service account the Cloud KMS CryptoKey
   189              Encrypter/Decrypter role:
   190  
   191                gcloud kms keys add-iam-policy-binding [KEY_NAME] \
   192                  --location [LOCATION] \
   193                  --keyring [RING_NAME] \
   194                  --member serviceAccount:[SERVICE_ACCOUNT_NAME] \
   195                  --role roles/cloudkms.cryptoKeyEncrypterDecrypter \
   196                  --project [KEY_PROJECT_ID]
   197  
   198              To create a new cluster with Application-layer Secrets Encryption:
   199  
   200                gcloud container clusters create [CLUSTER_NAME] \
   201                  --cluster-version=latest \
   202                  --zone [ZONE] \
   203                  --database-encryption-key projects/[KEY_PROJECT_ID]/locations/[LOCATION]/keyRings/[RING_NAME]/cryptoKey s/[KEY_NAME] \
   204                  --project [CLUSTER_PROJECT_ID]
   205  
   206              To enable on an existing cluster:
   207  
   208                gcloud container clusters update [CLUSTER_NAME] \
   209                  --zone [ZONE] \
   210                  --database-encryption-key projects/[KEY_PROJECT_ID]/locations/[LOCATION]/keyRings/[RING_NAME]/cryptoKey s/[KEY_NAME] \
   211                  --project [CLUSTER_PROJECT_ID]
   212          scored: false
   213  
   214    - id: 5.4
   215      text: "Node Metadata"
   216      checks:
   217        - id: 5.4.1
   218          text: "Ensure legacy Compute Engine instance metadata APIs are Disabled (Automated)"
   219          type: "manual"
   220          remediation: |
   221            Using Command Line:
   222              To update an existing cluster, create a new Node pool with the legacy GCE metadata
   223              endpoint disabled:
   224  
   225                gcloud container node-pools create [POOL_NAME] \
   226                  --metadata disable-legacy-endpoints=true \
   227                  --cluster [CLUSTER_NAME] \
   228                  --zone [COMPUTE_ZONE]
   229  
   230              You will need to migrate workloads from any existing non-conforming Node pools, to the
   231              new Node pool, then delete non-conforming Node pools to complete the remediation.
   232          scored: false
   233  
   234        - id: 5.4.2
   235          text: "Ensure the GKE Metadata Server is Enabled (Automated)"
   236          type: "manual"
   237          remediation: |
   238            Using Command Line:
   239                gcloud beta container clusters update [CLUSTER_NAME] \
   240                  --identity-namespace=[PROJECT_ID].svc.id.goog
   241              Note that existing Node pools are unaffected. New Node pools default to --workload-
   242              metadata-from-node=GKE_METADATA_SERVER .
   243  
   244              To modify an existing Node pool to enable GKE Metadata Server:
   245  
   246                gcloud beta container node-pools update [NODEPOOL_NAME] \
   247                  --cluster=[CLUSTER_NAME] \
   248                  --workload-metadata-from-node=GKE_METADATA_SERVER
   249  
   250              You may also need to modify workloads in order for them to use Workload Identity as
   251              described within https://cloud.google.com/kubernetes-engine/docs/how-to/workload-
   252              identity.
   253          scored: false
   254  
   255    - id: 5.5
   256      text: "Node Configuration and Maintenance"
   257      checks:
   258        - id: 5.5.1
   259          text: "Ensure Container-Optimized OS (COS) is used for GKE node images (Automated)"
   260          type: "manual"
   261          remediation: |
   262            Using Command Line:
   263              To set the node image to cos for an existing cluster's Node pool:
   264  
   265                gcloud container clusters upgrade [CLUSTER_NAME]\
   266                  --image-type cos \
   267                  --zone [COMPUTE_ZONE] --node-pool [POOL_NAME]
   268          scored: false
   269  
   270        - id: 5.5.2
   271          text: "Ensure Node Auto-Repair is enabled for GKE nodes (Automated)"
   272          type: "manual"
   273          remediation: |
   274            Using Command Line:
   275              To enable node auto-repair for an existing cluster with Node pool, run the following
   276              command:
   277  
   278                gcloud container node-pools update [POOL_NAME] \
   279                  --cluster [CLUSTER_NAME] --zone [COMPUTE_ZONE] \
   280                  --enable-autorepair
   281          scored: false
   282  
   283        - id: 5.5.3
   284          text: "Ensure Node Auto-Upgrade is enabled for GKE nodes (Automated)"
   285          type: "manual"
   286          remediation: |
   287            Using Command Line:
   288              To enable node auto-upgrade for an existing cluster's Node pool, run the following
   289              command:
   290  
   291                gcloud container node-pools update [NODE_POOL] \
   292                  --cluster [CLUSTER_NAME] --zone [COMPUTE_ZONE] \
   293                  --enable-autoupgrade
   294          scored: false
   295  
   296        - id: 5.5.4
   297          text: "Automate GKE version management using Release Channels (Manual)"
   298          type: "manual"
   299          remediation: |
   300            Using Command Line:
   301              Create a new cluster by running the following command:
   302  
   303                gcloud beta container clusters create [CLUSTER_NAME] \
   304                  --zone [COMPUTE_ZONE] \
   305                  --release-channel [RELEASE_CHANNEL]
   306  
   307              where [RELEASE_CHANNEL] is stable or regular according to your needs.
   308          scored: false
   309  
   310        - id: 5.5.5
   311          text: "Ensure Shielded GKE Nodes are Enabled (Manual)"
   312          type: "manual"
   313          remediation: |
   314            Using Command Line:
   315              To create a Node pool within the cluster with Integrity Monitoring enabled, run the
   316              following command:
   317  
   318                gcloud beta container node-pools create [NODEPOOL_NAME] \
   319                  --cluster [CLUSTER_NAME] --zone [COMPUTE_ZONE] \
   320                  --shielded-integrity-monitoring
   321  
   322              You will also need to migrate workloads from existing non-conforming Node pools to the
   323              newly created Node pool, then delete the non-conforming pools.
   324          scored: false
   325  
   326        - id: 5.5.6
   327          text: "Ensure Integrity Monitoring for Shielded GKE Nodes is Enabled (Automated)"
   328          type: "manual"
   329          remediation: |
   330            Using Command Line:
   331              To create a Node pool within the cluster with Integrity Monitoring enabled, run the
   332              following command:
   333  
   334                gcloud beta container node-pools create [NODEPOOL_NAME] \
   335                  --cluster [CLUSTER_NAME] --zone [COMPUTE_ZONE] \
   336                  --shielded-integrity-monitoring
   337  
   338            You will also need to migrate workloads from existing non-conforming Node pools to the newly created Node pool,
   339            then delete the non-conforming pools.
   340          scored: false
   341  
   342        - id: 5.5.7
   343          text: "Ensure Secure Boot for Shielded GKE Nodes is Enabled (Automated)"
   344          type: "manual"
   345          remediation: |
   346            Using Command Line:
   347              To create a Node pool within the cluster with Secure Boot enabled, run the following
   348              command:
   349  
   350                gcloud beta container node-pools create [NODEPOOL_NAME] \
   351                  --cluster [CLUSTER_NAME] --zone [COMPUTE_ZONE] \
   352                  --shielded-secure-boot
   353  
   354              You will also need to migrate workloads from existing non-conforming Node pools to the
   355              newly created Node pool, then delete the non-conforming pools.
   356          scored: false
   357  
   358    - id: 5.6
   359      text: "Cluster Networking"
   360      checks:
   361        - id: 5.6.1
   362          text: "Enable VPC Flow Logs and Intranode Visibility (Automated)"
   363          type: "manual"
   364          remediation: |
   365            Using Command Line:
   366              To enable intranode visibility on an existing cluster, run the following command:
   367  
   368                gcloud beta container clusters update [CLUSTER_NAME] \
   369                  --enable-intra-node-visibility
   370          scored: false
   371  
   372        - id: 5.6.2
   373          text: "Ensure use of VPC-native clusters (Automated)"
   374          type: "manual"
   375          remediation: |
   376            Using Command Line:
   377              To enable Alias IP on a new cluster, run the following command:
   378  
   379                gcloud container clusters create [CLUSTER_NAME] \
   380                  --zone [COMPUTE_ZONE] \
   381                  --enable-ip-alias
   382          scored: false
   383  
   384        - id: 5.6.3
   385          text: "Ensure Master Authorized Networks is Enabled (Manual)"
   386          type: "manual"
   387          remediation: |
   388            Using Command Line:
   389              To check Master Authorized Networks status for an existing cluster, run the following
   390              command;
   391  
   392                gcloud container clusters describe [CLUSTER_NAME] \
   393                  --zone [COMPUTE_ZONE] \
   394                  --format json | jq '.masterAuthorizedNetworksConfig'
   395  
   396              The output should return
   397  
   398                {
   399                  "enabled": true
   400                }
   401  
   402              if Master Authorized Networks is enabled.
   403  
   404              If Master Authorized Networks is disabled, the
   405              above command will return null ( { } ).
   406          scored: false
   407  
   408        - id: 5.6.4
   409          text: "Ensure clusters are created with Private Endpoint Enabled and Public Access Disabled (Manual)"
   410          type: "manual"
   411          remediation: |
   412            Using Command Line:
   413              Create a cluster with a Private Endpoint enabled and Public Access disabled by including
   414              the --enable-private-endpoint flag within the cluster create command:
   415  
   416                gcloud container clusters create [CLUSTER_NAME] \
   417                  --enable-private-endpoint
   418  
   419              Setting this flag also requires the setting of --enable-private-nodes , --enable-ip-alias
   420              and --master-ipv4-cidr=[MASTER_CIDR_RANGE] .
   421          scored: false
   422  
   423        - id: 5.6.5
   424          text: "Ensure clusters are created with Private Nodes (Manual)"
   425          type: "manual"
   426          remediation: |
   427            Using Command Line:
   428              To create a cluster with Private Nodes enabled, include the --enable-private-nodes flag
   429              within the cluster create command:
   430  
   431                gcloud container clusters create [CLUSTER_NAME] \
   432                  --enable-private-nodes
   433  
   434              Setting this flag also requires the setting of --enable-ip-alias and --master-ipv4-
   435              cidr=[MASTER_CIDR_RANGE] .
   436          scored: false
   437  
   438        - id: 5.6.6
   439          text: "Consider firewalling GKE worker nodes (Manual)"
   440          type: "manual"
   441          remediation: |
   442            Using Command Line:
   443              Use the following command to generate firewall rules, setting the variables as appropriate.
   444              You may want to use the target [TAG] and [SERVICE_ACCOUNT] previously identified.
   445  
   446                gcloud compute firewall-rules create FIREWALL_RULE_NAME \
   447                  --network [NETWORK] \
   448                  --priority [PRIORITY] \
   449                  --direction [DIRECTION] \
   450                  --action [ACTION] \
   451                  --target-tags [TAG] \
   452                  --target-service-accounts [SERVICE_ACCOUNT] \
   453                  --source-ranges [SOURCE_CIDR-RANGE] \
   454                  --source-tags [SOURCE_TAGS] \
   455                  --source-service-accounts=[SOURCE_SERVICE_ACCOUNT] \
   456                  --destination-ranges [DESTINATION_CIDR_RANGE] \
   457                  --rules [RULES]
   458          scored: false
   459  
   460        - id: 5.6.7
   461          text: "Ensure Network Policy is Enabled and set as appropriate (Manual)"
   462          type: "manual"
   463          remediation: |
   464            Using Command Line:
   465              To enable Network Policy for an existing cluster, firstly enable the Network Policy add-on:
   466  
   467                gcloud container clusters update [CLUSTER_NAME] \
   468                  --zone [COMPUTE_ZONE] \
   469                  --update-addons NetworkPolicy=ENABLED
   470  
   471              Then, enable Network Policy:
   472  
   473                gcloud container clusters update [CLUSTER_NAME] \
   474                  --zone [COMPUTE_ZONE] \
   475                  --enable-network-policy
   476          scored: false
   477  
   478        - id: 5.6.8
   479          text: "Ensure use of Google-managed SSL Certificates (Manual)"
   480          type: "manual"
   481          remediation: |
   482            If services of type:LoadBalancer are discovered, consider replacing the Service with an
   483            Ingress.
   484  
   485            To configure the Ingress and use Google-managed SSL certificates, follow the instructions
   486            as listed at https://cloud.google.com/kubernetes-engine/docs/how-to/managed-certs.
   487          scored: false
   488  
   489    - id: 5.7
   490      text: "Logging"
   491      checks:
   492        - id: 5.7.1
   493          text: "Ensure Stackdriver Kubernetes Logging and Monitoring is Enabled (Automated)"
   494          type: "manual"
   495          remediation: |
   496            Using Command Line:
   497  
   498              STACKDRIVER KUBERNETES ENGINE MONITORING SUPPORT (PREFERRED):
   499              To enable Stackdriver Kubernetes Engine Monitoring for an existing cluster, run the
   500              following command:
   501  
   502                gcloud container clusters update [CLUSTER_NAME] \
   503                  --zone [COMPUTE_ZONE] \
   504                  --enable-stackdriver-kubernetes
   505  
   506              LEGACY STACKDRIVER SUPPORT:
   507              Both Logging and Monitoring support must be enabled.
   508              To enable Legacy Stackdriver Logging for an existing cluster, run the following command:
   509  
   510                gcloud container clusters update [CLUSTER_NAME] --zone [COMPUTE_ZONE] \
   511                  --logging-service logging.googleapis.com
   512  
   513              To enable Legacy Stackdriver Monitoring for an existing cluster, run the following
   514              command:
   515  
   516                gcloud container clusters update [CLUSTER_NAME] --zone [COMPUTE_ZONE] \
   517                  --monitoring-service monitoring.googleapis.com
   518          scored: false
   519  
   520        - id: 5.7.2
   521          text: "Enable Linux auditd logging (Manual)"
   522          type: "manual"
   523          remediation: |
   524            Using Command Line:
   525              Download the example manifests:
   526  
   527                curl https://raw.githubusercontent.com/GoogleCloudPlatform/k8s-node-tools/master/os-audit/cos-auditd-logging.yaml \
   528                  > cos-auditd-logging.yaml
   529  
   530              Edit the example manifests if needed. Then, deploy them:
   531  
   532                kubectl apply -f cos-auditd-logging.yaml
   533  
   534              Verify that the logging Pods have started. If you defined a different Namespace in your
   535              manifests, replace cos-auditd with the name of the namespace you're using:
   536  
   537                kubectl get pods --namespace=cos-auditd
   538          scored: false
   539  
   540    - id: 5.8
   541      text: "Authentication and Authorization"
   542      checks:
   543        - id: 5.8.1
   544          text: "Ensure Basic Authentication using static passwords is Disabled (Automated)"
   545          type: "manual"
   546          remediation: |
   547            Using Command Line:
   548              To update an existing cluster and disable Basic Authentication by removing the static
   549              password:
   550  
   551                gcloud container clusters update [CLUSTER_NAME] \
   552                  --no-enable-basic-auth
   553          scored: false
   554  
   555        - id: 5.8.2
   556          text: "Ensure authentication using Client Certificates is Disabled (Automated)"
   557          type: "manual"
   558          remediation: |
   559            Using Command Line:
   560              Create a new cluster without a Client Certificate:
   561  
   562                gcloud container clusters create [CLUSTER_NAME] \
   563                  --no-issue-client-certificate
   564          scored: false
   565  
   566        - id: 5.8.3
   567          text: "Manage Kubernetes RBAC users with Google Groups for GKE (Manual)"
   568          type: "manual"
   569          remediation: |
   570            Using Command Line:
   571              Follow the G Suite Groups instructions at https://cloud.google.com/kubernetes-
   572              engine/docs/how-to/role-based-access-control#google-groups-for-gke.
   573  
   574              Then, create a cluster with
   575  
   576                gcloud beta container clusters create my-cluster \
   577                  --security-group="gke-security-groups@[yourdomain.com]"
   578  
   579              Finally create Roles, ClusterRoles, RoleBindings, and ClusterRoleBindings that
   580              reference your G Suite Groups.
   581          scored: false
   582  
   583        - id: 5.8.4
   584          text: "Ensure Legacy Authorization (ABAC) is Disabled (Automated)"
   585          type: "manual"
   586          remediation: |
   587            Using Command Line:
   588              To disable Legacy Authorization for an existing cluster, run the following command:
   589  
   590                gcloud container clusters update [CLUSTER_NAME] \
   591                  --zone [COMPUTE_ZONE] \
   592                  --no-enable-legacy-authorization
   593          scored: false
   594  
   595    - id: 5.9
   596      text: "Storage"
   597      checks:
   598        - id: 5.9.1
   599          text: "Enable Customer-Managed Encryption Keys (CMEK) for GKE Persistent Disks (PD) (Manual)"
   600          type: "manual"
   601          remediation: |
   602            Using Command Line:
   603              FOR NODE BOOT DISKS:
   604              Create a new node pool using customer-managed encryption keys for the node boot disk, of
   605              [DISK_TYPE] either pd-standard or pd-ssd :
   606  
   607                gcloud beta container node-pools create [CLUSTER_NAME] \
   608                  --disk-type [DISK_TYPE] \
   609                  --boot-disk-kms-key \
   610                  projects/[KEY_PROJECT_ID]/locations/[LOCATION]/keyRings/[RING_NAME]/cryptoKeys/[KEY_NAME]
   611  
   612              Create a cluster using customer-managed encryption keys for the node boot disk, of
   613              [DISK_TYPE] either pd-standard or pd-ssd :
   614  
   615                gcloud beta container clusters create [CLUSTER_NAME] \
   616                  --disk-type [DISK_TYPE] \
   617                  --boot-disk-kms-key \
   618                  projects/[KEY_PROJECT_ID]/locations/[LOCATION]/keyRings/[RING_NAME]/cryptoKeys/[KEY_NAME]
   619  
   620              FOR ATTACHED DISKS:
   621              Follow the instructions detailed at https://cloud.google.com/kubernetes-
   622              engine/docs/how-to/using-cmek.
   623          scored: false
   624  
   625    - id: 5.10
   626      text: "Other Cluster Configurations"
   627      checks:
   628        - id: 5.10.1
   629          text: "Ensure Kubernetes Web UI is Disabled (Automated)"
   630          type: "manual"
   631          remediation: |
   632            Using Command Line:
   633              To disable the Kubernetes Dashboard on an existing cluster, run the following command:
   634  
   635                gcloud container clusters update [CLUSTER_NAME] \
   636                  --zone [ZONE] \
   637                  --update-addons=KubernetesDashboard=DISABLED
   638          scored: false
   639  
   640        - id: 5.10.2
   641          text: "Ensure that Alpha clusters are not used for production workloads (Automated)"
   642          type: "manual"
   643          remediation: |
   644            Using Command Line:
   645              Upon creating a new cluster
   646  
   647                gcloud container clusters create [CLUSTER_NAME] \
   648                  --zone [COMPUTE_ZONE]
   649  
   650              Do not use the --enable-kubernetes-alpha argument.
   651          scored: false
   652  
   653        - id: 5.10.3
   654          text: "Ensure Pod Security Policy is Enabled and set as appropriate (Manual)"
   655          type: "manual"
   656          remediation: |
   657            Using Command Line:
   658              To enable Pod Security Policy for an existing cluster, run the following command:
   659  
   660                gcloud beta container clusters update [CLUSTER_NAME] \
   661                  --zone [COMPUTE_ZONE] \
   662                  --enable-pod-security-policy
   663          scored: false
   664  
   665        - id: 5.10.4
   666          text: "Consider GKE Sandbox for running untrusted workloads (Manual)"
   667          type: "manual"
   668          remediation: |
   669            Using Command Line:
   670              To enable GKE Sandbox on an existing cluster, a new Node pool must be created.
   671  
   672                gcloud container node-pools create [NODE_POOL_NAME] \
   673                  --zone=[COMPUTE-ZONE] \
   674                  --cluster=[CLUSTER_NAME] \
   675                  --image-type=cos_containerd \
   676                  --sandbox type=gvisor
   677          scored: false
   678  
   679        - id: 5.10.5
   680          text: "Ensure use of Binary Authorization (Automated)"
   681          type: "manual"
   682          remediation: |
   683            Using Command Line:
   684              Firstly, update the cluster to enable Binary Authorization:
   685  
   686                gcloud container cluster update [CLUSTER_NAME] \
   687                  --zone [COMPUTE-ZONE] \
   688                  --enable-binauthz
   689  
   690              Create a Binary Authorization Policy using the Binary Authorization Policy Reference
   691              (https://cloud.google.com/binary-authorization/docs/policy-yaml-reference) for
   692              guidance.
   693  
   694              Import the policy file into Binary Authorization:
   695  
   696                gcloud container binauthz policy import [YAML_POLICY]
   697          scored: false
   698  
   699        - id: 5.10.6
   700          text: "Enable Cloud Security Command Center (Cloud SCC) (Manual)"
   701          type: "manual"
   702          remediation: |
   703            Using Command Line:
   704              Follow the instructions at https://cloud.google.com/security-command-
   705              center/docs/quickstart-scc-setup.
   706          scored: false