
## Running kube-bench

If you run kube-bench directly from the command line you may need to be root / sudo to have access to all the config files.

By default kube-bench attempts to auto-detect the running version of Kubernetes, and map this to the corresponding CIS Benchmark version. For example, Kubernetes version 1.15 is mapped to CIS Benchmark version `cis-1.15`, which is the benchmark version valid for Kubernetes 1.15.

kube-bench also attempts to identify the components running on the node, and uses this to determine which tests to run (for example, only running the master node tests if the node is running an API server).

**Please note**
It is impossible to inspect the master nodes of managed clusters, e.g. GKE, EKS, AKS and ACK, using kube-bench, as one does not have access to such nodes, although it is still possible to use kube-bench to check worker node configuration in these environments.
    12  
### Running inside a container

You can avoid installing kube-bench on the host by running it inside a container, using the host PID namespace and mounting the `/etc` and `/var` directories where the configuration and other files are located on the host, so that kube-bench can check their existence and permissions.

```
docker run --pid=host -v /etc:/etc:ro -v /var:/var:ro -t docker.io/khulnasoft/kube-bench:latest --version 1.18
```

> Note: the tests require either the kubelet or kubectl binary in the path in order to auto-detect the Kubernetes version. You can pass `-v $(which kubectl):/usr/local/mount-from-host/bin/kubectl` to resolve this. You will also need to pass in kubeconfig credentials. For example:

```
docker run --pid=host -v /etc:/etc:ro -v /var:/var:ro -v $(which kubectl):/usr/local/mount-from-host/bin/kubectl -v ~/.kube:/.kube -e KUBECONFIG=/.kube/config -t docker.io/khulnasoft/kube-bench:latest
```

You can use your own configs by mounting them over the default ones in `/opt/kube-bench/cfg/`:

```
docker run --pid=host -v /etc:/etc:ro -v /var:/var:ro -t -v path/to/my-config.yaml:/opt/kube-bench/cfg/config.yaml -v $(which kubectl):/usr/local/mount-from-host/bin/kubectl -v ~/.kube:/.kube -e KUBECONFIG=/.kube/config docker.io/khulnasoft/kube-bench:latest
```
    32  
### Running in a Kubernetes cluster

You can run kube-bench inside a pod, but it will need access to the host's PID namespace in order to check the running processes, as well as access to some directories on the host where config files and other files are stored.

The `job.yaml` file (available in the root directory of the repository) can be applied to run the tests as a Kubernetes `Job`. For example:

```bash
$ kubectl apply -f job.yaml
job.batch/kube-bench created

$ kubectl get pods
NAME                      READY   STATUS              RESTARTS   AGE
kube-bench-j76s9   0/1     ContainerCreating   0          3s

# Wait for a few seconds for the job to complete
$ kubectl get pods
NAME                      READY   STATUS      RESTARTS   AGE
kube-bench-j76s9   0/1     Completed   0          11s

# The results are held in the pod's logs
kubectl logs kube-bench-j76s9
[INFO] 1 Master Node Security Configuration
[INFO] 1.1 API Server
...
```

To run tests on the master node, the pod needs to be scheduled on that node. This involves setting a nodeSelector and tolerations in the pod spec.

The default labels applied to master nodes have changed since Kubernetes 1.11, so if you are using an older version you may need to modify the nodeSelector and tolerations to run the job on the master node.
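
The scheduling constraints above can be sketched as the following pod spec fragment. This is a sketch only, not the contents of `job.yaml`: the label and taint key varies by Kubernetes version (recent releases use `node-role.kubernetes.io/control-plane`, older ones used `node-role.kubernetes.io/master`), so check your cluster's node labels first.

```yaml
# Sketch: adjust the label/taint key to match your cluster's version.
spec:
  nodeSelector:
    node-role.kubernetes.io/control-plane: ""
  tolerations:
    - key: node-role.kubernetes.io/control-plane
      operator: Exists
      effect: NoSchedule
```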

### Running in an AKS cluster

1. Create an AKS cluster (e.g. Kubernetes 1.13.7) with RBAC enabled; without RBAC, four of the checks will fail.

1. Use the [kubectl-enter plugin](https://github.com/kvaps/kubectl-enter) to shell into a node:
   `kubectl-enter {node-name}`
   Alternatively, SSH to an agent node; you could open port 22 in the NSG and assign a public IP to one agent node (for testing purposes only).

1. Run the CIS benchmark to view the results:
```
docker run --rm -v `pwd`:/host docker.io/khulnasoft/kube-bench:latest install
./kube-bench
```

kube-bench cannot be run on AKS master nodes.
    79  
### Running CIS benchmark in an EKS cluster

There is a `job-eks.yaml` file for running the kube-bench node checks on an EKS cluster. The significant difference on EKS is that it's not possible to schedule jobs onto the master node, so master checks can't be performed.

1. To create an EKS Cluster refer to [Getting Started with Amazon EKS](https://docs.aws.amazon.com/eks/latest/userguide/getting-started.html) in the *Amazon EKS User Guide*
  - Information on configuring `eksctl`, `kubectl` and the AWS CLI is within
2. Create an [Amazon Elastic Container Registry (ECR)](https://docs.aws.amazon.com/AmazonECR/latest/userguide/what-is-ecr.html) repository to host the kube-bench container image
```
aws ecr create-repository --repository-name k8s/kube-bench --image-tag-mutability MUTABLE
```
3. Download, build and push the kube-bench container image to your ECR repo
```
git clone https://github.com/khulnasoft-lab/kube-bench.git
cd kube-bench
aws ecr get-login-password --region <AWS_REGION> | docker login --username AWS --password-stdin <AWS_ACCT_NUMBER>.dkr.ecr.<AWS_REGION>.amazonaws.com
docker build -t k8s/kube-bench .
docker tag k8s/kube-bench:latest <AWS_ACCT_NUMBER>.dkr.ecr.<AWS_REGION>.amazonaws.com/k8s/kube-bench:latest
docker push <AWS_ACCT_NUMBER>.dkr.ecr.<AWS_REGION>.amazonaws.com/k8s/kube-bench:latest
```
4. Copy the URI of your pushed image; the URI format looks like this: `<AWS_ACCT_NUMBER>.dkr.ecr.<AWS_REGION>.amazonaws.com/k8s/kube-bench:latest`
5. Replace the `image` value in `job-eks.yaml` with the URI from Step 4
6. Run the kube-bench job on a Pod in your Cluster: `kubectl apply -f job-eks.yaml`
7. Find the Pod that was created; it *should* be in the `default` namespace: `kubectl get pods --all-namespaces`
8. Retrieve the logs of this Pod to output the report; note the Pod name will vary: `kubectl logs kube-bench-<value>`
  - You can save the report for later reference: `kubectl logs kube-bench-<value> > kube-bench-report.txt`
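
Step 5 can also be scripted. The sketch below substitutes a hypothetical ECR URI into a stand-in for the relevant line of `job-eks.yaml`; the account number, region and file contents are placeholders, not values from this repository.

```shell
# Placeholder account/region; use your own values.
ACCT=123456789012
REGION=us-east-1
IMAGE="${ACCT}.dkr.ecr.${REGION}.amazonaws.com/k8s/kube-bench:latest"

# Minimal stand-in for the image line of job-eks.yaml:
cat > job-eks.yaml <<'EOF'
        image: docker.io/khulnasoft/kube-bench:latest
EOF

# Point the job at the image pushed to ECR, preserving indentation:
sed "s|image: .*|image: ${IMAGE}|" job-eks.yaml > job-eks.patched.yaml
grep 'image:' job-eks.patched.yaml
```

You could then run `kubectl apply -f job-eks.patched.yaml` instead of editing the file by hand.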
   105  
### Running DISA STIG in an EKS cluster

There is a `job-eks-stig.yaml` file for running the kube-bench node checks on an EKS cluster. The significant difference on EKS is that it's not possible to schedule jobs onto the master node, so master checks can't be performed.

1. To create an EKS Cluster refer to [Getting Started with Amazon EKS](https://docs.aws.amazon.com/eks/latest/userguide/getting-started.html) in the *Amazon EKS User Guide*
  - Information on configuring `eksctl`, `kubectl` and the AWS CLI is within
2. Create an [Amazon Elastic Container Registry (ECR)](https://docs.aws.amazon.com/AmazonECR/latest/userguide/what-is-ecr.html) repository to host the kube-bench container image
```
aws ecr create-repository --repository-name k8s/kube-bench --image-tag-mutability MUTABLE
```
3. Download, build and push the kube-bench container image to your ECR repo
```
git clone https://github.com/khulnasoft-lab/kube-bench.git
cd kube-bench
aws ecr get-login-password --region <AWS_REGION> | docker login --username AWS --password-stdin <AWS_ACCT_NUMBER>.dkr.ecr.<AWS_REGION>.amazonaws.com
docker build -t k8s/kube-bench .
docker tag k8s/kube-bench:latest <AWS_ACCT_NUMBER>.dkr.ecr.<AWS_REGION>.amazonaws.com/k8s/kube-bench:latest
docker push <AWS_ACCT_NUMBER>.dkr.ecr.<AWS_REGION>.amazonaws.com/k8s/kube-bench:latest
```
4. Copy the URI of your pushed image; the URI format looks like this: `<AWS_ACCT_NUMBER>.dkr.ecr.<AWS_REGION>.amazonaws.com/k8s/kube-bench:latest`
5. Replace the `image` value in `job-eks-stig.yaml` with the URI from Step 4
6. Run the kube-bench job on a Pod in your Cluster: `kubectl apply -f job-eks-stig.yaml`
7. Find the Pod that was created; it *should* be in the `default` namespace: `kubectl get pods --all-namespaces`
8. Retrieve the logs of this Pod to output the report; note the Pod name will vary: `kubectl logs kube-bench-<value>`
  - You can save the report for later reference: `kubectl logs kube-bench-<value> > kube-bench-report.txt`
   131  
### Running on OpenShift

| OpenShift Hardening Guide | kube-bench config |
| ------------------------- | ----------------- |
| ocp-3.10 +                | rh-0.7            |
| ocp-4.1 +                 | rh-1.0            |

kube-bench includes a set of test files for Red Hat's OpenShift hardening guide for OCP 3.10 and 4.1. To run these you will need to specify `--benchmark rh-0.7` or `--version ocp-3.10` for OCP 3.10, or `--benchmark rh-1.0` or, for example, `--version ocp-4.5` for OCP 4.1 and later.

`kube-bench` supports auto-detection: when you run the `kube-bench` command it will detect whether it is running in an OpenShift environment.

Since running `kube-bench` requires elevated privileges, the `privileged` SecurityContextConstraint needs to be applied to the ServiceAccount used for the `Job`:

```
oc create namespace kube-bench
oc adm policy add-scc-to-user privileged --serviceaccount default
oc apply -f job.yaml
```
   150  
### Running in a GKE cluster

| CIS Benchmark | Targets                                                     |
| ------------- | ----------------------------------------------------------- |
| gke-1.0       | master, controlplane, node, etcd, policies, managedservices |
| gke-1.2.0     | master, controlplane, node, policies, managedservices       |

kube-bench includes benchmarks for GKE. To run this you will need to specify `--benchmark gke-1.0` or `--benchmark gke-1.2.0` when you run the `kube-bench` command.

To run the benchmark as a job in your GKE cluster apply the included `job-gke.yaml`.

```
kubectl apply -f job-gke.yaml
```
   165  
### Running in an ACK cluster

| CIS Benchmark | Targets                                                     |
| ------------- | ----------------------------------------------------------- |
| ack-1.0       | master, controlplane, node, etcd, policies, managedservices |

kube-bench includes benchmarks for Alibaba Cloud Container Service For Kubernetes (ACK).
To run this you will need to specify `--benchmark ack-1.0` when you run the `kube-bench` command.

To run the benchmark as a job in your ACK cluster apply the included `job-ack.yaml`.

```
kubectl apply -f job-ack.yaml
```
   180  
### Running in a VMware TKGI cluster

| CIS Benchmark | Targets                                    |
|---------------|--------------------------------------------|
| tkgi-1.2.53   | master, etcd, controlplane, node, policies |

kube-bench includes benchmarks for the VMware TKGI platform.
To run this you will need to specify `--benchmark tkgi-1.2.53` when you run the `kube-bench` command.

To run the benchmark as a job in your VMware TKGI cluster apply the included `job-tkgi.yaml`.

```
kubectl apply -f job-tkgi.yaml
```
   195  
### Running in a Rancher RKE cluster

| CIS Benchmark | Targets                                    |
|---------------|--------------------------------------------|
| rke-cis-1.7   | master, etcd, controlplane, node, policies |

kube-bench includes benchmarks for the Rancher RKE platform.
To run this you will need to specify `--benchmark rke-cis-1.7` when you run the `kube-bench` command.

### Running in a Rancher RKE2 cluster

| CIS Benchmark | Targets                                    |
|---------------|--------------------------------------------|
| rke2-cis-1.7  | master, etcd, controlplane, node, policies |

kube-bench includes benchmarks for the Rancher RKE2 platform.
To run this you will need to specify `--benchmark rke2-cis-1.7` when you run the `kube-bench` command.

### Running in a Rancher K3s cluster

| CIS Benchmark | Targets                                    |
|---------------|--------------------------------------------|
| k3s-cis-1.7   | master, etcd, controlplane, node, policies |

kube-bench includes benchmarks for the Rancher K3s platform.
To run this you will need to specify `--benchmark k3s-cis-1.7` when you run the `kube-bench` command.
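
For all three Rancher distributions the invocation is the same apart from the benchmark ID. A small sketch (the `benchmark_for` helper is hypothetical, the IDs come from the tables above, and `kube-bench` itself is not executed here):

```shell
# Hypothetical helper mapping a Rancher distribution to its benchmark ID.
benchmark_for() {
  case "$1" in
    rke)  echo "rke-cis-1.7" ;;
    rke2) echo "rke2-cis-1.7" ;;
    k3s)  echo "k3s-cis-1.7" ;;
  esac
}

# Print the command you would run on a K3s node:
echo "kube-bench --benchmark $(benchmark_for k3s)"
# -> kube-bench --benchmark k3s-cis-1.7
```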