sigs.k8s.io/cluster-api@v1.6.3/docs/book/src/tasks/diagnostics.md

# Diagnostics

## Introduction

With CAPI v1.6 we introduced new flags to allow serving metrics, the pprof endpoint and an endpoint to dynamically change log levels securely in production.

This feature is enabled by default via:
```yaml
        args:
          - "--diagnostics-address=${CAPI_DIAGNOSTICS_ADDRESS:=:8443}"
```

As soon as the feature is enabled, the metrics endpoint is served via https and protected via authentication and authorization. This works the same way as
metrics in core Kubernetes components: [Metrics in Kubernetes](https://kubernetes.io/docs/concepts/cluster-administration/system-metrics/).

To continue serving metrics via http, the following configuration can be used:
```yaml
        args:
          - "--diagnostics-address=localhost:8080"
          - "--insecure-diagnostics"
```

The same can be achieved via clusterctl:
```bash
export CAPI_DIAGNOSTICS_ADDRESS="localhost:8080"
export CAPI_INSECURE_DIAGNOSTICS="true"
clusterctl init ...
```
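
Instead of exporting environment variables, the same variables can also be set in the clusterctl configuration file, typically located at `$XDG_CONFIG_HOME/cluster-api/clusterctl.yaml` (the exact path depends on your setup):

```yaml
# clusterctl.yaml - variables are substituted into the provider manifests on `clusterctl init`
CAPI_DIAGNOSTICS_ADDRESS: "localhost:8080"
CAPI_INSECURE_DIAGNOSTICS: "true"
```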

**Note**: If insecure serving is configured, the pprof and log level endpoints are disabled for security reasons.

## Scraping metrics

A ServiceAccount token is now required to scrape metrics. The corresponding ServiceAccount needs permissions on the `/metrics` path.
This can be achieved e.g. by following the [Kubernetes documentation](https://kubernetes.io/docs/concepts/cluster-administration/system-metrics/).

### via Prometheus

With the Prometheus Helm chart it is as easy as using the following config for the Prometheus job scraping the Cluster API controllers:
```yaml
    scheme: https
    authorization:
      type: Bearer
      credentials_file: /var/run/secrets/kubernetes.io/serviceaccount/token
    tls_config:
      ca_file: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
      # The diagnostics endpoint is using a self-signed certificate, so we don't verify it.
      insecure_skip_verify: true
```

For more details please see our Prometheus development setup: [Prometheus](https://github.com/kubernetes-sigs/cluster-api/tree/main/hack/observability/prometheus).

**Note**: The Prometheus Helm chart deploys the required ClusterRole out-of-the-box.
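
Outside the Helm chart, the fragment above needs to be embedded in a full scrape job. The following is a minimal sketch: the `job_name`, the pod discovery role and the `cluster.x-k8s.io/provider` label filter are assumptions, so adjust them to how your provider pods are labelled:

```yaml
scrape_configs:
- job_name: capi-controllers  # name is an assumption; pick your own
  scheme: https
  authorization:
    type: Bearer
    credentials_file: /var/run/secrets/kubernetes.io/serviceaccount/token
  tls_config:
    ca_file: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
    # The diagnostics endpoint is using a self-signed certificate, so we don't verify it.
    insecure_skip_verify: true
  kubernetes_sd_configs:
  - role: pod
  relabel_configs:
  # Keep only pods carrying a `cluster.x-k8s.io/provider` label (assumed labelling; adjust as needed).
  - source_labels: [__meta_kubernetes_pod_label_cluster_x_k8s_io_provider]
    regex: .+
    action: keep
```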

### via kubectl

First deploy the following RBAC configuration:
```bash
cat << EOT | kubectl apply -f -
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: default-metrics
rules:
- nonResourceURLs:
  - "/metrics"
  verbs:
  - get
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: default-metrics
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: default-metrics
subjects:
- kind: ServiceAccount
  name: default
  namespace: default
EOT
```

Then let's open a port-forward, create a ServiceAccount token and scrape the metrics:
```bash
# Terminal 1
kubectl -n capi-system port-forward deployments/capi-controller-manager 8443

# Terminal 2
TOKEN=$(kubectl create token default)
curl https://localhost:8443/metrics --header "Authorization: Bearer $TOKEN" -k
```

## Collecting profiles

### via Parca

Parca can be used to continuously scrape profiles from CAPI providers. For more details please see our Parca
development setup: [parca](https://github.com/kubernetes-sigs/cluster-api/tree/main/hack/observability/parca).

### via kubectl

First deploy the following RBAC configuration:
```bash
cat << EOT | kubectl apply -f -
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: default-pprof
rules:
- nonResourceURLs:
  - "/debug/pprof/*"
  verbs:
  - get
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: default-pprof
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: default-pprof
subjects:
- kind: ServiceAccount
  name: default
  namespace: default
EOT
```

Then let's open a port-forward, create a ServiceAccount token and scrape a CPU profile:
```bash
# Terminal 1
kubectl -n capi-system port-forward deployments/capi-controller-manager 8443

# Terminal 2
TOKEN=$(kubectl create token default)
curl "https://localhost:8443/debug/pprof/profile?seconds=10" --header "Authorization: Bearer $TOKEN" -k > ./profile.out
go tool pprof -http=:8080 ./profile.out
```
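
The CPU profile is not the only one available: the standard Go `net/http/pprof` index is served under `/debug/pprof/`, so other profile types such as `heap` or `goroutine` should be reachable the same way (a sketch, assuming the port-forward and `$TOKEN` from above are still active):

```bash
# Heap profile of the controller; swap "heap" for "goroutine", "block", etc. as needed
curl "https://localhost:8443/debug/pprof/heap" --header "Authorization: Bearer $TOKEN" -k > ./heap.out
go tool pprof -http=:8080 ./heap.out
```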

## Changing the log level

### via kubectl

First deploy the following RBAC configuration:
```bash
cat << EOT | kubectl apply -f -
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: default-loglevel
rules:
- nonResourceURLs:
  - "/debug/flags/v"
  verbs:
  - put
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: default-loglevel
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: default-loglevel
subjects:
- kind: ServiceAccount
  name: default
  namespace: default
EOT
```

Then let's open a port-forward, create a ServiceAccount token and change the log level to `8`:
```bash
# Terminal 1
kubectl -n capi-system port-forward deployments/capi-controller-manager 8443

# Terminal 2
TOKEN=$(kubectl create token default)
curl "https://localhost:8443/debug/flags/v" --header "Authorization: Bearer $TOKEN" -X PUT -d '8' -k
```