---
menu:
    main:
        parent: "How-Tos"
        weight: 20
title: "Running On Kubernetes"
date: 2019-10-08T17:24:32-07:00
---

Kubernetes is a popular platform for running containers, and the Cloudprober container runs on Kubernetes right out of the box. This document shows how you can use a ConfigMap to provide Cloudprober's config, and how to reload Cloudprober on config changes.

## ConfigMap

In Kubernetes, a convenient way to provide config to containers is through ConfigMaps. Let's create a config that specifies a probe to monitor "www.google.com":

```
probe {
  name: "google-http"
  type: HTTP
  targets {
    host_names: "www.google.com"
  }
  http_probe {}
  interval_msec: 15000
  timeout_msec: 1000
}
```

Save this config as `cloudprober.cfg`, then create a ConfigMap from it using the following command:

```bash
kubectl create configmap cloudprober-config \
  --from-file=cloudprober.cfg=cloudprober.cfg
```

If you change the config, you can update the ConfigMap using the following command:

```bash
kubectl create configmap cloudprober-config \
  --from-file=cloudprober.cfg=cloudprober.cfg -o yaml --dry-run=client | \
  kubectl replace -f -
```

## Deployment

Now let's create a `deployment.yaml` that adds the config volume and the Cloudprober container:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: cloudprober
spec:
  replicas: 1
  selector:
    matchLabels:
      app: cloudprober
  template:
    metadata:
      annotations:
        checksum/config: "${CONFIG_CHECKSUM}"
      labels:
        app: cloudprober
    spec:
      volumes:
      - name: cloudprober-config
        configMap:
          name: cloudprober-config
      containers:
      - name: cloudprober
        image: cloudprober/cloudprober
        command: ["/cloudprober"]
        args: [
          "--config_file", "/cfg/cloudprober.cfg",
          "--logtostderr"
        ]
        volumeMounts:
        - name: cloudprober-config
          mountPath: /cfg
        ports:
        - name: http
          containerPort: 9313
---
apiVersion: v1
kind: Service
metadata:
  name: cloudprober
  labels:
    app: cloudprober
spec:
  ports:
  - port: 9313
    protocol: TCP
    targetPort: 9313
  selector:
    app: cloudprober
  type: NodePort
```

Note that we added a `checksum/config` annotation to the pod template; this annotation lets us roll the deployment whenever the Cloudprober config changes. We can set this annotation from a checksum of the ConfigMap's content, and update the deployment using the following one-liner:
   102  
   103  ```bash
   104  # Update the config checksum annotation in deployment.yaml before running
   105  # kubectl apply.
   106  export CONFIG_CHECKSUM=$(kubectl get cm/cloudprober-config -o yaml | sha256sum) && \
   107  cat deployment.yaml | envsubst | kubectl apply -f -
   108  ```
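The checksum-and-substitute pattern above can be sketched locally, with no cluster required: `sha256sum` turns the config content into a stable digest, and that digest lands in the pod-template annotation, so any config change yields a new pod template. A minimal sketch (the literal config string here is just a stand-in for the `kubectl get cm` output, and `awk` is used to keep only the hex digest):

```bash
# Compute a checksum of the config content, the same way the one-liner does.
CONFIG='probe { name: "google-http" type: HTTP }'
CONFIG_CHECKSUM=$(printf '%s' "$CONFIG" | sha256sum | awk '{print $1}')

# Substitute it into an annotation line, as envsubst does for deployment.yaml.
echo "checksum/config: \"${CONFIG_CHECKSUM}\""
```

Any edit to the config changes the checksum, which changes the pod template and triggers a rolling update of the deployment.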

(Note: If you use Helm for Kubernetes deployments, Helm provides [a more native way](https://helm.sh/docs/howto/charts_tips_and_tricks/#automatically-roll-deployments) to include config checksums in deployments.)

Applying the above YAML file should create a deployment and a service on port 9313:

```bash
$ kubectl get deployment
NAME          READY   UP-TO-DATE   AVAILABLE   AGE
cloudprober   1/1     1            1           94m

$ kubectl get service cloudprober
NAME          TYPE       CLUSTER-IP      EXTERNAL-IP   PORT(S)          AGE
cloudprober   NodePort   10.31.249.108   <none>        9313:31367/TCP   94m
```

Now you should be able to access various Cloudprober URLs (`/status` for status, `/config` for config, `/metrics` for Prometheus-format metrics) from within the cluster. For quick verification, you can also set up a port forwarder and access these URLs locally at `localhost:9313`:

```bash
kubectl port-forward svc/cloudprober 9313:9313
```

Once you've verified that everything is working as expected, you can proceed to set up metrics collection through Prometheus (or Stackdriver) in the usual way.

## Kubernetes Targets

If you're running on Kubernetes, you probably want to monitor Kubernetes resources (e.g., pods, endpoints, etc.) as well. The good news is that Cloudprober supports dynamic [targets discovery](/concepts/targets/) for Kubernetes resources.

For example, the following config adds HTTP probing of the Kubernetes endpoints named 'cloudprober' (equivalent to _kubectl get ep cloudprober_):

```
probe {
  name: "pod-to-endpoints"
  type: HTTP

  targets {
    # RDS (resource discovery service) targets
    # Equivalent to kubectl get ep cloudprober
    rds_targets {
      resource_path: "k8s://endpoints/cloudprober"
    }
  }

  http_probe {
    resolve_first: true
    relative_url: "/status"
  }
}

# Run an RDS gRPC server to discover Kubernetes targets.
rds_server {
  provider {
    # For all options, please take a look at:
    # https://github.com/google/cloudprober/blob/master/rds/kubernetes/proto/config.proto#L38
    kubernetes_config {
      endpoints {}
    }
  }
}
```

This config adds a probe for endpoints named 'cloudprober'. Kubernetes targets configuration is further explained in the section below.

### Kubernetes RDS Targets

As explained [here](/concepts/targets/#resource-discovery-service), Cloudprober uses RDS for dynamic targets discovery. In the above config, we add an internal RDS server that provides expansion for Kubernetes `endpoints` (the other supported types are _pods_ and _services_). Inside the probe, we specify targets of the type [rds_targets](/concepts/targets/#resource-discovery-service) with the resource path `k8s://endpoints/cloudprober`. This resource path specifies a resource of the type 'endpoints' with the name 'cloudprober'. (Hint: you can skip the name part of the resource path to discover all endpoints in the cluster.)
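Following that hint, a targets block like this sketch would discover and probe every endpoints resource in the cluster, rather than just the one named 'cloudprober':

```
targets {
  rds_targets {
    # No resource name after the type: expands to all
    # endpoints in the cluster.
    resource_path: "k8s://endpoints"
  }
}
```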

### Cluster Resources Access

The RDS server that we added above discovers cluster resources using the Kubernetes APIs. It assumes that we are interested in the cluster we are running in, and uses the in-cluster config to talk to the Kubernetes API server. For this setup to work, we need to give our container read-only access to Kubernetes resources:

```bash
# Define a ClusterRole (resource-reader) for read-only access to the cluster
# resources, and bind this ClusterRole to the default service account.

cat <<EOF | kubectl apply -f -
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  annotations:
    rbac.authorization.kubernetes.io/autoupdate: "true"
  name: resource-reader
rules:
- apiGroups: [""]
  resources: ["*"]
  verbs: ["get", "list"]
- apiGroups:
  - extensions
  - "networking.k8s.io" # k8s 1.14+
  resources:
  - ingresses
  - ingresses/status
  verbs: ["get", "list"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: default-resource-reader
subjects:
- kind: ServiceAccount
  name: default
  namespace: default
roleRef:
  kind: ClusterRole
  name: resource-reader
  apiGroup: rbac.authorization.k8s.io
EOF
```

This gives the `default` service account read-only access to the cluster resources. If you don't want to grant the `default` service account this access, you can create a new service account for Cloudprober and use it in the deployment spec above.
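For example, a dedicated service account could look like the following sketch (the `cloudprober-sa` name is illustrative, not part of Cloudprober); the binding's subject and the deployment's pod spec would then be updated to match:

```yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: cloudprober-sa
  namespace: default
---
# In the ClusterRoleBinding above, point the subject at this account:
#   subjects:
#   - kind: ServiceAccount
#     name: cloudprober-sa
#     namespace: default
#
# And in the deployment's pod spec, add:
#   spec:
#     serviceAccountName: cloudprober-sa
```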

### Push Config Update

To push a new Cloudprober config to the cluster:

```bash
# Update the config map
kubectl create configmap cloudprober-config \
  --from-file=cloudprober.cfg=cloudprober.cfg -o yaml --dry-run=client | \
  kubectl replace -f -

# Update deployment
export CONFIG_CHECKSUM=$(kubectl get cm/cloudprober-config -o yaml | sha256sum) && \
cat deployment.yaml | envsubst | kubectl apply -f -
```

Cloudprober should now start monitoring the 'cloudprober' endpoints. To verify:

```bash
# Set up port forwarding so that you can access cloudprober:9313 through
# localhost:9313.
kubectl port-forward svc/cloudprober 9313:9313 &

# Check config
curl localhost:9313/config

# Check metrics
curl localhost:9313/metrics
```

If you're running on GKE and have not disabled Cloud Logging, you'll also see logs in [Stackdriver Logging](https://pantheon.corp.google.com/logs/viewer?resource=gce_instance).