# Prometheus

[Prometheus](https://prometheus.io/), a [Cloud Native Computing Foundation](https://cncf.io/) project, is a systems and service monitoring system. It collects metrics from configured targets at given intervals, evaluates rule expressions, displays the results, and can trigger alerts if some condition is observed to be true.

This chart bootstraps a [Prometheus](https://prometheus.io/) deployment on a [Kubernetes](http://kubernetes.io) cluster using the [Helm](https://helm.sh) package manager.

## Prerequisites

- Kubernetes 1.3+ with Beta APIs enabled

## Get Repo Info

```console
helm repo add prometheus-community https://prometheus-community.github.io/helm-charts
helm repo add stable https://charts.helm.sh/stable
helm repo update
```

_See [helm repo](https://helm.sh/docs/helm/helm_repo/) for command documentation._

## Install Chart

```console
# Helm 3
$ helm install [RELEASE_NAME] prometheus-community/prometheus

# Helm 2
$ helm install --name [RELEASE_NAME] prometheus-community/prometheus
```

_See [configuration](#configuration) below._

_See [helm install](https://helm.sh/docs/helm/helm_install/) for command documentation._

## Dependencies

By default this chart installs additional, dependent charts:

- [stable/kube-state-metrics](https://github.com/helm/charts/tree/master/stable/kube-state-metrics)

To disable the dependency during installation, set `kubeStateMetrics.enabled` to `false`.
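
For example, to skip installing kube-state-metrics at install time (a minimal sketch; the release name is a placeholder):

```console
helm install [RELEASE_NAME] prometheus-community/prometheus --set kubeStateMetrics.enabled=false
```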

_See [helm dependency](https://helm.sh/docs/helm/helm_dependency/) for command documentation._

## Uninstall Chart

```console
# Helm 3
$ helm uninstall [RELEASE_NAME]

# Helm 2
$ helm delete --purge [RELEASE_NAME]
```

This removes all the Kubernetes components associated with the chart and deletes the release.

_See [helm uninstall](https://helm.sh/docs/helm/helm_uninstall/) for command documentation._

## Upgrading Chart

```console
# Helm 3 or 2
$ helm upgrade [RELEASE_NAME] [CHART] --install
```

_See [helm upgrade](https://helm.sh/docs/helm/helm_upgrade/) for command documentation._

### To 9.0

Version 9.0 adds a new option to enable or disable the Prometheus Server. This supports the use case of running a Prometheus server in one k8s cluster and scraping exporters in another cluster while using the same chart for each deployment. To install the server, `server.enabled` must be set to `true`.
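
For example, an exporter-only deployment in the second cluster might disable the server in its values override (a minimal sketch, not a complete values file):

```yaml
server:
  enabled: false
```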

### To 5.0

As of version 5.0, this chart uses Prometheus 2.x. This version of Prometheus introduces a new data format and is not compatible with Prometheus 1.x. It is recommended to install this as a new release, as updating existing releases will not work. See the [prometheus docs](https://prometheus.io/docs/prometheus/latest/migration/#storage) for instructions on retaining your old data.

Prometheus version 2.x has made changes to alertmanager, storage and recording rules. Check out the migration guide [here](https://prometheus.io/docs/prometheus/2.0/migration/).

Users of this chart will need to update their alerting rules to the new format before they can upgrade.
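
For reference, Prometheus 2.x expects alerting rules in the YAML rule-group format; a minimal sketch (the group name, alert name, and expression are illustrative):

```yaml
groups:
  - name: example
    rules:
      - alert: InstanceDown
        expr: up == 0
        for: 5m
        labels:
          severity: page
        annotations:
          summary: An instance has been down for more than 5 minutes.
```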

### Example Migration

Assume you have an existing release of the prometheus chart named `prometheus-old`. To update to Prometheus 2.x while keeping your old data, do the following:

1. Update the `prometheus-old` release. Disable scraping on every component besides the prometheus server, similar to the configuration below:

   ```yaml
   alertmanager:
     enabled: false
   alertmanagerFiles:
     alertmanager.yml: ""
   kubeStateMetrics:
     enabled: false
   nodeExporter:
     enabled: false
   pushgateway:
     enabled: false
   server:
     extraArgs:
       storage.local.retention: 720h
   serverFiles:
     alerts: ""
     prometheus.yml: ""
     rules: ""
   ```

1. Deploy a new release of the chart with version 5.0+ using prometheus 2.x. In the values.yaml, set the scrape config as usual, and also add the `prometheus-old` instance as a remote-read target.

   ```yaml
   prometheus.yml:
     ...
     remote_read:
     - url: http://prometheus-old/api/v1/read
     ...
   ```

   Old data will be available when you query the new prometheus instance.

## Configuration

See [Customizing the Chart Before Installing](https://helm.sh/docs/intro/using_helm/#customizing-the-chart-before-installing). To see all configurable options with detailed comments, visit the chart's [values.yaml](./values.yaml), or run these configuration commands:

```console
# Helm 2
$ helm inspect values prometheus-community/prometheus

# Helm 3
$ helm show values prometheus-community/prometheus
```

You may similarly use the above configuration commands on each chart [dependency](#dependencies) to see its configuration.

### Scraping Pod Metrics via Annotations

This chart uses a default configuration that causes prometheus to scrape a variety of kubernetes resource types, provided they have the correct annotations. This section describes how to configure pods to be scraped; to see how other resource types are scraped, run `helm template` to get the kubernetes resource definitions, and then compare the prometheus configuration in the ConfigMap against the prometheus documentation for [relabel_config](https://prometheus.io/docs/prometheus/latest/configuration/configuration/#relabel_config) and [kubernetes_sd_config](https://prometheus.io/docs/prometheus/latest/configuration/configuration/#kubernetes_sd_config).
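
As a sketch, you can render the chart locally and inspect the generated configuration (the release name and output file are illustrative):

```console
helm template [RELEASE_NAME] prometheus-community/prometheus > rendered.yaml
```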

In order to get prometheus to scrape pods, you must add annotations to the pods as below:

```yaml
metadata:
  annotations:
    prometheus.io/scrape: "true"
    prometheus.io/path: /metrics
    prometheus.io/port: "8080"
```

You should adjust `prometheus.io/path` based on the URL that your pod serves metrics from. `prometheus.io/port` should be set to the port that your pod serves metrics from. Note that the values for `prometheus.io/scrape` and `prometheus.io/port` must be enclosed in double quotes.

### Sharing Alerts Between Services

Note that when [installing](#install-chart) or [upgrading](#upgrading-chart) you may use multiple values override files. This is particularly useful when you have alerts belonging to multiple services in the cluster. For example,

```yaml
# values.yaml
# ...

# service1-alert.yaml
serverFiles:
  alerts:
    service1:
      - alert: anAlert
      # ...

# service2-alert.yaml
serverFiles:
  alerts:
    service2:
      - alert: anAlert
      # ...
```

```console
helm install [RELEASE_NAME] prometheus-community/prometheus -f values.yaml -f service1-alert.yaml -f service2-alert.yaml
```

### RBAC Configuration

Roles and RoleBindings resources will be created automatically for the `server` service.

To manually set up RBAC, set the parameter `rbac.create=false` and specify the service account to be used for each service by setting the parameters `serviceAccounts.{{ component }}.create` to `false` and `serviceAccounts.{{ component }}.name` to the name of a pre-existing service account.
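
A sketch of such an override for the server component, assuming a pre-existing service account named `prometheus-server` (the name is illustrative):

```yaml
rbac:
  create: false

serviceAccounts:
  server:
    create: false
    name: prometheus-server
```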

> **Tip**: You can refer to the default `*-clusterrole.yaml` and `*-clusterrolebinding.yaml` files in [templates](templates/) to customize your own.

### ConfigMap Files

AlertManager is configured through [alertmanager.yml](https://prometheus.io/docs/alerting/configuration/). This file (and any others listed in `alertmanagerFiles`) will be mounted into the `alertmanager` pod.
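
A minimal values sketch overriding that file (the receiver name is illustrative):

```yaml
alertmanagerFiles:
  alertmanager.yml:
    global: {}
    route:
      receiver: default-receiver
    receivers:
      - name: default-receiver
```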

Prometheus is configured through [prometheus.yml](https://prometheus.io/docs/operating/configuration/). This file (and any others listed in `serverFiles`) will be mounted into the `server` pod.

### Ingress TLS

If your cluster allows automatic creation/retrieval of TLS certificates (e.g. [cert-manager](https://github.com/jetstack/cert-manager)), please refer to the documentation for that mechanism.

To manually configure TLS, first create/retrieve a key & certificate pair for the address(es) you wish to protect. Then create a TLS secret in the namespace:

```console
kubectl create secret tls prometheus-server-tls --cert=path/to/tls.cert --key=path/to/tls.key
```

Include the secret's name, along with the desired hostnames, in the alertmanager/server Ingress TLS section of your custom `values.yaml` file:

```yaml
server:
  ingress:
    ## If true, Prometheus server Ingress will be created
    ##
    enabled: true

    ## Prometheus server Ingress hostnames
    ## Must be provided if Ingress is enabled
    ##
    hosts:
      - prometheus.domain.com

    ## Prometheus server Ingress TLS configuration
    ## Secrets must be manually created in the namespace
    ##
    tls:
      - secretName: prometheus-server-tls
        hosts:
          - prometheus.domain.com
```

### NetworkPolicy

Enabling Network Policy for Prometheus will secure connections to Alert Manager and Kube State Metrics by only accepting connections from Prometheus Server. All inbound connections to Prometheus Server are still allowed.

To enable network policy for Prometheus, install a networking plugin that implements the Kubernetes NetworkPolicy spec, and set `networkPolicy.enabled` to `true`.
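
For example (a sketch; combine with any other overrides you already pass):

```console
helm upgrade [RELEASE_NAME] prometheus-community/prometheus --install --set networkPolicy.enabled=true
```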

If NetworkPolicy is enabled for Prometheus' scrape targets, you may also need to manually create a NetworkPolicy that allows Prometheus to reach them.
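
A minimal sketch of such a policy, assuming the target pods are labeled `app: my-app`, expose metrics on port 8080, and that the Prometheus server pods carry this chart's `app: prometheus` and `component: server` labels (verify the labels and port in your own deployment):

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-prometheus-scrape   # hypothetical policy name
spec:
  podSelector:
    matchLabels:
      app: my-app                 # hypothetical scrape-target pods
  ingress:
    - from:
        - namespaceSelector: {}   # any namespace; narrow this in practice
          podSelector:
            matchLabels:
              app: prometheus
              component: server
      ports:
        - port: 8080              # the port exposed via prometheus.io/port
```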