
---
title: Collecting Metrics with Prometheus
layout: page
---

<div id="toc" class="navigation"></div>

Contour and Envoy expose metrics that can be scraped with Prometheus. By
default, the example `deployment` YAMLs include the annotations Prometheus needs
to scrape them, so they should work out of the box with most configurations.
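For reference, those scrape annotations look roughly like the fragment below. The values shown are illustrative of the defaults described in this guide; verify them against the actual manifests for your Contour version.

```yaml
# Illustrative pod-template annotations for Prometheus scraping.
metadata:
  annotations:
    prometheus.io/scrape: "true"
    # Contour pods expose /metrics on port 8000:
    prometheus.io/port: "8000"
    # Envoy pods expose /stats/prometheus on port 8002 instead:
    # prometheus.io/port: "8002"
    # prometheus.io/path: "/stats/prometheus"
```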

## Envoy Metrics

Envoy typically [exposes metrics](https://www.envoyproxy.io/docs/envoy/v1.15.0/configuration/http/http_conn_man/stats#config-http-conn-man-stats) through an endpoint on its admin interface. To
avoid exposing the entire admin interface to Prometheus (and other workloads in
the cluster), Contour configures a static listener that sends traffic to the
stats endpoint and nowhere else.

Envoy serves a Prometheus-compatible `/stats/prometheus` endpoint for metrics on
port `8002`.
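A direct Prometheus scrape job against this listener would look roughly like the following sketch. The target IP is a placeholder; in practice the pod annotations and Kubernetes service discovery take care of this.

```yaml
# Sketch of a Prometheus scrape job for the Envoy stats listener.
# 10.0.0.5 is a placeholder pod IP, not a real address.
scrape_configs:
  - job_name: envoy
    metrics_path: /stats/prometheus
    static_configs:
      - targets: ["10.0.0.5:8002"]
```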

## Contour Metrics

Contour exposes a Prometheus-compatible `/metrics` endpoint that listens on port 8000 by default. The address and port can be configured with the `--http-address` and `--http-port` flags of the `serve` command.

**Note:** when installing Contour, the `Service` manifest must be updated to expose the same port as the configured flag.
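For example, moving the metrics endpoint to a non-default port means changing both the `serve` flags and the Service. The fragments below are illustrative only; port 9100 is an arbitrary example, not a Contour default.

```yaml
# Deployment pod template fragment: pass the flags to `contour serve`.
args:
  - serve
  - --http-address=0.0.0.0
  - --http-port=9100   # arbitrary example port
---
# Matching Service port fragment for the metrics endpoint.
ports:
  - name: metrics
    port: 9100
    targetPort: 9100
```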

**The metrics endpoint exposes the following metrics:**

{{% include "guides/metrics/table.md" %}}

## Sample Deployment

The `/examples` directory contains example deployment files that can be used to spin up a test environment.
All of the deployments there carry Prometheus scrape annotations by default, so any of them can be used with the following quick start instructions.

### Deploy Prometheus

A sample deployment of Prometheus and Alertmanager is provided that uses temporary storage. This deployment can be used for testing and development, but might not be suitable for all environments.

#### Stateful Deployment

A stateful deployment of Prometheus should use persistent storage with [Persistent Volumes and Persistent Volume Claims][1] to maintain a stable association between a data volume and the Prometheus Pod.
Persistent volumes can be provisioned statically or dynamically, depending on the backend storage implementation used in the environment in which the cluster is deployed. For more information, see the [Kubernetes documentation on types of persistent volumes][2].
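Since the sample deployment uses temporary storage, persistence would require adding a claim and mounting it into the Prometheus Pod. A minimal sketch follows, where the name, size, and storage class are all assumptions rather than part of the examples.

```yaml
# Illustrative PersistentVolumeClaim for Prometheus data.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: prometheus-data
  namespace: projectcontour-monitoring
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: standard   # assumption; depends on the cluster
  resources:
    requests:
      storage: 10Gi
```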
    45  
#### Quick start

```sh
# Deploy
$ kubectl apply -f examples/prometheus
```

#### Access the Prometheus web UI

```sh
$ kubectl -n projectcontour-monitoring port-forward $(kubectl -n projectcontour-monitoring get pods -l app=prometheus -l component=server -o jsonpath='{.items[0].metadata.name}') 9090:9090
```

then go to `http://localhost:9090` in your browser.

#### Access the Alertmanager web UI

```sh
$ kubectl -n projectcontour-monitoring port-forward $(kubectl -n projectcontour-monitoring get pods -l app=prometheus -l component=alertmanager -o jsonpath='{.items[0].metadata.name}') 9093:9093
```

then go to `http://localhost:9093` in your browser.

### Deploy Grafana

A sample deployment of Grafana is provided that uses temporary storage.

#### Quick start

```sh
# Deploy
$ kubectl apply -f examples/grafana/

# Create secret with grafana credentials
$ kubectl create secret generic grafana -n projectcontour-monitoring \
    --from-literal=grafana-admin-password=admin \
    --from-literal=grafana-admin-user=admin
```

#### Access the Grafana UI

```sh
$ kubectl port-forward $(kubectl get pods -l app=grafana -n projectcontour-monitoring -o jsonpath='{.items[0].metadata.name}') 3000 -n projectcontour-monitoring
```

then go to `http://localhost:3000` in your browser.
The username and password are the ones you defined when creating the Grafana secret in the previous step.

[1]: https://kubernetes.io/docs/concepts/storage/persistent-volumes/
[2]: https://kubernetes.io/docs/concepts/storage/persistent-volumes/#types-of-persistent-volumes