---
title: "Support metadata API"
linkTitle: "Support metadata API"
weight: 1
slug: support-metadata-api
---

- Author: @gotjosh
- Reviewers: @gouthamve, @pracucci
- Date: March 2020
- Status: Accepted

## Problem Statement
Prometheus holds metric metadata alongside the contents of a scrape. This metadata (`HELP`, `TYPE`, `UNIT` and `METRIC_NAME`) enables [some Prometheus API](https://github.com/prometheus/prometheus/issues/6395) endpoints to expose metadata for integrations (e.g. [Grafana](https://github.com/grafana/grafana/pull/21124)) to consume.

At the time of writing, Cortex does not support the `api/v1/metadata` endpoint that Prometheus implements, as metadata was never propagated via remote write. Recent [work in Prometheus](https://github.com/prometheus/prometheus/pull/6815/files) enables the propagation of metadata.

With this in place, remote write integrations such as Cortex can now receive this data and implement the API endpoint. This gives Cortex users a bit more insight into their metrics.

## Potential Solutions
Before we delve into the solutions, let's set a baseline for how the data is received. This applies almost equally to both.

Metadata from Prometheus is sent in the same [`WriteRequest` proto message](https://github.com/prometheus/prometheus/blob/master/prompb/remote.proto) that the samples use. It lives in a different field (#3, given #2 is already [used internally](https://github.com/cortexproject/cortex/blob/master/pkg/ingester/client/cortex.proto#L36)). The data is a set keyed by the metric name - that is, it is aggregated across targets - and it is sent all at once. This means Cortex will receive a single `WriteRequest` containing the metadata set for that instance at a specified interval.

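As a rough illustration of what the receiving side sees, the sketch below models the shape of such a request. The struct and field names are simplified stand-ins, not the actual `prompb` generated code:

```go
package main

import "fmt"

// Simplified stand-ins for the prompb types: the real generated structs differ,
// but the shape of the data is the same - one metadata entry per metric family,
// aggregated across all targets of the sending Prometheus instance.
type MetricMetadata struct {
	MetricFamilyName string
	Type             string // e.g. "counter", "gauge", "histogram"
	Help             string
	Unit             string
}

type WriteRequest struct {
	// Field #1 carries the samples (time series) as before and field #2 is
	// already used internally by Cortex, so metadata travels in field #3.
	Metadata []MetricMetadata
}

func main() {
	req := WriteRequest{Metadata: []MetricMetadata{
		{MetricFamilyName: "http_requests_total", Type: "counter", Help: "Total number of HTTP requests."},
	}}

	// A receiver can index the whole set by metric family name.
	set := map[string]MetricMetadata{}
	for _, m := range req.Metadata {
		set[m.MetricFamilyName] = m
	}
	fmt.Printf("received metadata for %d metric families\n", len(set))
}
```
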
It is also important to note that this current process is an intermediary step. Eventually, metadata in a request will be sent alongside samples, and only for the samples included. The solutions proposed take this nuance into account to avoid coupling between the current and future state of Prometheus, and hopefully do something now that also works for the future.

As a reference, these are some key numbers regarding the size (and send timings) of the data at hand, taken from our clusters at Grafana Labs:

- On average, a metadata entry (a combination of `HELP`, `TYPE`, `UNIT` and `METRIC_NAME`) is ~55 bytes uncompressed.
- At Grafana Labs, an instance with about 2.6M active series holds ~1241 unique metrics in total.
- With that, we can assume that in a worst-case scenario the metadata set for that instance is ~68 kilobytes uncompressed.
- By default, this data is only propagated once every minute (aligning with the default scrape interval), but this can be adjusted.
- Finally, this gives us a baseline worst-case formula for the data to store per tenant: `~68KB * Replication Factor * # of Instances` (a worked example follows this list). Keep in mind that there is typically a very high overlap of metadata across instances, and we plan to deduplicate in the ingesters.
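
For illustration, assuming a replication factor of 3 and 10 Prometheus instances sending for a single tenant (both figures are hypothetical), the worst case is roughly `68KB * 3 * 10 ≈ 2MB` of metadata held for that tenant before any deduplication.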

### Write Path

1. Store the metadata directly from the distributors into a cache (e.g. Memcached)

Since metadata is received all at once, we could store it directly into an external cache using the tenant ID as a key and still avoid a read-modify-write. However, a very common use case of Cortex is to have multiple Prometheus servers sending data for the same tenant ID. This complicates things, as it requires an intermediary merging phase, making a read-modify-write inevitable.

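A minimal sketch of this approach follows. The `Cache` interface and all names here are hypothetical stand-ins (not an actual Memcached client or Cortex API); it only illustrates why merging sets from multiple instances forces a read-modify-write:

```go
package main

import (
	"context"
	"fmt"
)

// Hypothetical cache interface - a stand-in for a Memcached client. Metadata
// is simplified to a map of metric family name -> HELP text.
type Cache interface {
	Get(ctx context.Context, key string) (map[string]string, error)
	Set(ctx context.Context, key string, value map[string]string) error
}

// storeMetadata shows why multiple Prometheus servers per tenant force a
// read-modify-write: each incoming set covers a single instance, so it must be
// merged with whatever is already stored under the tenant's key.
func storeMetadata(ctx context.Context, c Cache, tenantID string, incoming map[string]string) error {
	existing, err := c.Get(ctx, tenantID) // read
	if err != nil {
		return err
	}
	if existing == nil {
		existing = map[string]string{}
	}
	for name, help := range incoming { // modify: merge this instance's set in
		existing[name] = help
	}
	return c.Set(ctx, tenantID, existing) // write
}

// inMemoryCache is a toy implementation so the example runs end to end.
type inMemoryCache struct{ data map[string]map[string]string }

func (c *inMemoryCache) Get(_ context.Context, key string) (map[string]string, error) {
	return c.data[key], nil
}

func (c *inMemoryCache) Set(_ context.Context, key string, value map[string]string) error {
	c.data[key] = value
	return nil
}

func main() {
	cache := &inMemoryCache{data: map[string]map[string]string{}}
	// Two Prometheus instances of the same tenant send their own sets.
	_ = storeMetadata(context.Background(), cache, "tenant-1", map[string]string{"up": "Target is up."})
	_ = storeMetadata(context.Background(), cache, "tenant-1", map[string]string{"http_requests_total": "Total number of HTTP requests."})
	fmt.Println(len(cache.data["tenant-1"]), "entries after merging") // 2
}
```
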
2. Keep metadata in memory within the ingesters

Similar to what we do with sample data, we can keep the metadata in memory in the ingesters and apply similar semantics. I propose to use the tenant ID as a hash key, distribute the metadata to the ingesters (taking the replication factor into account), keep a set of the metadata across all instances for a single tenant in a hash map, and implement a configurable time-based purge process to deal with metadata churn. Given that we need to ensure fair use, we also propose implementing limits on both the number of metadata entries we can accept and the size of a single entry.

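To make this more concrete, below is a minimal sketch of such an in-memory, per-tenant metadata set with a time-based purge and a per-tenant entry limit. All names, the limit semantics and the locking scheme are illustrative assumptions, not the eventual implementation:

```go
package main

import (
	"sync"
	"time"
)

// metadataEntry is a simplified metadata value plus the time it was last received.
type metadataEntry struct {
	value    string
	lastSeen time.Time
}

// userMetadata holds one tenant's metadata set, keyed by metric family name.
type userMetadata struct {
	mtx     sync.Mutex
	entries map[string]metadataEntry
}

// metadataStore is the per-ingester view: one metadata set per tenant ID.
type metadataStore struct {
	mtx        sync.Mutex
	perUser    map[string]*userMetadata
	maxEntries int // per-tenant limit, illustrative only
}

// add upserts a metadata entry for a tenant, enforcing a simple per-tenant limit.
func (s *metadataStore) add(tenantID, metric, value string) bool {
	s.mtx.Lock()
	um, ok := s.perUser[tenantID]
	if !ok {
		um = &userMetadata{entries: map[string]metadataEntry{}}
		s.perUser[tenantID] = um
	}
	s.mtx.Unlock()

	um.mtx.Lock()
	defer um.mtx.Unlock()
	if _, exists := um.entries[metric]; !exists && len(um.entries) >= s.maxEntries {
		return false // over the limit; the real limit semantics are to be defined
	}
	um.entries[metric] = metadataEntry{value: value, lastSeen: time.Now()}
	return true
}

// purge drops entries not refreshed within ttl, dealing with metadata churn.
func (s *metadataStore) purge(ttl time.Duration) {
	cutoff := time.Now().Add(-ttl)
	s.mtx.Lock()
	defer s.mtx.Unlock()
	for _, um := range s.perUser {
		um.mtx.Lock()
		for metric, e := range um.entries {
			if e.lastSeen.Before(cutoff) {
				delete(um.entries, metric)
			}
		}
		um.mtx.Unlock()
	}
}

func main() {
	s := &metadataStore{perUser: map[string]*userMetadata{}, maxEntries: 10000}
	s.add("tenant-1", "up", "Target is up.")
	s.purge(10 * time.Minute)
}
```
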
### Read Path

In my eyes, the read path only has one option. At the time of writing, Cortex uses a [`DummyTargetRetriever`](https://github.com/cortexproject/cortex/blob/master/pkg/querier/dummy.go#L11-L20) to signal that these API endpoints are not implemented. We would need to modify the Prometheus interface to accept a `Context` and extract the tenant ID from there. Then, we would use the tenant ID to query the ingesters for the data, deduplicate it, and serve it.

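A rough sketch of that flow is shown below. The `IngesterClient` interface and the tenant ID extraction helper are hypothetical stand-ins for the real Cortex types; only the overall shape (tenant from context, fan out, deduplicate) reflects the proposal:

```go
package main

import (
	"context"
	"fmt"
)

// MetricMetadata is a simplified, comparable metadata entry.
type MetricMetadata struct {
	MetricFamilyName, Type, Help, Unit string
}

// IngesterClient is a hypothetical stand-in for the real ingester client.
type IngesterClient interface {
	MetricsMetadata(ctx context.Context, tenantID string) ([]MetricMetadata, error)
}

type contextKey struct{}

// tenantFromContext is an illustrative helper; Cortex has its own mechanism
// for propagating the tenant ID through the request context.
func tenantFromContext(ctx context.Context) (string, error) {
	id, ok := ctx.Value(contextKey{}).(string)
	if !ok {
		return "", fmt.Errorf("no tenant ID in context")
	}
	return id, nil
}

// metadata fans out to the (replicated) ingesters and deduplicates the results,
// since each replica holds a copy of the tenant's metadata set.
func metadata(ctx context.Context, ingesters []IngesterClient) ([]MetricMetadata, error) {
	tenantID, err := tenantFromContext(ctx)
	if err != nil {
		return nil, err
	}

	dedup := map[MetricMetadata]struct{}{}
	for _, ing := range ingesters { // the real code would fan out concurrently
		mm, err := ing.MetricsMetadata(ctx, tenantID)
		if err != nil {
			return nil, err
		}
		for _, m := range mm {
			dedup[m] = struct{}{}
		}
	}

	result := make([]MetricMetadata, 0, len(dedup))
	for m := range dedup {
		result = append(result, m)
	}
	return result, nil
}

func main() {
	ctx := context.WithValue(context.Background(), contextKey{}, "tenant-1")
	mm, _ := metadata(ctx, nil)
	fmt.Println(len(mm), "metadata entries")
}
```
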
## Conclusions

I conclude that solution #2 is ideal for this work on the write path. It allows us to use semantics similar to samples, thus reducing operational complexity, and lays the groundwork for when we start receiving metadata alongside samples.

There is one last piece to address: allowing metadata to survive rolling restarts. Option #1 handles this well, given the aim would be to use an external cache such as Memcached. Option #2 lacks this, as it does not include any plans to persist this data. However, given that Prometheus sends metadata every minute by default and we do not need a high level of consistency, we consider an eventual consistency of up to 1 minute in the default case acceptable.

## References
- [Prometheus Propagate metadata via Remote Write Design Doc](https://docs.google.com/document/d/1LoCWPAIIbGSq59NG3ZYyvkeNb8Ymz28PUKbg_yhAzvE/edit#)
- [Prometheus Propagate metadata via Remote Write Design Issue](https://github.com/prometheus/prometheus/issues/6395)