
---
type: proposal
title: Scalable Rule Storage
status: complete
menu: proposals-done
---

## Summary

There is no way to scale rule evaluation and storage today except by functionally sharding rules onto multiple instances of the `thanos rule` component. However, we have already solved scaling storage of time-series across multiple processes: `thanos receive`.

To scale rule evaluation and storage, this proposal allows the `thanos rule` component to run in a stateless mode, storing the results of rule evaluations by sending them to a `thanos receive` hashring instead of storing them locally.
## Motivation

A few large rules can create a significant amount of resulting time-series, which limits the scalability of Thanos Rule, as it uses a single embedded TSDB.

Additionally, scaling out the rule component in terms of rule evaluations causes further fragmentation of TSDB blocks, as multiple rule instances produce hard-to-deduplicate samples. While this can be handled with vertical compaction, it might cause some operational complexity and unnecessary load on the system.

## Goals

Allow scaling storage and execution of rule evaluations.

## Verification

* Run all rule component e2e tests against the new mode as well.
## Proposal

Allow specifying one of the following flags:

* `--remote-write`
* `--remote-write.config` or `--remote-write.config-file`, following the same scheme as [`--query.config` and `--query.config-file`](../components/rule.md#query-api)
* `--remote-write.tenant-label-name`, the label whose value is used as the tenant communicated to the receive component

If any of these flags are specified, the ruler runs in a stateless mode without local storage, instead writing samples to the configured remote server, which must implement the `storepb.WritableStore` gRPC service. A minimal sketch of this write path is shown below.
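The following Go snippet is a rough sketch, not the proposed implementation: it only illustrates what forwarding a single evaluated series to a receive endpoint over the `storepb` write service could look like. The `remoteWriteSamples` helper and its signature are hypothetical; the generated client constructor `NewWriteableStoreClient` and the `WriteRequest` fields reflect the `storepb` package as it exists in the Thanos codebase.

```go
// Hypothetical sketch of the stateless ruler's write path: instead of
// appending evaluation results to a local TSDB, forward them to a member
// of the receive hashring via the storepb write service.
package main

import (
	"context"

	"google.golang.org/grpc"

	"github.com/thanos-io/thanos/pkg/store/storepb"
	"github.com/thanos-io/thanos/pkg/store/storepb/prompb"
)

// remoteWriteSamples (hypothetical helper) forwards one evaluated series to
// a receive endpoint. The tenant is assumed to be derived from the label
// named by the proposed --remote-write.tenant-label-name flag.
func remoteWriteSamples(ctx context.Context, conn *grpc.ClientConn, tenant string, ts prompb.TimeSeries) error {
	client := storepb.NewWriteableStoreClient(conn)
	_, err := client.RemoteWrite(ctx, &storepb.WriteRequest{
		Timeseries: []prompb.TimeSeries{ts},
		Tenant:     tenant,
	})
	return err
}
```

In practice the ruler would batch many series into each request, much like Prometheus remote write does, rather than sending one series at a time.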
## Alternatives

Continue to allow spreading load only by functionally sharding rules.

## Work Plan

Implement functionality alongside the existing architecture of the rule component.

## Open questions
### Multi-tenancy model

As it stands, this proposal does not cover any multi-tenancy aspects of the receive component. There are two strategies we could go with:

* Have a configurable label that determines the tenant in requests.
* Change the receive component to determine the tenant from a label on the series being written, instead of from a request header.

As the first mechanism already exists, this proposal will continue with that approach and potentially reevaluate in the future; a sketch of the label-based tenant selection follows below.
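As a concrete illustration of the first strategy, here is a small, hypothetical Go helper showing how the tenant could be derived from a configurable label on each evaluated series. The function name and the default-tenant fallback are assumptions for illustration only, not part of the proposal.

```go
// Hypothetical sketch of label-based tenant selection: the label named by
// the proposed --remote-write.tenant-label-name flag is looked up on each
// evaluated series and its value is used as the tenant of the write request.
package main

import "github.com/prometheus/prometheus/model/labels"

// tenantFor (hypothetical helper) returns the tenant for one series, falling
// back to a default tenant when the configured label is absent.
func tenantFor(lset labels.Labels, tenantLabel, defaultTenant string) string {
	if v := lset.Get(tenantLabel); v != "" {
		return v
	}
	return defaultTenant
}
```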
### Removal of embedded TSDB

For a start, this functionality will be implemented alongside the current embedded TSDB. Once experience with the new mode has been gathered, removing the embedded TSDB may be reevaluated, but no changes are planned for now. Alternatively, the receive component could be embedded into the rule component in an attempt to minimize code paths while retaining functionality.