# Compactor

The `thanos compact` command applies the compaction procedure of the Prometheus 2.0 storage engine to block data stored in object storage. It is generally not semantically concurrency safe and must be deployed as a singleton against a bucket.

Compactor is also responsible for downsampling of data. There is a time delay before downsampling at a given resolution is possible. This is necessary because downsampled chunks will have fewer samples in them, and as chunks are fixed size, data spanning more time will be required to fill them.

* Creating 5m downsampling for blocks older than **40 hours** (2d)
* Creating 1h downsampling for blocks older than **10 days** (2w)

Example:

```bash
thanos compact --data-dir /tmp/thanos-compact --objstore.config-file=bucket.yml
```

Example content of `bucket.yml`:

```yaml
type: GCS
config:
  bucket: example-bucket
```

By default, `thanos compact` will run to completion, which makes it possible to execute it as a cron job. Using the `--wait` and `--wait-interval=5m` flags, it is possible to keep it running continuously.
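
For example, a minimal sketch of a long-running deployment (reusing the `bucket.yml` example above; the data directory path is arbitrary):

```bash
# Instead of exiting after one full pass (the default, cron-friendly behaviour),
# keep running and check for new work every 5 minutes.
thanos compact \
  --data-dir /var/thanos/compact \
  --objstore.config-file=bucket.yml \
  --wait \
  --wait-interval=5m
```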

**Compactor, Sidecar, Receive and Ruler are the only Thanos components which should have write access to object storage, with only Compactor being able to delete data.**

> **NOTE:** High availability for Compactor is generally not required. See the [Availability](#availability) section.

## Compaction

The Compactor, among other things, is responsible for compacting multiple blocks into one.

Why even compact? This is a process, also done by Prometheus, to reduce the number of blocks and to compact their indexes. We can compact an index quite well in most cases, because series usually live longer than the duration of the smallest blocks (2 hours).

### Compaction Groups / Block Streams

Usually those blocks come from the same source. We call blocks from a single source a "stream" of blocks or a "compaction group". We distinguish streams by **external labels**. Blocks with the same labels are considered to be produced by the same source.

This is because `external_labels` are added by the Prometheus instance which produced the block.

⚠ This is why these labels on blocks must be both *unique* and *persistent* across different Prometheus instances. ⚠

* By *unique*, we mean that the set of labels in a Prometheus instance must be different from all other sets of labels of your Prometheus instances, so that the compactor will be able to group blocks by Prometheus instance.
* By *persistent*, we mean that one Prometheus instance must keep the same labels if it restarts, so that the compactor will keep compacting blocks from an instance even when a Prometheus instance goes down for some time.

Natively, Prometheus does not store external labels anywhere. This is why external labels are added only at upload time to the `ThanosMeta` section of `meta.json` in each block.
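
To see which streams exist in a bucket, the `thanos tools bucket inspect` subcommand can list every block together with its external labels. A hedged sketch, assuming the `bucket.yml` file from the example above:

```bash
# Print all blocks with their external labels, resolution and time ranges,
# which makes the compaction groups ("streams") in the bucket visible.
thanos tools bucket inspect --objstore.config-file=bucket.yml
```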

> **NOTE:** In default mode the state of two or more blocks having the same external labels and overlapping in time is considered an unhealthy situation. Refer to [Overlap Issue Troubleshooting](../operating/troubleshooting.md#overlaps) for more info. This results in the compactor [halting](#halting).

#### Warning: Only one instance of Compactor may run against a single stream of blocks in a single object storage.

:warning: :warning: :warning:

Because not all object storage providers implement a safe locking mechanism, you need to ensure on your own that only a single Compactor is running against a single stream of blocks on a single bucket. Running more than one Compactor may result in [Overlap Issues](../operating/troubleshooting.md#overlaps) which have to be resolved manually.

This rule also means that there could be a problem when both compacted and non-compacted blocks are being uploaded by a sidecar. This is why the "upload compacted" function still lives under a separate `--shipper.upload-compacted` flag that helps to ensure that compacted blocks are uploaded before anything else. The singleton rule is also why local Prometheus compaction has to be disabled in order to use Thanos Sidecar with the upload option. Use - at your own risk! - the hidden `--shipper.ignore-unequal-block-size` flag to disable this check.

> **NOTE:** In future versions of Thanos it's possible that both restrictions will be removed once [vertical compaction](#vertical-compactions) reaches production status.

You can, however, run multiple Compactors against a single bucket as long as each instance compacts a separate stream of blocks. You can do this in order to [scale the compaction process](#scalability).

### Vertical Compactions

Thanos and Prometheus support vertical compaction, the process of compacting multiple streams of blocks into one.

In Prometheus, this can be triggered by setting a hidden flag in Prometheus and putting additional TSDB blocks in Prometheus' local data directory. Extra blocks can overlap with existing ones. When Prometheus detects this situation, it performs `vertical compaction` which compacts overlapping blocks into a single one. This is mainly used for **backfilling**.

In Thanos, this works similarly, but on a bigger scale and using external labels for grouping as explained in the ["Compaction" section](#compaction).

In both systems, series with the same labels are merged together. In Prometheus, merging samples is **naive**: samples with exactly the same timestamps are deduplicated; otherwise samples are merged and sorted by timestamp. Thanos also supports a new penalty-based sample merging strategy, which is explained in [Deduplication](#vertical-compaction-use-cases).

> **NOTE:** Both Prometheus' and Thanos' default behaviour is to fail compaction if any overlapping blocks are spotted (for Thanos, blocks with the same external labels).

#### Vertical Compaction Use Cases

The following are valid use cases for vertical compaction:

* **Races** between multiple compactions, for example multiple Thanos compactors or between Thanos and Prometheus compactions. While this will cause extra computational overhead for the Compactor, it's safe to enable vertical compaction for this case.
* **Backfilling**. If you want to add blocks of data to any stream where there already is existing data for some time range, you will need to enable vertical compaction.
* **Offline deduplication** of series. It's very common to have the same data replicated into multiple streams. We can distinguish two common strategies for deduplication, `one-to-one` and `penalty`:
  * `one-to-one` deduplication is when multiple series (with the same labels) from different blocks for the same time range have **exactly** the same samples: same values and timestamps. This is very common when using [Receivers](receive.md) with replication greater than 1, as receiver replication copies samples exactly (same timestamps and values) to different receive instances.
  * `penalty` deduplication is when the same data is **duplicated logically**, i.e. the same application is scraped by two different Prometheis. This usually requires more complex deduplication algorithms, for example the one that is used to [deduplicate on the fly on the Querier](query.md#run-time-deduplication-of-ha-groups). This is a common case when Prometheus HA replicas are used. You can enable this deduplication strategy via the `--deduplication.func=penalty` flag.

#### Vertical Compaction Risks

The main risk is the **irreversible** implications of potential configuration errors:

* If you accidentally upload blocks with the same external labels but produced by totally different Prometheis for totally different applications, some metrics can overlap and potentially merge together, making the series useless.
* If you merge disjoint series from multiple blocks together, there is currently no easy way to split them back.
* The `penalty` offline deduplication algorithm has its own limitations. Even though it has been battle-tested for quite a long time, a few issues still come up from time to time (such as [breaking rate/irate](https://github.com/thanos-io/thanos/issues/2890)). If you'd like to enable this deduplication algorithm, do so at your own risk and back up your data first!

#### Enabling Vertical Compaction

**NOTE:** See the ["risks" section](#vertical-compaction-risks) to understand the implications and experimental nature of this feature.

You can enable vertical compaction using the hidden flag `--compact.enable-vertical-compaction`.

If you want to "virtually" group blocks differently for deduplication use cases, use `--deduplication.replica-label=LABEL` to set one or more labels to be ignored during block loading.

For example, if you have the following set of block streams:

```
external_labels: {cluster="eu1", replica="1", receive="true", environment="production"}
external_labels: {cluster="eu1", replica="2", receive="true", environment="production"}
external_labels: {cluster="us1", replica="1", receive="true", environment="production"}
external_labels: {cluster="us1", replica="1", receive="true", environment="staging"}
```

and set `--deduplication.replica-label="replica"`, the Compactor will treat them as:

```
external_labels: {cluster="eu1", receive="true", environment="production"} (2 streams, reduced to one)
external_labels: {cluster="us1", receive="true", environment="production"}
external_labels: {cluster="us1", receive="true", environment="staging"}
```

On the next compaction, multiple streams' blocks will be compacted into one.

If you need a different deduplication algorithm, use the `--deduplication.func=FUNC` flag. The default value is the original `one-to-one` deduplication.
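
Putting it together, here is a hedged sketch of a Compactor performing offline deduplication of Prometheus HA replicas; the `replica` label comes from the stream examples above and `bucket.yml` from the earlier configuration example:

```bash
# Enable the (hidden) vertical compaction flag and merge blocks that differ only
# in the "replica" external label, using the penalty-based deduplication algorithm.
thanos compact \
  --data-dir /var/thanos/compact \
  --objstore.config-file=bucket.yml \
  --wait \
  --compact.enable-vertical-compaction \
  --deduplication.replica-label=replica \
  --deduplication.func=penalty
```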

## Enforcing Retention of Data

By default, there is NO retention set for object storage data. This means that you store data forever, which is a valid and recommended way of running Thanos.

You can configure retention using the `--retention.resolution-raw`, `--retention.resolution-5m` and `--retention.resolution-1h` flags. Not setting them or setting them to `0s` means no retention.
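
For example, a hedged sketch of a retention policy that keeps raw data for 180 days, 5m data for one year and 1h data forever (the durations are purely illustrative):

```bash
# Apply per-resolution retention; 0d means "keep forever".
thanos compact \
  --objstore.config-file=bucket.yml \
  --wait \
  --retention.resolution-raw=180d \
  --retention.resolution-5m=365d \
  --retention.resolution-1h=0d
```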

**NOTE:** ⚠️ Retention is applied right after the Compaction and Downsampling loops. If those are failing, data will never be deleted.

## Downsampling

Downsampling is the process of rewriting series to reduce the overall resolution of the samples without losing accuracy over longer time ranges.

To learn more, see the [video from KubeCon 2019](https://youtu.be/qQN0N14HXPM?t=714).

### TL;DR on how Thanos downsampling works

Thanos Compactor takes a "raw" resolution block and creates a new one with "downsampled" chunks. At the storage level, a downsampled chunk takes the form of an "AggrChunk":

```proto
message AggrChunk {
    int64 min_time = 1;
    int64 max_time = 2;

    Chunk raw     = 3;
    Chunk count   = 4;
    Chunk sum     = 5;
    Chunk min     = 6;
    Chunk max     = 7;
    Chunk counter = 8;
}
```

This means that for each series we collect various aggregations with a given interval: 5m or 1h (depending on resolution). This allows us to keep precision on large duration queries, without fetching too many samples.

### ⚠️ Downsampling: Note About Resolution and Retention ⚠️

Resolution is the distance between data points on your graphs, e.g.:

* `raw` - the same as the scrape interval at the moment of data ingestion
* `5 minutes` - a data point every 5 minutes
* `1 hour` - a data point every 1 hour

Compactor downsampling is done in two passes:
1) All raw resolution metrics that are older than **40 hours** are downsampled to a 5m resolution
2) All 5m resolution metrics older than **10 days** are downsampled to a 1h resolution

> **NOTE:** If retention at each resolution is lower than the minimum age for the successive downsampling pass, data will be deleted before downsampling can be completed. As a rule of thumb, retention for each downsampling level should be the same, and should be greater than the maximum date range (10 days for 5m to 1h downsampling).

Keep in mind that the initial goal of downsampling is not saving disk or object storage space. In fact, downsampling doesn't save you **any** space. Instead, it adds 2 more blocks for each raw block, which are only slightly smaller than or of similar size to the raw blocks. This is done by the internal downsampling implementation which, to ensure mathematical correctness, holds various aggregations. This means that downsampling can increase the size of your storage a bit (~3x), if you choose to store all resolutions (recommended and enabled by default).

The goal of downsampling is to provide an opportunity to get fast results for range queries over big time intervals like months or years. In other words, if you set `--retention.resolution-raw` lower than `--retention.resolution-5m` and `--retention.resolution-1h`, you might run into a problem of not being able to "zoom in" to your historical data.

To avoid confusion, you might want to think about `raw` data as a "zoom in" opportunity. When considering the values for the mentioned options, always ask "Will I need to zoom in to a day from 1 year ago?". If the answer is "yes", you most likely want to keep raw data for as long as the 1h and 5m resolutions; otherwise you will only be able to see a downsampled representation of how your raw data looked.

There is also a case where you might want to disable downsampling entirely with `--downsampling.disable`: when you know for sure that you are not going to request long ranges of data (obviously, because without downsampling those requests are much more expensive). A valid example of that case is when you only care about the last couple of weeks of your data or use it only for alerting; but if that's your case, you also need to ask yourself whether you want to introduce Thanos at all instead of just vanilla Prometheus.

Ideally, you will have equal retention (or no retention at all) set for all resolutions, which allows both "zoom in" capabilities as well as performant long-range queries. Since object storage is usually quite cheap, storage size might not matter that much, unless your goal with Thanos is very specific and you know exactly what you're doing.

Not setting this flag, or setting it to `0d`, i.e. `--retention.resolution-X=0d`, will mean that samples at the `X` resolution level will be kept forever.

Please note that blocks are only deleted after they completely "fall off" the specified retention policy. In other words, the "max time" of a block needs to be older than the amount of time you had specified.

## Deleting Aborted Partial Uploads

It can happen that a producer started uploading some block, but it never finished and never will. Sidecars will retry in case of failures during upload or processing (unless there was no persistent storage), but a very common case is with the Compactor: if the Compactor process crashes during upload of a compacted block, the whole compaction starts from scratch and a new block ID is created. This means that the partial upload will never be retried.

To handle this case there is the `--delete-delay=48h` flag that starts deletion of directories inside object storage without `meta.json` only after a given time.

This value has to be greater than the upload duration and the [consistency delay](#consistency-delay).
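
A hedged sketch of tuning this behaviour (both flags appear in the flag reference below; the values are illustrative):

```bash
# With --wait, clean up aborted partial uploads and blocks marked for deletion in
# the background every 30 minutes; block directories are only removed once they are
# older than the 48h delete delay.
thanos compact \
  --objstore.config-file=bucket.yml \
  --wait \
  --compact.cleanup-interval=30m \
  --delete-delay=48h
```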

## Halting

Because of the very specific nature of the Compactor, which writes to object storage, potentially deletes sensitive data, and downloads GBs of data, by default we halt the Compactor on certain data failures. This means that the Compactor does not crash on halt errors, but instead keeps running, does nothing, and sets the metric `thanos_compact_halted` to 1.

The reason is that we don't want to retry compaction and all the computations if we know that, for example, there is already an overlapped state in the object storage for some reason.

The hidden flag `--no-debug.halt-on-error` controls this behavior. If set, the Compactor exits on a halt error instead.
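
A quick way to check whether a running Compactor has halted is to look at this metric on its HTTP endpoint (assuming the default `--http-address` port and a local instance):

```bash
# Prints 1 if the Compactor has halted on a data failure, 0 otherwise.
curl -s http://localhost:10902/metrics | grep '^thanos_compact_halted'
```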

## Resources

### CPU

It's recommended to give the Compactor a number of CPU cores equal to `--compact.concurrency`.

### Memory

Memory usage depends on block sizes in the object storage and compaction concurrency.

Generally, the maximum memory utilization of the compaction process is exactly the same as for Prometheus:

* For each source block considered for compaction:
  * 1/32 of all the block's symbols
  * 1/32 of all the block's posting offsets
* A single series with all its labels and all its chunks.

You need to multiply this by X, where X is `--compact.concurrency` (by default 1).

**NOTE:** Don't check heap memory only. Prometheus and Thanos compaction leverages `mmap` heavily, which is outside of Go runtime stats. Refer to the process / OS memory used instead. On Linux/macOS, Go will also use as much memory as is available, so utilization will always be near the limit.

Generally, for a medium-sized bucket, a limit of 10GB of memory should be enough to keep it working.

### Network

Overall, the Compactor is the component that can potentially use the highest amount of network bandwidth, so place it near the bucket's zone/location.

It has to download each block needed for compaction / downsampling, and it does that on every compaction / downsampling. It then uploads the computed blocks. It also refreshes the state of the bucket often.

### Disk

The Compactor needs local disk space to store intermediate data for its processing as well as the bucket state cache. Generally, for a medium-sized bucket, about 100GB should be enough to keep it working as the compacted time ranges grow over time. However, this highly depends on the size of the blocks. In the worst case scenario, the Compactor needs space equivalent to 2 times 2 weeks (if your maximum compaction level is 2 weeks) worth of smaller blocks in order to perform compaction: first, to download all of those source blocks, and second, to build the on-disk output of the 2-week block composed of those smaller ones.

You need to multiply this by X, where X is `--compact.concurrency` (by default 1).

On-disk data is safe to delete between restarts and deleting it should be the first attempt to get crash-looping compactors unstuck. However, it's recommended to give the Compactor a persistent disk in order to effectively use the bucket state cache between restarts.

## Availability

The Compactor, generally, does not need to be highly available. Compactions are needed from time to time, only when new blocks appear.

The only risk is that if the Compactor is not running for a longer time (weeks), you might see reduced performance of your read path due to the number of small blocks, the lack of downsampled data, and retention not being enforced.

## Scalability

The main and only `Service Level Indicator` for the Compactor is how fast it can cope with TSDB blocks uploaded to the bucket.

To understand that, you can use a mix of signals: `thanos_objstore_bucket_last_successful_upload_time` being quite fresh, `thanos_compact_halted` being non-1, and `thanos_blocks_meta_synced{state="loaded"}` constantly increasing over days.

<img src="compactor_no_coping_with_load.png" class="img-fluid" alt="Example view of compactor not coping with amount and size of incoming blocks"/>

Generally, there are two scalability directions:

1. Too many producers/sources (e.g. Prometheus-es) are uploading to the same object storage, creating too many "streams" of work for the Compactor. The Compactor has to scale with the number of producers in the bucket.

You should horizontally scale the Compactor to cope with this using [label sharding](../sharding.md#compactor). This allows you to assign multiple streams to each Compactor instance; see the sketch after this list.

2. TSDB blocks from a single stream are too big and take too much time or too many resources to compact.

This is rare, as you would first need to ingest that amount of data into Prometheus, and it's usually not recommended to have more than 10 million series in 2-hour blocks. However, with 2-week blocks, [Vertical Compaction](#vertical-compactions) potentially enabled, and producers other than Prometheus (e.g. backfilling), this scalability concern can appear as well. See the [Limit size of blocks](https://github.com/thanos-io/thanos/issues/3068) ticket to track progress of a solution if you are hitting this.
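
As referenced above, here is a hedged sketch of label sharding for the first case. The `cluster="eu1"` value comes from the stream examples earlier in this document; each Compactor instance would use a complementary selector:

```bash
# This instance only compacts streams whose external label cluster equals "eu1";
# blocks from other clusters are ignored and left for other Compactor instances.
thanos compact \
  --objstore.config-file=bucket.yml \
  --wait \
  --selector.relabel-config='
    - action: keep
      source_labels: [cluster]
      regex: eu1
  '
```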

## Eventual Consistency

Depending on the object storage provider (e.g. S3, GCS, Ceph), storages can be divided into strongly consistent and eventually consistent ones. Since some object storage providers do not give consistency guarantees, we have to have a consistent, lock-free way of dealing with object storage, irrespective of the choice of object storage.

### Consistency Delay

In order to make sure we don't read a partially uploaded block (or one not yet fully visible in an eventually consistent system), we established a `--consistency-delay=30m` delay for all components reading blocks.

This means that blocks are visible / loadable for the compactor (and used for retention, compaction planning, etc.) only 30m after the block upload started in object storage.

### Block Deletions

In order to achieve coordination between the compactor and all object storage readers without any race, blocks are not deleted directly. Instead, blocks are marked for deletion by uploading a `deletion-mark.json` file for the block that was chosen to be deleted. This file contains the Unix time of when the block was marked for deletion.
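
For example, a block can be marked for deletion manually with the `thanos tools bucket mark` subcommand. This is a hedged sketch; `<block-ulid>` is a placeholder for a real block ID and the details message is arbitrary:

```bash
# Upload a deletion-mark.json for the given block instead of deleting it directly;
# the Compactor removes the block itself once --delete-delay has passed.
thanos tools bucket mark \
  --objstore.config-file=bucket.yml \
  --marker=deletion-mark.json \
  --id="<block-ulid>" \
  --details="manually marked for deletion"
```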

## Flags

```$ mdox-exec="thanos compact --help"
usage: thanos compact [<flags>]

Continuously compacts blocks in an object store bucket.

Flags:
      --block-files-concurrency=1
                                Number of goroutines to use when
                                fetching/uploading block files from object
                                storage.
      --block-meta-fetch-concurrency=32
                                Number of goroutines to use when fetching block
                                metadata from object storage.
      --block-viewer.global.sync-block-interval=1m
                                Repeat interval for syncing the blocks between
                                local and remote view for /global Block Viewer
                                UI.
      --block-viewer.global.sync-block-timeout=5m
                                Maximum time for syncing the blocks between
                                local and remote view for /global Block Viewer
                                UI.
      --bucket-web-label=BUCKET-WEB-LABEL
                                External block label to use as group title in
                                the bucket web UI
      --compact.blocks-fetch-concurrency=1
                                Number of goroutines to use when download block
                                during compaction.
      --compact.cleanup-interval=5m
                                How often we should clean up partially uploaded
                                blocks and blocks with deletion mark in the
                                background when --wait has been enabled. Setting
                                it to "0s" disables it - the cleaning will only
                                happen at the end of an iteration.
      --compact.concurrency=1   Number of goroutines to use when compacting
                                groups.
      --compact.progress-interval=5m
                                Frequency of calculating the compaction progress
                                in the background when --wait has been enabled.
                                Setting it to "0s" disables it. Now compaction,
                                downsampling and retention progress are
                                supported.
      --consistency-delay=30m   Minimum age of fresh (non-compacted)
                                blocks before they are being processed.
                                Malformed blocks older than the maximum of
                                consistency-delay and 48h0m0s will be removed.
      --data-dir="./data"       Data directory in which to cache blocks and
                                process compactions.
      --deduplication.func=     Experimental. Deduplication algorithm for
                                merging overlapping blocks. Possible values are:
                                "", "penalty". If no value is specified,
                                the default compact deduplication merger
                                is used, which performs 1:1 deduplication
                                for samples. When set to penalty, penalty
                                based deduplication algorithm will be used.
                                At least one replica label has to be set via
                                --deduplication.replica-label flag.
      --deduplication.replica-label=DEDUPLICATION.REPLICA-LABEL ...
                                Label to treat as a replica indicator of blocks
                                that can be deduplicated (repeated flag). This
                                will merge multiple replica blocks into one.
                                This process is irreversible.Experimental.
                                When one or more labels are set, compactor
                                will ignore the given labels so that vertical
                                compaction can merge the blocks.Please note
                                that by default this uses a NAIVE algorithm
                                for merging which works well for deduplication
                                of blocks with **precisely the same samples**
                                like produced by Receiver replication.If you
                                need a different deduplication algorithm (e.g
                                one that works well with Prometheus replicas),
                                please set it via --deduplication.func.
      --delete-delay=48h        Time before a block marked for deletion is
                                deleted from bucket. If delete-delay is non
                                zero, blocks will be marked for deletion and
                                compactor component will delete blocks marked
                                for deletion from the bucket. If delete-delay
                                is 0, blocks will be deleted straight away.
                                Note that deleting blocks immediately can cause
                                query failures, if store gateway still has the
                                block loaded, or compactor is ignoring the
                                deletion because it's compacting the block at
                                the same time.
      --downsample.concurrency=1
                                Number of goroutines to use when downsampling
                                blocks.
      --downsampling.disable    Disables downsampling. This is not recommended
                                as querying long time ranges without
                                non-downsampled data is not efficient and useful
                                e.g it is not possible to render all samples for
                                a human eye anyway
      --hash-func=              Specify which hash function to use when
                                calculating the hashes of produced files.
                                If no function has been specified, it does not
                                happen. This permits avoiding downloading some
                                files twice albeit at some performance cost.
                                Possible values are: "", "SHA256".
  -h, --help                    Show context-sensitive help (also try
                                --help-long and --help-man).
      --http-address="0.0.0.0:10902"
                                Listen host:port for HTTP endpoints.
      --http-grace-period=2m    Time to wait after an interrupt received for
                                HTTP Server.
      --http.config=""          [EXPERIMENTAL] Path to the configuration file
                                that can enable TLS or authentication for all
                                HTTP endpoints.
      --log.format=logfmt       Log format to use. Possible options: logfmt or
                                json.
      --log.level=info          Log filtering level.
      --max-time=9999-12-31T23:59:59Z
                                End of time range limit to compact.
                                Thanos Compactor will compact only blocks,
                                which happened earlier than this value. Option
                                can be a constant time in RFC3339 format or time
                                duration relative to current time, such as -1d
                                or 2h45m. Valid duration units are ms, s, m, h,
                                d, w, y.
      --min-time=0000-01-01T00:00:00Z
                                Start of time range limit to compact.
                                Thanos Compactor will compact only blocks, which
                                happened later than this value. Option can be a
                                constant time in RFC3339 format or time duration
                                relative to current time, such as -1d or 2h45m.
                                Valid duration units are ms, s, m, h, d, w, y.
      --objstore.config=<content>
                                Alternative to 'objstore.config-file'
                                flag (mutually exclusive). Content of
                                YAML file that contains object store
                                configuration. See format details:
                                https://thanos.io/tip/thanos/storage.md/#configuration
      --objstore.config-file=<file-path>
                                Path to YAML file that contains object
                                store configuration. See format details:
                                https://thanos.io/tip/thanos/storage.md/#configuration
      --retention.resolution-1h=0d
                                How long to retain samples of resolution 2 (1
                                hour) in bucket. Setting this to 0d will retain
                                samples of this resolution forever
      --retention.resolution-5m=0d
                                How long to retain samples of resolution 1 (5
                                minutes) in bucket. Setting this to 0d will
                                retain samples of this resolution forever
      --retention.resolution-raw=0d
                                How long to retain raw samples in bucket.
                                Setting this to 0d will retain samples of this
                                resolution forever
      --selector.relabel-config=<content>
                                Alternative to 'selector.relabel-config-file'
                                flag (mutually exclusive). Content of
                                YAML file that contains relabeling
                                configuration that allows selecting
                                blocks. It follows native Prometheus
                                relabel-config syntax. See format details:
                                https://prometheus.io/docs/prometheus/latest/configuration/configuration/#relabel_config
      --selector.relabel-config-file=<file-path>
                                Path to YAML file that contains relabeling
                                configuration that allows selecting
                                blocks. It follows native Prometheus
                                relabel-config syntax. See format details:
                                https://prometheus.io/docs/prometheus/latest/configuration/configuration/#relabel_config
      --tracing.config=<content>
                                Alternative to 'tracing.config-file' flag
                                (mutually exclusive). Content of YAML file
                                with tracing configuration. See format details:
                                https://thanos.io/tip/thanos/tracing.md/#configuration
      --tracing.config-file=<file-path>
                                Path to YAML file with tracing
                                configuration. See format details:
                                https://thanos.io/tip/thanos/tracing.md/#configuration
      --version                 Show application version.
  -w, --wait                    Do not exit after all compactions have been
                                processed and wait for new work.
      --wait-interval=5m        Wait interval between consecutive compaction
                                runs and bucket refreshes. Only works when
                                --wait flag specified.
      --web.disable             Disable Block Viewer UI.
      --web.disable-cors        Whether to disable CORS headers to be set by
                                Thanos. By default Thanos sets CORS headers to
                                be allowed by all.
      --web.external-prefix=""  Static prefix for all HTML links and redirect
                                URLs in the bucket web UI interface.
                                Actual endpoints are still served on / or the
                                web.route-prefix. This allows thanos bucket
                                web UI to be served behind a reverse proxy that
                                strips a URL sub-path.
      --web.prefix-header=""    Name of HTTP request header used for dynamic
                                prefixing of UI links and redirects.
                                This option is ignored if web.external-prefix
                                argument is set. Security risk: enable
                                this option only if a reverse proxy in
                                front of thanos is resetting the header.
                                The --web.prefix-header=X-Forwarded-Prefix
                                option can be useful, for example, if Thanos
                                UI is served via Traefik reverse proxy with
                                PathPrefixStrip option enabled, which sends the
                                stripped prefix value in X-Forwarded-Prefix
                                header. This allows thanos UI to be served on a
                                sub-path.
      --web.route-prefix=""     Prefix for API and UI endpoints. This allows
                                thanos UI to be served on a sub-path. This
                                option is analogous to --web.route-prefix of
                                Prometheus.

```