---
title: "Convert long-term storage from chunks to blocks"
linkTitle: "Convert long-term storage from chunks to blocks"
weight: 6
slug: convert-long-term-storage-from-chunks-to-blocks
---

If you have [configured your cluster to write new data to blocks](./migrate-from-chunks-to-blocks.md), there is still a question about old data.
Cortex can query both chunks and blocks at the same time, but converting old chunks to blocks still has benefits, like being able to decommission the chunks storage backend and save costs.
This document presents a set of tools for doing the conversion.

_[Original design document](https://docs.google.com/document/d/1VI0cgaJmHD0pcrRb3UV04f8szXXGmFKQyqUJnFOcf6Q/edit?usp=sharing) for `blocksconvert` is also available._

## Tools

Cortex provides a tool called `blocksconvert`, which is actually a collection of four tools for converting chunks to blocks.

The tools are:

- [**Scanner**](#scanner)<br />
  Scans the chunks index database and produces so-called "plan files", each file being a set of series and the chunks for each series. Plan files are uploaded to the same object store bucket where blocks live.
- [**Scheduler**](#scheduler)<br />
  Looks for plan files and distributes them to builders. The Scheduler has a global view of the overall conversion progress.
- [**Builder**](#builder)<br />
  Asks the scheduler for the next plan file to work on, fetches chunks, puts them into a TSDB block, and uploads the block to the object store. It repeats this process until there are no more plans.
- [**Cleaner**](#cleaner)<br />
  Asks the scheduler for the next plan file to work on, but instead of building the block, it actually **REMOVES CHUNKS** and **INDEX ENTRIES** from the index database.

All tools start an HTTP server (see `-server.http*` options) exposing the `/metrics` endpoint.
All tools also start a gRPC server (`-server.grpc*` options), but only the Scheduler exposes services on it.

### Scanner

Scanner is started by running `blocksconvert -target=scanner`. Scanner requires configuration for accessing the Cortex index:

- `-schema-config-file` – the standard Cortex schema file.
- `-bigtable.instance`, `-bigtable.project` – options for BigTable access.
- `-dynamodb.url` – for DynamoDB access, e.g. `dynamodb://us-east-1/`.
- `-blocks-storage.backend` and the corresponding `-blocks-storage.*` options – object store used for storing plan files.
- `-scanner.output-dir` – local directory for writing plan files to. Finished plan files are deleted after upload to the bucket. The list of scanned tables is also kept in this directory, to avoid scanning the same tables multiple times when Scanner is restarted.
- `-scanner.allowed-users` – comma-separated list of Cortex tenants that should have plans generated. If empty, plans for all found users are generated.
- `-scanner.ignore-users-regex` – if plans for all users are generated (`-scanner.allowed-users` is not set), users matching this non-empty regular expression are skipped.
- `-scanner.tables-limit` – maximum number of tables to scan. By default all tables are scanned, but when testing the scanner it may be useful to start with a small number of tables first.
- `-scanner.tables` – comma-separated list of tables to be scanned. Can be used to scan specific tables only. Note that the schema is still used to find all tables first, and then this list is consulted to select only the specified tables.
- `-scanner.scan-period-start` and `-scanner.scan-period-end` – limit the scan to a particular date range (format like `2020-12-31`).
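For illustration, a Scanner run against a BigTable index with plan files uploaded to a GCS bucket might look like the sketch below. The schema file path, project, instance and bucket names are placeholders, and the exact `-blocks-storage.*` flags to set depend on the backend you use (for GCS, the bucket is normally configured via `-blocks-storage.gcs.bucket-name`):

```bash
# Scan the chunks index and upload plan files to the blocks bucket.
blocksconvert \
  -target=scanner \
  -schema-config-file=/etc/cortex/schema.yaml \
  -bigtable.project=my-gcp-project \
  -bigtable.instance=my-bigtable-instance \
  -blocks-storage.backend=gcs \
  -blocks-storage.gcs.bucket-name=my-blocks-bucket \
  -scanner.output-dir=/data/blocksconvert/scanner \
  -scanner.tables-limit=1
```

Starting with `-scanner.tables-limit=1`, or a short `-scanner.scan-period-start`/`-scanner.scan-period-end` range, is a reasonable way to validate the setup before scanning all tables.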
Scanner reads the Cortex schema file to discover index tables, and then starts scanning them, most recent table first, going back in time.
For each table, it fully reads the table and generates a plan for each user and day stored in the table.
Plan files are then uploaded to the configured blocks-storage bucket (under the prefix set by `-blocksconvert.bucket-prefix`), and local copies are deleted.
After that, Scanner continues with the next table until it has scanned them all or `-scanner.tables-limit` is reached.

Note that even though `blocksconvert` has options for configuring different index store backends, **it only supports BigTable and DynamoDB at the moment.**

It is expected that only a single Scanner process is running.
Scanner scans multiple table subranges concurrently.

Scanner exposes metrics with the `cortex_blocksconvert_scanner_` prefix, e.g. total number of scanned index entries of different types, number of open files (Scanner doesn't close plan files until the entire table has been scanned), scanned rows and parsed index entries.

**Scanner only supports schema version v9 on DynamoDB; v9, v10 and v11 on BigTable. Earlier schema versions are currently not supported.**

### Scheduler

Scheduler is started by running `blocksconvert -target=scheduler`. It only needs to be configured with options to access the object store with blocks:

- `-blocks-storage.*` – blocks storage object store configuration.
- `-scheduler.scan-interval` – how often to scan for plan files and their status.
- `-scheduler.allowed-users` – comma-separated list of Cortex tenants. If set, only plans for these tenants will be offered to Builders.

It is expected that only a single Scheduler process is running. The Scheduler consumes very few resources.

Scheduler's metrics have the `cortex_blocksconvert_scheduler` prefix (number of plans in different states, oldest/newest plan).
The Scheduler HTTP server also exposes a `/plans` page that shows currently queued plans, and all plans and their status for all users.

### Builder

Builder asks the scheduler for the next plan to work on, downloads the plan, builds the block and uploads the block to the blocks storage. It then repeats the process while there are still plans.

Builder is started by `blocksconvert -target=builder`. It needs to be configured with the Scheduler endpoint, the Cortex schema file, chunk-store specific options and the blocks storage to upload blocks to:

- `-builder.scheduler-endpoint` – where to find the scheduler, e.g. `scheduler:9095`.
- `-schema-config-file` – Cortex schema file, used to find out which chunks store to use for a given plan.
- `-gcs.bucketname` – when using GCS as the chunks store (other chunks storage backends, like S3, are supported as well).
- `-blocks-storage.*` – blocks storage configuration.
- `-builder.output-dir` – local directory where Builder keeps the block while it is being built. Once the block is uploaded to blocks storage, it is deleted from the local directory.
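As a sketch, a Builder that reads chunks from a GCS bucket and uploads blocks to another GCS bucket might be started like this (the scheduler address, bucket names and paths are placeholders; as with the Scanner, the exact `-blocks-storage.*` flags depend on your blocks backend):

```bash
# Build blocks from the plans handed out by the scheduler and upload them.
blocksconvert \
  -target=builder \
  -builder.scheduler-endpoint=scheduler:9095 \
  -schema-config-file=/etc/cortex/schema.yaml \
  -gcs.bucketname=my-chunks-bucket \
  -blocks-storage.backend=gcs \
  -blocks-storage.gcs.bucket-name=my-blocks-bucket \
  -builder.output-dir=/data/blocksconvert/builder
```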
Multiple Builders may run at the same time; each builder will receive a different plan to work on from the scheduler.
Builders are CPU intensive (decoding and merging chunks), and require fast disk IO for writing blocks.

Builder's metrics have the `cortex_blocksconvert_builder` prefix, and include the total number of fetched chunks and their size, the read position within the current plan and the plan size, the total number of written series and samples, and the number of chunks that couldn't be downloaded.

### Cleaner

Cleaner is similar to Builder in that it asks the scheduler for the next plan to work on, but instead of building blocks, it actually **REMOVES CHUNKS and INDEX ENTRIES**. Use with caution.

Cleaner is started by using `blocksconvert -target=cleaner`. Like Builder, it needs the Scheduler endpoint, the Cortex schema file, and index and chunk-store specific options (a hypothetical example invocation is shown at the end of this document). Note that Cleaner works with any index store supported by Cortex, not just BigTable.

- `-cleaner.scheduler-endpoint` – where to find the scheduler.
- `-blocks-storage.*` – blocks storage configuration, used for downloading plan files.
- `-cleaner.plans-dir` – local directory to store the plan file while it is being processed by Cleaner.
- `-schema-config-file` – Cortex schema file.

Cleaner doesn't **scan** for index entries, but uses existing plan files to find chunks and index entries. For each series, Cleaner needs to download at least one chunk, because the plan file doesn't contain label names and values, but chunks do. Cleaner then deletes all index entries associated with the series, and also all chunks.

**WARNING:** If both Builder and Cleaner run at the same time and use the same Scheduler, **some plans will be handled by the builder, and some by the cleaner!** This will result in a loss of data!

Cleaner should only be deployed if no Builder is running. Running multiple Cleaners at once is not supported, and will result in leftover chunks and index entries. The reason is that chunks can span multiple days, and a chunk is fully deleted only when processing the plan (day) in which the chunk started. Since Cleaner also needs to download some chunks to be able to clean up all index entries, with multiple Cleaners it can happen that a Cleaner processing older plans deletes chunks required to properly clean up data in newer plans. With a single Cleaner this is not a problem, since the scheduler sends plans to the Cleaner in time-reversed order.

**Note:** Cleaner is designed for very special cases, e.g. deleting chunks and index entries for a specific customer. If `blocksconvert` was used to convert ALL chunks to blocks, it is simpler to just drop the index and chunks databases afterwards. In that case, Cleaner is not needed.

### Limitations

The `blocksconvert` toolset currently has the following limitations:

- Scanner supports only BigTable and DynamoDB as the chunks index backend, and cannot currently scan other databases.
- Only chunks schema versions v9 (DynamoDB) and v9, v10 and v11 (BigTable) are supported.
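For completeness, the following is a hypothetical Cleaner invocation for a BigTable index and GCS chunks store, referenced from the Cleaner section above. All values are placeholders, and the index and chunk-store flags must match your own schema and storage; remember that Cleaner deletes data and should only run when no Builder is attached to the same Scheduler:

```bash
# Remove chunks and index entries for the plans handed out by the scheduler.
# Only run this when no Builder is connected to the same Scheduler.
blocksconvert \
  -target=cleaner \
  -cleaner.scheduler-endpoint=scheduler:9095 \
  -schema-config-file=/etc/cortex/schema.yaml \
  -bigtable.project=my-gcp-project \
  -bigtable.instance=my-bigtable-instance \
  -gcs.bucketname=my-chunks-bucket \
  -blocks-storage.backend=gcs \
  -blocks-storage.gcs.bucket-name=my-blocks-bucket \
  -cleaner.plans-dir=/data/blocksconvert/cleaner
```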