
---
title: "Getting started with gossiped ring"
linkTitle: "Getting started with a gossip ring cluster"
weight: 4
slug: getting-started-with-gossiped-ring
---

Cortex requires a key-value (KV) store to store the ring. It can use traditional KV stores like Consul or etcd,
but it can also build its own KV store on top of the memberlist library using a gossip algorithm.

This short guide shows how to start Cortex in [single-binary mode](../architecture.md) with a memberlist-based ring.
To reduce the number of required dependencies in this guide, it will use [blocks storage](../blocks-storage/_index.md) with no shipping to external stores.
The storage engine and external storage configuration do not depend on the ring configuration.

## Single-binary, two Cortex instances

For simplicity and to get started, we'll run two instances of Cortex on a local computer.
We will use prepared configuration files ([file 1](../../configuration/single-process-config-blocks-gossip-1.yaml), [file 2](../../configuration/single-process-config-blocks-gossip-2.yaml)), with no external
dependencies.

Build Cortex first:
```sh
$ go build ./cmd/cortex
```

Run two instances of Cortex (e.g. in two separate terminals), each one with its own dedicated config file:
```sh
$ ./cortex -config.file docs/configuration/single-process-config-blocks-gossip-1.yaml
$ ./cortex -config.file docs/configuration/single-process-config-blocks-gossip-2.yaml
```

Download Prometheus and configure it to use our first Cortex instance for remote writes:

```yaml
remote_write:
- url: http://localhost:9109/api/v1/push
```
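For reference, a complete minimal `prometheus.yml` could look like the following sketch; the scrape job (Prometheus scraping itself on its default port) is only an example, and only the `remote_write` section is required for this guide:

```yaml
global:
  scrape_interval: 15s

# Example scrape job: Prometheus scrapes itself on its default port 9090.
scrape_configs:
  - job_name: prometheus
    static_configs:
      - targets: ["localhost:9090"]

# Push all scraped samples to the first Cortex instance.
remote_write:
  - url: http://localhost:9109/api/v1/push
```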
After starting Prometheus, it will begin pushing data to Cortex. The distributor component in Cortex will
distribute incoming samples between the two instances.

To query that data, you can configure your Grafana instance to use http://localhost:9109/prometheus (the first Cortex instance) as a Prometheus data source.
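If you provision Grafana data sources from files, a definition along these lines should work; the file path and data source name are arbitrary examples, not part of this guide's config files:

```yaml
# e.g. /etc/grafana/provisioning/datasources/cortex.yaml
apiVersion: 1
datasources:
  - name: Cortex
    type: prometheus
    access: proxy
    # Query the first Cortex instance through its Prometheus-compatible API.
    url: http://localhost:9109/prometheus
```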
## How it works

The two instances we started earlier should be able to find each other via the memberlist configuration (already present in the config files):

```yaml
memberlist:
  # defaults to hostname
  node_name: "Ingester 1"
  bind_port: 7946
  join_members:
    - localhost:7947
  abort_if_cluster_join_fails: false
```

This tells memberlist to listen on port 7946, and connect to localhost:7947, which is the second instance.
Port numbers are reversed in the second configuration file.
We also need to configure the `node_name` and the ingester ID (the `ingester.lifecycler.id` field), because both default to the hostname,
but we are running both Cortex instances on the same host.
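For comparison, the corresponding section in the second configuration file mirrors this, with the ports swapped and a different node name:

```yaml
memberlist:
  node_name: "Ingester 2"
  bind_port: 7947
  join_members:
    - localhost:7946
  abort_if_cluster_join_fails: false
```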
To make sure that both ingesters generate unique tokens, we configure `join_after` and `observe_period` to 10 seconds.
The first option tells Cortex to wait 10 seconds before joining the ring. This option is normally used to tell the Cortex ingester
how long to wait for a potential tokens and data transfer from a leaving ingester, but we also use it here to increase
the chance of finding other gossip peers. When Cortex joins the ring, it generates tokens and writes them to the ring.
If multiple Cortex instances do this at the same time, they can generate conflicting tokens. This can be a problem
when using a gossiped ring (instances may simply not see each other yet), so we use `observe_period` to watch the ring for token conflicts.
If a conflict is detected, new tokens are generated to replace the conflicting ones, and the observe period is restarted.
If no conflict is detected within the observe period, the ingester switches to the ACTIVE state.
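In the first configuration file, the relevant lifecycler section looks roughly like this (the `ring.kvstore` subsection is shown for context, since a memberlist-based ring requires it):

```yaml
ingester:
  lifecycler:
    # Defaults to hostname; must be unique when running several instances on one machine.
    id: "Ingester 1"
    # Wait before joining the ring, to increase the chance of finding gossip peers first.
    join_after: 10s
    # Watch the ring for token conflicts before switching to ACTIVE.
    observe_period: 10s
    ring:
      kvstore:
        # Use the memberlist-based KV store for the ring.
        store: memberlist
```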
We can observe the ring state on [http://localhost:9109/ring](http://localhost:9109/ring) and [http://localhost:9209/ring](http://localhost:9209/ring).
The two instances may see slightly different views (e.g. different timestamps), but they should converge to a common state soon, with both instances
being ACTIVE and ready to receive samples.
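The same check can be done from the command line; the ring page is plain HTML, so counting the lines that mention the ACTIVE state gives a rough signal:

```sh
# Each command should report a non-zero count once the ingester is ACTIVE.
$ curl -s http://localhost:9109/ring | grep -c ACTIVE
$ curl -s http://localhost:9209/ring | grep -c ACTIVE
```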
## How to add another instance?

To add another Cortex instance to the small cluster, copy `docs/configuration/single-process-config-blocks-gossip-1.yaml` to a new file,
and make the following modifications. We assume that the third Cortex instance will run on the same machine again, so we change the node name and ingester ID as well. Here
is an annotated diff:

```diff
...

 server:
+  # These ports need to be unique.
-  http_listen_port: 9109
-  grpc_listen_port: 9195
+  http_listen_port: 9309
+  grpc_listen_port: 9395

...

 ingester:
   lifecycler:
     # Defaults to hostname, but we run both ingesters in this demonstration on the same machine.
-    id: "Ingester 1"
+    id: "Ingester 3"

...

 memberlist:
    # defaults to hostname
-   node_name: "Ingester 1"
+   node_name: "Ingester 3"

    # bind_port needs to be unique
-   bind_port: 7946
+   bind_port: 7948

...

+# Change the directory names in the `blocks_storage` > `tsdb` config ending with `...1` to end with `...3`. This avoids different
+# instances writing in-progress data to the same directories.
 blocks_storage:
   tsdb:
-    dir: /tmp/cortex/tsdb-ing1
+    dir: /tmp/cortex/tsdb-ing3
    bucket_store:
-     sync_dir: /tmp/cortex/tsdb-sync-querier1
+     sync_dir: /tmp/cortex/tsdb-sync-querier3

...
```
We don't need to change or add to the `memberlist.join_members` list. The new instance will simply join the second one (listening on port 7947), and
will discover other peers through it. When using Kubernetes, the suggested setup is to have a headless service pointing to all pods
that want to be part of the gossip cluster, and then point `join_members` to this headless service.
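A sketch of such a setup, assuming the Cortex pods carry a `name: cortex` label and the service is called `cortex-gossip` (both names are examples, not part of this guide's config files):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: cortex-gossip
spec:
  # Headless service: DNS returns the pod IPs directly instead of a single cluster IP.
  clusterIP: None
  selector:
    name: cortex
  ports:
    - name: gossip
      port: 7946
```

The Cortex configuration would then point at the service's DNS name:

```yaml
memberlist:
  join_members:
    # Assumes the service lives in the "default" namespace.
    - cortex-gossip.default.svc.cluster.local:7946
```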
We also don't need to change the `/tmp/cortex/storage` directory in the `blocks_storage.filesystem.dir` field. This is the directory where all ingesters will
"upload" finished blocks. This could also be S3 or Google Cloud Storage, but for simplicity, we use the local filesystem in this example.
After these changes, we can start another Cortex instance using the modified configuration file. This instance will join the ring
and will start receiving samples after it enters the ACTIVE state.
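Assuming the modified copy was saved as `docs/configuration/single-process-config-blocks-gossip-3.yaml` (the name is up to you), the third instance is started like the first two:

```sh
$ ./cortex -config.file docs/configuration/single-process-config-blocks-gossip-3.yaml
```

Its ring page will then be available on http://localhost:9309/ring, since the diff above sets `http_listen_port` to 9309.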