
---
title: "Getting Started"
linkTitle: "Getting Started"
weight: 1
no_section_index_title: true
slug: "getting-started"
---

Cortex can be run as a single binary or as multiple independent microservices.
The single-binary mode is easier to deploy and is aimed mainly at users wanting to try out Cortex or develop on it.
The microservices mode is intended for production usage, as it allows you to independently scale different services and isolate failures.

This document will focus on single-process Cortex with the blocks storage.
See [the architecture doc](../architecture.md) for more information about the microservices and [blocks operation](../blocks-storage/_index.md)
for more information about the blocks storage.

Separately from the single-process vs microservices decision, Cortex can be configured to use local storage or cloud storage (S3, GCS and Azure).
Cortex can also make use of external Memcached and Redis instances for caching.

## Single instance, single process

For simplicity and to get started, we'll run it as a [single process](../configuration/single-process-config-blocks.yaml) with no dependencies.
You can edit the config file to use GCS, Azure storage or local storage instead, as shown in the file's comments.

```sh
$ go build ./cmd/cortex
$ ./cortex -config.file=./docs/configuration/single-process-config-blocks.yaml
```

Unless reconfigured, this starts a single Cortex node storing blocks in an S3 bucket named `cortex`.
It is not intended for production use.
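
If you want to try the single binary without an S3 bucket, one option is to override the storage backend on the command line. The following is only a sketch: the `-blocks-storage.*` flag names are assumed from recent Cortex releases and the `./data/tsdb` directory is arbitrary, so check `./cortex -help` and the comments in the config file for the exact options your build supports:

```sh
# Run the single binary with local filesystem block storage instead of S3.
# Flag names are assumed from recent Cortex releases; verify with ./cortex -help.
$ ./cortex -config.file=./docs/configuration/single-process-config-blocks.yaml \
    -blocks-storage.backend=filesystem \
    -blocks-storage.filesystem.dir=./data/tsdb
```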

Clone and build Prometheus:
```sh
$ git clone https://github.com/prometheus/prometheus
$ cd prometheus
$ go build ./cmd/prometheus
```

Add the following to your Prometheus config (`documentation/examples/prometheus.yml` in the Prometheus repo):

```yaml
remote_write:
- url: http://localhost:9009/api/v1/push
```

And start Prometheus with that config file:

```sh
$ ./prometheus --config.file=./documentation/examples/prometheus.yml
```

Your Prometheus instance will now start pushing data to Cortex. To query that data, start a Grafana instance:

```sh
$ docker run --rm -d --name=grafana -p 3000:3000 grafana/grafana
```

In [the Grafana UI](http://localhost:3000) (username/password admin/admin), add a Prometheus datasource for Cortex (`http://host.docker.internal:9009/prometheus`).
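
If you'd rather check from the command line first, you can hit Cortex's Prometheus-compatible query API directly. This is a sketch assuming the default single-process config (auth disabled, HTTP on port 9009); with multi-tenancy enabled you would also need to pass an `X-Scope-OrgID` header:

```sh
# Query the `up` metric that Prometheus is pushing into Cortex.
$ curl -G http://localhost:9009/prometheus/api/v1/query --data-urlencode 'query=up'
```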

**To clean up:** press CTRL-C in both terminals (for Cortex and Prometheus).

## Horizontally scale out

Next we're going to show how you can run a scaled-out Cortex cluster using Docker. We'll need:

- A built Cortex image.
- A Docker network to put these containers on so they can resolve each other by name.
- A single-node Consul instance to coordinate the Cortex cluster.

```sh
$ make ./cmd/cortex/.uptodate
$ docker network create cortex
$ docker run -d --name=consul --network=cortex -e CONSUL_BIND_INTERFACE=eth0 consul
```
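
Before wiring up Cortex, it can be worth a quick sanity check that Consul is up. A small sketch (the official `consul` image ships the `consul` CLI, so `docker exec` works):

```sh
# List the containers attached to the cortex network.
$ docker ps --filter network=cortex
# Ask Consul for its cluster membership; a single healthy member is expected.
$ docker exec consul consul members
```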

Next we'll run a couple of Cortex instances pointed at that Consul. Note that the Cortex configuration can be specified in a config file and overridden on the command line. See [the arguments documentation](../configuration/arguments.md) for more information about Cortex configuration options.

```sh
$ docker run -d --name=cortex1 --network=cortex \
    -v $(pwd)/docs/configuration/single-process-config-blocks.yaml:/etc/single-process-config-blocks.yaml \
    -p 9001:9009 \
    quay.io/cortexproject/cortex \
    -config.file=/etc/single-process-config-blocks.yaml \
    -ring.store=consul \
    -consul.hostname=consul:8500
$ docker run -d --name=cortex2 --network=cortex \
    -v $(pwd)/docs/configuration/single-process-config-blocks.yaml:/etc/single-process-config-blocks.yaml \
    -p 9002:9009 \
    quay.io/cortexproject/cortex \
    -config.file=/etc/single-process-config-blocks.yaml \
    -ring.store=consul \
    -consul.hostname=consul:8500
```

If you go to http://localhost:9001/ring (or http://localhost:9002/ring) you should see both Cortex nodes join the ring.
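
You can also check this from a terminal. A small sketch using Cortex's `/ready` endpoint, which reports ready once an instance has joined the ring:

```sh
$ curl http://localhost:9001/ready
$ curl http://localhost:9002/ready
```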

To demonstrate the correct operation of Cortex clustering, we'll send samples
to one of the instances and queries to another. In production, you'd want to
load balance both pushes and queries evenly among all the nodes.

Point Prometheus at the first:

```yaml
remote_write:
- url: http://localhost:9001/api/v1/push
```

```sh
$ ./prometheus --config.file=./documentation/examples/prometheus.yml
```

And Grafana at the second:

```sh
$ docker run -d --name=grafana --network=cortex -p 3000:3000 grafana/grafana
```

In [the Grafana UI](http://localhost:3000) (username/password admin/admin), add a Prometheus datasource for Cortex (`http://cortex2:9009/prometheus`).
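
You can also verify the clustered read path from the command line by querying the second instance for data that was pushed through the first. A sketch, reusing the host port mapping from above:

```sh
# Samples pushed via cortex1 (port 9001) should be readable via cortex2 (port 9002).
$ curl -G http://localhost:9002/prometheus/api/v1/query --data-urlencode 'query=up'
```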

**To clean up:** CTRL-C the Prometheus process and run:

```sh
$ docker rm -f cortex1 cortex2 consul grafana
$ docker network remove cortex
```

## High availability with replication

In this last demo we'll show how Cortex can replicate data among three nodes,
and demonstrate that Cortex can tolerate a node failure without affecting reads and writes.

First, create a network and run a new Consul and Grafana:

```sh
$ docker network create cortex
$ docker run -d --name=consul --network=cortex -e CONSUL_BIND_INTERFACE=eth0 consul
$ docker run -d --name=grafana --network=cortex -p 3000:3000 grafana/grafana
```

Then, launch 3 Cortex nodes with replication factor 3:

```sh
$ docker run -d --name=cortex1 --network=cortex \
    -v $(pwd)/docs/configuration/single-process-config-blocks.yaml:/etc/single-process-config-blocks.yaml \
    -p 9001:9009 \
    quay.io/cortexproject/cortex \
    -config.file=/etc/single-process-config-blocks.yaml \
    -ring.store=consul \
    -consul.hostname=consul:8500 \
    -distributor.replication-factor=3
$ docker run -d --name=cortex2 --network=cortex \
    -v $(pwd)/docs/configuration/single-process-config-blocks.yaml:/etc/single-process-config-blocks.yaml \
    -p 9002:9009 \
    quay.io/cortexproject/cortex \
    -config.file=/etc/single-process-config-blocks.yaml \
    -ring.store=consul \
    -consul.hostname=consul:8500 \
    -distributor.replication-factor=3
$ docker run -d --name=cortex3 --network=cortex \
    -v $(pwd)/docs/configuration/single-process-config-blocks.yaml:/etc/single-process-config-blocks.yaml \
    -p 9003:9009 \
    quay.io/cortexproject/cortex \
    -config.file=/etc/single-process-config-blocks.yaml \
    -ring.store=consul \
    -consul.hostname=consul:8500 \
    -distributor.replication-factor=3
```

Configure Prometheus to send data to the first replica:

```yaml
remote_write:
- url: http://localhost:9001/api/v1/push
```

```sh
$ ./prometheus --config.file=./documentation/examples/prometheus.yml
```

In Grafana, add a datasource for the third Cortex replica (`http://cortex3:9009/prometheus`)
and verify the same data appears in both Prometheus and Cortex.

To show that Cortex can tolerate a node failure, hard-kill one of the Cortex replicas (with a replication factor of 3, reads and writes need only a quorum of 2 replicas, so losing a single node should not affect them):

```sh
$ docker rm -f cortex2
```

You should see writes and queries continue to work without error.
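
One way to convince yourself from the command line (a sketch; the Prometheus metric name prefix below matches recent Prometheus releases and may differ slightly in older ones):

```sh
# Queries through a surviving replica should still succeed.
$ curl -G http://localhost:9003/prometheus/api/v1/query --data-urlencode 'query=up'
# Prometheus's own metrics show whether remote write is still succeeding.
$ curl -s http://localhost:9090/metrics | grep prometheus_remote_storage_samples
```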

**To clean up:** CTRL-C the Prometheus process and run:

```sh
$ docker rm -f cortex1 cortex2 cortex3 consul grafana
$ docker network remove cortex
```