
---
title: "Running Cortex on Kubernetes"
linkTitle: "Running Cortex on Kubernetes"
weight: 3
slug: running-cortex-on-kubernetes
---

Because Cortex is designed to run multiple instances of each component
(ingester, querier, etc.), you probably want to automate the placement
and shepherding of these instances. Most users choose Kubernetes to do
this, but this is not mandatory.

## Configuration

### Resource requests

If using Kubernetes, each container should specify resource requests
so that the scheduler can place them on a node with sufficient capacity.

For example, an ingester might request:

```
        resources:
          requests:
            cpu: 4
            memory: 10Gi
```

The specific values here should be adjusted based on your own
experience running Cortex - they are highly dependent on the rate of
incoming data and on other factors such as series churn.
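
For context, here is a sketch of where this fragment sits in a full
manifest - the workload name, replica count, image tag, and arguments
are illustrative assumptions, not required values:

```
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: ingester                # illustrative name
spec:
  serviceName: ingester
  replicas: 3
  selector:
    matchLabels:
      name: ingester
  template:
    metadata:
      labels:
        name: ingester          # label used to select ingester pods
    spec:
      containers:
      - name: ingester
        image: quay.io/cortexproject/cortex:v1.9.0   # illustrative tag
        args: ["-target=ingester"]
        resources:
          requests:
            cpu: 4
            memory: 10Gi
```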

### Take extra care with ingesters

Ingesters hold hours of timeseries data in memory; you can configure
Cortex to replicate the data, but you should take steps to avoid losing
all replicas at once:

 - Don't run multiple ingesters on the same node.
 - Don't run ingesters on preemptible/spot nodes.
 - Spread out ingesters across racks / availability zones / whatever
   applies in your datacenters.

You can ask Kubernetes to avoid running multiple ingesters on the same
node like this:

```
      affinity:
        podAntiAffinity:
          preferredDuringSchedulingIgnoredDuringExecution:
          - weight: 100
            podAffinityTerm:
              labelSelector:
                matchExpressions:
                - key: name
                  operator: In
                  values:
                  - ingester
              topologyKey: "kubernetes.io/hostname"
```
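
To also spread ingesters across availability zones, a similar term can
be added under the same `preferredDuringSchedulingIgnoredDuringExecution`
list - a sketch, assuming your nodes carry the standard well-known
`topology.kubernetes.io/zone` label (Kubernetes 1.17+):

```
          - weight: 100
            podAffinityTerm:
              labelSelector:
                matchExpressions:
                - key: name
                  operator: In
                  values:
                  - ingester
              topologyKey: "topology.kubernetes.io/zone"
```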

Give an ingester plenty of time to hand over or flush data to the
store when shutting down; for Kubernetes this looks like:

```
      terminationGracePeriodSeconds: 2400
```

Ask Kubernetes to limit rolling updates to one ingester at a time, and
signal the old one to stop before the new one is ready:

```
  strategy:
    rollingUpdate:
      maxSurge: 0
      maxUnavailable: 1
```

Ingesters provide an HTTP hook to signal readiness when all is well;
this is valuable because it stops a rolling update at the first
problem:

```
        readinessProbe:
          httpGet:
            path: /ready
            port: 80
```

We do not recommend configuring a liveness probe on ingesters -
killing them is a last resort and should not be left to a machine.