---
title: "Scaling the Query Frontend"
linkTitle: "Scaling the Query Frontend"
weight: 5
slug: scaling-query-frontend
---

Historically, scaling the Cortex query frontend has [posed some challenges](../proposals/scalable-query-frontend.md).
This document details how to use the added configuration parameters to scale the frontend correctly.
Note that these instructions apply in both the HA single-binary scenario and microservices mode.

## Scaling the Query Frontend

For every query frontend, the querier adds a [configurable number of concurrent workers](https://github.com/cortexproject/cortex/blob/1797adfed2979f6096c3305b0dc9162c1ec0c046/pkg/querier/worker/worker.go#L212),
each of which is capable of executing a query.
Each worker is connected to a single query frontend instance, so scaling the query frontend up or down changes the amount of work each individual querier attempts to do at any given time.

Scaling up may cause a querier to attempt more work than it is configured for due to restrictions such as memory and CPU limits.
Additionally, the PromQL engine itself is limited in the number of queries it can execute concurrently, as configured by the `-querier.max-concurrent` parameter.
Attempting more concurrent queries than this value causes the queries to queue up in the querier itself.

For similar reasons, scaling down the query frontend may cause a querier to not use its allocated memory and CPU effectively.
This lowers effective resource utilization.
Also, because individual queriers will be doing less work, this may cause increased queueing in the query frontends.

### Querier Max Concurrency

To guarantee that a querier doesn't receive more queries than it can handle at the same time, configure the querier to match its PromQL concurrency with its number of worker connections.
This can be done with the `-querier.worker-match-max-concurrent=true` option, or the `match_max_concurrent: true` field in the `frontend_worker` section of the YAML config file.
This allows the operator to freely scale the frontend or scheduler up and down without impacting the amount of work an individual querier is attempting to perform.

### Query Scheduler

The query scheduler is a service that moves the in-memory queue from the query frontend into a separate component.
This makes scaling the query frontend easier, as it allows running multiple query frontends without increasing the number of queues.

In order to use the query scheduler, both the query frontend and the queriers must be configured with the query scheduler address
(using the `-frontend.scheduler-address` and `-querier.scheduler-address` options respectively).

Note that a querier will only fetch queries from the query frontend or the query scheduler, but not both.
The `-querier.frontend-address` and `-querier.scheduler-address` options are mutually exclusive, and at most one can be set.

When using the query scheduler, it is recommended to run two query scheduler instances.
Running only one query scheduler poses a risk of increased query latency when that single scheduler crashes or restarts.
Running two query schedulers should be enough even for large Cortex clusters with a high QPS.

When using single-binary mode, Cortex defaults to running **without** the query scheduler.
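
The sketch below shows one way the options above might be combined in a YAML configuration that uses the query scheduler and matches querier concurrency to the worker connections.
Only `match_max_concurrent` under `frontend_worker` is spelled out on this page; the other section and field names, the scheduler address, and the concurrency value are assumptions derived from the CLI flags, so verify them against the configuration reference for your Cortex version.

```yaml
# Sketch only: verify field placement against the configuration reference
# for your Cortex version.

# Query frontend: enqueue requests on the query scheduler instead of an
# in-memory queue (equivalent to -frontend.scheduler-address).
frontend:
  scheduler_address: "query-scheduler.cortex.svc.cluster.local:9095"  # placeholder address

# Queriers: pull work from the scheduler (equivalent to
# -querier.scheduler-address) and match PromQL concurrency to the number
# of worker connections (-querier.worker-match-max-concurrent).
frontend_worker:
  scheduler_address: "query-scheduler.cortex.svc.cluster.local:9095"  # placeholder address
  match_max_concurrent: true

# PromQL engine concurrency limit (equivalent to -querier.max-concurrent).
querier:
  max_concurrent: 16
```

With `match_max_concurrent` enabled, each querier spreads its 16 concurrent workers across however many schedulers (or frontends) it discovers, so scaling those components up or down does not change the total amount of work the querier accepts.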

### DNS Configuration / Readiness

When a new frontend is first created on scale up, it will not immediately have queriers attached to it.
The `/ready` endpoint returns an HTTP 200 status code only when the query frontend is ready to serve queries.
Make sure to configure this endpoint as a healthcheck in your load balancer;
otherwise, a query frontend scale-up event might result in failed queries or high latency while queriers attach.

When using the query frontend with the query scheduler, `/ready` reports a 200 status code only after the frontend discovers some schedulers via DNS resolution.
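
For example, when the query frontend runs on Kubernetes, a readiness probe along the following lines keeps a freshly scaled-up frontend out of the load balancer rotation until `/ready` returns 200.
This is a sketch: the image tag is illustrative and the port is an assumption, so set it to match your `-server.http-listen-port`.

```yaml
# Sketch of a Kubernetes readiness probe for a query frontend container.
containers:
  - name: query-frontend
    image: quay.io/cortexproject/cortex:v1.9.0  # illustrative tag
    args:
      - -target=query-frontend
    readinessProbe:
      httpGet:
        path: /ready
        port: 80  # assumption: matches -server.http-listen-port
      initialDelaySeconds: 10
      timeoutSeconds: 1
```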