---
title: Scalability
weight: 30
---
# Scaling with Grafana Loki

See [Loki: Prometheus-inspired, open source logging for cloud natives](https://grafana.com/blog/2018/12/12/loki-prometheus-inspired-open-source-logging-for-cloud-natives/)
for a discussion about Grafana Loki's scalability.

When scaling Loki, operators should consider running several Loki processes
partitioned by role (ingester, distributor, querier) rather than a single Loki
process. Grafana Labs' [production setup](https://github.com/grafana/loki/blob/master/production/ksonnet/loki)
contains `.libsonnet` files that demonstrate configuring separate components
and scaling for resource usage.
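
In recent Loki versions, each process selects its role with the `-target` command-line flag or the corresponding top-level `target` option in the configuration file. A minimal sketch, assuming a process dedicated to the querier role:

```yaml
# Run this process as a querier only. Other processes in the deployment would
# use targets such as "ingester", "distributor", or "query-frontend".
target: querier
```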

## Separate Query Scheduler

The query frontend has an in-memory queue that can be moved out into a separate process, similar to the
[Grafana Mimir query-scheduler](https://grafana.com/docs/mimir/latest/operators-guide/architecture/components/query-scheduler/). Moving the queue into its own query scheduler process allows running multiple query frontends.

To run with the query scheduler, the frontend needs to be passed the scheduler's address via `-frontend.scheduler-address`, and the querier processes need to be started with `-querier.scheduler-address` set to the same address. Both options can also be defined via the [configuration file](../configuration); a configuration file example is shown below.

It is not valid to start the querier with both a configured frontend and a scheduler address.

The query scheduler process itself can be started via the `-target=query-scheduler` option of the Loki Docker image. For instance, `docker run grafana/loki:latest -config.file=/etc/loki/local-config.yaml -target=query-scheduler -server.http-listen-port=8009 -server.grpc-listen-port=9009` starts the query scheduler listening on ports `8009` and `9009`.
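
The equivalent settings can be expressed in the Loki configuration file. The following is a minimal sketch: the `frontend` and `frontend_worker` blocks correspond to the `-frontend.scheduler-address` and `-querier.scheduler-address` flags, and the address `query-scheduler.example.svc:9095` is a placeholder for your deployment's scheduler endpoint.

```yaml
frontend:
  # Address the query frontends use to connect to the query scheduler.
  scheduler_address: query-scheduler.example.svc:9095

frontend_worker:
  # Address the querier workers use to pull queries from the query scheduler;
  # equivalent to the -querier.scheduler-address flag.
  scheduler_address: query-scheduler.example.svc:9095
```

Because the queue now lives in the scheduler rather than in each frontend, any number of query frontends can be run against the same scheduler address.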

## Memory ballast

In compute-constrained environments, garbage collection can become a significant performance factor. Frequent garbage collection competes with the application for CPU resources. Memory ballast can mitigate this: it allocates extra, but unused, virtual memory in order to inflate the quantity of live heap space. Because garbage collection is triggered by the growth of heap space usage, the inflated baseline reduces the perceived growth, so garbage collection occurs less frequently.

Configure memory ballast using the `ballast_bytes` configuration option.
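
As a minimal sketch, `ballast_bytes` is set at the top level of the Loki configuration file; the 1 GiB value below is an illustrative assumption and should be sized to the memory available to the process.

```yaml
# Allocate roughly 1 GiB of ballast (1073741824 bytes) to reduce GC frequency.
# The ballast is unused virtual memory, so it typically does not increase the
# process's resident memory by the full amount.
ballast_bytes: 1073741824
```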