
# cAdvisor Runtime Options

This document describes a set of runtime flags available in cAdvisor.

## Container labels
* `--store_container_labels=false` - do not convert container labels and environment variables into labels on Prometheus metrics for each container.
* `--whitelisted_container_labels` - comma-separated list of container labels to be converted to labels on Prometheus metrics for each container. `--store_container_labels` must be set to false for this to take effect.
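
For example, to drop the full label set but still expose a couple of selected labels on Prometheus metrics, the two flags can be combined. A minimal sketch (the Kubernetes label keys shown are illustrative; substitute the labels your containers actually carry):

```bash
cadvisor --store_container_labels=false \
  --whitelisted_container_labels=io.kubernetes.container.name,io.kubernetes.pod.name
```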

## Container envs

* `--env_metadata_whitelist`: a comma-separated list of environment variable keys that need to be collected for containers. Only the containerd and Docker runtimes are supported at present.
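
As an illustration, to collect a couple of environment variables as container metadata on the supported runtimes (the variable names here are placeholders):

```bash
cadvisor --env_metadata_whitelist=PATH,JAVA_HOME
```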

## Limiting which containers are monitored
* `--docker_only=false` - do not report raw cgroup metrics, except for the root cgroup.
* `--raw_cgroup_prefix_whitelist` - a comma-separated list of cgroup path prefixes that should be collected even when `--docker_only` is specified.
* `--disable_root_cgroup_stats=false` - disable collecting root cgroup stats.
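
For instance, a sketch that monitors only Docker containers while still collecting one additional cgroup subtree (the prefix shown is illustrative):

```bash
cadvisor --docker_only=true --raw_cgroup_prefix_whitelist=/system.slice/
```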

## Container Hints

Container hints are a way to pass extra information about a container to cAdvisor, which cAdvisor uses to augment the stats it gathers. For more information on the container hints format, see its [definition](../container/common/container_hints.go). Note that container hints are only used by the raw container driver today.

```
--container_hints="/etc/cadvisor/container_hints.json": location of the container hints file
```
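
For orientation only, a hints file generally has the shape sketched below; the authoritative schema (including exact field names) is the Go definition linked above, so verify the keys against it before use. All values here are placeholders:

```json
{
  "all_hosts": [
    {
      "full_path": "/custom/container/path",
      "network_interface": {
        "veth_host": "veth1234",
        "veth_child": "eth0"
      },
      "mount": [
        {
          "host_dir": "/var/lib/mydata",
          "container_dir": "/data"
        }
      ]
    }
  ]
}
```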

## CPU

```
--enable_load_reader=false: Whether to enable the CPU load reader
--max_procs=0: max number of CPUs that can be used simultaneously. Less than 1 for default (number of cores).
```

## Debugging and Logging

cAdvisor-native flags that help in debugging:

```
--log_backtrace_at="": when logging hits line file:N, emit a stack trace
--log_cadvisor_usage=false: Whether to log the usage of the cAdvisor container
--version=false: print cAdvisor version and exit
--profiling=false: Enable profiling via the web interface at host:port/debug/pprof/
```

From [glog](https://github.com/golang/glog), here are some flags we find useful:

```
--log_dir="": If non-empty, write log files in this directory
--logtostderr=false: log to standard error instead of files
--alsologtostderr=false: log to standard error as well as files
--stderrthreshold=0: logs at or above this threshold go to stderr
--v=0: log level for V logs
--vmodule=: comma-separated list of pattern=N settings for file-filtered logging
```
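
As an example, a common debugging setup sends verbose logs to stderr and raises verbosity for a single source-file pattern (the `manager` pattern is illustrative):

```bash
cadvisor --logtostderr=true --v=2 --vmodule=manager=4
```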

## Docker

```
--docker="unix:///var/run/docker.sock": docker endpoint (default "unix:///var/run/docker.sock")
--docker_root="/var/lib/docker": DEPRECATED: docker root is read from docker info; this is a fallback (default "/var/lib/docker")
--docker-tls: use TLS to connect to docker
--docker-tls-cert="cert.pem": client certificate for TLS connection with docker
--docker-tls-key="key.pem": private key for TLS connection with docker
--docker-tls-ca="ca.pem": trusted CA for TLS connection with docker
```
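
Putting these together, a sketch of connecting to a TLS-protected remote Docker daemon (the endpoint address and file paths are placeholders):

```bash
cadvisor --docker=tcp://192.0.2.10:2376 \
  --docker-tls \
  --docker-tls-cert=/etc/cadvisor/cert.pem \
  --docker-tls-key=/etc/cadvisor/key.pem \
  --docker-tls-ca=/etc/cadvisor/ca.pem
```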

## Podman

```bash
--podman="unix:///var/run/podman/podman.sock": podman endpoint (default "unix:///var/run/podman/podman.sock")
```

## Housekeeping

Housekeeping refers to the periodic actions cAdvisor performs. During these actions, cAdvisor gathers container stats. These flags control how and when cAdvisor performs housekeeping.

#### Dynamic Housekeeping

Dynamic housekeeping intervals let cAdvisor vary how often it gathers stats, depending on how active a container is. Turning this off provides predictable housekeeping intervals, but increases cAdvisor's resource usage.

```
--allow_dynamic_housekeeping=true: Whether to allow the housekeeping interval to be dynamic
```

#### Housekeeping Intervals

cAdvisor performs two kinds of housekeeping: global and per-container.

Global housekeeping is a single periodic task that typically detects new containers. Today, cAdvisor discovers new containers via kernel events, so global housekeeping is mostly used as a backup in case any events are missed.

Per-container housekeeping runs once on each container cAdvisor tracks, and typically gathers container stats.

```
--global_housekeeping_interval=1m0s: Interval between global housekeepings
--housekeeping_interval=1s: Interval between container housekeepings
--max_housekeeping_interval=1m0s: Largest interval to allow between container housekeepings (default 1m0s)
```
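
For example, to trade some CPU usage for a predictable 5-second sampling interval (the values here are illustrative), dynamic housekeeping can be disabled and the intervals pinned:

```bash
cadvisor --allow_dynamic_housekeeping=false \
  --housekeeping_interval=5s \
  --global_housekeeping_interval=2m0s
```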

## HTTP

Specify where cAdvisor listens.

```
--http_auth_file="": HTTP auth file for the web UI
--http_auth_realm="localhost": HTTP auth realm for the web UI (default "localhost")
--http_digest_file="": HTTP digest file for the web UI
--http_digest_realm="localhost": HTTP digest realm for the web UI (default "localhost")
--listen_ip="": IP to listen on, defaults to all IPs
--port=8080: port to listen on (default 8080)
--url_base_prefix=/: optional path prefix added to all resource URLs; useful when running cAdvisor behind a proxy (default /)
```
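
As a sketch, running cAdvisor bound to localhost on a non-default port, behind a reverse proxy that serves it under `/cadvisor` (all values illustrative):

```bash
cadvisor --listen_ip=127.0.0.1 --port=9090 --url_base_prefix=/cadvisor
```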

## Local Storage Duration

cAdvisor stores the latest historical data in memory. How much history it keeps can be configured with the `--storage_duration` flag.

```
--storage_duration=2m0s: How long to store data.
```

## Machine

```
--boot_id_file="/proc/sys/kernel/random/boot_id": Comma-separated list of files to check for boot-id. Use the first one that exists. (default "/proc/sys/kernel/random/boot_id")
--machine_id_file="/etc/machine-id,/var/lib/dbus/machine-id": Comma-separated list of files to check for machine-id. Use the first one that exists. (default "/etc/machine-id,/var/lib/dbus/machine-id")
--update_machine_info_interval=5m: Interval between machine info updates. (default 5m)
```

## Metrics

```
--application_metrics_count_limit=100: Max number of application metrics to store (per container) (default 100)
--collector_cert="": Collector's certificate, exposed to endpoints for certificate based authentication.
--collector_key="": Key for the collector's certificate
--disable_metrics=<metrics>: comma-separated list of metrics to be disabled. Options are advtcp,app,cpu,cpuLoad,cpu_topology,cpuset,disk,diskIO,hugetlb,memory,memory_numa,network,oom_event,percpu,perf_event,process,psi_avg,psi_total,referenced_memory,resctrl,sched,tcp,udp. (default advtcp,cpu_topology,cpuset,hugetlb,memory_numa,process,referenced_memory,resctrl,sched,tcp,udp)
--enable_metrics=<metrics>: comma-separated list of metrics to be enabled. If set, overrides 'disable_metrics'. Options are advtcp,app,cpu,cpuLoad,cpu_topology,cpuset,disk,diskIO,hugetlb,memory,memory_numa,network,oom_event,percpu,perf_event,process,psi_avg,psi_total,referenced_memory,resctrl,sched,tcp,udp.
--prometheus_endpoint="/metrics": Endpoint to expose Prometheus metrics on (default "/metrics")
--disable_root_cgroup_stats=false: Disable collecting root Cgroup stats
```
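
For example, since `--enable_metrics` overrides `--disable_metrics`, a minimal set of metric groups can be selected explicitly (the selection here is illustrative):

```bash
cadvisor --enable_metrics=cpu,memory,network,diskIO
```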

## Storage Drivers

```
--storage_driver="": Storage driver to use. Data is always cached briefly in memory; this controls where data is pushed besides the local cache. Empty means none. Options are: <empty>, bigquery, elasticsearch, influxdb, kafka, redis, statsd, stdout
--storage_driver_buffer_duration="1m0s": Writes in the storage driver will be buffered for this duration, and committed to the non-memory backends as a single transaction (default 1m0s)
--storage_driver_db="cadvisor": database name (default "cadvisor")
--storage_driver_host="localhost:8086": database host:port (default "localhost:8086")
--storage_driver_password="root": database password (default "root")
--storage_driver_secure=false: use secure connection with database
--storage_driver_table="stats": table name (default "stats")
--storage_driver_user="root": database username (default "root")
```
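
As an illustration, pushing stats to a local InfluxDB instance with the relevant defaults spelled out (host and database names are placeholders; see the storage driver instructions linked at the end of this document for details):

```bash
cadvisor --storage_driver=influxdb \
  --storage_driver_host=localhost:8086 \
  --storage_driver_db=cadvisor \
  --storage_driver_buffer_duration=1m0s
```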

## Perf Events

```
--perf_events_config="": Path to a JSON file containing the configuration of perf events to measure. An empty value disables perf event measuring.
```

Core perf events can be exposed on the Prometheus endpoint per CPU or aggregated by event. This is controlled through the `--disable_metrics` and `--enable_metrics` parameters with the `percpu` option, e.g.:
- `--disable_metrics="percpu"` - core perf events are aggregated
- `--disable_metrics=""` - core perf events are exposed per CPU

A "too many open files" error can occur when many perf events are exposed per CPU, because the system limit on open file descriptors is exceeded.
Try increasing the maximum number of file descriptors with `ulimit -n <value>`.
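
Before raising the limit, the current soft and hard limits can be inspected with the shell built-in `ulimit`; a minimal check:

```shell
# Soft limit: the current cap on open file descriptors for this shell.
ulimit -Sn
# Hard limit: the ceiling the soft limit may be raised to without extra privileges.
ulimit -Hn
```

An unprivileged user can only raise the soft limit up to the hard limit, so `ulimit -n <value>` may require the hard limit to be increased first (e.g. via `/etc/security/limits.conf`).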

The aggregated form of core perf events significantly decreases the volume of data. For aggregated core perf events, the scaling ratio (`container_perf_metric_scaling_ratio`) indicates the lowest scaling ratio observed for a specific event, i.e. its worst precision.

### Perf subsystem introduction

One of the goals of the kernel perf subsystem is to instrument CPU performance counters, which allow applications to be profiled.
Profiling is performed by setting up performance counters that count hardware events (e.g. the number of retired
instructions or the number of cache misses). The counters are CPU hardware registers, and their number is limited.

Other goals of the perf subsystem (such as tracing) are beyond the scope of this documentation; see the Further
Reading section below to learn more about them.

Familiarize yourself with the following perf-event-related terms:
* `multiplexing` - 2nd Generation Intel® Xeon® Scalable Processors provide 4 counters per hyper-thread. If the number
of configured events is greater than the number of available counters, Linux will multiplex counting, and some (or even
all) of the events will not be counted all the time. In that situation, information about the amount of time the event
was counted and the amount of time it was enabled is provided, and the counter value that cAdvisor exposes is scaled
automatically.
* `grouping` - when counted events are used to calculate derived metrics, it is reasonable to
measure them transactionally: all the events in a group must be counted over the same period of time. Keep
in mind that it is impossible to group more events than there are available counters.
* `uncore events` - events which can be counted by PMUs outside the core.
* `PMU` - Performance Monitoring Unit.

#### Getting config values
Using perf tools:
* Identify the event in `perf list` output.
* Execute the command: `perf stat -I 5000 -vvv -e EVENT_NAME`
* Find the `perf_event_attr` section in the `perf stat` output and copy the `config` and `type` fields into the configuration file.

```
------------------------------------------------------------
perf_event_attr:
  type                             18
  size                             112
  config                           0x304
  sample_type                      IDENTIFIER
  read_format                      TOTAL_TIME_ENABLED|TOTAL_TIME_RUNNING
  disabled                         1
  inherit                          1
  exclude_guest                    1
------------------------------------------------------------
```
* The configuration file should look like this:
```json
{
  "core": {
    "events": [
      "event_name"
    ],
    "custom_events": [
      {
        "type": 18,
        "config": [
          "0x304"
        ],
        "name": "event_name"
      }
    ]
  },
  "uncore": {
    "events": [
      "event_name"
    ],
    "custom_events": [
      {
        "type": 18,
        "config": [
          "0x304"
        ],
        "name": "event_name"
      }
    ]
  }
}
```

Config values can also be obtained from:
* [Intel® 64 and IA32 Architectures Performance Monitoring Events](https://software.intel.com/content/www/us/en/develop/download/intel-64-and-ia32-architectures-performance-monitoring-events.html)


##### Uncore Events configuration
An uncore event name should be in the form `PMU_PREFIX/event_name`, where **PMU_PREFIX** means
that statistics will be counted on all PMUs with that prefix in their name.

Let's explain this with an example:

```json
{
  "uncore": {
    "events": [
      "uncore_imc/cas_count_read",
      "uncore_imc_0/cas_count_write",
      "cas_count_all"
    ],
    "custom_events": [
      {
        "config": [
          "0x304"
        ],
        "name": "uncore_imc_0/cas_count_write"
      },
      {
        "type": 19,
        "config": [
          "0x304"
        ],
        "name": "cas_count_all"
      }
    ]
  }
}
```

- `uncore_imc/cas_count_read` - because of the `uncore_imc` prefix and no entry in custom events,
    it will be counted by **all** Integrated Memory Controller PMUs, with the config provided by the libpfm package
    (using this function: https://man7.org/linux/man-pages/man3/pfm_get_os_event_encoding.3.html).

- `uncore_imc_0/cas_count_write` - because of the `uncore_imc_0` prefix and an entry in custom events, it will be counted by the `uncore_imc_0` PMU with the provided config.

- `cas_count_all` - because of its entry in custom events with a `type` field, the event will be counted by the PMU with type **19** and the provided config.

#### Configuring perf events by name

It is possible to configure perf events by name using the events supported in [libpfm4](http://perfmon2.sourceforge.net/); for detailed information, please see the [libpfm4 documentation](http://perfmon2.sourceforge.net/docs_v4.html).

Perf events supported on a platform can be discovered using the python script [pmu.py](https://sourceforge.net/p/perfmon2/libpfm4/ci/master/tree/python/src/pmu.py) provided with libpfm4; please see the [script requirements](https://sourceforge.net/p/perfmon2/libpfm4/ci/master/tree/python/README).

##### Example configuration of perf events using event names supported in libpfm4

Example output of `pmu.py`:
```
$ python pmu.py
INSTRUCTIONS 1
		 u 0
		 k 1
		 period 3
		 freq 4
		 precise 5
		 excl 6
		 mg 7
		 mh 8
		 cpu 9
		 pinned 10
INSTRUCTION_RETIRED 192
		 e 2
		 i 3
		 c 4
		 t 5
		 intx 7
		 intxcp 8
		 u 0
		 k 1
		 period 3
		 freq 4
		 excl 6
		 mg 7
		 mh 8
		 cpu 9
		 pinned 10
UNC_M_CAS_COUNT 4
		 RD 3
		 WR 12
		 e 0
		 i 1
		 t 2
		 period 3
		 freq 4
		 excl 6
		 cpu 9
		 pinned 10
```
and the perf events configuration for the listed events:
```json
{
  "core": {
    "events": [
      "instructions",
      "instruction_retired"
    ]
  },
  "uncore": {
    "events": [
      "uncore_imc/unc_m_cas_count:rd",
      "uncore_imc/unc_m_cas_count:wr"
    ]
  }
}
```

Note: PMU_PREFIX is provided in the same way as for configuration with config values.

#### Grouping

Events listed in a nested array are measured as a group, i.e. they are all counted over the same period of time:

```json
{
  "core": {
    "events": [
      ["instructions", "instruction_retired"]
    ]
  },
  "uncore": {
    "events": [
      ["uncore_imc_0/unc_m_cas_count:rd", "uncore_imc_0/unc_m_cas_count:wr"],
      ["uncore_imc_1/unc_m_cas_count:rd", "uncore_imc_1/unc_m_cas_count:wr"]
    ]
  }
}
```

### Further reading

* [perf Examples](http://www.brendangregg.com/perf.html) on Brendan Gregg's blog
* [Kernel Perf Wiki](https://perf.wiki.kernel.org/index.php/Main_Page)
* `man perf_event_open`
* [perf subsystem](https://github.com/torvalds/linux/tree/v5.6/kernel/events) in the Linux kernel
* [Uncore Performance Monitoring Reference Manuals](https://software.intel.com/content/www/us/en/develop/articles/intel-sdm.html#uncore)

See the example configuration below:
```json
{
  "core": {
    "events": [
      "instructions",
      "instructions_retired"
    ],
    "custom_events": [
      {
        "type": 4,
        "config": [
          "0x5300c0"
        ],
        "name": "instructions_retired"
      }
    ]
  },
  "uncore": {
    "events": [
      "uncore_imc/cas_count_read"
    ],
    "custom_events": [
      {
        "config": [
          "0xc04"
        ],
        "name": "uncore_imc/cas_count_read"
      }
    ]
  }
}
```

In the example above:
* `instructions` will be measured as a non-grouped event and is specified using the human-friendly interface whose event
names can be obtained by calling `perf list`. You can use any name that appears in the output of the `perf list` command.
This is the interface that the majority of users will rely on.
* `instructions_retired` will be measured as a non-grouped event and is specified using an advanced API that allows
any available perf event to be specified (some of them are not named and can't be specified with a plain string). The
event name should be a human-readable string that will become the metric name.
* `cas_count_read` will be measured as a non-grouped uncore event on all Integrated Memory Controller PMUs, because the
`type` field is unset and the name carries the `uncore_imc` prefix.

## Resctrl
To collect metrics, cAdvisor creates its own monitoring groups with the `cadvisor` prefix.

The resctrl filesystem is not hierarchical like cgroups, so users should set the `--docker_only` flag to avoid race conditions and unexpected behaviour.

```
--resctrl_interval=0: Resctrl mon groups updating interval. Zero value disables updating mon groups.
```
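
A sketch of a resctrl-enabled run (the 10-second interval is illustrative, and the platform must support resource monitoring via resctrl):

```bash
cadvisor --docker_only=true --resctrl_interval=10s
```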

## Storage driver specific instructions:

* [InfluxDB instructions](storage/influxdb.md).
* [ElasticSearch instructions](storage/elasticsearch.md).
* [Kafka instructions](storage/kafka.md).
* [Prometheus instructions](storage/prometheus.md).