---
title: Configuration
---
# Configuring Promtail

Promtail is configured in a YAML file (usually referred to as `config.yaml`)
which contains information on the Promtail server, where positions are stored,
and how to scrape logs from files.

## Printing Promtail Config At Runtime

If you pass Promtail the flag `-print-config-stderr` or `-log-config-reverse-order` (or `-print-config-stderr=true`),
Promtail will dump the entire config object it has created from the built-in defaults, combined first with
overrides from the config file, and second with overrides from flags.

The result is the value for every config object in the Promtail config struct.

Some values may not be relevant to your install; this is expected, as every option has a default value whether it is being used or not.

This config is what Promtail will use to run. It can be invaluable for debugging issues related to configuration and
is especially useful in making sure your config files and flags are being read and loaded properly.

`-print-config-stderr` is nice when running Promtail directly, e.g. `./promtail`, as you can get a quick output of the entire Promtail config.

`-log-config-reverse-order` is the flag we run Promtail with in all our environments. The config entries are reversed so
that the order of configs reads correctly top to bottom when viewed in Grafana's Explore.

## Configuration File Reference

To specify which configuration file to load, pass the `-config.file` flag at the
command line. The file is written in [YAML format](https://en.wikipedia.org/wiki/YAML),
defined by the schema below. Brackets indicate that a parameter is optional. For
non-list parameters the value is set to the specified default.

For more detailed information on configuring how to discover and scrape logs from
targets, see [Scraping](../scraping/). For more information on transforming logs
from scraped targets, see [Pipelines](../pipelines/).

### Use environment variables in the configuration

You can use environment variable references in the configuration file to set values that need to be configurable during deployment.
To do this, pass `-config.expand-env=true` and use:

```
${VAR}
```

Where VAR is the name of the environment variable.

Each variable reference is replaced at startup by the value of the environment variable.
The replacement is case-sensitive and occurs before the YAML file is parsed.
References to undefined variables are replaced by empty strings unless you specify a default value or custom error text.

To specify a default value, use:

```
${VAR:-default_value}
```

Where default_value is the value to use if the environment variable is undefined.

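For example, a minimal sketch of a `clients` entry that takes the Loki host and tenant ID from the environment; `LOKI_HOST` and `PROMTAIL_TENANT` are illustrative variable names:

```yaml
clients:
  # Falls back to localhost:3100 and "fake" when the variables are unset.
  - url: http://${LOKI_HOST:-localhost:3100}/loki/api/v1/push
    tenant_id: ${PROMTAIL_TENANT:-fake}
```

Run Promtail with `-config.expand-env=true` for the references to be expanded.
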
**Note**: With `expand-env=true` the configuration will first run through
[envsubst](https://pkg.go.dev/github.com/drone/envsubst) which will replace double
backslashes with single backslashes. Because of this every use of a backslash `\` needs to
be replaced with a double backslash `\\`.

### Generic placeholders:

- `<boolean>`: a boolean that can take the values `true` or `false`
- `<int>`: any integer matching the regular expression `[1-9]+[0-9]*`
- `<duration>`: a duration matching the regular expression `[0-9]+(ms|[smhdwy])`
- `<labelname>`: a string matching the regular expression `[a-zA-Z_][a-zA-Z0-9_]*`
- `<labelvalue>`: a string of Unicode characters
- `<filename>`: a valid path relative to current working directory or an
    absolute path.
- `<host>`: a valid string consisting of a hostname or IP followed by an optional port number
- `<string>`: a string
- `<secret>`: a string that represents a secret, such as a password

### Supported contents and default values of `config.yaml`:

```yaml
# Configures the server for Promtail.
[server: <server_config>]

# Describes how Promtail connects to multiple instances
# of Grafana Loki, sending logs to each.
# WARNING: If one of the remote Loki servers fails to respond or responds
# with any error which is retryable, this will impact sending logs to any
# other configured remote Loki servers.  Sending is done on a single thread!
# It is generally recommended to run multiple Promtail clients in parallel
# if you want to send to multiple remote Loki instances.
clients:
  - [<client_config>]

# Describes how to save read file offsets to disk
[positions: <position_config>]

scrape_configs:
  - [<scrape_config>]

# Configures global limits for this instance of Promtail
[limits_config: <limits_config>]

# Configures how tailed targets will be watched.
[target_config: <target_config>]

# Configures additional promtail configurations.
[options: <options_config>]
```

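Putting the pieces together, here is a minimal sketch of a complete `config.yaml` that tails files under `/var/log` and pushes them to a local Loki; the paths and label values are illustrative:

```yaml
server:
  http_listen_port: 9080
  grpc_listen_port: 0

positions:
  filename: /tmp/positions.yaml

clients:
  - url: http://localhost:3100/loki/api/v1/push

scrape_configs:
  - job_name: system
    static_configs:
      - targets: [localhost]
        labels:
          job: varlogs
          __path__: /var/log/*.log
```
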
## server

The `server` block configures Promtail's behavior as an HTTP server:

```yaml
# Disable the HTTP and GRPC server.
[disable: <boolean> | default = false]

# HTTP server listen host
[http_listen_address: <string>]

# HTTP server listen port (0 means random port)
[http_listen_port: <int> | default = 80]

# gRPC server listen host
[grpc_listen_address: <string>]

# gRPC server listen port (0 means random port)
[grpc_listen_port: <int> | default = 9095]

# Register instrumentation handlers (/metrics, etc.)
[register_instrumentation: <boolean> | default = true]

# Timeout for graceful shutdowns
[graceful_shutdown_timeout: <duration> | default = 30s]

# Read timeout for HTTP server
[http_server_read_timeout: <duration> | default = 30s]

# Write timeout for HTTP server
[http_server_write_timeout: <duration> | default = 30s]

# Idle timeout for HTTP server
[http_server_idle_timeout: <duration> | default = 120s]

# Max gRPC message size that can be received
[grpc_server_max_recv_msg_size: <int> | default = 4194304]

# Max gRPC message size that can be sent
[grpc_server_max_send_msg_size: <int> | default = 4194304]

# Limit on the number of concurrent streams for gRPC calls (0 = unlimited)
[grpc_server_max_concurrent_streams: <int> | default = 100]

# Log only messages with the given severity or above. Supported values [debug,
# info, warn, error]
[log_level: <string> | default = "info"]

# Base path to serve all API routes from (e.g., /v1/).
[http_path_prefix: <string>]

# Target managers check flag for Promtail readiness; if set to false the check is ignored.
[health_check_target: <bool> | default = true]
```

## clients

The `clients` block configures how Promtail connects to instances of
Loki:

```yaml
# The URL where Loki is listening, denoted in Loki as http_listen_address and
# http_listen_port. If Loki is running in microservices mode, this is the HTTP
# URL for the Distributor. Path to the push API needs to be included.
# Example: http://example.com:3100/loki/api/v1/push
url: <string>

# The tenant ID used by default to push logs to Loki. If omitted or empty
# it assumes Loki is running in single-tenant mode and no X-Scope-OrgID header
# is sent.
[tenant_id: <string>]

# Maximum amount of time to wait before sending a batch, even if that
# batch isn't full.
[batchwait: <duration> | default = 1s]

# Maximum batch size (in bytes) of logs to accumulate before sending
# the batch to Loki.
[batchsize: <int> | default = 1048576]

# If using basic auth, configures the username and password
# sent.
basic_auth:
  # The username to use for basic auth
  [username: <string>]

  # The password to use for basic auth
  [password: <string>]

  # The file containing the password for basic auth
  [password_file: <filename>]

# Optional OAuth 2.0 configuration
# Cannot be used at the same time as basic_auth or authorization
oauth2:
  # Client id and secret for oauth2
  [client_id: <string>]
  [client_secret: <secret>]

  # Read the client secret from a file
  # It is mutually exclusive with `client_secret`
  [client_secret_file: <filename>]

  # Optional scopes for the token request
  scopes:
    [ - <string> ... ]

  # The URL to fetch the token from
  token_url: <string>

  # Optional parameters to append to the token URL
  endpoint_params:
    [ <string>: <string> ... ]

# Bearer token to send to the server.
[bearer_token: <secret>]

# File containing bearer token to send to the server.
[bearer_token_file: <filename>]

# HTTP proxy server to use to connect to the server.
[proxy_url: <string>]

# If connecting to a TLS server, configures how the TLS
# authentication handshake will operate.
tls_config:
  # The CA file to use to verify the server
  [ca_file: <string>]

  # The cert file to send to the server for client auth
  [cert_file: <filename>]

  # The key file to send to the server for client auth
  [key_file: <filename>]

  # Validates that the server name in the server's certificate
  # is this value.
  [server_name: <string>]

  # If true, ignores the server certificate being signed by an
  # unknown CA.
  [insecure_skip_verify: <boolean> | default = false]

# Configures how to retry requests to Loki when a request
# fails.
# Default backoff schedule:
# 0.5s, 1s, 2s, 4s, 8s, 16s, 32s, 64s, 128s, 256s(4.267m)
# For a total time of 511.5s(8.5m) before logs are lost
backoff_config:
  # Initial backoff time between retries
  [min_period: <duration> | default = 500ms]

  # Maximum backoff time between retries
  [max_period: <duration> | default = 5m]

  # Maximum number of retries to do
  [max_retries: <int> | default = 10]

# Static labels to add to all logs being sent to Loki.
# Use map like {"foo": "bar"} to add a label foo with
# value bar.
# These can also be specified from command line:
# -client.external-labels=k1=v1,k2=v2
# (or --client.external-labels depending on your OS)
# labels supplied by the command line are applied
# to all clients configured in the `clients` section.
# NOTE: values defined in the config file will replace values
# defined on the command line for a given client if the
# label keys are the same.
external_labels:
  [ <labelname>: <labelvalue> ... ]

# Maximum time to wait for a server to respond to a request
[timeout: <duration> | default = 10s]
```

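As an illustration, a sketch of a `clients` section that pushes to two Loki instances, one with basic auth and a static external label; the hostnames and credentials are placeholders. Keep the warning above in mind: one slow or failing endpoint delays sends to the other.

```yaml
clients:
  - url: http://loki-a.example.com:3100/loki/api/v1/push
    external_labels:
      cluster: dev
  - url: https://loki-b.example.com/loki/api/v1/push
    basic_auth:
      username: promtail
      password_file: /etc/promtail/loki-b-password
```
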
## positions

The `positions` block configures where Promtail will save a file
indicating how far it has read into a file. This is needed so that, when
Promtail is restarted, it can continue from where it left off.

```yaml
# Location of positions file
[filename: <string> | default = "/var/log/positions.yaml"]

# How often to update the positions file
[sync_period: <duration> | default = 10s]

# Whether to ignore & later overwrite positions files that are corrupted
[ignore_invalid_yaml: <boolean> | default = false]
```

## scrape_configs

The `scrape_configs` block configures how Promtail can scrape logs from a series
of targets using a specified discovery method:

```yaml
# Name to identify this scrape config in the Promtail UI.
job_name: <string>

# Describes how to transform logs from targets.
[pipeline_stages: <pipeline_stages>]

# Describes how to scrape logs from the journal.
[journal: <journal_config>]

# Describes from which encoding a scraped file should be converted.
[encoding: <iana_encoding_name>]

# Describes how to receive logs from syslog.
[syslog: <syslog_config>]

# Describes how to receive logs via the Loki push API, (e.g. from other Promtails or the Docker Logging Driver)
[loki_push_api: <loki_push_api_config>]

# Describes how to scrape logs from the Windows event logs.
[windows_events: <windows_events_config>]

# Configuration describing how to pull/receive Google Cloud Platform (GCP) logs.
[gcplog: <gcplog_config>]

# Describes how to fetch logs from Kafka via a Consumer group.
[kafka: <kafka_config>]

# Describes how to receive logs from gelf client.
[gelf: <gelf_config>]

# Configuration describing how to pull logs from Cloudflare.
[cloudflare: <cloudflare>]

# Configuration describing how to pull logs from a Heroku LogPlex drain.
[heroku_drain: <heroku_drain>]

# Describes how to relabel targets to determine if they should
# be processed.
relabel_configs:
  - [<relabel_config>]

# Static targets to scrape.
static_configs:
  - [<static_config>]

# Files containing targets to scrape.
file_sd_configs:
  - [<file_sd_configs>]

# Describes how to discover Kubernetes services running on the
# same host.
kubernetes_sd_configs:
  - [<kubernetes_sd_config>]

# Describes how to use the Consul Catalog API to discover services registered with the
# consul cluster.
consul_sd_configs:
  [ - <consul_sd_config> ... ]

# Describes how to use the Consul Agent API to discover services registered with the consul agent
# running on the same host as Promtail.
consulagent_sd_configs:
  [ - <consulagent_sd_config> ... ]

# Describes how to use the Docker daemon API to discover containers running on
# the same host as Promtail.
docker_sd_configs:
  [ - <docker_sd_config> ... ]
```

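Before the per-stage reference below, a sketch of a typical scrape config tying these pieces together: file discovery via `static_configs` plus a small pipeline. The job name, path, and regex are illustrative:

```yaml
scrape_configs:
  - job_name: nginx
    static_configs:
      - targets: [localhost]
        labels:
          job: nginx
          __path__: /var/log/nginx/*.log
    pipeline_stages:
      - regex:
          expression: '^(?P<remote_addr>\S+) .* (?P<status>\d{3})'
      - labels:
          status:
```
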
### pipeline_stages

[Pipeline](../pipelines/) stages are used to transform log entries and their labels. The pipeline is executed after the discovery process finishes. The `pipeline_stages` object consists of a list of stages which correspond to the items listed below.

In most cases, you extract data from logs with `regex` or `json` stages. The extracted data is transformed into a temporary map object. The data can then be used by Promtail, e.g. as values for `labels` or as an `output`. Additionally, any other stage aside from `docker` and `cri` can access the extracted data.

```yaml
- [
    <docker> |
    <cri> |
    <regex> |
    <json> |
    <template> |
    <match> |
    <timestamp> |
    <output> |
    <labels> |
    <metrics> |
    <tenant> |
    <replace>
  ]
```

#### docker

The Docker stage parses the contents of logs from Docker containers, and is defined by name with an empty object:

```yaml
docker: {}
```

The Docker stage will match and parse log lines of this format:

```nohighlight
`{"log":"level=info ts=2019-04-30T02:12:41.844179Z caller=filetargetmanager.go:180 msg=\"Adding target\"\n","stream":"stderr","time":"2019-04-30T02:12:41.8443515Z"}`
```

It automatically extracts `time` into the log's timestamp, `stream` into a label, and the `log` field into the output. This can be very helpful, as Docker wraps your application log in this way and this stage will unwrap it for further pipeline processing of just the log content.

The Docker stage is just a convenience wrapper for this definition:

```yaml
- json:
    output: log
    stream: stream
    timestamp: time
- labels:
    stream:
- timestamp:
    source: timestamp
    format: RFC3339Nano
- output:
    source: output
```

#### cri

The CRI stage parses the contents of logs from CRI containers, and is defined by name with an empty object:

```yaml
cri: {}
```

The CRI stage will match and parse log lines of this format:

```nohighlight
2019-01-01T01:00:00.000000001Z stderr P some log message
```

It automatically extracts `time` into the log's timestamp, `stream` into a label, and the remaining message into the output. This can be very helpful, as CRI wraps your application log in this way and this stage will unwrap it for further pipeline processing of just the log content.

The CRI stage is just a convenience wrapper for this definition:

```yaml
- regex:
    expression: "^(?s)(?P<time>\\S+?) (?P<stream>stdout|stderr) (?P<flags>\\S+?) (?P<content>.*)$"
- labels:
    stream:
- timestamp:
    source: time
    format: RFC3339Nano
- output:
    source: content
```

#### regex

The Regex stage takes a regular expression and extracts captured named groups to
be used in further stages.

```yaml
regex:
  # The RE2 regular expression. Each capture group must be named.
  expression: <string>

  # Name from extracted data to parse. If empty, uses the log message.
  [source: <string>]
```

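For example, a sketch of a regex stage that pulls a level and a duration out of a line such as `level=debug msg="..." duration=15ms`, then promotes the level to a label; the field names are illustrative:

```yaml
- regex:
    expression: 'level=(?P<level>\w+).*duration=(?P<duration>\w+)'
- labels:
    level:
```
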
#### json

The JSON stage parses a log line as JSON and takes
[JMESPath](http://jmespath.org/) expressions to extract data from the JSON to be
used in further stages.

```yaml
json:
  # Set of key/value pairs of JMESPath expressions. The key will be
  # the key in the extracted data while the expression will be the value,
  # evaluated as a JMESPath from the source data.
  expressions:
    [ <string>: <string> ... ]

  # Name from extracted data to parse. If empty, uses the log message.
  [source: <string>]
```

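As an illustration, a sketch that extracts a top-level field and a nested field from a line like `{"level":"info","request":{"path":"/api"}}`; the key names are illustrative:

```yaml
- json:
    expressions:
      level: level
      path: request.path
- labels:
    level:
```
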
#### template

The template stage uses Go's
[`text/template`](https://golang.org/pkg/text/template) language to manipulate
values.

```yaml
template:
  # Name from extracted data to parse. If the key in extracted data doesn't
  # exist, an entry for it will be created.
  source: <string>

  # Go template string to use. In addition to normal template
  # functions, ToLower, ToUpper, Replace, Trim, TrimLeft, TrimRight,
  # TrimPrefix, TrimSuffix, and TrimSpace are available as functions.
  template: <string>
```

Example:

```yaml
template:
  source: level
  template: '{{ if eq .Value "WARN" }}{{ Replace .Value "WARN" "OK" -1 }}{{ else }}{{ .Value }}{{ end }}'
```

#### match

The match stage conditionally executes a set of stages when a log entry matches
a configurable [LogQL](../../../logql/) stream selector.

```yaml
match:
  # LogQL stream selector.
  selector: <string>

  # Names the pipeline. When defined, creates an additional label in
  # the pipeline_duration_seconds histogram, where the value is
  # concatenated with job_name using an underscore.
  [pipeline_name: <string>]

  # Nested set of pipeline stages only if the selector
  # matches the labels of the log entries:
  stages:
    - [
        <docker> |
        <cri> |
        <regex> |
        <json> |
        <template> |
        <match> |
        <timestamp> |
        <output> |
        <labels> |
        <metrics>
      ]
```

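For instance, a sketch that only parses JSON for entries matching a hypothetical `app="nginx"` selector:

```yaml
- match:
    selector: '{app="nginx"}'
    pipeline_name: nginx_json
    stages:
      - json:
          expressions:
            status: status
      - labels:
          status:
```
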
#### timestamp

The timestamp stage parses data from the extracted map and overrides the final
time value of the log that is stored by Loki. If this stage isn't present,
Promtail will associate the timestamp of the log entry with the time that
log entry was read.

```yaml
timestamp:
  # Name from extracted data to use for the timestamp.
  source: <string>

  # Determines how to parse the time string. Can use
  # pre-defined formats by name: [ANSIC UnixDate RubyDate RFC822
  # RFC822Z RFC850 RFC1123 RFC1123Z RFC3339 RFC3339Nano Unix
  # UnixMs UnixUs UnixNs].
  format: <string>

  # IANA Timezone Database string.
  [location: <string>]
```

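As an example, a sketch that extracts a `ts` field shaped like `2022-08-17 13:04:05` and parses it with a Go reference-time layout; the field name and layout are illustrative:

```yaml
- regex:
    expression: '^(?P<ts>\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2})'
- timestamp:
    source: ts
    format: '2006-01-02 15:04:05'
    location: Etc/UTC
```
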
#### output

The output stage takes data from the extracted map and sets the contents of the
log entry that will be stored by Loki.

```yaml
output:
  # Name from extracted data to use for the log entry.
  source: <string>
```

#### labels

The labels stage takes data from the extracted map and sets additional labels
on the log entry that will be sent to Loki.

```yaml
labels:
  # Key is REQUIRED and the name for the label that will be created.
  # Value is optional and will be the name from extracted data whose value
  # will be used for the value of the label. If empty, the value will be
  # inferred to be the same as the key.
  [ <string>: [<string>] ... ]
```

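For example, a sketch where `level` takes its value from the extracted key of the same name, while `component` is filled from an extracted key named `module`; both names are illustrative:

```yaml
- labels:
    level:
    component: module
```
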
#### metrics

The metrics stage allows for defining metrics from the extracted data.

Created metrics are not pushed to Loki and are instead exposed via Promtail's
`/metrics` endpoint. Prometheus should be configured to scrape Promtail to be
able to retrieve the metrics configured by this stage.

```yaml
# A map where the key is the name of the metric and the value is a specific
# metric type.
metrics:
  [<string>: [ <counter> | <gauge> | <histogram> ] ...]
```

##### counter

Defines a counter metric whose value only goes up.

```yaml
# The metric type. Must be Counter.
type: Counter

# Describes the metric.
[description: <string>]

# Key from the extracted data map to use for the metric,
# defaulting to the metric's name if not present.
[source: <string>]

config:
  # Filters down source data and only changes the metric
  # if the targeted value exactly matches the provided string.
  # If not present, all data will match.
  [value: <string>]

  # Must be either "inc" or "add" (case insensitive). If
  # inc is chosen, the metric value will increase by 1 for each
  # log line received that passed the filter. If add is chosen,
  # the extracted value must be convertible to a positive float
  # and its value will be added to the metric.
  action: <string>
```

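To make this concrete, a sketch of a counter that increments once for every line whose extracted `level` value is exactly `error`; the metric and key names are illustrative:

```yaml
- metrics:
    error_lines_total:
      type: Counter
      description: "count of error lines"
      source: level
      config:
        value: error
        action: inc
```
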
##### gauge

Defines a gauge metric whose value can go up or down.

```yaml
# The metric type. Must be Gauge.
type: Gauge

# Describes the metric.
[description: <string>]

# Key from the extracted data map to use for the metric,
# defaulting to the metric's name if not present.
[source: <string>]

config:
  # Filters down source data and only changes the metric
  # if the targeted value exactly matches the provided string.
  # If not present, all data will match.
  [value: <string>]

  # Must be either "set", "inc", "dec", "add", or "sub". If
  # add, set, or sub is chosen, the extracted value must be
  # convertible to a positive float. inc and dec will increment
  # or decrement the metric's value by 1 respectively.
  action: <string>
```

##### histogram

Defines a histogram metric whose values are bucketed.

```yaml
# The metric type. Must be Histogram.
type: Histogram

# Describes the metric.
[description: <string>]

# Key from the extracted data map to use for the metric,
# defaulting to the metric's name if not present.
[source: <string>]

config:
  # Filters down source data and only changes the metric
  # if the targeted value exactly matches the provided string.
  # If not present, all data will match.
  [value: <string>]

  # Must be either "inc" or "add" (case insensitive). If
  # inc is chosen, the metric value will increase by 1 for each
  # log line received that passed the filter. If add is chosen,
  # the extracted value must be convertible to a positive float
  # and its value will be added to the metric.
  action: <string>

  # Holds all the numbers in which to bucket the metric.
  buckets:
    - <int>
```

#### tenant

The tenant stage is an action stage that sets the tenant ID for the log entry
picking it from a field in the extracted data map.

```yaml
tenant:
  # Name from extracted data whose value should be set as tenant ID.
  # Either source or value config option is required, but not both (they
  # are mutually exclusive).
  [ source: <string> ]

  # Value to use to set the tenant ID when this stage is executed. Useful
  # when this stage is included within a conditional pipeline with "match".
  [ value: <string> ]
```

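For instance, a sketch that routes entries matching a hypothetical `team="platform"` selector to a fixed tenant by combining `match` and `tenant`:

```yaml
- match:
    selector: '{team="platform"}'
    stages:
      - tenant:
          value: platform-tenant
```
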
#### replace

The replace stage is a parsing stage that parses a log line using
a regular expression and replaces the log line.

```yaml
replace:
  # The RE2 regular expression. Each named capture group will be added to extracted.
  # Each capture group and named capture group will be replaced with the value given in
  # `replace`
  expression: <string>

  # Name from extracted data to parse. If empty, uses the log message.
  # The replaced value will be assigned back to the source key
  [source: <string>]

  # Value to which the captured group will be replaced. The captured group or the named
  # captured group will be replaced with this value and the log line will be replaced with
  # new replaced values. An empty value will remove the captured group from the log line.
  [replace: <string>]
```

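As an illustration, a sketch that masks anything shaped like a 16-digit card number in the log line; the pattern is deliberately simplistic:

```yaml
- replace:
    expression: '(\d{4}-\d{4}-\d{4}-\d{4})'
    replace: '****-****-****-****'
```
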
### journal

The `journal` block configures reading from the systemd journal from
Promtail. Requires a build of Promtail that has journal support _enabled_. If
using the AMD64 Docker image, this is enabled by default.

```yaml
# When true, log messages from the journal are passed through the
# pipeline as a JSON message with all of the journal entries' original
# fields. When false, the log message is the text content of the MESSAGE
# field from the journal entry.
[json: <boolean> | default = false]

# The oldest relative time from process start that will be read
# and sent to Loki.
[max_age: <duration> | default = 7h]

# Label map to add to every log coming out of the journal
labels:
  [ <labelname>: <labelvalue> ... ]

# Path to a directory to read entries from. Defaults to system
# paths (/var/log/journal and /run/log/journal) when empty.
[path: <string>]
```

**Note**: the `priority` label is available as both a value and a keyword. For example, if `priority` is `3` then the labels will be `__journal_priority` with a value `3` and `__journal_priority_keyword` with the corresponding keyword `err`.

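For example, a sketch of a journal scrape config that also copies the systemd unit from the discovered `__journal__systemd_unit` field into a `unit` label:

```yaml
scrape_configs:
  - job_name: journal
    journal:
      max_age: 12h
      labels:
        job: systemd-journal
    relabel_configs:
      - source_labels: ['__journal__systemd_unit']
        target_label: unit
```
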
### syslog

The `syslog` block configures a syslog listener allowing users to push
logs to Promtail with the syslog protocol.
Currently supported is [IETF Syslog (RFC5424)](https://tools.ietf.org/html/rfc5424)
with and without octet counting.

The recommended deployment is to have a dedicated syslog forwarder like **syslog-ng** or **rsyslog**
in front of Promtail. The forwarder can take care of the various specifications
and transports that exist (UDP, BSD syslog, ...).

[Octet counting](https://tools.ietf.org/html/rfc6587#section-3.4.1) is recommended as the
message framing method. In a stream with [non-transparent framing](https://tools.ietf.org/html/rfc6587#section-3.4.2),
Promtail needs to wait for the next message to catch multi-line messages,
therefore delays between messages can occur.

See recommended output configurations for
[syslog-ng](../scraping#syslog-ng-output-configuration) and
[rsyslog](../scraping#rsyslog-output-configuration). Both configurations enable
IETF Syslog with octet-counting.

You may need to increase the open files limit for the Promtail process
if many clients are connected. (`ulimit -Sn`)

```yaml
# TCP address to listen on. Has the format of "host:port".
listen_address: <string>

# Configure the receiver to use TLS.
tls_config:
  # Certificate and key files sent by the server (required)
  cert_file: <string>
  key_file: <string>

  # CA certificate used to validate client certificate. Enables client certificate verification when specified.
  [ ca_file: <string> ]

# The idle timeout for TCP syslog connections, default is 120 seconds.
idle_timeout: <duration>

# Whether to convert syslog structured data to labels.
# A structured data entry of [example@99999 test="yes"] would become
# the label "__syslog_message_sd_example_99999_test" with the value "yes".
label_structured_data: <bool>

# Label map to add to every log message.
labels:
  [ <labelname>: <labelvalue> ... ]

# Whether Promtail should pass on the timestamp from the incoming syslog message.
# When false, or if no timestamp is present on the syslog message, Promtail will assign the current timestamp to the log when it was processed.
# Default is false
use_incoming_timestamp: <bool>

# Sets the maximum limit to the length of syslog messages
max_message_length: <int>
```

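For instance, a minimal sketch of a syslog listener that keeps incoming timestamps and maps the parsed severity into a label via relabeling:

```yaml
scrape_configs:
  - job_name: syslog
    syslog:
      listen_address: 0.0.0.0:1514
      use_incoming_timestamp: true
      labels:
        job: syslog
    relabel_configs:
      - source_labels: ['__syslog_message_severity']
        target_label: severity
```
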
#### Available Labels

- `__syslog_connection_ip_address`: The remote IP address.
- `__syslog_connection_hostname`: The remote hostname.
- `__syslog_message_severity`: The [syslog severity](https://tools.ietf.org/html/rfc5424#section-6.2.1) parsed from the message. Symbolic name as per [syslog_message.go](https://github.com/influxdata/go-syslog/blob/v2.0.1/rfc5424/syslog_message.go#L184).
- `__syslog_message_facility`: The [syslog facility](https://tools.ietf.org/html/rfc5424#section-6.2.1) parsed from the message. Symbolic name as per [syslog_message.go](https://github.com/influxdata/go-syslog/blob/v2.0.1/rfc5424/syslog_message.go#L235) and `syslog(3)`.
- `__syslog_message_hostname`: The [hostname](https://tools.ietf.org/html/rfc5424#section-6.2.4) parsed from the message.
- `__syslog_message_app_name`: The [app-name field](https://tools.ietf.org/html/rfc5424#section-6.2.5) parsed from the message.
- `__syslog_message_proc_id`: The [procid field](https://tools.ietf.org/html/rfc5424#section-6.2.6) parsed from the message.
- `__syslog_message_msg_id`: The [msgid field](https://tools.ietf.org/html/rfc5424#section-6.2.7) parsed from the message.
- `__syslog_message_sd_<sd_id>[_<iana_enterprise_id>]_<sd_name>`: The [structured-data field](https://tools.ietf.org/html/rfc5424#section-6.3) parsed from the message. The data field `[custom@99770 example="1"]` becomes `__syslog_message_sd_custom_99770_example`.

### loki_push_api

The `loki_push_api` block configures Promtail to expose a [Loki push API](../../../api#post-lokiapiv1push) server.

Each job configured with a `loki_push_api` will expose this API and will require a separate port.

Note the `server` configuration is the same as [server](#server).

Promtail also exposes a second endpoint on `/promtail/api/v1/raw` which expects newline-delimited log lines.
This can be used to send NDJSON or plaintext logs.

```yaml
# The push server configuration options
[server: <server_config>]

# Label map to add to every log line sent to the push API
labels:
  [ <labelname>: <labelvalue> ... ]

# If Promtail should pass on the timestamp from the incoming log or not.
# When false Promtail will assign the current timestamp to the log when it was processed.
# Does not apply to the plaintext endpoint on `/promtail/api/v1/raw`.
[use_incoming_timestamp: <bool> | default = false]
```

See [Example Push Config](#example-push-config)

### windows_events

The `windows_events` block configures Promtail to scrape Windows event logs and send them to Loki.

To subscribe to a specific event stream you need to provide either an `eventlog_name` or an `xpath_query`.

Events are scraped periodically every 3 seconds by default but this can be changed using `poll_interval`.

A bookmark path `bookmark_path` is mandatory and will be used as a position file where Promtail will
keep a record of the last event processed. This file persists across Promtail restarts.

You can set `use_incoming_timestamp` if you want to keep incoming event timestamps. By default Promtail will use the timestamp when
the event was read from the event log.

Promtail will serialize Windows events as JSON, adding `channel` and `computer` labels from the event received.
You can add additional labels with the `labels` property.

```yaml
# LCID (Locale ID) for event rendering
# - 1033 to force English language
# - 0 to use default Windows locale
[locale: <int> | default = 0]

# Name of eventlog, used only if xpath_query is empty
# Example: "Application"
[eventlog_name: <string> | default = ""]

# xpath_query can be defined in short form like "Event/System[EventID=999]"
# or you can form an XML query. Refer to the Consuming Events article:
# https://docs.microsoft.com/en-us/windows/win32/wes/consuming-events
# XML query is the recommended form, because it is most flexible
# You can create or debug XML queries by creating a Custom View in Windows Event Viewer
# and then copying the resulting XML here
[xpath_query: <string> | default = "*"]

# Sets the bookmark location on the filesystem.
# The bookmark contains the current position of the target in XML.
# When restarting or rolling out Promtail, the target will continue to scrape events where it left off based on the bookmark position.
# The position is updated after each entry processed.
[bookmark_path: <string> | default = ""]

# PollInterval is the interval at which we check whether new events are available. By default the target will check every 3 seconds.
[poll_interval: <duration> | default = 3s]

# Exclude the XML event data.
[exclude_event_data: <bool> | default = false]

# Exclude the user data of each Windows event.
[exclude_user_data: <bool> | default = false]

# Label map to add to every log line read from the windows event log
labels:
  [ <labelname>: <labelvalue> ... ]

# If Promtail should pass on the timestamp from the incoming log or not.
# When false Promtail will assign the current timestamp to the log when it was processed
[use_incoming_timestamp: <bool> | default = false]
```

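A minimal sketch of a Windows scrape config using the options above; the bookmark path and labels are illustrative:

```yaml
scrape_configs:
  - job_name: windows
    windows_events:
      eventlog_name: Application
      bookmark_path: C:\promtail\eventlog-bookmark.xml
      use_incoming_timestamp: true
      labels:
        job: windows-events
```
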
### GCP Log

The `gcplog` block configures how Promtail receives GCP logs. There are two strategies, based on the configuration of `subscription_type`:
- **Pull**: Using GCP Pub/Sub [pull subscriptions](https://cloud.google.com/pubsub/docs/pull). Promtail will consume log messages directly from the configured GCP Pub/Sub topic.
- **Push**: Using GCP Pub/Sub [push subscriptions](https://cloud.google.com/pubsub/docs/push). Promtail will expose an HTTP server, and GCP will deliver logs to that server.

When using the `push` subscription type, keep in mind:
- The `server` configuration is the same as [server](#server), since Promtail exposes an HTTP server for this target.
- Promtail exposes an endpoint at `POST /gcp/api/v1/push`, which expects requests from the GCP Pub/Sub message delivery system.

```yaml
# Type of subscription used to fetch logs from GCP. Can be either `pull` (default) or `push`.
[subscription_type: <string> | default = "pull"]

# If the subscription_type is pull, the GCP project ID
[project_id: <string>]

# If the subscription_type is pull, the GCP Pub/Sub subscription from which Promtail will pull logs
[subscription: <string>]

# If the subscription_type is push, the server configuration options
[server: <server_config>]

# Whether Promtail should pass on the timestamp from the incoming GCP Log message.
# When false, or if no timestamp is present in the incoming message, Promtail will assign the current
# timestamp to the log when it was processed.
[use_incoming_timestamp: <boolean> | default = false]

# Label map to add to every log message.
labels:
  [ <labelname>: <labelvalue> ... ]
```

### Available Labels

When Promtail receives GCP logs, various internal labels are made available for [relabeling](#relabeling). This depends on the subscription type chosen.

**Internal labels available for pull**

- `__gcp_logname`
- `__gcp_resource_type`
- `__gcp_resource_labels_<NAME>`

**Internal labels available for push**

- `__gcp_message_id`
- `__gcp_attributes_*`: All attributes read from `.message.attributes` in the incoming push message. Each attribute key is conveniently renamed, since it might contain unsupported characters. For example, `logging.googleapis.com/timestamp` is converted to `__gcp_attributes_logging_googleapis_com_timestamp`.

### kafka

The `kafka` block configures Promtail to scrape logs from [Kafka](https://kafka.apache.org/) using a group consumer.

The `brokers` should list available brokers to communicate with the Kafka cluster. Use multiple brokers when you want to increase availability.

The `topics` is the list of topics Promtail will subscribe to. If a topic starts with `^` then a regular expression ([RE2](https://github.com/google/re2/wiki/Syntax)) is used to match topics.
For instance `^promtail-.*` will match the topics `promtail-dev` and `promtail-prod`. Topics are refreshed every 30 seconds, so if a new topic matches, it will be automatically added without requiring a Promtail restart.

The `group_id` defines the unique consumer group id to use for consuming logs. Each log record published to a topic is delivered to one consumer instance within each subscribing consumer group.

- If all promtail instances have the same consumer group, then the records will effectively be load balanced over the promtail instances.
- If all promtail instances have different consumer groups, then each record will be broadcast to all promtail instances.

The `group_id` is useful if you want to effectively send the data to multiple Loki instances and/or other sinks.

The `assignor` configuration allows you to select the rebalancing strategy to use for the consumer group.
Rebalancing is the process where a group of consumer instances (belonging to the same group) co-ordinate to own a mutually exclusive set of partitions of topics that the group is subscribed to.

- `range`, the default, assigns partitions as ranges to consumer group members.
- `sticky` assigns partitions to members with an attempt to preserve earlier assignments.
- `roundrobin` assigns partitions to members in alternating order.

The `version` allows selecting the Kafka version required to connect to the cluster (default `2.2.1`).

By default, timestamps are assigned by Promtail when the message is read. If you want to keep the actual message timestamp from Kafka, you can set `use_incoming_timestamp` to true.

```yaml
# The list of brokers to connect to kafka (Required).
[brokers: <strings> | default = [""]]

# The list of Kafka topics to consume (Required).
[topics: <strings> | default = [""]]

# The Kafka consumer group id.
[group_id: <string> | default = "promtail"]

# The consumer group rebalancing strategy to use. (e.g `sticky`, `roundrobin` or `range`)
[assignor: <string> | default = "range"]

# Kafka version to connect to.
[version: <string> | default = "2.2.1"]

# Optional authentication configuration with Kafka brokers
authentication:
  # Type is authentication type. Supported values [none, ssl, sasl]
  [type: <string> | default = "none"]

  # TLS configuration for authentication and encryption. It is used only when authentication type is ssl.
  tls_config:
    [ <tls_config> ]

  # SASL configuration for authentication. It is used only when authentication type is sasl.
  sasl_config:
    # SASL mechanism. Supported values [PLAIN, SCRAM-SHA-256, SCRAM-SHA-512]
    [mechanism: <string> | default = "PLAIN"]

    # The user name to use for SASL authentication
    [user: <string>]

    # The password to use for SASL authentication
    [password: <secret>]

    # If true, SASL authentication is executed over TLS
    [use_tls: <boolean> | default = false]

    # The CA file to use to verify the server
    [ca_file: <string>]

    # Validates that the server name in the server's certificate
    # is this value.
    [server_name: <string>]

    # If true, ignores the server certificate being signed by an
    # unknown CA.
    [insecure_skip_verify: <boolean> | default = false]

# Label map to add to every log line read from kafka
labels:
  [ <labelname>: <labelvalue> ... ]

# If Promtail should pass on the timestamp from the incoming log or not.
# When false Promtail will assign the current timestamp to the log when it was processed
[use_incoming_timestamp: <bool> | default = false]
```

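For instance, a minimal sketch of a Kafka scrape config that consumes one topic and keeps the topic name as a label; the broker addresses are placeholders:

```yaml
scrape_configs:
  - job_name: kafka
    kafka:
      brokers: ["kafka-1:9092", "kafka-2:9092"]
      topics: [promtail]
      group_id: promtail
      labels:
        job: kafka
    relabel_configs:
      - source_labels: ['__meta_kafka_topic']
        target_label: topic
```
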
**Available Labels:**

The following labels are discovered when consuming from Kafka:

- `__meta_kafka_topic`: The current topic from which the message has been read.
- `__meta_kafka_partition`: The partition id where the message has been read.
- `__meta_kafka_member_id`: The consumer group member id.
- `__meta_kafka_group_id`: The consumer group id.
- `__meta_kafka_message_key`: The message key. If it is empty, this value will be 'none'.

To keep discovered labels in your logs use the [relabel_configs](#relabel_configs) section.

### GELF

The `gelf` block configures a GELF UDP listener allowing users to push
logs to Promtail with the [GELF](https://docs.graylog.org/docs/gelf) protocol.
Currently only UDP is supported; please submit a feature request if you're interested in TCP support.

> GELF messages can be sent uncompressed or compressed with either GZIP or ZLIB.

Each GELF message received will be encoded in JSON as the log line. For example:

```json
{"version":"1.1","host":"example.org","short_message":"A short message","timestamp":1231231123,"level":5,"_some_extra":"extra"}
```

You can leverage [pipeline stages](pipeline_stages) with the GELF target
if, for example, you want to parse the log line and extract more labels or change the log line format.

```yaml
# UDP address to listen on. Has the format of "host:port". Defaults to 0.0.0.0:12201
listen_address: <string>

# Label map to add to every log message.
labels:
  [ <labelname>: <labelvalue> ... ]

# Whether Promtail should pass on the timestamp from the incoming gelf message.
# When false, or if no timestamp is present on the gelf message, Promtail will assign the current timestamp to the log when it was processed.
# Default is false
use_incoming_timestamp: <bool>
```

**Available Labels:**

- `__gelf_message_level`: The GELF level as string.
- `__gelf_message_host`: The host sending the GELF message.
- `__gelf_message_version`: The GELF message version set by the client.
- `__gelf_message_facility`: The GELF facility.

To keep discovered labels in your logs use the [relabel_configs](#relabel_configs) section.

### Cloudflare

The `cloudflare` block configures Promtail to pull logs from the Cloudflare
[Logpull API](https://developers.cloudflare.com/logs/logpull).

These logs contain data related to the connecting client, the request path through the Cloudflare network, and the response from the origin web server. This data is useful for enriching existing logs on an origin server.

```yaml
# The Cloudflare API token to use. (Required)
# You can create a new token by visiting your [Cloudflare profile](https://dash.cloudflare.com/profile/api-tokens).
api_token: <string>

# The Cloudflare zone id to pull logs for. (Required)
zone_id: <string>

# The time range to pull logs for.
[pull_range: <duration> | default = 1m]

# The quantity of workers that will pull logs.
[workers: <int> | default = 3]

# The set of fields to fetch for logs.
# Supported values: default, minimal, extended, all.
[fields_type: <string> | default = default]

# Label map to add to every log message.
labels:
  [ <labelname>: <labelvalue> ... ]
```

By default Promtail fetches logs with the default set of fields.
Here are the different field sets available and the fields they include:

- `default` includes `"ClientIP", "ClientRequestHost", "ClientRequestMethod", "ClientRequestURI", "EdgeEndTimestamp", "EdgeResponseBytes",
"EdgeRequestHost", "EdgeResponseStatus", "EdgeStartTimestamp", "RayID"`

- `minimal` includes all `default` fields and adds `"ZoneID", "ClientSSLProtocol", "ClientRequestProtocol", "ClientRequestPath", "ClientRequestUserAgent", "ClientRequestReferer",
"EdgeColoCode", "ClientCountry", "CacheCacheStatus", "CacheResponseStatus", "EdgeResponseContentType"`

- `extended` includes all `minimal` fields and adds `"ClientSSLCipher", "ClientASN", "ClientIPClass", "CacheResponseBytes", "EdgePathingOp", "EdgePathingSrc", "EdgePathingStatus", "ParentRayID",
"WorkerCPUTime", "WorkerStatus", "WorkerSubrequest", "WorkerSubrequestCount", "OriginIP", "OriginResponseStatus", "OriginSSLProtocol",
"OriginResponseHTTPExpires", "OriginResponseHTTPLastModified"`

- `all` includes all `extended` fields and adds `"BotScore", "BotScoreSrc", "ClientRequestBytes", "ClientSrcPort", "ClientXRequestedWith", "CacheTieredFill", "EdgeResponseCompressionRatio", "EdgeServerIP", "FirewallMatchesSources",
"FirewallMatchesActions", "FirewallMatchesRuleIDs", "OriginResponseBytes", "OriginResponseTime", "ClientDeviceType", "WAFFlags", "WAFMatchedVar", "EdgeColoID"`

To learn more about each field and its value, refer to the [Cloudflare documentation](https://developers.cloudflare.com/logs/reference/log-fields/zone/http_requests).

Promtail saves the last successfully-fetched timestamp in the position file.
If a position is found in the file for a given zone ID, Promtail will restart pulling logs
from that position. When no position is found, Promtail will start pulling logs from the current time.

Promtail fetches logs using multiple workers (configurable via `workers`) which request the last available pull range
(configured via `pull_range`) repeatedly. Verify the last timestamp fetched by Promtail using the `cloudflare_target_last_requested_end_timestamp` metric.
It is possible for Promtail to fall behind due to having too many log lines to process for each pull.
Adding more workers, decreasing the pull range, or decreasing the quantity of fields fetched can mitigate this performance issue.

All Cloudflare logs are in JSON. Here is an example:

```json
{
	"CacheCacheStatus": "miss",
	"CacheResponseBytes": 8377,
	"CacheResponseStatus": 200,
	"CacheTieredFill": false,
	"ClientASN": 786,
	"ClientCountry": "gb",
	"ClientDeviceType": "desktop",
	"ClientIP": "100.100.5.5",
	"ClientIPClass": "noRecord",
	"ClientRequestBytes": 2691,
	"ClientRequestHost": "www.foo.com",
	"ClientRequestMethod": "GET",
	"ClientRequestPath": "/comments/foo/",
	"ClientRequestProtocol": "HTTP/1.0",
	"ClientRequestReferer": "https://www.foo.com/foo/168855/?offset=8625",
	"ClientRequestURI": "/foo/15248108/",
	"ClientRequestUserAgent": "some bot",
	"ClientSSLCipher": "ECDHE-ECDSA-AES128-GCM-SHA256",
	"ClientSSLProtocol": "TLSv1.2",
	"ClientSrcPort": 39816,
	"ClientXRequestedWith": "",
	"EdgeColoCode": "MAN",
	"EdgeColoID": 341,
	"EdgeEndTimestamp": 1637336610671000000,
	"EdgePathingOp": "wl",
	"EdgePathingSrc": "macro",
	"EdgePathingStatus": "nr",
	"EdgeRateLimitAction": "",
	"EdgeRateLimitID": 0,
	"EdgeRequestHost": "www.foo.com",
	"EdgeResponseBytes": 14878,
	"EdgeResponseCompressionRatio": 1,
	"EdgeResponseContentType": "text/html",
	"EdgeResponseStatus": 200,
	"EdgeServerIP": "8.8.8.8",
	"EdgeStartTimestamp": 1637336610517000000,
	"FirewallMatchesActions": [],
	"FirewallMatchesRuleIDs": [],
	"FirewallMatchesSources": [],
	"OriginIP": "8.8.8.8",
	"OriginResponseBytes": 0,
	"OriginResponseHTTPExpires": "",
	"OriginResponseHTTPLastModified": "",
	"OriginResponseStatus": 200,
	"OriginResponseTime": 123000000,
	"OriginSSLProtocol": "TLSv1.2",
	"ParentRayID": "00",
	"RayID": "6b0a...",
	"SecurityLevel": "med",
	"WAFAction": "unknown",
	"WAFFlags": "0",
	"WAFMatchedVar": "",
	"WAFProfile": "unknown",
	"WAFRuleID": "",
	"WAFRuleMessage": "",
	"WorkerCPUTime": 0,
	"WorkerStatus": "unknown",
	"WorkerSubrequest": false,
	"WorkerSubrequestCount": 0,
	"ZoneID": 1234
}
```

You can leverage [pipeline stages](pipeline_stages) if, for example, you want to parse the JSON log line and extract more labels or change the log line format.

### heroku_drain

The `heroku_drain` block configures Promtail to expose a [Heroku HTTPS Drain](https://devcenter.heroku.com/articles/log-drains#https-drains).

Each job configured with a Heroku Drain will expose a Drain and will require a separate port.

The `server` configuration is the same as [server](#server), since Promtail exposes an HTTP server for each new drain.

Promtail exposes an endpoint at `/heroku/api/v1/drain`, which expects requests from Heroku's log delivery.

```yaml
# The Heroku drain server configuration options
[server: <server_config>]

# Label map to add to every log message.
labels:
  [ <labelname>: <labelvalue> ... ]

# Whether Promtail should pass on the timestamp from the incoming Heroku drain message.
# When false, or if no timestamp is present in the syslog message, Promtail will assign the current
# timestamp to the log when it was processed.
[use_incoming_timestamp: <boolean> | default = false]
```

  1276  #### Available Labels
  1277  
  1278  Heroku Log drains send logs in [Syslog-formatted messages](https://datatracker.ietf.org/doc/html/rfc5424#section-6) (with
  1279  some [minor tweaks](https://devcenter.heroku.com/articles/log-drains#https-drain-caveats); they are not RFC-compatible).
  1280  
  1281  The Heroku Drain target exposes for each log entry the received syslog fields with the following labels:
  1282  
  1283  - `__heroku_drain_host`: The [HOSTNAME](https://tools.ietf.org/html/rfc5424#section-6.2.4) field parsed from the message.
  1284  - `__heroku_drain_app`: The [APP-NAME](https://tools.ietf.org/html/rfc5424#section-6.2.5) field parsed from the message.
  1285  - `__heroku_drain_proc`: The [PROCID](https://tools.ietf.org/html/rfc5424#section-6.2.6) field parsed from the message.
  1286  - `__heroku_drain_log_id`: The [MSGID](https://tools.ietf.org/html/rfc5424#section-6.2.7) field parsed from the message.
  1287  
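As a hedged example (the port, job name, and label names are illustrative assumptions, not requirements), a drain-exposing scrape config could look like:

```yaml
scrape_configs:
  - job_name: heroku
    heroku_drain:
      # Each drain needs its own server section with a dedicated port.
      server:
        http_listen_port: 8080
      labels:
        job: heroku
      use_incoming_timestamp: true
    relabel_configs:
      # Surface the Heroku application name as a queryable label.
      - source_labels: ['__heroku_drain_app']
        target_label: 'app'
```
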
  1288  ### relabel_configs
  1289  
  1290  Relabeling is a powerful tool to dynamically rewrite the label set of a target
  1291  before it gets scraped. Multiple relabeling steps can be configured per scrape
  1292  configuration. They are applied to the label set of each target in order of
  1293  their appearance in the configuration file.
  1294  
  1295  After relabeling, the `instance` label is set to the value of `__address__` by
  1296  default if it was not set during relabeling. The `__scheme__` and
  1297  `__metrics_path__` labels are set to the scheme and metrics path of the target
  1298  respectively. The `__param_<name>` label is set to the value of the first passed
  1299  URL parameter called `<name>`.
  1300  
  1301  Additional labels prefixed with `__meta_` may be available during the relabeling
  1302  phase. They are set by the service discovery mechanism that provided the target
  1303  and vary between mechanisms.
  1304  
  1305  Labels starting with `__` will be removed from the label set after target
  1306  relabeling is completed.
  1307  
  1308  If a relabeling step needs to store a label value only temporarily (as the
  1309  input to a subsequent relabeling step), use the `__tmp` label name prefix. This
  1310  prefix is guaranteed to never be used by Prometheus itself.
  1311  
  1312  ```yaml
  1313  # The source labels select values from existing labels. Their content is concatenated
  1314  # using the configured separator and matched against the configured regular expression
  1315  # for the replace, keep, and drop actions.
  1316  [ source_labels: '[' <labelname> [, ...] ']' ]
  1317  
  1318  # Separator placed between concatenated source label values.
  1319  [ separator: <string> | default = ; ]
  1320  
  1321  # Label to which the resulting value is written in a replace action.
  1322  # It is mandatory for replace actions. Regex capture groups are available.
  1323  [ target_label: <labelname> ]
  1324  
  1325  # Regular expression against which the extracted value is matched.
  1326  [ regex: <regex> | default = (.*) ]
  1327  
  1328  # Modulus to take of the hash of the source label values.
  1329  [ modulus: <uint64> ]
  1330  
  1331  # Replacement value against which a regex replace is performed if the
  1332  # regular expression matches. Regex capture groups are available.
  1333  [ replacement: <string> | default = $1 ]
  1334  
  1335  # Action to perform based on regex matching.
  1336  [ action: <relabel_action> | default = replace ]
  1337  ```
  1338  
  1339  `<regex>` is any valid
  1340  [RE2 regular expression](https://github.com/google/re2/wiki/Syntax). It is
required for the `replace`, `keep`, `drop`, `labelmap`, `labeldrop` and
  1342  `labelkeep` actions. The regex is anchored on both ends. To un-anchor the regex,
  1343  use `.*<regex>.*`.
  1344  
  1345  `<relabel_action>` determines the relabeling action to take:
  1346  
  1347  - `replace`: Match `regex` against the concatenated `source_labels`. Then, set
  1348    `target_label` to `replacement`, with match group references
  1349    (`${1}`, `${2}`, ...) in `replacement` substituted by their value. If `regex`
  1350    does not match, no replacement takes place.
  1351  - `keep`: Drop targets for which `regex` does not match the concatenated `source_labels`.
  1352  - `drop`: Drop targets for which `regex` matches the concatenated `source_labels`.
  1353  - `hashmod`: Set `target_label` to the `modulus` of a hash of the concatenated `source_labels`.
  1354  - `labelmap`: Match `regex` against all label names. Then copy the values of the matching labels
  1355     to label names given by `replacement` with match group references
  1356    (`${1}`, `${2}`, ...) in `replacement` substituted by their value.
  1357  - `labeldrop`: Match `regex` against all label names. Any label that matches will be
  1358    removed from the set of labels.
  1359  - `labelkeep`: Match `regex` against all label names. Any label that does not match will be
  1360    removed from the set of labels.
  1361  
  1362  Care must be taken with `labeldrop` and `labelkeep` to ensure that logs are
  1363  still uniquely labeled once the labels are removed.
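
As an illustrative sketch (the label names and regex are hypothetical), a pair of relabeling steps that first filters targets and then rewrites a label could look like:

```yaml
relabel_configs:
  # Drop any target whose filename label matches the regex.
  - source_labels: ['filename']
    regex: '.*\.gz'
    action: drop
  # Copy the target address into a custom "node" label.
  - source_labels: ['__address__']
    target_label: 'node'
```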
  1364  
  1365  ### static_configs
  1366  
The `static_configs` block allows specifying a list of targets and a common label set
for them. It is the canonical way to specify static targets in a scrape
configuration.
  1370  
  1371  ```yaml
  1372  # Configures the discovery to look on the current machine.
  1373  # This is required by the prometheus service discovery code but doesn't
  1374  # really apply to Promtail which can ONLY look at files on the local machine
  1375  # As such it should only have the value of localhost, OR it can be excluded
  1376  # entirely and a default value of localhost will be applied by Promtail.
  1377  targets:
  1378    - localhost
  1379  
  1380  # Defines a file to scrape and an optional set of additional labels to apply to
  1381  # all streams defined by the files from __path__.
  1382  labels:
  1383    # The path to load logs from. Can use glob patterns (e.g., /var/log/*.log).
  1384    __path__: <string>
  1385  
  1386    # Used to exclude files from being loaded. Can also use glob patterns.
  1387    __path_exclude__: <string>
  1388  
  1389    # Additional labels to assign to the logs
  1390    [ <labelname>: <labelvalue> ... ]
  1391  ```
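
A filled-in instance, with a hypothetical job name and log paths, might look like:

```yaml
static_configs:
  - targets:
      - localhost
    labels:
      job: nginx
      __path__: /var/log/nginx/*.log
      __path_exclude__: /var/log/nginx/error.log
```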
  1392  
  1393  ### file_sd_config
  1394  
  1395  File-based service discovery provides a more generic way to configure static
  1396  targets and serves as an interface to plug in custom service discovery
  1397  mechanisms.
  1398  
  1399  It reads a set of files containing a list of zero or more
  1400  `<static_config>`s. Changes to all defined files are detected via disk watches
  1401  and applied immediately. Files may be provided in YAML or JSON format. Only
  1402  changes resulting in well-formed target groups are applied.
  1403  
  1404  The JSON file must contain a list of static configs, using this format:
  1405  
```json
  1407  [
  1408    {
  1409      "targets": [ "localhost" ],
  1410      "labels": {
  1411        "__path__": "<string>", ...
  1412        "<labelname>": "<labelvalue>", ...
  1413      }
  1414    },
  1415    ...
  1416  ]
  1417  ```
  1418  
  1419  As a fallback, the file contents are also re-read periodically at the specified
  1420  refresh interval.
  1421  
  1422  Each target has a meta label `__meta_filepath` during the
[relabeling phase](#relabel_configs). Its value is set to the
  1424  filepath from which the target was extracted.
  1425  
  1426  ```yaml
  1427  # Patterns for files from which target groups are extracted.
  1428  files:
  1429    [ - <filename_pattern> ... ]
  1430  
  1431  # Refresh interval to re-read the files.
  1432  [ refresh_interval: <duration> | default = 5m ]
  1433  ```
  1434  
  1435  Where `<filename_pattern>` may be a path ending in `.json`, `.yml` or `.yaml`.
  1436  The last path segment may contain a single `*` that matches any character
  1437  sequence, e.g. `my/path/tg_*.json`.
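
Putting this together, a hedged sketch of file-based discovery (the paths, job name, and labels are hypothetical) could be:

```yaml
scrape_configs:
  - job_name: file_sd
    file_sd_configs:
      - files:
          - /etc/promtail/targets/*.json
        refresh_interval: 1m
```

where a target file such as `/etc/promtail/targets/app.json` might contain:

```json
[
  {
    "targets": ["localhost"],
    "labels": {
      "job": "app",
      "__path__": "/var/log/app/*.log"
    }
  }
]
```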
  1438  
  1439  ### kubernetes_sd_config
  1440  
  1441  Kubernetes SD configurations allow retrieving scrape targets from
  1442  [Kubernetes'](https://kubernetes.io/) REST API and always staying synchronized
  1443  with the cluster state.
  1444  
  1445  One of the following `role` types can be configured to discover targets:
  1446  
  1447  #### `node`
  1448  
  1449  The `node` role discovers one target per cluster node with the address
  1450  defaulting to the Kubelet's HTTP port.
  1451  
  1452  The target address defaults to the first existing address of the Kubernetes
  1453  node object in the address type order of `NodeInternalIP`, `NodeExternalIP`,
  1454  `NodeLegacyHostIP`, and `NodeHostName`.
  1455  
  1456  Available meta labels:
  1457  
  1458  - `__meta_kubernetes_node_name`: The name of the node object.
  1459  - `__meta_kubernetes_node_label_<labelname>`: Each label from the node object.
  1460  - `__meta_kubernetes_node_labelpresent_<labelname>`: `true` for each label from the node object.
  1461  - `__meta_kubernetes_node_annotation_<annotationname>`: Each annotation from the node object.
  1462  - `__meta_kubernetes_node_annotationpresent_<annotationname>`: `true` for each annotation from the node object.
  1463  - `__meta_kubernetes_node_address_<address_type>`: The first address for each node address type, if it exists.
  1464  
  1465  In addition, the `instance` label for the node will be set to the node name
  1466  as retrieved from the API server.
  1467  
  1468  #### `service`
  1469  
  1470  The `service` role discovers a target for each service port of each service.
  1471  This is generally useful for blackbox monitoring of a service.
  1472  The address will be set to the Kubernetes DNS name of the service and respective
  1473  service port.
  1474  
  1475  Available meta labels:
  1476  
  1477  - `__meta_kubernetes_namespace`: The namespace of the service object.
  1478  - `__meta_kubernetes_service_annotation_<annotationname>`: Each annotation from the service object.
- `__meta_kubernetes_service_annotationpresent_<annotationname>`: `true` for each annotation of the service object.
  1480  - `__meta_kubernetes_service_cluster_ip`: The cluster IP address of the service. (Does not apply to services of type ExternalName)
  1481  - `__meta_kubernetes_service_external_name`: The DNS name of the service. (Applies to services of type ExternalName)
  1482  - `__meta_kubernetes_service_label_<labelname>`: Each label from the service object.
  1483  - `__meta_kubernetes_service_labelpresent_<labelname>`: `true` for each label of the service object.
  1484  - `__meta_kubernetes_service_name`: The name of the service object.
  1485  - `__meta_kubernetes_service_port_name`: Name of the service port for the target.
  1486  - `__meta_kubernetes_service_port_protocol`: Protocol of the service port for the target.
  1487  
  1488  #### `pod`
  1489  
  1490  The `pod` role discovers all pods and exposes their containers as targets. For
  1491  each declared port of a container, a single target is generated. If a container
  1492  has no specified ports, a port-free target per container is created for manually
  1493  adding a port via relabeling.
  1494  
  1495  Available meta labels:
  1496  
  1497  - `__meta_kubernetes_namespace`: The namespace of the pod object.
  1498  - `__meta_kubernetes_pod_name`: The name of the pod object.
  1499  - `__meta_kubernetes_pod_ip`: The pod IP of the pod object.
  1500  - `__meta_kubernetes_pod_label_<labelname>`: Each label from the pod object.
- `__meta_kubernetes_pod_labelpresent_<labelname>`: `true` for each label from the pod object.
  1502  - `__meta_kubernetes_pod_annotation_<annotationname>`: Each annotation from the pod object.
  1503  - `__meta_kubernetes_pod_annotationpresent_<annotationname>`: `true` for each annotation from the pod object.
- `__meta_kubernetes_pod_container_init`: `true` if the container is an [InitContainer](https://kubernetes.io/docs/concepts/workloads/pods/init-containers/).
  1505  - `__meta_kubernetes_pod_container_name`: Name of the container the target address points to.
  1506  - `__meta_kubernetes_pod_container_port_name`: Name of the container port.
  1507  - `__meta_kubernetes_pod_container_port_number`: Number of the container port.
  1508  - `__meta_kubernetes_pod_container_port_protocol`: Protocol of the container port.
  1509  - `__meta_kubernetes_pod_ready`: Set to `true` or `false` for the pod's ready state.
  1510  - `__meta_kubernetes_pod_phase`: Set to `Pending`, `Running`, `Succeeded`, `Failed` or `Unknown`
  1511    in the [lifecycle](https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle/#pod-phase).
  1512  - `__meta_kubernetes_pod_node_name`: The name of the node the pod is scheduled onto.
  1513  - `__meta_kubernetes_pod_host_ip`: The current host IP of the pod object.
  1514  - `__meta_kubernetes_pod_uid`: The UID of the pod object.
  1515  - `__meta_kubernetes_pod_controller_kind`: Object kind of the pod controller.
  1516  - `__meta_kubernetes_pod_controller_name`: Name of the pod controller.
  1517  
  1518  #### `endpoints`
  1519  
  1520  The `endpoints` role discovers targets from listed endpoints of a service. For
  1521  each endpoint address one target is discovered per port. If the endpoint is
  1522  backed by a pod, all additional container ports of the pod, not bound to an
  1523  endpoint port, are discovered as targets as well.
  1524  
  1525  Available meta labels:
  1526  
  1527  - `__meta_kubernetes_namespace`: The namespace of the endpoints object.
- `__meta_kubernetes_endpoints_name`: The name of the endpoints object.
  1529  - For all targets discovered directly from the endpoints list (those not additionally inferred
  1530    from underlying pods), the following labels are attached:
  1531    - `__meta_kubernetes_endpoint_hostname`: Hostname of the endpoint.
  1532    - `__meta_kubernetes_endpoint_node_name`: Name of the node hosting the endpoint.
  1533    - `__meta_kubernetes_endpoint_ready`: Set to `true` or `false` for the endpoint's ready state.
  1534    - `__meta_kubernetes_endpoint_port_name`: Name of the endpoint port.
  1535    - `__meta_kubernetes_endpoint_port_protocol`: Protocol of the endpoint port.
  1536    - `__meta_kubernetes_endpoint_address_target_kind`: Kind of the endpoint address target.
  1537    - `__meta_kubernetes_endpoint_address_target_name`: Name of the endpoint address target.
  1538  - If the endpoints belong to a service, all labels of the `role: service` discovery are attached.
  1539  - For all targets backed by a pod, all labels of the `role: pod` discovery are attached.
  1540  
  1541  #### `ingress`
  1542  
  1543  The `ingress` role discovers a target for each path of each ingress.
  1544  This is generally useful for blackbox monitoring of an ingress.
  1545  The address will be set to the host specified in the ingress spec.
  1546  
  1547  Available meta labels:
  1548  
  1549  - `__meta_kubernetes_namespace`: The namespace of the ingress object.
  1550  - `__meta_kubernetes_ingress_name`: The name of the ingress object.
  1551  - `__meta_kubernetes_ingress_label_<labelname>`: Each label from the ingress object.
  1552  - `__meta_kubernetes_ingress_labelpresent_<labelname>`: `true` for each label from the ingress object.
  1553  - `__meta_kubernetes_ingress_annotation_<annotationname>`: Each annotation from the ingress object.
  1554  - `__meta_kubernetes_ingress_annotationpresent_<annotationname>`: `true` for each annotation from the ingress object.
  1555  - `__meta_kubernetes_ingress_scheme`: Protocol scheme of ingress, `https` if TLS
  1556    config is set. Defaults to `http`.
  1557  - `__meta_kubernetes_ingress_path`: Path from ingress spec. Defaults to `/`.
  1558  
  1559  See below for the configuration options for Kubernetes discovery:
  1560  
  1561  ```yaml
  1562  # The information to access the Kubernetes API.
  1563  
  1564  # The API server addresses. If left empty, Prometheus is assumed to run inside
  1565  # of the cluster and will discover API servers automatically and use the pod's
  1566  # CA certificate and bearer token file at /var/run/secrets/kubernetes.io/serviceaccount/.
  1567  [ api_server: <host> ]
  1568  
  1569  # The Kubernetes role of entities that should be discovered.
  1570  role: <role>
  1571  
  1572  # Optional authentication information used to authenticate to the API server.
  1573  # Note that `basic_auth`, `bearer_token` and `bearer_token_file` options are
  1574  # mutually exclusive.
  1575  # password and password_file are mutually exclusive.
  1576  
  1577  # Optional HTTP basic authentication information.
  1578  basic_auth:
  1579    [ username: <string> ]
  1580    [ password: <secret> ]
  1581    [ password_file: <string> ]
  1582  
  1583  # Optional bearer token authentication information.
  1584  [ bearer_token: <secret> ]
  1585  
  1586  # Optional bearer token file authentication information.
  1587  [ bearer_token_file: <filename> ]
  1588  
  1589  # Optional proxy URL.
  1590  [ proxy_url: <string> ]
  1591  
  1592  # TLS configuration.
  1593  tls_config:
  1594    [ <tls_config> ]
  1595  
  1596  # Optional namespace discovery. If omitted, all namespaces are used.
  1597  namespaces:
  1598    names:
  1599      [ - <string> ]
  1600  ```
  1601  
  1602  Where `<role>` must be `endpoints`, `service`, `pod`, `node`, or
  1603  `ingress`.
  1604  
  1605  See
  1606  [this example Prometheus configuration file](https://github.com/prometheus/prometheus/blob/master/documentation/examples/prometheus-kubernetes.yml)
  1607  for a detailed example of configuring Prometheus for Kubernetes.
  1608  
  1609  You may wish to check out the 3rd party
  1610  [Prometheus Operator](https://github.com/coreos/prometheus-operator),
  1611  which automates the Prometheus setup on top of Kubernetes.
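
For Promtail itself, a hedged sketch of a `pod`-role scrape config (the relabeling mirrors common community configs such as the helm chart, but none of it is mandatory) might be:

```yaml
scrape_configs:
  - job_name: kubernetes-pods
    kubernetes_sd_configs:
      - role: pod
    relabel_configs:
      # Surface pod metadata as stable labels.
      - source_labels: ['__meta_kubernetes_namespace']
        target_label: 'namespace'
      - source_labels: ['__meta_kubernetes_pod_name']
        target_label: 'pod'
      # Point Promtail at the per-pod log files on the node;
      # $1 expands to "<pod uid>/<container name>".
      - source_labels: ['__meta_kubernetes_pod_uid', '__meta_kubernetes_pod_container_name']
        separator: '/'
        target_label: '__path__'
        replacement: '/var/log/pods/*$1/*.log'
```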
  1612  
  1613  ### consul_sd_config
  1614  
  1615  Consul SD configurations allow retrieving scrape targets from the [Consul Catalog API](https://www.consul.io).
  1616  When using the Catalog API, each running Promtail will get
a list of all services known to the whole Consul cluster when discovering
  1618  new targets.
  1619  
  1620  The following meta labels are available on targets during [relabeling](#relabel_configs):
  1621  
  1622  * `__meta_consul_address`: the address of the target
  1623  * `__meta_consul_dc`: the datacenter name for the target
  1624  * `__meta_consul_health`: the health status of the service
  1625  * `__meta_consul_metadata_<key>`: each node metadata key value of the target
  1626  * `__meta_consul_node`: the node name defined for the target
  1627  * `__meta_consul_service_address`: the service address of the target
  1628  * `__meta_consul_service_id`: the service ID of the target
  1629  * `__meta_consul_service_metadata_<key>`: each service metadata key value of the target
  1630  * `__meta_consul_service_port`: the service port of the target
  1631  * `__meta_consul_service`: the name of the service the target belongs to
  1632  * `__meta_consul_tagged_address_<key>`: each node tagged address key value of the target
  1633  * `__meta_consul_tags`: the list of tags of the target joined by the tag separator
  1634  
  1635  ```yaml
  1636  # The information to access the Consul Catalog API. It is to be defined
  1637  # as the Consul documentation requires.
  1638  [ server: <host> | default = "localhost:8500" ]
  1639  [ token: <secret> ]
  1640  [ datacenter: <string> ]
  1641  [ scheme: <string> | default = "http" ]
  1642  [ username: <string> ]
  1643  [ password: <secret> ]
  1644  
  1645  tls_config:
  1646    [ <tls_config> ]
  1647  
  1648  # A list of services for which targets are retrieved. If omitted, all services
  1649  # are scraped.
  1650  services:
  1651    [ - <string> ]
  1652  
  1653  # See https://www.consul.io/api/catalog.html#list-nodes-for-service to know more
  1654  # about the possible filters that can be used.
  1655  
  1656  # An optional list of tags used to filter nodes for a given service. Services must contain all tags in the list.
  1657  tags:
  1658    [ - <string> ]
  1659  
  1660  # Node metadata key/value pairs to filter nodes for a given service.
  1661  [ node_meta:
  1662    [ <string>: <string> ... ] ]
  1663  
  1664  # The string by which Consul tags are joined into the tag label.
  1665  [ tag_separator: <string> | default = , ]
  1666  
  1667  # Allow stale Consul results (see https://www.consul.io/api/features/consistency.html). Will reduce load on Consul.
  1668  [ allow_stale: <boolean> | default = true ]
  1669  
  1670  # The time after which the provided names are refreshed.
# On large setups it might be a good idea to increase this value because the catalog will change all the time.
  1672  [ refresh_interval: <duration> | default = 30s ]
  1673  ```
  1674  
Note that the IP address and port used to scrape the targets are assembled as
  1676  `<__meta_consul_address>:<__meta_consul_service_port>`. However, in some
  1677  Consul setups, the relevant address is in `__meta_consul_service_address`.
  1678  In those cases, you can use the [relabel](#relabel_configs)
  1679  feature to replace the special `__address__` label.
  1680  
  1681  The [relabeling phase](#relabel_configs) is the preferred and more powerful
  1682  way to filter services or nodes for a service based on arbitrary labels. For
  1683  users with thousands of services it can be more efficient to use the Consul API
  1684  directly which has basic support for filtering nodes (currently by node
  1685  metadata and a single tag).
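
As a hedged sketch (the service name and log path layout are hypothetical), a Consul-backed scrape config could look like:

```yaml
scrape_configs:
  - job_name: consul-services
    consul_sd_configs:
      - server: 'localhost:8500'
        services: ['web']
    relabel_configs:
      # Keep the Consul service name as a queryable label.
      - source_labels: ['__meta_consul_service']
        target_label: 'service'
      # Promtail tails files, so a __path__ label must still be assigned;
      # here the service name ($1) picks the log directory.
      - source_labels: ['__meta_consul_service']
        target_label: '__path__'
        replacement: '/var/log/$1/*.log'
```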
  1686  
  1687  ### consulagent_sd_config
  1688  
  1689  Consul Agent SD configurations allow retrieving scrape targets from [Consul's](https://www.consul.io)
  1690  Agent API. When using the Agent API, each running Promtail will only get
  1691  services registered with the local agent running on the same host when discovering
  1692  new targets. This is suitable for very large Consul clusters for which using the
  1693  Catalog API would be too slow or resource intensive.
  1694  
  1695  The following meta labels are available on targets during [relabeling](#relabel_configs):
  1696  
  1697  * `__meta_consulagent_address`: the address of the target
  1698  * `__meta_consulagent_dc`: the datacenter name for the target
  1699  * `__meta_consulagent_health`: the health status of the service
  1700  * `__meta_consulagent_metadata_<key>`: each node metadata key value of the target
  1701  * `__meta_consulagent_node`: the node name defined for the target
  1702  * `__meta_consulagent_service_address`: the service address of the target
  1703  * `__meta_consulagent_service_id`: the service ID of the target
  1704  * `__meta_consulagent_service_metadata_<key>`: each service metadata key value of the target
  1705  * `__meta_consulagent_service_port`: the service port of the target
  1706  * `__meta_consulagent_service`: the name of the service the target belongs to
  1707  * `__meta_consulagent_tagged_address_<key>`: each node tagged address key value of the target
  1708  * `__meta_consulagent_tags`: the list of tags of the target joined by the tag separator
  1709  
  1710  ```yaml
  1711  # The information to access the Consul Agent API. It is to be defined
  1712  # as the Consul documentation requires.
  1713  [ server: <host> | default = "localhost:8500" ]
  1714  [ token: <secret> ]
  1715  [ datacenter: <string> ]
  1716  [ scheme: <string> | default = "http" ]
  1717  [ username: <string> ]
  1718  [ password: <secret> ]
  1719  
  1720  tls_config:
  1721    [ <tls_config> ]
  1722  
  1723  # A list of services for which targets are retrieved. If omitted, all services
  1724  # are scraped.
  1725  services:
  1726    [ - <string> ]
  1727  
  1728  # See https://www.consul.io/api-docs/agent/service#filtering to know more
  1729  # about the possible filters that can be used.
  1730  
  1731  # An optional list of tags used to filter nodes for a given service. Services must contain all tags in the list.
  1732  tags:
  1733    [ - <string> ]
  1734  
  1735  # Node metadata key/value pairs to filter nodes for a given service.
  1736  [ node_meta:
  1737    [ <string>: <string> ... ] ]
  1738  
  1739  # The string by which Consul tags are joined into the tag label.
  1740  [ tag_separator: <string> | default = , ]
  1741  ```
  1742  
Note that the IP address and port number used to scrape the targets are assembled as
`<__meta_consulagent_address>:<__meta_consulagent_service_port>`. However, in some
Consul setups, the relevant address is in `__meta_consulagent_service_address`.
  1746  In those cases, you can use the [relabel](#relabel_configs)
  1747  feature to replace the special `__address__` label.
  1748  
  1749  The [relabeling phase](#relabel_configs) is the preferred and more powerful
  1750  way to filter services or nodes for a service based on arbitrary labels. For
  1751  users with thousands of services it can be more efficient to use the Consul API
  1752  directly which has basic support for filtering nodes (currently by node
  1753  metadata and a single tag).
  1754  
  1755  ### docker_sd_config
  1756  
  1757  Docker service discovery allows retrieving targets from a Docker daemon.
  1758  It will only watch containers of the Docker daemon referenced with the host parameter. Docker
  1759  service discovery should run on each node in a distributed setup. The containers must run with
  1760  either the [json-file](https://docs.docker.com/config/containers/logging/json-file/)
  1761  or [journald](https://docs.docker.com/config/containers/logging/journald/) logging driver.
  1762  
  1763  Please note that the discovery will not pick up finished containers. That means
  1764  Promtail will not scrape the remaining logs from finished containers after a restart.
  1765  
  1766  The configuration is inherited from [Prometheus' Docker service discovery](https://prometheus.io/docs/prometheus/latest/configuration/configuration/#docker_sd_config).
  1767  
  1768  ```yaml
  1769  # Address of the Docker daemon.  Use unix:///var/run/docker.sock for a local setup.
  1770  host: <string>
  1771  
  1772  # Optional proxy URL.
  1773  [ proxy_url: <string> ]
  1774  
  1775  # TLS configuration.
  1776  tls_config:
  1777    [ <tls_config> ]
  1778  
# The port to use for discovered containers that don't have any published
# ports.
[ port: <int> | default = 80 ]
  1782  
  1783  # The host to use if the container is in host networking mode.
  1784  [ host_networking_host: <string> | default = "localhost" ]
  1785  
  1786  # Optional filters to limit the discovery process to a subset of available
  1787  # resources.
  1788  # The available filters are listed in the Docker documentation:
  1789  # Containers: https://docs.docker.com/engine/api/v1.41/#operation/ContainerList
  1790  [ filters:
  1791    [ - name: <string>
  1792        values: <string>, [...] ]
  1793  ]
  1794  
  1795  # The time after which the containers are refreshed.
  1796  [ refresh_interval: <duration> | default = 60s ]
  1797  
  1798  # Authentication information used by Promtail to authenticate itself to the
  1799  # Docker daemon.
  1800  # Note that `basic_auth` and `authorization` options are mutually exclusive.
  1801  # `password` and `password_file` are mutually exclusive.
  1802  
  1803  # Optional HTTP basic authentication information.
  1804  basic_auth:
  1805    [ username: <string> ]
  1806    [ password: <secret> ]
  1807    [ password_file: <string> ]
  1808  
  1809  # Optional `Authorization` header configuration.
  1810  authorization:
  1811    # Sets the authentication type.
  1812    [ type: <string> | default: Bearer ]
  1813    # Sets the credentials. It is mutually exclusive with
  1814    # `credentials_file`.
  1815    [ credentials: <secret> ]
  1816    # Sets the credentials to the credentials read from the configured file.
  1817    # It is mutually exclusive with `credentials`.
  1818    [ credentials_file: <filename> ]
  1819  
  1820  # Optional OAuth 2.0 configuration.
  1821  # Cannot be used at the same time as basic_auth or authorization.
  1822  oauth2:
  1823    [ <oauth2> ]
  1824  
  1825  # Configure whether HTTP requests follow HTTP 3xx redirects.
  1826  [ follow_redirects: <bool> | default = true ]
  1827  ```
  1828  
  1829  Available meta labels:
  1830  
  1831    * `__meta_docker_container_id`: the ID of the container
  1832    * `__meta_docker_container_name`: the name of the container
  1833    * `__meta_docker_container_network_mode`: the network mode of the container
  1834    * `__meta_docker_container_label_<labelname>`: each label of the container
  1835    * `__meta_docker_container_log_stream`: the log stream type `stdout` or `stderr`
  1836    * `__meta_docker_network_id`: the ID of the network
  1837    * `__meta_docker_network_name`: the name of the network
  1838    * `__meta_docker_network_ingress`: whether the network is ingress
  1839    * `__meta_docker_network_internal`: whether the network is internal
  1840    * `__meta_docker_network_label_<labelname>`: each label of the network
  1841    * `__meta_docker_network_scope`: the scope of the network
  1842    * `__meta_docker_network_ip`: the IP of the container in this network
  1843    * `__meta_docker_port_private`: the port on the container
  1844    * `__meta_docker_port_public`: the external port if a port-mapping exists
  1845    * `__meta_docker_port_public_ip`: the public IP if a port-mapping exists
  1846  
  1847  These labels can be used during relabeling. For instance, the following configuration scrapes the container named `flog` and removes the leading slash (`/`) from the container name.
  1848  
  1849  ```yaml
  1850  scrape_configs:
  1851    - job_name: flog_scrape 
  1852      docker_sd_configs:
  1853        - host: unix:///var/run/docker.sock
  1854          refresh_interval: 5s
  1855          filters:
  1856            - name: name
  1857              values: [flog] 
  1858      relabel_configs:
  1859        - source_labels: ['__meta_docker_container_name']
  1860          regex: '/(.*)'
  1861          target_label: 'container'
  1862  ```
  1863  
  1864  ## limits_config
  1865  
  1866  The optional `limits_config` block configures global limits for this instance of Promtail.
  1867  
  1868  ```yaml
  1869  # When true, enforces rate limiting on this instance of Promtail.
  1870  [readline_rate_enabled: <bool> | default = false]
  1871  
  1872  # The rate limit in log lines per second that this instance of Promtail may push to Loki.
  1873  [readline_rate: <int> | default = 10000]
  1874  
  1875  # The cap in the quantity of burst lines that this instance of Promtail may push
  1876  # to Loki.
  1877  [readline_burst: <int> | default = 10000]
  1878  
  1879  # When true, exceeding the rate limit causes this instance of Promtail to discard
  1880  # log lines, rather than sending them to Loki. When false, exceeding the rate limit
  1881  # causes this instance of Promtail to temporarily hold off on sending the log lines and retry later.
  1882  [readline_rate_drop: <bool> | default = true]
  1883  ```
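
A concrete instance with hypothetical values (tune them to your log volume) might be:

```yaml
limits_config:
  readline_rate_enabled: true
  readline_rate: 5000
  readline_burst: 10000
  # Hold lines back and retry instead of dropping them.
  readline_rate_drop: false
```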
  1884  
  1885  ## target_config
  1886  
  1887  The `target_config` block controls the behavior of reading files from discovered
  1888  targets.
  1889  
  1890  ```yaml
  1891  # Period to resync directories being watched and files being tailed to discover
  1892  # new ones or stop watching removed ones.
  1893  sync_period: "10s"
  1894  ```
  1895  
  1896  ## options_config
  1897  
  1898  ```yaml
  1899  # A comma-separated list of labels to include in the stream lag metric
  1900  # `promtail_stream_lag_seconds`. The default value is "filename". A "host" label is
  1901  # always included. The stream lag metric indicates which streams are falling behind
  1902  # on writes to Loki; be mindful about using too many labels,
  1903  # as it can increase cardinality.
  1904  [stream_lag_labels: <string> | default = "filename"]
  1905  ```
  1906  
  1907  ## Example Docker Config
  1908  
  1909  It's fairly difficult to tail Docker files on a standalone machine because they are in different locations for every OS.  We recommend the [Docker logging driver](../../docker-driver/) for local Docker installs or Docker Compose.
  1910  
If running in a Kubernetes environment, you should look at the defined configs in [helm](https://github.com/grafana/helm-charts/blob/main/charts/promtail/templates/configmap.yaml) and [jsonnet](https://github.com/grafana/loki/tree/master/production/ksonnet/promtail/scrape_config.libsonnet). These leverage the Prometheus service discovery libraries (which give Promtail its name) to automatically find and tail pods.  The jsonnet config explains with comments what each section is for.
  1912  
  1913  
  1914  ## Example Static Config
  1915  
While Promtail may have been named for the Prometheus service discovery code, that same code works very well for tailing logs directly on virtual machines or bare metal, without containers or a container environment.
  1917  
  1918  ```yaml
  1919  server:
  1920    http_listen_port: 9080
  1921    grpc_listen_port: 0
  1922  
  1923  positions:
  1924    filename: /var/log/positions.yaml # This location needs to be writeable by Promtail.
  1925  
  1926  clients:
  1927    - url: http://ip_or_hostname_where_Loki_run:3100/loki/api/v1/push
  1928  
  1929  scrape_configs:
  1930   - job_name: system
  1931     pipeline_stages:
  1932     static_configs:
  1933     - targets:
  1934        - localhost
  1935       labels:
  1936        job: varlogs  # A `job` label is fairly standard in prometheus and useful for linking metrics and logs.
  1937        host: yourhost # A `host` label will help identify logs from this machine vs others
  1938        __path__: /var/log/*.log  # The path matching uses a third party library: https://github.com/bmatcuk/doublestar
  1939  ```
  1940  
If you are rotating logs, be careful when using a wildcard pattern like `*.log`, and make sure it doesn't match the rotated log file. For example, if you move your logs from `server.log` to `server.01-01-1970.log` in the same directory every night, a static config with a wildcard search pattern like `*.log` will pick up that new file and read it, effectively causing the entire day's logs to be re-ingested.
  1942  
  1943  ## Example Static Config without targets
  1944  
While Promtail may have been named for the Prometheus service discovery code, that same code works very well for tailing logs directly on virtual machines or bare metal, without containers or a container environment.
  1946  
  1947  ```yaml
  1948  server:
  1949    http_listen_port: 9080
  1950    grpc_listen_port: 0
  1951  
  1952  positions:
  1953    filename: /var/log/positions.yaml # This location needs to be writeable by Promtail.
  1954  
  1955  clients:
  1956    - url: http://ip_or_hostname_where_Loki_run:3100/loki/api/v1/push
  1957  
  1958  scrape_configs:
  1959   - job_name: system
  1960     pipeline_stages:
  1961     static_configs:
  1962     - labels:
  1963        job: varlogs  # A `job` label is fairly standard in prometheus and useful for linking metrics and logs.
  1964        host: yourhost # A `host` label will help identify logs from this machine vs others
  1965        __path__: /var/log/*.log  # The path matching uses a third party library: https://github.com/bmatcuk/doublestar
  1966  ```
  1967  
  1968  ## Example Journal Config
  1969  
  1970  This example reads entries from a systemd journal:
  1971  
  1972  ```yaml
  1973  server:
  1974    http_listen_port: 9080
  1975    grpc_listen_port: 0
  1976  
  1977  positions:
  1978    filename: /tmp/positions.yaml
  1979  
  1980  clients:
  1981    - url: http://ip_or_hostname_where_loki_runs:3100/loki/api/v1/push
  1982  
  1983  scrape_configs:
  1984    - job_name: journal
  1985      journal:
  1986        max_age: 12h
  1987        labels:
  1988          job: systemd-journal
  1989      relabel_configs:
  1990        - source_labels: ['__journal__systemd_unit']
  1991          target_label: 'unit'
  1992  ```
  1993  
  1994  ## Example Syslog Config
  1995  
This example starts Promtail as a syslog receiver, accepting syslog entries over TCP:
  1997  
  1998  ```yaml
  1999  server:
  2000    http_listen_port: 9080
  2001    grpc_listen_port: 0
  2002  
  2003  positions:
  2004    filename: /tmp/positions.yaml
  2005  
  2006  clients:
  2007    - url: http://loki_addr:3100/loki/api/v1/push
  2008  
  2009  scrape_configs:
  2010    - job_name: syslog
  2011      syslog:
  2012        listen_address: 0.0.0.0:1514
  2013        labels:
  2014          job: "syslog"
  2015      relabel_configs:
  2016        - source_labels: ['__syslog_message_hostname']
  2017          target_label: 'host'
  2018  ```
  2019  
  2020  ## Example Push Config
  2021  
This example starts Promtail as a Push receiver and will accept logs from other Promtail instances or the Docker Logging Driver:
  2023  
  2024  ```yaml
  2025  server:
  2026    http_listen_port: 9080
  2027    grpc_listen_port: 0
  2028  
  2029  positions:
  2030    filename: /tmp/positions.yaml
  2031  
  2032  clients:
  2033    - url: http://ip_or_hostname_where_Loki_run:3100/loki/api/v1/push
  2034  
  2035  scrape_configs:
  2036  - job_name: push1
  2037    loki_push_api:
  2038      server:
  2039        http_listen_port: 3500
  2040        grpc_listen_port: 3600
  2041      labels:
  2042        pushserver: push1
  2043  ```
  2044  
Please note the `job_name` must be provided and must be unique between multiple `loki_push_api` scrape_configs; it will be used to register metrics.
  2046  
A new server instance is created, so the `http_listen_port` and `grpc_listen_port` must be different from those in the Promtail `server` config section (unless it's disabled).
  2048  
  2049  You can set `grpc_listen_port` to `0` to have a random port assigned if not using httpgrpc.
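
On the sending side, another Promtail instance can then point its `clients` section at the receiver, since `loki_push_api` implements the Loki push API (the hostname below is a placeholder):

```yaml
clients:
  - url: http://promtail_push_host:3500/loki/api/v1/push
```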