---
title: kafka_balanced
type: input
status: deprecated
categories: ["Services"]
---

<!--
     THIS FILE IS AUTOGENERATED!

     To make changes please edit the contents of:
     lib/input/kafka_balanced.go
-->

import Tabs from '@theme/Tabs';
import TabItem from '@theme/TabItem';

:::warning DEPRECATED
This component is deprecated and will be removed in the next major version release. Please consider moving on to [alternative components](#alternatives).
:::

Connects to Kafka brokers and consumes topics, sharing partitions automatically
with other consumers of the same consumer group.


<Tabs defaultValue="common" values={[
  { label: 'Common', value: 'common', },
  { label: 'Advanced', value: 'advanced', },
]}>

<TabItem value="common">

```yaml
# Common config fields, showing default values
input:
  label: ""
  kafka_balanced:
    addresses:
      - localhost:9092
    topics:
      - benthos_stream
    client_id: benthos_kafka_input
    consumer_group: benthos_consumer_group
    batching:
      count: 0
      byte_size: 0
      period: ""
      check: ""
```

</TabItem>
<TabItem value="advanced">

```yaml
# All config fields, showing default values
input:
  label: ""
  kafka_balanced:
    addresses:
      - localhost:9092
    tls:
      enabled: false
      skip_cert_verify: false
      enable_renegotiation: false
      root_cas: ""
      root_cas_file: ""
      client_certs: []
    sasl:
      mechanism: ""
      user: ""
      password: ""
      access_token: ""
      token_cache: ""
      token_key: ""
    topics:
      - benthos_stream
    client_id: benthos_kafka_input
    rack_id: ""
    consumer_group: benthos_consumer_group
    start_from_oldest: true
    commit_period: 1s
    max_processing_period: 100ms
    group:
      session_timeout: 10s
      heartbeat_interval: 3s
      rebalance_timeout: 60s
    fetch_buffer_cap: 256
    target_version: 1.0.0
    batching:
      count: 0
      byte_size: 0
      period: ""
      check: ""
      processors: []
```

</TabItem>
</Tabs>

Offsets are managed within Kafka as per the consumer group (set via config), and
partitions are automatically balanced across any members of the consumer group.

Partitions consumed by this input can be processed in parallel, allowing it to
utilise up to N pipeline processing threads and parallel outputs, where N is the
number of partitions allocated to this consumer.
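
For example, if this consumer were allocated four partitions then a pipeline
with four processing threads could work on them concurrently. A minimal sketch
(the mapping shown is only a placeholder):

```yaml
input:
  kafka_balanced:
    addresses:
      - localhost:9092
    topics:
      - benthos_stream
    consumer_group: benthos_consumer_group

pipeline:
  threads: 4 # At most one thread per allocated partition is utilised
  processors:
    - bloblang: root = this # Placeholder passthrough mapping
```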

The `batching` fields allow you to configure a
[batching policy](/docs/configuration/batching#batch-policy) which will be
applied per partition. Any other batching mechanism will stall with this input
due to its sequential transaction model.
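
For example, the following sketch builds batches of up to 100 messages per
partition, flushing any incomplete batch after one second (the values are
illustrative):

```yaml
input:
  kafka_balanced:
    addresses:
      - localhost:9092
    topics:
      - benthos_stream
    consumer_group: benthos_consumer_group
    batching:
      count: 100 # Flush once 100 messages are buffered
      period: 1s # Or after one second, whichever comes first
```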

## Alternatives

The functionality of this input is now covered by the general [`kafka` input](/docs/components/inputs/kafka).
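
As a rough migration sketch, an equivalent balanced consumer using the `kafka`
input might look like the following (check the field names against the
[`kafka` input](/docs/components/inputs/kafka) docs for your Benthos version):

```yaml
input:
  kafka:
    addresses:
      - localhost:9092
    topics:
      - benthos_stream
    consumer_group: benthos_consumer_group
    client_id: benthos_kafka_input
```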

### Metadata

This input adds the following metadata fields to each message:

```text
- kafka_key
- kafka_topic
- kafka_partition
- kafka_offset
- kafka_lag
- kafka_timestamp_unix
- All existing message headers (version 0.11+)
```

The field `kafka_lag` is the calculated difference between the high
water mark offset of the partition at the time of ingestion and the current
message offset.

You can access these metadata fields using
[function interpolation](/docs/configuration/interpolation#metadata).
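
For example, a `bloblang` processor could copy some of these fields into the
payload (a sketch, assuming JSON messages):

```yaml
pipeline:
  processors:
    - bloblang: |
        root = this
        root.topic = meta("kafka_topic")
        root.partition = meta("kafka_partition").number()
        root.lag = meta("kafka_lag").number()
```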

## Fields

### `addresses`

A list of broker addresses to connect to. If an item of the list contains commas it will be expanded into multiple addresses.


Type: `array`  
Default: `["localhost:9092"]`  

```yaml
# Examples

addresses:
  - localhost:9092

addresses:
  - localhost:9041,localhost:9042

addresses:
  - localhost:9041
  - localhost:9042
```

### `tls`

Custom TLS settings can be used to override system defaults.


Type: `object`  

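A sketch of a typical configuration enabling TLS with a custom CA and a client
certificate (the file paths are illustrative):

```yaml
tls:
  enabled: true
  root_cas_file: ./root_cas.pem
  client_certs:
    - cert_file: ./example.pem
      key_file: ./example.key
```
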
### `tls.enabled`

Whether custom TLS settings are enabled.


Type: `bool`  
Default: `false`  

### `tls.skip_cert_verify`

Whether to skip server side certificate verification.


Type: `bool`  
Default: `false`  

### `tls.enable_renegotiation`

Whether to allow the remote server to repeatedly request renegotiation. Enable this option if you're seeing the error message `local error: tls: no renegotiation`.


Type: `bool`  
Default: `false`  
Requires version 3.45.0 or newer  

### `tls.root_cas`

An optional root certificate authority to use. This is a string, representing a certificate chain from the parent trusted root certificate, to possible intermediate signing certificates, to the host certificate.


Type: `string`  
Default: `""`  

```yaml
# Examples

root_cas: |-
  -----BEGIN CERTIFICATE-----
  ...
  -----END CERTIFICATE-----
```

### `tls.root_cas_file`

An optional path of a root certificate authority file to use. This is a file, often with a .pem extension, containing a certificate chain from the parent trusted root certificate, to possible intermediate signing certificates, to the host certificate.


Type: `string`  
Default: `""`  

```yaml
# Examples

root_cas_file: ./root_cas.pem
```

### `tls.client_certs`

A list of client certificates to use. For each certificate either the fields `cert` and `key`, or `cert_file` and `key_file` should be specified, but not both.


Type: `array`  
Default: `[]`  

```yaml
# Examples

client_certs:
  - cert: foo
    key: bar

client_certs:
  - cert_file: ./example.pem
    key_file: ./example.key
```

### `tls.client_certs[].cert`

A plain text certificate to use.


Type: `string`  
Default: `""`  

### `tls.client_certs[].key`

A plain text certificate key to use.


Type: `string`  
Default: `""`  

### `tls.client_certs[].cert_file`

The path to a certificate to use.


Type: `string`  
Default: `""`  

### `tls.client_certs[].key_file`

The path of a certificate key to use.


Type: `string`  
Default: `""`  

### `sasl`

Enables SASL authentication.


Type: `object`  

### `sasl.mechanism`

The SASL authentication mechanism to use. If left empty, SASL authentication is not used. Warning: SCRAM based methods within Benthos have not received a security audit.


Type: `string`  
Default: `""`  

| Option | Summary |
|---|---|
| `PLAIN` | Plain text authentication. NOTE: When using plain text auth it is extremely likely that you'll also need to [enable TLS](#tlsenabled). |
| `OAUTHBEARER` | OAuth Bearer based authentication. |
| `SCRAM-SHA-256` | Authentication using the SCRAM-SHA-256 mechanism. |
| `SCRAM-SHA-512` | Authentication using the SCRAM-SHA-512 mechanism. |

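For example, SCRAM authentication with credentials drawn from environment
variables might look like this sketch (remember to also [enable TLS](#tlsenabled)
where the broker requires it):

```yaml
sasl:
  mechanism: SCRAM-SHA-256
  user: ${USER}
  password: ${PASSWORD}
```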

### `sasl.user`

A `PLAIN` username. It is recommended that you use environment variables to populate this field.


Type: `string`  
Default: `""`  

```yaml
# Examples

user: ${USER}
```

### `sasl.password`

A `PLAIN` password. It is recommended that you use environment variables to populate this field.


Type: `string`  
Default: `""`  

```yaml
# Examples

password: ${PASSWORD}
```

### `sasl.access_token`

A static `OAUTHBEARER` access token.


Type: `string`  
Default: `""`  

### `sasl.token_cache`

Instead of using a static `access_token`, this allows you to query a [`cache`](/docs/components/caches/about) resource to fetch `OAUTHBEARER` tokens from.


Type: `string`  
Default: `""`  

### `sasl.token_key`

The key to query the cache with for tokens. Required when using a `token_cache`.


Type: `string`  
Default: `""`  
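
A sketch of fetching `OAUTHBEARER` tokens from a cache resource (the resource
name `oauth_tokens` and the key are illustrative and must correspond to a
configured [`cache`](/docs/components/caches/about) resource):

```yaml
sasl:
  mechanism: OAUTHBEARER
  token_cache: oauth_tokens
  token_key: benthos_oauth_token
```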

### `topics`

A list of topics to consume from. If an item of the list contains commas it will be expanded into multiple topics.


Type: `array`  
Default: `["benthos_stream"]`  
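
For example, since comma separated items are expanded, the following two forms
are equivalent (the topic names are illustrative):

```yaml
topics:
  - foo,bar

topics:
  - foo
  - bar
```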

### `client_id`

An identifier for the client connection.


Type: `string`  
Default: `"benthos_kafka_input"`  

### `rack_id`

A rack identifier for this client.


Type: `string`  
Default: `""`  

### `consumer_group`

An identifier for the consumer group of the connection.


Type: `string`  
Default: `"benthos_consumer_group"`  

### `start_from_oldest`

Determines whether to consume from the oldest available offset when a committed offset is not found for a topic partition; otherwise messages are consumed from the latest offset.


Type: `bool`  
Default: `true`  

### `commit_period`

The period of time between each commit of the current partition offsets. Offsets are always committed during shutdown.


Type: `string`  
Default: `"1s"`  

### `max_processing_period`

A maximum estimate for the time taken to process a message. This is used for tuning consumer group synchronization.


Type: `string`  
Default: `"100ms"`  

### `group`

Tuning parameters for consumer group synchronization.


Type: `object`  

### `group.session_timeout`

The period after which a consumer of the group is kicked if no heartbeats are received from it.


Type: `string`  
Default: `"10s"`  

### `group.heartbeat_interval`

The interval at which heartbeats should be sent out.


Type: `string`  
Default: `"3s"`  

### `group.rebalance_timeout`

A period after which rebalancing is abandoned if unresolved.


Type: `string`  
Default: `"60s"`  
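
As an illustrative sketch, a slow processing pipeline sometimes warrants more
generous timeouts (the values here are arbitrary and should be tuned against
your broker settings):

```yaml
group:
  session_timeout: 30s
  heartbeat_interval: 5s
  rebalance_timeout: 120s
```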

### `fetch_buffer_cap`

The maximum number of unprocessed messages to fetch at a given time.


Type: `int`  
Default: `256`  

### `target_version`

The version of the Kafka protocol to use.


Type: `string`  
Default: `"1.0.0"`  

### `batching`

Allows you to configure a [batching policy](/docs/configuration/batching).


Type: `object`  

```yaml
# Examples

batching:
  byte_size: 5000
  count: 0
  period: 1s

batching:
  count: 10
  period: 1s

batching:
  check: this.contains("END BATCH")
  count: 0
  period: 1m
```

### `batching.count`

The number of messages at which the batch should be flushed. Set to `0` to disable count based batching.


Type: `int`  
Default: `0`  

### `batching.byte_size`

The number of bytes at which the batch should be flushed. Set to `0` to disable size based batching.


Type: `int`  
Default: `0`  

### `batching.period`

A period in which an incomplete batch should be flushed regardless of its size.


Type: `string`  
Default: `""`  

```yaml
# Examples

period: 1s

period: 1m

period: 500ms
```

### `batching.check`

A [Bloblang query](/docs/guides/bloblang/about/) that should return a boolean value indicating whether a message should end a batch.


Type: `string`  
Default: `""`  

```yaml
# Examples

check: this.type == "end_of_transaction"
```

### `batching.processors`

A list of [processors](/docs/components/processors/about) to apply to a batch as it is flushed. This allows you to aggregate and archive the batch however you see fit. Please note that all resulting messages are flushed as a single batch, therefore splitting the batch into smaller batches using these processors is a no-op.


Type: `array`  
Default: `[]`  

```yaml
# Examples

processors:
  - archive:
      format: lines

processors:
  - archive:
      format: json_array

processors:
  - merge_json: {}
```