---
title: broker
type: input
status: stable
categories: ["Utility"]
---

<!--
     THIS FILE IS AUTOGENERATED!

     To make changes please edit the contents of:
     lib/input/broker.go
-->

import Tabs from '@theme/Tabs';
import TabItem from '@theme/TabItem';


Allows you to combine multiple inputs into a single stream of data, where each input will be read in parallel.


<Tabs defaultValue="common" values={[
  { label: 'Common', value: 'common', },
  { label: 'Advanced', value: 'advanced', },
]}>

<TabItem value="common">

```yaml
# Common config fields, showing default values
input:
  label: ""
  broker:
    inputs: []
    batching:
      count: 0
      byte_size: 0
      period: ""
      check: ""
```

</TabItem>
<TabItem value="advanced">

```yaml
# All config fields, showing default values
input:
  label: ""
  broker:
    copies: 1
    inputs: []
    batching:
      count: 0
      byte_size: 0
      period: ""
      check: ""
      processors: []
```

</TabItem>
</Tabs>

A broker type is configured with its own list of input configurations, along with a field specifying how many copies of that list should be created.

Adding more input types allows you to combine streams from multiple sources into one. For example, reading from both RabbitMQ and Kafka:

```yaml
input:
  broker:
    copies: 1
    inputs:
      - amqp_0_9:
          url: amqp://guest:guest@localhost:5672/
          consumer_tag: benthos-consumer
          queue: benthos-queue

        # Optional list of input specific processing steps
        processors:
          - bloblang: |
              root.message = this
              root.meta.link_count = this.links.length()
              root.user.age = this.user.age.number()

      - kafka:
          addresses:
            - localhost:9092
          client_id: benthos_kafka_input
          consumer_group: benthos_consumer_group
          topics: [ benthos_stream:0 ]
```

The `copies` field determines how many times the list of inputs is replicated. For example, if your inputs were of type `foo` and `bar`, with `copies` set to `2`, you would end up with two `foo` inputs and two `bar` inputs.
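
For instance, the following sketch (the NATS address, subject, and queue names here are illustrative) creates two identical consumers of the same subject by setting `copies` to `2`:

```yaml
# Both copies join the same queue group, so the two resulting
# inputs share the workload of the subject.
input:
  broker:
    copies: 2
    inputs:
      - nats:
          urls: [ nats://localhost:4222 ]
          subject: benthos_subject
          queue: benthos_queue
```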

### Batching

It's possible to configure a [batch policy](/docs/configuration/batching#batch-policy)
with a broker using the `batching` fields. When doing so, the feeds
from all child inputs are combined. Some inputs do not support broker-based
batching and specify this in their documentation.
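
For example, the following sketch (topic, address, and consumer group values are hypothetical) combines two Kafka topics and flushes a shared batch at 50 messages or after 500ms, whichever comes first:

```yaml
input:
  broker:
    inputs:
      - kafka:
          addresses: [ localhost:9092 ]
          topics: [ topic_a ]
          consumer_group: example_group
      - kafka:
          addresses: [ localhost:9092 ]
          topics: [ topic_b ]
          consumer_group: example_group
    # Messages from both inputs feed into the same batching policy.
    batching:
      count: 50
      period: 500ms
```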

### Processors

It is possible to configure [processors](/docs/components/processors/about) at
the broker level, where they will be applied to _all_ child inputs, as well as
on the individual child inputs. If you have processors at both the broker level
_and_ on child inputs then the broker processors will be applied _after_ the
child inputs' processors.
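
As an illustrative sketch of that ordering (the mappings are hypothetical), a message produced by the child input below passes through its own `bloblang` processor first and the broker-level processor second:

```yaml
input:
  broker:
    inputs:
      - generate:
          mapping: root.value = 1
        # Applied first, to messages from this input only.
        processors:
          - bloblang: root.value = this.value + 1
  # Applied second, to messages from all child inputs.
  processors:
    - bloblang: root.value = this.value * 10
```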

## Fields

### `copies`

Whatever is specified within `inputs` will be created this many times.


Type: `int`  
Default: `1`  

### `inputs`

A list of inputs to create.


Type: `array`  
Default: `[]`  

### `batching`

Allows you to configure a [batching policy](/docs/configuration/batching).


Type: `object`  

```yaml
# Examples

batching:
  byte_size: 5000
  count: 0
  period: 1s

batching:
  count: 10
  period: 1s

batching:
  check: this.contains("END BATCH")
  count: 0
  period: 1m
```

### `batching.count`

A number of messages at which the batch should be flushed. A value of `0` disables count-based batching.


Type: `int`  
Default: `0`  

### `batching.byte_size`

An amount of bytes at which the batch should be flushed. A value of `0` disables size-based batching.


Type: `int`  
Default: `0`  

### `batching.period`

A period in which an incomplete batch should be flushed regardless of its size.


Type: `string`  
Default: `""`  

```yaml
# Examples

period: 1s

period: 1m

period: 500ms
```

### `batching.check`

A [Bloblang query](/docs/guides/bloblang/about/) that should return a boolean value indicating whether a message should end a batch.


Type: `string`  
Default: `""`  

```yaml
# Examples

check: this.type == "end_of_transaction"
```

### `batching.processors`

A list of [processors](/docs/components/processors/about) to apply to a batch as it is flushed. This allows you to aggregate and archive the batch however you see fit. Please note that all resulting messages are flushed as a single batch, so splitting the batch into smaller batches using these processors is a no-op.


Type: `array`  
Default: `[]`  

```yaml
# Examples

processors:
  - archive:
      format: lines

processors:
  - archive:
      format: json_array

processors:
  - merge_json: {}
```