---
title: kinesis
type: input
status: deprecated
categories: ["Services","AWS"]
---

<!--
     THIS FILE IS AUTOGENERATED!

     To make changes please edit the contents of:
     lib/input/kinesis.go
-->

import Tabs from '@theme/Tabs';
import TabItem from '@theme/TabItem';

:::warning DEPRECATED
This component is deprecated and will be removed in the next major version release. Please consider moving on to [alternative components](#alternatives).
:::

Receive messages from a Kinesis stream.


<Tabs defaultValue="common" values={[
  { label: 'Common', value: 'common', },
  { label: 'Advanced', value: 'advanced', },
]}>

<TabItem value="common">

```yaml
# Common config fields, showing default values
input:
  label: ""
  kinesis:
    stream: ""
    shard: "0"
    client_id: benthos_consumer
    commit_period: 1s
    dynamodb_table: ""
    start_from_oldest: true
    region: eu-west-1
    batching:
      count: 0
      byte_size: 0
      period: ""
      check: ""
```

</TabItem>
<TabItem value="advanced">

```yaml
# All config fields, showing default values
input:
  label: ""
  kinesis:
    stream: ""
    shard: "0"
    client_id: benthos_consumer
    commit_period: 1s
    dynamodb_table: ""
    start_from_oldest: true
    region: eu-west-1
    endpoint: ""
    credentials:
      profile: ""
      id: ""
      secret: ""
      token: ""
      role: ""
      role_external_id: ""
    timeout: 5s
    limit: 100
    batching:
      count: 0
      byte_size: 0
      period: ""
      check: ""
      processors: []
```

</TabItem>
</Tabs>

## Alternatives

This input is being replaced with the shiny new [`aws_kinesis` input](/docs/components/inputs/aws_kinesis), which has improved features; consider trying it out instead.

It's possible to use DynamoDB for persisting shard iterators by setting the table name. Offsets will then be tracked per `client_id` per `shard_id`. When using this mode you should create a table with `namespace` as the primary key and `shard_id` as a sort key.
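
For example, a minimal sketch of a consumer persisting its shard iterator in DynamoDB, assuming a stream named `my_stream` and a table named `benthos_offsets` (both names are illustrative, not defaults):

```yaml
input:
  label: ""
  kinesis:
    stream: my_stream               # assumed stream name
    shard: "0"
    client_id: benthos_consumer
    commit_period: 1s
    dynamodb_table: benthos_offsets # assumed table keyed on namespace (hash) and shard_id (range)
    region: eu-west-1
```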

Use the `batching` fields to configure an optional [batching policy](/docs/configuration/batching#batch-policy). Any other batching mechanism will stall with this input due to its sequential transaction model.
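
For example, the sketch below (the stream name and batch thresholds are illustrative) flushes a batch once 20 records have accumulated or after one second, whichever happens first:

```yaml
input:
  kinesis:
    stream: my_stream # assumed stream name
    region: eu-west-1
    batching:
      count: 20
      period: 1s
```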

## Fields

### `stream`

The Kinesis stream to consume from.


Type: `string`  
Default: `""`  

### `shard`

The shard to consume from.


Type: `string`  
Default: `"0"`  

### `client_id`

The client identifier to assume.


Type: `string`  
Default: `"benthos_consumer"`  

### `commit_period`

The rate at which offset commits should be sent.


Type: `string`  
Default: `"1s"`  

### `dynamodb_table`

A DynamoDB table to use for offset storage.


Type: `string`  
Default: `""`  

### `start_from_oldest`

Whether to consume from the oldest message when an offset does not yet exist for the stream.


Type: `bool`  
Default: `true`  

### `region`

The AWS region to target.


Type: `string`  
Default: `"eu-west-1"`  

### `endpoint`

Allows you to specify a custom endpoint for the AWS API.


Type: `string`  
Default: `""`  
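
This can be handy for targeting a local Kinesis-compatible emulator. A sketch assuming such an emulator is listening on `http://localhost:4566` (the address is an assumption, not a default):

```yaml
input:
  kinesis:
    stream: my_stream                # assumed stream name
    region: eu-west-1
    endpoint: http://localhost:4566  # assumed local emulator address
```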

### `credentials`

Optional manual configuration of AWS credentials to use. More information can be found [in this document](/docs/guides/cloud/aws).


Type: `object`  

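For example, a sketch of assuming a role with static credentials (all values below are placeholders, not real credentials):

```yaml
credentials:
  id: EXAMPLE_KEY_ID         # placeholder access key ID
  secret: EXAMPLE_SECRET_KEY # placeholder secret access key
  role: arn:aws:iam::000000000000:role/benthos-kinesis-reader # placeholder role ARN
```
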
### `credentials.profile`

A profile from `~/.aws/credentials` to use.


Type: `string`  
Default: `""`  

### `credentials.id`

The ID of credentials to use.


Type: `string`  
Default: `""`  

### `credentials.secret`

The secret for the credentials being used.


Type: `string`  
Default: `""`  

### `credentials.token`

The token for the credentials being used, required when using short term credentials.


Type: `string`  
Default: `""`  

### `credentials.role`

A role ARN to assume.


Type: `string`  
Default: `""`  

### `credentials.role_external_id`

An external ID to provide when assuming a role.


Type: `string`  
Default: `""`  

### `timeout`

The period of time to wait before abandoning a request and trying again.


Type: `string`  
Default: `"5s"`  

### `limit`

The maximum number of messages to consume from each request.


Type: `int`  
Default: `100`  

### `batching`

Allows you to configure a [batching policy](/docs/configuration/batching).


Type: `object`  

```yaml
# Examples

batching:
  byte_size: 5000
  count: 0
  period: 1s

batching:
  count: 10
  period: 1s

batching:
  check: this.contains("END BATCH")
  count: 0
  period: 1m
```

### `batching.count`

A number of messages at which the batch should be flushed. If set to `0`, count-based batching is disabled.


Type: `int`  
Default: `0`  

### `batching.byte_size`

A number of bytes at which the batch should be flushed. If set to `0`, size-based batching is disabled.


Type: `int`  
Default: `0`  

### `batching.period`

A period in which an incomplete batch should be flushed regardless of its size.


Type: `string`  
Default: `""`  

```yaml
# Examples

period: 1s

period: 1m

period: 500ms
```

### `batching.check`

A [Bloblang query](/docs/guides/bloblang/about/) that should return a boolean value indicating whether a message should end a batch.


Type: `string`  
Default: `""`  

```yaml
# Examples

check: this.type == "end_of_transaction"
```

### `batching.processors`

A list of [processors](/docs/components/processors/about) to apply to a batch as it is flushed. This allows you to aggregate and archive the batch however you see fit. Please note that all resulting messages are flushed as a single batch; therefore splitting the batch into smaller batches using these processors is a no-op.


Type: `array`  
Default: `[]`  

```yaml
# Examples

processors:
  - archive:
      format: lines

processors:
  - archive:
      format: json_array

processors:
  - merge_json: {}
```
