---
title: memory
type: buffer
status: stable
categories: ["Utility"]
---

<!--
     THIS FILE IS AUTOGENERATED!

     To make changes please edit the contents of:
     lib/buffer/memory.go
-->

import Tabs from '@theme/Tabs';
import TabItem from '@theme/TabItem';

Stores consumed messages in memory and acknowledges them at the input level. During shutdown Benthos will make a best attempt at flushing all remaining messages before exiting cleanly.

<Tabs defaultValue="common" values={[
  { label: 'Common', value: 'common', },
  { label: 'Advanced', value: 'advanced', },
]}>

<TabItem value="common">

```yaml
# Common config fields, showing default values
buffer:
  memory:
    limit: 524288000
    batch_policy:
      enabled: false
      count: 0
      byte_size: 0
      period: ""
      check: ""
```

</TabItem>
<TabItem value="advanced">

```yaml
# All config fields, showing default values
buffer:
  memory:
    limit: 524288000
    batch_policy:
      enabled: false
      count: 0
      byte_size: 0
      period: ""
      check: ""
      processors: []
```

</TabItem>
</Tabs>

This buffer is appropriate when consuming messages from inputs that do not gracefully handle back pressure and where delivery guarantees aren't critical.

This buffer has a configurable limit, where consumption will be stopped with back pressure upstream if the total size of messages in the buffer reaches this amount. Since this calculation is only an estimate, and the real size of messages in RAM is always higher, it is recommended to set the limit significantly below the amount of RAM available.

## Delivery Guarantees

This buffer intentionally weakens the delivery guarantees of the pipeline and therefore should never be used in places where data loss is unacceptable.
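As an illustrative sketch of that trade-off, the buffer can sit between an input that cannot throttle its producers and a slower downstream output. The `http_server` input, `kafka` output, broker address, and topic name below are assumptions chosen for the example, not requirements of this buffer:

```yaml
# Hypothetical pipeline: input and output components are placeholders.
input:
  http_server:
    path: /post # clients POSTing here cannot be throttled gracefully
buffer:
  memory:
    limit: 104857600 # 100MiB, set well below the RAM actually available
output:
  kafka:
    addresses: [ "localhost:9092" ] # assumed local broker
    topic: benthos_stream
```

If the Kafka output slows down, messages accumulate in memory up to `limit`, after which back pressure is applied upstream; messages held only in this buffer are lost if the process crashes.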
## Batching

It is possible to batch up messages sent from this buffer using a [batch policy](/docs/configuration/batching#batch-policy).

## Fields

### `limit`

The maximum buffer size (in bytes) to allow before applying back pressure upstream.


Type: `int`
Default: `524288000`

### `batch_policy`

Optionally configure a policy to flush buffered messages in batches.


Type: `object`

### `batch_policy.enabled`

Whether to batch messages as they are flushed.


Type: `bool`
Default: `false`

### `batch_policy.count`

A number of messages at which the batch should be flushed. A value of `0` disables count-based batching.


Type: `int`
Default: `0`

### `batch_policy.byte_size`

A number of bytes at which the batch should be flushed. A value of `0` disables size-based batching.


Type: `int`
Default: `0`

### `batch_policy.period`

A period after which an incomplete batch should be flushed regardless of its size.


Type: `string`
Default: `""`

```yaml
# Examples

period: 1s

period: 1m

period: 500ms
```

### `batch_policy.check`

A [Bloblang query](/docs/guides/bloblang/about/) that should return a boolean value indicating whether a message should end a batch.


Type: `string`
Default: `""`

```yaml
# Examples

check: this.type == "end_of_transaction"
```

### `batch_policy.processors`

A list of [processors](/docs/components/processors/about) to apply to a batch as it is flushed. This allows you to aggregate and archive the batch however you see fit. Please note that all resulting messages are flushed as a single batch, therefore splitting the batch into smaller batches using these processors is a no-op.


Type: `array`
Default: `[]`

```yaml
# Examples

processors:
  - archive:
      format: lines

processors:
  - archive:
      format: json_array

processors:
  - merge_json: {}
```
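Putting the fields above together, a sketch of a combined policy (the specific thresholds are illustrative assumptions) that flushes a batch once it reaches 100 messages or roughly 1MB, or after 5 seconds, whichever comes first, archiving each flushed batch as a single JSON array:

```yaml
buffer:
  memory:
    limit: 524288000
    batch_policy:
      enabled: true
      count: 100         # flush at 100 messages...
      byte_size: 1000000 # ...or ~1MB of payload...
      period: 5s         # ...or after 5s, whichever comes first
      processors:
        - archive:
            format: json_array # the flushed batch becomes one JSON array message
```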