---
title: Error Handling
---

It's always possible for things to go wrong, so be a good captain and plan ahead.

<div style={{textAlign: 'center'}}><img style={{maxWidth: '300px', marginBottom: '40px'}} src="/img/Blobpirate.svg" /></div>

Benthos supports a range of [processors][processors] such as `http` and `aws_lambda` that have the potential to fail if their retry attempts are exhausted. When this happens the data is not dropped but instead continues through the pipeline mostly unchanged, with a metadata flag added that allows you to handle the errors in a way that suits your needs.

This document outlines common patterns for dealing with errors, such as dropping them, recovering them with more processing, routing them to a dead-letter queue, or any combination thereof.

## Abandon on Failure

It's possible to define a list of processors which should be skipped for messages that failed a previous stage using the [`try` processor][processor.try]:

```yaml
pipeline:
  processors:
    - try:
      - resource: foo
      - resource: bar # Skipped if foo failed
      - resource: baz # Skipped if foo or bar failed
```

## Recover Failed Messages

Failed messages can be fed into their own processor steps with a [`catch` processor][processor.catch]:

```yaml
pipeline:
  processors:
    - resource: foo # Processor that might fail
    - catch:
      - resource: bar # Recover here
```

Once messages finish the catch block they will have their failure flags removed and are treated like regular messages.
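For example, in the following sketch (resource names are placeholders) `baz` runs for every message, including those recovered by the catch block, because their failure flags have already been cleared:

```yaml
pipeline:
  processors:
    - resource: foo # Processor that might fail
    - catch:
      - resource: bar # Runs only for messages that failed foo
    - resource: baz # Runs for all messages, recovered or not
```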
If this behaviour is not desired then it is possible to simulate a catch block with a [`switch` processor][processor.switch]:

```yaml
pipeline:
  processors:
    - resource: foo # Processor that might fail
    - switch:
        - check: errored()
          processors:
            - resource: bar # Recover here
```

## Logging Errors

When an error occurs there will occasionally be useful information stored within the error flag that can be exposed with the interpolation function [`error`][configuration.interpolation]. This allows you to expose that information with processors.

For example, when catching failed processors you can [`log`][processor.log] the messages:

```yaml
pipeline:
  processors:
    - resource: foo # Processor that might fail
    - catch:
        - log:
            message: "Processing failed due to: ${!error()}"
```

Or perhaps augment the message payload with the error message:

```yaml
pipeline:
  processors:
    - resource: foo # Processor that might fail
    - catch:
        - bloblang: |
            root = this
            root.meta.error = error()
```

## Attempt Until Success

It's possible to reattempt a processor for a particular message until it is successful with a [`while`][processor.while] processor:

```yaml
pipeline:
  processors:
    - for_each:
      - while:
          at_least_once: true
          max_loops: 0 # Set this greater than zero to cap the number of attempts
          check: errored()
          processors:
            - catch: [] # Wipe any previous error
            - resource: foo # Attempt this processor until success
```

This loop will block the pipeline and prevent the blocking message from being acknowledged. It is therefore usually a good idea in practice to use the `max_loops` field to set a limit on the number of attempts so that the pipeline can unblock itself without intervention.
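As a sketch of that advice, the loop above could be capped at three attempts, with a trailing catch block deciding what to do with messages that still carry an error (resource names and the log message are placeholders):

```yaml
pipeline:
  processors:
    - for_each:
      - while:
          at_least_once: true
          max_loops: 3 # Give up after three attempts
          check: errored()
          processors:
            - catch: [] # Wipe any previous error
            - resource: foo # Attempt this processor until success
    - catch:
        - log:
            message: "Giving up after retries: ${!error()}"
```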
## Drop Failed Messages

In order to filter out any failed messages from your pipeline you can use a [`bloblang` processor][processor.bloblang]:

```yaml
pipeline:
  processors:
    - bloblang: root = if errored() { deleted() }
```

This will remove any failed messages from a batch. Furthermore, dropping a message will propagate an acknowledgement (also known as "ack") upstream to the pipeline's input.

## Route to a Dead-Letter Queue

It is possible to route failed messages to different destinations using a [`switch` output][output.switch]:

```yaml
output:
  switch:
    cases:
      - check: errored()
        output:
          resource: foo # Dead letter queue

      - output:
          resource: bar # Everything else
```

## Reject Messages

Some inputs such as GCP Pub/Sub and AMQP support rejecting messages, in which case it can sometimes be more efficient to reject messages that have failed processing rather than route them to a dead letter queue. This can be achieved with the [`reject` output][output.reject]:

```yaml
output:
  switch:
    retry_until_success: false
    cases:
      - check: errored()
        output:
          # Reject failed messages
          reject: "Message failed due to: ${! error() }"

      - output:
          resource: bar # Everything else
```

When the source of a rejected message is a sequential input without support for conventional nacks, such as the Kafka or file inputs, a rejected message will be reprocessed from scratch, applying back pressure until it is successfully processed. This can also sometimes be a useful pattern.
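These patterns compose. As a rough sketch (resource names are placeholders), a `switch` processor can log failures without clearing the error flag, leaving it intact for a dead-letter route at the output layer. A `catch` block would not work here, since it wipes the flag the output check depends on:

```yaml
pipeline:
  processors:
    - resource: foo # Processor that might fail
    - switch:
        - check: errored()
          processors:
            - log:
                message: "Processing failed due to: ${!error()}"

output:
  switch:
    cases:
      - check: errored()
        output:
          resource: bar # Dead letter queue

      - output:
          resource: baz # Everything else
```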
[processors]: /docs/components/processors/about
[processor.bloblang]: /docs/components/processors/bloblang
[processor.switch]: /docs/components/processors/switch
[processor.while]: /docs/components/processors/while
[processor.for_each]: /docs/components/processors/for_each
[processor.catch]: /docs/components/processors/catch
[processor.try]: /docs/components/processors/try
[processor.log]: /docs/components/processors/log
[output.switch]: /docs/components/outputs/switch
[output.broker]: /docs/components/outputs/broker
[output.reject]: /docs/components/outputs/reject
[configuration.interpolation]: /docs/configuration/interpolation#bloblang-queries