---
title: "Cross Post: We're Bringing Simple Back (to Streaming)"
author: Ashley Jeffs
author_url: https://github.com/Jeffail
author_image_url: /img/ash.jpg
keywords: [
    "redpanda",
    "stream processing",
    "kafka",
]
tags: []
---

(Cross-posted with [https://vectorized.io/blog/benthos/](https://vectorized.io/blog/benthos/))

Combining the power of Redpanda and Benthos for your streaming needs is so simple that this blog post is almost over already.

<!--truncate-->

[Benthos](https://www.benthos.dev/) is an open source stream processor that provides data mapping, filtering, hydration and enrichment capabilities across a wide range of connectors. It is driven by a minimal, declarative configuration spec, and with a transaction-based architecture it eliminates the development effort of building resilient stream processing pipelines.

Likewise, with its simplicity and high performance, Redpanda eliminates the operational effort of data persistence and availability by providing a Kafka-compatible streaming platform without the moving parts.

With so much taken care of, you're well in for a boring, uneventful time when you combine the two. Make sure you've grabbed a copy of both services; full instructions can be found in the [getting started guide for Benthos](https://www.benthos.dev/docs/guides/getting_started) and [the Redpanda docs](https://vectorized.io/docs). In this post we'll be running them with Docker, so we'll start by pulling both images:

```
docker pull vectorized/redpanda:latest
docker pull jeffail/benthos:latest
```

We can then create a new network for the services to connect over:

```
docker network create -d bridge redpandanet
```

Next, run Redpanda in the background; we'll go with a single node for now:

```
docker run -d \
  --network redpandanet \
  --name redpanda \
  -p 9092:9092 \
  vectorized/redpanda redpanda start \
  --reserve-memory 0M \
  --overprovisioned \
  --smp 1 \
  --memory 1G \
  --advertise-kafka-addr redpanda:9092
```
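
Before pointing Benthos at it, it doesn't hurt to check that the broker actually came up. The container logs are the quickest signal, and `rpk` (bundled in the Redpanda image) can pre-create the topic we'll be using. Both steps are optional, and rpk subcommands have shifted between releases, so treat this as a rough sketch:

```
# Tail the broker logs to confirm it started cleanly
docker logs --tail 20 redpanda

# Optionally pre-create the topic used below (rpk syntax varies by version)
docker exec -it redpanda rpk topic create topic_A
```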

In order to send data to Redpanda with Benthos we'll need to create a config. Starting off with a simple stdin-to-Kafka pipeline, copy the following config into a file `producer.yaml`:

```yaml
input:
  stdin: {}

output:
  kafka:
    addresses: [ redpanda:9092 ]
    topic: topic_A
```

Pro tip: You can also use Benthos itself to generate a config like this with `docker run --rm jeffail/benthos create stdin//kafka > ./producer.yaml`.
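
Another optional sanity check: the `benthos lint` subcommand parses a config and reports any unrecognised or malformed fields before you run anything for real. Mounting the file the same way we're about to for the actual run, it looks something like this:

```
docker run --rm \
  -v $(pwd)/producer.yaml:/benthos.yaml \
  jeffail/benthos lint /benthos.yaml
```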

And now run Benthos by mounting the config as a Docker volume, along with a pseudo-TTY for writing our messages:

```
docker run --rm -it \
  --network redpandanet \
  -v $(pwd)/producer.yaml:/benthos.yaml \
  jeffail/benthos
```

This will open an interactive shell where you can write in some data to send. Benthos will gobble up anything you throw at it; try mixing structured and unstructured messages, ending each message with a newline:

```
{"id":"1","data":"a structured message"}
but this here ain't structured at all!
[{"id":"2"},"also structured in a different (but totally valid) way"]
```

When you're finished, hit CTRL+C and it'll exit.

Next, let's try reading that data back out from Redpanda. This time let's also add a [processor](https://www.benthos.dev/docs/components/processors/about) in order to mutate our data. Copy the following into a file `consumer.yaml`:

```yaml
input:
  kafka:
    addresses: [ redpanda:9092 ]
    topics: [ topic_A ]
    consumer_group: example_group

pipeline:
  processors:
    - bloblang: |
        root.doc = this | content().string()
        root.length = content().length()
        root.topic = meta("kafka_topic")

output:
  stdout: {}
```

Now run it with our new config, this time without the pseudo-TTY:

```
docker run --rm \
  --network redpandanet \
  -v $(pwd)/consumer.yaml:/benthos.yaml \
  jeffail/benthos
```

Now you should see it print mutated versions of your messages to stdout:

```json
{"doc":{"data":"a structured message","id":"1"},"length":40,"topic":"topic_A"}
{"doc":"but this here ain't structured at all!","length":38,"topic":"topic_A"}
{"doc":[{"id":"2"},"also structured in a different (but totally valid) way"],"length":69,"topic":"topic_A"}
```

The [Bloblang processor](https://www.benthos.dev/docs/components/processors/bloblang) in our consumer config has remapped the original message to a new field `doc`, first attempting to extract it as a structured document, but falling back to a stringified version of it when it's unstructured. We've also added a field `length` which contains the length of the original message, and `topic` which contains the Kafka topic the message was consumed from.
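
If you want to poke at a mapping like this without a broker in the loop, the `benthos blobl` subcommand applies a Bloblang mapping to each line fed over stdin. Metadata such as `kafka_topic` only exists when consuming from Kafka, so this sketch drops the `topic` assignment and just demonstrates the structured/unstructured fallback:

```
# Should print something like {"doc":{"data":"a structured message","id":"1"},"length":40}
echo '{"id":"1","data":"a structured message"}' | \
  docker run --rm -i jeffail/benthos blobl \
    'root.doc = this | content().string()
root.length = content().length()'
```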

That's it for now. If you're still hungry for more then check out the Benthos website at [https://www.benthos.dev](https://www.benthos.dev/), and you can learn more about the Benthos mapping language Bloblang [in this guide](https://www.benthos.dev/docs/guides/bloblang/about).