# kafka-go [![CircleCI](https://circleci.com/gh/segmentio/kafka-go.svg?style=shield)](https://circleci.com/gh/segmentio/kafka-go) [![Go Report Card](https://goreportcard.com/badge/github.com/deanMdreon/kafka-go)](https://goreportcard.com/report/github.com/deanMdreon/kafka-go) [![GoDoc](https://godoc.org/github.com/deanMdreon/kafka-go?status.svg)](https://godoc.org/github.com/deanMdreon/kafka-go)

## Motivations

We rely on both Go and Kafka a lot at Segment. Unfortunately, the state of the Go
client libraries for Kafka at the time of this writing was not ideal. The available
options were:

- [sarama](https://github.com/Shopify/sarama), which is by far the most popular
  but is quite difficult to work with. It is poorly documented, the API exposes
  low-level concepts of the Kafka protocol, and it doesn't support recent Go features
  like [contexts](https://golang.org/pkg/context/). It also passes all values as
  pointers, which causes large numbers of dynamic memory allocations, more frequent
  garbage collections, and higher memory usage.

- [confluent-kafka-go](https://github.com/confluentinc/confluent-kafka-go) is a
  cgo-based wrapper around [librdkafka](https://github.com/edenhill/librdkafka),
  which means it introduces a dependency on a C library in all Go code that uses
  the package. It has much better documentation than sarama but still lacks support
  for Go contexts.

- [goka](https://github.com/lovoo/goka) is a more recent Kafka client for Go
  which focuses on a specific usage pattern. It provides abstractions for using Kafka
  as a message-passing bus between services rather than an ordered log of events, but
  this is not the typical use case of Kafka for us at Segment. The package also
  depends on sarama for all interactions with Kafka.

This is where `kafka-go` comes into play. It provides both low and high level
APIs for interacting with Kafka, mirroring concepts and implementing interfaces of
the Go standard library to make it easy to use and integrate with existing
software.

#### Note:

In order to better align with our newly adopted Code of Conduct, the kafka-go
project has renamed our default branch to `main`. For the full details of our
Code Of Conduct see [this](./CODE_OF_CONDUCT.md) document.

## Kafka versions

`kafka-go` is currently tested with Kafka versions 0.10.1.0 to 2.7.1.
While it should also be compatible with later versions, newer features available
in the Kafka API may not yet be implemented in the client.

## Go versions

`kafka-go` requires Go version 1.15 or later.

## Connection [![GoDoc](https://godoc.org/github.com/deanMdreon/kafka-go?status.svg)](https://godoc.org/github.com/deanMdreon/kafka-go#Conn)

The `Conn` type is the core of the `kafka-go` package. It wraps around a raw
network connection to expose a low-level API to a Kafka server.

Here are some examples showing typical use of a connection object:

```go
// to produce messages
topic := "my-topic"
partition := 0

conn, err := kafka.DialLeader(context.Background(), "tcp", "localhost:9092", topic, partition)
if err != nil {
    log.Fatal("failed to dial leader:", err)
}

conn.SetWriteDeadline(time.Now().Add(10*time.Second))
_, err = conn.WriteMessages(
    kafka.Message{Value: []byte("one!")},
    kafka.Message{Value: []byte("two!")},
    kafka.Message{Value: []byte("three!")},
)
if err != nil {
    log.Fatal("failed to write messages:", err)
}

if err := conn.Close(); err != nil {
    log.Fatal("failed to close writer:", err)
}
```

```go
// to consume messages
topic := "my-topic"
partition := 0

conn, err := kafka.DialLeader(context.Background(), "tcp", "localhost:9092", topic, partition)
if err != nil {
    log.Fatal("failed to dial leader:", err)
}

conn.SetReadDeadline(time.Now().Add(10*time.Second))
batch := conn.ReadBatch(10e3, 1e6) // fetch 10KB min, 1MB max

b := make([]byte, 10e3) // 10KB max per message
for {
    n, err := batch.Read(b)
    if err != nil {
        break
    }
    fmt.Println(string(b[:n]))
}

if err := batch.Close(); err != nil {
    log.Fatal("failed to close batch:", err)
}

if err := conn.Close(); err != nil {
    log.Fatal("failed to close connection:", err)
}
```
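
If you need per-message metadata such as offsets and keys rather than raw value
bytes, a batch also exposes `ReadMessage`; a minimal sketch of the same consume
loop (assuming `conn` is an open connection as above):

```go
conn.SetReadDeadline(time.Now().Add(10*time.Second))
batch := conn.ReadBatch(10e3, 1e6) // fetch 10KB min, 1MB max

for {
    // ReadMessage returns a full kafka.Message instead of raw value bytes
    msg, err := batch.ReadMessage()
    if err != nil {
        break
    }
    fmt.Printf("offset %d: %s = %s\n", msg.Offset, string(msg.Key), string(msg.Value))
}

if err := batch.Close(); err != nil {
    log.Fatal("failed to close batch:", err)
}
```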

### To Create Topics

By default, Kafka has `auto.create.topics.enable='true'` (`KAFKA_AUTO_CREATE_TOPICS_ENABLE='true'` in the wurstmeister/kafka docker image). If this value is set to `'true'`, topics will be created as a side effect of `kafka.DialLeader` like so:

```go
// to create topics when auto.create.topics.enable='true'
conn, err := kafka.DialLeader(context.Background(), "tcp", "localhost:9092", "my-topic", 0)
if err != nil {
    panic(err.Error())
}
```

If `auto.create.topics.enable='false'` then you will need to create topics explicitly like so:

```go
// to create topics when auto.create.topics.enable='false'
topic := "my-topic"

conn, err := kafka.Dial("tcp", "localhost:9092")
if err != nil {
    panic(err.Error())
}
defer conn.Close()

controller, err := conn.Controller()
if err != nil {
    panic(err.Error())
}
var controllerConn *kafka.Conn
controllerConn, err = kafka.Dial("tcp", net.JoinHostPort(controller.Host, strconv.Itoa(controller.Port)))
if err != nil {
    panic(err.Error())
}
defer controllerConn.Close()

topicConfigs := []kafka.TopicConfig{
    {
        Topic:             topic,
        NumPartitions:     1,
        ReplicationFactor: 1,
    },
}

err = controllerConn.CreateTopics(topicConfigs...)
if err != nil {
    panic(err.Error())
}
```

### To Connect To Leader Via a Non-leader Connection

```go
// to connect to the kafka leader via an existing non-leader connection rather than using DialLeader
conn, err := kafka.Dial("tcp", "localhost:9092")
if err != nil {
    panic(err.Error())
}
defer conn.Close()
controller, err := conn.Controller()
if err != nil {
    panic(err.Error())
}
var connLeader *kafka.Conn
connLeader, err = kafka.Dial("tcp", net.JoinHostPort(controller.Host, strconv.Itoa(controller.Port)))
if err != nil {
    panic(err.Error())
}
defer connLeader.Close()
```

### To list topics

```go
conn, err := kafka.Dial("tcp", "localhost:9092")
if err != nil {
    panic(err.Error())
}
defer conn.Close()

partitions, err := conn.ReadPartitions()
if err != nil {
    panic(err.Error())
}

m := map[string]struct{}{}

for _, p := range partitions {
    m[p.Topic] = struct{}{}
}
for k := range m {
    fmt.Println(k)
}
```

Because it is low level, the `Conn` type turns out to be a great building block
for higher level abstractions, like the `Reader` for example.

## Reader [![GoDoc](https://godoc.org/github.com/deanMdreon/kafka-go?status.svg)](https://godoc.org/github.com/deanMdreon/kafka-go#Reader)

A `Reader` is another concept exposed by the `kafka-go` package, which intends
to make it simpler to implement the typical use case of consuming from a single
topic-partition pair.
A `Reader` also automatically handles reconnections and offset management, and
exposes an API that supports asynchronous cancellations and timeouts using Go
contexts.

Note that it is important to call `Close()` on a `Reader` when a process exits.
The kafka server needs a graceful disconnect to stop it from continuing to
attempt to send messages to the connected clients. The given example will not
call `Close()` if the process is terminated with SIGINT (ctrl-c at the shell) or
SIGTERM (as docker stop or a kubernetes restart does). This can result in a
delay when a new reader on the same topic connects (e.g. new process started
or new container running). Use a `signal.Notify` handler to close the reader on
process shutdown.
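
One way to wire that up (a minimal sketch, assuming a reader `r` like the one
created in the example below):

```go
// close the reader when the process receives SIGINT or SIGTERM
sigs := make(chan os.Signal, 1)
signal.Notify(sigs, syscall.SIGINT, syscall.SIGTERM)
go func() {
    <-sigs
    // Close makes any blocked ReadMessage call return an error,
    // letting the consume loop exit cleanly.
    if err := r.Close(); err != nil {
        log.Println("failed to close reader:", err)
    }
}()
```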

```go
// make a new reader that consumes from topic-A, partition 0, at offset 42
r := kafka.NewReader(kafka.ReaderConfig{
    Brokers:   []string{"localhost:9092"},
    Topic:     "topic-A",
    Partition: 0,
    MinBytes:  10e3, // 10KB
    MaxBytes:  10e6, // 10MB
})
r.SetOffset(42)

for {
    m, err := r.ReadMessage(context.Background())
    if err != nil {
        break
    }
    fmt.Printf("message at offset %d: %s = %s\n", m.Offset, string(m.Key), string(m.Value))
}

if err := r.Close(); err != nil {
    log.Fatal("failed to close reader:", err)
}
```
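
Because every read takes a context, reads can also be bounded with a timeout or
canceled from another goroutine; a minimal sketch:

```go
// give up if no message arrives within 5 seconds
ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
defer cancel()

m, err := r.ReadMessage(ctx)
if err != nil {
    // on timeout, the returned error reflects the expired context
    log.Println("failed to read message:", err)
    return
}
fmt.Printf("message at offset %d: %s = %s\n", m.Offset, string(m.Key), string(m.Value))
```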

### Consumer Groups

`kafka-go` also supports Kafka consumer groups including broker managed offsets.
To enable consumer groups, simply specify the `GroupID` in the `ReaderConfig`.

`ReadMessage` automatically commits offsets when using consumer groups.

```go
// make a new reader that consumes from topic-A
r := kafka.NewReader(kafka.ReaderConfig{
    Brokers:  []string{"localhost:9092"},
    GroupID:  "consumer-group-id",
    Topic:    "topic-A",
    MinBytes: 10e3, // 10KB
    MaxBytes: 10e6, // 10MB
})

for {
    m, err := r.ReadMessage(context.Background())
    if err != nil {
        break
    }
    fmt.Printf("message at topic/partition/offset %v/%v/%v: %s = %s\n", m.Topic, m.Partition, m.Offset, string(m.Key), string(m.Value))
}

if err := r.Close(); err != nil {
    log.Fatal("failed to close reader:", err)
}
```

There are a number of limitations when using consumer groups (see the sketch
after this list):

- `(*Reader).SetOffset` will return an error when GroupID is set
- `(*Reader).Offset` will always return `-1` when GroupID is set
- `(*Reader).Lag` will always return `-1` when GroupID is set
- `(*Reader).ReadLag` will return an error when GroupID is set
- `(*Reader).Stats` will return a partition of `-1` when GroupID is set
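
For example, a minimal sketch (assuming `r` was created with a GroupID set):

```go
// offsets are managed by the group coordinator, so manual seeking is rejected
if err := r.SetOffset(42); err != nil {
    log.Println("SetOffset returns an error when GroupID is set:", err)
}

fmt.Println(r.Offset()) // always -1 when GroupID is set
fmt.Println(r.Lag())    // always -1 when GroupID is set
```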

### Explicit Commits

`kafka-go` also supports explicit commits. Instead of calling `ReadMessage`,
call `FetchMessage` followed by `CommitMessages`.

```go
ctx := context.Background()
for {
    m, err := r.FetchMessage(ctx)
    if err != nil {
        break
    }
    fmt.Printf("message at topic/partition/offset %v/%v/%v: %s = %s\n", m.Topic, m.Partition, m.Offset, string(m.Key), string(m.Value))
    if err := r.CommitMessages(ctx, m); err != nil {
        log.Fatal("failed to commit messages:", err)
    }
}
```

When committing messages in consumer groups, the message with the highest offset
for a given topic/partition determines the value of the committed offset for
that partition. For example, if messages at offsets 1, 2, and 3 of a single
partition were retrieved by calls to `FetchMessage`, calling `CommitMessages`
with the message at offset 3 will also result in committing the messages at
offsets 1 and 2 for that partition.
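
This means commits can be batched rather than issued per message; a minimal
sketch (the batch size of 100 is an arbitrary choice for illustration):

```go
ctx := context.Background()
msgs := make([]kafka.Message, 0, 100)

for {
    m, err := r.FetchMessage(ctx)
    if err != nil {
        break
    }
    msgs = append(msgs, m)

    if len(msgs) == cap(msgs) {
        // committing the whole batch records the highest offset per
        // partition, which implicitly commits every earlier message
        if err := r.CommitMessages(ctx, msgs...); err != nil {
            log.Fatal("failed to commit messages:", err)
        }
        msgs = msgs[:0]
    }
}
```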

### Managing Commits

By default, `CommitMessages` will synchronously commit offsets to Kafka. For
improved performance, you can instead periodically commit offsets to Kafka
by setting `CommitInterval` on the `ReaderConfig`.

```go
// make a new reader that consumes from topic-A
r := kafka.NewReader(kafka.ReaderConfig{
    Brokers:        []string{"localhost:9092"},
    GroupID:        "consumer-group-id",
    Topic:          "topic-A",
    MinBytes:       10e3,        // 10KB
    MaxBytes:       10e6,        // 10MB
    CommitInterval: time.Second, // flushes commits to Kafka every second
})
```

## Writer [![GoDoc](https://godoc.org/github.com/deanMdreon/kafka-go?status.svg)](https://godoc.org/github.com/deanMdreon/kafka-go#Writer)

To produce messages to Kafka, a program may use the low-level `Conn` API, but
the package also provides a higher level `Writer` type which is more appropriate
to use in most cases as it provides additional features:

- Automatic retries and reconnections on errors.
- Configurable distribution of messages across available partitions.
- Synchronous or asynchronous writes of messages to Kafka (an async sketch
  follows the example below).
- Asynchronous cancellation using contexts.
- Flushing of pending messages on close to support graceful shutdowns.

```go
// make a writer that produces to topic-A, using the least-bytes distribution
w := &kafka.Writer{
    Addr:     kafka.TCP("localhost:9092"),
    Topic:    "topic-A",
    Balancer: &kafka.LeastBytes{},
}

err := w.WriteMessages(context.Background(),
    kafka.Message{
        Key:   []byte("Key-A"),
        Value: []byte("Hello World!"),
    },
    kafka.Message{
        Key:   []byte("Key-B"),
        Value: []byte("One!"),
    },
    kafka.Message{
        Key:   []byte("Key-C"),
        Value: []byte("Two!"),
    },
)
if err != nil {
    log.Fatal("failed to write messages:", err)
}

if err := w.Close(); err != nil {
    log.Fatal("failed to close writer:", err)
}
```
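
The example above writes synchronously, blocking until the brokers have
acknowledged the messages. Setting `Async` makes `WriteMessages` return
immediately; errors are then only observable through the `Completion` callback.
A minimal sketch:

```go
w := &kafka.Writer{
    Addr:     kafka.TCP("localhost:9092"),
    Topic:    "topic-A",
    Balancer: &kafka.LeastBytes{},
    Async:    true, // WriteMessages returns without waiting for acknowledgements

    // Completion is invoked as message batches complete; without it,
    // asynchronous write errors would go unnoticed.
    Completion: func(messages []kafka.Message, err error) {
        if err != nil {
            log.Println("async write failed:", err)
        }
    },
}

// enqueue and return immediately; delivery happens in the background
err := w.WriteMessages(context.Background(),
    kafka.Message{Value: []byte("fire and forget!")},
)
if err != nil {
    log.Fatal("failed to enqueue message:", err)
}

// Close flushes any pending messages before returning
if err := w.Close(); err != nil {
    log.Fatal("failed to close writer:", err)
}
```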

### Writing to multiple topics

Normally, the `Writer.Topic` field is used to initialize a single-topic writer.
By excluding that particular configuration, you are given the ability to define
the topic on a per-message basis by setting `Message.Topic`.

```go
w := &kafka.Writer{
    Addr: kafka.TCP("localhost:9092"),
    // NOTE: When Topic is not defined here, each Message must define it instead.
    Balancer: &kafka.LeastBytes{},
}

err := w.WriteMessages(context.Background(),
    // NOTE: Each Message has Topic defined, otherwise an error is returned.
    kafka.Message{
        Topic: "topic-A",
        Key:   []byte("Key-A"),
        Value: []byte("Hello World!"),
    },
    kafka.Message{
        Topic: "topic-B",
        Key:   []byte("Key-B"),
        Value: []byte("One!"),
    },
    kafka.Message{
        Topic: "topic-C",
        Key:   []byte("Key-C"),
        Value: []byte("Two!"),
    },
)
if err != nil {
    log.Fatal("failed to write messages:", err)
}

if err := w.Close(); err != nil {
    log.Fatal("failed to close writer:", err)
}
```

**NOTE:** These 2 patterns are mutually exclusive. If you set `Writer.Topic`,
you must not also explicitly define `Message.Topic` on the messages you are
writing. The opposite applies when you do not define a topic for the writer.
The `Writer` will return an error if it detects this ambiguity.

### Compatibility with other clients

#### Sarama

If you're switching from Sarama and need/want to use the same algorithm for message
partitioning, you can use the `kafka.Hash` balancer. `kafka.Hash` routes
messages to the same partitions that Sarama's default partitioner would route to.

```go
w := &kafka.Writer{
    Addr:     kafka.TCP("localhost:9092"),
    Topic:    "topic-A",
    Balancer: &kafka.Hash{},
}
```

#### librdkafka and confluent-kafka-go

Use the `kafka.CRC32Balancer` balancer to get the same behaviour as librdkafka's
default `consistent_random` partition strategy.

```go
w := &kafka.Writer{
    Addr:     kafka.TCP("localhost:9092"),
    Topic:    "topic-A",
    Balancer: kafka.CRC32Balancer{},
}
```

#### Java

Use the `kafka.Murmur2Balancer` balancer to get the same behaviour as the canonical
Java client's default partitioner. Note: the Java class allows you to directly
specify the partition, which this balancer does not permit.

```go
w := &kafka.Writer{
    Addr:     kafka.TCP("localhost:9092"),
    Topic:    "topic-A",
    Balancer: kafka.Murmur2Balancer{},
}
```

### Compression

Compression can be enabled on the `Writer` by setting the `Compression` field:

```go
w := &kafka.Writer{
    Addr:        kafka.TCP("localhost:9092"),
    Topic:       "topic-A",
    Compression: kafka.Snappy,
}
```

The `Reader` will determine if the consumed messages are compressed by
examining the message attributes. However, the package(s) for all expected
codecs must be imported so that they get loaded correctly.

_Note: in versions prior to 0.4, programs had to import compression packages to
install codecs and support reading compressed messages from Kafka. This is no
longer the case and importing the compression packages is now a no-op._

## TLS Support

For a bare-bones `Conn` type, or in the `Reader`/`Writer` configs, you can specify a dialer option for TLS support. If the `TLS` field is `nil`, it will not connect with TLS.
_Note:_ Connecting to a Kafka cluster with TLS enabled without configuring TLS on the Conn/Reader/Writer can manifest in opaque `io.ErrUnexpectedEOF` errors.

### Connection

```go
dialer := &kafka.Dialer{
    Timeout:   10 * time.Second,
    DualStack: true,
    TLS:       &tls.Config{...tls config...},
}

conn, err := dialer.DialContext(ctx, "tcp", "localhost:9093")
```

### Reader

```go
dialer := &kafka.Dialer{
    Timeout:   10 * time.Second,
    DualStack: true,
    TLS:       &tls.Config{...tls config...},
}

r := kafka.NewReader(kafka.ReaderConfig{
    Brokers: []string{"localhost:9093"},
    GroupID: "consumer-group-id",
    Topic:   "topic-A",
    Dialer:  dialer,
})
```

### Writer

Using `kafka.NewWriter`

```go
dialer := &kafka.Dialer{
    Timeout:   10 * time.Second,
    DualStack: true,
    TLS:       &tls.Config{...tls config...},
}

w := kafka.NewWriter(kafka.WriterConfig{
    Brokers:  []string{"localhost:9093"},
    Topic:    "topic-A",
    Balancer: &kafka.Hash{},
    Dialer:   dialer,
})
```

Direct Writer creation

```go
w := kafka.Writer{
    Addr:     kafka.TCP("localhost:9093"),
    Topic:    "topic-A",
    Balancer: &kafka.Hash{},
    Transport: &kafka.Transport{
        TLS: &tls.Config{},
    },
}
```

## SASL Support

You can specify an option on the `Dialer` to use SASL authentication. The `Dialer` can be used directly to open a `Conn` or it can be passed to a `Reader` or `Writer` via their respective configs. If the `SASLMechanism` field is `nil`, it will not authenticate with SASL.

### SASL Authentication Types

#### [Plain](https://godoc.org/github.com/deanMdreon/kafka-go/sasl/plain#Mechanism)

```go
mechanism := plain.Mechanism{
    Username: "username",
    Password: "password",
}
```

#### [SCRAM](https://godoc.org/github.com/deanMdreon/kafka-go/sasl/scram#Mechanism)

```go
mechanism, err := scram.Mechanism(scram.SHA512, "username", "password")
if err != nil {
    panic(err)
}
```

### Connection

```go
mechanism, err := scram.Mechanism(scram.SHA512, "username", "password")
if err != nil {
    panic(err)
}

dialer := &kafka.Dialer{
    Timeout:       10 * time.Second,
    DualStack:     true,
    SASLMechanism: mechanism,
}

conn, err := dialer.DialContext(ctx, "tcp", "localhost:9093")
```

### Reader

```go
mechanism, err := scram.Mechanism(scram.SHA512, "username", "password")
if err != nil {
    panic(err)
}

dialer := &kafka.Dialer{
    Timeout:       10 * time.Second,
    DualStack:     true,
    SASLMechanism: mechanism,
}

r := kafka.NewReader(kafka.ReaderConfig{
    Brokers: []string{"localhost:9093"},
    GroupID: "consumer-group-id",
    Topic:   "topic-A",
    Dialer:  dialer,
})
```

### Writer

```go
mechanism, err := scram.Mechanism(scram.SHA512, "username", "password")
if err != nil {
    panic(err)
}

// Transports are responsible for managing connection pools and other resources.
// It's generally best to create a few of these and share them across your
// application.
sharedTransport := &kafka.Transport{
    SASL: mechanism,
}

w := kafka.Writer{
    Addr:      kafka.TCP("localhost:9092"),
    Topic:     "topic-A",
    Balancer:  &kafka.Hash{},
    Transport: sharedTransport,
}
```

### Client

```go
mechanism, err := scram.Mechanism(scram.SHA512, "username", "password")
if err != nil {
    panic(err)
}

// Transports are responsible for managing connection pools and other resources.
// It's generally best to create a few of these and share them across your
// application.
sharedTransport := &kafka.Transport{
    SASL: mechanism,
}

client := &kafka.Client{
    Addr:      kafka.TCP("localhost:9092"),
    Timeout:   10 * time.Second,
    Transport: sharedTransport,
}
```
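
The `Client` exposes request/response style methods for the Kafka protocol
APIs. As an illustration, a minimal sketch fetching cluster metadata (the
request and response fields shown are assumptions of this sketch):

```go
ctx := context.Background()

// Metadata issues a Metadata API request against client.Addr
resp, err := client.Metadata(ctx, &kafka.MetadataRequest{
    Topics: []string{"topic-A"},
})
if err != nil {
    panic(err)
}

for _, t := range resp.Topics {
    fmt.Printf("topic %s has %d partitions\n", t.Name, len(t.Partitions))
}
```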

#### Reading all messages within a time range

```go
startTime := time.Now().Add(-time.Hour)
endTime := time.Now()
batchSize := int(10e6) // 10MB

r := kafka.NewReader(kafka.ReaderConfig{
    Brokers:   []string{"localhost:9092"},
    Topic:     "my-topic1",
    Partition: 0,
    MinBytes:  batchSize,
    MaxBytes:  batchSize,
})

r.SetOffsetAt(context.Background(), startTime)

for {
    m, err := r.ReadMessage(context.Background())

    if err != nil {
        break
    }
    if m.Time.After(endTime) {
        break
    }
    // TODO: process message
    fmt.Printf("message at offset %d: %s = %s\n", m.Offset, string(m.Key), string(m.Value))
}

if err := r.Close(); err != nil {
    log.Fatal("failed to close reader:", err)
}
```

## Logging

For visibility into the operations of the `Reader`/`Writer` types, configure a logger on creation.

### Reader

```go
func logf(msg string, a ...interface{}) {
    fmt.Println(msg, a...)
}

r := kafka.NewReader(kafka.ReaderConfig{
    Brokers:     []string{"localhost:9092"},
    Topic:       "my-topic1",
    Partition:   0,
    Logger:      kafka.LoggerFunc(logf),
    ErrorLogger: kafka.LoggerFunc(logf),
})
```

### Writer

```go
func logf(msg string, a ...interface{}) {
    fmt.Println(msg, a...)
}

w := &kafka.Writer{
    Addr:        kafka.TCP("localhost:9092"),
    Topic:       "topic",
    Logger:      kafka.LoggerFunc(logf),
    ErrorLogger: kafka.LoggerFunc(logf),
}
```

## Testing

Subtle behavior changes in later Kafka versions have caused some historical tests
to break. If you are running against Kafka 2.3.1 or later, exporting the
`KAFKA_SKIP_NETTEST=1` environment variable will skip those tests.

Run Kafka locally in docker

```bash
docker-compose up -d
```

Run tests

```bash
KAFKA_VERSION=2.3.1 \
  KAFKA_SKIP_NETTEST=1 \
  go test -race ./...
```