# kafka-go [![CircleCI](https://circleci.com/gh/hoveychen/kafka-go.svg?style=shield)](https://circleci.com/gh/hoveychen/kafka-go) [![Go Report Card](https://goreportcard.com/badge/github.com/hoveychen/kafka-go)](https://goreportcard.com/report/github.com/hoveychen/kafka-go) [![GoDoc](https://godoc.org/github.com/hoveychen/kafka-go?status.svg)](https://godoc.org/github.com/hoveychen/kafka-go)

## Motivations

We rely on both Go and Kafka a lot at Segment. Unfortunately, the state of the Go
client libraries for Kafka at the time of this writing was not ideal. The available
options were:

- [sarama](https://github.com/Shopify/sarama), which is by far the most popular
but is quite difficult to work with. It is poorly documented, the API exposes
low-level concepts of the Kafka protocol, and it doesn't support recent Go features
like [contexts](https://golang.org/pkg/context/). It also passes all values as
pointers, which causes large numbers of dynamic memory allocations, more frequent
garbage collections, and higher memory usage.

- [confluent-kafka-go](https://github.com/confluentinc/confluent-kafka-go) is a
cgo-based wrapper around [librdkafka](https://github.com/edenhill/librdkafka),
which means it introduces a dependency on a C library for all Go code that uses
the package. It has much better documentation than sarama but still lacks support
for Go contexts.

- [goka](https://github.com/lovoo/goka) is a more recent Kafka client for Go
which focuses on a specific usage pattern. It provides abstractions for using Kafka
as a message passing bus between services rather than an ordered log of events, but
this is not the typical use case of Kafka for us at Segment. The package also
depends on sarama for all interactions with Kafka.

This is where `kafka-go` comes into play. It provides both low and high level
APIs for interacting with Kafka, mirroring concepts and implementing interfaces of
the Go standard library to make it easy to use and integrate with existing
software.

#### Note:

In order to better align with our newly adopted Code of Conduct, the kafka-go
project has renamed its default branch to `main`. For the full details of our
Code of Conduct, see [this](./CODE_OF_CONDUCT.md) document.

## Kafka versions

`kafka-go` is currently tested with Kafka versions 0.10.1.0 to 2.7.1.
While it should also be compatible with later versions, newer features available
in the Kafka API may not yet be implemented in the client.

## Go versions

`kafka-go` requires Go version 1.15 or later.

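For reference, a minimal `go.mod` sketch that pulls in this module (the module
name `example.com/myapp` is hypothetical):

```
module example.com/myapp

go 1.15

require github.com/hoveychen/kafka-go v0.4.42
```
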
## Connection [![GoDoc](https://godoc.org/github.com/hoveychen/kafka-go?status.svg)](https://godoc.org/github.com/hoveychen/kafka-go#Conn)

The `Conn` type is the core of the `kafka-go` package. It wraps around a raw
network connection to expose a low-level API to a Kafka server.

Here are some examples showing typical use of a connection object:
```go
// to produce messages
topic := "my-topic"
partition := 0

conn, err := kafka.DialLeader(context.Background(), "tcp", "localhost:9092", topic, partition)
if err != nil {
    log.Fatal("failed to dial leader:", err)
}

conn.SetWriteDeadline(time.Now().Add(10*time.Second))
_, err = conn.WriteMessages(
    kafka.Message{Value: []byte("one!")},
    kafka.Message{Value: []byte("two!")},
    kafka.Message{Value: []byte("three!")},
)
if err != nil {
    log.Fatal("failed to write messages:", err)
}

if err := conn.Close(); err != nil {
    log.Fatal("failed to close connection:", err)
}
```
```go
// to consume messages
topic := "my-topic"
partition := 0

conn, err := kafka.DialLeader(context.Background(), "tcp", "localhost:9092", topic, partition)
if err != nil {
    log.Fatal("failed to dial leader:", err)
}

conn.SetReadDeadline(time.Now().Add(10*time.Second))
batch := conn.ReadBatch(10e3, 1e6) // fetch 10KB min, 1MB max

b := make([]byte, 10e3) // 10KB max per message
for {
    n, err := batch.Read(b)
    if err != nil {
        break
    }
    fmt.Println(string(b[:n]))
}

if err := batch.Close(); err != nil {
    log.Fatal("failed to close batch:", err)
}

if err := conn.Close(); err != nil {
    log.Fatal("failed to close connection:", err)
}
```

### To Create Topics

By default, Kafka has `auto.create.topics.enable='true'` (`KAFKA_AUTO_CREATE_TOPICS_ENABLE='true'` in the wurstmeister/kafka docker image). If this value is set to `'true'`, then topics will be created as a side effect of `kafka.DialLeader` like so:
```go
// to create topics when auto.create.topics.enable='true'
conn, err := kafka.DialLeader(context.Background(), "tcp", "localhost:9092", "my-topic", 0)
if err != nil {
    panic(err.Error())
}
```

If `auto.create.topics.enable='false'`, then you will need to create topics explicitly like so:
```go
// to create topics when auto.create.topics.enable='false'
topic := "my-topic"

conn, err := kafka.Dial("tcp", "localhost:9092")
if err != nil {
    panic(err.Error())
}
defer conn.Close()

controller, err := conn.Controller()
if err != nil {
    panic(err.Error())
}

controllerConn, err := kafka.Dial("tcp", net.JoinHostPort(controller.Host, strconv.Itoa(controller.Port)))
if err != nil {
    panic(err.Error())
}
defer controllerConn.Close()

topicConfigs := []kafka.TopicConfig{
    {
        Topic:             topic,
        NumPartitions:     1,
        ReplicationFactor: 1,
    },
}

err = controllerConn.CreateTopics(topicConfigs...)
if err != nil {
    panic(err.Error())
}
```

### To Connect To Leader Via a Non-leader Connection
```go
// to connect to the kafka leader via an existing non-leader connection rather than using DialLeader
conn, err := kafka.Dial("tcp", "localhost:9092")
if err != nil {
    panic(err.Error())
}
defer conn.Close()
controller, err := conn.Controller()
if err != nil {
    panic(err.Error())
}
connLeader, err := kafka.Dial("tcp", net.JoinHostPort(controller.Host, strconv.Itoa(controller.Port)))
if err != nil {
    panic(err.Error())
}
defer connLeader.Close()
```

### To list topics
```go
conn, err := kafka.Dial("tcp", "localhost:9092")
if err != nil {
    panic(err.Error())
}
defer conn.Close()

partitions, err := conn.ReadPartitions()
if err != nil {
    panic(err.Error())
}

m := map[string]struct{}{}

for _, p := range partitions {
    m[p.Topic] = struct{}{}
}
for k := range m {
    fmt.Println(k)
}
```

Because it is low level, the `Conn` type turns out to be a great building block
for higher level abstractions, like the `Reader` for example.

## Reader [![GoDoc](https://godoc.org/github.com/hoveychen/kafka-go?status.svg)](https://godoc.org/github.com/hoveychen/kafka-go#Reader)

A `Reader` is another concept exposed by the `kafka-go` package, which intends
to make it simpler to implement the typical use case of consuming from a single
topic-partition pair.
A `Reader` also automatically handles reconnections and offset management, and
exposes an API that supports asynchronous cancellations and timeouts using Go
contexts.

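For instance, a blocked read can be abandoned via a context timeout. A minimal
sketch, assuming `r` is an already-configured `Reader`:

```go
// give up on the read if no message arrives within 5 seconds
ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
defer cancel()

m, err := r.ReadMessage(ctx)
if err != nil {
    // err is context.DeadlineExceeded if the timeout elapsed first
    log.Println("read failed:", err)
} else {
    fmt.Println(string(m.Value))
}
```
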
Note that it is important to call `Close()` on a `Reader` when a process exits.
The kafka server needs a graceful disconnect to stop it from continuing to
attempt to send messages to the connected clients. The given example will not
call `Close()` if the process is terminated with SIGINT (ctrl-c at the shell) or
SIGTERM (as `docker stop` or a kubernetes restart does). This can result in a
delay when a new reader on the same topic connects (e.g. new process started
or new container running). Use a `signal.Notify` handler to close the reader on
process shutdown, as in the sketch after the example below.

```go
// make a new reader that consumes from topic-A, partition 0, at offset 42
r := kafka.NewReader(kafka.ReaderConfig{
    Brokers:   []string{"localhost:9092", "localhost:9093", "localhost:9094"},
    Topic:     "topic-A",
    Partition: 0,
    MaxBytes:  10e6, // 10MB
})
r.SetOffset(42)

for {
    m, err := r.ReadMessage(context.Background())
    if err != nil {
        break
    }
    fmt.Printf("message at offset %d: %s = %s\n", m.Offset, string(m.Key), string(m.Value))
}

if err := r.Close(); err != nil {
    log.Fatal("failed to close reader:", err)
}
```
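
A minimal sketch of such a shutdown handler, assuming `r` is the reader from the
example above and using the standard `os/signal` and `syscall` packages (the
exit code and log message are illustrative):

```go
sigs := make(chan os.Signal, 1)
signal.Notify(sigs, syscall.SIGINT, syscall.SIGTERM)

go func() {
    <-sigs
    // closing the reader performs the graceful disconnect described above
    if err := r.Close(); err != nil {
        log.Fatal("failed to close reader:", err)
    }
    os.Exit(0)
}()
```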

### Consumer Groups

`kafka-go` also supports Kafka consumer groups, including broker-managed offsets.
To enable consumer groups, simply specify the `GroupID` in the `ReaderConfig`.

`ReadMessage` automatically commits offsets when using consumer groups.

```go
// make a new reader that consumes from topic-A
r := kafka.NewReader(kafka.ReaderConfig{
    Brokers:  []string{"localhost:9092", "localhost:9093", "localhost:9094"},
    GroupID:  "consumer-group-id",
    Topic:    "topic-A",
    MaxBytes: 10e6, // 10MB
})

for {
    m, err := r.ReadMessage(context.Background())
    if err != nil {
        break
    }
    fmt.Printf("message at topic/partition/offset %v/%v/%v: %s = %s\n", m.Topic, m.Partition, m.Offset, string(m.Key), string(m.Value))
}

if err := r.Close(); err != nil {
    log.Fatal("failed to close reader:", err)
}
```

There are a number of limitations when using consumer groups:

* `(*Reader).SetOffset` will return an error when `GroupID` is set
* `(*Reader).Offset` will always return `-1` when `GroupID` is set
* `(*Reader).Lag` will always return `-1` when `GroupID` is set
* `(*Reader).ReadLag` will return an error when `GroupID` is set
* `(*Reader).Stats` will return a partition of `-1` when `GroupID` is set

### Explicit Commits

`kafka-go` also supports explicit commits. Instead of calling `ReadMessage`,
call `FetchMessage` followed by `CommitMessages`.

```go
ctx := context.Background()
for {
    m, err := r.FetchMessage(ctx)
    if err != nil {
        break
    }
    fmt.Printf("message at topic/partition/offset %v/%v/%v: %s = %s\n", m.Topic, m.Partition, m.Offset, string(m.Key), string(m.Value))
    if err := r.CommitMessages(ctx, m); err != nil {
        log.Fatal("failed to commit messages:", err)
    }
}
```

When committing messages in consumer groups, the message with the highest offset
for a given topic/partition determines the value of the committed offset for
that partition. For example, if messages at offsets 1, 2, and 3 of a single
partition were retrieved by calls to `FetchMessage`, calling `CommitMessages`
with the message at offset 3 will also result in committing the messages at
offsets 1 and 2 for that partition.

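A consumer can exploit this to commit once per batch instead of once per
message. A minimal sketch, assuming `r` is a consumer-group `Reader` as above:

```go
// fetch a batch of messages, then commit them in a single call; kafka-go
// commits the highest offset per topic/partition in the batch.
ctx := context.Background()
msgs := make([]kafka.Message, 0, 100)

for len(msgs) < cap(msgs) {
    m, err := r.FetchMessage(ctx)
    if err != nil {
        break
    }
    msgs = append(msgs, m)
}

if len(msgs) > 0 {
    if err := r.CommitMessages(ctx, msgs...); err != nil {
        log.Fatal("failed to commit messages:", err)
    }
}
```
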
### Managing Commits

By default, `CommitMessages` will synchronously commit offsets to Kafka. For
improved performance, you can instead periodically commit offsets to Kafka
by setting `CommitInterval` on the `ReaderConfig`.

```go
// make a new reader that consumes from topic-A
r := kafka.NewReader(kafka.ReaderConfig{
    Brokers:        []string{"localhost:9092", "localhost:9093", "localhost:9094"},
    GroupID:        "consumer-group-id",
    Topic:          "topic-A",
    MaxBytes:       10e6, // 10MB
    CommitInterval: time.Second, // flushes commits to Kafka every second
})
```

## Writer [![GoDoc](https://godoc.org/github.com/hoveychen/kafka-go?status.svg)](https://godoc.org/github.com/hoveychen/kafka-go#Writer)

To produce messages to Kafka, a program may use the low-level `Conn` API, but
the package also provides a higher level `Writer` type which is more appropriate
to use in most cases as it provides additional features:

- Automatic retries and reconnections on errors.
- Configurable distribution of messages across available partitions.
- Synchronous or asynchronous writes of messages to Kafka.
- Asynchronous cancellation using contexts.
- Flushing of pending messages on close to support graceful shutdowns.
- Creation of a missing topic before publishing a message. *Note:* this was the default behaviour up to version `v0.4.30`.

```go
// make a writer that produces to topic-A, using the least-bytes distribution
w := &kafka.Writer{
    Addr:     kafka.TCP("localhost:9092", "localhost:9093", "localhost:9094"),
    Topic:    "topic-A",
    Balancer: &kafka.LeastBytes{},
}

err := w.WriteMessages(context.Background(),
    kafka.Message{
        Key:   []byte("Key-A"),
        Value: []byte("Hello World!"),
    },
    kafka.Message{
        Key:   []byte("Key-B"),
        Value: []byte("One!"),
    },
    kafka.Message{
        Key:   []byte("Key-C"),
        Value: []byte("Two!"),
    },
)
if err != nil {
    log.Fatal("failed to write messages:", err)
}

if err := w.Close(); err != nil {
    log.Fatal("failed to close writer:", err)
}
```

### Missing topic creation before publication

```go
// Make a writer that publishes messages to topic-A.
// The topic will be created if it is missing.
w := &kafka.Writer{
    Addr:                   kafka.TCP("localhost:9092", "localhost:9093", "localhost:9094"),
    Topic:                  "topic-A",
    AllowAutoTopicCreation: true,
}

messages := []kafka.Message{
    {
        Key:   []byte("Key-A"),
        Value: []byte("Hello World!"),
    },
    {
        Key:   []byte("Key-B"),
        Value: []byte("One!"),
    },
    {
        Key:   []byte("Key-C"),
        Value: []byte("Two!"),
    },
}

var err error
const retries = 3
for i := 0; i < retries; i++ {
    ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)

    // the topic is created as a side effect of the first write, so retry
    // while the cluster elects a leader for it
    err = w.WriteMessages(ctx, messages...)
    cancel()
    if errors.Is(err, kafka.LeaderNotAvailable) || errors.Is(err, context.DeadlineExceeded) {
        time.Sleep(time.Millisecond * 250)
        continue
    }

    if err != nil {
        log.Fatalf("unexpected error %v", err)
    }
    break
}

if err := w.Close(); err != nil {
    log.Fatal("failed to close writer:", err)
}
```

### Writing to multiple topics

Normally, `Writer.Topic` is used to initialize a single-topic writer.
By excluding that particular configuration, you are given the ability to define
the topic on a per-message basis by setting `Message.Topic`.

```go
w := &kafka.Writer{
    Addr: kafka.TCP("localhost:9092", "localhost:9093", "localhost:9094"),
    // NOTE: When Topic is not defined here, each Message must define it instead.
    Balancer: &kafka.LeastBytes{},
}

err := w.WriteMessages(context.Background(),
    // NOTE: Each Message has Topic defined, otherwise an error is returned.
    kafka.Message{
        Topic: "topic-A",
        Key:   []byte("Key-A"),
        Value: []byte("Hello World!"),
    },
    kafka.Message{
        Topic: "topic-B",
        Key:   []byte("Key-B"),
        Value: []byte("One!"),
    },
    kafka.Message{
        Topic: "topic-C",
        Key:   []byte("Key-C"),
        Value: []byte("Two!"),
    },
)
if err != nil {
    log.Fatal("failed to write messages:", err)
}

if err := w.Close(); err != nil {
    log.Fatal("failed to close writer:", err)
}
```

**NOTE:** These two patterns are mutually exclusive. If you set `Writer.Topic`,
you must not also explicitly define `Message.Topic` on the messages you are
writing. The opposite applies when you do not define a topic for the writer.
The `Writer` will return an error if it detects this ambiguity.

### Compatibility with other clients

#### Sarama

If you're switching from Sarama and need/want to use the same algorithm for message
partitioning, you can use either the `kafka.Hash` balancer or the `kafka.ReferenceHash` balancer:
* `kafka.Hash` = `sarama.NewHashPartitioner`
* `kafka.ReferenceHash` = `sarama.NewReferenceHashPartitioner`

The `kafka.Hash` and `kafka.ReferenceHash` balancers route messages to the same
partitions as the two aforementioned Sarama partitioners.

```go
w := &kafka.Writer{
    Addr:     kafka.TCP("localhost:9092", "localhost:9093", "localhost:9094"),
    Topic:    "topic-A",
    Balancer: &kafka.Hash{},
}
```

#### librdkafka and confluent-kafka-go

Use the `kafka.CRC32Balancer` balancer to get the same behaviour as librdkafka's
default `consistent_random` partition strategy.

```go
w := &kafka.Writer{
    Addr:     kafka.TCP("localhost:9092", "localhost:9093", "localhost:9094"),
    Topic:    "topic-A",
    Balancer: kafka.CRC32Balancer{},
}
```

#### Java

Use the `kafka.Murmur2Balancer` balancer to get the same behaviour as the canonical
Java client's default partitioner. Note: the Java class allows you to directly specify
the partition, which is not permitted here.

```go
w := &kafka.Writer{
    Addr:     kafka.TCP("localhost:9092", "localhost:9093", "localhost:9094"),
    Topic:    "topic-A",
    Balancer: kafka.Murmur2Balancer{},
}
```

### Compression

Compression can be enabled on the `Writer` by setting the `Compression` field:

```go
w := &kafka.Writer{
    Addr:        kafka.TCP("localhost:9092", "localhost:9093", "localhost:9094"),
    Topic:       "topic-A",
    Compression: kafka.Snappy,
}
```

The `Reader` determines whether the consumed messages are compressed by
examining the message attributes. However, the package(s) for all expected
codecs must be imported so that they get loaded correctly.

_Note: in versions prior to 0.4, programs had to import compression packages to
install codecs and support reading compressed messages from kafka. This is no
longer the case and importing the compression packages is now a no-op._

## TLS Support

For a bare-bones `Conn` type, or in the `Reader`/`Writer` configs, you can specify a dialer option for TLS support. If the `TLS` field is `nil`, it will not connect with TLS.

*Note:* Connecting to a Kafka cluster with TLS enabled without configuring TLS on the `Conn`/`Reader`/`Writer` can manifest as opaque `io.ErrUnexpectedEOF` errors.

### Connection

```go
dialer := &kafka.Dialer{
    Timeout:   10 * time.Second,
    DualStack: true,
    TLS:       &tls.Config{...tls config...},
}

conn, err := dialer.DialContext(ctx, "tcp", "localhost:9093")
```

### Reader

```go
dialer := &kafka.Dialer{
    Timeout:   10 * time.Second,
    DualStack: true,
    TLS:       &tls.Config{...tls config...},
}

r := kafka.NewReader(kafka.ReaderConfig{
    Brokers: []string{"localhost:9092", "localhost:9093", "localhost:9094"},
    GroupID: "consumer-group-id",
    Topic:   "topic-A",
    Dialer:  dialer,
})
```

### Writer

Direct `Writer` creation:

```go
w := kafka.Writer{
    Addr:     kafka.TCP("localhost:9092", "localhost:9093", "localhost:9094"),
    Topic:    "topic-A",
    Balancer: &kafka.Hash{},
    Transport: &kafka.Transport{
        TLS: &tls.Config{},
    },
}
```

Using `kafka.NewWriter`:

```go
dialer := &kafka.Dialer{
    Timeout:   10 * time.Second,
    DualStack: true,
    TLS:       &tls.Config{...tls config...},
}

w := kafka.NewWriter(kafka.WriterConfig{
    Brokers:  []string{"localhost:9092", "localhost:9093", "localhost:9094"},
    Topic:    "topic-A",
    Balancer: &kafka.Hash{},
    Dialer:   dialer,
})
```

Note that `kafka.NewWriter` and `kafka.WriterConfig` are deprecated and will be removed in a future release.

## SASL Support

You can specify an option on the `Dialer` to use SASL authentication. The `Dialer` can be used directly to open a `Conn` or it can be passed to a `Reader` or `Writer` via their respective configs. If the `SASLMechanism` field is `nil`, it will not authenticate with SASL.

### SASL Authentication Types

#### [Plain](https://godoc.org/github.com/hoveychen/kafka-go/sasl/plain#Mechanism)

```go
mechanism := plain.Mechanism{
    Username: "username",
    Password: "password",
}
```

#### [SCRAM](https://godoc.org/github.com/hoveychen/kafka-go/sasl/scram#Mechanism)

```go
mechanism, err := scram.Mechanism(scram.SHA512, "username", "password")
if err != nil {
    panic(err)
}
```

### Connection

```go
mechanism, err := scram.Mechanism(scram.SHA512, "username", "password")
if err != nil {
    panic(err)
}

dialer := &kafka.Dialer{
    Timeout:       10 * time.Second,
    DualStack:     true,
    SASLMechanism: mechanism,
}

conn, err := dialer.DialContext(ctx, "tcp", "localhost:9093")
```

### Reader

```go
mechanism, err := scram.Mechanism(scram.SHA512, "username", "password")
if err != nil {
    panic(err)
}

dialer := &kafka.Dialer{
    Timeout:       10 * time.Second,
    DualStack:     true,
    SASLMechanism: mechanism,
}

r := kafka.NewReader(kafka.ReaderConfig{
    Brokers: []string{"localhost:9092", "localhost:9093", "localhost:9094"},
    GroupID: "consumer-group-id",
    Topic:   "topic-A",
    Dialer:  dialer,
})
```

### Writer

```go
mechanism, err := scram.Mechanism(scram.SHA512, "username", "password")
if err != nil {
    panic(err)
}

// Transports are responsible for managing connection pools and other resources;
// it's generally best to create a few of these and share them across your
// application.
sharedTransport := &kafka.Transport{
    SASL: mechanism,
}

w := kafka.Writer{
    Addr:      kafka.TCP("localhost:9092", "localhost:9093", "localhost:9094"),
    Topic:     "topic-A",
    Balancer:  &kafka.Hash{},
    Transport: sharedTransport,
}
```

### Client

```go
mechanism, err := scram.Mechanism(scram.SHA512, "username", "password")
if err != nil {
    panic(err)
}

// Transports are responsible for managing connection pools and other resources;
// it's generally best to create a few of these and share them across your
// application.
sharedTransport := &kafka.Transport{
    SASL: mechanism,
}

client := &kafka.Client{
    Addr:      kafka.TCP("localhost:9092", "localhost:9093", "localhost:9094"),
    Timeout:   10 * time.Second,
    Transport: sharedTransport,
}
```

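The `Client` can then issue Kafka protocol requests directly. A minimal sketch
of a metadata request, assuming the client above and a hypothetical topic named
`my-topic`:

```go
// list the partitions of my-topic via a metadata request
resp, err := client.Metadata(context.Background(), &kafka.MetadataRequest{
    Addr:   client.Addr,
    Topics: []string{"my-topic"},
})
if err != nil {
    panic(err)
}

for _, t := range resp.Topics {
    fmt.Println(t.Name, "has", len(t.Partitions), "partitions")
}
```
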
#### Reading all messages within a time range

```go
startTime := time.Now().Add(-time.Hour)
endTime := time.Now()
batchSize := int(10e6) // 10MB

r := kafka.NewReader(kafka.ReaderConfig{
    Brokers:   []string{"localhost:9092", "localhost:9093", "localhost:9094"},
    Topic:     "my-topic1",
    Partition: 0,
    MaxBytes:  batchSize,
})

r.SetOffsetAt(context.Background(), startTime)

for {
    m, err := r.ReadMessage(context.Background())
    if err != nil {
        break
    }
    if m.Time.After(endTime) {
        break
    }
    // TODO: process message
    fmt.Printf("message at offset %d: %s = %s\n", m.Offset, string(m.Key), string(m.Value))
}

if err := r.Close(); err != nil {
    log.Fatal("failed to close reader:", err)
}
```

## Logging

For visibility into the operations of the Reader/Writer types, configure a logger on creation.

### Reader

```go
func logf(msg string, a ...interface{}) {
    fmt.Printf(msg, a...)
    fmt.Println()
}

r := kafka.NewReader(kafka.ReaderConfig{
    Brokers:     []string{"localhost:9092", "localhost:9093", "localhost:9094"},
    Topic:       "my-topic1",
    Partition:   0,
    Logger:      kafka.LoggerFunc(logf),
    ErrorLogger: kafka.LoggerFunc(logf),
})
```

### Writer

```go
func logf(msg string, a ...interface{}) {
    fmt.Printf(msg, a...)
    fmt.Println()
}

w := &kafka.Writer{
    Addr:        kafka.TCP("localhost:9092"),
    Topic:       "topic",
    Logger:      kafka.LoggerFunc(logf),
    ErrorLogger: kafka.LoggerFunc(logf),
}
```

## Testing

Subtle behavior changes in later Kafka versions have caused some historical tests to break. If you are running against Kafka 2.3.1 or later, exporting the `KAFKA_SKIP_NETTEST=1` environment variable will skip those tests.

Run Kafka locally in docker:

```bash
docker-compose up -d
```

Run the tests:

```bash
KAFKA_VERSION=2.3.1 \
  KAFKA_SKIP_NETTEST=1 \
  go test -race ./...
```