# kafka-go [![CircleCI](https://circleci.com/gh/segmentio/kafka-go.svg?style=shield)](https://circleci.com/gh/segmentio/kafka-go) [![Go Report Card](https://goreportcard.com/badge/github.com/QuangHoangHao/kafka-go)](https://goreportcard.com/report/github.com/QuangHoangHao/kafka-go) [![GoDoc](https://godoc.org/github.com/QuangHoangHao/kafka-go?status.svg)](https://godoc.org/github.com/QuangHoangHao/kafka-go)

## Motivations

We rely on both Go and Kafka a lot at Segment. Unfortunately, the state of the Go
client libraries for Kafka at the time of this writing was not ideal. The available
options were:

- [sarama](https://github.com/Shopify/sarama), which is by far the most popular
  but is quite difficult to work with. It is poorly documented, the API exposes
  low-level concepts of the Kafka protocol, and it doesn't support recent Go features
  like [contexts](https://golang.org/pkg/context/). It also passes all values as
  pointers, which causes large numbers of dynamic memory allocations, more frequent
  garbage collections, and higher memory usage.

- [confluent-kafka-go](https://github.com/confluentinc/confluent-kafka-go) is a
  cgo-based wrapper around [librdkafka](https://github.com/edenhill/librdkafka),
  which means it introduces a dependency on a C library for all Go code that uses
  the package. It has much better documentation than sarama but still lacks support
  for Go contexts.

- [goka](https://github.com/lovoo/goka) is a more recent Kafka client for Go
  which focuses on a specific usage pattern. It provides abstractions for using Kafka
  as a message-passing bus between services rather than an ordered log of events, but
  this is not the typical use case of Kafka for us at Segment. The package also
  depends on sarama for all interactions with Kafka.

This is where `kafka-go` comes into play. It provides both low- and high-level
APIs for interacting with Kafka, mirroring concepts and implementing interfaces of
the Go standard library to make it easy to use and integrate with existing
software.

#### Note:

In order to better align with our newly adopted Code of Conduct, the kafka-go
project has renamed our default branch to `main`. For the full details of our
Code of Conduct, see [this](./CODE_OF_CONDUCT.md) document.

## Kafka versions

`kafka-go` is currently tested with Kafka versions 0.10.1.0 to 2.7.1.
While it should also be compatible with later versions, newer features available
in the Kafka API may not yet be implemented in the client.

## Go versions

`kafka-go` requires Go version 1.15 or later.

## Connection [![GoDoc](https://godoc.org/github.com/QuangHoangHao/kafka-go?status.svg)](https://godoc.org/github.com/QuangHoangHao/kafka-go#Conn)

The `Conn` type is the core of the `kafka-go` package. It wraps around a raw
network connection to expose a low-level API to a Kafka server.

Here are some examples showing typical use of a connection object:

```go
// to produce messages
topic := "my-topic"
partition := 0

conn, err := kafka.DialLeader(context.Background(), "tcp", "localhost:9092", topic, partition)
if err != nil {
    log.Fatal("failed to dial leader:", err)
}

conn.SetWriteDeadline(time.Now().Add(10*time.Second))
_, err = conn.WriteMessages(
    kafka.Message{Value: []byte("one!")},
    kafka.Message{Value: []byte("two!")},
    kafka.Message{Value: []byte("three!")},
)
if err != nil {
    log.Fatal("failed to write messages:", err)
}

if err := conn.Close(); err != nil {
    log.Fatal("failed to close connection:", err)
}
```

```go
// to consume messages
topic := "my-topic"
partition := 0

conn, err := kafka.DialLeader(context.Background(), "tcp", "localhost:9092", topic, partition)
if err != nil {
    log.Fatal("failed to dial leader:", err)
}

conn.SetReadDeadline(time.Now().Add(10*time.Second))
batch := conn.ReadBatch(10e3, 1e6) // fetch 10KB min, 1MB max

b := make([]byte, 10e3) // 10KB max per message
for {
    n, err := batch.Read(b)
    if err != nil {
        break
    }
    fmt.Println(string(b[:n]))
}

if err := batch.Close(); err != nil {
    log.Fatal("failed to close batch:", err)
}

if err := conn.Close(); err != nil {
    log.Fatal("failed to close connection:", err)
}
```

### To Create Topics

By default, Kafka has `auto.create.topics.enable='true'` (`KAFKA_AUTO_CREATE_TOPICS_ENABLE='true'` in the wurstmeister/kafka docker image). If this value is set to `'true'`, topics will be created as a side effect of `kafka.DialLeader` like so:

```go
// to create topics when auto.create.topics.enable='true'
conn, err := kafka.DialLeader(context.Background(), "tcp", "localhost:9092", "my-topic", 0)
if err != nil {
    panic(err.Error())
}
```

If `auto.create.topics.enable='false'` then you will need to create topics explicitly like so:

```go
// to create topics when auto.create.topics.enable='false'
topic := "my-topic"

conn, err := kafka.Dial("tcp", "localhost:9092")
if err != nil {
    panic(err.Error())
}
defer conn.Close()

controller, err := conn.Controller()
if err != nil {
    panic(err.Error())
}
var controllerConn *kafka.Conn
controllerConn, err = kafka.Dial("tcp", net.JoinHostPort(controller.Host, strconv.Itoa(controller.Port)))
if err != nil {
    panic(err.Error())
}
defer controllerConn.Close()

topicConfigs := []kafka.TopicConfig{
    {
        Topic:             topic,
        NumPartitions:     1,
        ReplicationFactor: 1,
    },
}

err = controllerConn.CreateTopics(topicConfigs...)
if err != nil {
    panic(err.Error())
}
```

### To Connect To Leader Via a Non-leader Connection

```go
// to connect to the kafka leader via an existing non-leader connection rather than using DialLeader
conn, err := kafka.Dial("tcp", "localhost:9092")
if err != nil {
    panic(err.Error())
}
defer conn.Close()
controller, err := conn.Controller()
if err != nil {
    panic(err.Error())
}
var connLeader *kafka.Conn
connLeader, err = kafka.Dial("tcp", net.JoinHostPort(controller.Host, strconv.Itoa(controller.Port)))
if err != nil {
    panic(err.Error())
}
defer connLeader.Close()
```

### To List Topics

```go
conn, err := kafka.Dial("tcp", "localhost:9092")
if err != nil {
    panic(err.Error())
}
defer conn.Close()

partitions, err := conn.ReadPartitions()
if err != nil {
    panic(err.Error())
}

m := map[string]struct{}{}

for _, p := range partitions {
    m[p.Topic] = struct{}{}
}
for k := range m {
    fmt.Println(k)
}
```

Because it is low level, the `Conn` type turns out to be a great building block
for higher-level abstractions, like the `Reader` for example.

## Reader [![GoDoc](https://godoc.org/github.com/QuangHoangHao/kafka-go?status.svg)](https://godoc.org/github.com/QuangHoangHao/kafka-go#Reader)

A `Reader` is another concept exposed by the `kafka-go` package, which intends
to make it simpler to implement the typical use case of consuming from a single
topic-partition pair.
A `Reader` also automatically handles reconnections and offset management, and
exposes an API that supports asynchronous cancellations and timeouts using Go
contexts.

Note that it is important to call `Close()` on a `Reader` when a process exits.
The Kafka server needs a graceful disconnect to stop it from continuing to
attempt to send messages to the connected clients. The example below will not
call `Close()` if the process is terminated with SIGINT (ctrl-c at the shell) or
SIGTERM (as docker stop or a kubernetes restart does). This can result in a
delay when a new reader on the same topic connects (e.g. new process started
or new container running). Use a `signal.Notify` handler to close the reader on
process shutdown, as shown in the sketch after the example below.

```go
// make a new reader that consumes from topic-A, partition 0, at offset 42
r := kafka.NewReader(kafka.ReaderConfig{
    Brokers:   []string{"localhost:9092", "localhost:9093", "localhost:9094"},
    Topic:     "topic-A",
    Partition: 0,
    MinBytes:  10e3, // 10KB
    MaxBytes:  10e6, // 10MB
})
r.SetOffset(42)

for {
    m, err := r.ReadMessage(context.Background())
    if err != nil {
        break
    }
    fmt.Printf("message at offset %d: %s = %s\n", m.Offset, string(m.Key), string(m.Value))
}

if err := r.Close(); err != nil {
    log.Fatal("failed to close reader:", err)
}
```

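One possible shape for such a shutdown handler is sketched below (the signal set
and goroutine placement are illustrative, and the snippet assumes the `os`,
`os/signal`, and `syscall` packages are imported):

```go
sigs := make(chan os.Signal, 1)
signal.Notify(sigs, syscall.SIGINT, syscall.SIGTERM)

go func() {
    <-sigs
    // Closing the reader unblocks any pending ReadMessage call,
    // so the consume loop above exits on the returned error.
    if err := r.Close(); err != nil {
        log.Fatal("failed to close reader:", err)
    }
}()
```
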
### Consumer Groups

`kafka-go` also supports Kafka consumer groups, including broker-managed offsets.
To enable consumer groups, simply specify the `GroupID` in the `ReaderConfig`.

`ReadMessage` automatically commits offsets when using consumer groups.

```go
// make a new reader that consumes from topic-A
r := kafka.NewReader(kafka.ReaderConfig{
    Brokers:  []string{"localhost:9092", "localhost:9093", "localhost:9094"},
    GroupID:  "consumer-group-id",
    Topic:    "topic-A",
    MinBytes: 10e3, // 10KB
    MaxBytes: 10e6, // 10MB
})

for {
    m, err := r.ReadMessage(context.Background())
    if err != nil {
        break
    }
    fmt.Printf("message at topic/partition/offset %v/%v/%v: %s = %s\n", m.Topic, m.Partition, m.Offset, string(m.Key), string(m.Value))
}

if err := r.Close(); err != nil {
    log.Fatal("failed to close reader:", err)
}
```

There are a number of limitations when using consumer groups, illustrated in the
sketch after this list:

- `(*Reader).SetOffset` will return an error when GroupID is set
- `(*Reader).Offset` will always return `-1` when GroupID is set
- `(*Reader).Lag` will always return `-1` when GroupID is set
- `(*Reader).ReadLag` will return an error when GroupID is set
- `(*Reader).Stats` will return a partition of `-1` when GroupID is set

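A hedged sketch of what these constraints look like in practice (the comments
show the expected results; the printed output is illustrative):

```go
r := kafka.NewReader(kafka.ReaderConfig{
    Brokers: []string{"localhost:9092", "localhost:9093", "localhost:9094"},
    GroupID: "consumer-group-id",
    Topic:   "topic-A",
})

// Offsets are managed by the group coordinator, so these report -1.
fmt.Println(r.Offset()) // -1
fmt.Println(r.Lag())    // -1

// Seeking within a partition conflicts with group rebalancing,
// so SetOffset returns an error.
if err := r.SetOffset(42); err != nil {
    fmt.Println("SetOffset:", err)
}
```
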
### Explicit Commits

`kafka-go` also supports explicit commits. Instead of calling `ReadMessage`,
call `FetchMessage` followed by `CommitMessages`.

```go
ctx := context.Background()
for {
    m, err := r.FetchMessage(ctx)
    if err != nil {
        break
    }
    fmt.Printf("message at topic/partition/offset %v/%v/%v: %s = %s\n", m.Topic, m.Partition, m.Offset, string(m.Key), string(m.Value))
    if err := r.CommitMessages(ctx, m); err != nil {
        log.Fatal("failed to commit messages:", err)
    }
}
```

When committing messages in consumer groups, the message with the highest offset
for a given topic/partition determines the value of the committed offset for
that partition. For example, if messages at offsets 1, 2, and 3 of a single
partition were retrieved by calls to `FetchMessage`, calling `CommitMessages`
with the message at offset 3 will also result in committing the messages at
offsets 1 and 2 for that partition.

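Because of this, commit traffic can be reduced by fetching several messages and
committing them in one call. A minimal sketch (the batch size is illustrative):

```go
ctx := context.Background()
batch := make([]kafka.Message, 0, 100)

for {
    m, err := r.FetchMessage(ctx)
    if err != nil {
        break
    }
    batch = append(batch, m)

    if len(batch) == cap(batch) {
        // one call commits the highest offset per topic/partition
        // among all the messages in the batch
        if err := r.CommitMessages(ctx, batch...); err != nil {
            log.Fatal("failed to commit messages:", err)
        }
        batch = batch[:0]
    }
}
```
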
### Managing Commits

By default, `CommitMessages` will synchronously commit offsets to Kafka. For
improved performance, you can instead periodically commit offsets to Kafka
by setting `CommitInterval` on the `ReaderConfig`.

```go
// make a new reader that consumes from topic-A
r := kafka.NewReader(kafka.ReaderConfig{
    Brokers:        []string{"localhost:9092", "localhost:9093", "localhost:9094"},
    GroupID:        "consumer-group-id",
    Topic:          "topic-A",
    MinBytes:       10e3,        // 10KB
    MaxBytes:       10e6,        // 10MB
    CommitInterval: time.Second, // flushes commits to Kafka every second
})
```

## Writer [![GoDoc](https://godoc.org/github.com/QuangHoangHao/kafka-go?status.svg)](https://godoc.org/github.com/QuangHoangHao/kafka-go#Writer)

To produce messages to Kafka, a program may use the low-level `Conn` API, but
the package also provides a higher-level `Writer` type which is more appropriate
to use in most cases as it provides additional features:

- Automatic retries and reconnections on errors.
- Configurable distribution of messages across available partitions.
- Synchronous or asynchronous writes of messages to Kafka.
- Asynchronous cancellation using contexts.
- Flushing of pending messages on close to support graceful shutdowns.
- Creation of a missing topic before publishing a message. _Note:_ this was the default behaviour up to version `v0.4.30`.

```go
// make a writer that produces to topic-A, using the least-bytes distribution
w := &kafka.Writer{
	Addr:     kafka.TCP("localhost:9092", "localhost:9093", "localhost:9094"),
	Topic:    "topic-A",
	Balancer: &kafka.LeastBytes{},
}

err := w.WriteMessages(context.Background(),
	kafka.Message{
		Key:   []byte("Key-A"),
		Value: []byte("Hello World!"),
	},
	kafka.Message{
		Key:   []byte("Key-B"),
		Value: []byte("One!"),
	},
	kafka.Message{
		Key:   []byte("Key-C"),
		Value: []byte("Two!"),
	},
)
if err != nil {
	log.Fatal("failed to write messages:", err)
}

if err := w.Close(); err != nil {
	log.Fatal("failed to close writer:", err)
}
```

### Missing topic creation before publication

```go
// Make a writer that publishes messages to topic-A.
// The topic will be created if it is missing.
w := &kafka.Writer{
    Addr:                   kafka.TCP("localhost:9092", "localhost:9093", "localhost:9094"),
    Topic:                  "topic-A",
    AllowAutoTopicCreation: true,
}

messages := []kafka.Message{
    {
        Key:   []byte("Key-A"),
        Value: []byte("Hello World!"),
    },
    {
        Key:   []byte("Key-B"),
        Value: []byte("One!"),
    },
    {
        Key:   []byte("Key-C"),
        Value: []byte("Two!"),
    },
}

var err error
const retries = 3
for i := 0; i < retries; i++ {
    ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)

    // attempt to create the topic prior to publishing the message
    err = w.WriteMessages(ctx, messages...)
    cancel()

    if errors.Is(err, kafka.LeaderNotAvailable) || errors.Is(err, context.DeadlineExceeded) {
        time.Sleep(time.Millisecond * 250)
        continue
    }

    if err != nil {
        log.Fatalf("unexpected error %v", err)
    }
    break
}

if err := w.Close(); err != nil {
    log.Fatal("failed to close writer:", err)
}
```

### Writing to multiple topics

Normally, the `WriterConfig.Topic` is used to initialize a single-topic writer.
By excluding that particular configuration, you are given the ability to define
the topic on a per-message basis by setting `Message.Topic`.

```go
w := &kafka.Writer{
	Addr: kafka.TCP("localhost:9092", "localhost:9093", "localhost:9094"),
	// NOTE: When Topic is not defined here, each Message must define it instead.
	Balancer: &kafka.LeastBytes{},
}

err := w.WriteMessages(context.Background(),
	// NOTE: Each Message has Topic defined, otherwise an error is returned.
	kafka.Message{
		Topic: "topic-A",
		Key:   []byte("Key-A"),
		Value: []byte("Hello World!"),
	},
	kafka.Message{
		Topic: "topic-B",
		Key:   []byte("Key-B"),
		Value: []byte("One!"),
	},
	kafka.Message{
		Topic: "topic-C",
		Key:   []byte("Key-C"),
		Value: []byte("Two!"),
	},
)
if err != nil {
	log.Fatal("failed to write messages:", err)
}

if err := w.Close(); err != nil {
	log.Fatal("failed to close writer:", err)
}
```

**NOTE:** These two patterns are mutually exclusive: if you set `Writer.Topic`,
you must not also explicitly define `Message.Topic` on the messages you are
writing. The opposite applies when you do not define a topic for the writer.
The `Writer` will return an error if it detects this ambiguity.

### Compatibility with other clients

#### Sarama

If you're switching from Sarama and need/want to use the same algorithm for message partitioning, you can either use
the `kafka.Hash` balancer or the `kafka.ReferenceHash` balancer:

- `kafka.Hash` = `sarama.NewHashPartitioner`
- `kafka.ReferenceHash` = `sarama.NewReferenceHashPartitioner`

The `kafka.Hash` and `kafka.ReferenceHash` balancers would route messages to the same partitions that the two
aforementioned Sarama partitioners would route them to.

```go
w := &kafka.Writer{
	Addr:     kafka.TCP("localhost:9092", "localhost:9093", "localhost:9094"),
	Topic:    "topic-A",
	Balancer: &kafka.Hash{},
}
```

#### librdkafka and confluent-kafka-go

Use the `kafka.CRC32Balancer` balancer to get the same behaviour as librdkafka's
default `consistent_random` partition strategy.

```go
w := &kafka.Writer{
	Addr:     kafka.TCP("localhost:9092", "localhost:9093", "localhost:9094"),
	Topic:    "topic-A",
	Balancer: kafka.CRC32Balancer{},
}
```

#### Java

Use the `kafka.Murmur2Balancer` balancer to get the same behaviour as the canonical
Java client's default partitioner. Note: the Java class allows you to directly specify
the partition, which is not permitted by this balancer.

```go
w := &kafka.Writer{
	Addr:     kafka.TCP("localhost:9092", "localhost:9093", "localhost:9094"),
	Topic:    "topic-A",
	Balancer: kafka.Murmur2Balancer{},
}
```

### Compression

Compression can be enabled on the `Writer` by setting the `Compression` field:

```go
w := &kafka.Writer{
	Addr:        kafka.TCP("localhost:9092", "localhost:9093", "localhost:9094"),
	Topic:       "topic-A",
	Compression: kafka.Snappy,
}
```

The `Reader` will determine if the consumed messages are compressed by
examining the message attributes, so no configuration is needed to read
compressed messages.

_Note: in versions prior to 0.4, programs had to import compression packages to
install codecs and support reading compressed messages from kafka, so that the
package for each expected codec was loaded correctly. This is no longer the
case, and imports of the compression packages are now no-ops._

## TLS Support

For a bare-bones `Conn` type, or in the `Reader`/`Writer` configs, you can
specify a dialer option for TLS support. If the `TLS` field is `nil`, it will
not connect with TLS.
_Note:_ connecting to a Kafka cluster with TLS enabled without configuring TLS
on the `Conn`/`Reader`/`Writer` can manifest as opaque `io.ErrUnexpectedEOF`
errors.

### Connection

```go
dialer := &kafka.Dialer{
    Timeout:   10 * time.Second,
    DualStack: true,
    TLS:       &tls.Config{...tls config...},
}

conn, err := dialer.DialContext(ctx, "tcp", "localhost:9093")
```

### Reader

```go
dialer := &kafka.Dialer{
    Timeout:   10 * time.Second,
    DualStack: true,
    TLS:       &tls.Config{...tls config...},
}

r := kafka.NewReader(kafka.ReaderConfig{
    Brokers: []string{"localhost:9092", "localhost:9093", "localhost:9094"},
    GroupID: "consumer-group-id",
    Topic:   "topic-A",
    Dialer:  dialer,
})
```

### Writer

Direct Writer creation:

```go
w := kafka.Writer{
    Addr:     kafka.TCP("localhost:9092", "localhost:9093", "localhost:9094"),
    Topic:    "topic-A",
    Balancer: &kafka.Hash{},
    Transport: &kafka.Transport{
        TLS: &tls.Config{},
    },
}
```

Using `kafka.NewWriter`:

```go
dialer := &kafka.Dialer{
    Timeout:   10 * time.Second,
    DualStack: true,
    TLS:       &tls.Config{...tls config...},
}

w := kafka.NewWriter(kafka.WriterConfig{
    Brokers:  []string{"localhost:9092", "localhost:9093", "localhost:9094"},
    Topic:    "topic-A",
    Balancer: &kafka.Hash{},
    Dialer:   dialer,
})
```

Note that `kafka.NewWriter` and `kafka.WriterConfig` are deprecated and will be removed in a future release.

## SASL Support

You can specify an option on the `Dialer` to use SASL authentication. The `Dialer` can be used directly to open a `Conn` or it can be passed to a `Reader` or `Writer` via their respective configs. If the `SASLMechanism` field is `nil`, it will not authenticate with SASL.

### SASL Authentication Types

#### [Plain](https://godoc.org/github.com/QuangHoangHao/kafka-go/sasl/plain#Mechanism)

```go
mechanism := plain.Mechanism{
    Username: "username",
    Password: "password",
}
```

#### [SCRAM](https://godoc.org/github.com/QuangHoangHao/kafka-go/sasl/scram#Mechanism)

```go
mechanism, err := scram.Mechanism(scram.SHA512, "username", "password")
if err != nil {
    panic(err)
}
```

### Connection

```go
mechanism, err := scram.Mechanism(scram.SHA512, "username", "password")
if err != nil {
    panic(err)
}

dialer := &kafka.Dialer{
    Timeout:       10 * time.Second,
    DualStack:     true,
    SASLMechanism: mechanism,
}

conn, err := dialer.DialContext(ctx, "tcp", "localhost:9093")
```

### Reader

```go
mechanism, err := scram.Mechanism(scram.SHA512, "username", "password")
if err != nil {
    panic(err)
}

dialer := &kafka.Dialer{
    Timeout:       10 * time.Second,
    DualStack:     true,
    SASLMechanism: mechanism,
}

r := kafka.NewReader(kafka.ReaderConfig{
    Brokers: []string{"localhost:9092", "localhost:9093", "localhost:9094"},
    GroupID: "consumer-group-id",
    Topic:   "topic-A",
    Dialer:  dialer,
})
```

### Writer

```go
mechanism, err := scram.Mechanism(scram.SHA512, "username", "password")
if err != nil {
    panic(err)
}

// Transports are responsible for managing connection pools and other resources;
// it's generally best to create a few of these and share them across your
// application.
sharedTransport := &kafka.Transport{
    SASL: mechanism,
}

w := kafka.Writer{
    Addr:      kafka.TCP("localhost:9092", "localhost:9093", "localhost:9094"),
    Topic:     "topic-A",
    Balancer:  &kafka.Hash{},
    Transport: sharedTransport,
}
```

### Client

```go
mechanism, err := scram.Mechanism(scram.SHA512, "username", "password")
if err != nil {
    panic(err)
}

// Transports are responsible for managing connection pools and other resources;
// it's generally best to create a few of these and share them across your
// application.
sharedTransport := &kafka.Transport{
    SASL: mechanism,
}

client := &kafka.Client{
    Addr:      kafka.TCP("localhost:9092", "localhost:9093", "localhost:9094"),
    Timeout:   10 * time.Second,
    Transport: sharedTransport,
}
```

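The `Client` can then issue Kafka API requests directly. As a hedged sketch,
here is a metadata request using the client's `Metadata` method (the fields
shown are illustrative):

```go
// request cluster metadata through the authenticated transport
meta, err := client.Metadata(context.Background(), &kafka.MetadataRequest{
    Addr: client.Addr,
})
if err != nil {
    panic(err)
}

// print the names of the topics reported by the cluster
for _, topic := range meta.Topics {
    fmt.Println(topic.Name)
}
```
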
#### Reading all messages within a time range

```go
startTime := time.Now().Add(-time.Hour)
endTime := time.Now()
batchSize := int(10e6) // 10MB

r := kafka.NewReader(kafka.ReaderConfig{
    Brokers:   []string{"localhost:9092", "localhost:9093", "localhost:9094"},
    Topic:     "my-topic1",
    Partition: 0,
    MinBytes:  batchSize,
    MaxBytes:  batchSize,
})

r.SetOffsetAt(context.Background(), startTime)

for {
    m, err := r.ReadMessage(context.Background())

    if err != nil {
        break
    }
    if m.Time.After(endTime) {
        break
    }
    // TODO: process message
    fmt.Printf("message at offset %d: %s = %s\n", m.Offset, string(m.Key), string(m.Value))
}

if err := r.Close(); err != nil {
    log.Fatal("failed to close reader:", err)
}
```

## Logging

For visibility into the operations of the Reader/Writer types, configure a logger on creation.

### Reader

```go
func logf(msg string, a ...interface{}) {
	fmt.Printf(msg, a...)
	fmt.Println()
}

r := kafka.NewReader(kafka.ReaderConfig{
	Brokers:     []string{"localhost:9092", "localhost:9093", "localhost:9094"},
	Topic:       "my-topic1",
	Partition:   0,
	Logger:      kafka.LoggerFunc(logf),
	ErrorLogger: kafka.LoggerFunc(logf),
})
```

### Writer

```go
func logf(msg string, a ...interface{}) {
	fmt.Printf(msg, a...)
	fmt.Println()
}

w := &kafka.Writer{
	Addr:        kafka.TCP("localhost:9092"),
	Topic:       "topic",
	Logger:      kafka.LoggerFunc(logf),
	ErrorLogger: kafka.LoggerFunc(logf),
}
```

## Testing

Subtle behavior changes in later Kafka versions have caused some historical tests to break. If you are running against Kafka 2.3.1 or later, export the `KAFKA_SKIP_NETTEST=1` environment variable to skip those tests.

Run Kafka locally in docker:

```bash
docker-compose up -d
```

Run the tests:

```bash
KAFKA_VERSION=2.3.1 \
  KAFKA_SKIP_NETTEST=1 \
  go test -race ./...
```