
# kafka-go [![CircleCI](https://circleci.com/gh/segmentio/kafka-go.svg?style=shield)](https://circleci.com/gh/segmentio/kafka-go) [![Go Report Card](https://goreportcard.com/badge/github.com/segmentio/kafka-go)](https://goreportcard.com/report/github.com/segmentio/kafka-go) [![GoDoc](https://godoc.org/github.com/segmentio/kafka-go?status.svg)](https://godoc.org/github.com/segmentio/kafka-go)

> [!IMPORTANT]
> This library is instrumented with [Streamdal's Go SDK](https://github.com/streamdal/streamdal/tree/main/sdks/go).
>
> Refer to [README.STREAMDAL.md](README.STREAMDAL.md) for more information.

## Motivations

We rely on both Go and Kafka a lot at Segment. Unfortunately, the state of the Go
client libraries for Kafka at the time of this writing was not ideal. The available
options were:

- [sarama](https://github.com/Shopify/sarama), which is by far the most popular
but is quite difficult to work with. It is poorly documented, the API exposes
low level concepts of the Kafka protocol, and it doesn't support recent Go features
like [contexts](https://golang.org/pkg/context/). It also passes all values as
pointers which causes large numbers of dynamic memory allocations, more frequent
garbage collections, and higher memory usage.

- [confluent-kafka-go](https://github.com/confluentinc/confluent-kafka-go) is a
cgo based wrapper around [librdkafka](https://github.com/edenhill/librdkafka),
which means it introduces a dependency to a C library on all Go code that uses
the package. It has much better documentation than sarama but still lacks support
for Go contexts.

- [goka](https://github.com/lovoo/goka) is a more recent Kafka client for Go
which focuses on a specific usage pattern. It provides abstractions for using Kafka
as a message passing bus between services rather than an ordered log of events, but
this is not the typical use case of Kafka for us at Segment. The package also
depends on sarama for all interactions with Kafka.

This is where `kafka-go` comes into play. It provides both low and high level
APIs for interacting with Kafka, mirroring concepts and implementing interfaces of
the Go standard library to make it easy to use and integrate with existing
software.

#### Note:

In order to better align with our newly adopted Code of Conduct, the kafka-go
project has renamed its default branch to `main`. For the full details of our
Code of Conduct, see [this](./CODE_OF_CONDUCT.md) document.

## Kafka versions

`kafka-go` is currently tested with Kafka versions 0.10.1.0 to 2.7.1.
While it should also be compatible with later versions, newer features available
in the Kafka API may not yet be implemented in the client.

## Go versions

`kafka-go` requires Go version 1.15 or later.

## Connection [![GoDoc](https://godoc.org/github.com/segmentio/kafka-go?status.svg)](https://godoc.org/github.com/segmentio/kafka-go#Conn)

The `Conn` type is the core of the `kafka-go` package. It wraps around a raw
network connection to expose a low-level API to a Kafka server.

Here are some examples showing typical use of a connection object:
```go
// to produce messages
topic := "my-topic"
partition := 0

conn, err := kafka.DialLeader(context.Background(), "tcp", "localhost:9092", topic, partition)
if err != nil {
    log.Fatal("failed to dial leader:", err)
}

conn.SetWriteDeadline(time.Now().Add(10*time.Second))
_, err = conn.WriteMessages(
    kafka.Message{Value: []byte("one!")},
    kafka.Message{Value: []byte("two!")},
    kafka.Message{Value: []byte("three!")},
)
if err != nil {
    log.Fatal("failed to write messages:", err)
}

if err := conn.Close(); err != nil {
    log.Fatal("failed to close connection:", err)
}
```
```go
// to consume messages
topic := "my-topic"
partition := 0

conn, err := kafka.DialLeader(context.Background(), "tcp", "localhost:9092", topic, partition)
if err != nil {
    log.Fatal("failed to dial leader:", err)
}

conn.SetReadDeadline(time.Now().Add(10*time.Second))
batch := conn.ReadBatch(10e3, 1e6) // fetch 10KB min, 1MB max

b := make([]byte, 10e3) // 10KB max per message
for {
    n, err := batch.Read(b)
    if err != nil {
        break
    }
    fmt.Println(string(b[:n]))
}

if err := batch.Close(); err != nil {
    log.Fatal("failed to close batch:", err)
}

if err := conn.Close(); err != nil {
    log.Fatal("failed to close connection:", err)
}
```

### To Create Topics

By default, Kafka has `auto.create.topics.enable='true'` (`KAFKA_CFG_AUTO_CREATE_TOPICS_ENABLE='true'` in the bitnami/kafka docker image). If this value is set to `'true'`, topics will be created as a side effect of `kafka.DialLeader` like so:
```go
// to create topics when auto.create.topics.enable='true'
conn, err := kafka.DialLeader(context.Background(), "tcp", "localhost:9092", "my-topic", 0)
if err != nil {
    panic(err.Error())
}
```

If `auto.create.topics.enable='false'` then you will need to create topics explicitly like so:
```go
// to create topics when auto.create.topics.enable='false'
topic := "my-topic"

conn, err := kafka.Dial("tcp", "localhost:9092")
if err != nil {
    panic(err.Error())
}
defer conn.Close()

controller, err := conn.Controller()
if err != nil {
    panic(err.Error())
}
var controllerConn *kafka.Conn
controllerConn, err = kafka.Dial("tcp", net.JoinHostPort(controller.Host, strconv.Itoa(controller.Port)))
if err != nil {
    panic(err.Error())
}
defer controllerConn.Close()

topicConfigs := []kafka.TopicConfig{
    {
        Topic:             topic,
        NumPartitions:     1,
        ReplicationFactor: 1,
    },
}

err = controllerConn.CreateTopics(topicConfigs...)
if err != nil {
    panic(err.Error())
}
```

### To Connect To Leader Via a Non-leader Connection
```go
// to connect to the kafka leader via an existing non-leader connection rather than using DialLeader
conn, err := kafka.Dial("tcp", "localhost:9092")
if err != nil {
    panic(err.Error())
}
defer conn.Close()
controller, err := conn.Controller()
if err != nil {
    panic(err.Error())
}
var connLeader *kafka.Conn
connLeader, err = kafka.Dial("tcp", net.JoinHostPort(controller.Host, strconv.Itoa(controller.Port)))
if err != nil {
    panic(err.Error())
}
defer connLeader.Close()
```

### To list topics
```go
conn, err := kafka.Dial("tcp", "localhost:9092")
if err != nil {
    panic(err.Error())
}
defer conn.Close()

partitions, err := conn.ReadPartitions()
if err != nil {
    panic(err.Error())
}

m := map[string]struct{}{}

for _, p := range partitions {
    m[p.Topic] = struct{}{}
}
for k := range m {
    fmt.Println(k)
}
```

Because it is low level, the `Conn` type turns out to be a great building block
for higher level abstractions, like the `Reader` for example.

## Reader [![GoDoc](https://godoc.org/github.com/segmentio/kafka-go?status.svg)](https://godoc.org/github.com/segmentio/kafka-go#Reader)

A `Reader` is another concept exposed by the `kafka-go` package, which intends
to make it simpler to implement the typical use case of consuming from a single
topic-partition pair.
A `Reader` also automatically handles reconnections and offset management, and
exposes an API that supports asynchronous cancellations and timeouts using Go
contexts.
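
Because every blocking call on a `Reader` accepts a context, a per-read timeout
is just a matter of deriving one. A minimal sketch, assuming `r` is a configured
`*kafka.Reader` like the ones shown below:

```go
// give each read up to 5 seconds; ReadMessage returns the context's
// error if the deadline expires before a message arrives
ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
defer cancel()

m, err := r.ReadMessage(ctx)
if err != nil {
    log.Fatal("failed to read message:", err)
}
fmt.Printf("message at offset %d: %s = %s\n", m.Offset, string(m.Key), string(m.Value))
```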

Note that it is important to call `Close()` on a `Reader` when a process exits.
The kafka server needs a graceful disconnect to stop it from continuing to
attempt to send messages to the connected clients. The example below will not
call `Close()` if the process is terminated with SIGINT (ctrl-c at the shell) or
SIGTERM (as `docker stop` or a Kubernetes restart does). This can result in a
delay when a new reader on the same topic connects (e.g. new process started
or new container running). Use a `signal.Notify` handler to close the reader on
process shutdown, as sketched after the example below.

```go
// make a new reader that consumes from topic-A, partition 0, at offset 42
r := kafka.NewReader(kafka.ReaderConfig{
    Brokers:   []string{"localhost:9092", "localhost:9093", "localhost:9094"},
    Topic:     "topic-A",
    Partition: 0,
    MaxBytes:  10e6, // 10MB
})
r.SetOffset(42)

for {
    m, err := r.ReadMessage(context.Background())
    if err != nil {
        break
    }
    fmt.Printf("message at offset %d: %s = %s\n", m.Offset, string(m.Key), string(m.Value))
}

if err := r.Close(); err != nil {
    log.Fatal("failed to close reader:", err)
}
```
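
A minimal shutdown hook along these lines closes the reader on SIGINT or SIGTERM
(a sketch: it assumes the `os`, `os/signal` and `syscall` packages are imported,
and the exact wiring will vary per application):

```go
sigs := make(chan os.Signal, 1)
signal.Notify(sigs, syscall.SIGINT, syscall.SIGTERM)

go func() {
    <-sigs
    // closing the reader unblocks any pending ReadMessage call,
    // which then returns an error and lets the read loop exit
    if err := r.Close(); err != nil {
        log.Fatal("failed to close reader:", err)
    }
}()
```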

### Consumer Groups

```kafka-go``` also supports Kafka consumer groups, including broker-managed offsets.
To enable consumer groups, simply specify the GroupID in the ReaderConfig.

ReadMessage automatically commits offsets when using consumer groups.

```go
// make a new reader that consumes from topic-A
r := kafka.NewReader(kafka.ReaderConfig{
    Brokers:  []string{"localhost:9092", "localhost:9093", "localhost:9094"},
    GroupID:  "consumer-group-id",
    Topic:    "topic-A",
    MaxBytes: 10e6, // 10MB
})

for {
    m, err := r.ReadMessage(context.Background())
    if err != nil {
        break
    }
    fmt.Printf("message at topic/partition/offset %v/%v/%v: %s = %s\n", m.Topic, m.Partition, m.Offset, string(m.Key), string(m.Value))
}

if err := r.Close(); err != nil {
    log.Fatal("failed to close reader:", err)
}
```

There are a number of limitations when using consumer groups:

* ```(*Reader).SetOffset``` will return an error when GroupID is set
* ```(*Reader).Offset``` will always return ```-1``` when GroupID is set
* ```(*Reader).Lag``` will always return ```-1``` when GroupID is set
* ```(*Reader).ReadLag``` will return an error when GroupID is set
* ```(*Reader).Stats``` will return a partition of ```-1``` when GroupID is set

### Explicit Commits

```kafka-go``` also supports explicit commits. Instead of calling ```ReadMessage```,
call ```FetchMessage``` followed by ```CommitMessages```.

```go
ctx := context.Background()
for {
    m, err := r.FetchMessage(ctx)
    if err != nil {
        break
    }
    fmt.Printf("message at topic/partition/offset %v/%v/%v: %s = %s\n", m.Topic, m.Partition, m.Offset, string(m.Key), string(m.Value))
    if err := r.CommitMessages(ctx, m); err != nil {
        log.Fatal("failed to commit messages:", err)
    }
}
```

When committing messages in consumer groups, the message with the highest offset
for a given topic/partition determines the value of the committed offset for
that partition. For example, if messages at offsets 1, 2, and 3 of a single
partition were retrieved by calls to `FetchMessage`, calling `CommitMessages`
with the message at offset 3 will also result in committing the messages at
offsets 1 and 2 for that partition.
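
Because of this, a consumer that processes several messages before committing
only needs a single `CommitMessages` call covering the whole batch. A sketch,
assuming `r` is a consumer-group `*kafka.Reader` as above:

```go
ctx := context.Background()
msgs := make([]kafka.Message, 0, 100)

for len(msgs) < cap(msgs) {
    m, err := r.FetchMessage(ctx)
    if err != nil {
        break
    }
    // process m here before buffering it for the commit...
    msgs = append(msgs, m)
}

// one commit covers every fetched message: committing the highest
// offset per partition implicitly commits the earlier offsets too
if err := r.CommitMessages(ctx, msgs...); err != nil {
    log.Fatal("failed to commit messages:", err)
}
```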

### Managing Commits

By default, CommitMessages will synchronously commit offsets to Kafka. For
improved performance, you can instead periodically commit offsets to Kafka
by setting CommitInterval on the ReaderConfig.

```go
// make a new reader that consumes from topic-A
r := kafka.NewReader(kafka.ReaderConfig{
    Brokers:        []string{"localhost:9092", "localhost:9093", "localhost:9094"},
    GroupID:        "consumer-group-id",
    Topic:          "topic-A",
    MaxBytes:       10e6,        // 10MB
    CommitInterval: time.Second, // flushes commits to Kafka every second
})
```

## Writer [![GoDoc](https://godoc.org/github.com/segmentio/kafka-go?status.svg)](https://godoc.org/github.com/segmentio/kafka-go#Writer)

To produce messages to Kafka, a program may use the low-level `Conn` API, but
the package also provides a higher level `Writer` type which is more appropriate
to use in most cases as it provides additional features:

- Automatic retries and reconnections on errors.
- Configurable distribution of messages across available partitions.
- Synchronous or asynchronous writes of messages to Kafka.
- Asynchronous cancellation using contexts.
- Flushing of pending messages on close to support graceful shutdowns.
- Creation of a missing topic before publishing a message. *Note:* this was the default behaviour up to version `v0.4.30`.

```go
// make a writer that produces to topic-A, using the least-bytes distribution
w := &kafka.Writer{
	Addr:     kafka.TCP("localhost:9092", "localhost:9093", "localhost:9094"),
	Topic:    "topic-A",
	Balancer: &kafka.LeastBytes{},
}

err := w.WriteMessages(context.Background(),
	kafka.Message{
		Key:   []byte("Key-A"),
		Value: []byte("Hello World!"),
	},
	kafka.Message{
		Key:   []byte("Key-B"),
		Value: []byte("One!"),
	},
	kafka.Message{
		Key:   []byte("Key-C"),
		Value: []byte("Two!"),
	},
)
if err != nil {
    log.Fatal("failed to write messages:", err)
}

if err := w.Close(); err != nil {
    log.Fatal("failed to close writer:", err)
}
```

### Missing topic creation before publication

```go
// Make a writer that publishes messages to topic-A.
// The topic will be created if it is missing.
w := &kafka.Writer{
    Addr:                   kafka.TCP("localhost:9092", "localhost:9093", "localhost:9094"),
    Topic:                  "topic-A",
    AllowAutoTopicCreation: true,
}

messages := []kafka.Message{
    {
        Key:   []byte("Key-A"),
        Value: []byte("Hello World!"),
    },
    {
        Key:   []byte("Key-B"),
        Value: []byte("One!"),
    },
    {
        Key:   []byte("Key-C"),
        Value: []byte("Two!"),
    },
}

var err error
const retries = 3
for i := 0; i < retries; i++ {
    ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)

    // attempt to create the topic prior to publishing the message
    err = w.WriteMessages(ctx, messages...)
    cancel() // release the per-attempt context instead of deferring it inside the loop
    if errors.Is(err, kafka.LeaderNotAvailable) || errors.Is(err, context.DeadlineExceeded) {
        time.Sleep(time.Millisecond * 250)
        continue
    }

    if err != nil {
        log.Fatalf("unexpected error %v", err)
    }
    break
}

if err := w.Close(); err != nil {
    log.Fatal("failed to close writer:", err)
}
```

### Writing to multiple topics

Normally, the `WriterConfig.Topic` is used to initialize a single-topic writer.
If you omit it, you can instead define the topic on a per-message basis by
setting `Message.Topic`.

```go
w := &kafka.Writer{
	Addr: kafka.TCP("localhost:9092", "localhost:9093", "localhost:9094"),
	// NOTE: When Topic is not defined here, each Message must define it instead.
	Balancer: &kafka.LeastBytes{},
}

err := w.WriteMessages(context.Background(),
	// NOTE: Each Message has Topic defined, otherwise an error is returned.
	kafka.Message{
		Topic: "topic-A",
		Key:   []byte("Key-A"),
		Value: []byte("Hello World!"),
	},
	kafka.Message{
		Topic: "topic-B",
		Key:   []byte("Key-B"),
		Value: []byte("One!"),
	},
	kafka.Message{
		Topic: "topic-C",
		Key:   []byte("Key-C"),
		Value: []byte("Two!"),
	},
)
if err != nil {
    log.Fatal("failed to write messages:", err)
}

if err := w.Close(); err != nil {
    log.Fatal("failed to close writer:", err)
}
```

**NOTE:** These two patterns are mutually exclusive: if you set `Writer.Topic`,
you must not also explicitly define `Message.Topic` on the messages you are
writing. The opposite applies when you do not define a topic for the writer.
The `Writer` will return an error if it detects this ambiguity.

### Compatibility with other clients

#### Sarama

If you're switching from Sarama and need/want to use the same algorithm for message partitioning, you can either use
the `kafka.Hash` balancer or the `kafka.ReferenceHash` balancer:
* `kafka.Hash` = `sarama.NewHashPartitioner`
* `kafka.ReferenceHash` = `sarama.NewReferenceHashPartitioner`

The `kafka.Hash` and `kafka.ReferenceHash` balancers would route messages to the same partitions that the two
aforementioned Sarama partitioners would route them to.

```go
w := &kafka.Writer{
	Addr:     kafka.TCP("localhost:9092", "localhost:9093", "localhost:9094"),
	Topic:    "topic-A",
	Balancer: &kafka.Hash{},
}
```

#### librdkafka and confluent-kafka-go

Use the ```kafka.CRC32Balancer``` balancer to get the same behaviour as librdkafka's
default ```consistent_random``` partition strategy.

```go
w := &kafka.Writer{
	Addr:     kafka.TCP("localhost:9092", "localhost:9093", "localhost:9094"),
	Topic:    "topic-A",
	Balancer: kafka.CRC32Balancer{},
}
```

#### Java

Use the ```kafka.Murmur2Balancer``` balancer to get the same behaviour as the canonical
Java client's default partitioner. Note: the Java class allows you to directly specify
the partition, which is not permitted here.

```go
w := &kafka.Writer{
	Addr:     kafka.TCP("localhost:9092", "localhost:9093", "localhost:9094"),
	Topic:    "topic-A",
	Balancer: kafka.Murmur2Balancer{},
}
```

### Compression

Compression can be enabled on the `Writer` by setting the `Compression` field:

```go
w := &kafka.Writer{
	Addr:        kafka.TCP("localhost:9092", "localhost:9093", "localhost:9094"),
	Topic:       "topic-A",
	Compression: kafka.Snappy,
}
```

The `Reader` will determine if the consumed messages are compressed by
examining the message attributes. However, the package(s) for all expected
codecs must be imported so that they get loaded correctly.

_Note: in versions prior to 0.4, programs had to import compression packages to
install codecs and support reading compressed messages from kafka. This is no
longer the case and importing the compression packages is now a no-op._

## TLS Support

For a bare-bones `Conn` type or in the `Reader`/`Writer` configs, you can specify a dialer option for TLS support. If the TLS field is nil, it will not connect with TLS.

*Note:* Connecting to a Kafka cluster with TLS enabled without configuring TLS on the `Conn`/`Reader`/`Writer` can manifest in opaque `io.ErrUnexpectedEOF` errors.

### Connection

```go
dialer := &kafka.Dialer{
    Timeout:   10 * time.Second,
    DualStack: true,
    TLS:       &tls.Config{ /* ...tls config... */ },
}

conn, err := dialer.DialContext(ctx, "tcp", "localhost:9093")
```

### Reader

```go
dialer := &kafka.Dialer{
    Timeout:   10 * time.Second,
    DualStack: true,
    TLS:       &tls.Config{ /* ...tls config... */ },
}

r := kafka.NewReader(kafka.ReaderConfig{
    Brokers: []string{"localhost:9092", "localhost:9093", "localhost:9094"},
    GroupID: "consumer-group-id",
    Topic:   "topic-A",
    Dialer:  dialer,
})
```

### Writer

Direct Writer creation

```go
w := kafka.Writer{
    Addr:      kafka.TCP("localhost:9092", "localhost:9093", "localhost:9094"),
    Topic:     "topic-A",
    Balancer:  &kafka.Hash{},
    Transport: &kafka.Transport{
        TLS: &tls.Config{},
    },
}
```

Using `kafka.NewWriter`

```go
dialer := &kafka.Dialer{
    Timeout:   10 * time.Second,
    DualStack: true,
    TLS:       &tls.Config{ /* ...tls config... */ },
}

w := kafka.NewWriter(kafka.WriterConfig{
	Brokers:  []string{"localhost:9092", "localhost:9093", "localhost:9094"},
	Topic:    "topic-A",
	Balancer: &kafka.Hash{},
	Dialer:   dialer,
})
```

Note that `kafka.NewWriter` and `kafka.WriterConfig` are deprecated and will be removed in a future release.
## SASL Support

You can specify an option on the `Dialer` to use SASL authentication. The `Dialer` can be used directly to open a `Conn` or it can be passed to a `Reader` or `Writer` via their respective configs. If the `SASLMechanism` field is `nil`, it will not authenticate with SASL.

### SASL Authentication Types

#### [Plain](https://godoc.org/github.com/segmentio/kafka-go/sasl/plain#Mechanism)
```go
mechanism := plain.Mechanism{
    Username: "username",
    Password: "password",
}
```

#### [SCRAM](https://godoc.org/github.com/segmentio/kafka-go/sasl/scram#Mechanism)
```go
mechanism, err := scram.Mechanism(scram.SHA512, "username", "password")
if err != nil {
    panic(err)
}
```

### Connection

```go
mechanism, err := scram.Mechanism(scram.SHA512, "username", "password")
if err != nil {
    panic(err)
}

dialer := &kafka.Dialer{
    Timeout:       10 * time.Second,
    DualStack:     true,
    SASLMechanism: mechanism,
}

conn, err := dialer.DialContext(ctx, "tcp", "localhost:9093")
```

### Reader

```go
mechanism, err := scram.Mechanism(scram.SHA512, "username", "password")
if err != nil {
    panic(err)
}

dialer := &kafka.Dialer{
    Timeout:       10 * time.Second,
    DualStack:     true,
    SASLMechanism: mechanism,
}

r := kafka.NewReader(kafka.ReaderConfig{
    Brokers: []string{"localhost:9092", "localhost:9093", "localhost:9094"},
    GroupID: "consumer-group-id",
    Topic:   "topic-A",
    Dialer:  dialer,
})
```

### Writer

```go
mechanism, err := scram.Mechanism(scram.SHA512, "username", "password")
if err != nil {
    panic(err)
}

// Transports are responsible for managing connection pools and other resources;
// it's generally best to create a few of these and share them across your
// application.
sharedTransport := &kafka.Transport{
    SASL: mechanism,
}

w := kafka.Writer{
	Addr:      kafka.TCP("localhost:9092", "localhost:9093", "localhost:9094"),
	Topic:     "topic-A",
	Balancer:  &kafka.Hash{},
	Transport: sharedTransport,
}
```

### Client

```go
mechanism, err := scram.Mechanism(scram.SHA512, "username", "password")
if err != nil {
    panic(err)
}

// Transports are responsible for managing connection pools and other resources;
// it's generally best to create a few of these and share them across your
// application.
sharedTransport := &kafka.Transport{
    SASL: mechanism,
}

client := &kafka.Client{
    Addr:      kafka.TCP("localhost:9092", "localhost:9093", "localhost:9094"),
    Timeout:   10 * time.Second,
    Transport: sharedTransport,
}
```
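
The `Client` exposes request/response style methods for the Kafka protocol. As a
minimal sketch of how the client configured above might be used, the following
lists topics via a `Metadata` request (which fields are worth inspecting depends
on your use case):

```go
resp, err := client.Metadata(context.Background(), &kafka.MetadataRequest{
    Addr: client.Addr,
})
if err != nil {
    panic(err)
}
// print the name of every topic known to the cluster
for _, t := range resp.Topics {
    fmt.Println(t.Name)
}
```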

#### Reading all messages within a time range

```go
startTime := time.Now().Add(-time.Hour)
endTime := time.Now()
batchSize := int(10e6) // 10MB

r := kafka.NewReader(kafka.ReaderConfig{
    Brokers:   []string{"localhost:9092", "localhost:9093", "localhost:9094"},
    Topic:     "my-topic1",
    Partition: 0,
    MaxBytes:  batchSize,
})

r.SetOffsetAt(context.Background(), startTime)

for {
    m, err := r.ReadMessage(context.Background())

    if err != nil {
        break
    }
    if m.Time.After(endTime) {
        break
    }
    // TODO: process message
    fmt.Printf("message at offset %d: %s = %s\n", m.Offset, string(m.Key), string(m.Value))
}

if err := r.Close(); err != nil {
    log.Fatal("failed to close reader:", err)
}
```

## Logging

For visibility into the operations of the Reader/Writer types, configure a logger on creation.

### Reader

```go
func logf(msg string, a ...interface{}) {
	fmt.Printf(msg, a...)
	fmt.Println()
}

r := kafka.NewReader(kafka.ReaderConfig{
	Brokers:     []string{"localhost:9092", "localhost:9093", "localhost:9094"},
	Topic:       "my-topic1",
	Partition:   0,
	Logger:      kafka.LoggerFunc(logf),
	ErrorLogger: kafka.LoggerFunc(logf),
})
```

### Writer

```go
func logf(msg string, a ...interface{}) {
	fmt.Printf(msg, a...)
	fmt.Println()
}

w := &kafka.Writer{
	Addr:        kafka.TCP("localhost:9092"),
	Topic:       "topic",
	Logger:      kafka.LoggerFunc(logf),
	ErrorLogger: kafka.LoggerFunc(logf),
}
```

## Testing

Subtle behavior changes in later Kafka versions have caused some historical tests to break. If you are running against Kafka 2.3.1 or later, exporting the `KAFKA_SKIP_NETTEST=1` environment variable will skip those tests.

Run Kafka locally in docker

```bash
docker-compose up -d
```

Run tests

```bash
KAFKA_VERSION=2.3.1 \
  KAFKA_SKIP_NETTEST=1 \
  go test -race ./...
```

(or) to clean up the cached test results and run tests:
```bash
go clean -cache && make test
```