
     1  # kafka-go [![CircleCI](https://circleci.com/gh/rbisecke/kafka-go.svg?style=shield)](https://circleci.com/gh/rbisecke/kafka-go) [![Go Report Card](https://goreportcard.com/badge/github.com/rbisecke/kafka-go)](https://goreportcard.com/report/github.com/rbisecke/kafka-go) [![GoDoc](https://godoc.org/github.com/rbisecke/kafka-go?status.svg)](https://godoc.org/github.com/rbisecke/kafka-go)
     2  
     3  ## Motivations
     4  
     5  We rely on both Go and Kafka a lot at Segment. Unfortunately, the state of the Go
     6  client libraries for Kafka at the time of this writing was not ideal. The available
     7  options were:
     8  
- [sarama](https://github.com/Shopify/sarama), which is by far the most popular
but is quite difficult to work with. It is poorly documented, the API exposes
low-level concepts of the Kafka protocol, and it doesn't support recent Go features
like [contexts](https://golang.org/pkg/context/). It also passes all values as
pointers, which causes a large number of dynamic memory allocations, more frequent
garbage collections, and higher memory usage.
    15  
- [confluent-kafka-go](https://github.com/confluentinc/confluent-kafka-go) is a
cgo-based wrapper around [librdkafka](https://github.com/edenhill/librdkafka),
which means it introduces a dependency on a C library for all Go code that uses
the package. It has much better documentation than sarama but still lacks support
for Go contexts.
    21  
    22  - [goka](https://github.com/lovoo/goka) is a more recent Kafka client for Go
    23  which focuses on a specific usage pattern. It provides abstractions for using Kafka
    24  as a message passing bus between services rather than an ordered log of events, but
    25  this is not the typical use case of Kafka for us at Segment. The package also
    26  depends on sarama for all interactions with Kafka.
    27  
    28  This is where `kafka-go` comes into play. It provides both low and high level
    29  APIs for interacting with Kafka, mirroring concepts and implementing interfaces of
    30  the Go standard library to make it easy to use and integrate with existing
    31  software.
    32  
    33  #### Note:
    34  
In order to better align with our newly adopted Code of Conduct, the kafka-go project has renamed its default branch to `main`.
For the full details of our Code of Conduct, see [this](./CODE_OF_CONDUCT.md) document.
    37  
    38  ## Migrating to 0.4
    39  
    40  Version 0.4 introduces a few breaking changes to the repository structure which
    41  should have minimal impact on programs and should only manifest at compile time
    42  (the runtime behavior should remain unchanged).
    43  
* Programs no longer need to import compression packages in order to read
compressed messages from Kafka. All compression codecs are supported by default.
    46  
    47  * Programs that used the compression codecs directly must be adapted.
    48  Compression codecs are now exposed in the `compress` sub-package.
    49  
    50  * The experimental `kafka.Client` API has been updated and slightly modified:
    51  the `kafka.NewClient` function and `kafka.ClientConfig` type were removed.
    52  Programs now configure the client values directly through exported fields.
    53  
* The `kafka.(*Client).ConsumerOffsets` method is now deprecated (along with the
`kafka.TopicAndGroup` type) and will be removed when we release version 1.0.
Programs should use the `kafka.(*Client).OffsetFetch` API instead (see the sketch below).
    57  
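For illustration, here is a minimal sketch of the 0.4-style client configured through its exported fields, together with a hypothetical `OffsetFetch` call; the `OffsetFetchRequest` and response field names shown here are assumptions and worth verifying against the package documentation.

```go
// A sketch only: in 0.4 the client is configured directly through exported
// fields instead of kafka.NewClient / kafka.ClientConfig.
client := &kafka.Client{
    Addr:    kafka.TCP("localhost:9092"),
    Timeout: 10 * time.Second,
}

// Hypothetical OffsetFetch usage; the request and response field names are
// assumptions, check the package documentation.
res, err := client.OffsetFetch(context.Background(), &kafka.OffsetFetchRequest{
    GroupID: "consumer-group-id",
    Topics:  map[string][]int{"topic-A": {0}},
})
if err != nil {
    log.Fatal("failed to fetch offsets:", err)
}
for topic, partitions := range res.Topics {
    for _, p := range partitions {
        fmt.Printf("%s/%d: committed offset %d\n", topic, p.Partition, p.CommittedOffset)
    }
}
```
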
With 0.4, we know that we are starting to introduce a bit more complexity in the
code, but the plan is to eventually converge towards a simpler and more effective
API, allowing us to keep up with Kafka's ever-growing feature set and to bring
a more efficient implementation to programs depending on kafka-go.
    62  
    63  We truly appreciate everyone's input and contributions, which have made this
    64  project way more than what it was when we started it, and we're looking forward
to receiving more feedback on where we should take it.
    66  
    67  ## Kafka versions
    68  
`kafka-go` is currently compatible with Kafka versions from 0.10.1.0 to 2.1.0. While later versions should also work,
some features available from the Kafka API may not be implemented yet.
    71  
    72  ## Golang version
    73  
`kafka-go` requires Go version 1.15 or later. To use it with older versions of Go, use release [v0.2.5](https://github.com/rbisecke/kafka-go/releases/tag/v0.2.5).
    75  
    76  ## Connection [![GoDoc](https://godoc.org/github.com/rbisecke/kafka-go?status.svg)](https://godoc.org/github.com/rbisecke/kafka-go#Conn)
    77  
    78  The `Conn` type is the core of the `kafka-go` package. It wraps around a raw
    79  network connection to expose a low-level API to a Kafka server.
    80  
    81  Here are some examples showing typical use of a connection object:
    82  ```go
    83  // to produce messages
    84  topic := "my-topic"
    85  partition := 0
    86  
    87  conn, err := kafka.DialLeader(context.Background(), "tcp", "localhost:9092", topic, partition)
    88  if err != nil {
    89      log.Fatal("failed to dial leader:", err)
    90  }
    91  
    92  conn.SetWriteDeadline(time.Now().Add(10*time.Second))
    93  _, err = conn.WriteMessages(
    94      kafka.Message{Value: []byte("one!")},
    95      kafka.Message{Value: []byte("two!")},
    96      kafka.Message{Value: []byte("three!")},
    97  )
    98  if err != nil {
    99      log.Fatal("failed to write messages:", err)
   100  }
   101  
   102  if err := conn.Close(); err != nil {
   103      log.Fatal("failed to close writer:", err)
   104  }
   105  ```
   106  ```go
   107  // to consume messages
   108  topic := "my-topic"
   109  partition := 0
   110  
   111  conn, err := kafka.DialLeader(context.Background(), "tcp", "localhost:9092", topic, partition)
   112  if err != nil {
   113      log.Fatal("failed to dial leader:", err)
   114  }
   115  
   116  conn.SetReadDeadline(time.Now().Add(10*time.Second))
   117  batch := conn.ReadBatch(10e3, 1e6) // fetch 10KB min, 1MB max
   118  
   119  b := make([]byte, 10e3) // 10KB max per message
   120  for {
   121      n, err := batch.Read(b)
   122      if err != nil {
   123          break
   124      }
   125      fmt.Println(string(b[:n]))
   126  }
   127  
   128  if err := batch.Close(); err != nil {
   129      log.Fatal("failed to close batch:", err)
   130  }
   131  
   132  if err := conn.Close(); err != nil {
   133      log.Fatal("failed to close connection:", err)
   134  }
   135  ```
   136  
   137  ### To Create Topics
By default, Kafka has `auto.create.topics.enable='true'` (`KAFKA_AUTO_CREATE_TOPICS_ENABLE='true'` in the wurstmeister/kafka docker image). If this value is set to `'true'`, then topics will be created as a side effect of `kafka.DialLeader`, like so:
   139  ```go
   140  // to create topics when auto.create.topics.enable='true'
   141  conn, err := kafka.DialLeader(context.Background(), "tcp", "localhost:9092", "my-topic", 0)
   142  if err != nil {
   143      panic(err.Error())
   144  }
   145  ```
   146  
If `auto.create.topics.enable='false'`, then you will need to create topics explicitly, like so:
   148  ```go
   149  // to create topics when auto.create.topics.enable='false'
   150  topic := "my-topic"
   151  partition := 0
   152  
   153  conn, err := kafka.Dial("tcp", "localhost:9092")
   154  if err != nil {
   155      panic(err.Error())
   156  }
   157  defer conn.Close()
   158  
   159  controller, err := conn.Controller()
   160  if err != nil {
   161      panic(err.Error())
   162  }
   163  var controllerConn *kafka.Conn
   164  controllerConn, err = kafka.Dial("tcp", net.JoinHostPort(controller.Host, strconv.Itoa(controller.Port)))
   165  if err != nil {
   166      panic(err.Error())
   167  }
   168  defer controllerConn.Close()
   169  
   170  
   171  topicConfigs := []kafka.TopicConfig{
   172      kafka.TopicConfig{
   173          Topic:             topic,
   174          NumPartitions:     1,
   175          ReplicationFactor: 1,
   176      },
   177  }
   178  
   179  err = controllerConn.CreateTopics(topicConfigs...)
   180  if err != nil {
   181      panic(err.Error())
   182  }
   183  ```
   184  
   185  ### To Connect To Leader Via a Non-leader Connection
   186  ```go
   187  // to connect to the kafka leader via an existing non-leader connection rather than using DialLeader
   188  conn, err := kafka.Dial("tcp", "localhost:9092")
   189  if err != nil {
   190      panic(err.Error())
   191  }
   192  defer conn.Close()
   193  controller, err := conn.Controller()
   194  if err != nil {
   195      panic(err.Error())
   196  }
   197  var connLeader *kafka.Conn
   198  connLeader, err = kafka.Dial("tcp", net.JoinHostPort(controller.Host, strconv.Itoa(controller.Port)))
   199  if err != nil {
   200      panic(err.Error())
   201  }
   202  defer connLeader.Close()
   203  ```
   204  
   205  ### To list topics
   206  ```go
   207  conn, err := kafka.Dial("tcp", "localhost:9092")
   208  if err != nil {
   209      panic(err.Error())
   210  }
   211  defer conn.Close()
   212  
   213  partitions, err := conn.ReadPartitions()
   214  if err != nil {
   215      panic(err.Error())
   216  }
   217  
   218  m := map[string]struct{}{}
   219  
   220  for _, p := range partitions {
   221      m[p.Topic] = struct{}{}
   222  }
   223  for k := range m {
   224      fmt.Println(k)
   225  }
   226  ```
   227  
   228  
Because it is low-level, the `Conn` type turns out to be a great building block
for higher-level abstractions, such as the `Reader`.
   231  
   232  ## Reader [![GoDoc](https://godoc.org/github.com/rbisecke/kafka-go?status.svg)](https://godoc.org/github.com/rbisecke/kafka-go#Reader)
   233  
   234  A `Reader` is another concept exposed by the `kafka-go` package, which intends
   235  to make it simpler to implement the typical use case of consuming from a single
   236  topic-partition pair.
   237  A `Reader` also automatically handles reconnections and offset management, and
   238  exposes an API that supports asynchronous cancellations and timeouts using Go
   239  contexts.
   240  
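Because the `Reader` API takes a context, reads can be bounded with timeouts or canceled. A minimal sketch (assuming a reader `r` configured as in the example below):

```go
// Bound a single read with a 5 second timeout; ReadMessage returns an error
// (such as context.DeadlineExceeded) if no message arrives in time.
ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
defer cancel()

m, err := r.ReadMessage(ctx)
if err != nil {
    log.Fatal("failed to read message:", err)
}
fmt.Printf("message at offset %d: %s = %s\n", m.Offset, string(m.Key), string(m.Value))
```
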
Note that it is important to call `Close()` on a `Reader` when a process exits.
The Kafka server needs a graceful disconnect to stop it from continuing to
attempt to send messages to the connected clients. The given example will not
call `Close()` if the process is terminated with SIGINT (ctrl-c at the shell) or
SIGTERM (as `docker stop` or a Kubernetes restart does). This can result in a
delay when a new reader on the same topic connects (e.g. a new process started
or a new container running). Use a `signal.Notify` handler to close the reader on
process shutdown; a sketch of this is shown after the example below.
   249  
   250  ```go
   251  // make a new reader that consumes from topic-A, partition 0, at offset 42
   252  r := kafka.NewReader(kafka.ReaderConfig{
   253      Brokers:   []string{"localhost:9092"},
   254      Topic:     "topic-A",
   255      Partition: 0,
   256      MinBytes:  10e3, // 10KB
   257      MaxBytes:  10e6, // 10MB
   258  })
   259  r.SetOffset(42)
   260  
   261  for {
   262      m, err := r.ReadMessage(context.Background())
   263      if err != nil {
   264          break
   265      }
   266      fmt.Printf("message at offset %d: %s = %s\n", m.Offset, string(m.Key), string(m.Value))
   267  }
   268  
   269  if err := r.Close(); err != nil {
   270      log.Fatal("failed to close reader:", err)
   271  }
   272  ```
   273  
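As a sketch of the `signal.Notify` approach mentioned above (the channel and goroutine here are illustrative application code, not part of the package):

```go
// Close the reader on SIGINT/SIGTERM so the broker sees a graceful disconnect
// instead of waiting for the connection to time out.
sigs := make(chan os.Signal, 1)
signal.Notify(sigs, syscall.SIGINT, syscall.SIGTERM)

go func() {
    <-sigs
    if err := r.Close(); err != nil {
        log.Fatal("failed to close reader:", err)
    }
    os.Exit(0)
}()
```
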
   274  ### Consumer Groups
   275  
```kafka-go``` also supports Kafka consumer groups, including broker-managed offsets.
To enable consumer groups, simply specify the `GroupID` in the `ReaderConfig`.
   278  
   279  ReadMessage automatically commits offsets when using consumer groups.
   280  
   281  ```go
   282  // make a new reader that consumes from topic-A
   283  r := kafka.NewReader(kafka.ReaderConfig{
   284      Brokers:   []string{"localhost:9092"},
   285      GroupID:   "consumer-group-id",
   286      Topic:     "topic-A",
   287      MinBytes:  10e3, // 10KB
   288      MaxBytes:  10e6, // 10MB
   289  })
   290  
   291  for {
   292      m, err := r.ReadMessage(context.Background())
   293      if err != nil {
   294          break
   295      }
   296      fmt.Printf("message at topic/partition/offset %v/%v/%v: %s = %s\n", m.Topic, m.Partition, m.Offset, string(m.Key), string(m.Value))
   297  }
   298  
   299  if err := r.Close(); err != nil {
   300      log.Fatal("failed to close reader:", err)
   301  }
   302  ```
   303  
   304  There are a number of limitations when using consumer groups:
   305  
   306  * ```(*Reader).SetOffset``` will return an error when GroupID is set
   307  * ```(*Reader).Offset``` will always return ```-1``` when GroupID is set
   308  * ```(*Reader).Lag``` will always return ```-1``` when GroupID is set
   309  * ```(*Reader).ReadLag``` will return an error when GroupID is set
   310  * ```(*Reader).Stats``` will return a partition of ```-1``` when GroupID is set
   311  
   312  ### Explicit Commits
   313  
   314  ```kafka-go``` also supports explicit commits.  Instead of calling ```ReadMessage```,
   315  call ```FetchMessage``` followed by ```CommitMessages```.
   316  
   317  ```go
   318  ctx := context.Background()
   319  for {
   320      m, err := r.FetchMessage(ctx)
   321      if err != nil {
   322          break
   323      }
   324      fmt.Printf("message at topic/partition/offset %v/%v/%v: %s = %s\n", m.Topic, m.Partition, m.Offset, string(m.Key), string(m.Value))
   325      if err := r.CommitMessages(ctx, m); err != nil {
   326          log.Fatal("failed to commit messages:", err)
   327      }
   328  }
   329  ```
   330  
   331  When committing messages in consumer groups, the message with the highest offset
   332  for a given topic/partition determines the value of the committed offset for
   333  that partition. For example, if messages at offset 1, 2, and 3 of a single
partition were retrieved by calls to `FetchMessage`, calling `CommitMessages`
   335  with message offset 3 will also result in committing the messages at offsets 1
   336  and 2 for that partition.
   337  
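As an illustration, here is a minimal sketch (assuming a reader `r` with a `GroupID`, as above) that commits in batches; because only the highest offset per topic/partition matters, committing the fetched messages together is sufficient:

```go
ctx := context.Background()
const commitEvery = 100 // illustrative batch size

batch := make([]kafka.Message, 0, commitEvery)
for {
    m, err := r.FetchMessage(ctx)
    if err != nil {
        break
    }
    // TODO: process message
    batch = append(batch, m)
    if len(batch) == commitEvery {
        if err := r.CommitMessages(ctx, batch...); err != nil {
            log.Fatal("failed to commit messages:", err)
        }
        batch = batch[:0]
    }
}
```
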
   338  ### Managing Commits
   339  
By default, `CommitMessages` will synchronously commit offsets to Kafka. For
improved performance, you can instead periodically commit offsets to Kafka
by setting `CommitInterval` on the `ReaderConfig`.
   343  
   344  
   345  ```go
   346  // make a new reader that consumes from topic-A
   347  r := kafka.NewReader(kafka.ReaderConfig{
   348      Brokers:        []string{"localhost:9092"},
   349      GroupID:        "consumer-group-id",
   350      Topic:          "topic-A",
   351      MinBytes:       10e3, // 10KB
   352      MaxBytes:       10e6, // 10MB
   353      CommitInterval: time.Second, // flushes commits to Kafka every second
   354  })
   355  ```
   356  
   357  ## Writer [![GoDoc](https://godoc.org/github.com/rbisecke/kafka-go?status.svg)](https://godoc.org/github.com/rbisecke/kafka-go#Writer)
   358  
   359  To produce messages to Kafka, a program may use the low-level `Conn` API, but
   360  the package also provides a higher level `Writer` type which is more appropriate
   361  to use in most cases as it provides additional features:
   362  
   363  - Automatic retries and reconnections on errors.
   364  - Configurable distribution of messages across available partitions.
   365  - Synchronous or asynchronous writes of messages to Kafka.
   366  - Asynchronous cancellation using contexts.
   367  - Flushing of pending messages on close to support graceful shutdowns.
   368  
   369  ```go
   370  // make a writer that produces to topic-A, using the least-bytes distribution
   371  w := &kafka.Writer{
   372  	Addr:     kafka.TCP("localhost:9092"),
   373  	Topic:   "topic-A",
   374  	Balancer: &kafka.LeastBytes{},
   375  }
   376  
   377  err := w.WriteMessages(context.Background(),
   378  	kafka.Message{
   379  		Key:   []byte("Key-A"),
   380  		Value: []byte("Hello World!"),
   381  	},
   382  	kafka.Message{
   383  		Key:   []byte("Key-B"),
   384  		Value: []byte("One!"),
   385  	},
   386  	kafka.Message{
   387  		Key:   []byte("Key-C"),
   388  		Value: []byte("Two!"),
   389  	},
   390  )
   391  if err != nil {
   392      log.Fatal("failed to write messages:", err)
   393  }
   394  
   395  if err := w.Close(); err != nil {
   396      log.Fatal("failed to close writer:", err)
   397  }
   398  ```
   399  
   400  ### Writing to multiple topics
   401  
Normally, the `Writer.Topic` field is used to initialize a single-topic writer.
By leaving that field empty, you can instead define the topic on a per-message
basis by setting `Message.Topic`.
   405  
   406  ```go
   407  w := &kafka.Writer{
   408  	Addr:     kafka.TCP("localhost:9092"),
   409      // NOTE: When Topic is not defined here, each Message must define it instead.
   410  	Balancer: &kafka.LeastBytes{},
   411  }
   412  
   413  err := w.WriteMessages(context.Background(),
   414      // NOTE: Each Message has Topic defined, otherwise an error is returned.
   415  	kafka.Message{
   416          Topic: "topic-A",
   417  		Key:   []byte("Key-A"),
   418  		Value: []byte("Hello World!"),
   419  	},
   420  	kafka.Message{
   421          Topic: "topic-B",
   422  		Key:   []byte("Key-B"),
   423  		Value: []byte("One!"),
   424  	},
   425  	kafka.Message{
   426          Topic: "topic-C",
   427  		Key:   []byte("Key-C"),
   428  		Value: []byte("Two!"),
   429  	},
   430  )
   431  if err != nil {
   432      log.Fatal("failed to write messages:", err)
   433  }
   434  
   435  if err := w.Close(); err != nil {
   436      log.Fatal("failed to close writer:", err)
   437  }
   438  ```
   439  
**NOTE:** These two patterns are mutually exclusive: if you set `Writer.Topic`,
you must not also explicitly define `Message.Topic` on the messages you are
writing. The opposite applies when you do not define a topic for the writer.
In either case, the `Writer` will return an error if it detects this ambiguity.
   444  
   445  ### Compatibility with other clients
   446  
   447  #### Sarama
   448  
   449  If you're switching from Sarama and need/want to use the same algorithm for message
   450  partitioning, you can use the ```kafka.Hash``` balancer.  ```kafka.Hash``` routes
   451  messages to the same partitions that Sarama's default partitioner would route to.
   452  
   453  ```go
   454  w := &kafka.Writer{
   455  	Addr:     kafka.TCP("localhost:9092"),
   456  	Topic:    "topic-A",
   457  	Balancer: &kafka.Hash{},
   458  }
   459  ```
   460  
   461  #### librdkafka and confluent-kafka-go
   462  
   463  Use the ```kafka.CRC32Balancer``` balancer to get the same behaviour as librdkafka's
   464  default ```consistent_random``` partition strategy.
   465  
   466  ```go
   467  w := &kafka.Writer{
   468  	Addr:     kafka.TCP("localhost:9092"),
   469  	Topic:    "topic-A",
   470  	Balancer: kafka.CRC32Balancer{},
   471  }
   472  ```
   473  
   474  #### Java
   475  
   476  Use the ```kafka.Murmur2Balancer``` balancer to get the same behaviour as the canonical
   477  Java client's default partitioner.  Note: the Java class allows you to directly specify
the partition, which is not permitted here.
   479  
   480  ```go
   481  w := &kafka.Writer{
   482  	Addr:     kafka.TCP("localhost:9092"),
   483  	Topic:    "topic-A",
   484  	Balancer: kafka.Murmur2Balancer{},
   485  }
   486  ```
   487  
   488  ### Compression
   489  
   490  Compression can be enabled on the `Writer` by setting the `Compression` field:
   491  
   492  ```go
   493  w := &kafka.Writer{
   494  	Addr:        kafka.TCP("localhost:9092"),
   495  	Topic:       "topic-A",
   496  	Compression: kafka.Snappy,
   497  }
   498  ```
   499  
The `Reader` determines whether the consumed messages are compressed by
examining the message attributes, so no additional configuration is required
to consume compressed messages.

_Note: in versions prior to 0.4, programs had to import compression packages to
install codecs and support reading compressed messages from kafka. This is no
longer the case and imports of the compression packages are now no-ops._
   507  
   508  ## TLS Support
   509  
For a bare-bones `Conn` type, or in the `Reader`/`Writer` configs, you can specify a dialer option for TLS support. If the `TLS` field is `nil`, it will not connect with TLS.
   511  
   512  ### Connection
   513  
   514  ```go
   515  dialer := &kafka.Dialer{
   516      Timeout:   10 * time.Second,
   517      DualStack: true,
   518      TLS:       &tls.Config{...tls config...},
   519  }
   520  
   521  conn, err := dialer.DialContext(ctx, "tcp", "localhost:9093")
   522  ```
   523  
   524  ### Reader
   525  
   526  ```go
   527  dialer := &kafka.Dialer{
   528      Timeout:   10 * time.Second,
   529      DualStack: true,
   530      TLS:       &tls.Config{...tls config...},
   531  }
   532  
   533  r := kafka.NewReader(kafka.ReaderConfig{
   534      Brokers:        []string{"localhost:9093"},
   535      GroupID:        "consumer-group-id",
   536      Topic:          "topic-A",
   537      Dialer:         dialer,
   538  })
   539  ```
   540  
   541  ### Writer
   542  
   543  ```go
   544  dialer := &kafka.Dialer{
   545      Timeout:   10 * time.Second,
   546      DualStack: true,
   547      TLS:       &tls.Config{...tls config...},
   548  }
   549  
   550  w := kafka.NewWriter(kafka.WriterConfig{
   551  	Brokers: []string{"localhost:9093"},
   552  	Topic:   "topic-A",
   553  	Balancer: &kafka.Hash{},
   554  	Dialer:   dialer,
   555  })
   556  ```
   557  
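With the newer `kafka.Writer` struct, TLS can instead be configured on the transport; a minimal sketch, assuming the `Transport` type exposes a `TLS` field (the same transport is used for SASL below):

```go
w := &kafka.Writer{
	Addr:  kafka.TCP("localhost:9093"),
	Topic: "topic-A",
	Transport: &kafka.Transport{
		TLS: &tls.Config{}, // add certificates, RootCAs, etc. here
	},
}
```
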
   558  ## SASL Support
   559  
   560  You can specify an option on the `Dialer` to use SASL authentication. The `Dialer` can be used directly to open a `Conn` or it can be passed to a `Reader` or `Writer` via their respective configs. If the `SASLMechanism` field is `nil`, it will not authenticate with SASL.
   561  
   562  ### SASL Authentication Types
   563  
   564  #### [Plain](https://godoc.org/github.com/rbisecke/kafka-go/sasl/plain#Mechanism)
   565  ```go
   566  mechanism := plain.Mechanism{
   567      Username: "username",
   568      Password: "password",
   569  }
   570  ```
   571  
   572  #### [SCRAM](https://godoc.org/github.com/rbisecke/kafka-go/sasl/scram#Mechanism)
   573  ```go
   574  mechanism, err := scram.Mechanism(scram.SHA512, "username", "password")
   575  if err != nil {
   576      panic(err)
   577  }
   578  ```
   579  
   580  ### Connection
   581  
   582  ```go
   583  mechanism, err := scram.Mechanism(scram.SHA512, "username", "password")
   584  if err != nil {
   585      panic(err)
   586  }
   587  
   588  dialer := &kafka.Dialer{
   589      Timeout:       10 * time.Second,
   590      DualStack:     true,
   591      SASLMechanism: mechanism,
   592  }
   593  
   594  conn, err := dialer.DialContext(ctx, "tcp", "localhost:9093")
   595  ```
   596  
   597  
   598  ### Reader
   599  
   600  ```go
   601  mechanism, err := scram.Mechanism(scram.SHA512, "username", "password")
   602  if err != nil {
   603      panic(err)
   604  }
   605  
   606  dialer := &kafka.Dialer{
   607      Timeout:       10 * time.Second,
   608      DualStack:     true,
   609      SASLMechanism: mechanism,
   610  }
   611  
   612  r := kafka.NewReader(kafka.ReaderConfig{
   613      Brokers:        []string{"localhost:9093"},
   614      GroupID:        "consumer-group-id",
   615      Topic:          "topic-A",
   616      Dialer:         dialer,
   617  })
   618  ```
   619  
   620  ### Writer
   621  
   622  ```go
   623  mechanism, err := scram.Mechanism(scram.SHA512, "username", "password")
   624  if err != nil {
   625      panic(err)
   626  }
   627  
   628  // Transports are responsible for managing connection pools and other resources,
   629  // it's generally best to create a few of these and share them across your
   630  // application.
   631  sharedTransport := &kafka.Transport{
    SASL: mechanism,
   633  }
   634  
   635  w := kafka.Writer{
   636  	Addr:      kafka.TCP("localhost:9092"),
   637  	Topic:     "topic-A",
   638  	Balancer:  &kafka.Hash{},
   639  	Transport: sharedTransport,
   640  }
   641  ```
   642  
   643  ### Client
   644  
   645  ```go
   646  mechanism, err := scram.Mechanism(scram.SHA512, "username", "password")
   647  if err != nil {
   648      panic(err)
   649  }
   650  
   651  // Transports are responsible for managing connection pools and other resources,
   652  // it's generally best to create a few of these and share them across your
   653  // application.
   654  sharedTransport := &kafka.Transport{
    SASL: mechanism,
   656  }
   657  
   658  client := &kafka.Client{
   659      Addr:      kafka.TCP("localhost:9092"),
   660      Timeout:   10 * time.Second,
   661      Transport: sharedTransport,
   662  }
   663  ```
   664  
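As a hedged sketch of using the client, for example to list the topics in the cluster; the `Metadata` request and response field names here are assumptions to verify against the package documentation:

```go
// Hypothetical usage: request cluster metadata and print the topic names.
meta, err := client.Metadata(context.Background(), &kafka.MetadataRequest{})
if err != nil {
    panic(err.Error())
}
for _, t := range meta.Topics {
    fmt.Println(t.Name)
}
```
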
   665  #### Reading all messages within a time range
   666  
   667  ```go
   668  startTime := time.Now().Add(-time.Hour)
   669  endTime := time.Now()
   670  batchSize := int(10e6) // 10MB
   671  
   672  r := kafka.NewReader(kafka.ReaderConfig{
   673      Brokers:   []string{"localhost:9092"},
   674      Topic:     "my-topic1",
   675      Partition: 0,
   676      MinBytes:  batchSize,
   677      MaxBytes:  batchSize,
   678  })
   679  
   680  r.SetOffsetAt(context.Background(), startTime)
   681  
   682  for {
   683      m, err := r.ReadMessage(context.Background())
   684  
   685      if err != nil {
   686          break
   687      }
   688      if m.Time.After(endTime) {
   689          break
   690      }
   691      // TODO: process message
   692      fmt.Printf("message at offset %d: %s = %s\n", m.Offset, string(m.Key), string(m.Value))
   693  }
   694  
   695  if err := r.Close(); err != nil {
   696      log.Fatal("failed to close reader:", err)
   697  }
   698  ```
   699  
   700  ## Testing
   701  
Subtle behavior changes in later Kafka versions have caused some historical tests to break. If you are running against Kafka 2.3.1 or later, exporting the `KAFKA_SKIP_NETTEST=1` environment variable will skip those tests.
   703  
   704  Run Kafka locally in docker
   705  
   706  ```bash
   707  docker-compose up -d
   708  ```
   709  
   710  Run tests
   711  
   712  ```bash
   713  KAFKA_VERSION=2.3.1 \
   714    KAFKA_SKIP_NETTEST=1 \
   715    go test -race ./...
   716  ```