
     1  # kafka-go [![CircleCI](https://circleci.com/gh/segmentio/kafka-go.svg?style=shield)](https://circleci.com/gh/segmentio/kafka-go) [![Go Report Card](https://goreportcard.com/badge/github.com/hack0072008/kafka-go)](https://goreportcard.com/report/github.com/hack0072008/kafka-go) [![GoDoc](https://godoc.org/github.com/hack0072008/kafka-go?status.svg)](https://godoc.org/github.com/hack0072008/kafka-go)
     2  
     3  ## Motivations
     4  
     5  We rely on both Go and Kafka a lot at Segment. Unfortunately, the state of the Go
     6  client libraries for Kafka at the time of this writing was not ideal. The available
     7  options were:
     8  
     9  - [sarama](https://github.com/Shopify/sarama), which is by far the most popular
but is quite difficult to work with. It is poorly documented, the API exposes
low-level concepts of the Kafka protocol, and it doesn't support recent Go features
like [contexts](https://golang.org/pkg/context/). It also passes all values as
pointers, which causes large numbers of dynamic memory allocations, more frequent
garbage collections, and higher memory usage.
    15  
- [confluent-kafka-go](https://github.com/confluentinc/confluent-kafka-go) is a
cgo-based wrapper around [librdkafka](https://github.com/edenhill/librdkafka),
which means it introduces a dependency on a C library for all Go code that uses
the package. It has much better documentation than sarama but still lacks support
for Go contexts.
    21  
    22  - [goka](https://github.com/lovoo/goka) is a more recent Kafka client for Go
    23  which focuses on a specific usage pattern. It provides abstractions for using Kafka
    24  as a message passing bus between services rather than an ordered log of events, but
    25  this is not the typical use case of Kafka for us at Segment. The package also
    26  depends on sarama for all interactions with Kafka.
    27  
This is where `kafka-go` comes into play. It provides both low- and high-level
APIs for interacting with Kafka, mirroring concepts and implementing interfaces of
    30  the Go standard library to make it easy to use and integrate with existing
    31  software.
    32  
    33  #### Note:
    34  
    35  In order to better align with our newly adopted Code of Conduct, the kafka-go project has renamed our default branch to `main`.
For the full details of our Code of Conduct, see [this](./CODE_OF_CONDUCT.md) document.
    37  
    38  ## Migrating to 0.4
    39  
    40  Version 0.4 introduces a few breaking changes to the repository structure which
    41  should have minimal impact on programs and should only manifest at compile time
    42  (the runtime behavior should remain unchanged).
    43  
* Programs no longer need to import compression packages in order to read
compressed messages from Kafka. All compression codecs are supported by default.
    46  
    47  * Programs that used the compression codecs directly must be adapted.
    48  Compression codecs are now exposed in the `compress` sub-package.
    49  
    50  * The experimental `kafka.Client` API has been updated and slightly modified:
    51  the `kafka.NewClient` function and `kafka.ClientConfig` type were removed.
    52  Programs now configure the client values directly through exported fields.
    53  
* The `kafka.(*Client).ConsumerOffsets` method is now deprecated (along with the
`kafka.TopicAndGroup` type) and will be removed when we release version 1.0.
Programs should use the `kafka.(*Client).OffsetFetch` API instead, as sketched below.
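
For programs migrating off `ConsumerOffsets`, a minimal sketch of the replacement might look like the following. The request and response field names used here (`GroupID`, `Topics`, `CommittedOffset`) are assumptions for illustration; consult the package documentation for the exact shapes.

```go
// Sketch of fetching committed offsets with the OffsetFetch API.
// NOTE: the field names below are assumptions; check the package docs.
client := &kafka.Client{Addr: kafka.TCP("localhost:9092")}

resp, err := client.OffsetFetch(context.Background(), &kafka.OffsetFetchRequest{
    GroupID: "consumer-group-id",
    Topics:  map[string][]int{"topic-A": {0}},
})
if err != nil {
    log.Fatal("failed to fetch offsets:", err)
}

for topic, partitions := range resp.Topics {
    for _, p := range partitions {
        fmt.Printf("%s/%d committed offset: %d\n", topic, p.Partition, p.CommittedOffset)
    }
}
```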
    57  
    58  With 0.4, we know that we are starting to introduce a bit more complexity in the
    59  code, but the plan is to eventually converge towards a simpler and more effective
    60  API, allowing us to keep up with Kafka's ever growing feature set, and bringing
    61  a more efficient implementation to programs depending on kafka-go.
    62  
    63  We truly appreciate everyone's input and contributions, which have made this
project way more than what it was when we started it, and we're looking forward
to receiving more feedback on where we should take it.
    66  
    67  ## Kafka versions
    68  
`kafka-go` is currently compatible with Kafka versions 0.10.1.0 through 2.1.0. Later versions will generally work, but
some features available in newer versions of the Kafka API may not be implemented yet.
    71  
    72  ## Golang version
    73  
`kafka-go` requires Go 1.15 or later. To use it with older versions of Go, use release [v0.2.5](https://github.com/hack0072008/kafka-go/releases/tag/v0.2.5).
    75  
    76  ## Connection [![GoDoc](https://godoc.org/github.com/hack0072008/kafka-go?status.svg)](https://godoc.org/github.com/hack0072008/kafka-go#Conn)
    77  
    78  The `Conn` type is the core of the `kafka-go` package. It wraps around a raw
    79  network connection to expose a low-level API to a Kafka server.
    80  
    81  Here are some examples showing typical use of a connection object:
    82  ```go
    83  // to produce messages
    84  topic := "my-topic"
    85  partition := 0
    86  
    87  conn, err := kafka.DialLeader(context.Background(), "tcp", "localhost:9092", topic, partition)
    88  if err != nil {
    89      log.Fatal("failed to dial leader:", err)
    90  }
    91  
    92  conn.SetWriteDeadline(time.Now().Add(10*time.Second))
    93  _, err = conn.WriteMessages(
    94      kafka.Message{Value: []byte("one!")},
    95      kafka.Message{Value: []byte("two!")},
    96      kafka.Message{Value: []byte("three!")},
    97  )
    98  if err != nil {
    99      log.Fatal("failed to write messages:", err)
   100  }
   101  
   102  if err := conn.Close(); err != nil {
   103      log.Fatal("failed to close writer:", err)
   104  }
   105  ```
   106  ```go
   107  // to consume messages
   108  topic := "my-topic"
   109  partition := 0
   110  
   111  conn, err := kafka.DialLeader(context.Background(), "tcp", "localhost:9092", topic, partition)
   112  if err != nil {
   113      log.Fatal("failed to dial leader:", err)
   114  }
   115  
   116  conn.SetReadDeadline(time.Now().Add(10*time.Second))
   117  batch := conn.ReadBatch(10e3, 1e6) // fetch 10KB min, 1MB max
   118  
   119  b := make([]byte, 10e3) // 10KB max per message
   120  for {
   121      n, err := batch.Read(b)
   122      if err != nil {
   123          break
   124      }
   125      fmt.Println(string(b[:n]))
   126  }
   127  
   128  if err := batch.Close(); err != nil {
   129      log.Fatal("failed to close batch:", err)
   130  }
   131  
   132  if err := conn.Close(); err != nil {
   133      log.Fatal("failed to close connection:", err)
   134  }
   135  ```
   136  
   137  ### To Create Topics
By default, Kafka has `auto.create.topics.enable='true'` (`KAFKA_AUTO_CREATE_TOPICS_ENABLE='true'` in the wurstmeister/kafka Docker image). If this value is set to `'true'`, then topics will be created as a side effect of `kafka.DialLeader` like so:
   139  ```go
   140  // to create topics when auto.create.topics.enable='true'
   141  conn, err := kafka.DialLeader(context.Background(), "tcp", "localhost:9092", "my-topic", 0)
   142  if err != nil {
   143      panic(err.Error())
   144  }
   145  ```
   146  
   147  If `auto.create.topics.enable='false'` then you will need to create topics explicitly like so:
   148  ```go
   149  // to create topics when auto.create.topics.enable='false'
   150  topic := "my-topic"
   151  
   152  conn, err := kafka.Dial("tcp", "localhost:9092")
   153  if err != nil {
   154      panic(err.Error())
   155  }
   156  defer conn.Close()
   157  
   158  controller, err := conn.Controller()
   159  if err != nil {
   160      panic(err.Error())
   161  }
   162  var controllerConn *kafka.Conn
   163  controllerConn, err = kafka.Dial("tcp", net.JoinHostPort(controller.Host, strconv.Itoa(controller.Port)))
   164  if err != nil {
   165      panic(err.Error())
   166  }
   167  defer controllerConn.Close()
   168  
   169  
   170  topicConfigs := []kafka.TopicConfig{
   171      kafka.TopicConfig{
   172          Topic:             topic,
   173          NumPartitions:     1,
   174          ReplicationFactor: 1,
   175      },
   176  }
   177  
   178  err = controllerConn.CreateTopics(topicConfigs...)
   179  if err != nil {
   180      panic(err.Error())
   181  }
   182  ```
   183  
   184  ### To Connect To Leader Via a Non-leader Connection
   185  ```go
   186  // to connect to the kafka leader via an existing non-leader connection rather than using DialLeader
   187  conn, err := kafka.Dial("tcp", "localhost:9092")
   188  if err != nil {
   189      panic(err.Error())
   190  }
   191  defer conn.Close()
   192  controller, err := conn.Controller()
   193  if err != nil {
   194      panic(err.Error())
   195  }
   196  var connLeader *kafka.Conn
   197  connLeader, err = kafka.Dial("tcp", net.JoinHostPort(controller.Host, strconv.Itoa(controller.Port)))
   198  if err != nil {
   199      panic(err.Error())
   200  }
   201  defer connLeader.Close()
   202  ```
   203  
### To List Topics
   205  ```go
   206  conn, err := kafka.Dial("tcp", "localhost:9092")
   207  if err != nil {
   208      panic(err.Error())
   209  }
   210  defer conn.Close()
   211  
   212  partitions, err := conn.ReadPartitions()
   213  if err != nil {
   214      panic(err.Error())
   215  }
   216  
   217  m := map[string]struct{}{}
   218  
   219  for _, p := range partitions {
   220      m[p.Topic] = struct{}{}
   221  }
   222  for k := range m {
   223      fmt.Println(k)
   224  }
   225  ```
   226  
   227  
Because it is low level, the `Conn` type turns out to be a great building block
for higher-level abstractions, such as the `Reader`.
   230  
   231  ## Reader [![GoDoc](https://godoc.org/github.com/hack0072008/kafka-go?status.svg)](https://godoc.org/github.com/hack0072008/kafka-go#Reader)
   232  
   233  A `Reader` is another concept exposed by the `kafka-go` package, which intends
   234  to make it simpler to implement the typical use case of consuming from a single
   235  topic-partition pair.
   236  A `Reader` also automatically handles reconnections and offset management, and
   237  exposes an API that supports asynchronous cancellations and timeouts using Go
   238  contexts.
   239  
   240  Note that it is important to call `Close()` on a `Reader` when a process exits.
   241  The kafka server needs a graceful disconnect to stop it from continuing to
   242  attempt to send messages to the connected clients. The given example will not
   243  call `Close()` if the process is terminated with SIGINT (ctrl-c at the shell) or
   244  SIGTERM (as docker stop or a kubernetes restart does). This can result in a
   245  delay when a new reader on the same topic connects (e.g. new process started
   246  or new container running). Use a `signal.Notify` handler to close the reader on
   247  process shutdown.
   248  
   249  ```go
   250  // make a new reader that consumes from topic-A, partition 0, at offset 42
   251  r := kafka.NewReader(kafka.ReaderConfig{
   252      Brokers:   []string{"localhost:9092"},
   253      Topic:     "topic-A",
   254      Partition: 0,
   255      MinBytes:  10e3, // 10KB
   256      MaxBytes:  10e6, // 10MB
   257  })
   258  r.SetOffset(42)
   259  
   260  for {
   261      m, err := r.ReadMessage(context.Background())
   262      if err != nil {
   263          break
   264      }
   265      fmt.Printf("message at offset %d: %s = %s\n", m.Offset, string(m.Key), string(m.Value))
   266  }
   267  
   268  if err := r.Close(); err != nil {
   269      log.Fatal("failed to close reader:", err)
   270  }
   271  ```
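
As noted above, this example does not close the reader when the process receives SIGINT or SIGTERM. A minimal sketch of a shutdown handler built on `signal.Notify` (it assumes the `r` reader from the example and requires the `os`, `os/signal`, and `syscall` packages):

```go
// Close the reader when the process receives SIGINT or SIGTERM.
sigs := make(chan os.Signal, 1)
signal.Notify(sigs, syscall.SIGINT, syscall.SIGTERM)

go func() {
    <-sigs
    // Closing the reader unblocks any pending ReadMessage call, which then
    // returns an error and lets the read loop above exit.
    if err := r.Close(); err != nil {
        log.Fatal("failed to close reader:", err)
    }
}()
```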
   272  
   273  ### Consumer Groups
   274  
```kafka-go``` also supports Kafka consumer groups, including broker-managed offsets.
To enable consumer groups, simply specify the GroupID in the ReaderConfig.
   277  
   278  ReadMessage automatically commits offsets when using consumer groups.
   279  
   280  ```go
   281  // make a new reader that consumes from topic-A
   282  r := kafka.NewReader(kafka.ReaderConfig{
   283      Brokers:   []string{"localhost:9092"},
   284      GroupID:   "consumer-group-id",
   285      Topic:     "topic-A",
   286      MinBytes:  10e3, // 10KB
   287      MaxBytes:  10e6, // 10MB
   288  })
   289  
   290  for {
   291      m, err := r.ReadMessage(context.Background())
   292      if err != nil {
   293          break
   294      }
   295      fmt.Printf("message at topic/partition/offset %v/%v/%v: %s = %s\n", m.Topic, m.Partition, m.Offset, string(m.Key), string(m.Value))
   296  }
   297  
   298  if err := r.Close(); err != nil {
   299      log.Fatal("failed to close reader:", err)
   300  }
   301  ```
   302  
   303  There are a number of limitations when using consumer groups:
   304  
   305  * ```(*Reader).SetOffset``` will return an error when GroupID is set
   306  * ```(*Reader).Offset``` will always return ```-1``` when GroupID is set
   307  * ```(*Reader).Lag``` will always return ```-1``` when GroupID is set
   308  * ```(*Reader).ReadLag``` will return an error when GroupID is set
   309  * ```(*Reader).Stats``` will return a partition of ```-1``` when GroupID is set
   310  
   311  ### Explicit Commits
   312  
   313  ```kafka-go``` also supports explicit commits.  Instead of calling ```ReadMessage```,
   314  call ```FetchMessage``` followed by ```CommitMessages```.
   315  
   316  ```go
   317  ctx := context.Background()
   318  for {
   319      m, err := r.FetchMessage(ctx)
   320      if err != nil {
   321          break
   322      }
   323      fmt.Printf("message at topic/partition/offset %v/%v/%v: %s = %s\n", m.Topic, m.Partition, m.Offset, string(m.Key), string(m.Value))
   324      if err := r.CommitMessages(ctx, m); err != nil {
   325          log.Fatal("failed to commit messages:", err)
   326      }
   327  }
   328  ```
   329  
   330  When committing messages in consumer groups, the message with the highest offset
   331  for a given topic/partition determines the value of the committed offset for
that partition. For example, if messages at offsets 1, 2, and 3 of a single
partition were retrieved by calls to `FetchMessage`, calling `CommitMessages`
with the message at offset 3 will also result in committing the messages at
offsets 1 and 2 for that partition.
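
A minimal sketch of this behavior, committing once per batch of fetched messages instead of once per message (the batch size of 100 is arbitrary, not something prescribed by the package):

```go
// Commit in batches: committing the most recent messages also covers any
// earlier offsets fetched for the same topic/partition.
ctx := context.Background()
msgs := make([]kafka.Message, 0, 100)

for {
    m, err := r.FetchMessage(ctx)
    if err != nil {
        break
    }
    msgs = append(msgs, m)

    if len(msgs) == cap(msgs) {
        if err := r.CommitMessages(ctx, msgs...); err != nil {
            log.Fatal("failed to commit messages:", err)
        }
        // Messages left uncommitted when the loop exits are redelivered
        // to the group after a restart.
        msgs = msgs[:0]
    }
}
```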
   336  
   337  ### Managing Commits
   338  
   339  By default, CommitMessages will synchronously commit offsets to Kafka.  For
   340  improved performance, you can instead periodically commit offsets to Kafka
   341  by setting CommitInterval on the ReaderConfig.
   342  
   343  
   344  ```go
   345  // make a new reader that consumes from topic-A
   346  r := kafka.NewReader(kafka.ReaderConfig{
   347      Brokers:        []string{"localhost:9092"},
   348      GroupID:        "consumer-group-id",
   349      Topic:          "topic-A",
   350      MinBytes:       10e3, // 10KB
   351      MaxBytes:       10e6, // 10MB
   352      CommitInterval: time.Second, // flushes commits to Kafka every second
   353  })
   354  ```
   355  
   356  ## Writer [![GoDoc](https://godoc.org/github.com/hack0072008/kafka-go?status.svg)](https://godoc.org/github.com/hack0072008/kafka-go#Writer)
   357  
   358  To produce messages to Kafka, a program may use the low-level `Conn` API, but
the package also provides a higher-level `Writer` type, which is more appropriate
to use in most cases as it provides additional features:
   361  
   362  - Automatic retries and reconnections on errors.
   363  - Configurable distribution of messages across available partitions.
   364  - Synchronous or asynchronous writes of messages to Kafka.
   365  - Asynchronous cancellation using contexts.
   366  - Flushing of pending messages on close to support graceful shutdowns.
   367  
   368  ```go
   369  // make a writer that produces to topic-A, using the least-bytes distribution
   370  w := &kafka.Writer{
   371  	Addr:     kafka.TCP("localhost:9092"),
	Topic:    "topic-A",
   373  	Balancer: &kafka.LeastBytes{},
   374  }
   375  
   376  err := w.WriteMessages(context.Background(),
   377  	kafka.Message{
   378  		Key:   []byte("Key-A"),
   379  		Value: []byte("Hello World!"),
   380  	},
   381  	kafka.Message{
   382  		Key:   []byte("Key-B"),
   383  		Value: []byte("One!"),
   384  	},
   385  	kafka.Message{
   386  		Key:   []byte("Key-C"),
   387  		Value: []byte("Two!"),
   388  	},
   389  )
   390  if err != nil {
   391      log.Fatal("failed to write messages:", err)
   392  }
   393  
   394  if err := w.Close(); err != nil {
   395      log.Fatal("failed to close writer:", err)
   396  }
   397  ```
   398  
   399  ### Writing to multiple topics
   400  
Normally, the `Writer.Topic` field is used to initialize a single-topic writer.
By leaving that field empty, you can instead define the topic on a per-message
basis by setting `Message.Topic`.
   404  
   405  ```go
w := &kafka.Writer{
	Addr:     kafka.TCP("localhost:9092"),
	// NOTE: When Topic is not defined here, each Message must define it instead.
	Balancer: &kafka.LeastBytes{},
}

err := w.WriteMessages(context.Background(),
	// NOTE: Each Message has Topic defined, otherwise an error is returned.
	kafka.Message{
		Topic: "topic-A",
		Key:   []byte("Key-A"),
		Value: []byte("Hello World!"),
	},
	kafka.Message{
		Topic: "topic-B",
		Key:   []byte("Key-B"),
		Value: []byte("One!"),
	},
	kafka.Message{
		Topic: "topic-C",
		Key:   []byte("Key-C"),
		Value: []byte("Two!"),
	},
)
   430  if err != nil {
   431      log.Fatal("failed to write messages:", err)
   432  }
   433  
   434  if err := w.Close(); err != nil {
   435      log.Fatal("failed to close writer:", err)
   436  }
   437  ```
   438  
**NOTE:** These two patterns are mutually exclusive: if you set `Writer.Topic`,
   440  you must not also explicitly define `Message.Topic` on the messages you are
   441  writing. The opposite applies when you do not define a topic for the writer.
   442  The `Writer` will return an error if it detects this ambiguity.
   443  
   444  ### Compatibility with other clients
   445  
   446  #### Sarama
   447  
   448  If you're switching from Sarama and need/want to use the same algorithm for message
   449  partitioning, you can use the ```kafka.Hash``` balancer.  ```kafka.Hash``` routes
   450  messages to the same partitions that Sarama's default partitioner would route to.
   451  
   452  ```go
   453  w := &kafka.Writer{
   454  	Addr:     kafka.TCP("localhost:9092"),
   455  	Topic:    "topic-A",
   456  	Balancer: &kafka.Hash{},
   457  }
   458  ```
   459  
   460  #### librdkafka and confluent-kafka-go
   461  
   462  Use the ```kafka.CRC32Balancer``` balancer to get the same behaviour as librdkafka's
   463  default ```consistent_random``` partition strategy.
   464  
   465  ```go
   466  w := &kafka.Writer{
   467  	Addr:     kafka.TCP("localhost:9092"),
   468  	Topic:    "topic-A",
   469  	Balancer: kafka.CRC32Balancer{},
   470  }
   471  ```
   472  
   473  #### Java
   474  
   475  Use the ```kafka.Murmur2Balancer``` balancer to get the same behaviour as the canonical
Java client's default partitioner.  Note: the Java class allows you to directly specify
the partition, which is not permitted by this balancer.
   478  
   479  ```go
   480  w := &kafka.Writer{
   481  	Addr:     kafka.TCP("localhost:9092"),
   482  	Topic:    "topic-A",
   483  	Balancer: kafka.Murmur2Balancer{},
   484  }
   485  ```
   486  
   487  ### Compression
   488  
   489  Compression can be enabled on the `Writer` by setting the `Compression` field:
   490  
   491  ```go
   492  w := &kafka.Writer{
   493  	Addr:        kafka.TCP("localhost:9092"),
   494  	Topic:       "topic-A",
   495  	Compression: kafka.Snappy,
   496  }
   497  ```
   498  
The `Reader` will determine if the consumed messages are compressed by
examining the message attributes, so no additional configuration is needed to
consume compressed messages.
   502  
_Note: in versions prior to 0.4, programs had to import compression packages to
install codecs and support reading compressed messages from Kafka. This is no
longer the case, and imports of the compression packages are now no-ops._
   506  
   507  ## TLS Support
   508  
For a bare-bones `Conn` type, or in the `Reader`/`Writer` configs, you can specify a dialer option for TLS support. If the `TLS` field is `nil`, it will not connect with TLS.
   510  
   511  ### Connection
   512  
   513  ```go
   514  dialer := &kafka.Dialer{
   515      Timeout:   10 * time.Second,
   516      DualStack: true,
   517      TLS:       &tls.Config{...tls config...},
   518  }
   519  
   520  conn, err := dialer.DialContext(ctx, "tcp", "localhost:9093")
   521  ```
   522  
   523  ### Reader
   524  
   525  ```go
   526  dialer := &kafka.Dialer{
   527      Timeout:   10 * time.Second,
   528      DualStack: true,
   529      TLS:       &tls.Config{...tls config...},
   530  }
   531  
   532  r := kafka.NewReader(kafka.ReaderConfig{
   533      Brokers:        []string{"localhost:9093"},
   534      GroupID:        "consumer-group-id",
   535      Topic:          "topic-A",
   536      Dialer:         dialer,
   537  })
   538  ```
   539  
   540  ### Writer
   541  
   542  ```go
   543  dialer := &kafka.Dialer{
   544      Timeout:   10 * time.Second,
   545      DualStack: true,
   546      TLS:       &tls.Config{...tls config...},
   547  }
   548  
w := kafka.NewWriter(kafka.WriterConfig{
	Brokers:  []string{"localhost:9093"},
	Topic:    "topic-A",
	Balancer: &kafka.Hash{},
	Dialer:   dialer,
})
   555  ```
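
When using the newer `kafka.Writer` struct shown earlier, the TLS configuration is carried by the transport rather than a dialer. A minimal sketch, assuming the `kafka.Transport` type exposes a `TLS` field analogous to the `Dialer`'s:

```go
// Sketch: TLS with the Writer struct. The Transport.TLS field is assumed to
// mirror Dialer.TLS; confirm against the package documentation.
w := &kafka.Writer{
	Addr:      kafka.TCP("localhost:9093"),
	Topic:     "topic-A",
	Balancer:  &kafka.Hash{},
	Transport: &kafka.Transport{
		TLS: &tls.Config{},
	},
}
```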
   556  
   557  ## SASL Support
   558  
   559  You can specify an option on the `Dialer` to use SASL authentication. The `Dialer` can be used directly to open a `Conn` or it can be passed to a `Reader` or `Writer` via their respective configs. If the `SASLMechanism` field is `nil`, it will not authenticate with SASL.
   560  
   561  ### SASL Authentication Types
   562  
   563  #### [Plain](https://godoc.org/github.com/hack0072008/kafka-go/sasl/plain#Mechanism)
   564  ```go
   565  mechanism := plain.Mechanism{
   566      Username: "username",
   567      Password: "password",
   568  }
   569  ```
   570  
   571  #### [SCRAM](https://godoc.org/github.com/hack0072008/kafka-go/sasl/scram#Mechanism)
   572  ```go
   573  mechanism, err := scram.Mechanism(scram.SHA512, "username", "password")
   574  if err != nil {
   575      panic(err)
   576  }
   577  ```
   578  
   579  ### Connection
   580  
   581  ```go
   582  mechanism, err := scram.Mechanism(scram.SHA512, "username", "password")
   583  if err != nil {
   584      panic(err)
   585  }
   586  
   587  dialer := &kafka.Dialer{
   588      Timeout:       10 * time.Second,
   589      DualStack:     true,
   590      SASLMechanism: mechanism,
   591  }
   592  
   593  conn, err := dialer.DialContext(ctx, "tcp", "localhost:9093")
   594  ```
   595  
   596  
   597  ### Reader
   598  
   599  ```go
   600  mechanism, err := scram.Mechanism(scram.SHA512, "username", "password")
   601  if err != nil {
   602      panic(err)
   603  }
   604  
   605  dialer := &kafka.Dialer{
   606      Timeout:       10 * time.Second,
   607      DualStack:     true,
   608      SASLMechanism: mechanism,
   609  }
   610  
   611  r := kafka.NewReader(kafka.ReaderConfig{
   612      Brokers:        []string{"localhost:9093"},
   613      GroupID:        "consumer-group-id",
   614      Topic:          "topic-A",
   615      Dialer:         dialer,
   616  })
   617  ```
   618  
   619  ### Writer
   620  
   621  ```go
   622  mechanism, err := scram.Mechanism(scram.SHA512, "username", "password")
   623  if err != nil {
   624      panic(err)
   625  }
   626  
   627  // Transports are responsible for managing connection pools and other resources,
   628  // it's generally best to create a few of these and share them across your
   629  // application.
   630  sharedTransport := &kafka.Transport{
   631      SASLMechanism: mechanism,
   632  }
   633  
   634  w := kafka.Writer{
   635  	Addr:      kafka.TCP("localhost:9092"),
   636  	Topic:     "topic-A",
   637  	Balancer:  &kafka.Hash{},
   638  	Transport: sharedTransport,
   639  }
   640  ```
   641  
   642  ### Client
   643  
   644  ```go
   645  mechanism, err := scram.Mechanism(scram.SHA512, "username", "password")
   646  if err != nil {
   647      panic(err)
   648  }
   649  
   650  // Transports are responsible for managing connection pools and other resources,
   651  // it's generally best to create a few of these and share them across your
   652  // application.
   653  sharedTransport := &kafka.Transport{
   654      SASLMechanism: mechanism,
   655  }
   656  
   657  client := &kafka.Client{
   658      Addr:      kafka.TCP("localhost:9092"),
   659      Timeout:   10 * time.Second,
   660      Transport: sharedTransport,
   661  }
   662  ```
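
Once configured, the client can issue requests directly. Below is a sketch of listing topic names through a metadata request; the `Metadata` method and the request/response field names used here are assumptions based on the client API, so consult the package documentation for the exact shapes.

```go
// Sketch: list topic names via the client's metadata API.
// NOTE: method and field names here are assumptions; check the package docs.
meta, err := client.Metadata(context.Background(), &kafka.MetadataRequest{
    Addr: client.Addr,
})
if err != nil {
    panic(err.Error())
}

for _, t := range meta.Topics {
    fmt.Println(t.Name)
}
```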
   663  
   664  #### Reading all messages within a time range
   665  
   666  ```go
   667  startTime := time.Now().Add(-time.Hour)
   668  endTime := time.Now()
   669  batchSize := int(10e6) // 10MB
   670  
   671  r := kafka.NewReader(kafka.ReaderConfig{
   672      Brokers:   []string{"localhost:9092"},
   673      Topic:     "my-topic1",
   674      Partition: 0,
   675      MinBytes:  batchSize,
   676      MaxBytes:  batchSize,
   677  })
   678  
   679  r.SetOffsetAt(context.Background(), startTime)
   680  
   681  for {
   682      m, err := r.ReadMessage(context.Background())
   683  
   684      if err != nil {
   685          break
   686      }
   687      if m.Time.After(endTime) {
   688          break
   689      }
   690      // TODO: process message
   691      fmt.Printf("message at offset %d: %s = %s\n", m.Offset, string(m.Key), string(m.Value))
   692  }
   693  
   694  if err := r.Close(); err != nil {
   695      log.Fatal("failed to close reader:", err)
   696  }
   697  ```
   698  
   699  ## Testing
   700  
Subtle behavior changes in later Kafka versions have caused some historical tests to break. If you are running against Kafka 2.3.1 or later, exporting the `KAFKA_SKIP_NETTEST=1` environment variable will skip those tests.
   702  
Run Kafka locally in Docker:
   704  
   705  ```bash
   706  docker-compose up -d
   707  ```
   708  
Run the tests:
   710  
   711  ```bash
   712  KAFKA_VERSION=2.3.1 \
   713    KAFKA_SKIP_NETTEST=1 \
   714    go test -race ./...
   715  ```