
# BadgerDB [![GoDoc](https://godoc.org/github.com/dgraph-io/badger?status.svg)](https://godoc.org/github.com/dgraph-io/badger) [![Go Report Card](https://goreportcard.com/badge/github.com/dgraph-io/badger)](https://goreportcard.com/report/github.com/dgraph-io/badger) [![Build Status](https://teamcity.dgraph.io/guestAuth/app/rest/builds/buildType:(id:Badger_UnitTests)/statusIcon.svg)](https://teamcity.dgraph.io/viewLog.html?buildTypeId=Badger_UnitTests&buildId=lastFinished&guest=1) ![Appveyor](https://ci.appveyor.com/api/projects/status/github/dgraph-io/badger?branch=master&svg=true) [![Coverage Status](https://coveralls.io/repos/github/dgraph-io/badger/badge.svg?branch=master)](https://coveralls.io/github/dgraph-io/badger?branch=master)

![Badger mascot](images/diggy-shadow.png)

BadgerDB is an embeddable, persistent, simple and fast key-value (KV) database
written in pure Go. It's meant to be a performant alternative to non-Go-based
key-value stores like [RocksDB](https://github.com/facebook/rocksdb).

## Project Status
Badger v1.0 was released in Nov 2017. Check the [Changelog] for the full details.

[Changelog]: https://github.com/dgraph-io/badger/blob/master/CHANGELOG.md

We introduced transactions in [v0.9.0], which involved a major API change. If you have a Badger
datastore created prior to that, please use [v0.8.1], but we strongly urge you to upgrade. Upgrading from
both v0.8 and v0.9 will require you to [take backups](#database-backup) and restore using the new
version.

[v1.0.1]: //github.com/dgraph-io/badger/tree/v1.0.1
[v0.8.1]: //github.com/dgraph-io/badger/tree/v0.8.1
[v0.9.0]: //github.com/dgraph-io/badger/tree/v0.9.0

## Table of Contents
  * [Getting Started](#getting-started)
    + [Installing](#installing)
    + [Opening a database](#opening-a-database)
    + [Transactions](#transactions)
      - [Read-only transactions](#read-only-transactions)
      - [Read-write transactions](#read-write-transactions)
      - [Managing transactions manually](#managing-transactions-manually)
    + [Using key/value pairs](#using-keyvalue-pairs)
    + [Iterating over keys](#iterating-over-keys)
      - [Prefix scans](#prefix-scans)
      - [Key-only iteration](#key-only-iteration)
    + [Garbage Collection](#garbage-collection)
    + [Database backup](#database-backup)
    + [Memory usage](#memory-usage)
    + [Statistics](#statistics)
  * [Resources](#resources)
    + [Blog Posts](#blog-posts)
  * [Design](#design)
    + [Comparisons](#comparisons)
    + [Benchmarks](#benchmarks)
  * [Other Projects Using Badger](#other-projects-using-badger)
  * [Frequently Asked Questions](#frequently-asked-questions)
  * [Contact](#contact)

## Getting Started

### Installing
To start using Badger, install Go 1.8 or above and run `go get`:

```sh
$ go get github.com/dgraph-io/badger/...
```

This will retrieve the library and install the `badger_info` command line
utility into your `$GOBIN` path.


### Opening a database
The top-level object in Badger is a `DB`. It represents multiple files on disk
in specific directories, which contain the data for a single database.

To open your database, use the `badger.Open()` function, with the appropriate
options. The `Dir` and `ValueDir` options are mandatory and must be
specified by the client. They can be set to the same value to simplify things.

```go
package main

import (
	"log"

	"github.com/dgraph-io/badger"
)

func main() {
	// Open the Badger database located in the /tmp/badger directory.
	// It will be created if it doesn't exist.
	opts := badger.DefaultOptions
	opts.Dir = "/tmp/badger"
	opts.ValueDir = "/tmp/badger"
	db, err := badger.Open(opts)
	if err != nil {
		log.Fatal(err)
	}
	defer db.Close()
	// Your code here…
}
```

Please note that Badger obtains a lock on the directories, so multiple processes
cannot open the same database at the same time.
### Transactions

#### Read-only transactions
To start a read-only transaction, you can use the `DB.View()` method:

```go
err := db.View(func(txn *badger.Txn) error {
	// Your code here…
	return nil
})
```

You cannot perform any writes or deletes within this transaction. Badger
ensures that you get a consistent view of the database within this closure. Any
writes that happen elsewhere after the transaction has started will not be
seen by calls made within the closure.

#### Read-write transactions
To start a read-write transaction, you can use the `DB.Update()` method:

```go
err := db.Update(func(txn *badger.Txn) error {
	// Your code here…
	return nil
})
```

All database operations are allowed inside a read-write transaction.

Always check the returned error value. If you return an error
within your closure, it will be passed through.

An `ErrConflict` error will be reported in case of a conflict. Depending on the state
of your application, you have the option to retry the operation if you receive
this error.

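The retry itself is left to the application. A minimal retry loop, sketched here in plain Go with a hypothetical `update` function and a local sentinel error standing in for `badger.ErrConflict`, could look like this:

```go
package main

import (
	"errors"
	"fmt"
)

// errConflict stands in for badger.ErrConflict in this sketch.
var errConflict = errors.New("transaction conflict")

// runWithRetry retries update up to maxRetries times while it keeps
// reporting a conflict; any other result is returned immediately.
func runWithRetry(update func() error, maxRetries int) error {
	var err error
	for i := 0; i < maxRetries; i++ {
		err = update()
		if !errors.Is(err, errConflict) {
			return err // success, or a non-retryable error
		}
	}
	return err // still conflicting after maxRetries attempts
}

func main() {
	attempts := 0
	err := runWithRetry(func() error {
		attempts++
		if attempts < 3 {
			return errConflict // simulate two conflicts before success
		}
		return nil
	}, 5)
	fmt.Println(err == nil, attempts) // → true 3
}
```

With the real API, `update` would wrap a `db.Update(...)` call and compare against `badger.ErrConflict`.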
An `ErrTxnTooBig` will be reported in case the number of pending writes/deletes in
the transaction exceeds a certain limit. In that case, it is best to commit the
transaction and start a new transaction immediately. Here is an example (we are
not checking for errors in some places for simplicity):

```go
updates := make(map[string]string)
txn := db.NewTransaction(true)
for k, v := range updates {
	if err := txn.Set([]byte(k), []byte(v)); err == badger.ErrTxnTooBig {
		_ = txn.Commit()
		txn = db.NewTransaction(true)
		_ = txn.Set([]byte(k), []byte(v))
	}
}
_ = txn.Commit()
```

#### Managing transactions manually
The `DB.View()` and `DB.Update()` methods are wrappers around the
`DB.NewTransaction()` and `Txn.Commit()` methods (or `Txn.Discard()` in case of
read-only transactions). These helper methods will start the transaction,
execute a function, and then safely discard your transaction if an error is
returned. This is the recommended way to use Badger transactions.

However, sometimes you may want to manually create and commit your
transactions. You can use the `DB.NewTransaction()` function directly, which
takes in a boolean argument to specify whether a read-write transaction is
required. For read-write transactions, it is necessary to call `Txn.Commit()`
to ensure the transaction is committed. For read-only transactions, calling
`Txn.Discard()` is sufficient. `Txn.Commit()` also calls `Txn.Discard()`
internally to clean up the transaction, so just calling `Txn.Commit()` is
sufficient for read-write transactions. However, if your code doesn't call
`Txn.Commit()` for some reason (e.g. it returns prematurely with an error),
then please make sure you call `Txn.Discard()` in a `defer` block. Refer to the
code below.

```go
// Start a writable transaction.
txn, err := db.NewTransaction(true)
if err != nil {
	return err
}
defer txn.Discard()

// Use the transaction...
err = txn.Set([]byte("answer"), []byte("42"))
if err != nil {
	return err
}

// Commit the transaction and check for error.
if err := txn.Commit(); err != nil {
	return err
}
```

The first argument to `DB.NewTransaction()` is a boolean stating if the transaction
should be writable.

Badger allows an optional callback to the `Txn.Commit()` method. Normally, the
callback can be set to `nil`, and the method will return after all the writes
have succeeded. However, if this callback is provided, the `Txn.Commit()`
method returns as soon as it has checked for any conflicts. The actual writing
to the disk happens asynchronously, and the callback is invoked once the
writing has finished, or an error has occurred. This can improve the throughput
of the application in some cases. But it also means that a transaction is not
durable until the callback has been invoked with a `nil` error value.

### Using key/value pairs
To save a key/value pair, use the `Txn.Set()` method:

```go
err := db.Update(func(txn *badger.Txn) error {
	err := txn.Set([]byte("answer"), []byte("42"))
	return err
})
```

This will set the value of the `"answer"` key to `"42"`. To retrieve this
value, we can use the `Txn.Get()` method:

```go
err := db.View(func(txn *badger.Txn) error {
	item, err := txn.Get([]byte("answer"))
	if err != nil {
		return err
	}
	val, err := item.Value()
	if err != nil {
		return err
	}
	fmt.Printf("The answer is: %s\n", val)
	return nil
})
```

`Txn.Get()` returns `ErrKeyNotFound` if the key is not found.

Please note that values returned from `Get()` are only valid while the
transaction is open. If you need to use a value outside of the transaction,
then you must use `copy()` to copy it to another byte slice.

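Copying with the built-in `copy` (or an `append` to an empty slice) looks like this; `val` here stands in for a slice returned by `item.Value()` inside a transaction:

```go
package main

import "fmt"

func main() {
	val := []byte("42") // imagine this came from item.Value()

	// Make an independent copy that remains valid after the
	// transaction is closed.
	valCopy := make([]byte, len(val))
	copy(valCopy, val)
	// Equivalent one-liner: valCopy := append([]byte{}, val...)

	val[0] = 'X' // mutating the original no longer affects the copy
	fmt.Printf("%s\n", valCopy) // → 42
}
```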
Use the `Txn.Delete()` method to delete a key.

### Iterating over keys
To iterate over keys, we can use an `Iterator`, which can be obtained using the
`Txn.NewIterator()` method. Iteration happens in byte-wise lexicographical
order.

```go
err := db.View(func(txn *badger.Txn) error {
	opts := badger.DefaultIteratorOptions
	opts.PrefetchSize = 10
	it := txn.NewIterator(opts)
	defer it.Close()
	for it.Rewind(); it.Valid(); it.Next() {
		item := it.Item()
		k := item.Key()
		v, err := item.Value()
		if err != nil {
			return err
		}
		fmt.Printf("key=%s, value=%s\n", k, v)
	}
	return nil
})
```

The iterator allows you to move to a specific point in the list of keys and move
forward or backward through the keys one at a time.

By default, Badger prefetches the values of the next 100 items. You can adjust
that with the `IteratorOptions.PrefetchSize` field. However, setting it to
a value higher than GOMAXPROCS (which we recommend to be 128 or higher)
shouldn't give any additional benefits. You can also turn off the fetching of
values altogether. See the section below on key-only iteration.

#### Prefix scans
To iterate over a key prefix, you can combine `Seek()` and `ValidForPrefix()`:

```go
db.View(func(txn *badger.Txn) error {
	it := txn.NewIterator(badger.DefaultIteratorOptions)
	defer it.Close()
	prefix := []byte("1234")
	for it.Seek(prefix); it.ValidForPrefix(prefix); it.Next() {
		item := it.Item()
		k := item.Key()
		v, err := item.Value()
		if err != nil {
			return err
		}
		fmt.Printf("key=%s, value=%s\n", k, v)
	}
	return nil
})
```

#### Key-only iteration
Badger supports a unique mode of iteration called _key-only_ iteration. It is
several orders of magnitude faster than regular iteration, because it involves
access to the LSM tree only, which is usually resident entirely in RAM. To
enable key-only iteration, you need to set the `IteratorOptions.PrefetchValues`
field to `false`. This can also be used to do sparse reads for selected keys
during an iteration, by calling `item.Value()` only when required.

```go
err := db.View(func(txn *badger.Txn) error {
	opts := badger.DefaultIteratorOptions
	opts.PrefetchValues = false
	it := txn.NewIterator(opts)
	defer it.Close()
	for it.Rewind(); it.Valid(); it.Next() {
		item := it.Item()
		k := item.Key()
		fmt.Printf("key=%s\n", k)
	}
	return nil
})
```

### Garbage Collection
Badger values need to be garbage collected for two reasons:

* Badger keeps values separately from the LSM tree. This means that the compaction operations
that clean up the LSM tree do not touch the values at all. Values need to be cleaned up
separately.

* Concurrent read/write transactions could leave behind multiple values for a single key, because they
are stored with different versions. These could accumulate, and take up unneeded space once the older
versions are no longer needed.

Badger relies on the client to perform garbage collection at a time of their choosing. It provides
the following methods, which can be invoked at an appropriate time:

* `DB.PurgeOlderVersions()`: This method iterates over the database, and cleans up all but the latest
versions of the key-value pairs. It marks the older versions as deleted, which makes them eligible for
garbage collection.
* `DB.PurgeVersionsBelow(key, ts)`: This method is useful for a more targeted clean-up of older versions
of key-value pairs. You can specify a key and a timestamp. All versions of the key older than the timestamp
are marked as deleted, making them eligible for garbage collection.
* `DB.RunValueLogGC()`: This method is designed to do garbage collection while
  Badger is online. Please ensure that you call the `DB.Purge…()` methods
  before invoking this method. It uses any statistics generated by the
  `DB.Purge…()` methods to pick files that are likely to lead to maximum space
  reclamation. It loops until it encounters a file which does not lead to any
  garbage collection.

  It could lead to increased I/O if `DB.RunValueLogGC()` hasn't been called for
  a long time, and many deletes have happened in the meantime. So it is recommended
  that this method be called regularly.

### Database backup
There are two public API methods, `DB.Backup()` and `DB.Load()`, which can be
used to do online backups and restores. Badger v0.9 provides a CLI tool
`badger`, which can do offline backup/restore. Make sure you have `$GOPATH/bin`
in your PATH to use this tool.

The command below will create a version-agnostic backup of the database, to a
file `badger.bak` in the current working directory:

```
badger backup --dir <path/to/badgerdb>
```

To restore `badger.bak` in the current working directory to a new database:

```
badger restore --dir <path/to/badgerdb>
```

See `badger --help` for more details.

If you have a Badger database that was created using v0.8 (or below), you can
use the `badger_backup` tool provided in v0.8.1, and then restore it using the
command above to upgrade your database to work with the latest version.

```
badger_backup --dir <path/to/badgerdb> --backup-file badger.bak
```

### Memory usage
Badger's memory usage can be managed by tweaking several options available in
the `Options` struct that is passed in when opening the database using
`DB.Open`.

- `Options.ValueLogLoadingMode` can be set to `options.FileIO` (instead of the
  default `options.MemoryMap`) to avoid memory-mapping log files. This can be
  useful in environments with low RAM.
- Number of memtables (`Options.NumMemtables`)
  - If you modify `Options.NumMemtables`, also adjust `Options.NumLevelZeroTables` and
    `Options.NumLevelZeroTablesStall` accordingly.
- Number of concurrent compactions (`Options.NumCompactors`)
- Mode in which the LSM tree is loaded (`Options.TableLoadingMode`)
- Size of tables (`Options.MaxTableSize`)
- Size of value log files (`Options.ValueLogFileSize`)

If you want to decrease the memory usage of a Badger instance, tweak these
options (ideally one at a time) until you achieve the desired
memory usage.

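For illustration, a low-memory configuration might look like the fragment below. The specific values are arbitrary examples, not recommendations; `options` refers to the `github.com/dgraph-io/badger/options` package:

```go
opts := badger.DefaultOptions
opts.Dir = "/tmp/badger"
opts.ValueDir = "/tmp/badger"

opts.ValueLogLoadingMode = options.FileIO // don't memory-map value log files
opts.TableLoadingMode = options.FileIO    // don't memory-map LSM tables either
opts.NumMemtables = 2                     // fewer in-memory tables
opts.NumLevelZeroTables = 2               // keep these in step with NumMemtables
opts.NumLevelZeroTablesStall = 4
opts.NumCompactors = 1            // fewer concurrent compactions
opts.MaxTableSize = 16 << 20      // smaller tables (16 MB)
opts.ValueLogFileSize = 256 << 20 // smaller value log files (256 MB)

db, err := badger.Open(opts)
```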
### Statistics
Badger records metrics using the [expvar] package, which is included in the Go
standard library. All the metrics are documented in the [y/metrics.go][metrics]
file.

The `expvar` package adds a handler to the default HTTP server (which has to be
started explicitly), and serves up the metrics at the `/debug/vars` endpoint.
These metrics can then be collected by a system like [Prometheus], to get
better visibility into what Badger is doing.

[expvar]: https://golang.org/pkg/expvar/
[metrics]: https://github.com/dgraph-io/badger/blob/master/y/metrics.go
[Prometheus]: https://prometheus.io/

## Resources

### Blog Posts
1. [Introducing Badger: A fast key-value store written natively in
Go](https://open.dgraph.io/post/badger/)
2. [Make Badger crash resilient with ALICE](https://blog.dgraph.io/post/alice/)
3. [Badger vs LMDB vs BoltDB: Benchmarking key-value databases in Go](https://blog.dgraph.io/post/badger-lmdb-boltdb/)
4. [Concurrent ACID Transactions in Badger](https://blog.dgraph.io/post/badger-txn/)

## Design
Badger was written with these design goals in mind:

- Write a key-value database in pure Go.
- Use the latest research to build the fastest KV database for data sets spanning terabytes.
- Optimize for SSDs.

Badger's design is based on a paper titled _[WiscKey: Separating Keys from
Values in SSD-conscious Storage][wisckey]_.

[wisckey]: https://www.usenix.org/system/files/conference/fast16/fast16-papers-lu.pdf

### Comparisons
| Feature               | Badger                                     | RocksDB                       | BoltDB    |
| --------------------- | ------------------------------------------ | ----------------------------- | --------- |
| Design                | LSM tree with value log                    | LSM tree only                 | B+ tree   |
| High read throughput  | Yes                                        | No                            | Yes       |
| High write throughput | Yes                                        | Yes                           | No        |
| Designed for SSDs     | Yes (with latest research <sup>1</sup>)    | Not specifically <sup>2</sup> | No        |
| Embeddable            | Yes                                        | Yes                           | Yes       |
| Sorted KV access      | Yes                                        | Yes                           | Yes       |
| Pure Go (no Cgo)      | Yes                                        | No                            | Yes       |
| Transactions          | Yes, ACID, concurrent with SSI<sup>3</sup> | Yes (but non-ACID)            | Yes, ACID |
| Snapshots             | Yes                                        | Yes                           | Yes       |
| TTL support           | No                                         | Yes                           | No        |

<sup>1</sup> The [WiscKey paper][wisckey] (on which Badger is based) saw big
wins with separating values from keys, significantly reducing the write
amplification compared to a typical LSM tree.

<sup>2</sup> RocksDB is an SSD-optimized version of LevelDB, which was designed specifically for rotating disks.
As such, RocksDB's design isn't aimed at SSDs.

<sup>3</sup> SSI: Serializable Snapshot Isolation. For more details, see the blog post [Concurrent ACID Transactions in Badger](https://blog.dgraph.io/post/badger-txn/)

### Benchmarks
We have run comprehensive benchmarks against RocksDB, Bolt and LMDB. The
benchmarking code and the detailed logs for the benchmarks can be found in the
[badger-bench] repo. More explanation, including graphs, can be found in the blog posts (linked
above).

[badger-bench]: https://github.com/dgraph-io/badger-bench

## Other Projects Using Badger
Below is a list of known projects that use Badger:

* [0-stor](https://github.com/zero-os/0-stor) - Single device object store.
* [Dgraph](https://github.com/dgraph-io/dgraph) - Distributed graph database.
* [Sandglass](https://github.com/celrenheit/sandglass) - Distributed, horizontally scalable, persistent, time-sorted message queue.
* [Usenet Express](https://usenetexpress.com/) - Serving over 300TB of data with Badger.
* [go-ipfs](https://github.com/ipfs/go-ipfs) - Go client for the InterPlanetary File System (IPFS), a new hypermedia distribution protocol.
* [gorush](https://github.com/appleboy/gorush) - A push notification server written in Go.

If you are using Badger in a project, please send a pull request to add it to the list.

## Frequently Asked Questions
- **My writes are getting stuck. Why?**

This can happen if a long-running iteration has `Prefetch` set to false, but
an `Item::Value` call is made internally in the loop. That causes Badger to
acquire read locks over the value log files to avoid value log GC removing the
file from underneath. As a side effect, this also blocks a new value log GC
file from being created, when the value log file boundary is hit.

Please see GitHub issues [#293](https://github.com/dgraph-io/badger/issues/293)
and [#315](https://github.com/dgraph-io/badger/issues/315).

There are multiple workarounds during iteration:

1. Use `Item::ValueCopy` instead of `Item::Value` when retrieving a value.
1. Set `Prefetch` to true. Badger would then copy over the value and release the
   file lock immediately.
1. When `Prefetch` is false, don't call `Item::Value` and do a pure key-only
   iteration. This might be useful if you just want to delete a lot of keys.
1. Do the writes in a separate transaction after the reads.

- **My writes are really slow. Why?**

Are you creating a new transaction for every single key update? This will lead
to very low throughput. To get the best write performance, batch up multiple writes
inside a transaction using a single `DB.Update()` call. You could also have
multiple such `DB.Update()` calls being made concurrently from multiple
goroutines.

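The batching idea itself, independent of Badger: split the pending keys into chunks and commit one transaction per chunk instead of one per key. A hypothetical sketch, where `commit` stands in for a `db.Update(...)` call over a batch:

```go
package main

import "fmt"

// inBatches calls commit once per chunk of up to batchSize keys,
// instead of once per key.
func inBatches(keys []string, batchSize int, commit func(batch []string) error) (commits int, err error) {
	for start := 0; start < len(keys); start += batchSize {
		end := start + batchSize
		if end > len(keys) {
			end = len(keys)
		}
		if err := commit(keys[start:end]); err != nil {
			return commits, err
		}
		commits++
	}
	return commits, nil
}

func main() {
	keys := make([]string, 10)
	for i := range keys {
		keys[i] = fmt.Sprintf("key%d", i)
	}
	// 10 keys in batches of 4 -> 3 commits instead of 10.
	n, _ := inBatches(keys, 4, func(batch []string) error { return nil })
	fmt.Println("commits:", n) // → commits: 3
}
```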
- **I don't see any disk writes. Why?**

If you're using Badger with `SyncWrites=false`, then your writes might not be written to the value log
and won't get synced to disk immediately. Writes to the LSM tree are done in memory first, before they
get compacted to disk. The compaction would only happen once `MaxTableSize` has been reached. So, if
you're doing a few writes and then checking, you might not see anything on disk. Once you `Close`
the database, you'll see these writes on disk.

- **Reverse iteration doesn't give me the right results.**

Just like forward iteration goes to the first key which is equal to or greater than the SEEK key, reverse iteration goes to the first key which is equal to or less than the SEEK key. Therefore, the SEEK key would not be part of the results. You can typically add a tilde (`~`) as a suffix to the SEEK key to include it in the results. See the following issues: [#436](https://github.com/dgraph-io/badger/issues/436) and [#347](https://github.com/dgraph-io/badger/issues/347).

- **Which instances should I use for Badger?**

We recommend using instances which provide local SSD storage, without any limit
on the maximum IOPS. In AWS, these are storage-optimized instances like i3. They
provide local SSDs which clock 100K IOPS over 4KB blocks easily.

- **I'm getting a closed channel error. Why?**

```
panic: close of closed channel
panic: send on closed channel
```

If you're seeing panics like the above, this would be because you're operating on a closed DB. This can happen if you call `Close()` before sending a write, or multiple times. You should ensure that you only call `Close()` once, and that all your read/write operations finish before closing.

- **Are there any Go-specific settings that I should use?**

We *highly* recommend setting a high number for GOMAXPROCS, which allows Go to
observe the full IOPS throughput provided by modern SSDs. In Dgraph, we have set
it to 128. For more details, [see this
thread](https://groups.google.com/d/topic/golang-nuts/jPb_h3TvlKE/discussion).

- **Are there any Linux-specific settings that I should use?**

We recommend setting the maximum number of open file descriptors to a high number, depending upon the expected size of your data.

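On Linux, the per-process limit is controlled with `ulimit` for the current shell, or `/etc/security/limits.conf` for a persistent setting; `65535` below is just an illustrative value:

```shell
# Show the current soft limit on open file descriptors.
ulimit -n

# Raise it for the current shell session, if the hard limit allows.
ulimit -n 65535 2>/dev/null || echo "raise the hard limit in /etc/security/limits.conf first"
```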
## Contact
- Please use [discuss.dgraph.io](https://discuss.dgraph.io) for questions, feature requests and discussions.
- Please use the [GitHub issue tracker](https://github.com/dgraph-io/badger/issues) for filing bugs or feature requests.
- Join [![Slack Status](http://slack.dgraph.io/badge.svg)](http://slack.dgraph.io).
- Follow us on Twitter [@dgraphlabs](https://twitter.com/dgraphlabs).