
![license](https://img.shields.io/github/license/dtm-labs/rockscache)
![Build Status](https://github.com/dtm-labs/rockscache/actions/workflows/tests.yml/badge.svg?branch=main)
[![codecov](https://codecov.io/gh/dtm-labs/rockscache/branch/main/graph/badge.svg?token=UKKEYQLP3F)](https://codecov.io/gh/dtm-labs/rockscache)
[![Go Report Card](https://goreportcard.com/badge/github.com/dtm-labs/rockscache)](https://goreportcard.com/report/github.com/dtm-labs/rockscache)
[![Go Reference](https://pkg.go.dev/badge/github.com/dtm-labs/rockscache.svg)](https://pkg.go.dev/github.com/dtm-labs/rockscache)

English | [简体中文](https://github.com/dtm-labs/rockscache/blob/main/helper/README-cn.md)

# RocksCache
The first Redis cache library to ensure eventual consistency and strong consistency with the DB.

## Features
- Eventual consistency: ensures eventual consistency of the cache even in extreme cases
- Strong consistency: provides strongly consistent access to applications
- Anti-breakdown: a better solution for cache breakdown
- Anti-penetration
- Anti-avalanche
- Batch query

## Usage
This library uses the most common cache management policy: `update the DB and then delete the cache`.

### Read cache
``` Go
import "github.com/dtm-labs/rockscache"

// create a rockscache client with the default options
rc := rockscache.NewClient(redisClient, rockscache.NewDefaultOptions())

// use Fetch to fetch data
// 1. the first parameter is the key of the data
// 2. the second parameter is the data expiration time
// 3. the third parameter is the data fetch function, called when the cache does not exist
v, err := rc.Fetch("key1", 300*time.Second, func() (string, error) {
  // fetch data from the database or other sources
  return "value1", nil
})
```

### Delete the cache
``` Go
rc.TagAsDeleted(key)
```
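
A typical write path under the `update DB and then delete cache` policy looks like the following sketch. The `db.UpdateUser` call, the `User` type and the `user:101` key layout are illustrative assumptions; only `TagAsDeleted` comes from this library.

``` Go
// update the database first, then tag the cached copy as deleted so that
// the next Fetch reloads the fresh value from the DB
func UpdateUser(ctx context.Context, u User) error {
  if err := db.UpdateUser(ctx, u); err != nil { // hypothetical DB write
    return err
  }
  return rc.TagAsDeleted(fmt.Sprintf("user:%d", u.ID))
}
```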

## Batch usage

### Batch read cache
``` Go
import "github.com/dtm-labs/rockscache"

// create a rockscache client with the default options
rc := rockscache.NewClient(redisClient, rockscache.NewDefaultOptions())

// use FetchBatch to fetch data
// 1. the first parameter is the list of keys
// 2. the second parameter is the data expiration time
// 3. the third parameter is the batch data fetch function, called when the cache does not exist;
//    its parameter is the list of indexes of the keys missing from the cache, which can be used
//    to form a batch query for the missing data, and its return value is a map from index to the
//    corresponding data as a string
v, err := rc.FetchBatch([]string{"key1", "key2", "key3"}, 300*time.Second, func(idxs []int) (map[int]string, error) {
    // fetch data from the database or other sources
    values := make(map[int]string)
    for _, i := range idxs {
        values[i] = fmt.Sprintf("value%d", i)
    }
    return values, nil
})
```

### Batch delete cache
``` Go
rc.TagAsDeletedBatch(keys)
```

## Eventual consistency
With the introduction of caching, consistency problems show up in a distributed system, because the data is now stored in two places at the same time: the database and Redis. For background on this consistency problem and an introduction to popular Redis caching solutions, see:
- [https://yunpengn.github.io/blog/2019/05/04/consistent-redis-sql/](https://yunpengn.github.io/blog/2019/05/04/consistent-redis-sql/)

However, all the caching solutions we have seen so far fail to address the following data inconsistency scenario unless versioning is introduced at the application level.

<img alt="cache-version-problem" src="https://en.dtm.pub/assets/cache-version.39d3aace.svg" height=400 />

Even if you use a lock to do the updating, there are still corner cases that can cause inconsistency.

<img alt="redis cache inconsistency" src="https://martin.kleppmann.com/2016/02/unsafe-lock.png" height=400 />

### Solution
This project brings you a brand new solution that guarantees data consistency between the cache and the database without introducing versioning. The solution is the first of its kind; it has been patented and is now open sourced for everyone to use.

If the developer calls `Fetch` when reading data and makes sure to call `TagAsDeleted` after updating the database, then the cache guarantees eventual consistency. When step 5 in the diagram above writes v1 to the cache, that write will eventually be ignored by this solution.
- See [Atomicity of DB and cache operations](https://en.dtm.pub/app/cache.html#atomic) for how to ensure that TagAsDeleted is called after updating the database.
- See [Cache consistency](https://en.dtm.pub/app/cache.html) for why the data write is ignored when step 5 writes v1 to the cache.

For a full runnable example, see [dtm-cases/cache](https://github.com/dtm-labs/dtm-cases/tree/main/cache).

## Strongly consistent access
If your application needs caching with strong consistency rather than eventual consistency, you can enable the option `StrongConsistency`; the access method stays the same.
``` Go
rc.Options.StrongConsistency = true
```

Refer to [cache consistency](https://en.dtm.pub/app/cache.html) for the detailed principles and [dtm-cases/cache](https://github.com/dtm-labs/dtm-cases/tree/main/cache) for examples.

## Downgrading and strong consistency
The library supports downgrading through two switches:
- `DisableCacheRead`: turns off cache reads, default `false`; if on, `Fetch` does not read from the cache but calls `fn` directly to fetch the data
- `DisableCacheDelete`: turns off cache deletes, default `false`; if on, `TagAsDeleted` does nothing and returns immediately

When Redis has a problem and needs to be downgraded, you can control this with these two switches, as shown in the sketch below. If you need to maintain strongly consistent access even during a downgrade, rockscache supports that as well.
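
A minimal sketch of flipping the downgrade switches at runtime; the trigger for downgrading (e.g. a Redis health check) is up to the application and is not shown here.

``` Go
// degrade when Redis is unhealthy: stop reading from and deleting the cache,
// so Fetch falls through to fn and TagAsDeleted becomes a no-op
rc.Options.DisableCacheRead = true
rc.Options.DisableCacheDelete = true

// restore normal operation once Redis has recovered
rc.Options.DisableCacheRead = false
rc.Options.DisableCacheDelete = false
```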

Refer to [cache consistency](https://en.dtm.pub/app/cache.html) for the detailed principles and [dtm-cases/cache](https://github.com/dtm-labs/dtm-cases/tree/main/cache) for examples.

## Anti-Breakdown
Using the cache through this library comes with an anti-breakdown feature. Within a process, `Fetch` uses `singleflight` to avoid sending multiple requests to Redis for the same key; across processes, a distributed lock in the Redis layer avoids multiple requests being sent to the DB at the same time, so that only one data query request ends up at the DB.

The project's anti-breakdown also provides a faster response time when hot cached data is deleted. If a piece of hot data takes 3s to compute, a typical anti-breakdown solution would make all requests for this data wait those 3s, whereas this project's solution returns a result immediately.
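
A minimal sketch of the anti-breakdown behavior, assuming `rc` is the client created above and the key is not yet cached: many concurrent `Fetch` calls for the same key should cause the fetch function to run far fewer times than the number of callers, ideally once (uses the standard `sync`, `sync/atomic`, `time` and `fmt` packages).

``` Go
var calls int64
var wg sync.WaitGroup
for i := 0; i < 100; i++ {
  wg.Add(1)
  go func() {
    defer wg.Done()
    _, _ = rc.Fetch("hot:key", 300*time.Second, func() (string, error) {
      atomic.AddInt64(&calls, 1)         // count how often the data source is actually queried
      time.Sleep(100 * time.Millisecond) // simulate a slow DB query
      return "value", nil
    })
  }()
}
wg.Wait()
fmt.Println("fetch function executed", atomic.LoadInt64(&calls), "time(s)")
```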

## Anti-Penetration
Using the cache through this library comes with an anti-penetration feature. When `fn` in `Fetch` returns an empty string, this is treated as an empty result and is cached with the expiry time set to `EmptyExpire` from the rockscache options.

`EmptyExpire` defaults to 60s; if it is set to 0, anti-penetration is turned off and empty results are not cached.
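
A sketch of caching "not found" results; the `db.GetUser` lookup and the error handling around it are illustrative assumptions, while `EmptyExpire` is a rockscache option.

``` Go
opts := rockscache.NewDefaultOptions()
opts.EmptyExpire = 30 * time.Second // cache empty results for 30s (set to 0 to disable)
rc := rockscache.NewClient(redisClient, opts)

v, err := rc.Fetch("user:101", 300*time.Second, func() (string, error) {
  name, err := db.GetUser(101) // hypothetical DB lookup returning (string, error)
  if errors.Is(err, sql.ErrNoRows) {
    return "", nil // empty string marks "not found" and is cached for EmptyExpire
  }
  return name, err
})
```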

## Anti-Avalanche
Using the cache through this library comes with anti-avalanche protection. `RandomExpireAdjustment` in the rockscache options defaults to 0.1: with an expiry time of 600s, the actual expiry is set to a random value within `540s - 600s`, so that keys written at the same time do not all expire at the same moment.
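
A minimal sketch of adjusting the expiry jitter; the value 0.2 is just an example.

``` Go
opts := rockscache.NewDefaultOptions()
opts.RandomExpireAdjustment = 0.2 // for a 600s expiry, the actual TTL falls within 480s - 600s
rc := rockscache.NewClient(redisClient, opts)
```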

## Contact us

### Chat Group

Join the chat via [https://discord.gg/dV9jS5Rb33](https://discord.gg/dV9jS5Rb33).

## Give a star! ⭐

If you think this project is interesting or helpful to you, please give it a star!