github.com/mailgun/holster/v4@v4.20.0/collections/README.md

## LRUCache
Implements a Least Recently Used cache with optional TTL and stats collection.

This is an LRU cache based on [github.com/golang/groupcache/lru](https://github.com/golang/groupcache/tree/master/lru), expanded
with the following:

* `Peek()` - Get the value without updating the expiration, last-used time, or stats
* `Keys()` - Get a list of keys at this point in time
* `Stats()` - Returns stats about the current state of the cache
* `AddWithTTL()` - Adds a value to the cache with an expiration time
* `Each()` - Concurrent, non-blocking access to each item in the cache
* `Map()` - Efficient blocking modification of each item in the cache

TTL is evaluated during calls to `.Get()`. If the entry is past the requested TTL, `.Get()`
removes the entry from the cache, counts a miss, and returns not `ok`.

```go
cache := collections.NewLRUCache(5000)
go func() {
    // Create the ticker once; calling time.Tick() inside the loop
    // would allocate a new ticker on every iteration.
    tick := time.NewTicker(time.Second * 5)
    defer tick.Stop()
    for {
        select {
        // Send cache stats every 5 seconds
        case <-tick.C:
            stats := cache.GetStats()
            metrics.Gauge(metrics.Metric("demo", "cache", "size"), int64(stats.Size), 1)
            metrics.Gauge(metrics.Metric("demo", "cache", "hit"), stats.Hit, 1)
            metrics.Gauge(metrics.Metric("demo", "cache", "miss"), stats.Miss, 1)
        }
    }
}()

cache.Add("key", "value")
value, ok := cache.Get("key")

for _, key := range cache.Keys() {
    value, ok := cache.Get(key)
    if ok {
        fmt.Printf("Key: %+v Value %+v\n", key, value)
    }
}
```

## ExpireCache
ExpireCache is a cache which expires entries only after two conditions are met:

1. The specified TTL has expired
2. The item has been processed with `ExpireCache.Each()`

This is an unbounded cache which guarantees each item in the cache
has been processed before removal. This cache is useful if you need an
unbounded queue that can also act like an LRU cache.

Every time an item is touched by `.Get()` or `.Set()` the duration is
updated, which ensures items in frequent use stay in the cache. Processing
the cache with `.Each()` can modify an item in the cache without
updating its expiration time by using the `.Update()` method.

The cache can also return statistics which can be used to graph cache usage
and size.

*NOTE: Because this is an unbounded cache, the user MUST process the cache
with `.Each()` regularly! Otherwise the cache items will never expire and the
cache will eventually consume all the memory on the system.*

```go
// How often the cache is processed
syncInterval := time.Second * 10

// In this example the cache TTL is slightly less than the sync interval
// such that, before the first sync, items that were only accessed once
// between sync intervals should expire. This technique is useful if you
// have a long syncInterval and are only interested in keeping items
// that were accessed during the sync cycle
cache := collections.NewExpireCache((syncInterval / 5) * 4)

go func() {
    // Create the tickers once; calling time.Tick() inside the loop
    // would allocate a new ticker on every iteration.
    syncTick := time.NewTicker(syncInterval)
    defer syncTick.Stop()
    statsTick := time.NewTicker(time.Second * 5)
    defer statsTick.Stop()
    for {
        select {
        // Sync the cache with the database every 10 seconds
        // Items in the cache will not be expired until this completes without error
        case <-syncTick.C:
            // Each() uses FanOut() to run several of these concurrently; in this
            // example we are capped at running 10 concurrently. Use 0 or 1 if you
            // don't need concurrent FanOut
            cache.Each(10, func(key interface{}, value interface{}) error {
                item := value.(Item)
                return db.ExecuteQuery("insert into tbl (id, field) values (?, ?)",
                    item.Id, item.Field)
            })
        // Periodically send stats about the cache
        case <-statsTick.C:
            stats := cache.GetStats()
            metrics.Gauge(metrics.Metric("demo", "cache", "size"), int64(stats.Size), 1)
            metrics.Gauge(metrics.Metric("demo", "cache", "hit"), stats.Hit, 1)
            metrics.Gauge(metrics.Metric("demo", "cache", "miss"), stats.Miss, 1)
        }
    }
}()

cache.Add("domain-id", Item{Id: 1, Field: "value"})
item, ok := cache.Get("domain-id")
if ok {
    fmt.Printf("%+v\n", item.(Item))
}
```

## Priority Queue
Provides a Priority Queue implementation as described [here](https://en.wikipedia.org/wiki/Priority_queue)

```go
queue := collections.NewPriorityQueue()

queue.Push(&collections.PQItem{
    Value: "thing3",
    Priority: 3,
})

queue.Push(&collections.PQItem{
    Value: "thing1",
    Priority: 1,
})

queue.Push(&collections.PQItem{
    Value: "thing2",
    Priority: 2,
})

// Pops items off the queue according to priority instead of Push() order
item := queue.Pop()

fmt.Printf("Item: %s", item.Value.(string))

// Output: Item: thing1
```