
# Extensions to the standard `sync` package

*forked from puzpuzpuz/xsync v20240226 v3.1.0*

## Changes:

- ~~Added `func NewHashMapOf[K comparable, V any](hasher ...func(K) uint64) HashMapOf[K, V]` as a unified constructor that picks an xxHash-based hasher according to the key type~~
- Kept support for Go versions below 1.18

**The official release `v3.0.0` already unifies the constructors and ships a built-in hasher generator, so the change above is no longer needed; the upstream original can be used directly.**

[![GoDoc reference](https://img.shields.io/badge/godoc-reference-blue.svg)](https://pkg.go.dev/github.com/puzpuzpuz/xsync/v3)
[![GoReport](https://goreportcard.com/badge/github.com/puzpuzpuz/xsync/v3)](https://goreportcard.com/report/github.com/puzpuzpuz/xsync/v3)
[![codecov](https://codecov.io/gh/puzpuzpuz/xsync/branch/main/graph/badge.svg)](https://codecov.io/gh/puzpuzpuz/xsync)

# xsync

Concurrent data structures for Go. Aims to provide more scalable alternatives for some of the data structures from the standard `sync` package, and more.

Covered with tests following the approach described [here](https://puzpuzpuz.dev/testing-concurrent-code-for-fun-and-profit).

## Benchmarks

Benchmark results may be found [here](BENCHMARKS.md). I'd like to thank [@felixge](https://github.com/felixge) who kindly ran the benchmarks on a beefy multicore machine.

Also, a non-scientific, unfair benchmark comparing Java's [j.u.c.ConcurrentHashMap](https://docs.oracle.com/en/java/javase/17/docs/api/java.base/java/util/concurrent/ConcurrentHashMap.html) and `xsync.MapOf` is available [here](https://puzpuzpuz.dev/concurrent-map-in-go-vs-java-yet-another-meaningless-benchmark).

## Usage

The latest upstream xsync major version is v3; this fork ships it under `github.com/fufuok/utils/xsync`, so no `/v3` suffix is needed when importing:

```go
import (
	"github.com/fufuok/utils/xsync"
)
```

*Note for v1 and v2 users*: v1 and v2 support is discontinued, so please upgrade to v3. While the API has some breaking changes, the migration should be trivial.

### Counter

A `Counter` is a striped `int64` counter inspired by the `j.u.c.a.LongAdder` class from the Java standard library.

```go
c := xsync.NewCounter()
// increment and decrement the counter
c.Inc()
c.Dec()
// read the current value
v := c.Value()
```

Scales better than a single atomically updated `int64` counter in high-contention scenarios.

### Map

`Map` is a concurrent hash-table-based map. It follows the interface of `sync.Map` with a number of valuable extensions like `Compute` or `Size`.

```go
m := xsync.NewMap()
m.Store("foo", "bar")
v, ok := m.Load("foo")
s := m.Size()
```

`Map` uses a modified version of Cache-Line Hash Table (CLHT) data structure: https://github.com/LPD-EPFL/CLHT

CLHT is built around the idea of organizing the hash table in cache-line-sized buckets, so that on all modern CPUs update operations complete with minimal cache-line transfer. Also, `Get` operations are obstruction-free and involve no writes to shared memory, hence no mutexes or any other sort of locks. Due to this design, in all considered scenarios `Map` outperforms `sync.Map`.

One important difference with `sync.Map` is that only string keys are supported. That's because the Go standard library does not expose the built-in hash functions for `interface{}` values.

`MapOf[K, V]` is an implementation with parametrized key and value types. While it's still a CLHT-inspired hash map, `MapOf`'s design is quite different from `Map`. As a result, less GC pressure and fewer atomic operations on reads.

```go
m := xsync.NewMapOf[string, string]()
m.Store("foo", "bar")
v, ok := m.Load("foo")
```

One important difference with `Map` is that `MapOf` supports arbitrary `comparable` key types:

```go
type Point struct {
	x int32
	y int32
}
m := xsync.NewMapOf[Point, int]()
m.Store(Point{42, 42}, 42)
v, ok := m.Load(Point{42, 42})
```

### MPMCQueue

An `MPMCQueue` is a bounded multi-producer multi-consumer concurrent queue.

```go
q := xsync.NewMPMCQueue(1024)
// producer inserts an item into the queue
q.Enqueue("foo")
// optimistic insertion attempt; doesn't block
inserted := q.TryEnqueue("bar")
// consumer obtains an item from the queue
item := q.Dequeue() // interface{} pointing to a string
// optimistic obtain attempt; doesn't block
item, ok := q.TryDequeue()
```

`MPMCQueueOf[I]` is an implementation with parametrized item type. It is available for Go 1.19 or later.

```go
q := xsync.NewMPMCQueueOf[string](1024)
q.Enqueue("foo")
item := q.Dequeue() // string
```

The queue is based on the algorithm from the [MPMCQueue](https://github.com/rigtorp/MPMCQueue) C++ library which in its turn references D.Vyukov's [MPMC queue](https://www.1024cores.net/home/lock-free-algorithms/queues/bounded-mpmc-queue). According to the following [classification](https://www.1024cores.net/home/lock-free-algorithms/queues), the queue is array-based, fails on overflow, provides causal FIFO, and has blocking producers and consumers.

The idea of the algorithm is to allow parallelism for concurrent producers and consumers by introducing the notion of tickets, i.e. values of two counters, one for producers and one for consumers. An atomic increment of one of those counters is the only noticeable contention point in queue operations. The rest of the operation avoids contention on writes thanks to the turn-based read/write access for each of the queue items.

In essence, `MPMCQueue` is a specialized queue for scenarios where there are multiple concurrent producers and consumers of a single queue running on a large multicore machine.

To get the optimal performance, you may want to set the queue size to be large enough, say, an order of magnitude greater than the number of producers/consumers, to allow producers and consumers to progress with their queue operations in parallel most of the time.

### RBMutex

An `RBMutex` is a reader-biased reader/writer mutual exclusion lock. The lock can be held by many readers or a single writer.

```go
mu := xsync.NewRBMutex()
// reader lock calls return a token
t := mu.RLock()
// the token must be later used to unlock the mutex
mu.RUnlock(t)
// writer locks are the same as in sync.RWMutex
mu.Lock()
mu.Unlock()
```

`RBMutex` is based on a modified version of BRAVO (Biased Locking for Reader-Writer Locks) algorithm: https://arxiv.org/pdf/1810.01553.pdf

The idea of the algorithm is to build on top of an existing reader-writer mutex and introduce a fast path for readers. On the fast path, reader lock attempts are sharded over an internal array based on the reader identity (a token in the case of Go). This means that readers do not contend over a single atomic counter like they do in, say, `sync.RWMutex`, allowing for better scalability in terms of cores.

Hence, by design, `RBMutex` is a specialized mutex for scenarios, such as caches, where the vast majority of locks are acquired by readers and write lock acquisitions are infrequent. In such scenarios, `RBMutex` should perform better than `sync.RWMutex` on large multicore machines.

`RBMutex` extends `sync.RWMutex` internally and uses it as the "reader bias disabled" fallback, so the same semantics apply. The only noticeable difference is the reader token returned by `RLock` and consumed by `RUnlock`.

## License

Licensed under MIT.