[![GoDoc reference](https://img.shields.io/badge/godoc-reference-blue.svg)](https://pkg.go.dev/github.com/puzpuzpuz/xsync/v2)
[![GoReport](https://goreportcard.com/badge/github.com/puzpuzpuz/xsync/v2)](https://goreportcard.com/report/github.com/puzpuzpuz/xsync/v2)
[![codecov](https://codecov.io/gh/puzpuzpuz/xsync/branch/main/graph/badge.svg)](https://codecov.io/gh/puzpuzpuz/xsync)

# xsync

Concurrent data structures for Go. The library aims to provide more scalable alternatives for some of the data structures from the standard `sync` package, as well as structures that have no standard-library counterpart.

Covered with tests following the approach described [here](https://puzpuzpuz.dev/testing-concurrent-code-for-fun-and-profit).

## Benchmarks

Benchmark results may be found [here](BENCHMARKS.md). I'd like to thank [@felixge](https://github.com/felixge), who kindly ran the benchmarks on a beefy multicore machine.

Also, a non-scientific, unfair benchmark comparing Java's [j.u.c.ConcurrentHashMap](https://docs.oracle.com/en/java/javase/17/docs/api/java.base/java/util/concurrent/ConcurrentHashMap.html) and `xsync.MapOf` is available [here](https://puzpuzpuz.dev/concurrent-map-in-go-vs-java-yet-another-meaningless-benchmark).

## Usage

The latest xsync major version is v2, so the `/v2` suffix should be used when importing the library:

```go
import (
	"github.com/puzpuzpuz/xsync/v2"
)
```

*Note for v1 users*: v1 support is discontinued, so please upgrade to v2. While the API has some breaking changes, the migration should be trivial.

### Counter

A `Counter` is a striped `int64` counter inspired by the `j.u.c.a.LongAdder` class from the Java standard library.
```go
c := xsync.NewCounter()
// increment and decrement the counter
c.Inc()
c.Dec()
// read the current value
v := c.Value()
```

Works better than a single atomically updated `int64` counter in high contention scenarios.

### Map

A `Map` is a concurrent hash table based map. It follows the interface of `sync.Map` with a number of valuable extensions like `Compute` or `Size`.

```go
m := xsync.NewMap()
m.Store("foo", "bar")
v, ok := m.Load("foo")
s := m.Size()
```

`Map` uses a modified version of the Cache-Line Hash Table (CLHT) data structure: https://github.com/LPD-EPFL/CLHT

CLHT is built around the idea of organizing the hash table in cache-line-sized buckets, so that on all modern CPUs update operations complete with minimal cache-line transfer. Also, `Get` operations are obstruction-free and involve no writes to shared memory, hence no mutexes or any other sort of locks. Due to this design, in all considered scenarios `Map` outperforms `sync.Map`.

One important difference from `sync.Map` is that only string keys are supported. That's because the Go standard library does not expose the built-in hash functions for `interface{}` values.

`MapOf[K, V]` is an implementation with a parametrized value type. It is available for Go 1.18 or later. While it's still a CLHT-inspired hash map, `MapOf`'s design is quite different from `Map`. As a result, it puts less pressure on the GC and performs fewer atomic operations on reads.
```go
m := xsync.NewMapOf[string]()
m.Store("foo", "bar")
v, ok := m.Load("foo")
```

One important difference from `Map` is that `MapOf` supports arbitrary `comparable` key types:

```go
type Point struct {
	x int32
	y int32
}
m := xsync.NewTypedMapOf[Point, int](func(seed maphash.Seed, p Point) uint64 {
	// provide a hash function when creating the MapOf;
	// we recommend using the hash/maphash package for the function
	var h maphash.Hash
	h.SetSeed(seed)
	binary.Write(&h, binary.LittleEndian, p.x)
	hash := h.Sum64()
	h.Reset()
	binary.Write(&h, binary.LittleEndian, p.y)
	return 31*hash + h.Sum64()
})
m.Store(Point{42, 42}, 42)
v, ok := m.Load(Point{42, 42})
```

### MPMCQueue

An `MPMCQueue` is a bounded multi-producer multi-consumer concurrent queue.

```go
q := xsync.NewMPMCQueue(1024)
// producer inserts an item into the queue
q.Enqueue("foo")
// optimistic insertion attempt; doesn't block
inserted := q.TryEnqueue("bar")
// consumer obtains an item from the queue
item := q.Dequeue() // interface{} pointing to a string
// optimistic obtain attempt; doesn't block
item, ok := q.TryDequeue()
```

`MPMCQueueOf[I]` is an implementation with a parametrized item type. It is available for Go 1.19 or later.

```go
q := xsync.NewMPMCQueueOf[string](1024)
q.Enqueue("foo")
item := q.Dequeue() // string
```

The queue is based on the algorithm from the [MPMCQueue](https://github.com/rigtorp/MPMCQueue) C++ library, which in turn references D. Vyukov's [MPMC queue](https://www.1024cores.net/home/lock-free-algorithms/queues/bounded-mpmc-queue). According to the following [classification](https://www.1024cores.net/home/lock-free-algorithms/queues), the queue is array-based, fails on overflow, provides causal FIFO, and has blocking producers and consumers.
The idea of the algorithm is to allow parallelism for concurrent producers and consumers by introducing the notion of tickets, i.e. values of two counters, one for producers and one for consumers. An atomic increment of one of those counters is the only noticeable contention point in queue operations. The rest of the operation avoids contention on writes thanks to the turn-based read/write access for each of the queue items.

In essence, `MPMCQueue` is a specialized queue for scenarios where there are multiple concurrent producers and consumers of a single queue running on a large multicore machine.

To get optimal performance, you may want to set the queue size to be large enough, say, an order of magnitude greater than the number of producers/consumers, to allow producers and consumers to progress with their queue operations in parallel most of the time.

### RBMutex

An `RBMutex` is a reader-biased reader/writer mutual exclusion lock. The lock can be held by many readers or a single writer.

```go
mu := xsync.NewRBMutex()
// reader lock calls return a token
t := mu.RLock()
// the token must be later used to unlock the mutex
mu.RUnlock(t)
// writer locks are the same as in sync.RWMutex
mu.Lock()
mu.Unlock()
```

`RBMutex` is based on a modified version of the BRAVO (Biased Locking for Reader-Writer Locks) algorithm: https://arxiv.org/pdf/1810.01553.pdf

The idea of the algorithm is to build on top of an existing reader-writer mutex and introduce a fast path for readers. On the fast path, reader lock attempts are sharded over an internal array based on the reader identity (a token in the Go implementation). This means that readers do not contend over a single atomic counter like they do in, say, `sync.RWMutex`, allowing for better scalability in terms of cores.
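To make the sharded fast path concrete, here is a deliberately simplified, self-contained sketch of the idea: readers mark per-slot counters and re-check a bias flag, while the writer disables the flag, takes the underlying `sync.RWMutex`, and waits for fast-path readers to drain. This is not RBMutex's actual implementation; slot assignment, bias revocation policy, and cache-line padding are all simplified away:

```go
package main

import (
	"fmt"
	"runtime"
	"sync"
	"sync/atomic"
)

const shards = 8

type shardedRWLock struct {
	readBias int32         // 1 while the reader fast path is enabled
	readers  [shards]int64 // per-slot reader marks (padded in real code)
	rw       sync.RWMutex  // fallback reader path and writer lock
}

func newShardedRWLock() *shardedRWLock {
	return &shardedRWLock{readBias: 1}
}

// rlock reports whether the fast path was taken; the caller must pass
// the same slot and result to runlock.
func (l *shardedRWLock) rlock(slot int) bool {
	if atomic.LoadInt32(&l.readBias) == 1 {
		atomic.AddInt64(&l.readers[slot%shards], 1)
		// re-check: a writer may have disabled the bias meanwhile
		if atomic.LoadInt32(&l.readBias) == 1 {
			return true // fast path: no shared counter touched
		}
		atomic.AddInt64(&l.readers[slot%shards], -1)
	}
	l.rw.RLock() // slow path
	return false
}

func (l *shardedRWLock) runlock(slot int, fast bool) {
	if fast {
		atomic.AddInt64(&l.readers[slot%shards], -1)
		return
	}
	l.rw.RUnlock()
}

func (l *shardedRWLock) lock() {
	l.rw.Lock()
	atomic.StoreInt32(&l.readBias, 0) // disable the reader fast path
	for i := range l.readers {        // wait for fast-path readers to drain
		for atomic.LoadInt64(&l.readers[i]) != 0 {
			runtime.Gosched()
		}
	}
}

func (l *shardedRWLock) unlock() {
	atomic.StoreInt32(&l.readBias, 1) // re-enable the fast path
	l.rw.Unlock()
}

func main() {
	l := newShardedRWLock()
	var shared int
	var wg sync.WaitGroup
	for r := 0; r < 4; r++ {
		wg.Add(1)
		go func(slot int) {
			defer wg.Done()
			for i := 0; i < 1000; i++ {
				fast := l.rlock(slot)
				_ = shared // read-side critical section
				l.runlock(slot, fast)
			}
		}(r)
	}
	for i := 0; i < 100; i++ {
		l.lock()
		shared++
		l.unlock()
	}
	wg.Wait()
	fmt.Println(shared) // prints 100
}
```

Uncontended readers in this sketch touch only their own slot, which is what removes the single-counter bottleneck; the writer pays the cost of scanning all slots, matching the read-mostly bias described above.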
Hence, by design `RBMutex` is a specialized mutex for scenarios, such as caches, where the vast majority of locks are acquired by readers and write lock acquire attempts are infrequent. In such scenarios, `RBMutex` should perform better than `sync.RWMutex` on large multicore machines.

`RBMutex` extends `sync.RWMutex` internally and uses it as the "reader bias disabled" fallback, so the same semantics apply. The only noticeable difference is in the reader tokens returned by `RLock` and consumed by `RUnlock`.

## License

Licensed under MIT.