# Glock

Lock modules and a rate limiter, using Redis for coordination.

[Go Reference](https://pkg.go.dev/github.com/teng231/glock)

## install

``` bash
go get github.com/teng231/glock
```

## usage

There are many structs and methods; not all of them are shown here, but they should cover most locking needs.

### counter lock

A counter lock is a distributed counter that can count up or down. When counting up you have to check the value yourself, and that check is not atomic across nodes, so counting down (a countdown) is the safer pattern. Helpful for counting remaining turns.

``` go
func Run() {
    cd, err := StartCountLock(&ConnectConfig{
        RedisAddr: "localhost:6379",
        RedisPw:   "",
        Prefix:    "test3_",
        Timelock:  time.Minute,
        RedisDb:   1,
    })
    if err != nil {
        log.Fatal(err)
    }
    cd.Start("test1", 100, time.Minute)
    cur, _ := cd.DecrBy("test1", 1)
    log.Print(cur)
    if cur != 99 {
        log.Fatal("unexpected counter value: ", cur)
    }
}
```

### distributed lock

A distributed lock is shared by all nodes. Requests on the same `key` run one at a time: if two requests arrive at the same time, the first one runs, and the second continues only after the first has finished.

``` go
func Run() {
    dl, err := StartDistributedLock(&ConnectConfig{
        RedisAddr: "localhost:6379",
        RedisPw:   "",
        Prefix:    "test3_",
        Timelock:  time.Minute,
        RedisDb:   1,
    })
    if err != nil {
        panic(err)
    }
    lctx, err := dl.Lock("test-redsync")
    if err != nil {
        panic(err)
    }
    if err := dl.Unlock(lctx); err != nil {
        panic(err)
    }
}
```

Under heavy traffic a distributed lock is not a good fit: waiters keep polling Redis to check the lock, and if Redis goes down the whole locking system goes down with it.

### kmutex

Kmutex is a local, per-key lock with the same behaviour as the distributed lock, built on a mutex and a wait group. You can combine it with the distributed lock (see the sketch at the end of this README).

``` go
func Run() {
    km := CreateKmutexInstance()
    km.Lock("key")
    defer km.Unlock("key")
}
```

### limiter

A rate limiter. It works like the counter lock, but the count is tied to a time window.

``` go
func Run() {
    r, err := StartLimiter(&ConnectConfig{
        RedisAddr: "localhost:6379",
        RedisPw:   "",
        Timelock:  time.Minute,
        RedisDb:   1,
    })
    if err != nil {
        panic(err)
    }
    // call Allow 6 times in the same second; with a limit of 5 per second
    // the last call is expected to return an error
    for i := 0; i < 6; i++ {
        if err := r.Allow("key1", Second, 5); err != nil {
            log.Print(err)
        }
    }
    time.Sleep(1 * time.Second)
    // a new window has started, so this call is allowed again
    if err := r.Allow("key1", Second, 5); err != nil {
        log.Print(err)
    }
}
```

### optimistic lock

Like the distributed lock, but when a second request arrives while the key is already held, `Lock` fails immediately instead of waiting. This keeps Redis traffic lower.

``` go
func Run() {
    ol, err := StartOptimisticLock(&ConnectConfig{
        RedisAddr: "localhost:6379",
        RedisPw:   "",
        Prefix:    "test3_",
        Timelock:  time.Minute,
        RedisDb:   1,
    })
    if err != nil {
        panic(err)
    }
    if err := ol.Lock("key1"); err != nil {
        log.Print(err)
    }
}
```
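As a follow-up to the optimistic lock example above, here is a minimal sketch of one way a caller can deal with contention: retry a few times, then give up. The retry count and delay are arbitrary choices, not part of the library; only the `StartOptimisticLock`/`Lock` calls and the `ConnectConfig` fields come from the example above.

``` go
func RunWithRetry() {
    ol, err := StartOptimisticLock(&ConnectConfig{
        RedisAddr: "localhost:6379",
        RedisPw:   "",
        Prefix:    "test3_",
        Timelock:  time.Minute,
        RedisDb:   1,
    })
    if err != nil {
        panic(err)
    }
    // retry a few times before giving up; the count and delay are arbitrary
    for i := 0; i < 3; i++ {
        if err = ol.Lock("key1"); err == nil {
            break
        }
        time.Sleep(100 * time.Millisecond)
    }
    if err != nil {
        log.Print("key1 is still locked, giving up: ", err)
        return
    }
    // ... critical section ...
}
```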
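### kmutex + distributed lock

The kmutex section above mentions combining the local lock with the distributed one. Below is a minimal sketch of that idea, assuming only the `CreateKmutexInstance`, `StartDistributedLock`, `Lock`/`Unlock` signatures and `ConnectConfig` fields already shown in this README; the key name and the critical-section comment are placeholders.

``` go
func Run() {
    km := CreateKmutexInstance()
    dl, err := StartDistributedLock(&ConnectConfig{
        RedisAddr: "localhost:6379",
        RedisPw:   "",
        Prefix:    "test3_",
        Timelock:  time.Minute,
        RedisDb:   1,
    })
    if err != nil {
        panic(err)
    }

    key := "user:1"

    // serialize goroutines on this node first, so only one of them
    // hits Redis for this key at a time
    km.Lock(key)
    defer km.Unlock(key)

    // then take the lock shared with the other nodes
    lctx, err := dl.Lock(key)
    if err != nil {
        panic(err)
    }
    defer func() {
        if err := dl.Unlock(lctx); err != nil {
            log.Print(err)
        }
    }()

    // ... critical section ...
}
```

The local lock absorbs contention between goroutines in the same process, so each process sends at most one lock request per key to Redis at a time.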