> **This project is a fork of https://github.com/sethvargo/go-limiter**

# Go Rate Limiter

[![GoDoc](https://img.shields.io/badge/go-documentation-blue.svg?style=flat-square)](https://pkg.go.dev/mod/github.com/sethvargo/go-limiter)
[![GitHub Actions](https://img.shields.io/github/workflow/status/sethvargo/go-limiter/Test?style=flat-square)](https://github.com/sethvargo/go-limiter/actions?query=workflow%3ATest)


This package provides a rate limiter in Go (Golang), suitable for use in HTTP
servers and distributed workloads. It's specifically designed for
configurability and flexibility without compromising throughput.


## Usage

1. Create a store. This example uses an in-memory store:

    ```golang
    store, err := memorystore.New(&memorystore.Config{
      // Number of tokens allowed per interval.
      Tokens: 15,

      // Interval until tokens reset.
      Interval: time.Minute,
    })
    if err != nil {
      log.Fatal(err)
    }
    ```

1. Determine the limit by calling `Take()` on the store:

    ```golang
    ctx := context.Background()

    // key is the unique value upon which you want to rate limit, like an IP
    // or MAC address.
    key := "127.0.0.1"
    tokens, remaining, reset, ok, err := store.Take(ctx, key)
    if err != nil {
      return err
    }

    // tokens is the configured tokens (15 in this example).
    _ = tokens

    // remaining is the number of tokens remaining (14 now).
    _ = remaining

    // reset is the unix nanoseconds at which the tokens will replenish.
    _ = reset

    // ok indicates whether the take was successful. If the key is over the
    // configured limit, ok will be false.
    _ = ok

    // Here's a more realistic example:
    if !ok {
      return fmt.Errorf("rate limited: retry at %v", reset)
    }
    ```

There's also HTTP middleware via the `httplimit` package.
After creating a store, wrap Go's standard HTTP handler:

```golang
middleware, err := httplimit.NewMiddleware(store, httplimit.IPKeyFunc())
if err != nil {
  log.Fatal(err)
}

mux1 := http.NewServeMux()
mux1.Handle("/", middleware.Handle(doWork)) // doWork is your original handler
```

The middleware automatically sets the following headers, conforming to the
latest RFCs:

- `X-RateLimit-Limit` - configured rate limit (constant).
- `X-RateLimit-Remaining` - number of remaining tokens in the current interval.
- `X-RateLimit-Reset` - UTC time when the limit resets.
- `Retry-After` - time at which to retry.


## Why _another_ Go rate limiter?

I really wanted to learn more about the topic and possible implementations. The
existing packages in the Go ecosystem either lacked flexibility or traded
flexibility for performance. I wanted to write a package that was highly
extensible while still offering the highest levels of performance.


### Speed and performance

How fast is it? You can run the benchmarks yourself, but here are a few sample
benchmarks with 100,000 unique keys.
I added commas to the output for
clarity, but you can run the benchmarks via `make benchmarks`:

```text
$ make benchmarks
BenchmarkSethVargoMemory/memory/serial-7      13,706,899    81.7 ns/op    16 B/op    1 allocs/op
BenchmarkSethVargoMemory/memory/parallel-7     7,900,639     151 ns/op    61 B/op    3 allocs/op
BenchmarkSethVargoMemory/sweep/serial-7       19,601,592    58.3 ns/op     0 B/op    0 allocs/op
BenchmarkSethVargoMemory/sweep/parallel-7     21,042,513    55.2 ns/op     0 B/op    0 allocs/op
BenchmarkThrottled/memory/serial-7             6,503,260     176 ns/op     0 B/op    0 allocs/op
BenchmarkThrottled/memory/parallel-7           3,936,655     297 ns/op     0 B/op    0 allocs/op
BenchmarkThrottled/sweep/serial-7              6,901,432     171 ns/op     0 B/op    0 allocs/op
BenchmarkThrottled/sweep/parallel-7            5,948,437     202 ns/op     0 B/op    0 allocs/op
BenchmarkTollbooth/memory/serial-7             3,064,309     368 ns/op     0 B/op    0 allocs/op
BenchmarkTollbooth/memory/parallel-7           2,658,014     448 ns/op     0 B/op    0 allocs/op
BenchmarkTollbooth/sweep/serial-7              2,769,937     430 ns/op   192 B/op    3 allocs/op
BenchmarkTollbooth/sweep/parallel-7            2,216,211     546 ns/op   192 B/op    3 allocs/op
BenchmarkUber/memory/serial-7                 13,795,612    94.2 ns/op     0 B/op    0 allocs/op
BenchmarkUber/memory/parallel-7                7,503,214     159 ns/op     0 B/op    0 allocs/op
BenchmarkUlule/memory/serial-7                 2,964,438     405 ns/op    24 B/op    2 allocs/op
BenchmarkUlule/memory/parallel-7               2,441,778     469 ns/op    24 B/op    2 allocs/op
```

There are likely still optimizations to be had; pull requests are welcome!


### Ecosystem

Many of the existing packages in the ecosystem take dependencies on other
packages. I'm an advocate of very thin libraries, and I don't think a rate
limiter should be pulling in external packages. That's why **go-limiter uses
only the Go standard library**.


### Flexible and extensible

Most of the existing rate limiting libraries make a strong assumption that rate
limiting is only for HTTP services.
Baked into that assumption are
further assumptions, like rate limiting by IP address or being limited to a
resolution of "per second". While go-limiter supports rate limiting at the HTTP
layer, it can also be used to rate limit literally anything: it rate limits on
an arbitrary, user-defined string key.


### Stores

#### Memory

Memory is the fastest store, but it only works on a single container/virtual
machine since there's no way to share the state.
[Learn more](https://pkg.go.dev/github.com/sethvargo/go-limiter/memorystore).

#### Redis

Redis uses Redis + Lua as a shared pool, but it comes at a performance cost.
[Learn more](https://pkg.go.dev/github.com/sethvargo/go-redisstore).

#### Noop

Noop does no rate limiting, but still implements the interface - useful for
testing and local development.
[Learn more](https://pkg.go.dev/github.com/sethvargo/go-limiter/noopstore).


#### TakeMany

A `TakeMany` function was added to the `Store` interface to support taking
multiple tokens from a bucket in a single call. The number of tokens to take is
passed as a parameter (`takeAmount`).

## Statement of support

Please submit any bugs or feature requests. Of course, MRs are even better. :)
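To make the `TakeMany` semantics described above concrete, here is a minimal,
self-contained sketch of a take-many operation on a toy token bucket. This is
an illustration only: the `bucket` type and `takeMany` method are invented for
the example, not this package's implementation, and the real interface
signature may differ.

```go
package main

import "fmt"

// bucket is a toy token bucket used only to illustrate TakeMany
// semantics; it is NOT the memorystore implementation.
type bucket struct {
	tokens    uint64 // configured limit per interval
	remaining uint64 // tokens left in the current interval
}

// takeMany removes takeAmount tokens at once. It returns the configured
// limit, the remaining tokens, and whether the take succeeded (false
// when fewer than takeAmount tokens are left, in which case the bucket
// is left unchanged).
func (b *bucket) takeMany(takeAmount uint64) (limit, remaining uint64, ok bool) {
	if takeAmount > b.remaining {
		return b.tokens, b.remaining, false
	}
	b.remaining -= takeAmount
	return b.tokens, b.remaining, true
}

func main() {
	b := &bucket{tokens: 15, remaining: 15}

	_, remaining, ok := b.takeMany(10) // take 10 of 15 tokens
	fmt.Println(remaining, ok)         // 5 true

	_, remaining, ok = b.takeMany(10) // only 5 left, so this fails
	fmt.Println(remaining, ok)        // 5 false
}
```

As with the single-token `Take`, a failed call reports `ok == false` and
consumes nothing, so callers can retry the whole batch after the reset.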