# Go Rate Limiter

[![GoDoc](https://img.shields.io/badge/go-documentation-blue.svg?style=flat-square)](https://pkg.go.dev/github.com/sethvargo/go-limiter)
[![GitHub Actions](https://img.shields.io/github/actions/workflow/status/sethvargo/go-limiter/test.yml?style=flat-square)](https://github.com/sethvargo/go-limiter/actions/workflows/test.yml)

This package provides a rate limiter in Go (Golang), suitable for use in HTTP
servers and distributed workloads. It's specifically designed for
configurability and flexibility without compromising throughput.


## Usage

1. Create a store. This example uses an in-memory store:

    ```golang
    store, err := memorystore.New(&memorystore.Config{
      // Number of tokens allowed per interval.
      Tokens: 15,

      // Interval until tokens reset.
      Interval: time.Minute,
    })
    if err != nil {
      log.Fatal(err)
    }
    ```

1. Determine the limit by calling `Take()` on the store:

    ```golang
    ctx := context.Background()

    // key is the unique value upon which you want to rate limit, like an IP or
    // MAC address.
    key := "127.0.0.1"
    tokens, remaining, reset, ok, err := store.Take(ctx, key)
    if err != nil {
      log.Fatal(err)
    }

    // tokens is the configured tokens (15 in this example).
    _ = tokens

    // remaining is the number of tokens remaining (14 now).
    _ = remaining

    // reset is the unix nanoseconds at which the tokens will replenish.
    _ = reset

    // ok indicates whether the take was successful. If the key is over the
    // configured limit, ok will be false.
    _ = ok

    // Here's a more realistic example:
    if !ok {
      return fmt.Errorf("rate limited: retry at %v", reset)
    }
    ```

There's also HTTP middleware via the `httplimit` package. After creating a
store, wrap Go's standard HTTP handler:

```golang
middleware, err := httplimit.NewMiddleware(store, httplimit.IPKeyFunc())
if err != nil {
  log.Fatal(err)
}

mux1 := http.NewServeMux()
mux1.Handle("/", middleware.Handle(doWork)) // doWork is your original handler
```

The middleware automatically sets the following headers, conforming to the
latest RFCs:

- `X-RateLimit-Limit` - configured rate limit (constant).
- `X-RateLimit-Remaining` - number of remaining tokens in the current interval.
- `X-RateLimit-Reset` - UTC time when the limit resets.
- `Retry-After` - time at which to retry.
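Putting the pieces above together, here is a minimal, self-contained server
sketch. It only uses the calls shown above; the `httplimit` import path
(`github.com/sethvargo/go-limiter/httplimit`), the `:8080` listen address, and
the inline `doWork` handler are illustrative assumptions, not prescribed by the
package:

```golang
package main

import (
  "fmt"
  "log"
  "net/http"
  "time"

  "github.com/sethvargo/go-limiter/httplimit"
  "github.com/sethvargo/go-limiter/memorystore"
)

func main() {
  // Allow 15 requests per client IP per minute.
  store, err := memorystore.New(&memorystore.Config{
    Tokens:   15,
    Interval: time.Minute,
  })
  if err != nil {
    log.Fatal(err)
  }

  // Key requests by client IP.
  middleware, err := httplimit.NewMiddleware(store, httplimit.IPKeyFunc())
  if err != nil {
    log.Fatal(err)
  }

  // doWork stands in for your application handler.
  doWork := http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
    fmt.Fprintln(w, "hello")
  })

  mux := http.NewServeMux()
  mux.Handle("/", middleware.Handle(doWork))

  log.Fatal(http.ListenAndServe(":8080", mux))
}
```

Once a client exceeds the configured limit, the middleware rejects further
requests in that interval and the headers listed above tell the client when to
retry.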
## Why _another_ Go rate limiter?

I really wanted to learn more about the topic and possible implementations. The
existing packages in the Go ecosystem either lacked flexibility or traded
flexibility for performance. I wanted to write a package that was highly
extensible while still offering the highest levels of performance.


### Speed and performance

How fast is it? You can run the benchmarks yourself, but here are a few sample
benchmarks with 100,000 unique keys. I added commas to the output for clarity,
but you can run the benchmarks via `make benchmarks`:

```text
$ make benchmarks
BenchmarkSethVargoMemory/memory/serial-7      13,706,899    81.7 ns/op    16 B/op    1 allocs/op
BenchmarkSethVargoMemory/memory/parallel-7     7,900,639     151 ns/op    61 B/op    3 allocs/op
BenchmarkSethVargoMemory/sweep/serial-7       19,601,592    58.3 ns/op     0 B/op    0 allocs/op
BenchmarkSethVargoMemory/sweep/parallel-7     21,042,513    55.2 ns/op     0 B/op    0 allocs/op
BenchmarkThrottled/memory/serial-7             6,503,260     176 ns/op     0 B/op    0 allocs/op
BenchmarkThrottled/memory/parallel-7           3,936,655     297 ns/op     0 B/op    0 allocs/op
BenchmarkThrottled/sweep/serial-7              6,901,432     171 ns/op     0 B/op    0 allocs/op
BenchmarkThrottled/sweep/parallel-7            5,948,437     202 ns/op     0 B/op    0 allocs/op
BenchmarkTollbooth/memory/serial-7             3,064,309     368 ns/op     0 B/op    0 allocs/op
BenchmarkTollbooth/memory/parallel-7           2,658,014     448 ns/op     0 B/op    0 allocs/op
BenchmarkTollbooth/sweep/serial-7              2,769,937     430 ns/op   192 B/op    3 allocs/op
BenchmarkTollbooth/sweep/parallel-7            2,216,211     546 ns/op   192 B/op    3 allocs/op
BenchmarkUber/memory/serial-7                 13,795,612    94.2 ns/op     0 B/op    0 allocs/op
BenchmarkUber/memory/parallel-7                7,503,214     159 ns/op     0 B/op    0 allocs/op
BenchmarkUlule/memory/serial-7                 2,964,438     405 ns/op    24 B/op    2 allocs/op
BenchmarkUlule/memory/parallel-7               2,441,778     469 ns/op    24 B/op    2 allocs/op
```

There are likely still optimizations to be had; pull requests are welcome!


### Ecosystem

Many of the existing packages in the ecosystem take dependencies on other
packages. I'm an advocate of very thin libraries, and I don't think a rate
limiter should be pulling in external packages. That's why **go-limiter uses
only the Go standard library**.


### Flexible and extensible

Most of the existing rate limiting libraries make a strong assumption that rate
limiting is only for HTTP services. Baked into that assumption are others, like
rate limiting by "IP address" or a fixed resolution of "per second". While
go-limiter supports rate limiting at the HTTP layer, it can also be used to
rate limit literally anything: it keys on an arbitrary, user-defined string
(see the sketch at the end of this README).


### Stores

#### Memory

Memory is the fastest store, but only works on a single container/virtual
machine since there's no way to share the state.
[Learn more](https://pkg.go.dev/github.com/sethvargo/go-limiter/memorystore).

#### Redis

Redis uses Redis + Lua as a shared pool, but comes at a performance cost.
[Learn more](https://pkg.go.dev/github.com/sethvargo/go-redisstore).

#### Noop

Noop does no rate limiting, but still implements the interface - useful for
testing and local development.
[Learn more](https://pkg.go.dev/github.com/sethvargo/go-limiter/noopstore).
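As a final illustration of the "Flexible and extensible" point above, the same
stores can gate non-HTTP work, because the key is just a string. Below is a
minimal sketch: the `processJob` function and the `user:1234` key scheme are
hypothetical stand-ins, and only `memorystore.New` and `store.Take` come from
this package:

```golang
package main

import (
  "context"
  "fmt"
  "log"
  "time"

  "github.com/sethvargo/go-limiter/memorystore"
)

// processJob is a hypothetical stand-in for any work you want to throttle.
func processJob(key string, job int) {
  fmt.Printf("key=%s processed job %d\n", key, job)
}

func main() {
  // Allow 5 jobs per key per minute.
  store, err := memorystore.New(&memorystore.Config{
    Tokens:   5,
    Interval: time.Minute,
  })
  if err != nil {
    log.Fatal(err)
  }

  ctx := context.Background()
  key := "user:1234" // any string works: a user ID, queue name, API token, etc.

  for job := 0; job < 8; job++ {
    _, _, reset, ok, err := store.Take(ctx, key)
    if err != nil {
      log.Fatal(err)
    }
    if !ok {
      // Over the limit: skip (or queue) the job until the tokens replenish.
      fmt.Printf("job %d rate limited until %v\n", job, time.Unix(0, int64(reset)))
      continue
    }
    processJob(key, job)
  }
}
```

Because the store is behind an interface, the same code can swap in the Redis
store to share the limit across instances, or the noop store to disable
throttling in tests.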