github.com/muyo/sno@v1.2.1/partition.go

package sno

import "sync/atomic"

// Partition represents the fixed identifier of a Generator.
//
// If you'd rather define Partitions as integers instead of as byte arrays, then:
//	var p sno.Partition
//	p.PutUint16(65535)
type Partition [2]byte

// AsUint16 returns the Partition as a uint16 in big-endian byte order.
func (p Partition) AsUint16() uint16 {
	return uint16(p[0])<<8 | uint16(p[1])
}

// PutUint16 sets the Partition to the given uint16 in big-endian byte order.
func (p *Partition) PutUint16(u uint16) {
	p[0] = byte(u >> 8)
	p[1] = byte(u)
}

// genPartition generates a Partition in its internal representation from a time-based seed.
//
// While that alone would suffice if it ran only once (for the global generator), generators
// created with the default configuration also use generated partitions - a case in which we
// want to avoid collisions, at the very least within our own process.
//
// Considering we only have a tiny space of 2**16 partitions available, and that
// predictability of the partitions is a non-factor, even a 16-bit Xorshift PRNG
// would be overkill.
//
// If we used a PRNG without adjustment, we'd face the following pitfalls:
//	- we'd need to maintain its state and synchronize access to it. As it can't run
//	  atomically, this would require maintaining a separate global lock;
//	- our space is limited to barely 2**16 partitions, making collisions quite likely,
//	  and we have no way of detecting them without maintaining yet more state, at the
//	  very least a bit set (potentially growing to 8192 bytes for the entire space),
//	  which would also need to be synchronized. With collisions becoming more and more
//	  likely as we hand out partitions, we'd also need an efficient means of finding
//	  free partitions in that set.
//
// And others. At that point the complexity becomes unreasonable for what we're aiming to do.
// So instead of all of that, we go back to the facts that predictability is a non-factor and
// that our only goal is the prevention of collisions: we simply start off with a time-based
// seed... which we then atomically increment.
//
// This way access is safely synchronized and we're guaranteed the full 2**16 partitions
// without collisions in-process, with just a tiny bit of code in comparison.
//
// Should we ever exceed that number, however, we return a PartitionPoolExhaustedError.
// If your usage pattern is unusual enough to hit this edge case, please consider managing
// the partition space yourself and starting the Generators from configuration snapshots
// instead.
//
// Note: this being entirely predictable has the upside that the order of creation and the
// count of in-process generators created without snapshots can be inferred simply by
// comparing their partitions (including against the global generator, which gets an n
// of 0 - i.e. the seed itself).
func genPartition() (uint32, error) {
	n := atomic.AddUint32(&partitions, 1)

	if n > MaxPartition {
		return 0, &PartitionPoolExhaustedError{}
	}

	// Convert to our internal representation, leaving the low 2 bytes empty
	// for the sequence to simply get ORed in at runtime.
	return uint32(seed+uint16(n)) << 16, nil
}
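// For illustration - assuming (hypothetically) a seed of 0x1234, successive
// in-process calls would yield:
//
//	p1, _ := genPartition() // n == 0: 0x1234 << 16 == 0x12340000 (the global generator's partition)
//	p2, _ := genPartition() // n == 1: 0x1235 << 16 == 0x12350000
//	p3, _ := genPartition() // n == 2: 0x1236 << 16 == 0x12360000
//
// Each result is distinct until the 2**16 pool is exhausted, at which point
// a *PartitionPoolExhaustedError is returned.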
var (
	// The counter starts at -1 since genPartition() increments it on each call,
	// including the first. This means the global generator gets an n of 0 and
	// always has a Partition equal to the seed.
	partitions = ^uint32(0)

	seed = func() uint16 {
		t := snotime()

		return uint16((t >> 32) ^ t)
	}()
)

// partitionToInternalRepr converts a Partition from its public representation into
// the internal uint32 representation - the 2 partition bytes occupy the high bytes,
// with the low 2 bytes left empty for the sequence.
func partitionToInternalRepr(p Partition) uint32 {
	return uint32(p[0])<<24 | uint32(p[1])<<16
}

// partitionToPublicRepr converts a partition from its internal uint32 representation
// back into the public Partition byte array, discarding the low (sequence) bytes.
func partitionToPublicRepr(p uint32) Partition {
	return Partition{byte(p >> 24), byte(p >> 16)}
}
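// A quick sketch of how the two helpers relate - they are inverses of each other
// (the byte values below are arbitrary, chosen only for illustration):
//
//	p := Partition{0xAB, 0xCD}
//	internal := partitionToInternalRepr(p) // 0xABCD0000 - low 2 bytes free for the sequence
//	back := partitionToPublicRepr(internal)
//	// back == p, and internal>>16 == uint32(p.AsUint16())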