github.com/zxy12/go_duplicate_112_new@v0.0.0-20200807091221-747231827200/src/runtime/mbarrier.go

// Copyright 2015 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.

// Garbage collector: write barriers.
//
// For the concurrent garbage collector, the Go compiler implements
// updates to pointer-valued fields that may be in heap objects by
// emitting calls to write barriers. The main write barrier for
// individual pointer writes is gcWriteBarrier and is implemented in
// assembly. This file contains write barrier entry points for bulk
// operations. See also mwbbuf.go.

package runtime

import (
	"runtime/internal/sys"
	"unsafe"
)

// Go uses a hybrid barrier that combines a Yuasa-style deletion
// barrier—which shades the object whose reference is being
// overwritten—with a Dijkstra-style insertion barrier—which shades the
// object whose reference is being written. The insertion part of the
// barrier is necessary while the calling goroutine's stack is grey. In
// pseudocode, the barrier is:
//
//     writePointer(slot, ptr):
//         shade(*slot)
//         if current stack is grey:
//             shade(ptr)
//         *slot = ptr
//
// slot is the destination in Go code.
// ptr is the value that goes into the slot in Go code.
//
// shade indicates that it has seen a white pointer by marking the
// referent and adding it to wbuf.
//
// The two shades and the condition work together to prevent a mutator
// from hiding an object from the garbage collector:
//
// 1. shade(*slot) prevents a mutator from hiding an object by moving
// the sole pointer to it from the heap to its stack. If it attempts
// to unlink an object from the heap, this will shade it.
//
// 2. shade(ptr) prevents a mutator from hiding an object by moving
// the sole pointer to it from its stack into a black object in the
// heap. If it attempts to install the pointer into a black object,
// this will shade it.
//
// 3. Once a goroutine's stack is black, the shade(ptr) becomes
// unnecessary. shade(ptr) prevents hiding an object by moving it from
// the stack to the heap, but this requires first having a pointer
// hidden on the stack. Immediately after a stack is scanned, it only
// points to shaded objects, so it's not hiding anything, and the
// shade(*slot) prevents it from hiding any other pointers on its
// stack.
//
// For a detailed description of this barrier and proof of
// correctness, see https://github.com/golang/proposal/blob/master/design/17503-eliminate-rescan.md
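//
// As an illustrative sketch only (not the exact code or ABI the compiler
// emits), an individual heap pointer store *slot = ptr is conceptually
// guarded by the global writeBarrier flag, so the shading above runs only
// while a GC needs it and always happens before the store itself (see
// "Publication ordering" below):
//
//     if writeBarrier.enabled {
//         shade(*slot) // deletion (Yuasa) half
//         shade(ptr)   // insertion (Dijkstra) half; shaded unconditionally in practice
//     }
//     *slot = ptr
//
// The bulk entry points in this file play the same shading role, via
// bulkBarrierPreWrite, for operations that update many slots at once.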
//
//
//
// Dealing with memory ordering:
//
// Both the Yuasa and Dijkstra barriers can be made conditional on the
// color of the object containing the slot. We chose not to make these
// conditional because the cost of ensuring that the object holding
// the slot doesn't concurrently change color without the mutator
// noticing seems prohibitive.
//
// Consider the following example where the mutator writes into
// a slot and then loads the slot's mark bit while the GC thread
// writes to the slot's mark bit and then as part of scanning reads
// the slot.
//
// Initially both [slot] and [slotmark] are 0 (nil)
// Mutator thread          GC thread
// st [slot], ptr          st [slotmark], 1
//
// ld r1, [slotmark]       ld r2, [slot]
//
// Without an expensive memory barrier between the st and the ld, the final
// result on most HW (including 386/amd64) can be r1==r2==0. This is a classic
// example of what can happen when loads are allowed to be reordered with older
// stores (avoiding such reorderings lies at the heart of the classic
// Peterson/Dekker algorithms for mutual exclusion). Rather than require memory
// barriers, which will slow down both the mutator and the GC, we always grey
// the ptr object regardless of the slot's color.
//
// Another place where we intentionally omit memory barriers is when
// accessing mheap_.arena_used to check if a pointer points into the
// heap. On relaxed memory machines, it's possible for a mutator to
// extend the size of the heap by updating arena_used, allocate an
// object from this new region, and publish a pointer to that object,
// but for tracing running on another processor to observe the pointer
// but use the old value of arena_used. In this case, tracing will not
// mark the object, even though it's reachable. However, the mutator
// is guaranteed to execute a write barrier when it publishes the
// pointer, so it will take care of marking the object. A general
// consequence of this is that the garbage collector may cache the
// value of mheap_.arena_used. (See issue #9984.)
//
//
// Stack writes:
//
// The compiler omits write barriers for writes to the current frame,
// but if a stack pointer has been passed down the call stack, the
// compiler will generate a write barrier for writes through that
// pointer (because it doesn't know it's not a heap pointer).
//
// One might be tempted to ignore the write barrier if slot points
// into the stack. Don't do it! Mark termination only re-scans
// frames that have potentially been active since the concurrent scan,
// so it depends on write barriers to track changes to pointers in
// stack frames that have not been active.
//
//
// Global writes:
//
// The Go garbage collector requires write barriers when heap pointers
// are stored in globals. Many garbage collectors ignore writes to
// globals and instead pick up global -> heap pointers during
// termination. This increases pause time, so we instead rely on write
// barriers for writes to globals so that we don't have to rescan
// globals during mark termination.
//
//
// Publication ordering:
//
// The write barrier is *pre-publication*, meaning that the write
// barrier happens prior to the *slot = ptr write that may make ptr
// reachable by some goroutine that currently cannot reach it.
//
//
// Signal handler pointer writes:
//
// In general, the signal handler cannot safely invoke the write
// barrier because it may run without a P or even during the write
// barrier.
//
// There is exactly one exception: profbuf.go omits a barrier during
// signal handler profile logging. That's safe only because of the
// deletion barrier. See profbuf.go for a detailed argument. If we
// remove the deletion barrier, we'll have to work out a new way to
// handle the profile logging.
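
// As a hedged illustration of where these bulk entry points come into
// play (the exact lowering is the compiler's choice; small values may
// instead get individual pointer stores through gcWriteBarrier), an
// ordinary typed assignment whose type contains pointers, e.g.
//
//     type T struct {
//         p *int
//         n int
//     }
//     var d, s *T
//     *d = *s
//
// is performed via typedmemmove below, so bulkBarrierPreWrite can shade
// the old pointer slots in *d before they are overwritten by the copy.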

// typedmemmove copies a value of type typ to dst from src.
// Must be nosplit, see #16026.
//
// TODO: Perfect for go:nosplitrec since we can't have a safe point
// anywhere in the bulk barrier or memmove.
//
//go:nosplit
func typedmemmove(typ *_type, dst, src unsafe.Pointer) {
	if dst == src {
		return
	}
	if typ.kind&kindNoPointers == 0 {
		bulkBarrierPreWrite(uintptr(dst), uintptr(src), typ.size)
	}
	// There's a race here: if some other goroutine can write to
	// src, it may change some pointer in src after we've
	// performed the write barrier but before we perform the
	// memory copy. This is safe because the write performed by that
	// other goroutine must also be accompanied by a write
	// barrier, so at worst we've unnecessarily greyed the old
	// pointer that was in src.
	memmove(dst, src, typ.size)
	if writeBarrier.cgo {
		cgoCheckMemmove(typ, dst, src, 0, typ.size)
	}
}

//go:linkname reflect_typedmemmove reflect.typedmemmove
func reflect_typedmemmove(typ *_type, dst, src unsafe.Pointer) {
	if raceenabled {
		raceWriteObjectPC(typ, dst, getcallerpc(), funcPC(reflect_typedmemmove))
		raceReadObjectPC(typ, src, getcallerpc(), funcPC(reflect_typedmemmove))
	}
	if msanenabled {
		msanwrite(dst, typ.size)
		msanread(src, typ.size)
	}
	typedmemmove(typ, dst, src)
}

// typedmemmovepartial is like typedmemmove but assumes that
// dst and src point off bytes into the value and only copies size bytes.
//go:linkname reflect_typedmemmovepartial reflect.typedmemmovepartial
func reflect_typedmemmovepartial(typ *_type, dst, src unsafe.Pointer, off, size uintptr) {
	if writeBarrier.needed && typ.kind&kindNoPointers == 0 && size >= sys.PtrSize {
		// Pointer-align start address for bulk barrier.
		adst, asrc, asize := dst, src, size
		if frag := -off & (sys.PtrSize - 1); frag != 0 {
			adst = add(dst, frag)
			asrc = add(src, frag)
			asize -= frag
		}
		bulkBarrierPreWrite(uintptr(adst), uintptr(asrc), asize&^(sys.PtrSize-1))
	}

	memmove(dst, src, size)
	if writeBarrier.cgo {
		cgoCheckMemmove(typ, dst, src, off, size)
	}
}

// reflectcallmove is invoked by reflectcall to copy the return values
// out of the stack and into the heap, invoking the necessary write
// barriers. dst, src, and size describe the return value area to
// copy. typ describes the entire frame (not just the return values).
// typ may be nil, which indicates write barriers are not needed.
//
// It must be nosplit and must only call nosplit functions because the
// stack map of reflectcall is wrong.
//
//go:nosplit
func reflectcallmove(typ *_type, dst, src unsafe.Pointer, size uintptr) {
	if writeBarrier.needed && typ != nil && typ.kind&kindNoPointers == 0 && size >= sys.PtrSize {
		bulkBarrierPreWrite(uintptr(dst), uintptr(src), size)
	}
	memmove(dst, src, size)
}
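
// As a hedged illustration (again, the exact lowering is up to the
// compiler), a built-in copy between slices whose element type contains
// pointers, e.g.
//
//     var dst, src []*int
//     n := copy(dst, src)
//
// is routed through typedslicecopy below rather than a plain memmove, so
// the bulk barrier runs over the destination elements before they are
// overwritten.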

//go:nosplit
func typedslicecopy(typ *_type, dst, src slice) int {
	n := dst.len
	if n > src.len {
		n = src.len
	}
	if n == 0 {
		return 0
	}
	dstp := dst.array
	srcp := src.array

	// The compiler emits calls to typedslicecopy before
	// instrumentation runs, so unlike the other copying and
	// assignment operations, it's not instrumented in the calling
	// code and needs its own instrumentation.
	if raceenabled {
		callerpc := getcallerpc()
		pc := funcPC(slicecopy)
		racewriterangepc(dstp, uintptr(n)*typ.size, callerpc, pc)
		racereadrangepc(srcp, uintptr(n)*typ.size, callerpc, pc)
	}
	if msanenabled {
		msanwrite(dstp, uintptr(n)*typ.size)
		msanread(srcp, uintptr(n)*typ.size)
	}

	if writeBarrier.cgo {
		cgoCheckSliceCopy(typ, dst, src, n)
	}

	if dstp == srcp {
		return n
	}

	// Note: No point in checking typ.kind&kindNoPointers here:
	// compiler only emits calls to typedslicecopy for types with pointers,
	// and growslice and reflect_typedslicecopy check for pointers
	// before calling typedslicecopy.
	size := uintptr(n) * typ.size
	if writeBarrier.needed {
		bulkBarrierPreWrite(uintptr(dstp), uintptr(srcp), size)
	}
	// See typedmemmove for a discussion of the race between the
	// barrier and memmove.
	memmove(dstp, srcp, size)
	return n
}

//go:linkname reflect_typedslicecopy reflect.typedslicecopy
func reflect_typedslicecopy(elemType *_type, dst, src slice) int {
	if elemType.kind&kindNoPointers != 0 {
		n := dst.len
		if n > src.len {
			n = src.len
		}
		if n == 0 {
			return 0
		}

		size := uintptr(n) * elemType.size
		if raceenabled {
			callerpc := getcallerpc()
			pc := funcPC(reflect_typedslicecopy)
			racewriterangepc(dst.array, size, callerpc, pc)
			racereadrangepc(src.array, size, callerpc, pc)
		}
		if msanenabled {
			msanwrite(dst.array, size)
			msanread(src.array, size)
		}

		memmove(dst.array, src.array, size)
		return n
	}
	return typedslicecopy(elemType, dst, src)
}

// typedmemclr clears the typed memory at ptr with type typ. The
// memory at ptr must already be initialized (and hence in type-safe
// state). If the memory is being initialized for the first time, see
// memclrNoHeapPointers.
//
// If the caller knows that typ has pointers, it can alternatively
// call memclrHasPointers.
//
//go:nosplit
func typedmemclr(typ *_type, ptr unsafe.Pointer) {
	if typ.kind&kindNoPointers == 0 {
		bulkBarrierPreWrite(uintptr(ptr), 0, typ.size)
	}
	memclrNoHeapPointers(ptr, typ.size)
}

//go:linkname reflect_typedmemclr reflect.typedmemclr
func reflect_typedmemclr(typ *_type, ptr unsafe.Pointer) {
	typedmemclr(typ, ptr)
}

//go:linkname reflect_typedmemclrpartial reflect.typedmemclrpartial
func reflect_typedmemclrpartial(typ *_type, ptr unsafe.Pointer, off, size uintptr) {
	if typ.kind&kindNoPointers == 0 {
		bulkBarrierPreWrite(uintptr(ptr), 0, size)
	}
	memclrNoHeapPointers(ptr, size)
}

// memclrHasPointers clears n bytes of typed memory starting at ptr.
// The caller must ensure that the type of the object at ptr has
// pointers, usually by checking typ.kind&kindNoPointers. However, ptr
// does not have to point to the start of the allocation.
//
//go:nosplit
func memclrHasPointers(ptr unsafe.Pointer, n uintptr) {
	bulkBarrierPreWrite(uintptr(ptr), 0, n)
	memclrNoHeapPointers(ptr, n)
}
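
// A closing, hedged note on the clearing entry points above: code that
// re-clears memory which has already held heap pointers must go through
// typedmemclr or memclrHasPointers so the deletion barrier sees the old
// pointers, e.g. (hypothetical caller, where e points at an initialized
// value whose type et contains pointers):
//
//     typedmemclr(et, unsafe.Pointer(e)) // shades the old pointers, then clears
//
// Calling memclrNoHeapPointers directly on such memory would hide the old
// pointers from a concurrent mark; it is only appropriate for memory the
// GC has not yet seen pointers in, such as a freshly allocated,
// not-yet-published object.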