This is a living document and at times it will be out of date. It is
intended to articulate how programming in the Go runtime differs from
writing normal Go. It focuses on pervasive concepts rather than
details of particular interfaces.

Scheduler structures
====================

The scheduler manages three types of resources that pervade the
runtime: Gs, Ms, and Ps. It's important to understand these even if
you're not working on the scheduler.

Gs, Ms, Ps
----------

A "G" is simply a goroutine. It's represented by type `g`. When a
goroutine exits, its `g` object is returned to a pool of free `g`s and
can later be reused for some other goroutine.

An "M" is an OS thread that can be executing user Go code, runtime
code, a system call, or be idle. It's represented by type `m`. There
can be any number of Ms at a time since any number of threads may be
blocked in system calls.

Finally, a "P" represents the resources required to execute user Go
code, such as scheduler and memory allocator state. It's represented
by type `p`. There are exactly `GOMAXPROCS` Ps. A P can be thought of
as a CPU in the OS scheduler and the contents of the `p` type as
per-CPU state. This is a good place to put state that needs to be
sharded for efficiency, but doesn't need to be per-thread or
per-goroutine.

The scheduler's job is to match up a G (the code to execute), an M
(where to execute it), and a P (the rights and resources to execute
it). When an M stops executing user Go code, for example by entering a
system call, it returns its P to the idle P pool. In order to resume
executing user Go code, for example on return from a system call, it
must acquire a P from the idle pool.

All `g`, `m`, and `p` objects are heap allocated, but are never freed,
so their memory remains type stable. As a result, the runtime can
avoid write barriers in the depths of the scheduler.

User stacks and system stacks
-----------------------------

Every non-dead G has a *user stack* associated with it, which is what
user Go code executes on. User stacks start small (e.g., 2K) and grow
or shrink dynamically.

Every M has a *system stack* associated with it (also known as the M's
"g0" stack because it's implemented as a stub G) and, on Unix
platforms, a *signal stack* (also known as the M's "gsignal" stack).
System and signal stacks cannot grow, but are large enough to execute
runtime and cgo code (8K in a pure Go binary; system-allocated in a
cgo binary).

Runtime code often temporarily switches to the system stack using
`systemstack`, `mcall`, or `asmcgocall` to perform tasks that must not
be preempted, that must not grow the user stack, or that switch user
goroutines. Code running on the system stack is implicitly
non-preemptible and the garbage collector does not scan system stacks.
While running on the system stack, the current user stack is not used
for execution.

`getg()` and `getg().m.curg`
----------------------------

To get the current user `g`, use `getg().m.curg`.

`getg()` alone returns the current `g`, but when executing on the
system or signal stacks, this will return the current M's "g0" or
"gsignal", respectively. This is usually not what you want.

To determine if you're running on the user stack or the system stack,
use `getg() == getg().m.curg`.
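
As a rough sketch of how these idioms combine (the helper below is
purely illustrative and does not exist in the runtime; `getg`,
`m.curg`, and `goid` are the real fields involved):

```go
// Illustrative only: report which stack we are on and which user
// goroutine, if any, is associated with this M.
func describeCurrentStack() {
	gp := getg()      // g0 or gsignal when on the system/signal stack
	curg := gp.m.curg // the current user goroutine, or nil if there is none

	if gp == curg {
		print("running on the user stack\n")
	} else {
		print("running on the system (or signal) stack\n")
	}
	if curg != nil {
		print("user goroutine ", curg.goid, "\n")
	}
}
```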

Error handling and reporting
============================

Errors that can reasonably be recovered from in user code should use
`panic` like usual. However, there are some situations where `panic`
will cause an immediate fatal error, such as when called on the system
stack or when called during `mallocgc`.

Most errors in the runtime are not recoverable. For these, use
`throw`, which dumps the traceback and immediately terminates the
process. In general, `throw` should be passed a string constant to
avoid allocating in perilous situations. By convention, additional
details are printed before `throw` using `print` or `println` and the
messages are prefixed with "runtime:".

For runtime error debugging, it's useful to run with
`GOTRACEBACK=system` or `GOTRACEBACK=crash`.

Synchronization
===============

The runtime has multiple synchronization mechanisms. They differ in
semantics and, in particular, in whether they interact with the
goroutine scheduler or the OS scheduler.

The simplest is `mutex`, which is manipulated using `lock` and
`unlock`. This should be used to protect shared structures for short
periods. Blocking on a `mutex` directly blocks the M, without
interacting with the Go scheduler. This means it is safe to use from
the lowest levels of the runtime, but also prevents any associated G
and P from being rescheduled. `rwmutex` is similar.

For one-shot notifications, use `note`, which provides `notesleep` and
`notewakeup`. Unlike traditional UNIX `sleep`/`wakeup`, `note`s are
race-free, so `notesleep` returns immediately if the `notewakeup` has
already happened. A `note` can be reset after use with `noteclear`,
which must not race with a sleep or wakeup. Like `mutex`, blocking on
a `note` blocks the M. However, there are different ways to sleep on a
`note`: `notesleep` also prevents rescheduling of any associated G and
P, while `notetsleepg` acts like a blocking system call that allows
the P to be reused to run another G. This is still less efficient than
blocking the G directly since it consumes an M.

To interact directly with the goroutine scheduler, use `gopark` and
`goready`. `gopark` parks the current goroutine, putting it in the
"waiting" state and removing it from the scheduler's run queue, and
schedules another goroutine on the current M/P. `goready` puts a
parked goroutine back in the "runnable" state and adds it to the run
queue.

In summary,

<table>
<tr><th></th><th colspan="3">Blocks</th></tr>
<tr><th>Interface</th><th>G</th><th>M</th><th>P</th></tr>
<tr><td>(rw)mutex</td><td>Y</td><td>Y</td><td>Y</td></tr>
<tr><td>note</td><td>Y</td><td>Y</td><td>Y/N</td></tr>
<tr><td>park</td><td>Y</td><td>N</td><td>N</td></tr>
</table>
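
To make the `gopark`/`goready` contract concrete, here is a minimal
sketch of a one-shot event in the style of the runtime's own wait
primitives. It is illustrative only: `event`, `wait`, and `notify` are
invented names, and the exact `gopark` arguments shown (the unlock
callback, `waitReasonSemacquire`, `traceEvGoBlock`) reflect the
current internal signature rather than a stable API.

```go
// Toy one-shot event built on gopark/goready (sketch, not runtime code).
type event struct {
	lock mutex
	done bool
	g    *g // parked waiter, if any
}

func (e *event) wait() {
	lock(&e.lock)
	if e.done {
		unlock(&e.lock)
		return
	}
	e.g = getg()
	// gopark releases e.lock via the callback only after the G has been
	// switched to the waiting state, so a concurrent notify cannot be lost.
	gopark(func(gp *g, l unsafe.Pointer) bool {
		unlock((*mutex)(l))
		return true // keep the G parked
	}, unsafe.Pointer(&e.lock), waitReasonSemacquire, traceEvGoBlock, 1)
}

func (e *event) notify() {
	lock(&e.lock)
	e.done = true
	gp := e.g
	e.g = nil
	unlock(&e.lock)
	if gp != nil {
		goready(gp, 1) // make the waiter runnable again
	}
}
```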

Atomics
=======

The runtime uses its own atomics package at `runtime/internal/atomic`.
This corresponds to `sync/atomic`, but functions have different names
for historical reasons and there are a few additional functions needed
by the runtime.

In general, we think hard about the uses of atomics in the runtime and
try to avoid unnecessary atomic operations. If access to a variable is
sometimes protected by another synchronization mechanism, the
already-protected accesses generally don't need to be atomic. There
are several reasons for this:

1. Using non-atomic or atomic access where appropriate makes the code
   more self-documenting. Atomic access to a variable implies there's
   somewhere else that may concurrently access the variable.

2. Non-atomic access allows for automatic race detection. The runtime
   doesn't currently have a race detector, but it may in the future.
   Atomic access defeats the race detector, while non-atomic access
   allows the race detector to check your assumptions.

3. Non-atomic access may improve performance.

Of course, any non-atomic access to a shared variable should be
documented to explain how that access is protected.

Some common patterns that mix atomic and non-atomic access are:

* Read-mostly variables where updates are protected by a lock. Within
  the locked region, reads do not need to be atomic, but the write
  does. Outside the locked region, reads need to be atomic.

* Reads that only happen during STW, where no writes can happen during
  STW, do not need to be atomic.

That said, the advice from the Go memory model stands: "Don't be
[too] clever." The performance of the runtime matters, but its
robustness matters more.

Unmanaged memory
================

In general, the runtime tries to use regular heap allocation. However,
in some cases the runtime must allocate objects outside of the garbage
collected heap, in *unmanaged memory*. This is necessary if the
objects are part of the memory manager itself or if they must be
allocated in situations where the caller may not have a P.

There are three mechanisms for allocating unmanaged memory:

* sysAlloc obtains memory directly from the OS. This comes in whole
  multiples of the system page size, but it can be freed with sysFree.

* persistentalloc combines multiple smaller allocations into a single
  sysAlloc to avoid fragmentation. However, there is no way to free
  persistentalloced objects (hence the name).

* fixalloc is a SLAB-style allocator that allocates objects of a fixed
  size. fixalloced objects can be freed, but this memory can only be
  reused by the same fixalloc pool, so it can only be reused for
  objects of the same type.

In general, types that are allocated using any of these should be
marked `//go:notinheap` (see below).

Objects that are allocated in unmanaged memory **must not** contain
heap pointers unless the following rules are also obeyed:

1. Any pointers from unmanaged memory to the heap must be garbage
   collection roots. More specifically, any pointer must either be
   accessible through a global variable or be added as an explicit
   garbage collection root in `runtime.markroot`.

2. If the memory is reused, the heap pointers must be zero-initialized
   before they become visible as GC roots. Otherwise, the GC may
   observe stale heap pointers. See "Zero-initialization versus
   zeroing".
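
Putting the mechanisms and rules above together, here is a hedged
sketch of allocating a small runtime-internal structure from unmanaged
memory. `persistentalloc` and `memstats.other_sys` are real runtime
identifiers; `dbgRecord` and `newDbgRecord` are invented for
illustration, and the type deliberately contains no heap pointers so
the rules above are not triggered.

```go
// dbgRecord is a made-up example type; it holds no heap pointers, so it
// can live in unmanaged memory without needing to be a GC root.
//
//go:notinheap
type dbgRecord struct {
	pc  uintptr
	len int32
}

// newDbgRecord allocates a dbgRecord outside the GC'd heap. The memory
// is never freed, which is acceptable for long-lived bookkeeping.
func newDbgRecord() *dbgRecord {
	p := persistentalloc(unsafe.Sizeof(dbgRecord{}), 8, &memstats.other_sys)
	return (*dbgRecord)(p)
}
```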

Zero-initialization versus zeroing
==================================

There are two types of zeroing in the runtime, depending on whether
the memory is already initialized to a type-safe state.

If memory is not in a type-safe state, meaning it potentially contains
"garbage" because it was just allocated and it is being initialized
for first use, then it must be *zero-initialized* using
`memclrNoHeapPointers` or non-pointer writes. This does not perform
write barriers.

If memory is already in a type-safe state and is simply being set to
the zero value, this must be done using regular writes, `typedmemclr`,
or `memclrHasPointers`. This performs write barriers.

Runtime-only compiler directives
================================

In addition to the "//go:" directives documented in "go doc compile",
the compiler supports additional directives only in the runtime.

go:systemstack
--------------

`go:systemstack` indicates that a function must run on the system
stack. This is checked dynamically by a special function prologue.

go:nowritebarrier
-----------------

`go:nowritebarrier` directs the compiler to emit an error if the
following function contains any write barriers. (It *does not*
suppress the generation of write barriers; it is simply an assertion.)

Usually you want `go:nowritebarrierrec`. `go:nowritebarrier` is
primarily useful in situations where it's "nice" not to have write
barriers, but not required for correctness.

go:nowritebarrierrec and go:yeswritebarrierrec
----------------------------------------------

`go:nowritebarrierrec` directs the compiler to emit an error if the
following function or any function it calls recursively, up to a
`go:yeswritebarrierrec`, contains a write barrier.

Logically, the compiler floods the call graph starting from each
`go:nowritebarrierrec` function and produces an error if it encounters
a function containing a write barrier. This flood stops at
`go:yeswritebarrierrec` functions.

`go:nowritebarrierrec` is used in the implementation of the write
barrier to prevent infinite loops.

Both directives are used in the scheduler. The write barrier requires
an active P (`getg().m.p != nil`) and scheduler code often runs
without an active P. In this case, `go:nowritebarrierrec` is used on
functions that release the P or may run without a P and
`go:yeswritebarrierrec` is used when code re-acquires an active P.
Since these are function-level annotations, code that releases or
acquires a P may need to be split across two functions.
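
A purely illustrative sketch of that pairing (the function names are
invented; only the directives and the shape of the call chain matter):

```go
// parkOnThisM is on a path where the M may be running without a P, so
// neither it nor anything it calls may contain write barriers.
//
//go:nowritebarrierrec
func parkOnThisM() {
	// ... give up the P, decide what to run next ...
	resumeWithP()
}

// resumeWithP is only reached after an active P has been re-acquired,
// so the flood from parkOnThisM stops here and write barriers are
// allowed again in it and everything it calls.
//
//go:yeswritebarrierrec
func resumeWithP() {
	// ... code that may contain write barriers ...
}
```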

go:notinheap
------------

`go:notinheap` applies to type declarations. It indicates that a type
must never be allocated from the GC'd heap or on the stack.
Specifically, pointers to this type must always fail the
`runtime.inheap` check. The type may be used for global variables, or
for objects in unmanaged memory (e.g., allocated with `sysAlloc`,
`persistentalloc`, `fixalloc`, or from a manually-managed span).
Specifically:

1. `new(T)`, `make([]T)`, `append([]T, ...)` and implicit heap
   allocation of T are disallowed. (Though implicit allocations are
   disallowed in the runtime anyway.)

2. A pointer to a regular type (other than `unsafe.Pointer`) cannot be
   converted to a pointer to a `go:notinheap` type, even if they have
   the same underlying type.

3. Any type that contains a `go:notinheap` type is itself
   `go:notinheap`. Structs and arrays are `go:notinheap` if their
   elements are. Maps and channels of `go:notinheap` types are
   disallowed. To keep things explicit, any type declaration where the
   type is implicitly `go:notinheap` must be explicitly marked
   `go:notinheap` as well.

4. Write barriers on pointers to `go:notinheap` types can be omitted.

The last point is the real benefit of `go:notinheap`. The runtime uses
it for low-level internal structures to avoid write barriers in the
scheduler and the memory allocator where they are illegal or simply
inefficient. This mechanism is reasonably safe and does not compromise
the readability of the runtime.
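
A minimal sketch of these rules in action, using invented type names;
the comments note which rule each line exercises:

```go
// offHeapNode only ever lives in globals or unmanaged memory.
//
//go:notinheap
type offHeapNode struct {
	next *offHeapNode // rule 4: stores to this field need no write barrier
	val  uintptr
}

// container embeds offHeapNode, so by rule 3 it is implicitly
// go:notinheap and must carry the annotation explicitly as well.
//
//go:notinheap
type container struct {
	node offHeapNode
}

// plainNode is a regular heap-allocatable type with the same layout.
type plainNode struct {
	next *offHeapNode
	val  uintptr
}

func notinheapExamples(p *plainNode) {
	_ = new(offHeapNode)                  // rule 1: rejected by the compiler
	_ = (*offHeapNode)(p)                 // rule 2: rejected despite identical layout
	_ = (*offHeapNode)(unsafe.Pointer(p)) // allowed: conversion via unsafe.Pointer
}
```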