github.com/shogo82148/std@v1.22.1-0.20240327122250-4e474527810c/runtime/mgcscavenge.go (about)

     1  // Copyright 2019 The Go Authors. All rights reserved.
     2  // Use of this source code is governed by a BSD-style
     3  // license that can be found in the LICENSE file.
     4  
     5  // Scavenging free pages.
     6  //
     7  // This file implements scavenging (the release of physical pages backing mapped
     8  // memory) of free and unused pages in the heap as a way to deal with page-level
     9  // fragmentation and reduce the RSS of Go applications.
    10  //
    11  // Scavenging in Go happens on two fronts: there's the background
    12  // (asynchronous) scavenger and the allocation-time (synchronous) scavenger.
    13  //
    14  // The former runs on a goroutine, much like the background sweeper, and is
    15  // soft-capped at using scavengePercent of the mutator's time, based on
    16  // order-of-magnitude estimates of the costs of scavenging. The latter happens
    17  // when allocating pages from the heap.
    18  //
    19  // The scavenger's primary goal is to bring the estimated heap RSS of the
    20  // application down to a goal.
    21  //
    22  // Before we consider what this looks like, we need to split the world into two
    23  // halves: one in which a memory limit is not set, and one in which it is.
    24  //
    25  // For the former, the goal is defined as:
    26  //   (retainExtraPercent+100) / 100 * (heapGoal / lastHeapGoal) * lastHeapInUse
    27  //
    28  // Essentially, we wish to have the application's RSS track the heap goal, but
    29  // the heap goal is defined in terms of bytes of objects, rather than pages like
    30  // RSS. As a result, we need to account for fragmentation internal to
    31  // spans. heapGoal / lastHeapGoal defines the ratio between the current heap goal
    32  // and the last heap goal, which tells us by how much the heap is growing or
    33  // shrinking. We estimate what the heap will grow to in terms of pages by taking
    34  // this ratio and multiplying it by heapInUse at the end of the last GC, which
    35  // allows us to account for this additional fragmentation. Note that this
    36  // procedure makes the assumption that the degree of fragmentation won't change
    37  // dramatically over the next GC cycle. Overestimating the amount of
    38  // fragmentation simply results in higher memory use, which will be accounted
    39  // for by the next pacing update. Underestimating the fragmentation, however,
    40  // could lead to performance degradation. Handling this case is not within the
    41  // scope of the scavenger. Situations where the amount of fragmentation balloons
    42  // over the course of a single GC cycle should be considered pathologies,
    43  // flagged as bugs, and fixed appropriately.
    44  //
    45  // An additional factor of retainExtraPercent is added as a buffer to help ensure
    46  // that there's more unscavenged memory to allocate out of, since each allocation
    47  // out of scavenged memory incurs a potentially expensive page fault.
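The goal arithmetic above can be sketched as a standalone program. The value 10 for retainExtraPercent matches the constant defined later in this file; the concrete byte counts below are made up for illustration, not taken from any real run:

```go
package main

import "fmt"

// retainExtraPercent mirrors the constant defined in this file.
const retainExtraPercent = 10

// scavengeGoalNoLimit sketches
//
//	(retainExtraPercent+100) / 100 * (heapGoal / lastHeapGoal) * lastHeapInUse
//
// evaluated multiply-first so that integer division doesn't discard the
// heapGoal/lastHeapGoal ratio.
func scavengeGoalNoLimit(heapGoal, lastHeapGoal, lastHeapInUse uint64) uint64 {
	goal := lastHeapInUse * heapGoal / lastHeapGoal
	return goal / 100 * (retainExtraPercent + 100)
}

func main() {
	// The heap goal grew from 100 MiB to 150 MiB, and 120 MiB of spans were
	// in use at the end of the last cycle, so the RSS goal lands near
	// 1.1 * 1.5 * 120 MiB = 198 MiB.
	fmt.Println(scavengeGoalNoLimit(150<<20, 100<<20, 120<<20))
}
```

The multiply-first ordering matters: dividing heapGoal by lastHeapGoal up front would truncate the growth ratio to an integer before it ever scales lastHeapInUse.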
    48  //
    49  // If a memory limit is set, then we wish to pick a scavenge goal that maintains
    50  // that memory limit. For that, we look at total memory that has been committed
    51  // (memstats.mappedReady) and try to bring that down below the limit. In this case,
    52  // we want to give buffer space in the *opposite* direction. When the application
    53  // is close to the limit, we want to push harder to keep it under, so by
    54  // targeting a point below the memory limit we ensure that the background
    55  // scavenger gives the situation the urgency it deserves.
    56  //
    57  // In this case, the goal is defined as:
    58  //    (100-reduceExtraPercent) / 100 * memoryLimit
    59  //
    60  // We compute both of these goals and check whether either of them has been met.
    61  // The background scavenger continues operating as long as either one of the goals
    62  // has not been met.
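A minimal sketch of the memory-limit goal and the either-goal-unmet check follows. The value 5 for reduceExtraPercent matches the constant defined later in this file; the function names and input values are illustrative stand-ins, not the runtime's actual state:

```go
package main

import "fmt"

// reduceExtraPercent mirrors the constant defined in this file.
const reduceExtraPercent = 5

// memoryLimitGoal sketches (100-reduceExtraPercent) / 100 * memoryLimit.
func memoryLimitGoal(memoryLimit uint64) uint64 {
	return memoryLimit / 100 * (100 - reduceExtraPercent)
}

// shouldScavenge reports whether the background scavenger keeps running:
// it stops only once *both* goals have been met.
func shouldScavenge(heapRSS, heapRSSGoal, mappedReady, memLimitGoal uint64) bool {
	return heapRSS > heapRSSGoal || mappedReady > memLimitGoal
}

func main() {
	limitGoal := memoryLimitGoal(512 << 20) // 512 MiB limit -> ~486 MiB goal
	fmt.Println(limitGoal)
	// The heap RSS goal is met, but committed memory (mappedReady) is still
	// above the limit goal, so the scavenger keeps working.
	fmt.Println(shouldScavenge(100<<20, 120<<20, 520<<20, limitGoal))
}
```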
    63  //
    64  // The goals are updated after each GC.
    65  //
    66  // Synchronous scavenging happens for one of two reasons: when an allocation
    67  // would exceed the memory limit, or when the heap grows in size, for some
    68  // definition of heap growth. The intuition behind the second reason is that the
    69  // application had to grow the heap because existing fragments were not sufficiently
    70  // large to satisfy a page-level memory allocation, so we scavenge those fragments
    71  // eagerly to offset the growth in RSS that results.
    72  //
    73  // Lastly, not all pages are available for scavenging at all times and in all cases.
    74  // The background scavenger and heap-growth scavenger only release memory in chunks
    75  // that have not been densely allocated for at least one full GC cycle. The reason
    76  // behind this is the likelihood of reuse: the Go heap is allocated in a first-fit order
    77  // and by the end of the GC mark phase, the heap tends to be densely packed. Releasing
    78  // memory in these densely packed chunks while they're being packed is counter-productive,
    79  // and worse, it breaks up huge pages on systems that support them. The scavenger (invoked
    80  // during memory allocation) further ensures that chunks it identifies as "dense" are
    81  // immediately eligible for being backed by huge pages. Note that for the most part these
    82  // density heuristics are best-effort heuristics. It's totally possible (but unlikely)
    83  // that a chunk that just became dense is scavenged in the case of a race between memory
    84  // allocation and scavenging.
    85  //
    86  // When synchronously scavenging for the memory limit or for debug.FreeOSMemory, these
    87  // "dense" packing heuristics are ignored (in other words, scavenging is "forced") because
    88  // in these scenarios returning memory to the OS is more important than keeping CPU
    89  // overheads low.
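The forced-versus-heuristic decision described above can be summarized in a small sketch. The type and function names here are illustrative, not the runtime's actual API:

```go
package main

import "fmt"

// chunk is a hypothetical stand-in for a chunk's scavenge-relevant state.
type chunk struct {
	dense bool // densely allocated within the last full GC cycle
}

// eligible sketches the gating described above: dense chunks are normally
// skipped, both to avoid breaking up huge pages and to avoid releasing
// memory that is likely to be reused soon, but forced scavenging (for the
// memory limit or debug.FreeOSMemory) ignores the density heuristic.
func eligible(c chunk, forced bool) bool {
	return forced || !c.dense
}

func main() {
	fmt.Println(eligible(chunk{dense: true}, false))  // skipped by the background scavenger
	fmt.Println(eligible(chunk{dense: true}, true))   // forced: scavenged anyway
	fmt.Println(eligible(chunk{dense: false}, false)) // eligible in all cases
}
```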
    90  
    91  package runtime