// Copyright 2019 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.

// Page allocator.
//
// The page allocator manages mapped pages (defined by pageSize, NOT
// physPageSize) for allocation and re-use. It is embedded into mheap.
//
// Pages are managed using a bitmap that is sharded into chunks.
// In the bitmap, 1 means in-use, and 0 means free. The bitmap spans the
// process's address space. Chunks are managed in a sparse-array-style structure
// similar to mheap.arenas, since the bitmap may be large on some systems.
//
// The bitmap is efficiently searched by using a radix tree in combination
// with fast bit-wise intrinsics. Allocation is performed using an address-ordered
// first-fit approach.
//
// Each entry in the radix tree is a summary that describes three properties of
// a particular region of the address space: the number of contiguous free pages
// at the start and end of the region it represents, and the maximum number of
// contiguous free pages found anywhere in that region.
//
// Each level of the radix tree is stored as one contiguous array, which represents
// a different granularity of subdivision of the process's address space. Thus, this
// radix tree is actually implicit in these large arrays, as opposed to having explicit
// dynamically-allocated pointer-based node structures. Naturally, these arrays may be
// quite large for systems with large address spaces, so in these cases they are mapped
// into memory as needed. Each leaf summary of the tree corresponds to a bitmap chunk.
//
// The root level (referred to as L0 and index 0 in pageAlloc.summary) has each
// summary represent the largest section of address space (16 GiB on 64-bit systems),
// with each subsequent level representing successively smaller subsections until we
// reach the finest granularity at the leaves, a chunk.
//
// More specifically, each summary in each level (except for leaf summaries)
// represents some number of entries in the following level. For example, each
// summary in the root level may represent a 16 GiB region of address space,
// and in the next level there could be 8 corresponding entries which represent 2
// GiB subsections of that 16 GiB region, each of which could correspond to 8
// entries in the next level which each represent 256 MiB regions, and so on.
//
// Thus, this design only scales to heaps of a bounded size, but can always be
// extended to larger heaps by simply adding levels to the radix tree, which
// mostly costs additional virtual address space. The choice of managing large
// arrays also means that a large amount of virtual address space may be reserved
// by the runtime.

package runtime
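
// The sketch below is illustrative only and uses hypothetical names: the
// runtime's real summary type is pallocSum, a single uint64 with the three
// fields packed into 21 bits each. It shows how the three properties
// described above (free pages at the start, at the end, and the maximum
// run anywhere) compose when two adjacent child regions are rolled up
// into their parent's summary, assuming each child spans childPages pages.

// sumSketch is a hypothetical, unpacked stand-in for a radix tree summary.
type sumSketch struct {
	start, max, end uintptr // contiguous free pages at the start, anywhere, and the end
}

// mergeSumSketch combines the summaries of two adjacent regions, each
// childPages pages long, into the summary of the region covering both.
func mergeSumSketch(a, b sumSketch, childPages uintptr) sumSketch {
	m := sumSketch{start: a.start, end: b.end}
	if a.start == childPages {
		// a is entirely free, so the free run at the start of the
		// merged region continues into b.
		m.start += b.start
	}
	if b.end == childPages {
		// Likewise, b is entirely free, so the free run at the end
		// of the merged region extends back into a.
		m.end += a.end
	}
	// The largest free run lies within a, within b, or straddles the
	// boundary between them.
	m.max = max(a.max, b.max, a.end+b.start)
	return m
}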
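
// As a back-of-the-envelope illustration of the level geometry described
// above (these are hypothetical mirrors of the doc comment's example, not
// the runtime's actual declarations, which live in mpagealloc_64bit.go as
// levelBits and levelShift), the summary covering an address at a given
// level can be found by shifting the address down by that level's
// granularity.

// Hypothetical geometry: five levels, a 16 GiB root granularity, and a
// fanout of 8 (2^3) between adjacent levels, giving per-summary regions
// of 16 GiB, 2 GiB, 256 MiB, 32 MiB, and 4 MiB (one chunk).
const (
	sketchLevels      = 5
	sketchRootShift   = 34 // log2(16 GiB)
	sketchFanoutShift = 3  // log2(8)
)

// levelIndexSketch returns the index into level l's summary array of the
// summary whose region contains addr.
func levelIndexSketch(addr uintptr, l int) int {
	shift := uint(sketchRootShift - l*sketchFanoutShift)
	return int(addr >> shift)
}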
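
// Finally, a standalone sketch of the kind of bit-wise trick the
// first-fit search leans on (the runtime's real version is findBitRange64
// in mpallocbits.go, which ends with a hardware trailing-zeros intrinsic
// rather than the scan loop here). Given one 64-bit word of the bitmap,
// where 1 means in-use and 0 means free, it returns the bit index of the
// first run of n consecutive free pages, or 64 if no such run exists.
// It requires 1 <= n <= 64.
func firstFreeRunSketch(bitmap uint64, n uint) uint {
	// Invert so set bits mark free pages, then repeatedly AND the mask
	// with a shifted copy of itself. Each step extends the run length
	// that every surviving bit certifies, so after the loop bit i is
	// set iff pages i..i+n-1 are all free.
	p := ^bitmap
	for k := uint(1); k < n; {
		s := k
		if s > n-k {
			s = n - k
		}
		p &= p >> s
		k += s
	}
	// The lowest surviving bit is the address-ordered first fit.
	for i := uint(0); i < 64; i++ {
		if p&(1<<i) != 0 {
			return i
		}
	}
	return 64
}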