# Inspect how rkt works

There are a variety of ways to inspect how containers work.
Linux provides APIs that expose information about namespaces (the proc filesystem) and cgroups (the cgroup filesystem).
We also have tools like strace that let us see which system calls a process makes.

This document explains how to use those APIs and tools to give details on what rkt does under the hood.

Note that this is not a comprehensive analysis of the inner workings of rkt, but a starting point for people interested in learning how containers work.

## What syscalls does rkt use?

Let's use [strace][strace] to find out what system calls rkt uses to set up containers.
We'll only trace a handful of syscalls since, by default, strace traces every syscall, which results in a lot of output.
We'll also redirect the output to a file to make the analysis easier (`-f` follows child processes, `-s 512` prints longer strings, and `-e` restricts tracing to the listed syscalls).

```bash
$ sudo strace -f -s 512 -e unshare,clone,mount,chroot,execve -o out.txt rkt run coreos.com/etcd:v2.0.10
...
^]^]Container rkt-e6d92625-aa3f-4449-bf5d-43ffed440de4 terminated by signal KILL.
```

We now have our trace in `out.txt`; let's go through some of its relevant parts.

### stage0

First, we see the actual execution of the rkt command:

```
5710  execve("/home/iaguis/work/go/src/github.com/rkt/rkt/build-rkt/target/bin/rkt", ["rkt", "run", "coreos.com/etcd:v2.0.10"], 0x7ffce2052be8 /* 22 vars */) = 0
```

Since the image was already fetched and we don't trace many system calls, nothing too exciting happens here except mounting the container filesystems.

```
5710  mount("overlay", "/var/lib/rkt/pods/run/d5513d49-d14f-45d1-944b-39437798ddda/stage1/rootfs", "overlay", 0, "lowerdir=/var/lib/rkt/cas/tree/deps-sha512-cc076d6c508223cc3c13c24d09365d64b6d15e7915a165eab1d9e87f87be5015/rootfs,upperdir=/var/lib/rkt/pods/run/d5513d49-d14f-45d1-944b-39437798ddda/overlay/deps-sha512-cc076d6c508223cc3c13c24d09365d64b6d15e7915a165eab1d9e87f87be5015/upper,workdir=/var/lib/rkt/pods/run/d5513d49-d14f-45d1-944b-39437798ddda/overlay/deps-sha512-cc076d6c508223cc3c13c24d09365d64b6d15e7915a165eab1d9e87f87be5015/work") = 0
5710  mount("overlay", "/var/lib/rkt/pods/run/d5513d49-d14f-45d1-944b-39437798ddda/stage1/rootfs/opt/stage2/etcd/rootfs", "overlay", 0, "lowerdir=/var/lib/rkt/cas/tree/deps-sha512-c0de11e9d504069810da931c94aece3bcc5430dc20f9a5177044eaef62f93fcc/rootfs,upperdir=/var/lib/rkt/pods/run/d5513d49-d14f-45d1-944b-39437798ddda/overlay/deps-sha512-c0de11e9d504069810da931c94aece3bcc5430dc20f9a5177044eaef62f93fcc/upper/etcd,workdir=/var/lib/rkt/pods/run/d5513d49-d14f-45d1-944b-39437798ddda/overlay/deps-sha512-c0de11e9d504069810da931c94aece3bcc5430dc20f9a5177044eaef62f93fcc/work/etcd") = 0
```

We can see that rkt mounts the stage1 and stage2 filesystems with the [tree store][treestore] as `lowerdir`, in the directories where rkt expects them to be.

Note that the stage2 tree is mounted within the stage1 tree via `/opt/stage2`.
You can read more about the tree structure in [rkt architecture][architecture-stage0].

This means that, for a given tree store, everything is shared in a copy-on-write (COW) manner: the bits each container modifies end up in its `upperdir` and appear transparently in the mount destination, while everything else is shared read-only from the lower layer.
You can read more about this filesystem in the [overlay documentation][overlay].
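
To see this copy-on-write behavior in isolation, you can set up a small overlay mount by hand. This is a standalone sketch using throwaway directories under `/tmp`, unrelated to rkt's own mounts:

```
# Build a throwaway overlay: the lower layer stays read-only, writes land in upperdir.
$ mkdir -p /tmp/overlay-demo/{lower,upper,work,merged}
$ echo "from lower" > /tmp/overlay-demo/lower/file
$ sudo mount -t overlay overlay \
    -o lowerdir=/tmp/overlay-demo/lower,upperdir=/tmp/overlay-demo/upper,workdir=/tmp/overlay-demo/work \
    /tmp/overlay-demo/merged
$ echo "modified" | sudo tee /tmp/overlay-demo/merged/file >/dev/null
$ cat /tmp/overlay-demo/lower/file    # the shared lower layer is untouched
from lower
$ cat /tmp/overlay-demo/upper/file    # the modification was copied up to upperdir
modified
```

This is the same `lowerdir`/`upperdir`/`workdir` layout you can see in the mount calls above, with the tree store playing the role of the shared lower layer.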


### stage1

This is where most of the interesting things happen, starting with the execution of the stage1 [run entrypoint][run-entrypoint], which is `/init` by default:

```
5710  execve("/var/lib/rkt/pods/run/d5513d49-d14f-45d1-944b-39437798ddda/stage1/rootfs/init", ["/var/lib/rkt/pods/run/d5513d49-d14f-45d1-944b-39437798ddda/stage1/rootfs/init", "--net=default", "--local-config=/etc/rkt", "d5513d49-d14f-45d1-944b-39437798ddda"], 0xc42009b040 /* 25 vars */ <unfinished ...>
```

init does a bunch of stuff, including creating the container's network namespace and mounting a reference to it on the host filesystem:

```
5723  unshare(CLONE_NEWNET)             = 0
5723  mount("/proc/5710/task/5723/ns/net", "/var/run/netns/cni-eee014d2-8268-39cc-c176-432bbbc9e959", 0xc42017c6a8, MS_BIND, NULL) = 0
```
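
Creating a network namespace and pinning it with a bind mount is the same mechanism `ip netns` uses for named namespaces, so you can reproduce the idea by hand (a standalone sketch; the namespace name is arbitrary and the interface listing is abbreviated):

```
$ sudo ip netns add demo            # unshare(CLONE_NEWNET) plus a bind mount under /var/run/netns
$ ls /var/run/netns
demo
$ sudo ip netns exec demo ip link   # only a loopback device exists in the new namespace
1: lo: <LOOPBACK> mtu 65536 qdisc noop state DOWN mode DEFAULT group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
$ sudo ip netns delete demo
```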

After creating the network namespace, it will execute the relevant [CNI][cni] plugins from within that network namespace.
The default network uses the [ptp plugin][ptp] with [host-local][host-local] as IPAM:

```
5725  execve("stage1/rootfs/usr/lib/rkt/plugins/net/ptp", ["stage1/rootfs/usr/lib/rkt/plugins/net/ptp"], 0xc4201ac000 /* 32 vars */ <unfinished ...>
5730  execve("stage1/rootfs/usr/lib/rkt/plugins/net/host-local", ["stage1/rootfs/usr/lib/rkt/plugins/net/host-local"], 0xc42008e240 /* 32 vars */ <unfinished ...>
```

In this case, the CNI plugins used come from rkt's stage1, but [rkt is also able to use CNI plugins installed externally](https://github.com/rkt/rkt/blob/v1.30.0/Documentation/networking/overview.md#custom-plugins).
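
As an illustration of what such a network configuration might look like, a ptp network with host-local IPAM can be described by a JSON file dropped into `/etc/rkt/net.d/`; the file name, network name, and subnet below are made up for the example:

```
$ cat /etc/rkt/net.d/10-example.conf
{
    "name": "example-net",
    "type": "ptp",
    "ipMasq": true,
    "ipam": {
        "type": "host-local",
        "subnet": "10.1.0.0/24",
        "routes": [ { "dst": "0.0.0.0/0" } ]
    }
}
```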

The plugins will do some iptables magic to configure the network:

```
5739  execve("/usr/bin/iptables", ["iptables", "--version"], 0xc42013e000 /* 32 vars */ <unfinished ...>
5740  execve("/usr/bin/iptables", ["/usr/bin/iptables", "-t", "nat", "-N", "CNI-7a59ad232c32bcea94ee08d5", "--wait"], 0xc4200b0a20 /* 32 vars */ <unfinished ...>
...
```

After the network is configured, rkt mounts the container cgroups instead of letting systemd-nspawn do it, because we want control over how they're mounted.
We also mount the host cgroups if they're not already mounted in the way systemd-nspawn expects them, as on old distributions or distributions that don't use systemd (e.g. [Void Linux][void-linux]).

We do this in a new mount namespace to avoid polluting the host mounts and to get automatic cleanup when the container exits (`CLONE_NEWNS` is the flag for [mount namespaces][man-namespaces] for historical reasons: it was the first namespace implemented on Linux):

```
5710  unshare(CLONE_NEWNS)              = 0
```
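
A nice property of doing this in a separate mount namespace is that the extra mounts simply vanish when the namespace's last process exits. You can see the same effect with `unshare(1)` in a standalone sketch:

```
# Mounts created inside a new mount namespace don't leak to the host.
$ sudo unshare -m sh -c 'mount -t tmpfs tmpfs /mnt && findmnt /mnt'
TARGET SOURCE FSTYPE OPTIONS
/mnt   tmpfs  tmpfs  rw,relatime
$ findmnt /mnt
```

The second `findmnt` prints nothing because the tmpfs only ever existed inside the temporary mount namespace.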

Here we mount the container cgroup hierarchies read-write so the pod can modify its own cgroups, but we mount the controllers read-only so the pod can't modify other cgroups:

```
5710  mount("stage1/rootfs/sys/fs/cgroup/freezer/machine.slice/machine-rkt\\x2dd5513d49\\x2dd14f\\x2d45d1\\x2d944b\\x2d39437798ddda.scope/system.slice", "stage1/rootfs/sys/fs/cgroup/freezer/machine.slice/machine-rkt\\x2dd5513d49\\x2dd14f\\x2d45d1\\x2d944b\\x2d39437798ddda.scope/system.slice", 0xc42027d2a8, MS_BIND, NULL) = 0
...
5710  mount("stage1/rootfs/sys/fs/cgroup/freezer", "stage1/rootfs/sys/fs/cgroup/freezer", 0xc42027d2b8, MS_RDONLY|MS_NOSUID|MS_NODEV|MS_NOEXEC|MS_REMOUNT|MS_BIND, NULL) = 0
```
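
The pattern in those two calls (a path bind-mounted onto itself, then the bind mount remounted read-only) is a common trick, and you can reproduce it in isolation with a scratch directory:

```
$ mkdir /tmp/ro-demo
$ sudo mount --bind /tmp/ro-demo /tmp/ro-demo
$ sudo mount -o remount,bind,ro,nosuid,nodev,noexec /tmp/ro-demo
$ touch /tmp/ro-demo/file
touch: cannot touch '/tmp/ro-demo/file': Read-only file system
$ sudo umount /tmp/ro-demo
```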

Now it's time to start systemd-nspawn to create the pod itself:

```
5710  execve("stage1/rootfs/usr/lib/ld-linux-x86-64.so.2", ["stage1/rootfs/usr/lib/ld-linux-x86-64.so.2", "stage1/rootfs/usr/bin/systemd-nspawn", "--boot", "--notify-ready=yes", "--register=true", "--link-journal=try-guest", "--quiet", "--uuid=d5513d49-d14f-45d1-944b-39437798ddda", "--machine=rkt-d5513d49-d14f-45d1-944b-39437798ddda", "--directory=stage1/rootfs", "--capability=CAP_AUDIT_WRITE,CAP_CHOWN,CAP_DAC_OVERRIDE,CAP_FSETID,CAP_FOWNER,CAP_KILL,CAP_MKNOD,CAP_NET_RAW,CAP_NET_BIND_SERVICE,CAP_SETUID,CAP_SETGID,CAP_SETPCAP,CAP_SETFCAP,CAP_SYS_CHROOT", "--", "--default-standard-output=tty", "--log-target=null", "--show-status=0"], 0xc4202bc0f0 /* 29 vars */ <unfinished ...>
```

Note that we don't need to pass the `--private-network` option because rkt already created and configured the network using CNI.

One interesting thing systemd-nspawn does is move the container filesystem tree to `/` and chroot into it:

```
5747  mount("/var/lib/rkt/pods/run/d5513d49-d14f-45d1-944b-39437798ddda/stage1/rootfs", "/", NULL, MS_MOVE, NULL) = 0
5747  chroot(".")                       = 0
```

It also creates all the other namespaces: mount, UTS, IPC, and PID.
Check [namespaces(7)][man-namespaces] for more information.

```
5747  clone(child_stack=NULL, flags=CLONE_NEWNS|CLONE_NEWUTS|CLONE_NEWIPC|CLONE_NEWPID|SIGCHLD) = 5748
```
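
Just to see those flags in action, a roughly similar set of namespaces can be created by hand with `unshare(1)`; this is only a loose approximation of what systemd-nspawn does:

```
# New mount, UTS, IPC, and PID namespaces; --fork makes the shell PID 1 in the new PID namespace.
$ sudo unshare --mount --uts --ipc --pid --fork sh -c 'hostname demo; hostname; echo $$'
demo
1
```

The hostname change stays inside the new UTS namespace and the shell sees itself as PID 1, while the host's hostname is untouched.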

Once it's done creating the container, it will execute the init process, which is systemd:

```
5748  execve("/usr/lib/systemd/systemd", ["/usr/lib/systemd/systemd", "--default-standard-output=tty", "--log-target=null", "--show-status=0"], 0x7f904604f250 /* 8 vars */) = 0
```

systemd will then execute systemd-journald to handle logging:

```
5749  execve("/usr/lib/systemd/systemd-journald", ["/usr/lib/systemd/systemd-journald"], 0x5579c5d79d50 /* 8 vars */ <unfinished ...>
...
```

And at some point it will start our application's service (in this example, etcd).

But first, it needs to execute its companion `prepare-app` dependency:

```
5751  execve("/prepare-app", ["/prepare-app", "/opt/stage2/etcd/rootfs"], 0x5579c5d7d580 /* 7 vars */) = 0
```

`prepare-app` bind-mounts a lot of files from stage1 to stage2, so our app has access to a [reasonable environment][os-spec]:

```
5751  mount("/dev/null", "/opt/stage2/etcd/rootfs/dev/null", 0x49006f, MS_BIND, NULL) = 0
5751  mount("/dev/zero", "/opt/stage2/etcd/rootfs/dev/zero", 0x49006f, MS_BIND, NULL) = 0
...
```

After it's finished, our etcd service is ready to start!

Since we use some additional security directives in our service file (like [`InaccessiblePaths=`][inaccessible-paths] or [`SystemCallFilter=`][syscall-filter]), systemd will create an additional mount namespace per application in the pod and move the stage2 filesystem to `/`:

```
5753  unshare(CLONE_NEWNS)              = 0
...
5753  mount("/opt/stage2/etcd/rootfs", "/", NULL, MS_MOVE, NULL) = 0
5753  chroot(".")                       = 0
```

### stage2

Now we're ready to execute the etcd binary.

```
5753  execve("/etcd", ["/etcd"], 0x5579c5dbd660 /* 9 vars */) = 0
```

And that's it, etcd is running in a container!

## Inspect running containers with procfs

We'll now inspect a running container by using the [proc filesystem][procfs].

Let's start a new container, limiting the CPU to 200 millicores and the memory to 100MB:

```
$ sudo rkt run --interactive kinvolk.io/aci/busybox --memory=100M --cpu=200m
/ # 
```
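
If you don't have the pod UUID at hand, `rkt list` in another terminal shows the pods known to rkt; the output below is trimmed to the pod from this example:

```
$ sudo rkt list
UUID		APP	IMAGE NAME		STATE	CREATED		STARTED		NETWORKS
567264dd	busybox	kinvolk.io/aci/busybox	running	2 minutes ago	2 minutes ago	default:ip4=172.16.28.37
```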

First we'll need to find the PID of a process running inside the container.
We can see the pod's PID by running `rkt status` with the pod UUID:

```
$ sudo rkt status 567264dd
state=running
created=2018-01-03 17:17:39.653 +0100 CET
started=2018-01-03 17:17:39.749 +0100 CET
networks=default:ip4=172.16.28.37
pid=10985
exited=false
```

Now we need to find the sh process running inside the container:

```
$ ps auxf | grep [1]0985 -A 3
root     10985  0.0  0.0  54204  5040 pts/2    S+   17:17   0:00          \_ stage1/rootfs/usr/lib/ld-linux-x86-64.so.2 stage1/rootfs/usr/bin/systemd-nspawn --boot --notify-ready=yes --register=true --link-journal=try-guest --quiet --uuid=567264dd-f28d-42fb-84a1-4714dde9e82c --machine=rkt-567264dd-f28d-42fb-84a1-4714dde9e82c --directory=stage1/rootfs --capability=CAP_AUDIT_WRITE,CAP_CHOWN,CAP_DAC_OVERRIDE,CAP_FSETID,CAP_FOWNER,CAP_KILL,CAP_MKNOD,CAP_NET_RAW,CAP_NET_BIND_SERVICE,CAP_SETUID,CAP_SETGID,CAP_SETPCAP,CAP_SETFCAP,CAP_SYS_CHROOT -- --default-standard-output=tty --log-target=null --show-status=0
root     11021  0.0  0.0  62280  7392 ?        Ss   17:17   0:00              \_ /usr/lib/systemd/systemd --default-standard-output=tty --log-target=null --show-status=0
root     11022  0.0  0.0  66408  8812 ?        Ss   17:17   0:00                  \_ /usr/lib/systemd/systemd-journald
root     11026  0.0  0.0   1212     4 pts/0    Ss+  17:17   0:00                  \_ /bin/sh
```

It's 11026!

Let's start by having a look at its namespaces:

```
$ sudo ls -l /proc/11026/ns/
total 0
lrwxrwxrwx 1 root root 0 Jan  3 17:19 cgroup -> 'cgroup:[4026531835]'
lrwxrwxrwx 1 root root 0 Jan  3 17:19 ipc -> 'ipc:[4026532764]'
lrwxrwxrwx 1 root root 0 Jan  3 17:19 mnt -> 'mnt:[4026532761]'
lrwxrwxrwx 1 root root 0 Jan  3 17:19 net -> 'net:[4026532702]'
lrwxrwxrwx 1 root root 0 Jan  3 17:19 pid -> 'pid:[4026532765]'
lrwxrwxrwx 1 root root 0 Jan  3 17:19 pid_for_children -> 'pid:[4026532765]'
lrwxrwxrwx 1 root root 0 Jan  3 17:19 user -> 'user:[4026531837]'
lrwxrwxrwx 1 root root 0 Jan  3 17:19 uts -> 'uts:[4026532763]'
```

We can compare them with the namespaces on the host:

```
$ sudo ls -l /proc/1/ns/
total 0
lrwxrwxrwx 1 root root 0 Jan  3 17:20 cgroup -> 'cgroup:[4026531835]'
lrwxrwxrwx 1 root root 0 Jan  3 17:20 ipc -> 'ipc:[4026531839]'
lrwxrwxrwx 1 root root 0 Jan  3 17:20 mnt -> 'mnt:[4026531840]'
lrwxrwxrwx 1 root root 0 Jan  3 17:20 net -> 'net:[4026532009]'
lrwxrwxrwx 1 root root 0 Jan  3 17:20 pid -> 'pid:[4026531836]'
lrwxrwxrwx 1 root root 0 Jan  3 17:20 pid_for_children -> 'pid:[4026531836]'
lrwxrwxrwx 1 root root 0 Jan  3 17:20 user -> 'user:[4026531837]'
lrwxrwxrwx 1 root root 0 Jan  3 17:20 uts -> 'uts:[4026531838]'
```

We can see that the cgroup and user namespaces are the same as the host's, since rkt doesn't use cgroup namespaces and user namespaces weren't enabled for this run.
If, for example, we run rkt with `--net=host`, we'll see that the network namespace is the same as the host's too.
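
A quick way to compare two processes' namespaces is to compare the symlink targets directly. Using the network namespace from this example:

```
$ sudo readlink /proc/1/ns/net /proc/11026/ns/net
net:[4026532009]
net:[4026532702]
```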

Running [lsns][man-lsns] we can see this information too, along with the PID that created each namespace:

```
$ sudo lsns -p 11026
        NS TYPE   NPROCS    PID USER COMMAND
4026531835 cgroup    231      1 root /sbin/init
4026531837 user      231      1 root /sbin/init
4026532702 net         4  10945 root stage1/rootfs/usr/lib/ld-linux-x86-64.so.2 stage1/rootfs/usr/bin/systemd-nspawn --boot --notify-ready=yes --register=true --link-journal=
4026532761 mnt         1  11026 root /etcd
4026532763 uts         3  11021 root /usr/lib/systemd/systemd --default-standard-output=tty --log-target=null --show-status=0
4026532764 ipc         3  11021 root /usr/lib/systemd/systemd --default-standard-output=tty --log-target=null --show-status=0
4026532765 pid         3  11021 root /usr/lib/systemd/systemd --default-standard-output=tty --log-target=null --show-status=0
```

We can also see some interesting data about the process:

```
$ sudo cat /proc/11026/status
Name:	sh
Umask:	0022
State:	S (sleeping)
...
CapBnd:	00000000a80425fb
...
NoNewPrivs:	0
Seccomp:	2
...
```

This tells us the container is not using the `no_new_privs` feature, but it is using [seccomp][seccomp] (a `Seccomp` value of 2 means filter mode; 0 would mean disabled and 1 strict mode).

We can also see what [capabilities][capabilities] are in the bounding set of the process. Let's decode them with `capsh`:

```
$ capsh --decode=00000000a80425fb
0x00000000a80425fb=cap_chown,cap_dac_override,cap_fowner,cap_fsetid,cap_kill,cap_setgid,cap_setuid,cap_setpcap,cap_net_bind_service,cap_net_raw,cap_sys_chroot,cap_mknod,cap_audit_write,cap_setfcap
```

Another interesting thing is the environment variables of the process:

```
$ sudo cat /proc/11026/environ | tr '\0' '\n'
PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
HOME=/root
LOGNAME=root
USER=root
SHELL=/bin/sh
INVOCATION_ID=d5d94569d482495c809c113fca55abd4
TERM=xterm
AC_APP_NAME=busybox
```

Finally, we can check which cgroups the process belongs to:

```
$ sudo cat /proc/11026/cgroup
11:freezer:/
10:rdma:/
9:cpu,cpuacct:/machine.slice/machine-rkt\x2d97910fdc\x2d13ec\x2d4025\x2d8f93\x2d5ddea0089eff.scope/system.slice/busybox.service
8:devices:/machine.slice/machine-rkt\x2d97910fdc\x2d13ec\x2d4025\x2d8f93\x2d5ddea0089eff.scope/system.slice/busybox.service
7:blkio:/machine.slice/machine-rkt\x2d97910fdc\x2d13ec\x2d4025\x2d8f93\x2d5ddea0089eff.scope/system.slice
6:memory:/machine.slice/machine-rkt\x2d97910fdc\x2d13ec\x2d4025\x2d8f93\x2d5ddea0089eff.scope/system.slice/busybox.service
5:perf_event:/
4:pids:/machine.slice/machine-rkt\x2d97910fdc\x2d13ec\x2d4025\x2d8f93\x2d5ddea0089eff.scope/system.slice/busybox.service
3:net_cls,net_prio:/
2:cpuset:/
1:name=systemd:/machine.slice/machine-rkt\x2d97910fdc\x2d13ec\x2d4025\x2d8f93\x2d5ddea0089eff.scope/system.slice/busybox.service
0::/machine.slice/machine-rkt\x2d97910fdc\x2d13ec\x2d4025\x2d8f93\x2d5ddea0089eff.scope/system.slice/busybox.service
```

Let's explore the cgroups a bit more.

## Inspect container cgroups

systemd offers tools to easily inspect the cgroups of containers.

We can use `systemd-cgls` to see the cgroup hierarchy of a container:

```
$ machinectl
MACHINE                                  CLASS     SERVICE OS VERSION ADDRESSES
rkt-97910fdc-13ec-4025-8f93-5ddea0089eff container rkt     -  -       172.16.28.25...

1 machines listed.
$ systemd-cgls -M rkt-97910fdc-13ec-4025-8f93-5ddea0089eff
Control group /machine.slice/machine-rkt\x2d97910fdc\x2d13ec\x2d4025\x2d8f93\x2d5ddea0089eff.scope:
-.slice
├─init.scope
│ └─12474 /usr/lib/systemd/systemd --default-standard-output=tty --log-target=null --show-status=0
└─system.slice
  ├─busybox.service
  │ └─12479 /bin/sh
  └─systemd-journald.service
    └─12475 /usr/lib/systemd/systemd-journald
```

And we can use `systemd-cgtop` to see the resource consumption of the container.
This is the output while running the `yes` command (which is basically an infinite loop that outputs the character `y`, so it takes all the CPU) in the container:

```
Control Group                                                                                            Tasks   %CPU   Memory  Input/s Output/s
/machine.slice/machine-rkt\x2d97910fdc\x2d13ec\x2d4025\x2d8f93\x2d5ddea0089eff.scope                         4   19.6     7.6M        -        -
/machine.slice/machine-rkt\x2d97910fdc\x2d13ec\x2d4025\x2d8f93\x2d5ddea0089eff.scope/system.slice            3   19.6     5.1M        -        -
/machine.slice/machine-rkt\x2d979…c\x2d4025\x2d8f93\x2d5ddea0089eff.scope/system.slice/busybox.service       2   19.6   476.0K        -        -
/machine.slice/machine-rkt\x2d979…d8f93\x2d5ddea0089eff.scope/system.slice/system-prepare\x2dapp.slice       -      -   120.0K        -        -
/machine.slice/machine-rkt\x2d979…\x2d8f93\x2d5ddea0089eff.scope/system.slice/systemd-journald.service       1      -     4.5M        -        -
```

You can see that our CPU limit is working, since we only see a CPU usage of about 20%.

This information can also be gathered from the cgroup filesystem itself.
For example, to see the memory consumed by the busybox application:

```
$ cd /sys/fs/cgroup/memory/machine.slice/machine-rkt\\x2d97910fdc\\x2d13ec\\x2d4025\\x2d8f93\\x2d5ddea0089eff.scope/system.slice/busybox.service/
$ cat memory.usage_in_bytes
487424
```
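
We can look at the CPU limit in the same way. If the `--cpu=200m` limit is applied as a CFS quota (which would match the ~20% usage seen above), we would expect the quota to be 20% of the scheduling period; the values below are what that would look like:

```
$ cd /sys/fs/cgroup/cpu/machine.slice/machine-rkt\\x2d97910fdc\\x2d13ec\\x2d4025\\x2d8f93\\x2d5ddea0089eff.scope/system.slice/busybox.service/
$ cat cpu.cfs_period_us cpu.cfs_quota_us
100000
20000
```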

You can find out more about cgroups in their [kernel documentation][cgroups].

[strace]: https://linux.die.net/man/1/strace
[overlay]: https://www.kernel.org/doc/Documentation/filesystems/overlayfs.txt
[treestore]: ../../store/treestore/tree.go
[run-entrypoint]: stage1-implementors-guide.md#rkt-run
[cni]: https://github.com/containernetworking/cni
[ptp]: ../networking/overview.md#ptp
[host-local]: ../networking/overview.md#host-local
[seccomp]: ../seccomp-guide.md
[procfs]: https://www.kernel.org/doc/Documentation/filesystems/proc.txt
[cgroups]: https://www.kernel.org/doc/Documentation/cgroup-v1/cgroups.txt
[man-namespaces]: http://man7.org/linux/man-pages/man7/namespaces.7.html
[architecture-stage0]: architecture.md#stage-0
[inaccessible-paths]: https://www.freedesktop.org/software/systemd/man/systemd.exec.html#ReadWritePaths=
[syscall-filter]: https://www.freedesktop.org/software/systemd/man/systemd.exec.html#SystemCallFilter=
[capabilities]: ../capabilities-guide.md
[os-spec]: https://github.com/appc/spec/blob/v0.8.11/spec/OS-SPEC.md
[man-lsns]: http://man7.org/linux/man-pages/man8/lsns.8.html
[void-linux]: https://www.voidlinux.eu/