# Using rkt with systemd

`rkt` is designed to cooperate with init systems, like [`systemd`][systemd]. rkt implements a simple CLI that directly executes processes, and does not interpose a long-running daemon, so the lifecycle of rkt pods can be directly managed by systemd. Standard systemd idioms like `systemctl start` and `systemctl stop` work out of the box.

![rkt-systemd-interaction](rkt-systemd-interaction.svg)

In the shell excerpts below, a `#` prompt indicates commands that require root privileges, while the `$` prompt denotes commands issued as an unprivileged user.

## systemd-run

The [`systemd-run`][systemd-run] utility is a convenient shortcut for testing a service before making it permanent in a unit file. To start a "daemonized" container that forks the container processes into the background, wrap the invocation of `rkt` with `systemd-run`:

```
# systemd-run --slice=machine rkt run coreos.com/etcd:v2.2.5
Running as unit run-29486.service.
```

The `--slice=machine` option to `systemd-run` places the service in `machine.slice` rather than the host's `system.slice`, isolating containers in their own cgroup area.

Invoking a rkt container through systemd-run in this way creates a transient service unit that can be managed with the usual systemd tools:

```
$ systemctl status run-29486.service
● run-29486.service - /bin/rkt run coreos.com/etcd:v2.2.5
   Loaded: loaded (/run/systemd/system/run-29486.service; static; vendor preset: disabled)
  Drop-In: /run/systemd/system/run-29486.service.d
           └─50-Description.conf, 50-ExecStart.conf, 50-Slice.conf
   Active: active (running) since Wed 2016-02-24 12:50:20 CET; 27s ago
 Main PID: 29487 (ld-linux-x86-64)
   Memory: 36.1M
      CPU: 1.467s
   CGroup: /machine.slice/run-29486.service
           ├─29487 stage1/rootfs/usr/lib/ld-linux-x86-64.so.2 stage1/rootfs/usr/bin/systemd-nspawn --boot -Zsystem_u:system_r:svirt_lxc_net_t:s0:c46...
           ├─29535 /usr/lib/systemd/systemd --default-standard-output=tty --log-target=null --log-level=warning --show-status=0
           └─system.slice
             ├─etcd.service
             │ └─29544 /etcd
             └─systemd-journald.service
               └─29539 /usr/lib/systemd/systemd-journald
```

Since every pod is registered with [`machined`][machined] under a machine name of the form `rkt-$UUID`, the systemd tools can inspect pod logs, or stop and restart pod "machines". Use the `machinectl` tool to print the list of rkt pods:

```
$ machinectl list
MACHINE                                  CLASS     SERVICE
rkt-2b0b2cec-8f63-4451-9431-9f8e9b265a23 container nspawn

1 machines listed.
```

Given the name of this rkt machine, `journalctl` can inspect its logs, or `machinectl` can shut it down:

```
# journalctl -M rkt-2b0b2cec-8f63-4451-9431-9f8e9b265a23
...
Feb 24 12:50:22 rkt-2b0b2cec-8f63-4451-9431-9f8e9b265a23 etcd[4]: 2016-02-24 11:50:22.518030 I | raft: ce2a822cea30bfca received vote from ce2a822cea30bfca at term 2
Feb 24 12:50:22 rkt-2b0b2cec-8f63-4451-9431-9f8e9b265a23 etcd[4]: 2016-02-24 11:50:22.518073 I | raft: ce2a822cea30bfca became leader at term 2
Feb 24 12:50:22 rkt-2b0b2cec-8f63-4451-9431-9f8e9b265a23 etcd[4]: 2016-02-24 11:50:22.518086 I | raft: raft.node: ce2a822cea30bfca elected leader ce2a822cea30bfca at te
Feb 24 12:50:22 rkt-2b0b2cec-8f63-4451-9431-9f8e9b265a23 etcd[4]: 2016-02-24 11:50:22.518720 I | etcdserver: published {Name:default ClientURLs:[http://localhost:2379 h
Feb 24 12:50:22 rkt-2b0b2cec-8f63-4451-9431-9f8e9b265a23 etcd[4]: 2016-02-24 11:50:22.518955 I | etcdserver: setting up the initial cluster version to 2.2
Feb 24 12:50:22 rkt-2b0b2cec-8f63-4451-9431-9f8e9b265a23 etcd[4]: 2016-02-24 11:50:22.521680 N | etcdserver: set the initial cluster version to 2.2
# machinectl poweroff rkt-2b0b2cec-8f63-4451-9431-9f8e9b265a23
$ machinectl list
MACHINE CLASS SERVICE

0 machines listed.
```

Note that, for the "coreos" and "kvm" stage1 flavors, journald integration is only supported if systemd is compiled with `lz4` compression enabled. To check this, use `systemctl`:

```
$ systemctl --version
systemd 235
[...] +LZ4 [...]
```

If the output contains `-LZ4`, journal entries will not be available.

## Managing pods as systemd services

### Notifications

Sometimes, defining dependencies between containers makes sense.
An example use case is a container running a server, and another container running a client.
We want the server to start before the client tries to connect.

![sd_notify-background](sd_notify-background.svg)

This can be accomplished by using systemd services and dependencies.
However, for this to work in rkt containers, we need special support.

systemd inside stage1 can notify systemd on the host that the pod is ready.
To make sure that stage1's systemd sends this notification at the right time, the app can use the [sd_notify][sd_notify] mechanism.

To make use of this feature, you need to set the annotation `appc.io/executor/supports-systemd-notify` to true in the image manifest whenever the app supports sd\_notify (see the example manifest below).
If you build your image with [`acbuild`][acbuild], you can use the command: `acbuild annotation add appc.io/executor/supports-systemd-notify true`.

```
{
	"acKind": "ImageManifest",
	"acVersion": "0.8.4",
	"name": "coreos.com/etcd",
	...
	"app": {
		"exec": [
			"/etcd"
		],
		...
	},
	"annotations": [
		{
			"name": "appc.io/executor/supports-systemd-notify",
			"value": "true"
		}
	]
}
```

This feature is always available when using the "coreos" stage1 flavor.
If you use the "host" stage1 flavor (e.g. from the Fedora RPM or Debian deb package), you will need systemd >= v231.
To verify how it works, start the app in a transient notify-type service, then periodically check its status with `systemctl status test`:
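
```
# systemd-run --unit=test --service-type=notify rkt run --insecure-options=image /path/to/your/app/image
# systemctl status test
```

With `--service-type=notify`, the unit stays in the `activating` state until the app signals `READY=1`; once the notification arrives, `systemctl status test` reports the unit as `active`.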

If the pod uses a stage1 image with systemd v231 (or greater), the pod is seen as active from the host only once systemd inside stage1 has reached its default target.
Before this change, the pod was marked active as soon as it started.
This makes it possible to easily set up dependencies between pods and host services.

Moreover, by using [`SdNotify()`][sdnotify-go] in the application, the pod can be marked as ready only when all of its apps, or one particular app, is ready.
For more information, check the [systemd unit][systemd-unit] documentation.
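
As a sketch of such a dependency (the unit names, image name, and client binary here are hypothetical), a client service on the host can be ordered after a notify-type rkt pod:

```
# my-server.service
[Unit]
Description=Server pod that signals readiness

[Service]
Slice=machine.slice
Type=notify
ExecStart=/usr/bin/rkt run myapp.com/my-server:v1.0
KillMode=mixed
```

```
# my-client.service
[Unit]
Description=Client that should start only once the server is ready
Requires=my-server.service
After=my-server.service

[Service]
ExecStart=/usr/bin/my-client
```

With `Type=notify`, `my-server.service` only becomes `active` once the pod signals readiness, so systemd will not start `my-client.service` before that point.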

This is how the sd_notify signal is propagated to the host system:

![sd_notify-propagation](sd_notify-propagation.svg)

#### Using the systemd notification mechanism in an app

Below is a simple example of an app using the systemd notification mechanism via the [go-systemd][go-systemd] bindings:

```go
package main

import (
	"log"
	"net"
	"net/http"

	"github.com/coreos/go-systemd/daemon"
)

func main() {
	http.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
		log.Printf("request from %v\n", r.RemoteAddr)
		w.Write([]byte("hello\n"))
	})
	ln, err := net.Listen("tcp", ":5000")
	if err != nil {
		log.Fatalf("Listen failed: %s", err)
	}
	// The listening socket is ready, so signal readiness to systemd.
	sent, err := daemon.SdNotify(true, "READY=1")
	if err != nil {
		log.Fatalf("Notification failed: %s", err)
	}
	if !sent {
		log.Fatal("Notification not supported (NOTIFY_SOCKET is unset)")
	}
	log.Fatal(http.Serve(ln, nil))
}
```

You can run an app that supports `sd_notify()` with this command:

```
# systemd-run --slice=machine --service-type=notify rkt run coreos.com/etcd:v2.2.5
Running as unit run-29486.service.
```

### Simple Unit File

The following is a simple example of a unit file using `rkt` to run an `etcd` instance under systemd service management:

```
[Unit]
Description=etcd

[Service]
Slice=machine.slice
ExecStart=/usr/bin/rkt run coreos.com/etcd:v2.2.5
KillMode=mixed
Restart=always
```

This unit can now be managed using the standard `systemctl` commands:

```
# systemctl start etcd.service
# systemctl stop etcd.service
# systemctl restart etcd.service
# systemctl enable etcd.service
# systemctl disable etcd.service
```

Note that no `ExecStop` clause is required. Setting [`KillMode=mixed`][systemd-killmode-mixed] means that running `systemctl stop etcd.service` will send `SIGTERM` to `stage1`'s `systemd`, which in turn will initiate an orderly shutdown inside the pod. systemd is additionally able to send a cleanup `SIGKILL` to any lingering service processes after a timeout. Together this provides complete pod lifecycle management with familiar system init tools.
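
To make the service permanent, install the unit file and reload systemd; a typical sequence (assuming the unit above is saved as `etcd.service`) looks like:

```
# cp etcd.service /etc/systemd/system/etcd.service
# systemctl daemon-reload
# systemctl start etcd.service
```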

### Advanced Unit File

A more advanced unit example takes advantage of a few convenient `systemd` features:

1. Inheriting environment variables specified in the unit with `--inherit-env`. This feature helps keep units concise, instead of layering on many flags to `rkt run`.
2. Using the dependency graph to start our pod after networking has come online. This is helpful if your application requires outside connectivity to fetch remote configuration (for example, from `etcd`).
3. Setting resource limits for this `rkt` pod. This can also be done in the unit file, rather than passed as flags to `rkt run`.
4. Setting `ExecStopPost` to invoke `rkt gc --mark-only` to record the timestamp when the pod exits.
(Run `rkt gc --help` to see more details about this flag.)
After running `rkt gc --mark-only`, the timestamp can be retrieved from the rkt API service in the pod's `gc_marked_at` field.
This timestamp can be treated as the finished time of the pod.

Here is what it looks like all together:

```
[Unit]
# Metadata
Description=MyApp
Documentation=https://myapp.com/docs/1.3.4
# Wait for networking
Requires=network-online.target
After=network-online.target

[Service]
Slice=machine.slice
# Resource limits
Delegate=true
CPUShares=512
MemoryLimit=1G
# Env vars
Environment=HTTP_PROXY=192.0.2.3:5000
Environment=STORAGE_PATH=/opt/myapp
Environment=TMPDIR=/var/tmp
# Fetch the app (not strictly required, `rkt run` will fetch the image if it is not present)
ExecStartPre=/usr/bin/rkt fetch myapp.com/myapp-1.3.4
# Start the app
ExecStart=/usr/bin/rkt run --inherit-env --port=http:8888 myapp.com/myapp-1.3.4
ExecStopPost=/usr/bin/rkt gc --mark-only
KillMode=mixed
Restart=always
```

rkt must be the main process of the service in order to support [isolators][systemd-isolators] correctly and to be well-integrated with [systemd-machined][systemd-machined]. To ensure that rkt is the main process of the service, the pattern `/bin/sh -c "foo ; rkt run ..."` should be avoided, because in that case the main process is `sh`.

In most cases, the parameters `Environment=` and `ExecStartPre=` can simply be used instead of starting a shell. If shell invocation is unavoidable, use `exec` to ensure rkt replaces the preceding shell process:

```
ExecStart=/bin/sh -c "foo ; exec rkt run ..."
```

### Resource restrictions (CPU, IO, Memory)

`rkt` inherits resource limits configured in the systemd service unit file. The systemd documentation explains the various [execution environment][systemd.exec] and [resource control][systemd.resource-control] settings to restrict the CPU, IO, and memory resources.

For example, to restrict the CPU time quota, configure the corresponding [CPUQuota][systemd-cpuquota] setting:

```
[Service]
ExecStart=/usr/bin/rkt run s-urbaniak.github.io/images/stress:0.0.1
CPUQuota=30%
```

```
$ ps -p <PID> -o %cpu
%CPU
30.0
```

Moreover, to pin the rkt pod to certain CPUs, configure the corresponding [CPUAffinity][systemd-cpuaffinity] setting:

```
[Service]
ExecStart=/usr/bin/rkt run s-urbaniak.github.io/images/stress:0.0.1
CPUAffinity=0,3
```

```
$ top
Tasks: 235 total,   1 running, 234 sleeping,   0 stopped,   0 zombie
%Cpu0  : 100.0/0.0   100[||||||||||||||||||||||||||||||||||||||||||||||
%Cpu1  :   6.0/0.7     7[|||
%Cpu2  :   0.7/0.0     1[
%Cpu3  : 100.0/0.0   100[||||||||||||||||||||||||||||||||||||||||||||||
GiB Mem : 25.7/19.484   [
GiB Swap:  0.0/8.000    [

  PID USER      PR  NI    VIRT    RES  %CPU %MEM     TIME+ S COMMAND
11684 root      20   0    3.6m   1.1m 200.0  0.0   8:58.63 S stress
```

### Socket-activated service

`rkt` supports [socket-activated services][systemd-socket-activated]. This means systemd will listen on a port on behalf of a container, and start the container when receiving a connection. An application needs to be able to accept sockets from systemd's native socket passing interface in order to handle socket activation.

To make socket activation work, add a [socket-activated port][aci-socketActivated] to the app container manifest:

```json
...
{
...
    "app": {
        ...
        "ports": [
            {
                "name": "80-tcp",
                "protocol": "tcp",
                "port": 80,
                "count": 1,
                "socketActivated": true
            }
        ]
    }
}
```

Then you will need a pair of `.service` and `.socket` unit files.

In this example, we want to use port 8080 on the host instead of the app's default 80, so we use rkt's `--port` option to override it.

```
# my-socket-activated-app.socket
[Unit]
Description=My socket-activated app's socket

[Socket]
ListenStream=8080
```

```
# my-socket-activated-app.service
[Unit]
Description=My socket-activated app

[Service]
ExecStart=/usr/bin/rkt run --port 80-tcp:8080 myapp.com/my-socket-activated-app:v1.0
KillMode=mixed
```

Finally, start the socket unit:

```
# systemctl start my-socket-activated-app.socket
$ systemctl status my-socket-activated-app.socket
● my-socket-activated-app.socket - My socket-activated app's socket
   Loaded: loaded (/etc/systemd/system/my-socket-activated-app.socket; static; vendor preset: disabled)
   Active: active (listening) since Thu 2015-07-30 12:24:50 CEST; 2s ago
   Listen: [::]:8080 (Stream)

Jul 30 12:24:50 locke-work systemd[1]: Listening on My socket-activated app's socket.
```

Now, a new connection to port 8080 will start your container to handle the request.
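
To give an idea of what accepting a socket from systemd looks like in application code, below is a minimal sketch using the [go-systemd][go-systemd] `activation` package (note that the `Listeners` signature has varied between go-systemd releases; some older versions take an `unsetEnv bool` argument):

```go
package main

import (
	"log"
	"net/http"

	"github.com/coreos/go-systemd/activation"
)

func main() {
	// Retrieve the sockets systemd opened on our behalf, one per
	// ListenStream= line in the .socket unit.
	listeners, err := activation.Listeners()
	if err != nil {
		log.Fatalf("cannot retrieve listeners: %v", err)
	}
	if len(listeners) != 1 {
		log.Fatalf("expected one socket from systemd, got %d", len(listeners))
	}

	http.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
		w.Write([]byte("hello\n"))
	})
	// Serve on the inherited socket instead of opening our own.
	log.Fatal(http.Serve(listeners[0], nil))
}
```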

### Bidirectionally proxy local sockets to another (possibly remote) socket

`rkt` also supports the [socket-proxyd service][systemd-socket-proxyd]. Much like socket activation, with socket-proxyd systemd provides a listener on a given port on behalf of a container, and starts the container when a connection is received. Socket-proxy listening can be useful in environments that lack native support for socket activation. The LKVM stage1 flavor is an example of such an environment.

To set up socket-proxyd, create a network template and three units, like the examples below. This example uses the redis app and a PTP network template in `/etc/rkt/net.d/ptp0.conf`:

```json
{
	"name": "ptp0",
	"type": "ptp",
	"ipMasq": true,
	"ipam": {
		"type": "host-local",
		"subnet": "172.16.28.0/24",
		"routes": [
			{ "dst": "0.0.0.0/0" }
		]
	}
}
```

```
# rkt-redis.service
[Unit]
Description=Socket-proxyd redis server

[Service]
ExecStart=/usr/bin/rkt --insecure-options=image run --net="ptp:IP=172.16.28.101" docker://redis
KillMode=process
```

Note that you have to specify the IP address manually in the systemd unit.

Then you will need a pair of `.service` and `.socket` unit files.

We want to connect through localhost instead of the remote container IP, so we use the following systemd unit to proxy local connections to the container's redis port 6379:

```
# proxy-to-rkt-redis.service
[Unit]
Requires=rkt-redis.service
After=rkt-redis.service

[Service]
ExecStart=/usr/lib/systemd/systemd-socket-proxyd 172.16.28.101:6379
```

Lastly, the related socket unit:

```
# proxy-to-rkt-redis.socket
[Socket]
ListenStream=6371

[Install]
WantedBy=sockets.target
```

Finally, enable and start the socket unit:

```
# systemctl enable proxy-to-rkt-redis.socket
# systemctl start proxy-to-rkt-redis.socket
$ systemctl status proxy-to-rkt-redis.socket
● proxy-to-rkt-redis.socket
   Loaded: loaded (/etc/systemd/system/proxy-to-rkt-redis.socket; enabled; vendor preset: disabled)
   Active: active (listening) since Mon 2016-03-07 11:53:32 CET; 8s ago
   Listen: [::]:6371 (Stream)

Mar 07 11:53:32 user-host systemd[1]: Listening on proxy-to-rkt-redis.socket.
Mar 07 11:53:32 user-host systemd[1]: Starting proxy-to-rkt-redis.socket.
```

Now, a new connection to port 6371 on localhost will start your redis container to handle the request:

```
$ curl http://localhost:6371/
```
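
`curl` merely demonstrates that opening a TCP connection triggers the activation; since redis does not speak HTTP, a more meaningful check, assuming the `redis-cli` client is installed on the host, is:

```
$ redis-cli -p 6371 ping
PONG
```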

## Other tools for managing pods

Let us assume the service from the simple example unit file, above, is started on the host.

### ps auxf

The snippet below, taken from the output of `ps auxf`, shows several things:

1. `rkt` `exec`s stage1's `systemd-nspawn` instead of using the `fork-exec` technique. That is why rkt itself is not listed by `ps`.
2. `systemd-nspawn` runs a typical boot sequence: it spawns `systemd` inside the container, which in turn spawns our desired service(s).
3. There can also be other services running, which may be `systemd`-specific, like `systemd-journald`.

```
$ ps auxf
USER       PID %CPU %MEM    VSZ   RSS TTY      STAT START   TIME COMMAND
root      7258  0.2  0.0  19680  2664 ?        Ss   12:38   0:02 stage1/rootfs/usr/lib/ld-linux-x86-64.so.2 stage1/rootfs/usr/bin/systemd-nspawn --boot --register=true --link-journal=try-guest --quiet --keep-unit --uuid=6d0d9608-a744-4333-be21-942145a97a5a --machine=rkt-6d0d9608-a744-4333-be21-942145a97a5a --directory=stage1/rootfs -- --default-standard-output=tty --log-target=null --log-level=warning --show-status=0
root      7275  0.0  0.0  27348  4316 ?        Ss   12:38   0:00  \_ /usr/lib/systemd/systemd --default-standard-output=tty --log-target=null --log-level=warning --show-status=0
root      7277  0.0  0.0  23832  6100 ?        Ss   12:38   0:00      \_ /usr/lib/systemd/systemd-journald
root      7343  0.3  0.0  10652  7332 ?        Ssl  12:38   0:04      \_ /etcd
```

### systemd-cgls

The `systemd-cgls` command prints the list of cgroups active on the system. The inner `system.slice` shown in the excerpt below is a cgroup in rkt's `stage1`, below which an in-container systemd has been started to shepherd pod apps with complete process lifecycle management:

```
$ systemd-cgls
├─1 /usr/lib/systemd/systemd --switched-root --system --deserialize 22
├─machine.slice
│ └─etcd.service
│   ├─1204 stage1/rootfs/usr/lib/ld-linux-x86-64.so.2 stage1/rootfs/usr/bin/s...
│   ├─1421 /usr/lib/systemd/systemd --default-standard-output=tty --log-targe...
│   └─system.slice
│     ├─etcd.service
│     │ └─1436 /etcd
│     └─systemd-journald.service
│       └─1428 /usr/lib/systemd/systemd-journald
```

### systemd-cgls --all

To display all active cgroups, use the `--all` flag. This will show two `mount` cgroups in the host's `system.slice`: one for the `stage1` root filesystem, the other for the `stage2` root (the pod's filesystem). Inside the pod's `system.slice` there are more `mount` cgroups, mostly for bind mounts of standard `/dev`-tree device files.

```
$ systemd-cgls --all
├─1 /usr/lib/systemd/systemd --switched-root --system --deserialize 22
├─machine.slice
│ └─etcd.service
│   ├─1204 stage1/rootfs/usr/lib/ld-linux-x86-64.so.2 stage1/rootfs/usr/bin/s...
│   ├─1421 /usr/lib/systemd/systemd --default-standard-output=tty --log-targe...
│   └─system.slice
│     ├─proc-sys-kernel-random-boot_id.mount
│     ├─opt-stage2-etcd-rootfs-proc-kmsg.mount
│     ├─opt-stage2-etcd-rootfs-sys.mount
│     ├─opt-stage2-etcd-rootfs-dev-shm.mount
│     ├─opt-stage2-etcd-rootfs-sys-fs-cgroup-perf_event.mount
│     ├─etcd.service
│     │ └─1436 /etcd
│     ├─opt-stage2-etcd-rootfs-proc-sys-kernel-random-boot_id.mount
│     ├─opt-stage2-etcd-rootfs-sys-fs-cgroup-cpu\x2ccpuacct.mount
│     ├─opt-stage2-etcd-rootfs-sys-fs-cgroup-devices.mount
│     ├─opt-stage2-etcd-rootfs-sys-fs-cgroup-freezer.mount
│     ├─shutdown.service
│     ├─-.mount
│     ├─opt-stage2-etcd-rootfs-data\x2ddir.mount
│     ├─system-prepare\x2dapp.slice
│     ├─tmp.mount
│     ├─opt-stage2-etcd-rootfs-sys-fs-cgroup-cpuset.mount
│     ├─opt-stage2-etcd-rootfs-proc.mount
│     ├─systemd-journald.service
│     │ └─1428 /usr/lib/systemd/systemd-journald
│     ├─opt-stage2-etcd-rootfs.mount
│     ├─opt-stage2-etcd-rootfs-dev-random.mount
│     ├─opt-stage2-etcd-rootfs-dev-pts.mount
│     ├─opt-stage2-etcd-rootfs-sys-fs-cgroup.mount
│     ├─run-systemd-nspawn-incoming.mount
│     ├─opt-stage2-etcd-rootfs-sys-fs-cgroup-systemd-machine.slice-etcd.service.mount
│     ├─opt-stage2-etcd-rootfs-sys-fs-cgroup-memory-machine.slice-etcd.service-system.slice-etcd.service-cgroup.procs.mount
│     ├─opt-stage2-etcd-rootfs-sys-fs-cgroup-blkio.mount
│     ├─opt-stage2-etcd-rootfs-sys-fs-cgroup-net_cls\x2cnet_prio.mount
│     ├─opt-stage2-etcd-rootfs-dev-net-tun.mount
│     ├─opt-stage2-etcd-rootfs-sys-fs-cgroup-memory-machine.slice-etcd.service-system.slice-etcd.service-memory.limit_in_bytes.mount
│     ├─opt-stage2-etcd-rootfs-dev-tty.mount
│     ├─opt-stage2-etcd-rootfs-sys-fs-cgroup-pids.mount
│     ├─reaper-etcd.service
│     ├─opt-stage2-etcd-rootfs-sys-fs-selinux.mount
│     ├─opt-stage2-etcd-rootfs-sys-fs-cgroup-memory.mount
│     ├─opt-stage2-etcd-rootfs-sys-fs-cgroup-cpu\x2ccpuacct-machine.slice-etcd.service-system.slice-etcd.service-cpu.cfs_quota_us.mount
│     ├─opt-stage2-etcd-rootfs-dev-urandom.mount
│     ├─opt-stage2-etcd-rootfs-dev-zero.mount
│     ├─opt-stage2-etcd-rootfs-dev-null.mount
│     ├─opt-stage2-etcd-rootfs-sys-fs-cgroup-systemd.mount
│     ├─opt-stage2-etcd-rootfs-dev-console.mount
│     ├─opt-stage2-etcd-rootfs-dev-full.mount
│     ├─opt-stage2-etcd-rootfs-sys-fs-cgroup-cpu\x2ccpuacct-machine.slice-etcd.service-system.slice-etcd.service-cgroup.procs.mount
│     ├─opt-stage2-etcd-rootfs-proc-sys.mount
│     └─opt-stage2-etcd-rootfs-sys-fs-cgroup-hugetlb.mount
```

[acbuild]: https://github.com/containers/build
[aci-socketActivated]: https://github.com/appc/spec/blob/master/spec/aci.md#image-manifest-schema
[go-systemd]: https://github.com/coreos/go-systemd
[machined]: https://wiki.freedesktop.org/www/Software/systemd/machined/
[systemd]: https://www.freedesktop.org/wiki/Software/systemd/
[systemd.exec]: https://www.freedesktop.org/software/systemd/man/systemd.exec.html
[systemd.resource-control]: https://www.freedesktop.org/software/systemd/man/systemd.resource-control.html
[systemd-cpuquota]: https://www.freedesktop.org/software/systemd/man/systemd.resource-control.html#CPUQuota=
[systemd-cpuaffinity]: https://www.freedesktop.org/software/systemd/man/systemd.exec.html#CPUAffinity=
[systemd-isolators]: https://github.com/appc/spec/blob/master/spec/ace.md#isolators
[systemd-killmode-mixed]: https://www.freedesktop.org/software/systemd/man/systemd.kill.html#KillMode=
[systemd-machined]: https://www.freedesktop.org/software/systemd/man/systemd-machined.service.html
[systemd-run]: https://www.freedesktop.org/software/systemd/man/systemd-run.html
[systemd-socket-activated]: https://www.freedesktop.org/software/systemd/man/sd_listen_fds.html
[systemd-socket-proxyd]: https://www.freedesktop.org/software/systemd/man/systemd-socket-proxyd.html
[systemd-unit]: https://www.freedesktop.org/software/systemd/man/systemd.unit.html
[sd_notify]: https://www.freedesktop.org/software/systemd/man/sd_notify.html
[sdnotify-go]: https://github.com/coreos/go-systemd/blob/master/daemon/sdnotify.go