
<!--[metadata]>
+++
title = "daemon"
description = "The daemon command description and usage"
keywords = ["container, daemon, runtime"]
[menu.main]
parent = "smn_cli"
weight = -1
+++
<![end-metadata]-->

# daemon

    Usage: docker daemon [OPTIONS]

    A self-sufficient runtime for linux containers.

    Options:
      --api-cors-header=""                   Set CORS headers in the remote API
      --authz-plugin=[]                      Set authorization plugins to load
      -b, --bridge=""                        Attach containers to a network bridge
      --bip=""                               Specify network bridge IP
      --cgroup-parent=                       Set parent cgroup for all containers
      -D, --debug                            Enable debug mode
      --default-gateway=""                   Container default gateway IPv4 address
      --default-gateway-v6=""                Container default gateway IPv6 address
      --cluster-store=""                     URL of the distributed storage backend
      --cluster-advertise=""                 Address of the daemon instance on the cluster
      --cluster-store-opt=map[]              Set cluster options
      --dns=[]                               DNS server to use
      --dns-opt=[]                           DNS options to use
      --dns-search=[]                        DNS search domains to use
      --default-ulimit=[]                    Set default ulimit settings for containers
      --exec-opt=[]                          Set exec driver options
      --exec-root="/var/run/docker"          Root of the Docker execdriver
      --fixed-cidr=""                        IPv4 subnet for fixed IPs
      --fixed-cidr-v6=""                     IPv6 subnet for fixed IPs
      -G, --group="docker"                   Group for the unix socket
      -g, --graph="/var/lib/docker"          Root of the Docker runtime
      -H, --host=[]                          Daemon socket(s) to connect to
      --help                                 Print usage
      --icc=true                             Enable inter-container communication
      --insecure-registry=[]                 Enable insecure registry communication
      --ip=0.0.0.0                           Default IP when binding container ports
      --ip-forward=true                      Enable net.ipv4.ip_forward
      --ip-masq=true                         Enable IP masquerading
      --iptables=true                        Enable addition of iptables rules
      --ipv6                                 Enable IPv6 networking
      -l, --log-level="info"                 Set the logging level
      --label=[]                             Set key=value labels to the daemon
      --log-driver="json-file"               Default driver for container logs
      --log-opt=[]                           Log driver specific options
      --mtu=0                                Set the containers network MTU
      --disable-legacy-registry              Do not contact legacy registries
      -p, --pidfile="/var/run/docker.pid"    Path to use for daemon PID file
      --registry-mirror=[]                   Preferred Docker registry mirror
      -s, --storage-driver=""                Storage driver to use
      --selinux-enabled                      Enable selinux support
      --storage-opt=[]                       Set storage driver options
      --tls                                  Use TLS; implied by --tlsverify
      --tlscacert="~/.docker/ca.pem"         Trust certs signed only by this CA
      --tlscert="~/.docker/cert.pem"         Path to TLS certificate file
      --tlskey="~/.docker/key.pem"           Path to TLS key file
      --tlsverify                            Use TLS and verify the remote
      --userns-remap="default"               Enable user namespace remapping
      --userland-proxy=true                  Use userland proxy for loopback traffic

Options with [] may be specified multiple times.

The Docker daemon is the persistent process that manages containers. Docker
uses the same binary for both the daemon and client. To run the daemon, type
`docker daemon`.

To run the daemon with debug output, use `docker daemon -D`.

## Daemon socket option

The Docker daemon can listen for [Docker Remote API](../api/docker_remote_api.md)
requests via three different types of socket: `unix`, `tcp`, and `fd`.

By default, a `unix` domain socket (or IPC socket) is created at
`/var/run/docker.sock`, requiring either `root` permission or `docker` group
membership.

If you need to access the Docker daemon remotely, you need to enable the `tcp`
socket. Beware that the default setup provides un-encrypted and
un-authenticated direct access to the Docker daemon; it should be secured
either using the [built in HTTPS encrypted socket](../../articles/https/), or by
putting a secure web proxy in front of it. You can listen on port `2375` on all
network interfaces with `-H tcp://0.0.0.0:2375`, or on a particular network
interface using its IP address: `-H tcp://192.168.59.103:2375`. It is
conventional to use port `2375` for un-encrypted, and port `2376` for encrypted
communication with the daemon.
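
For example, a daemon that accepts only encrypted, authenticated connections on the conventional TLS port might be started as follows (the certificate paths are illustrative):

```bash
docker daemon \
    --tlsverify \
    --tlscacert=/etc/docker/ca.pem \
    --tlscert=/etc/docker/server-cert.pem \
    --tlskey=/etc/docker/server-key.pem \
    -H tcp://0.0.0.0:2376
```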

> **Note:**
> If you're using an HTTPS encrypted socket, keep in mind that only
> TLS1.0 and greater are supported. Protocols SSLv3 and under are no
> longer supported for security reasons.

On Systemd based systems, you can communicate with the daemon via
[Systemd socket activation](http://0pointer.de/blog/projects/socket-activation.html):
use `docker daemon -H fd://`. Using `fd://` will work perfectly for most setups, but
you can also specify individual sockets: `docker daemon -H fd://3`. If the
specified socket-activated files aren't found, then Docker will exit. You can
find examples of using Systemd socket activation with Docker and Systemd in the
[Docker source tree](https://github.com/docker/docker/tree/master/contrib/init/systemd/).
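
A minimal pair of systemd unit files for socket activation might look like the following sketch; the names and values are illustrative, and the Docker source tree linked above contains complete versions:

```ini
# docker.socket - has systemd create the API socket and
# start docker.service on the first connection
[Unit]
Description=Docker Socket for the API

[Socket]
ListenStream=/var/run/docker.sock
SocketMode=0660
SocketUser=root
SocketGroup=docker

[Install]
WantedBy=sockets.target
```

The matching `docker.service` unit would then start the daemon with `-H fd://` so it inherits the socket from systemd.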

You can configure the Docker daemon to listen to multiple sockets at the same
time using multiple `-H` options:

    # listen using the default unix socket, and on 2 specific IP addresses on this host.
    docker daemon -H unix:///var/run/docker.sock -H tcp://192.168.59.106 -H tcp://10.10.10.2

The Docker client will honor the `DOCKER_HOST` environment variable to set the
`-H` flag for the client.

    $ docker -H tcp://0.0.0.0:2375 ps
    # or
    $ export DOCKER_HOST="tcp://0.0.0.0:2375"
    $ docker ps
    # both are equal

Setting the `DOCKER_TLS_VERIFY` environment variable to any value other than
the empty string is equivalent to setting the `--tlsverify` flag. The following
are equivalent:

    $ docker --tlsverify ps
    # or
    $ export DOCKER_TLS_VERIFY=1
    $ docker ps

The Docker client will honor the `HTTP_PROXY`, `HTTPS_PROXY`, and `NO_PROXY`
environment variables (or the lowercase versions thereof). `HTTPS_PROXY` takes
precedence over `HTTP_PROXY`.
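
For example, to route client requests through a proxy for all hosts except an internal registry (the host names are illustrative):

```bash
export HTTPS_PROXY=https://proxy.example.com:3129
export NO_PROXY=myregistry:5000
docker pull ubuntu
```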

### Daemon storage-driver option

The Docker daemon has support for several different image layer storage
drivers: `aufs`, `devicemapper`, `btrfs`, `zfs` and `overlay`.

The `aufs` driver is the oldest, but is based on a Linux kernel patch-set that
is unlikely to be merged into the main kernel. It is also known to cause
some serious kernel crashes. However, `aufs` is also the only storage driver
that allows containers to share executable and shared library memory, so it is a
useful choice when running thousands of containers with the same program or
libraries.

The `devicemapper` driver uses thin provisioning and Copy on Write (CoW)
snapshots. For each devicemapper graph location – typically
`/var/lib/docker/devicemapper` – a thin pool is created based on two block
devices, one for data and one for metadata. By default, these block devices
are created automatically by using loopback mounts of automatically created
sparse files. Refer to [Storage driver options](#storage-driver-options) below
for how to customize this setup. The article
[Resizing Docker containers with the Device Mapper plugin](http://jpetazzo.github.io/2014/01/29/docker-device-mapper-resize/)
explains how to tune your existing setup without the use of options.

The `btrfs` driver is very fast for `docker build` - but like `devicemapper`
does not share executable memory between devices. Use
`docker daemon -s btrfs -g /mnt/btrfs_partition`.

The `zfs` driver is probably not as fast as `btrfs` but has a longer track record
of stability. Thanks to `Single Copy ARC`, shared blocks between clones are
cached only once. Use `docker daemon -s zfs`. To select a different zfs filesystem
set the `zfs.fsname` option as described in [Storage driver options](#storage-driver-options).

The `overlay` driver is a very fast union filesystem. It has been merged into the main
Linux kernel as of [3.18.0](https://lkml.org/lkml/2014/10/26/137). Call
`docker daemon -s overlay` to use it.

> **Note:**
> As promising as `overlay` is, the feature is still quite young and should not
> be used in production. Most notably, using `overlay` can cause excessive
> inode consumption (especially as the number of images grows), as well as
> being incompatible with the use of RPMs.

> **Note:**
> It is currently unsupported on `btrfs` or any Copy on Write filesystem
> and should only be used over `ext4` partitions.

### Storage driver options

A particular storage driver can be configured with options specified with the
`--storage-opt` flag. Options for `devicemapper` are prefixed with `dm` and
options for `zfs` start with `zfs`.

*  `dm.thinpooldev`

    Specifies a custom block storage device to use for the thin pool.

    If using a block device for device mapper storage, it is best to use `lvm`
    to create and manage the thin-pool volume. This volume is then handed to Docker
    to exclusively create snapshot volumes needed for images and containers.

    Managing the thin-pool outside of Docker makes for the most feature-rich
    method of having Docker utilize device mapper thin provisioning as the
    backing storage for Docker's containers. The highlights of the lvm-based
    thin-pool management feature include: automatic or interactive thin-pool
    resize support, dynamically changing thin-pool features, and automatic thinp
    metadata checking when lvm activates the thin-pool.

    As a fallback if no thin pool is provided, loopback files will be
    created. Loopback is very slow, but can be used without any
    pre-configuration of storage. It is strongly recommended that you do
    not use loopback in production. Ensure your Docker daemon has a
    `--storage-opt dm.thinpooldev` argument provided.

    Example use:

        $ docker daemon \
              --storage-opt dm.thinpooldev=/dev/mapper/thin-pool

*  `dm.basesize`

    Specifies the size to use when creating the base device, which limits the
    size of images and containers. The default value is 100G. Note, thin devices
    are inherently "sparse", so a 100G device which is mostly empty doesn't use
    100 GB of space on the pool. However, the larger the device is, the more
    space the filesystem will use even when empty.

    This value affects the system-wide "base" empty filesystem
    that may already be initialized and inherited by pulled images. Typically,
    a change to this value requires additional steps to take effect:

        $ sudo service docker stop
        $ sudo rm -rf /var/lib/docker
        $ sudo service docker start

    Example use:

        $ docker daemon --storage-opt dm.basesize=20G

*  `dm.loopdatasize`

    > **Note**:
    > This option configures devicemapper loopback, which should not
    > be used in production.

    Specifies the size to use when creating the loopback file for the
    "data" device which is used for the thin pool. The default size is
    100G. The file is sparse, so it will not initially take up this
    much space.

    Example use:

        $ docker daemon --storage-opt dm.loopdatasize=200G

*  `dm.loopmetadatasize`

    > **Note**:
    > This option configures devicemapper loopback, which should not
    > be used in production.

    Specifies the size to use when creating the loopback file for the
    "metadata" device which is used for the thin pool. The default size
    is 2G. The file is sparse, so it will not initially take up
    this much space.

    Example use:

        $ docker daemon --storage-opt dm.loopmetadatasize=4G

*  `dm.fs`

    Specifies the filesystem type to use for the base device. The supported
    options are "ext4" and "xfs". The default is "xfs".

    Example use:

        $ docker daemon --storage-opt dm.fs=ext4

*  `dm.mkfsarg`

    Specifies extra mkfs arguments to be used when creating the base device.

    Example use:

        $ docker daemon --storage-opt "dm.mkfsarg=-O ^has_journal"

*  `dm.mountopt`

    Specifies extra mount options used when mounting the thin devices.

    Example use:

        $ docker daemon --storage-opt dm.mountopt=nodiscard

*  `dm.datadev`

    (Deprecated, use `dm.thinpooldev`)

    Specifies a custom block device to use for data for the thin pool.

    If using a block device for device mapper storage, ideally both datadev and
    metadatadev should be specified to completely avoid using the loopback
    device.

    Example use:

        $ docker daemon \
              --storage-opt dm.datadev=/dev/sdb1 \
              --storage-opt dm.metadatadev=/dev/sdc1

*  `dm.metadatadev`

    (Deprecated, use `dm.thinpooldev`)

    Specifies a custom block device to use for metadata for the thin pool.

    For best performance the metadata should be on a different spindle than the
    data, or even better on an SSD.

    If setting up a new metadata pool, it is required to be valid. This can be
    achieved by zeroing the first 4k to indicate empty metadata, like this:

        $ dd if=/dev/zero of=$metadata_dev bs=4096 count=1

    Example use:

        $ docker daemon \
              --storage-opt dm.datadev=/dev/sdb1 \
              --storage-opt dm.metadatadev=/dev/sdc1

*  `dm.blocksize`

    Specifies a custom blocksize to use for the thin pool. The default
    blocksize is 64K.

    Example use:

        $ docker daemon --storage-opt dm.blocksize=512K

*  `dm.blkdiscard`

    Enables or disables the use of blkdiscard when removing devicemapper
    devices. This is enabled by default (only) if using loopback devices and is
    required to resparsify the loopback file on image/container removal.

    Disabling this on loopback can lead to *much* faster container removal
    times, but the space used in the `/var/lib/docker` directory will not be
    returned to the system for other use when containers are removed.

    Example use:

        $ docker daemon --storage-opt dm.blkdiscard=false

*  `dm.override_udev_sync_check`

    Overrides the `udev` synchronization checks between `devicemapper` and `udev`.
    `udev` is the device manager for the Linux kernel.

    To view the `udev` sync support of a Docker daemon that is using the
    `devicemapper` driver, run:

        $ docker info
        [...]
        Udev Sync Supported: true
        [...]

    When `udev` sync support is `true`, then `devicemapper` and `udev` can
    coordinate the activation and deactivation of devices for containers.

    When `udev` sync support is `false`, a race condition occurs between
    `devicemapper` and `udev` during create and cleanup. The race condition
    results in errors and failures. (For information on these failures, see
    [docker#4036](https://github.com/docker/docker/issues/4036).)

    To allow the `docker` daemon to start, regardless of `udev` sync not being
    supported, set `dm.override_udev_sync_check` to true:

        $ docker daemon --storage-opt dm.override_udev_sync_check=true

    When this value is `true`, `devicemapper` continues and simply warns
    you that errors are happening.

    > **Note:**
    > The ideal is to pursue a `docker` daemon and environment that does
    > support synchronizing with `udev`. For further discussion on this
    > topic, see [docker#4036](https://github.com/docker/docker/issues/4036).
    > Otherwise, set this flag for migrating existing Docker daemons to
    > a daemon with a supported environment.

*  `dm.use_deferred_removal`

    Enables use of deferred device removal if `libdm` and the kernel driver
    support the mechanism.

    Deferred device removal means that if a device is busy when it is being
    removed or deactivated, its removal is deferred; the device is removed
    automatically when the last user of the device exits.

    For example, when a container exits, its associated thin device is removed.
    If that device has leaked into some other mount namespace and can't be
    removed, the container exit still succeeds and this option causes the
    system to schedule the device for deferred removal. It does not wait in a
    loop trying to remove a busy device.

    Example use:

        $ docker daemon --storage-opt dm.use_deferred_removal=true

*  `dm.use_deferred_deletion`

    Enables use of deferred device deletion for thin pool devices. By default,
    thin pool device deletion is synchronous. Before a container is deleted,
    the Docker daemon removes any associated devices. If the storage driver
    cannot remove a device, the container deletion fails and the daemon
    returns an error:

        Error deleting container: Error response from daemon: Cannot destroy container

    To avoid this failure, enable both deferred device deletion and deferred
    device removal on the daemon.

        $ docker daemon \
              --storage-opt dm.use_deferred_deletion=true \
              --storage-opt dm.use_deferred_removal=true

    With these two options enabled, if a device is busy when the driver is
    deleting a container, the driver marks the device as deleted. Later, when
    the device isn't in use, the driver deletes it.

    In general it should be safe to enable this option by default. It helps
    when mount points are unintentionally leaked across multiple mount
    namespaces.

Currently supported options of `zfs`:

* `zfs.fsname`

    Sets the zfs filesystem under which Docker will create its own datasets.
    By default Docker will pick up the zfs filesystem where the Docker graph
    (`/var/lib/docker`) is located.

    Example use:

        $ docker daemon -s zfs --storage-opt zfs.fsname=zroot/docker

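The lvm-managed thin pool recommended for `dm.thinpooldev` above can be created with a sequence along the following lines; the device name, volume sizes, and chunk size are illustrative, not a tuned production recipe:

```bash
# Create a volume group on a spare block device
pvcreate /dev/sdb
vgcreate docker /dev/sdb

# Carve out data and metadata volumes, then convert them into a thin pool
lvcreate --wipesignatures y -n thinpool docker -l 95%VG
lvcreate --wipesignatures y -n thinpoolmeta docker -l 1%VG
lvconvert -y --zero n -c 512K \
    --thinpool docker/thinpool --poolmetadata docker/thinpoolmeta

# Point the daemon at the resulting device
docker daemon --storage-opt dm.thinpooldev=/dev/mapper/docker-thinpool
```
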
## Docker execdriver option

The Docker daemon uses a specifically built `libcontainer` execution driver as
its interface to the Linux kernel `namespaces`, `cgroups`, and `SELinux`.

## Options for the native execdriver

You can configure the `native` (libcontainer) execdriver using options specified
with the `--exec-opt` flag. All the flag's options have the `native` prefix. A
single `native.cgroupdriver` option is available.

The `native.cgroupdriver` option specifies the management of the container's
cgroups. You can specify `cgroupfs` or `systemd`. If you specify `systemd` and
it is not available, the system uses `cgroupfs`. If you omit the
`native.cgroupdriver` option, `cgroupfs` is used.
This example sets the `cgroupdriver` to `systemd`:

    $ sudo docker daemon --exec-opt native.cgroupdriver=systemd

Setting this option applies to all containers the daemon launches.

Windows containers also make use of `--exec-opt`, for a special purpose: it
lets you specify the default container isolation technology, for example:

    $ docker daemon --exec-opt isolation=hyperv

This makes `hyperv` the default isolation technology on Windows. If no isolation
value is specified on daemon start, Windows isolation defaults to `process`.

## Daemon DNS options

To set the DNS server for all Docker containers, use
`docker daemon --dns 8.8.8.8`.

To set the DNS search domain for all Docker containers, use
`docker daemon --dns-search example.com`.
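
Both flags accept multiple values and can be combined. For example, a daemon with two DNS servers and a search domain (the values are illustrative) could be started with:

```bash
docker daemon \
    --dns 8.8.8.8 \
    --dns 8.8.4.4 \
    --dns-search example.com
```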

## Insecure registries

Docker considers a private registry either secure or insecure. In the rest of
this section, *registry* is used for *private registry*, and `myregistry:5000`
is a placeholder example for a private registry.

A secure registry uses TLS and a copy of its CA certificate is placed on the
Docker host at `/etc/docker/certs.d/myregistry:5000/ca.crt`. An insecure
registry is either not using TLS (i.e., listening on plain text HTTP), or is
using TLS with a CA certificate not known by the Docker daemon. The latter can
happen when the certificate was not found under
`/etc/docker/certs.d/myregistry:5000/`, or if the certificate verification
failed (i.e., wrong CA).

By default, Docker assumes all registries are secure, except for local
registries (see local registries below). Communicating with an insecure
registry is not possible if Docker assumes that registry is secure. In order
to communicate with an insecure registry, the Docker daemon requires
`--insecure-registry` in one of the following two forms:

* `--insecure-registry myregistry:5000` tells the Docker daemon that
  myregistry:5000 should be considered insecure.
* `--insecure-registry 10.1.0.0/16` tells the Docker daemon that all registries
  whose domain resolves to an IP address in the subnet described by the
  CIDR syntax should be considered insecure.

The flag can be used multiple times to allow multiple registries to be marked
as insecure.

If an insecure registry is not marked as insecure, `docker pull`,
`docker push`, and `docker search` will result in an error message prompting
the user to either secure the registry or pass the `--insecure-registry` flag
to the Docker daemon as described above.

Local registries, whose IP address falls in the 127.0.0.0/8 range, are
automatically marked as insecure as of Docker 1.3.2. It is not recommended to
rely on this, as it may change in the future.

Enabling `--insecure-registry`, i.e., allowing un-encrypted and/or untrusted
communication, can be useful when running a local registry. However,
because its use creates security vulnerabilities it should ONLY be enabled for
testing purposes. For increased security, users should add their CA to their
system's list of trusted CAs instead of enabling `--insecure-registry`.
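
For example, a daemon that treats one named registry and an internal subnet as insecure (the values are illustrative) could be started with:

```bash
docker daemon \
    --insecure-registry myregistry:5000 \
    --insecure-registry 10.1.0.0/16
```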

## Legacy Registries

Enabling `--disable-legacy-registry` forces the Docker daemon to only interact
with registries which support the V2 protocol. Specifically, the daemon will
not attempt `push`, `pull` and `login` to v1 registries. The exception to this
is `search`, which can still be performed on v1 registries.

## Running a Docker daemon behind an HTTPS_PROXY

When running inside a LAN that uses an `HTTPS` proxy, the Docker Hub
certificates will be replaced by the proxy's certificates. These certificates
need to be added to your Docker host's configuration:

1. Install the `ca-certificates` package for your distribution.
2. Ask your network admin for the proxy's CA certificate and append it to
   `/etc/pki/tls/certs/ca-bundle.crt`.
3. Then start your Docker daemon with `HTTPS_PROXY=http://username:password@proxy:port/ docker daemon`.
   The `username:` and `password@` are optional - and are only needed if your
   proxy is set up to require authentication.

This will only add the proxy and authentication to the Docker daemon's requests -
your `docker build`s and running containers will need extra configuration to
use the proxy.

## Default Ulimits

`--default-ulimit` allows you to set the default `ulimit` options to use for
all containers. It takes the same options as `--ulimit` for `docker run`. If
these defaults are not set, `ulimit` settings are inherited from the Docker
daemon when not set on `docker run`. Any `--ulimit` options passed to
`docker run` will overwrite these defaults.
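
For example, to raise the default open-file limit and cap the default number of processes for every container (the soft:hard values are illustrative):

```bash
docker daemon \
    --default-ulimit nofile=20480:40960 \
    --default-ulimit nproc=1024:2048
```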

Be careful setting `nproc` with the `ulimit` flag, as `nproc` is designed by Linux to
set the maximum number of processes available to a user, not to a container. For details
please check the [run](run.md) reference.

## Nodes discovery

The `--cluster-advertise` option specifies the `host:port` or `interface:port`
combination that this particular daemon instance should use when advertising
itself to the cluster. The daemon is reached by remote hosts through this value.
If you specify an interface, make sure it includes the IP address of the actual
Docker host. For Engine installations created through `docker-machine`, the
interface is typically `eth1`.

The daemon uses [libkv](https://github.com/docker/libkv/) to advertise
the node within the cluster. Some key-value backends support mutual
TLS. The client TLS settings used by the daemon can be configured
using the `--cluster-store-opt` flag, specifying the paths to PEM encoded
files. For example:

```bash
docker daemon \
    --cluster-advertise 192.168.1.2:2376 \
    --cluster-store etcd://192.168.1.2:2379 \
    --cluster-store-opt kv.cacertfile=/path/to/ca.pem \
    --cluster-store-opt kv.certfile=/path/to/cert.pem \
    --cluster-store-opt kv.keyfile=/path/to/key.pem
```

The currently supported cluster store options are:

*  `discovery.heartbeat`

    Specifies the heartbeat timer in seconds which is used by the daemon as a
    keepalive mechanism to make sure the discovery module treats the node as alive
    in the cluster. If not configured, the default value is 20 seconds.

*  `discovery.ttl`

    Specifies the ttl (time-to-live) in seconds which is used by the discovery
    module to time out a node if a valid heartbeat is not received within the
    configured ttl value. If not configured, the default value is 60 seconds.

*  `kv.cacertfile`

    Specifies the path to a local file with PEM encoded CA certificates to trust.

*  `kv.certfile`

    Specifies the path to a local file with a PEM encoded certificate. This
    certificate is used as the client cert for communication with the
    Key/Value store.

*  `kv.keyfile`

    Specifies the path to a local file with a PEM encoded private key. This
    private key is used as the client key for communication with the
    Key/Value store.

*  `kv.path`

    Specifies the path in the Key/Value store. If not configured, the default value is `docker/nodes`.
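
Putting the discovery options together, a daemon with a faster heartbeat and a custom key path (the values are illustrative) might be started with:

```bash
docker daemon \
    --cluster-advertise eth0:2376 \
    --cluster-store etcd://192.168.1.2:2379 \
    --cluster-store-opt discovery.heartbeat=10 \
    --cluster-store-opt discovery.ttl=30 \
    --cluster-store-opt kv.path=custom/nodes
```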

## Access authorization

Docker's access authorization can be extended by authorization plugins that your
organization can purchase or build themselves. You can install one or more
authorization plugins when you start the Docker `daemon` using the
`--authz-plugin=PLUGIN_ID` option.

```bash
docker daemon --authz-plugin=plugin1 --authz-plugin=plugin2 ...
```

The `PLUGIN_ID` value is either the plugin's name or a path to its specification
file. The plugin's implementation determines whether you can specify a name or
path. Consult with your Docker administrator to get information about the
plugins available to you.

Once a plugin is installed, requests made to the `daemon` through the command
line or Docker's remote API are allowed or denied by the plugin. If you have
multiple plugins installed, at least one must allow the request for it to
complete.

For information about how to create an authorization plugin, see the [authorization
plugin](../../extend/authorization.md) section in the Docker extend section of this documentation.
   634  
   635  
## Daemon user namespace options

The Linux kernel [user namespace support](http://man7.org/linux/man-pages/man7/user_namespaces.7.html) provides additional security by enabling
a process, and therefore a container, to have a unique range of user and
group IDs which are outside the traditional user and group range utilized by
the host system. Potentially the most important security improvement is that,
by default, container processes running as the `root` user will have expected
administrative privileges (with some restrictions) inside the container but will
effectively be mapped to an unprivileged `uid` on the host.

When user namespace support is enabled, Docker creates a single daemon-wide mapping
for all containers running on the same engine instance. The mappings will
utilize the existing subordinate user and group ID feature available on all modern
Linux distributions.
The [`/etc/subuid`](http://man7.org/linux/man-pages/man5/subuid.5.html) and
[`/etc/subgid`](http://man7.org/linux/man-pages/man5/subgid.5.html) files will be
read for the user, and optional group, specified to the `--userns-remap`
parameter. If you do not wish to specify your own user and/or group, you can
provide `default` as the value to this flag, and a user will be created on your behalf
and provided subordinate uid and gid ranges. This default user will be named
`dockremap`, and entries will be created for it in `/etc/passwd` and
`/etc/group` using your distro's standard user and group creation tools.

> **Note**: The single mapping per-daemon restriction is in place for now
> because Docker shares image layers from its local cache across all
> containers running on the engine instance. Since file ownership must be
> the same for all containers sharing the same layer content, the decision
> was made to map the file ownership on `docker pull` to the daemon's user and
> group mappings so that there is no delay for running containers once the
> content is downloaded. This design preserves the same performance for `docker
> pull`, `docker push`, and container startup as users expect with
> user namespaces disabled.

### Starting the daemon with user namespaces enabled

To enable user namespace support, start the daemon with the
`--userns-remap` flag, which accepts values in the following formats:

 - uid
 - uid:gid
 - username
 - username:groupname

If numeric IDs are provided, translation back to valid user or group names
will occur so that the subordinate uid and gid information can be read, given
these resources are name-based, not id-based. If the numeric ID information
provided does not exist as entries in `/etc/passwd` or `/etc/group`, daemon
startup will fail with an error message.

*Example: starting with default Docker user management:*

```bash
$ docker daemon --userns-remap=default
```

When `default` is provided, Docker will create (or find the existing) user and group
named `dockremap`. If the user is created, and the Linux distribution has
appropriate support, the `/etc/subuid` and `/etc/subgid` files will be populated
with a contiguous 65536 length range of subordinate user and group IDs, starting
at an offset based on prior entries in those files. For example, Ubuntu will
create the following range, based on an existing user named `user1` already owning
the first 65536 range:

```
$ cat /etc/subuid
user1:100000:65536
dockremap:165536:65536
```

> **Note:** On a fresh Fedora install, we had to `touch` the
> `/etc/subuid` and `/etc/subgid` files to have ranges assigned when users
> were created. Once these files existed, range assignment on user creation
> worked properly.

If you have a preferred/self-managed user with subordinate ID mappings already
configured, you can provide that username or uid to the `--userns-remap` flag.
If you have a group that doesn't match the username, you may provide the `gid`
or group name as well; otherwise the username will be used as the group name
when querying the system for the subordinate group ID range.

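As a sketch, on distributions that ship the shadow-utils tools, subordinate ranges for such a self-managed user can be assigned with `usermod` before starting the daemon. The username and ID range below are illustrative, and the exact tooling may vary by distribution:

```bash
# assign subordinate uid and gid ranges to an existing user (example values)
usermod --add-subuids 200000-265535 testuser
usermod --add-subgids 200000-265535 testuser

# start the daemon remapping to that user
docker daemon --userns-remap=testuser
```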
### Detailed information on `subuid`/`subgid` ranges

Given potential advanced use of the subordinate ID ranges by power users, the
following paragraphs define how the Docker daemon currently uses the range entries
found within the subordinate range files.

The simplest case is that only one contiguous range is defined for the
provided user or group. In this case, Docker will use that entire contiguous
range for the mapping of host uids and gids to the container process. This
means that the first ID in the range will be the remapped root user, and the
IDs above that initial ID will map to container IDs 1 through the end of the range.

From the example `/etc/subuid` content shown above, the remapped root
user would be uid 165536.

If the system administrator has set up multiple ranges for a single user or
group, the Docker daemon will read all the available ranges and use the
following algorithm to create the mapping ranges:

1. The range segments found for the particular user will be sorted by *start ID* ascending.
2. Map segments will be created from each range, in increasing order, with a length matching the length of each segment. Therefore the start of the range segment with the lowest numeric starting value will be the remapped root, and container IDs continue upward through each segment's length. As an example, if the lowest segment starts at ID 1000 and has a length of 100, a map of host ID 1000 -> container ID 0 (the remapped root) up through 1099 -> 99 will be created from this segment. If the next segment starts at ID 10000, the next map will start with 10000 -> 100 and continue for the length of this second segment. This will continue until no more segments are found in the subordinate files for this user.
3. If more than five range segments exist for a single user, only the first five will be utilized, matching the kernel's limitation of only five entries in `/proc/self/uid_map` and `/proc/self/gid_map`.

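The algorithm above can be sketched in a few lines of Python. The function name and the `(start, length)` segment representation are illustrative, not Docker's actual code:

```python
# Sketch of the multi-range mapping algorithm described above.
# Segments are (start, length) entries parsed from /etc/subuid or /etc/subgid.

KERNEL_MAP_LIMIT = 5  # /proc/self/uid_map accepts at most five entries


def build_id_maps(segments):
    """Return (container_id, host_id, length) map entries for the given segments."""
    maps = []
    container_id = 0
    # 1. Sort segments by ascending start ID; 3. keep at most five of them.
    for start, length in sorted(segments)[:KERNEL_MAP_LIMIT]:
        # 2. The lowest segment begins at container ID 0 (the remapped root);
        #    each later segment continues where the previous one ended.
        maps.append((container_id, start, length))
        container_id += length
    return maps


# Example from the text: segments at 1000 (length 100) and 10000 (length 50).
print(build_id_maps([(10000, 50), (1000, 100)]))
# -> [(0, 1000, 100), (100, 10000, 50)]
```

With the single-range `/etc/subuid` example shown earlier, `build_id_maps([(165536, 65536)])` yields one entry mapping container ID 0 to host uid 165536, matching the remapped root described above.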
### User namespace known restrictions

The following standard Docker features are currently incompatible when
running a Docker daemon with user namespaces enabled:

 - sharing PID or NET namespaces with the host (`--pid=host` or `--net=host`)
 - sharing a network namespace with an existing container (`--net=container:*other*`)
 - sharing an IPC namespace with an existing container (`--ipc=container:*other*`)
 - a read-only container filesystem (`--read-only`); this is a Linux kernel restriction against remounting a currently mounted filesystem with modified flags from inside a user namespace
 - external (volume or graph) drivers which are unaware of or incapable of using daemon user mappings
 - using the `--privileged` mode flag on `docker run`

In general, user namespaces are an advanced feature and will require
coordination with other capabilities. For example, if volumes are mounted from
the host, file ownership will have to be pre-arranged if the user or
administrator wishes the containers to have expected access to the volume
contents.

Finally, while the `root` user inside a user namespaced container process has
many of the expected admin privileges that go along with being the superuser, the
Linux kernel has restrictions based on internal knowledge that this is a user
namespaced process. The most notable restriction that we are aware of at this
time is the inability to use `mknod`. Permission will be denied for device
creation even as container `root` inside a user namespace.

## Miscellaneous options

IP masquerading uses address translation to allow containers without a public
IP to talk to other machines on the Internet. This may interfere with some
network topologies and can be disabled with `--ip-masq=false`.

Docker supports symlinks for the Docker data directory (`/var/lib/docker`) and
for `/var/lib/docker/tmp`. The `DOCKER_TMPDIR` and the data directory can be
set like this:

    DOCKER_TMPDIR=/mnt/disk2/tmp /usr/local/bin/docker daemon -D -g /var/lib/docker -H unix:// > /var/lib/docker-machine/docker.log 2>&1
    # or
    export DOCKER_TMPDIR=/mnt/disk2/tmp
    /usr/local/bin/docker daemon -D -g /var/lib/docker -H unix:// > /var/lib/docker-machine/docker.log 2>&1

# Default cgroup parent

The `--cgroup-parent` option allows you to set the default cgroup parent
to use for containers. If this option is not set, it defaults to `/docker` for
the fs cgroup driver and `system.slice` for the systemd cgroup driver.

If the cgroup has a leading forward slash (`/`), the cgroup is created
under the root cgroup, otherwise the cgroup is created under the daemon
cgroup.

Assuming the daemon is running in cgroup `daemoncgroup`,
`--cgroup-parent=/foobar` creates a cgroup in
`/sys/fs/cgroup/memory/foobar`, whereas using `--cgroup-parent=foobar`
creates the cgroup in `/sys/fs/cgroup/memory/daemoncgroup/foobar`.

This setting can also be set per container, using the `--cgroup-parent`
option on `docker create` and `docker run`, and takes precedence over
the `--cgroup-parent` option on the daemon.
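For example, a daemon-wide parent can be combined with a per-container override; the cgroup paths below are illustrative:

```bash
# daemon-wide default parent cgroup (hypothetical path)
docker daemon --cgroup-parent=/docker-parent

# per-container override; this container's cgroup is created under
# /sys/fs/cgroup/memory/mycontainers/<container-id> instead
docker run --cgroup-parent=/mycontainers busybox true
```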