
---
layout: docs
page_title: 'Drivers: nomad-driver-containerd'
description: >-
  The containerd driver is used
  for launching containers using containerd.
---

# containerd Task Driver

Name: `containerd-driver`

Homepage: https://github.com/Roblox/nomad-driver-containerd

containerd ([`containerd.io`](https://containerd.io)) is a lightweight container
daemon for running and managing the container lifecycle. The Docker daemon also
uses containerd:

```
dockerd (docker daemon) --> containerd --> containerd-shim --> runc
```

`nomad-driver-containerd` enables Nomad clients to launch containers directly
using containerd, without Docker. The Docker daemon is therefore not required on
the host system.

See the [project's homepage](https://github.com/Roblox/nomad-driver-containerd)
for more details.
## Client Requirements

The containerd task driver is not built into Nomad. It must be
[downloaded][releases] onto the client host in the configured plugin
directory.

- Linux (Ubuntu >=16.04) with [`containerd`](https://containerd.io/downloads/) (>=1.3) installed.

- [`containerd-driver`][releases] binary in Nomad's [plugin_dir][].

## Capabilities

The `containerd-driver` implements the following [capabilities](/docs/concepts/plugins/task-drivers#capabilities-capabilities-error):

| Feature              | Implementation          |
| -------------------- | ----------------------- |
| send signals         | true                    |
| exec                 | true                    |
| filesystem isolation | image                   |
| network isolation    | host, group, task, none |
| volume mounting      | true                    |

To send signals, use the `nomad alloc signal` command.<br/>
To exec into the container, use the `nomad alloc exec` command.

## Task Configuration

Since Docker also relies on containerd for managing the container lifecycle, the
example job created by [`nomad init -short`][nomad-init] can easily be adapted
to use `containerd-driver` instead:

```hcl
job "redis" {
  datacenters = ["dc1"]

  group "redis-group" {
    task "redis-task" {
      driver = "containerd-driver"

      config {
        image = "docker.io/library/redis:alpine"
      }

      resources {
        cpu    = 500
        memory = 256
      }
    }
  }
}
```

The containerd task driver supports the following parameters:

- `image` - (Required) OCI image (Docker images are also OCI compatible) for your
  container.

```hcl
config {
  image = "docker.io/library/redis:alpine"
}
```

- `image_pull_timeout` - (Optional) A time duration that controls how long
  `containerd-driver` will wait before cancelling an in-progress pull of the
  OCI image as specified in `image`. Defaults to `"5m"`.
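
For example, to allow extra time for pulling a large image (the timeout value shown is illustrative):

```hcl
config {
  image              = "docker.io/library/redis:alpine"
  image_pull_timeout = "10m"
}
```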

- `command` - (Optional) Command to override the command defined in the image.

```hcl
config {
  command = "some-command"
}
```

- `args` - (Optional) Arguments to the command.

```hcl
config {
  args = [
    "arg1",
    "arg2",
  ]
}
```

- `auth` - (Optional) Provide authentication for a private registry (see [Authentication] below).

- `entrypoint` - (Optional) A string list overriding the image's entrypoint.

- `cwd` - (Optional) Specify the current working directory (cwd) for your container process.
  If the directory does not exist, it will be created for you.
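
A sketch combining these options (the entrypoint, arguments, and path are illustrative):

```hcl
config {
  image      = "docker.io/library/redis:alpine"
  entrypoint = ["/bin/sh", "-c"]
  args       = ["echo hello"]
  cwd        = "/home/redis"
}
```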

- `privileged` - (Optional) `true` or `false` (default). Run the container in
  privileged mode. Your container will have all Linux capabilities when running
  in privileged mode.

```hcl
config {
  privileged = true
}
```

- `pids_limit` - (Optional) An integer value that specifies the PID limit for
  the container. Defaults to unlimited.

- `pid_mode` - (Optional) `host` or not set (default). Set to `host` to share
  the PID namespace with the host.

- `host_dns` - (Optional) `true` (default) or `false`. By default, a container
  launched using `containerd-driver` will use the host's `/etc/resolv.conf`. This is
  similar to [Docker's behavior]. If you don't want to use
  host DNS, you can turn it off by setting `host_dns = false`.
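
For example, to cap the number of processes in the container and disable host DNS (the limit shown is illustrative):

```hcl
config {
  pids_limit = 100
  host_dns   = false
}
```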

- `seccomp` - (Optional) Enable the default seccomp profile. List of [allowed syscalls].

- `seccomp_profile` - (Optional) Path to a custom seccomp profile.
  `seccomp` must be set to `true` in order to use `seccomp_profile`.

  The default `docker` seccomp profile found in the [Moby repository]
  can be downloaded and modified (by removing/adding syscalls) to create a custom seccomp profile.
  The custom seccomp profile can then be saved under `/opt/seccomp/seccomp.json` on the Nomad client nodes.

```hcl
config {
  seccomp         = true
  seccomp_profile = "/opt/seccomp/seccomp.json"
}
```

- `shm_size` - (Optional) Size of `/dev/shm`, e.g. `128M` for 128 MB of `/dev/shm`.
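
For example (the size shown is illustrative):

```hcl
config {
  shm_size = "128M"
}
```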

- `sysctl` - (Optional) A key-value map of sysctl configurations to set on the
  containers on start.

```hcl
config {
  sysctl = {
    "net.core.somaxconn"  = "16384"
    "net.ipv4.ip_forward" = "1"
  }
}
```

- `readonly_rootfs` - (Optional) `true` or `false` (default). The container's root
  filesystem will be read-only.

```hcl
config {
  readonly_rootfs = true
}
```

- `host_network` ((#host_network)) - (Optional) `true` or `false` (default).
  Enable host networking. This is equivalent to `--net=host` in Docker.

```hcl
config {
  host_network = true
}
```

- `extra_hosts` - (Optional) A list of hosts, given as `host:IP`, to be added to
  `/etc/hosts`.

- `hostname` - (Optional) The hostname to assign to the container. When
  launching more than one instance of a task (using `count`) with this option set, every
  container the task starts will have the same hostname.
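
A sketch of these options (the hostname and host:IP entries are illustrative):

```hcl
config {
  hostname    = "redis-1"
  extra_hosts = [
    "db.internal:10.0.0.5",
  ]
}
```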

- `cap_add` - (Optional) Add individual capabilities.

```hcl
config {
  cap_add = [
    "CAP_SYS_ADMIN",
    "CAP_CHOWN",
    "CAP_SYS_CHROOT"
  ]
}
```

- `cap_drop` - (Optional) Drop individual capabilities.

```hcl
config {
  cap_drop = [
    "CAP_SYS_ADMIN",
    "CAP_CHOWN",
    "CAP_SYS_CHROOT"
  ]
}
```

- `devices` - (Optional) A list of devices to be exposed to the container.

```hcl
config {
  devices = [
    "/dev/loop0",
    "/dev/loop1"
  ]
}
```

- `mounts` - (Optional) A list of mounts to be mounted in the container. Volume,
  bind, and tmpfs mount types are supported, along with fstab-style
  [`mount options`][].

  - `type` - (Optional) Supported values are `volume`, `bind` or `tmpfs`.
    **Default:** `volume`.

  - `target` - (Required) Target path in the container.

  - `source` - (Optional) Source path on the host.

  - `options` - (Optional) fstab-style [`mount options`][]. **NOTE:** For bind
    mounts, at least `rbind` and `ro` are required.

```hcl
config {
  mounts = [
    {
      type = "bind"
      target = "/tmp/t1"
      source = "/tmp/s1"
      options = ["rbind", "ro"]
    }
  ]
}
```

## Networking

`nomad-driver-containerd` supports **host** and **bridge** networks.

**NOTE:** `host` and `bridge` are mutually exclusive options, and only one of
them should be used at a time.

1. **Host** network can be enabled by setting `host_network` to `true` in the
   task config of the job spec (see [host_network][host-network] under Task
   Configuration).

2. **Bridge** network can be enabled by setting the `network` stanza in the task
   group section of the job spec.

```hcl
network {
  mode = "bridge"
}
```

You need to install CNI plugins on Nomad client nodes under `/opt/cni/bin`
before you can use `bridge` networks.

**Instructions for installing CNI plugins:**

```shell
$ curl -L -o cni-plugins.tgz "https://github.com/containernetworking/plugins/releases/download/v1.0.0/cni-plugins-linux-$([ $(uname -m) = aarch64 ] && echo arm64 || echo amd64)-v1.0.0.tgz"
$ sudo mkdir -p /opt/cni/bin
$ sudo tar -C /opt/cni/bin -xzf cni-plugins.tgz
```

Also, ensure your Linux distribution has been configured to allow container
traffic through the bridge network to be routed via iptables.
These tunables can be set as follows:

```shell
$ echo 1 > /proc/sys/net/bridge/bridge-nf-call-arptables
$ echo 1 > /proc/sys/net/bridge/bridge-nf-call-ip6tables
$ echo 1 > /proc/sys/net/bridge/bridge-nf-call-iptables
```

To preserve these settings on startup of a Nomad client node, add a file
including the following to `/etc/sysctl.d/` or remove the file your Linux
distribution puts in that directory.

```
net.bridge.bridge-nf-call-arptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
```

## Port Forwarding

Nomad supports both `static` and `dynamic` port mapping.

1. **Static ports**

Static port mapping can be added in the `network` stanza.

```hcl
network {
  mode = "bridge"
  port "lb" {
    static = 8889
    to     = 8889
  }
}
```

Here, `host` port `8889` is mapped to `container` port `8889`.<br/>
**NOTE:** Static ports are usually not recommended, except for
`system` or specialized jobs like load balancers.

2. **Dynamic ports**

Dynamic port mapping is also enabled in the `network` stanza.

```hcl
network {
  mode = "bridge"
  port "http" {
    to = 8080
  }
}
```

Here, Nomad will allocate a dynamic port on the `host` and that port
will be mapped to `8080` in the container.

You can read more about configuring networking under the [`network`] stanza documentation.

## Service discovery

Nomad schedules workloads of various types across a cluster of generic hosts.
Because of this, placement is not known in advance and you will need to use
service discovery to connect tasks to other services deployed across your cluster.
Nomad integrates with Consul to provide service discovery and monitoring.

A [`service`] block can be added to your job spec to enable service discovery.

The `service` stanza instructs Nomad to register a service with Consul.
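
A minimal sketch of a `service` block for the Redis task shown earlier (the service name, port label, and check values are illustrative; the `"redis"` port label assumes a matching `port "redis"` entry in the group's `network` stanza):

```hcl
service {
  name = "redis"
  port = "redis"

  check {
    type     = "tcp"
    interval = "10s"
    timeout  = "2s"
  }
}
```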

## Authentication ((#authentication))

The `auth` stanza allows you to set credentials for your private registry, e.g. if you want
to pull an image from a private repository on Docker Hub.
The `auth` stanza can be set either in `Driver Config` or `Task Config` or both.
If set in both places, the `Task Config` auth takes precedence over the `Driver Config` auth.

**NOTE:** In the example below, `user` and `pass` are just placeholder values which need to be
replaced by the actual `username` and `password` when specifying the credentials. The `auth`
stanza below can be used for both `Driver Config` and `Task Config`.

```hcl
auth {
  username = "user"
  password = "pass"
}
```

## Plugin Options ((#plugin_options))

- `enabled` - (Optional) The `containerd` driver may be disabled on hosts by
  setting this option to `false` (defaults to `true`).

- `containerd_runtime` - (Required) Runtime for `containerd`, e.g.
  `io.containerd.runc.v1` or `io.containerd.runc.v2`.

- `stats_interval` - (Optional) This value defines how frequently the driver
  sends `TaskStats` to the Nomad client (defaults to `1 second`).

- `allow_privileged` - (Optional) If set to `false`, the driver will deny
  running privileged jobs (defaults to `true`).

An example of using these plugin options with the new [plugin syntax][plugin] is
shown below:

```hcl
plugin "containerd-driver" {
  config {
    enabled = true
    containerd_runtime = "io.containerd.runc.v2"
    stats_interval = "5s"
  }
}
```

Please note the plugin name should match whatever name you have specified for
the external driver in the [plugin_dir][plugin_dir] directory.

[nomad-driver-containerd]: https://github.com/Roblox/nomad-driver-containerd
[nomad-init]: /docs/commands/job/init
[plugin]: /docs/configuration/plugin
[plugin_dir]: /docs/configuration#plugin_dir
[plugin-options]: #plugin_options
[authentication]: #authentication
[host-network]: #host_network
[`mount options`]: https://github.com/containerd/containerd/blob/9561d9389d3dd87ff6030bf1da4e705bbc024130/mount/mount_linux.go#L198-L222
[moby repository]: https://github.com/moby/moby/blob/master/profiles/seccomp/default.json
[docker's behavior]: https://docs.docker.com/config/containers/container-networking/#dns-services
[allowed syscalls]: https://github.com/containerd/containerd/blob/master/contrib/seccomp/seccomp_default.go#L51-L390
[`network`]: /docs/job-specification/network
[`service`]: /docs/job-specification/service
[releases]: https://github.com/Roblox/nomad-driver-containerd/releases/