
---
layout: docs
page_title: client Stanza - Agent Configuration
description: |-
  The "client" stanza configures the Nomad agent to accept jobs as assigned by
  the Nomad server, join the cluster, and specify driver-specific configuration.
---

# `client` Stanza

<Placement groups={['client']} />

The `client` stanza configures the Nomad agent to accept jobs as assigned by
the Nomad server, join the cluster, and specify driver-specific configuration.

```hcl
client {
  enabled = true
  servers = ["1.2.3.4:4647", "5.6.7.8:4647"]
}
```

## `client` Parameters

- `alloc_dir` `(string: "[data_dir]/alloc")` - Specifies the directory to use
  for allocation data. By default, this is the top-level
  [data_dir](/docs/configuration#data_dir) suffixed with
  "alloc", like `"/opt/nomad/alloc"`. This must be an absolute path.

- `chroot_env` <code>([ChrootEnv](#chroot_env-parameters): nil)</code> -
  Specifies a key-value mapping that defines the chroot environment for jobs
  using the Exec and Java drivers.

- `enabled` `(bool: false)` - Specifies if client mode is enabled. All other
  client configuration options depend on this value.

- `max_kill_timeout` `(string: "30s")` - Specifies the maximum amount of time a
  job is allowed to wait to exit. Individual jobs may customize their own kill
  timeout, but it may not exceed this value.

- `disable_remote_exec` `(bool: false)` - Specifies whether the client should
  disable remote execution of tasks running on this client.

- `meta` `(map[string]string: nil)` - Specifies a key-value map that annotates
  the node with user-defined metadata.

- `network_interface` `(string: varied)` - Specifies the name of the interface
  to force network fingerprinting on. When run in dev mode, this defaults to the
  loopback interface. When not in dev mode, the interface attached to the
  default route is used. The scheduler chooses from these fingerprinted IP
  addresses when allocating ports for tasks. This value supports
  [go-sockaddr/template format][go-sockaddr/template].

  If no non-local IP addresses are found, Nomad may fingerprint link-local IPv6
  addresses, depending on the client's
  [`"fingerprint.network.disallow_link_local"`](#fingerprint-network-disallow_link_local)
  configuration value.

- `cpu_total_compute` `(int: 0)` - Specifies an override for the total CPU
  compute. This value should be set to `# Cores * Core MHz`. For example, a
  quad-core running at 2 GHz would have a total compute of 8000 (4 \* 2000). Most
  clients can determine their total CPU compute automatically, and thus in most
  cases this should be left unset.

- `memory_total_mb` `(int: 0)` - Specifies an override for the total memory. If set,
  this value overrides any detected memory.

- `min_dynamic_port` `(int: 20000)` - Specifies the minimum dynamic port to be
  assigned. Individual ports and ranges of ports may be excluded from dynamic
  port assignment via [`reserved`](#reserved-parameters) parameters.

- `max_dynamic_port` `(int: 32000)` - Specifies the maximum dynamic port to be
  assigned. Individual ports and ranges of ports may be excluded from dynamic
  port assignment via [`reserved`](#reserved-parameters) parameters.

- `node_class` `(string: "")` - Specifies an arbitrary string used to logically
  group client nodes by user-defined class. This can be used during job
  placement as a filter.

- `options` <code>([Options](#options-parameters): nil)</code> - Specifies a
  key-value mapping of internal configuration for clients, such as for driver
  configuration.

- `reserved` <code>([Reserved](#reserved-parameters): nil)</code> - Specifies
  a portion of the node's resources that Nomad should reserve and withhold from
  task placement. This can be used to target a certain capacity usage for the
  node. For example, a value equal to 20% of the node's CPU could be reserved
  to target a CPU utilization of 80%.

- `servers` `(array<string>: [])` - Specifies an array of addresses of the Nomad
  servers this client should join. This list is used to register the client with
  the server nodes and advertise the available resources so that the agent can
  receive work. Each address may be specified as an IP address or DNS name, with
  or without the port. If the port is omitted, the default port of `4647` is used.

- `server_join` <code>([server_join][server-join]: nil)</code> - Specifies
  how the Nomad client will connect to Nomad servers. The `start_join` field
  is not supported on the client. The `retry_join` fields may directly specify
  the server address or use go-discover syntax for auto-discovery. See the
  [server_join documentation][server-join] for more detail.

- `state_dir` `(string: "[data_dir]/client")` - Specifies the directory to use
  to store client state. By default, this is the top-level
  [data_dir](/docs/configuration#data_dir) suffixed with
  "client", like `"/opt/nomad/client"`. This must be an absolute path.

- `gc_interval` `(string: "1m")` - Specifies the interval at which Nomad
  attempts to garbage collect terminal allocation directories.

- `gc_disk_usage_threshold` `(float: 80)` - Specifies the disk usage percent which
  Nomad tries to maintain by garbage collecting terminal allocations.

- `gc_inode_usage_threshold` `(float: 70)` - Specifies the inode usage percent
  which Nomad tries to maintain by garbage collecting terminal allocations.

- `gc_max_allocs` `(int: 50)` - Specifies the maximum number of allocations
  which a client will track before triggering a garbage collection of terminal
  allocations. This will _not_ limit the number of allocations a node can run at
  a time, however after `gc_max_allocs` every new allocation will cause terminal
  allocations to be GC'd.

- `gc_parallel_destroys` `(int: 2)` - Specifies the maximum number of
  parallel destroys allowed by the garbage collector. This value should be
  relatively low to avoid high resource usage during garbage collections.

- `no_host_uuid` `(bool: true)` - By default a random node UUID will be
  generated, but setting this to `false` will use the system's UUID. Before
  Nomad 0.6 the default was to use the system UUID.

- `cni_path` `(string: "/opt/cni/bin")` - Sets the search path that is used for
  CNI plugin discovery. Multiple paths can be searched using colon-delimited
  paths.

- `cni_config_dir` `(string: "/opt/cni/config")` - Sets the directory where CNI
  network configuration is located. The client will use this path when fingerprinting
  CNI networks. Filenames should use the `.conflist` extension.

- `bridge_network_name` `(string: "nomad")` - Sets the name of the bridge to be
  created by Nomad for allocations running with bridge networking mode on the
  client.

- `bridge_network_subnet` `(string: "172.26.64.0/20")` - Specifies the subnet
  from which the client will allocate IP addresses.

- `artifact` <code>([Artifact](#artifact-parameters): varied)</code> -
  Specifies controls on the behavior of task
  [`artifact`](/docs/job-specification/artifact) stanzas.

- `template` <code>([Template](#template-parameters): nil)</code> - Specifies
  controls on the behavior of task
  [`template`](/docs/job-specification/template) stanzas.

- `host_volume` <code>([host_volume](#host_volume-stanza): nil)</code> - Exposes
  paths from the host as volumes that can be mounted into jobs.

- `host_network` <code>([host_network](#host_network-stanza): nil)</code> - Registers
  additional host networks with the node that can be selected when port mapping.

- `cgroup_parent` `(string: "/nomad")` - Specifies the cgroup parent under which
  cgroup subsystems managed by Nomad will be mounted. Currently this only applies
  to the `cpuset` subsystem. This field is ignored on non-Linux platforms.

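As an illustrative sketch, several of the parameters above can be combined in a
single `client` stanza. All values here are placeholders chosen for the example,
not recommendations or defaults:

```hcl
client {
  enabled    = true
  servers    = ["10.0.0.10:4647"] # illustrative server address
  node_class = "batch-workers"    # arbitrary user-defined class

  # Override detected CPU compute: 4 cores * 2000 MHz = 8000.
  cpu_total_compute = 8000

  # Constrain the dynamic port range assigned to tasks.
  min_dynamic_port = 20000
  max_dynamic_port = 32000

  # Force fingerprinting on a specific interface (name is illustrative).
  network_interface = "eth0"

  meta {
    rack = "rack-12-1"
  }
}
```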
### `chroot_env` Parameters

Drivers based on [isolated fork/exec](/docs/drivers/exec) implement file
system isolation using chroot on Linux. The `chroot_env` map allows the chroot
environment to be configured using source paths on the host operating system.
The mapping format is:

```text
source_path -> dest_path
```

The following example specifies a chroot which contains just enough to run the
`ls` utility:

```hcl
client {
  chroot_env {
    "/bin/ls"           = "/bin/ls"
    "/etc/ld.so.cache"  = "/etc/ld.so.cache"
    "/etc/ld.so.conf"   = "/etc/ld.so.conf"
    "/etc/ld.so.conf.d" = "/etc/ld.so.conf.d"
    "/etc/passwd"       = "/etc/passwd"
    "/lib"              = "/lib"
    "/lib64"            = "/lib64"
  }
}
```

When `chroot_env` is unspecified, the `exec` driver will use a default chroot
environment with the most commonly used parts of the operating system. Please
see the [Nomad `exec` driver documentation](/docs/drivers/exec#chroot) for
the full list.

As of Nomad 1.2, Nomad will never attempt to embed the `alloc_dir` in the
chroot as doing so would cause infinite recursion.

### `options` Parameters

~> Note: In Nomad 0.9, client configuration options for drivers were deprecated.
See the [plugin stanza][plugin-stanza] documentation for more information.

The following is not an exhaustive list; it covers only the options supported by
the Nomad client itself. To find the options supported by each individual Nomad
driver, please see the [drivers documentation](/docs/drivers).

- `"driver.allowlist"` `(string: "")` - Specifies a comma-separated list of
  allowlisted drivers. If specified, drivers not in the allowlist will be
  disabled. If the allowlist is empty, all drivers are fingerprinted and enabled
  where applicable.

  ```hcl
  client {
    options = {
      "driver.allowlist" = "docker,qemu"
    }
  }
  ```

- `"driver.denylist"` `(string: "")` - Specifies a comma-separated list of
  denylisted drivers. If specified, drivers in the denylist will be
  disabled.

  ```hcl
  client {
    options = {
      "driver.denylist" = "docker,qemu"
    }
  }
  ```

- `"env.denylist"` `(string: see below)` - Specifies a comma-separated list of
  environment variable keys not to pass to these tasks. Nomad passes the host
  environment variables to `exec`, `raw_exec` and `java` tasks. If specified,
  the defaults are overridden. If a value is provided, **all** defaults are
  overridden (they are not merged).

  ```hcl
  client {
    options = {
      "env.denylist" = "MY_CUSTOM_ENVVAR"
    }
  }
  ```

  The default list is:

  ```text
  CONSUL_TOKEN
  CONSUL_HTTP_TOKEN
  VAULT_TOKEN
  NOMAD_LICENSE
  AWS_ACCESS_KEY_ID
  AWS_SECRET_ACCESS_KEY
  AWS_SESSION_TOKEN
  GOOGLE_APPLICATION_CREDENTIALS
  ```

- `"user.denylist"` `(string: see below)` - Specifies a comma-separated
  denylist of usernames for which a task is not allowed to run. This only
  applies if the driver is included in `"user.checked_drivers"`. If a value is
  provided, **all** defaults are overridden (they are not merged).

  ```hcl
  client {
    options = {
      "user.denylist" = "root,ubuntu"
    }
  }
  ```

  The default list is:

  ```text
  root
  Administrator
  ```

- `"user.checked_drivers"` `(string: see below)` - Specifies a comma-separated
  list of drivers for which to enforce the `"user.denylist"`. For drivers using
  containers, this enforcement is usually unnecessary. If a value is provided,
  **all** defaults are overridden (they are not merged).

  ```hcl
  client {
    options = {
      "user.checked_drivers" = "exec,raw_exec"
    }
  }
  ```

  The default list is:

  ```text
  exec
  qemu
  java
  ```

- `"fingerprint.allowlist"` `(string: "")` - Specifies a comma-separated list of
  allowlisted fingerprinters. If specified, any fingerprinters not in the
  allowlist will be disabled. If the allowlist is empty, all fingerprinters are
  used.

  ```hcl
  client {
    options = {
      "fingerprint.allowlist" = "network"
    }
  }
  ```

- `"fingerprint.denylist"` `(string: "")` - Specifies a comma-separated list of
  denylisted fingerprinters. If specified, any fingerprinters in the denylist
  will be disabled.

  ```hcl
  client {
    options = {
      "fingerprint.denylist" = "network"
    }
  }
  ```

- `"fingerprint.network.disallow_link_local"` `(string: "false")` - Specifies
  whether the network fingerprinter should ignore link-local addresses in the
  case that no globally routable address is found. The fingerprinter will always
  prefer globally routable addresses.

  ```hcl
  client {
    options = {
      "fingerprint.network.disallow_link_local" = "true"
    }
  }
  ```

### `reserved` Parameters

- `cpu` `(int: 0)` - Specifies the amount of CPU to reserve, in MHz.

- `cores` `(int: 0)` - Specifies the number of CPU cores to reserve.

- `memory` `(int: 0)` - Specifies the amount of memory to reserve, in MB.

- `disk` `(int: 0)` - Specifies the amount of disk to reserve, in MB.

- `reserved_ports` `(string: "")` - Specifies a comma-separated list of ports
  to reserve on all fingerprinted network devices. Ranges can be specified by
  using a hyphen separating the two inclusive ends. See also
  [`host_network`](#host_network-stanza) for reserving ports on specific host
  networks.

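For example, a sketch of a `reserved` stanza using these parameters; the values
are illustrative only:

```hcl
client {
  reserved {
    cpu            = 500            # MHz withheld from task placement
    memory         = 256            # MB withheld from task placement
    disk           = 1024           # MB withheld from task placement
    reserved_ports = "22,8000-8100" # ranges use inclusive hyphenated ends
  }
}
```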
### `artifact` Parameters

- `http_read_timeout` `(string: "30m")` - Specifies the maximum duration in
  which an HTTP download request must complete before it is canceled. Set to
  `0` to not enforce a limit.

- `http_max_size` `(string: "100GB")` - Specifies the maximum size allowed for
  artifacts downloaded via HTTP. Set to `0` to not enforce a limit.

- `gcs_timeout` `(string: "30m")` - Specifies the maximum duration in which a
  Google Cloud Storage operation must complete before it is canceled. Set to
  `0` to not enforce a limit.

- `git_timeout` `(string: "30m")` - Specifies the maximum duration in which a
  Git operation must complete before it is canceled. Set to `0` to not enforce
  a limit.

- `hg_timeout` `(string: "30m")` - Specifies the maximum duration in which a
  Mercurial operation must complete before it is canceled. Set to `0` to not
  enforce a limit.

- `s3_timeout` `(string: "30m")` - Specifies the maximum duration in which an
  S3 operation must complete before it is canceled. Set to `0` to not enforce a
  limit.

- `disable_filesystem_isolation` `(bool: false)` - Specifies whether filesystem
  isolation should be disabled for artifact downloads. Applies only to systems
  where filesystem isolation via [landlock] is possible (Linux kernel 5.13+).

- `set_environment_variables` `(string: "")` - Specifies a comma-separated list
  of environment variables that should be inherited by the artifact sandbox from
  the Nomad client's environment. By default a minimal environment is set including
  a `PATH` appropriate for the operating system.

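The artifact parameters above might be tightened as in the following sketch; the
specific values and the choice of proxy variables are illustrative assumptions,
not defaults:

```hcl
client {
  artifact {
    http_read_timeout = "10m" # cancel HTTP downloads that take longer
    http_max_size     = "2GB" # reject artifacts larger than this
    git_timeout       = "5m"  # cancel slow Git operations

    # Pass proxy settings through to the artifact sandbox (illustrative).
    set_environment_variables = "HTTPS_PROXY,NO_PROXY"
  }
}
```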
### `template` Parameters

- `function_denylist` `([]string: ["plugin", "writeToFile"])` - Specifies a
  list of template rendering functions that should be disallowed in job specs.
  By default the `plugin` and `writeToFile` functions are disallowed as they
  allow unrestricted root access to the host.

- `disable_file_sandbox` `(bool: false)` - Allows templates access to arbitrary
  files on the client host via the `file` function. By default, templates can
  access files only within the [task working directory].

- `max_stale` `(string: "87600h")` - This is the maximum interval to allow "stale"
  data. If `max_stale` is set to `0`, only the Consul leader will respond to queries, and
  requests that reach a follower will be forwarded to the leader. In large clusters with
  many requests, this is not as scalable. This option allows any follower to respond
  to a query, so long as the last-replicated data is within this bound. Higher values
  result in less cluster load, but are more likely to have outdated data. The default
  of 10 years (`87600h`) matches the default Consul configuration.

- `wait` `(map: { min = "5s" max = "4m" })` - Defines the minimum and maximum amount
  of time to wait before attempting to re-render a template. Consul Template re-renders
  templates whenever rendered variables from Consul, Nomad, or Vault change. However,
  in order to minimize how often tasks are restarted or reloaded, Nomad will configure
  Consul Template with a backoff timer that will tick on an interval equal to the
  specified `min` value. Consul Template will always wait at least as long as the
  `min` value specified. If the underlying data has not changed between two tick
  intervals, Consul Template will re-render. If the underlying data has changed,
  Consul Template will delay re-rendering until the underlying data stabilizes for
  at least one tick interval, or the configured `max` duration has elapsed. Once the
  `max` duration has elapsed, Consul Template will re-render the template with the
  data available at the time. This is useful to enable in systems where Consul is in
  a degraded state, or the referenced data values are changing rapidly, because it
  will reduce the number of times a template is rendered. This configuration is also
  exposed in the _task template stanza_ to allow overrides per task.

  ```hcl
  wait {
    min     = "5s"
    max     = "4m"
  }
  ```

- `wait_bounds` `(map: nil)` - Defines client level lower and upper bounds for
  per-template `wait` configuration. If the individual template configuration has
  a `min` lower than `wait_bounds.min` or a `max` greater than the `wait_bounds.max`,
  the bounds will be enforced, and the template `wait` will be adjusted before being
  sent to `consul-template`.

  ```hcl
  wait_bounds {
    min     = "5s"
    max     = "10s"
  }
  ```

- `block_query_wait` `(string: "5m")` - This is the amount of time to wait
  for the results of a blocking query. Many endpoints in Consul support a feature
  known as "blocking queries". A blocking query is used to wait for a potential
  change using long polling.

- `consul_retry` `(map: { attempts = 0 backoff = "250ms" max_backoff = "1m" })` -
  This controls the retry behavior when an error is returned from Consul. The template
  runner will not exit in the face of failure. Instead, it uses exponential back-off
  and retry functions to wait for the Consul cluster to become available, as is
  customary in distributed systems.

  ```hcl
  consul_retry {
    # This specifies the number of attempts to make before giving up. Each
    # attempt adds the exponential backoff sleep time. Setting this to
    # zero will implement an unlimited number of retries.
    attempts = 0
    # This is the base amount of time to sleep between retry attempts. Each
    # retry sleeps for an exponent of 2 longer than this base. For 5 retries,
    # the sleep times would be: 250ms, 500ms, 1s, 2s, then 4s.
    backoff = "250ms"
    # This is the maximum amount of time to sleep between retry attempts.
    # When max_backoff is set to zero, there is no upper limit to the
    # exponential sleep between retry attempts.
    # If max_backoff is set to 10s and backoff is set to 1s, sleep times
    # would be: 1s, 2s, 4s, 8s, 10s, 10s, ...
    max_backoff = "1m"
  }
  ```

- `vault_retry` `(map: { attempts = 0 backoff = "250ms" max_backoff = "1m" })` -
  This controls the retry behavior when an error is returned from Vault. Consul
  Template is highly fault tolerant, meaning it does not exit in the face of failure.
  Instead, it uses exponential back-off and retry functions to wait for the cluster
  to become available, as is customary in distributed systems.

  ```hcl
  vault_retry {
    # This specifies the number of attempts to make before giving up. Each
    # attempt adds the exponential backoff sleep time. Setting this to
    # zero will implement an unlimited number of retries.
    attempts = 0
    # This is the base amount of time to sleep between retry attempts. Each
    # retry sleeps for an exponent of 2 longer than this base. For 5 retries,
    # the sleep times would be: 250ms, 500ms, 1s, 2s, then 4s.
    backoff = "250ms"
    # This is the maximum amount of time to sleep between retry attempts.
    # When max_backoff is set to zero, there is no upper limit to the
    # exponential sleep between retry attempts.
    # If max_backoff is set to 10s and backoff is set to 1s, sleep times
    # would be: 1s, 2s, 4s, 8s, 10s, 10s, ...
    max_backoff = "1m"
  }
  ```

- `nomad_retry` `(map: { attempts = 0 backoff = "250ms" max_backoff = "1m" })` -
  This controls the retry behavior when an error is returned from Nomad. Consul
  Template is highly fault tolerant, meaning it does not exit in the face of failure.
  Instead, it uses exponential back-off and retry functions to wait for the cluster
  to become available, as is customary in distributed systems.

  ```hcl
  nomad_retry {
    # This specifies the number of attempts to make before giving up. Each
    # attempt adds the exponential backoff sleep time. Setting this to
    # zero will implement an unlimited number of retries.
    attempts = 0
    # This is the base amount of time to sleep between retry attempts. Each
    # retry sleeps for an exponent of 2 longer than this base. For 5 retries,
    # the sleep times would be: 250ms, 500ms, 1s, 2s, then 4s.
    backoff = "250ms"
    # This is the maximum amount of time to sleep between retry attempts.
    # When max_backoff is set to zero, there is no upper limit to the
    # exponential sleep between retry attempts.
    # If max_backoff is set to 10s and backoff is set to 1s, sleep times
    # would be: 1s, 2s, 4s, 8s, 10s, 10s, ...
    max_backoff = "1m"
  }
  ```

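Putting the template parameters above together, a sketch of a client-level
`template` stanza; the values are illustrative, not recommendations:

```hcl
client {
  template {
    # Defaults shown explicitly for illustration.
    function_denylist    = ["plugin", "writeToFile"]
    disable_file_sandbox = false

    # Accept follower responses with data up to an hour stale (illustrative).
    max_stale = "1h"

    # Batch rapid changes: wait at least 10s, at most 2m, before re-rendering.
    wait {
      min = "10s"
      max = "2m"
    }
  }
}
```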
### `host_volume` Stanza

The `host_volume` stanza is used to make volumes available to jobs.

The key of the stanza corresponds to the name of the volume for use in the
`source` parameter of a `"host"` type [`volume`](/docs/job-specification/volume)
and ACLs.

```hcl
client {
  host_volume "ca-certificates" {
    path = "/etc/ssl/certs"
    read_only = true
  }
}
```

#### `host_volume` Parameters

- `path` `(string: "", required)` - Specifies the path on the host that should
  be used as the source when this volume is mounted into a task. The path must
  exist on client startup.

- `read_only` `(bool: false)` - Specifies whether the volume may only ever be
  mounted read-only, or whether it may also be mounted writable.

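For context, a sketch of the job-side counterpart that consumes the
`ca-certificates` volume registered above; the job, group, and task names are
illustrative:

```hcl
job "example" {
  group "web" {
    # Request the client-registered host volume by its source name.
    volume "certs" {
      type      = "host"
      source    = "ca-certificates"
      read_only = true
    }

    task "server" {
      driver = "docker"

      # Mount the requested volume into the task's filesystem.
      volume_mount {
        volume      = "certs"
        destination = "/etc/ssl/certs"
        read_only   = true
      }
    }
  }
}
```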
### `host_network` Stanza

The `host_network` stanza is used to register additional host networks with
the node that can be used when port mapping.

The key of the stanza corresponds to the name of the network used in the
[`host_network`](/docs/job-specification/network#host-networks).

```hcl
client {
  host_network "public" {
    cidr = "203.0.113.0/24"
    reserved_ports = "22,80"
  }
}
```

#### `host_network` Parameters

- `cidr` `(string: "")` - Specifies a CIDR block of addresses to match against.
  If an address is found on the node that is contained by this CIDR block, the
  host network will be registered with it.

- `interface` `(string: "")` - Filters searching of addresses to a specific interface.

- `reserved_ports` `(string: "")` - Specifies a comma-separated list of ports to
  reserve on all addresses associated with this network. Ranges can be specified by using
  a hyphen separating the two inclusive ends.
  [`reserved.reserved_ports`](#reserved_ports) are also reserved on each host
  network.

## `client` Examples

### Common Setup

This example shows the most basic configuration for a Nomad client joined to a
cluster.

```hcl
client {
  enabled = true
  server_join {
    retry_join = [ "1.1.1.1", "2.2.2.2" ]
    retry_max = 3
    retry_interval = "15s"
  }
}
```

### Reserved Resources

This example shows a sample configuration for reserving resources for the client.
This is useful if you want to allocate only a portion of the client's resources
to jobs.

```hcl
client {
  enabled = true

  reserved {
    cpu            = 500
    memory         = 512
    disk           = 1024
    reserved_ports = "22,80,8500-8600"
  }
}
```

### Custom Metadata, Network Speed, and Node Class

This example shows a client configuration which customizes the metadata, network
speed, and node class. The scheduler can use this information while processing
[constraints][metadata_constraint]. The metadata is completely user configurable;
the values below are for illustrative purposes only.

```hcl
client {
  enabled       = true
  node_class    = "prod"

  meta {
    owner           = "ops"
    cached_binaries = "redis,apache,nginx,jq,cypress,nodejs"
    rack            = "rack-12-1"
  }
}
```

[plugin-options]: #plugin-options
[plugin-stanza]: /docs/configuration/plugin
[server-join]: /docs/configuration/server_join 'Server Join'
[metadata_constraint]: /docs/job-specification/constraint#user-specified-metadata 'Nomad User-Specified Metadata Constraint Example'
[task working directory]: /docs/runtime/environment#task-directories 'Task directories'
[go-sockaddr/template]: https://godoc.org/github.com/hashicorp/go-sockaddr/template
[landlock]: https://docs.kernel.org/userspace-api/landlock.html