---
layout: docs
page_title: Consul Service Mesh
description: >-
  Learn how to use Nomad with Consul service mesh to enable secure service-to-service
  communication
---

# Consul Service Mesh

~> **Note:** Nomad's service mesh integration requires Linux network namespaces.
Consul service mesh will not run on Windows or macOS.

[Consul service mesh](https://developer.hashicorp.com/consul/docs/connect) provides
service-to-service connection authorization and encryption using mutual
Transport Layer Security (TLS). Applications can use sidecar proxies in a
service mesh configuration to automatically establish TLS connections for
inbound and outbound connections without being aware of the service mesh at all.

# Nomad with Consul Service Mesh Integration

Nomad integrates with Consul to provide secure service-to-service communication
between Nomad jobs and task groups. To support Consul service mesh, Nomad
adds a new networking mode for jobs that enables tasks in the same task group to
share their networking stack. With a few changes to the job specification, job
authors can opt into service mesh integration. When service mesh is enabled, Nomad will
launch a proxy alongside the application in the job file. The proxy (Envoy)
provides secure communication with other applications in the cluster.

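The opt-in amounts to two additions to a task group: a `bridge` network mode and
a `connect` block with an empty `sidecar_service` inside the group's service. A
minimal sketch (the complete, runnable job appears later in this guide):

```hcl
group "api" {
  # Tasks in this group share a network namespace.
  network {
    mode = "bridge"
  }

  service {
    name = "count-api"
    port = "9001"

    connect {
      # An empty sidecar_service block asks Nomad to run an Envoy sidecar proxy.
      sidecar_service {}
    }
  }

  # ... tasks ...
}
```
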
Nomad job specification authors can use Nomad's Consul service mesh integration to
implement [service segmentation](https://www.consul.io/use-cases/multi-platform-service-mesh) in a
microservice architecture running in public clouds without having to directly
manage TLS certificates. This is transparent to job specification authors as
security features in service mesh continue to work even as the application scales up
or down or gets rescheduled by Nomad.

To use the Consul service mesh integration with Consul ACLs enabled, see the
[Secure Nomad Jobs with Consul Service Mesh](https://learn.hashicorp.com/tutorials/nomad/consul-service-mesh)
guide.

# Nomad Consul Service Mesh Example

The following section walks through an example to enable secure communication
between a web dashboard and a backend counting service. The web dashboard and
the counting service are managed by Nomad. Nomad additionally configures Envoy
proxies to run alongside these applications. The dashboard is configured to
connect to the counting service via localhost on port 8080, the local port
bound by its Envoy proxy. The proxy is managed by Nomad and handles mTLS
communication to the counting service.

## Prerequisites

### Consul

The Consul service mesh integration with Nomad requires [Consul 1.8 or
later](https://releases.hashicorp.com/consul/1.8.0/). The Consul agent can be
run in dev mode with the following command:

~> **Note:** Nomad's Consul service mesh integration requires Consul in your `$PATH`

```shell-session
$ consul agent -dev
```

To use service mesh on a non-dev Consul agent, you will minimally need to enable the
gRPC port and set `connect` to enabled by adding some additional information to
your Consul client configurations, depending on format. Consul agents running TLS
and version [1.14.0](https://releases.hashicorp.com/consul/1.14.0) or later
should set the `grpc_tls` configuration parameter instead of `grpc`. Please see
the Consul [port documentation][consul_ports] for further reference material.

For HCL configurations:

```hcl
# ...

ports {
  grpc = 8502
}

connect {
  enabled = true
}
```

For JSON configurations:

```javascript
{
  // ...
  "ports": {
    "grpc": 8502
  },
  "connect": {
    "enabled": true
  }
}
```

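If the Consul agent has TLS enabled and is running 1.14.0 or later, configure the
TLS gRPC listener instead. A minimal HCL sketch (8503 is Consul's default
`grpc_tls` port; adjust it for your environment):

```hcl
# ...

ports {
  # TLS-enabled gRPC listener used by Envoy sidecars (Consul 1.14.0+).
  grpc_tls = 8503
}

connect {
  enabled = true
}
```
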
#### Consul ACLs

~> **Note:** Starting in Nomad v1.3.0, Consul Service Identity ACL tokens automatically
generated by Nomad on behalf of Connect enabled services are now created in [`Local`]
rather than Global scope, and are no longer replicated globally.

To facilitate cross-Consul datacenter requests of Connect services registered by
Nomad, Consul agents will need to be configured with [default anonymous][anon_token]
ACL tokens with ACL policies of sufficient permissions to read service and node
metadata pertaining to those requests. This mechanism is described in Consul [#7414][consul_acl].
A typical Consul agent anonymous token may contain an ACL policy such as:

```hcl
service_prefix "" { policy = "read" }
node_prefix    "" { policy = "read" }
```

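As a sketch of one way to apply such a policy with the Consul CLI, assuming the
rules above are saved locally in a file named `anonymous-read.hcl` (the policy
and file names are illustrative; `00000000-0000-0000-0000-000000000002` is the
accessor ID Consul reserves for the anonymous token):

```shell-session
$ consul acl policy create -name "anonymous-read" -rules @anonymous-read.hcl
$ consul acl token update -id 00000000-0000-0000-0000-000000000002 -policy-name "anonymous-read"
```
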
### Nomad

Nomad must schedule onto a routable interface in order for the proxies to
connect to each other. The following steps show how to start a Nomad dev agent
configured for Consul service mesh.

```shell-session
$ sudo nomad agent -dev-connect
```

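Outside of dev mode, Nomad talks to the local Consul agent configured in the
agent's `consul` block. A minimal sketch, assuming Consul's default HTTP address
(adjust the address and token for your environment):

```hcl
# Nomad agent configuration fragment
consul {
  # HTTP(S) API address of the local Consul agent (Consul's default).
  address = "127.0.0.1:8500"

  # If Consul ACLs are enabled, supply a token with sufficient privileges.
  # token = "..."
}
```
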
### CNI Plugins

Nomad uses CNI plugins to configure the network namespace used to secure the
Consul service mesh sidecar proxy. All Nomad client nodes using network namespaces
must have CNI plugins installed.

The following commands install CNI plugins:

```shell-session
$ curl -L -o cni-plugins.tgz "https://github.com/containernetworking/plugins/releases/download/v1.0.0/cni-plugins-linux-$( [ $(uname -m) = aarch64 ] && echo arm64 || echo amd64)"-v1.0.0.tgz
$ sudo mkdir -p /opt/cni/bin
$ sudo tar -C /opt/cni/bin -xzf cni-plugins.tgz
```

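Nomad searches for CNI plugins in `/opt/cni/bin` by default. If you install the
plugins somewhere else, point the client at that location with the `cni_path`
option; a sketch (the path shown is simply the default):

```hcl
# Nomad client configuration fragment
client {
  # Directory (or colon-separated list of directories) searched for CNI plugin binaries.
  cni_path = "/opt/cni/bin"
}
```
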
Ensure your Linux operating system distribution has been configured to allow
container traffic through the bridge network to be routed via iptables. These
tunables can be set as follows:

```shell-session
$ echo 1 | sudo tee /proc/sys/net/bridge/bridge-nf-call-arptables
$ echo 1 | sudo tee /proc/sys/net/bridge/bridge-nf-call-ip6tables
$ echo 1 | sudo tee /proc/sys/net/bridge/bridge-nf-call-iptables
```

To preserve these settings on startup of a client node, add a file containing the
following to `/etc/sysctl.d/`, or update the file your Linux distribution puts in
that directory.

```
net.bridge.bridge-nf-call-arptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
```

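After adding or editing the file, the settings can be applied without a reboot
(this assumes a distribution that processes `/etc/sysctl.d/` via `sysctl --system`):

```shell-session
$ sudo sysctl --system
```
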
## Run the Service Mesh-enabled Services

Once Nomad and Consul are running, submit the following service mesh-enabled services
to Nomad by copying the HCL into a file named `servicemesh.nomad` and running:
`nomad job run servicemesh.nomad`

```hcl
job "countdash" {
  datacenters = ["dc1"]

  group "api" {
    network {
      mode = "bridge"
    }

    service {
      name = "count-api"
      port = "9001"

      connect {
        sidecar_service {}
      }
    }

    task "web" {
      driver = "docker"

      config {
        image = "hashicorpdev/counter-api:v3"
      }
    }
  }

  group "dashboard" {
    network {
      mode = "bridge"

      port "http" {
        static = 9002
        to     = 9002
      }
    }

    service {
      name = "count-dashboard"
      port = "http"

      connect {
        sidecar_service {
          proxy {
            upstreams {
              destination_name = "count-api"
              local_bind_port  = 8080
            }
          }
        }
      }
    }

    task "dashboard" {
      driver = "docker"

      env {
        COUNTING_SERVICE_URL = "http://${NOMAD_UPSTREAM_ADDR_count_api}"
      }

      config {
        image = "hashicorpdev/counter-dashboard:v3"
      }
    }
  }
}
```

The job contains two task groups: an API service and a web frontend.

### API Service

The API service is defined as a task group with a bridge network:

```hcl
  group "api" {
    network {
      mode = "bridge"
    }

    # ...
  }
```

Since the API service is only accessible via Consul service mesh, it does not define
any ports in its network. The service stanza enables service mesh.

```hcl
  group "api" {

    # ...

    service {
      name = "count-api"
      port = "9001"

      connect {
        sidecar_service {}
      }
    }

    # ...

  }
```

The `port` in the service stanza is the port the API service listens on. The
Envoy proxy will automatically route traffic to that port inside the network
namespace. Note that currently this cannot be a named port; it must be a
hard-coded port value. See [GH-9907].

### Web Frontend

The web frontend is defined as a task group with a bridge network and a static
forwarded port:

```hcl
  group "dashboard" {
    network {
      mode = "bridge"

      port "http" {
        static = 9002
        to     = 9002
      }
    }

    # ...

  }
```

The `static = 9002` parameter requests the Nomad scheduler reserve port 9002 on
a host network interface. The `to = 9002` parameter forwards that host port to
port 9002 inside the network namespace.

This allows you to connect to the web frontend in a browser by visiting
`http://<host_ip>:9002` as shown below:

[![Count Dashboard][count-dashboard]][count-dashboard]

The web frontend connects to the API service via Consul service mesh.

```hcl
    service {
      name = "count-dashboard"
      port = "http"

      connect {
        sidecar_service {
          proxy {
            upstreams {
              destination_name = "count-api"
              local_bind_port  = 8080
            }
          }
        }
      }
    }
```

The `upstreams` stanza defines the remote service to access (`count-api`) and
what port to expose that service on inside the network namespace (`8080`).

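By default the upstream listener binds to the loopback address inside the
group's network namespace. The `upstreams` block also accepts a
`local_bind_address` parameter if you need to change that; a sketch (the
loopback value shown matches the typical default):

```hcl
            upstreams {
              destination_name   = "count-api"
              local_bind_port    = 8080
              # Optional: the address inside the namespace the proxy listens on.
              local_bind_address = "127.0.0.1"
            }
```
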
The web frontend is configured to communicate with the API service with an
environment variable:

```hcl
      env {
        COUNTING_SERVICE_URL = "http://${NOMAD_UPSTREAM_ADDR_count_api}"
      }
```

The web frontend is configured via the `$COUNTING_SERVICE_URL` environment
variable, so you must interpolate the upstream's address into that variable.
Note that dashes (`-`) are converted to underscores (`_`) in environment
variables, so `count-api` becomes `count_api`.

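Nomad also exposes the upstream's address as separate
`NOMAD_UPSTREAM_IP_<service>` and `NOMAD_UPSTREAM_PORT_<service>` variables. If
an application expected the host and port separately, you could interpolate them
individually; the variable names on the left below are purely illustrative (the
demo dashboard only reads `COUNTING_SERVICE_URL`):

```hcl
      env {
        # Hypothetical split-out configuration for an app that takes host and port.
        COUNTING_SERVICE_HOST = "${NOMAD_UPSTREAM_IP_count_api}"
        COUNTING_SERVICE_PORT = "${NOMAD_UPSTREAM_PORT_count_api}"
      }
```
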
## Limitations

- The minimum Consul version to use Connect with Nomad is Consul v1.8.0.
- The `consul` binary must be present in Nomad's `$PATH` to run the Envoy
  proxy sidecar on client nodes.
- Consul service mesh using network namespaces is only supported on Linux.
- Prior to Consul 1.9, the Envoy sidecar proxy will drop and stop accepting
  connections while the Nomad agent is restarting.

[count-dashboard]: /img/count-dashboard.png
[consul_acl]: https://github.com/hashicorp/consul/issues/7414
[gh-9907]: https://github.com/hashicorp/nomad/issues/9907
[`Local`]: https://developer.hashicorp.com/consul/docs/security/acl/acl-tokens#token-attributes
[anon_token]: https://developer.hashicorp.com/consul/docs/security/acl/acl-tokens#special-purpose-tokens
[consul_ports]: https://developer.hashicorp.com/consul/docs/agent/config/config-files#ports