---
layout: "docs"
page_title: "Running Consul - Kubernetes"
sidebar_current: "docs-platform-k8s-run"
description: |-
  Consul can run directly on Kubernetes in either server or client mode. For pure-Kubernetes workloads, this enables Consul to exist entirely within Kubernetes. For heterogeneous workloads, Consul agents can join a server running inside or outside of Kubernetes.
---

# Running Consul on Kubernetes

Consul can run directly on Kubernetes in either server or client mode.
For pure-Kubernetes workloads, this enables Consul to exist entirely
within Kubernetes. For heterogeneous workloads, Consul agents can join
a server running inside or outside of Kubernetes.

This page starts with a large how-to section for various specific tasks.
To learn more about the general architecture of Consul on Kubernetes, scroll
down to the [architecture](/docs/platform/k8s/run.html#architecture) section.

## Helm Chart

The recommended way to run Consul on Kubernetes is via the
[Helm chart](/docs/platform/k8s/helm.html). This will install and configure
all the necessary components to run Consul. The configuration enables you
to run just a server cluster, just a client cluster, or both. Using the Helm
chart, you can have a full Consul deployment up and running in minutes.

While the Helm chart exposes dozens of useful configurations and automatically
sets up complex resources, it **does not automatically operate Consul.**
You are still responsible for learning how to monitor, back up, and
upgrade the Consul cluster.

The Helm chart has no required configuration and will install a Consul
cluster with sane defaults out of the box. Prior to going to production,
it is highly recommended that you
[learn about the configuration options](/docs/platform/k8s/helm.html#configuration-values-).

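For illustration, a `values.yaml` file overriding a few common options might
look like the sketch below. The exact keys shown (`server.replicas`,
`server.bootstrapExpect`, and `ui.enabled`) are assumptions based on a typical
chart layout, so verify them against the documented configuration values
before use.

```yaml
# Hypothetical values.yaml sketch; confirm each key against the chart docs.
server:
  replicas: 5          # run five server agents instead of the default
  bootstrapExpect: 5   # expect all five servers before bootstrapping

ui:
  enabled: true        # serve the built-in Consul UI
```
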
~> **Security Warning:** By default, the chart will install an insecure configuration
of Consul. This provides a less complicated out-of-box experience for new users,
but is not appropriate for a production setup. It is highly recommended to use
a properly secured Kubernetes cluster or make sure that you understand and enable
the [recommended security features](/docs/internals/security.html). Currently,
some of these features are not supported in the Helm chart and require additional
manual configuration.

## How-To

### Installing Consul

To install Consul, clone the consul-helm repository, check out the latest release, and install
Consul. You can run `helm install` with the `--dry-run` flag to see the
resources it would configure. In a production environment, you should always
use the `--dry-run` flag prior to making any changes to the Consul cluster
via Helm.

```sh
# Clone the chart repo
$ git clone https://github.com/hashicorp/consul-helm.git
$ cd consul-helm

# Check out a tagged version
$ git checkout v0.1.0

# Run Helm
$ helm install --name consul ./
...
```

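If you want to preview what the chart will create first, a dry run looks like
this (output elided):

```sh
# Render the resources Helm would create, without installing anything
$ helm install --dry-run --name consul ./
...
```
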
_That's it._ The Helm chart does everything necessary to set up a recommended
Consul-on-Kubernetes deployment.
Within a couple of minutes, a Consul cluster will be formed, a leader will be
elected, and every node will have a running Consul agent.

The defaults will install both server and client agents. To install
only one or the other, see the
[chart configuration values](/docs/platform/k8s/helm.html#configuration-values-); a sketch of a server-only install follows.

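As an illustrative sketch, a server-only install might disable everything
globally and re-enable just the servers. The `global.enabled` and
`server.enabled` keys below are assumptions; confirm them against the chart's
configuration values.

```yaml
# Hypothetical values.yaml for a server-only install
global:
  enabled: false   # opt out of every component by default

server:
  enabled: true    # opt back in to only the server agents
```
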
### Viewing the Consul UI

The Consul UI is enabled by default when using the Helm chart.
For security reasons, it isn't exposed via a Service by default, so you must
use `kubectl port-forward` to visit the UI. Once the port is forwarded as
shown below, navigate your browser to `http://localhost:8500`.

```
$ kubectl port-forward consul-server-0 8500:8500
...
```

The UI can also be exposed via a Kubernetes Service. To do this, configure
the [`ui.service` chart values](/docs/platform/k8s/helm.html#v-ui-service).

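For example, a sketch of exposing the UI through a `LoadBalancer` Service
might look like the following; the nested keys under `ui.service` are
assumptions, so check them against the chart's documented values:

```yaml
# Hypothetical values.yaml fragment exposing the UI externally
ui:
  enabled: true
  service:
    enabled: true
    type: "LoadBalancer"   # or "NodePort"/"ClusterIP" depending on your needs
```
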
### Joining an Existing Consul Cluster

If you have a Consul cluster already running, you can configure your
Kubernetes nodes to join this existing cluster.

```yaml
global:
  enabled: false

client:
  enabled: true
  join:
    - "provider=my-cloud config=val ..."
```

The `values.yaml` file above configures the Helm chart with the proper
settings to join an existing cluster.

The `global.enabled` value first disables all chart components by default
so that each component is opt-in. This allows us to set up _only_ the client
agents. We then opt in to the client agents by setting `client.enabled` to
`true`.

Next, `client.join` is set to an array of valid
[`-retry-join` values](/docs/agent/options.html#retry-join). In the
example above, a fake [cloud auto-join](/docs/agent/cloud-auto-join.html)
value is specified. This should be set so that it resolves to the addresses
of your existing Consul cluster; a concrete sketch follows.

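On AWS, for example, the cloud auto-join value selects servers by EC2 tag;
the tag key and value below are placeholders for whatever your existing
servers are actually tagged with:

```yaml
client:
  enabled: true
  join:
    # Static addresses work too, e.g. "10.0.0.10"
    - "provider=aws tag_key=consul-cluster tag_value=my-cluster"
```
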
-> **Networking:** Note that for the Kubernetes nodes to join an existing
cluster, the nodes (and specifically the agent pods) must be able to connect
to all other server and client agents inside and _outside_ of Kubernetes.
If this isn't possible, consider running the Kubernetes agents as a separate
datacenter or adopting Enterprise for
[network segments](/docs/enterprise/network-segments/index.html).

### Accessing the Consul HTTP API

The Consul HTTP API should be accessed by communicating with the local agent
running on the same node. While technically any listening agent (client or
server) can respond to the HTTP API, communicating with the local agent
has important caching behavior and allows you to use the simpler
[`/agent` endpoints for services and checks](/api/agent.html).

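As a quick illustration of those endpoints, the commands below register a
service and then list services through an agent reachable at `localhost:8500`
(for example, via the port-forward shown earlier); the service name and port
are placeholders:

```sh
# Register a service with the local agent, then list the agent's services
$ curl -X PUT -d '{"Name": "web", "Port": 80}' \
    http://localhost:8500/v1/agent/service/register
$ curl http://localhost:8500/v1/agent/services
...
```
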
For Consul installed via the Helm chart, a client agent is installed on
each Kubernetes node. This is explained in the [architecture](/docs/platform/k8s/run.html#client-agents)
section. To access the agent, you may use the
[downward API](https://kubernetes.io/docs/tasks/inject-data-application/downward-api-volume-expose-pod-information/).

An example pod specification is shown below. In addition to pods, anything
with a pod template can also use the downward API and can therefore
access Consul: StatefulSets, Deployments, Jobs, etc.

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: consul-example
spec:
  containers:
    - name: example
      image: "consul:latest"
      env:
        - name: HOST_IP
          valueFrom:
            fieldRef:
              fieldPath: status.hostIP
      command:
        - "/bin/sh"
        - "-ec"
        - |
            export CONSUL_HTTP_ADDR="${HOST_IP}:8500"
            consul kv put hello world
  restartPolicy: Never
```

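Assuming the spec above is saved as `consul-example.yaml` (the filename is
arbitrary), you can apply it and confirm that the KV write against the
node-local agent succeeded:

```sh
$ kubectl apply -f consul-example.yaml
$ kubectl logs consul-example
...
```
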
An example `Deployment` is also shown below to demonstrate how the host IP can
be accessed from nested pod specifications:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: consul-example-deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      app: consul-example
  template:
    metadata:
      labels:
        app: consul-example
    spec:
      containers:
        - name: example
          image: "consul:latest"
          env:
            - name: HOST_IP
              valueFrom:
                fieldRef:
                  fieldPath: status.hostIP
          command:
            - "/bin/sh"
            - "-ec"
            - |
                export CONSUL_HTTP_ADDR="${HOST_IP}:8500"
                consul kv put hello world
```

### Upgrading Consul on Kubernetes

To upgrade Consul on Kubernetes, we follow the same pattern as
[generally upgrading Consul](/docs/upgrading.html), except we can use
the Helm chart to step through a rolling deploy. It is important to understand
how to [generally upgrade Consul](/docs/upgrading.html) before reading this
section.

Upgrading Consul on Kubernetes follows that same pattern: each server
is updated one-by-one, and after that is successful, the clients are
updated in batches.

#### Upgrading Consul Servers

To initiate the upgrade, change the `server.image` value to the
desired Consul version. For illustrative purposes, the example below will
use `consul:123.456`. Also set the `server.updatePartition` value
_equal to the number of server replicas_:

```yaml
server:
  image: "consul:123.456"
  replicas: 3
  updatePartition: 3
```

The `updatePartition` value controls how many instances of the server
cluster are updated. Only instances with an index _greater than or equal to_
the `updatePartition` value are updated (zero-indexed). Therefore, by setting
it equal to `replicas`, none should update yet.

Next, run the upgrade. You should run this with `--dry-run` first to verify
the changes that will be sent to the Kubernetes cluster.

```
# Verify the planned changes first, then apply them
$ helm upgrade --dry-run consul ./
...

$ helm upgrade consul ./
...
```

This should cause no changes (although the resource will be updated). If
everything is stable, begin by decreasing the `updatePartition` value by one
and running `helm upgrade` again. This should cause the first Consul server
to be stopped and restarted with the new image.

Wait until the Consul server cluster is healthy again (30s to a few minutes),
then decrease `updatePartition` and upgrade again. Continue until
`updatePartition` is `0`. At this point, you may remove the
`updatePartition` configuration. Your server upgrade is complete. One way to
check cluster health between steps is shown below.

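The exact commands depend on your setup; as a sketch, with the chart's
default pod naming you can ask any server for its view of the cluster:

```sh
# Confirm all servers rejoined and the Raft peer set is healthy
$ kubectl exec consul-server-0 -- consul members
$ kubectl exec consul-server-0 -- consul operator raft list-peers
...
```
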
#### Upgrading Consul Clients

With the servers upgraded, it is time to upgrade the clients. To upgrade
the clients, set the `client.image` value to the desired Consul version.
Then, run `helm upgrade`. This will upgrade the clients in batches, waiting
until the clients come up healthy before continuing.

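Continuing the illustrative version from above, the values change is just:

```yaml
client:
  image: "consul:123.456"
```

Then run `helm upgrade consul ./` (with `--dry-run` first) as before.
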
## Architecture

We recommend running Consul on Kubernetes with the same
[general architecture](/docs/internals/architecture.html)
as running it anywhere else. There are some benefits Kubernetes can provide
that ease operating a Consul cluster, and we document those below. The standard
[production deployment guide](/docs/guides/deployment.html) is still an
important read even if running Consul within Kubernetes.

Each section below outlines a different component of running Consul on
Kubernetes and gives an overview of the resources that are used within the
Kubernetes cluster.

### Server Agents

The server agents are run as a **StatefulSet**, using persistent volume
claims to store the server state. This also ensures that the
[node ID](/docs/agent/options.html#_node_id) is persisted so that servers
can be rescheduled onto new IP addresses without causing issues. The server agents
are configured with
[anti-affinity](https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#affinity-and-anti-affinity)
rules so that they are placed on different nodes. A readiness probe is
configured that marks the pod as ready only when it has established a leader.

A **Service** is registered to represent the servers and expose the various
ports. The DNS address of this service is used to join the servers to each
other without requiring any other access to the Kubernetes cluster. The
service is configured to publish non-ready endpoints so that it can be used
for joining during bootstrap and upgrades.

Additionally, a **PodDisruptionBudget** is configured so the Consul server
cluster maintains quorum during voluntary operational events. The maximum
unavailable is `(n/2)-1` where `n` is the number of server agents; using
integer division, a 3-server cluster allows 0 voluntary disruptions and a
5-server cluster allows 1.

-> **Note:** Kubernetes and Helm do not delete Persistent Volumes or Persistent
Volume Claims when a
[StatefulSet is deleted](https://kubernetes.io/docs/concepts/workloads/controllers/statefulset/#stable-storage),
so this must be done manually when removing servers.

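As a sketch of that manual cleanup (the claim name below assumes the
StatefulSet's default `data-` volume claim naming; always list first and
verify before deleting anything):

```sh
# Find the claims left behind by the server StatefulSet, then delete them
$ kubectl get pvc
$ kubectl delete pvc data-consul-server-0
```
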
### Client Agents

The client agents are run as a **DaemonSet**. This places one agent
(within its own pod) on each Kubernetes node.
The clients expose the Consul HTTP API via a static port (default 8500)
bound to the host port. This enables all other pods on the node to connect
to the node-local agent using the host IP that can be retrieved via the
Kubernetes downward API. See
[accessing the Consul HTTP API](/docs/platform/k8s/run.html#accessing-the-consul-http-api)
for an example.

There is a major limitation to this: there is no way to bind to a local-only
host port, so any other node can connect to the agent. This should be taken
into account when securing the cluster. For an agent properly secured for
production with TLS and ACLs, this is safe.

Some people prefer a **Consul agent per pod** architecture, since it makes
it easy to register the pod as a service. However, this turns
a pod into a "node" in Consul and also causes an explosion of resource usage,
since every pod needs a Consul agent. We recommend instead running an
agent (in a dedicated pod) per node, via the DaemonSet. This maintains the
node equivalence in Consul. Service registration should be handled via the
catalog syncing feature with Services rather than pods.

-> **Note:** Due to a limitation of anti-affinity rules with DaemonSets,
a client-mode agent runs alongside server-mode agents in Kubernetes. This
duplication wastes some resources, but otherwise functions perfectly fine.