github.com/replicatedhq/ship@v0.55.0/integration/base/shipapp-helm-values/expected/installer/consul/README.md

# Consul Helm Chart

## Prerequisites Details
* Kubernetes 1.6+
* PV support on underlying infrastructure

## StatefulSet Details
* http://kubernetes.io/docs/concepts/abstractions/controllers/statefulsets/

## Chart Details
This chart will do the following:

* Implement a dynamically scalable Consul cluster using a Kubernetes StatefulSet

## Installing the Chart

To install the chart with the release name `my-release`:

```bash
$ helm install --name my-release stable/consul
```

## Configuration

The following table lists the configurable parameters of the consul chart and their default values.

| Parameter               | Description                           | Default                      |
| ----------------------- | ------------------------------------- | ---------------------------- |
| `Name`                  | Consul StatefulSet name               | `consul`                     |
| `Image`                 | Container image name                  | `consul`                     |
| `ImageTag`              | Container image tag                   | `1.0.0`                      |
| `ImagePullPolicy`       | Container pull policy                 | `Always`                     |
| `Replicas`              | k8s StatefulSet replicas              | `3`                          |
| `Component`             | k8s selector key                      | `consul`                     |
| `ConsulConfig`          | List of secrets and configMaps containing Consul configuration | `[]`    |
| `Cpu`                   | Container requested CPU               | `100m`                       |
| `DatacenterName`        | Consul datacenter name                | `dc1` (the Consul default)   |
| `DisableHostNodeId`     | Disable host node ID creation (uses a random ID) | `false`           |
| `EncryptGossip`         | Whether or not gossip is encrypted    | `true`                       |
| `GossipKey`             | Gossip key to be used by all members  | `nil`                        |
| `Storage`               | Persistent volume size                | `1Gi`                        |
| `StorageClass`          | Persistent volume storage class       | `nil`                        |
| `HttpPort`              | Consul HTTP listening port            | `8500`                       |
| `Resources`             | Container resource requests and limits| `{}`                         |
| `priorityClassName`     | Pod priorityClassName                 | `nil`                        |
| `RpcPort`               | Consul RPC listening port             | `8400`                       |
| `SerflanPort`           | Container Serf LAN listening port     | `8301`                       |
| `SerflanUdpPort`        | Container Serf LAN UDP listening port | `8301`                       |
| `SerfwanPort`           | Container Serf WAN listening port     | `8302`                       |
| `SerfwanUdpPort`        | Container Serf WAN UDP listening port | `8302`                       |
| `ServerPort`            | Container server listening port       | `8300`                       |
| `ConsulDnsPort`         | Container DNS listening port          | `8600`                       |
| `affinity`              | Consul affinity settings              | see `values.yaml`            |
| `nodeSelector`          | Node labels for pod assignment        | `{}`                         |
| `tolerations`           | Tolerations for pod assignment        | `[]`                         |
| `maxUnavailable`        | Pod Disruption Budget maxUnavailable  | `1`                          |
| `ui.enabled`            | Enable the Consul Web UI              | `true`                       |
| `uiIngress.enabled`     | Create an Ingress for the Consul Web UI | `false`                    |
| `uiIngress.annotations` | Associate annotations to the Ingress  | `{}`                         |
| `uiIngress.labels`      | Associate labels to the Ingress       | `{}`                         |
| `uiIngress.hosts`       | Associate hosts with the Ingress      | `[]`                         |
| `uiIngress.tls`         | Associate TLS with the Ingress        | `{}`                         |
| `uiService.enabled`     | Create a dedicated Consul Web UI service | `true`                    |
| `uiService.type`        | Dedicated Consul Web UI service type  | `NodePort`                   |
| `uiService.annotations` | Extra annotations for the UI service  | `{}`                         |
| `acl.enabled`           | Enable basic ACL configuration        | `false`                      |
| `acl.masterToken`       | Master token provided in the Consul ACL config file | `""`           |
| `acl.agentToken`        | Agent token provided in the Consul ACL config file | `""`            |
| `test.image`            | Test container image; requires kubectl + bash (used for `helm test`) | `lachlanevenson/k8s-kubectl` |
| `test.imageTag`         | Test container image tag (used for `helm test`) | `v1.4.8-bash`      |
| `test.rbac.create`      | Create RBAC for the test container    | `false`                      |
| `test.rbac.serviceAccountName` | Name of an existing service account for the test container | `""` |

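If `EncryptGossip` is enabled (the default), `GossipKey` must be a base64-encoded random key, which is what `consul keygen` produces. As a sketch, an equivalent 16-byte key can be generated without a local consul binary (this matches `consul keygen` on older Consul releases; newer versions also accept 32-byte keys):

```shell
# Generate a base64-encoded 16-byte gossip key, equivalent to the
# output of `consul keygen` on older Consul releases.
head -c 16 /dev/urandom | base64
```
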
Specify each parameter using the `--set key=value[,key=value]` argument to `helm install`.
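
For instance (a sketch: `my-release` and the chosen values are illustrative placeholders; the parameter names come from the table above):

```shell
# Override the replica count and CPU request at install time.
helm install --name my-release \
  --set Replicas=5,Cpu=200m \
  stable/consul
```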

Alternatively, a YAML file that specifies the values for the parameters can be provided while installing the chart. For example:

```bash
$ helm install --name my-release -f values.yaml stable/consul
```
> **Tip**: `ConsulConfig` cannot be set using `--set`, since `--set` currently cannot express a list of hashes; use a YAML file instead.

> **Tip**: You can use the default [values.yaml](values.yaml)

## Further consul configuration

More detailed or complex configuration options can be passed in using `secret`s or `configMap`s. As an example, here is what a `values.yaml` could look like:
```yaml
ConsulConfig:
  - type: configMap
    name: consul-defaults
  - type: secret
    name: consul-secrets
```

> These are both mounted as files in the consul pods, including the secrets. When they are changed, the cluster may need to be restarted.

> **Important**: Kubernetes does not allow the volumes of a StatefulSet to be changed. If a new item needs to be added to this list, the StatefulSet must be deleted and re-created. The contents of each item can change, and will be picked up when the containers next read their configuration (reload/restart).

This requires the `consul-defaults` `configMap` and the `consul-secrets` `secret` to exist in the same `namespace`. From the Consul perspective there is no difference: one could use only `secret`s, only `configMap`s, or neither. Each can contain multiple Consul configuration files (every `JSON` file contained in them is interpreted as one). Configuration is loaded in the order given in the `ConsulConfig` setting (later entries override earlier ones). If an item contains multiple files, the order between those files is decided by Consul (as per the [--config-dir](https://www.consul.io/docs/agent/options.html#_config_dir) argument to the Consul agent), but the order in `ConsulConfig` is still respected. The configuration generated by Helm (this chart) is loaded last, and therefore overrides the configuration set here.
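
As a sketch, the two referenced objects could be created from local JSON files before installing the chart; the object names match the example above, while the file names here are hypothetical:

```shell
# Create the configMap and secret referenced by ConsulConfig.
# Every JSON file in each object is read as one Consul configuration file.
kubectl create configmap consul-defaults --from-file=defaults.json
kubectl create secret generic consul-secrets --from-file=acl.json
```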

## Cleanup orphaned Persistent Volumes

Deleting a StatefulSet will not delete its associated Persistent Volumes.

Do the following after deleting the chart release to clean up orphaned Persistent Volumes.

```bash
$ kubectl delete pvc -l component=${RELEASE_NAME}-consul
```

## Pitfalls

* When ACLs are enabled and `acl_default_policy` is set to `deny`, it is necessary to set `acl_token` to a token that can at least perform the `consul members` command; otherwise the Kubernetes liveness probe will keep failing and the containers will be killed every 5 minutes.
  * Basic ACL configuration can be done by setting `acl.enabled` to `true` and providing values for `acl.masterToken` and `acl.agentToken`.
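
Assuming pre-generated token values (the UUIDs below are placeholders, not real tokens), this could look like:

```shell
# Enable the chart's basic ACL configuration at install time.
# Generate real token UUIDs for production use (e.g. with `uuidgen`).
helm install --name my-release \
  --set acl.enabled=true \
  --set acl.masterToken=11111111-2222-3333-4444-555555555555 \
  --set acl.agentToken=66666666-7777-8888-9999-000000000000 \
  stable/consul
```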

## Testing

Helm tests are included; they confirm that the first three cluster members are up and have quorum, i.e. that at least 3 Consul servers are present.

```bash
helm test <RELEASE_NAME>
RUNNING: inky-marsupial-ui-test-nn6lv
PASSED: inky-marsupial-ui-test-nn6lv
```

## Cluster Health

```
$ for i in <0..n>; do kubectl exec <consul-$i> -- sh -c 'consul members'; done
```

For example:

```
$ for i in {0..2}; do kubectl exec consul-$i --namespace=consul -- sh -c 'consul members'; done
Node      Address          Status  Type    Build  Protocol  DC
consul-0  10.244.2.6:8301  alive   server  0.6.4  2         dc1
consul-1  10.244.3.8:8301  alive   server  0.6.4  2         dc1
consul-2  10.244.1.7:8301  alive   server  0.6.4  2         dc1
Node      Address          Status  Type    Build  Protocol  DC
consul-0  10.244.2.6:8301  alive   server  0.6.4  2         dc1
consul-1  10.244.3.8:8301  alive   server  0.6.4  2         dc1
consul-2  10.244.1.7:8301  alive   server  0.6.4  2         dc1
Node      Address          Status  Type    Build  Protocol  DC
consul-0  10.244.2.6:8301  alive   server  0.6.4  2         dc1
consul-1  10.244.3.8:8301  alive   server  0.6.4  2         dc1
consul-2  10.244.1.7:8301  alive   server  0.6.4  2         dc1
cluster is healthy
```

## Failover

If any Consul member fails, it eventually gets re-joined to the cluster.
You can test this scenario by killing the Consul process in one of the pods:

```shell
$ ps aux | grep consul
$ kill CONSUL_PID
```

```
$ kubectl logs consul-0 --namespace=consul
Waiting for consul-0.consul to come up
Waiting for consul-1.consul to come up
Waiting for consul-2.consul to come up
==> WARNING: Expect Mode enabled, expecting 3 servers
==> Starting Consul agent...
==> Starting Consul agent RPC...
==> Consul agent running!
         Node name: 'consul-0'
        Datacenter: 'dc1'
            Server: true (bootstrap: false)
       Client Addr: 0.0.0.0 (HTTP: 8500, HTTPS: -1, DNS: 8600, RPC: 8400)
      Cluster Addr: 10.244.2.6 (LAN: 8301, WAN: 8302)
    Gossip encrypt: false, RPC-TLS: false, TLS-Incoming: false
             Atlas: <disabled>

==> Log data will now stream in as it occurs:

    2016/08/18 19:20:35 [INFO] serf: EventMemberJoin: consul-0 10.244.2.6
    2016/08/18 19:20:35 [INFO] serf: EventMemberJoin: consul-0.dc1 10.244.2.6
    2016/08/18 19:20:35 [INFO] raft: Node at 10.244.2.6:8300 [Follower] entering Follower state
    2016/08/18 19:20:35 [INFO] serf: Attempting re-join to previously known node: consul-1: 10.244.3.8:8301
    2016/08/18 19:20:35 [INFO] consul: adding LAN server consul-0 (Addr: 10.244.2.6:8300) (DC: dc1)
    2016/08/18 19:20:35 [WARN] serf: Failed to re-join any previously known node
    2016/08/18 19:20:35 [INFO] consul: adding WAN server consul-0.dc1 (Addr: 10.244.2.6:8300) (DC: dc1)
    2016/08/18 19:20:35 [ERR] agent: failed to sync remote state: No cluster leader
    2016/08/18 19:20:35 [INFO] agent: Joining cluster...
    2016/08/18 19:20:35 [INFO] agent: (LAN) joining: [10.244.2.6 10.244.3.8 10.244.1.7]
    2016/08/18 19:20:35 [INFO] serf: EventMemberJoin: consul-1 10.244.3.8
    2016/08/18 19:20:35 [WARN] memberlist: Refuting an alive message
    2016/08/18 19:20:35 [INFO] serf: EventMemberJoin: consul-2 10.244.1.7
    2016/08/18 19:20:35 [INFO] serf: Re-joined to previously known node: consul-1: 10.244.3.8:8301
    2016/08/18 19:20:35 [INFO] consul: adding LAN server consul-1 (Addr: 10.244.3.8:8300) (DC: dc1)
    2016/08/18 19:20:35 [INFO] consul: adding LAN server consul-2 (Addr: 10.244.1.7:8300) (DC: dc1)
    2016/08/18 19:20:35 [INFO] agent: (LAN) joined: 3 Err: <nil>
    2016/08/18 19:20:35 [INFO] agent: Join completed. Synced with 3 initial agents
    2016/08/18 19:20:51 [INFO] agent.rpc: Accepted client: 127.0.0.1:36302
    2016/08/18 19:20:59 [INFO] agent.rpc: Accepted client: 127.0.0.1:36313
    2016/08/18 19:21:01 [INFO] agent: Synced node info
```

## Scaling using kubectl

The consul cluster can be scaled up by running `kubectl patch` or `kubectl edit`. For example:

```
$ kubectl get pods -l "component=${RELEASE_NAME}-consul" --namespace=consul
NAME       READY     STATUS    RESTARTS   AGE
consul-0   1/1       Running   1          4h
consul-1   1/1       Running   0          4h
consul-2   1/1       Running   0          4h

$ kubectl patch statefulset/consul -p '{"spec":{"replicas": 5}}'
"consul" patched

$ kubectl get pods -l "component=${RELEASE_NAME}-consul" --namespace=consul
NAME       READY     STATUS    RESTARTS   AGE
consul-0   1/1       Running   1          4h
consul-1   1/1       Running   0          4h
consul-2   1/1       Running   0          4h
consul-3   1/1       Running   0          41s
consul-4   1/1       Running   0          23s

$ for i in {0..4}; do kubectl exec consul-$i --namespace=consul -- sh -c 'consul members'; done
Node      Address          Status  Type    Build  Protocol  DC
consul-0  10.244.2.6:8301  alive   server  0.6.4  2         dc1
consul-1  10.244.3.8:8301  alive   server  0.6.4  2         dc1
consul-2  10.244.1.7:8301  alive   server  0.6.4  2         dc1
consul-3  10.244.2.7:8301  alive   server  0.6.4  2         dc1
consul-4  10.244.2.8:8301  alive   server  0.6.4  2         dc1
Node      Address          Status  Type    Build  Protocol  DC
consul-0  10.244.2.6:8301  alive   server  0.6.4  2         dc1
consul-1  10.244.3.8:8301  alive   server  0.6.4  2         dc1
consul-2  10.244.1.7:8301  alive   server  0.6.4  2         dc1
consul-3  10.244.2.7:8301  alive   server  0.6.4  2         dc1
consul-4  10.244.2.8:8301  alive   server  0.6.4  2         dc1
Node      Address          Status  Type    Build  Protocol  DC
consul-0  10.244.2.6:8301  alive   server  0.6.4  2         dc1
consul-1  10.244.3.8:8301  alive   server  0.6.4  2         dc1
consul-2  10.244.1.7:8301  alive   server  0.6.4  2         dc1
consul-3  10.244.2.7:8301  alive   server  0.6.4  2         dc1
consul-4  10.244.2.8:8301  alive   server  0.6.4  2         dc1
Node      Address          Status  Type    Build  Protocol  DC
consul-0  10.244.2.6:8301  alive   server  0.6.4  2         dc1
consul-1  10.244.3.8:8301  alive   server  0.6.4  2         dc1
consul-2  10.244.1.7:8301  alive   server  0.6.4  2         dc1
consul-3  10.244.2.7:8301  alive   server  0.6.4  2         dc1
consul-4  10.244.2.8:8301  alive   server  0.6.4  2         dc1
Node      Address          Status  Type    Build  Protocol  DC
consul-0  10.244.2.6:8301  alive   server  0.6.4  2         dc1
consul-1  10.244.3.8:8301  alive   server  0.6.4  2         dc1
consul-2  10.244.1.7:8301  alive   server  0.6.4  2         dc1
consul-3  10.244.2.7:8301  alive   server  0.6.4  2         dc1
consul-4  10.244.2.8:8301  alive   server  0.6.4  2         dc1
```

Scale down:
```
$ kubectl patch statefulset/consul -p '{"spec":{"replicas": 3}}' --namespace=consul
"consul" patched
$ kubectl get pods -l "component=${RELEASE_NAME}-consul" --namespace=consul
NAME       READY     STATUS    RESTARTS   AGE
consul-0   1/1       Running   1          4h
consul-1   1/1       Running   0          4h
consul-2   1/1       Running   0          4h
$ for i in {0..2}; do kubectl exec consul-$i --namespace=consul -- sh -c 'consul members'; done
Node      Address          Status  Type    Build  Protocol  DC
consul-0  10.244.2.6:8301  alive   server  0.6.4  2         dc1
consul-1  10.244.3.8:8301  alive   server  0.6.4  2         dc1
consul-2  10.244.1.7:8301  alive   server  0.6.4  2         dc1
consul-3  10.244.2.7:8301  failed  server  0.6.4  2         dc1
consul-4  10.244.2.8:8301  failed  server  0.6.4  2         dc1
Node      Address          Status  Type    Build  Protocol  DC
consul-0  10.244.2.6:8301  alive   server  0.6.4  2         dc1
consul-1  10.244.3.8:8301  alive   server  0.6.4  2         dc1
consul-2  10.244.1.7:8301  alive   server  0.6.4  2         dc1
consul-3  10.244.2.7:8301  failed  server  0.6.4  2         dc1
consul-4  10.244.2.8:8301  failed  server  0.6.4  2         dc1
Node      Address          Status  Type    Build  Protocol  DC
consul-0  10.244.2.6:8301  alive   server  0.6.4  2         dc1
consul-1  10.244.3.8:8301  alive   server  0.6.4  2         dc1
consul-2  10.244.1.7:8301  alive   server  0.6.4  2         dc1
consul-3  10.244.2.7:8301  failed  server  0.6.4  2         dc1
consul-4  10.244.2.8:8301  failed  server  0.6.4  2         dc1
```