# Elasticsearch Helm Chart

This chart uses a standard Docker image of Elasticsearch (docker.elastic.co/elasticsearch/elasticsearch-oss) and uses a service pointing to the master's transport port for service discovery.
Elasticsearch does not communicate with the Kubernetes API, hence there is no need for RBAC permissions.

## This Helm chart is deprecated

As mentioned in #10543, this chart has been deprecated in favour of the official [Elastic Helm Chart](https://github.com/elastic/helm-charts/tree/master/elasticsearch).
We have made steps towards that goal by producing a [migration guide](https://github.com/elastic/helm-charts/blob/master/elasticsearch/examples/migration/README.md) to help people switch the management of their clusters over to the new charts.
The Elastic Helm Chart supports versions 6 and 7 of Elasticsearch, and it was decided it would be easier for people to upgrade after migrating to the Elastic Helm Chart, because its upgrade process works better.
During the deprecation process we want to make sure that the Elastic chart can do what people are using this chart to do.
Please look at the Elastic Helm Charts, and if you see anything missing from them, please [open an issue](https://github.com/elastic/helm-charts/issues/new/choose) to let us know what you need.
The Elastic chart repo is also in [Helm Hub](https://hub.helm.sh).

## Warning for previous users

If you are currently using an earlier version of this chart you will need to redeploy your Elasticsearch clusters. The discovery method used here is incompatible with using RBAC.
If you are upgrading to Elasticsearch 6 from the 5.5 version used in this chart before, please note that your cluster needs to do a full cluster restart.
The simplest way to do that is to delete the installation (keeping the PVs) and install this chart again with the new version.
If you want to avoid doing that, upgrade to Elasticsearch 5.6 first before moving on to Elasticsearch 6.0.

## Prerequisites Details

* Kubernetes 1.10+
* PV dynamic provisioning support on the underlying infrastructure

## StatefulSets Details

* https://kubernetes.io/docs/concepts/workloads/controllers/statefulset/

## StatefulSets Caveats

* https://kubernetes.io/docs/concepts/workloads/controllers/statefulset/#limitations

## Todo

* Implement TLS/Auth/Security
* Smarter upscaling/downscaling
* Solution for memory locking

## Chart Details

This chart will do the following:

* Implement a dynamically scalable Elasticsearch cluster using Kubernetes StatefulSets/Deployments
* Multi-role deployment: master, client (coordinating) and data nodes
* StatefulSets support scaling down without degrading the cluster

## Installing the Chart

To install the chart with the release name `my-release`:

```bash
$ helm install --name my-release stable/elasticsearch
```

## Deleting the Charts

Delete the Helm deployment as normal:

```bash
$ helm delete my-release
```

Deletion of the StatefulSet doesn't cascade to deleting associated PVCs. To delete them:

```bash
$ kubectl delete pvc -l release=my-release,component=data
```
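Master nodes keep persistent volumes too (`master.persistence.enabled` defaults to `true`). Assuming their PVCs carry the same `release`/`component` labels as the data PVCs above (an assumption; verify with `kubectl get pvc --show-labels`), they can be removed the same way:

```bash
# Assumes master PVCs are labelled component=master, mirroring the data PVCs
$ kubectl delete pvc -l release=my-release,component=master
```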
## Configuration

The following table lists the configurable parameters of the elasticsearch chart and their default values.

| Parameter | Description | Default |
| --------- | ----------- | ------- |
| `appVersion` | Application version (Elasticsearch) | `6.8.2` |
| `image.repository` | Container image name | `docker.elastic.co/elasticsearch/elasticsearch-oss` |
| `image.tag` | Container image tag | `6.8.2` |
| `image.pullPolicy` | Container pull policy | `IfNotPresent` |
| `image.pullSecrets` | Container image pull secrets | `[]` |
| `initImage.repository` | Init container image name | `busybox` |
| `initImage.tag` | Init container image tag | `latest` |
| `initImage.pullPolicy` | Init container pull policy | `Always` |
| `schedulerName` | Name of the k8s scheduler (other than default) | `nil` |
| `cluster.name` | Cluster name | `elasticsearch` |
| `cluster.xpackEnable` | Writes the X-Pack configuration options to the configuration file | `false` |
| `cluster.config` | Additional cluster config appended | `{}` |
| `cluster.keystoreSecret` | Name of the secret holding secure config options in an Elasticsearch keystore | `nil` |
| `cluster.env` | Cluster environment variables | `{MINIMUM_MASTER_NODES: "2"}` |
| `cluster.bootstrapShellCommand` | Post-init command to run in a separate Job | `""` |
| `cluster.additionalJavaOpts` | Cluster parameters to be added to the `ES_JAVA_OPTS` environment variable | `""` |
| `cluster.plugins` | List of Elasticsearch plugins to install | `[]` |
| `cluster.loggingYml` | Cluster logging configuration for ES v2 | see `values.yaml` for defaults |
| `cluster.log4j2Properties` | Cluster logging configuration for ES v5 and v6 | see `values.yaml` for defaults |
| `client.name` | Client component name | `client` |
| `client.replicas` | Client node replicas (deployment) | `2` |
| `client.resources` | Client node resource requests & limits | `{}` (cpu limit must be an integer) |
| `client.priorityClassName` | Client priorityClass | `nil` |
| `client.heapSize` | Client node heap size | `512m` |
| `client.podAnnotations` | Client Deployment annotations | `{}` |
| `client.nodeSelector` | Node labels for client pod assignment | `{}` |
| `client.tolerations` | Client tolerations | `[]` |
| `client.terminationGracePeriodSeconds` | Client nodes: termination grace period (seconds) | `nil` |
| `client.serviceAnnotations` | Client Service annotations | `{}` |
| `client.serviceType` | Client service type | `ClusterIP` |
| `client.httpNodePort` | Client service HTTP NodePort port number; has no effect if `client.serviceType` is not `NodePort` | `nil` |
| `client.loadBalancerIP` | Client loadBalancerIP | `{}` |
| `client.loadBalancerSourceRanges` | Client loadBalancerSourceRanges | `{}` |
| `client.antiAffinity` | Client anti-affinity policy | `soft` |
| `client.nodeAffinity` | Client node affinity policy | `{}` |
| `client.initResources` | Client initContainer resource requests & limits | `{}` |
| `client.hooks.preStop` | Client nodes: lifecycle hook script to execute before the pod stops | `nil` |
| `client.hooks.preStart` | Client nodes: lifecycle hook script to execute after the pod starts | `nil` |
| `client.additionalJavaOpts` | Parameters to be added to the `ES_JAVA_OPTS` environment variable for client nodes | `""` |
| `client.ingress.enabled` | Enable Client Ingress | `false` |
| `client.ingress.user` | If this & password are set, enable basic-auth on the ingress | `nil` |
| `client.ingress.password` | If this & user are set, enable basic-auth on the ingress | `nil` |
| `client.ingress.annotations` | Client Ingress annotations | `{}` |
| `client.ingress.hosts` | Client Ingress hostnames | `[]` |
| `client.ingress.tls` | Client Ingress TLS configuration | `[]` |
| `client.exposeTransportPort` | Expose transport port 9300 on the client service (ClusterIP) | `false` |
| `master.initResources` | Master initContainer resource requests & limits | `{}` |
| `master.additionalJavaOpts` | Parameters to be added to the `ES_JAVA_OPTS` environment variable for master nodes | `""` |
| `master.exposeHttp` | Expose HTTP port 9200 on master pods for monitoring, etc. | `false` |
| `master.name` | Master component name | `master` |
| `master.replicas` | Master node replicas (deployment) | `2` |
| `master.resources` | Master node resource requests & limits | `{}` (cpu limit must be an integer) |
| `master.priorityClassName` | Master priorityClass | `nil` |
| `master.podAnnotations` | Master Deployment annotations | `{}` |
| `master.nodeSelector` | Node labels for master pod assignment | `{}` |
| `master.tolerations` | Master tolerations | `[]` |
| `master.terminationGracePeriodSeconds` | Master nodes: termination grace period (seconds) | `nil` |
| `master.heapSize` | Master node heap size | `512m` |
| `master.persistence.enabled` | Master persistence enabled/disabled | `true` |
| `master.persistence.name` | Master statefulset PVC template name | `data` |
| `master.persistence.size` | Master persistent volume size | `4Gi` |
| `master.persistence.storageClass` | Master persistent volume class | `nil` |
| `master.persistence.accessMode` | Master persistent access mode | `ReadWriteOnce` |
| `master.readinessProbe` | Master container readiness probes | see `values.yaml` for defaults |
| `master.antiAffinity` | Master anti-affinity policy | `soft` |
| `master.nodeAffinity` | Master node affinity policy | `{}` |
| `master.podManagementPolicy` | Master pod creation strategy | `OrderedReady` |
| `master.updateStrategy` | Master node update strategy policy | `{type: "onDelete"}` |
| `master.hooks.preStop` | Master nodes: lifecycle hook script to execute before the pod stops | `nil` |
| `master.hooks.preStart` | Master nodes: lifecycle hook script to execute after the pod starts | `nil` |
| `data.initResources` | Data initContainer resource requests & limits | `{}` |
| `data.additionalJavaOpts` | Parameters to be added to the `ES_JAVA_OPTS` environment variable for data nodes | `""` |
| `data.exposeHttp` | Expose HTTP port 9200 on data pods for monitoring, etc. | `false` |
| `data.replicas` | Data node replicas (statefulset) | `2` |
| `data.resources` | Data node resource requests & limits | `{}` (cpu limit must be an integer) |
| `data.priorityClassName` | Data priorityClass | `nil` |
| `data.heapSize` | Data node heap size | `1536m` |
| `data.hooks.drain.enabled` | Data nodes: enable drain pre-stop and post-start hooks | `true` |
| `data.hooks.preStop` | Data nodes: lifecycle hook script to execute before the pod stops; ignored if `data.hooks.drain.enabled` is `true` | `nil` |
| `data.hooks.preStart` | Data nodes: lifecycle hook script to execute after the pod starts; ignored if `data.hooks.drain.enabled` is `true` | `nil` |
| `data.persistence.enabled` | Data persistence enabled/disabled | `true` |
| `data.persistence.name` | Data statefulset PVC template name | `data` |
| `data.persistence.size` | Data persistent volume size | `30Gi` |
| `data.persistence.storageClass` | Data persistent volume class | `nil` |
| `data.persistence.accessMode` | Data persistent access mode | `ReadWriteOnce` |
| `data.readinessProbe` | Readiness probes for data containers | see `values.yaml` for defaults |
| `data.podAnnotations` | Data StatefulSet annotations | `{}` |
| `data.nodeSelector` | Node labels for data pod assignment | `{}` |
| `data.tolerations` | Data tolerations | `[]` |
| `data.terminationGracePeriodSeconds` | Data termination grace period (seconds) | `3600` |
| `data.antiAffinity` | Data anti-affinity policy | `soft` |
| `data.nodeAffinity` | Data node affinity policy | `{}` |
| `data.podManagementPolicy` | Data pod creation strategy | `OrderedReady` |
| `data.updateStrategy` | Data node update strategy policy | `{type: "onDelete"}` |
| `sysctlInitContainer.enabled` | If true, the sysctl init container is enabled (does not stop chownInitContainer or extraInitContainers from running) | `true` |
| `chownInitContainer.enabled` | If true, the chown init container is enabled (does not stop sysctlInitContainer or extraInitContainers from running) | `true` |
| `extraInitContainers` | Additional init containers, passed through the `tpl` function | `""` |
| `podSecurityPolicy.annotations` | Specify pod annotations in the pod security policy | `{}` |
| `podSecurityPolicy.enabled` | Specify whether a pod security policy must be created | `false` |
| `securityContext.enabled` | If true, add a securityContext to the client, master and data pods | `false` |
| `securityContext.runAsUser` | User ID to run the containerized process as | `1000` |
| `serviceAccounts.client.create` | If true, create the client service account | `true` |
| `serviceAccounts.client.name` | Name of the client service account to use or create | `{{ elasticsearch.client.fullname }}` |
| `serviceAccounts.master.create` | If true, create the master service account | `true` |
| `serviceAccounts.master.name` | Name of the master service account to use or create | `{{ elasticsearch.master.fullname }}` |
| `serviceAccounts.data.create` | If true, create the data service account | `true` |
| `serviceAccounts.data.name` | Name of the data service account to use or create | `{{ elasticsearch.data.fullname }}` |
| `testFramework.image` | `test-framework` image repository | `dduportal/bats` |
| `testFramework.tag` | `test-framework` image tag | `0.4.0` |
| `forceIpv6` | If set to true, force Elasticsearch to listen on an IPv6 address | `false` |

Specify each parameter using the `--set key=value[,key=value]` argument to `helm install`.
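For example, a hypothetical install that overrides the data node count and heap size (both parameter names are taken from the table above):

```bash
$ helm install --name my-release stable/elasticsearch \
    --set data.replicas=3,data.heapSize=2048m
```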
In terms of memory resources, make sure that the following inequality holds for each role:

- `${role}HeapSize < ${role}MemoryRequests < ${role}MemoryLimits`

The YAML value of `cluster.config` is appended to the `elasticsearch.yml` file for additional customization (for example, `script.inline: on` to allow inline scripting).
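As an illustrative sketch only (the numbers are examples, not chart defaults), a `values.yaml` fragment that satisfies this inequality for the data role and appends a `cluster.config` entry could look like:

```yaml
cluster:
  config:
    script.inline: "on"   # appended verbatim to elasticsearch.yml

data:
  heapSize: "1536m"       # heap size ...
  resources:
    requests:
      memory: "2Gi"       # ... < memory request ...
    limits:
      memory: "3Gi"       # ... < memory limit
      cpu: 2              # cpu limit must be an integer
```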
data | `""` | 145 | `data.exposeHttp` | Expose http port 9200 on data Pods for monitoring, etc | `false` | 146 | `data.replicas` | Data node replicas (statefulset) | `2` | 147 | `data.resources` | Data node resources requests & limits | `{} - cpu limit must be an integer` | 148 | `data.priorityClassName` | Data priorityClass | `nil` | 149 | `data.heapSize` | Data node heap size | `1536m` | 150 | `data.hooks.drain.enabled` | Data nodes: Enable drain pre-stop and post-start hook | `true` | 151 | `data.hooks.preStop` | Data nodes: Lifecycle hook script to execute prior the pod stops. Ignored if `data.hooks.drain.enabled` is `true` | `nil` | 152 | `data.hooks.preStart` | Data nodes: Lifecycle hook script to execute after the pod starts. Ignored if `data.hooks.drain.enabled` is `true` | `nil`| 153 | `data.persistence.enabled` | Data persistent enabled/disabled | `true` | 154 | `data.persistence.name` | Data statefulset PVC template name | `data` | 155 | `data.persistence.size` | Data persistent volume size | `30Gi` | 156 | `data.persistence.storageClass` | Data persistent volume Class | `nil` | 157 | `data.persistence.accessMode` | Data persistent Access Mode | `ReadWriteOnce` | 158 | `data.readinessProbe` | Readiness probes for data-containers | see `values.yaml` for defaults | 159 | `data.podAnnotations` | Data StatefulSet annotations | `{}` | 160 | `data.nodeSelector` | Node labels for data pod assignment | `{}` | 161 | `data.tolerations` | Data tolerations | `[]` | 162 | `data.terminationGracePeriodSeconds` | Data termination grace period (seconds) | `3600` | 163 | `data.antiAffinity` | Data anti-affinity policy | `soft` | 164 | `data.nodeAffinity` | Data node affinity policy | `{}` | 165 | `data.podManagementPolicy` | Data pod creation strategy | `OrderedReady` | 166 | `data.updateStrategy` | Data node update strategy policy | `{type: "onDelete"}` | 167 | `sysctlInitContainer.enabled` | If true, the sysctl init container is enabled (does not stop chownInitContainer or extraInitContainers from running) | `true` | 168 | `chownInitContainer.enabled` | If true, the chown init container is enabled (does not stop sysctlInitContainer or extraInitContainers from running) | `true` | 169 | `extraInitContainers` | Additional init container passed through the tpl | `` | 170 | `podSecurityPolicy.annotations` | Specify pod annotations in the pod security policy | `{}` | 171 | `podSecurityPolicy.enabled` | Specify if a pod security policy must be created | `false` | 172 | `securityContext.enabled` | If true, add securityContext to client, master and data pods | `false` | 173 | `securityContext.runAsUser` | user ID to run containerized process | `1000` | 174 | `serviceAccounts.client.create` | If true, create the client service account | `true` | 175 | `serviceAccounts.client.name` | Name of the client service account to use or create | `{{ elasticsearch.client.fullname }}` | 176 | `serviceAccounts.master.create` | If true, create the master service account | `true` | 177 | `serviceAccounts.master.name` | Name of the master service account to use or create | `{{ elasticsearch.master.fullname }}` | 178 | `serviceAccounts.data.create` | If true, create the data service account | `true` | 179 | `serviceAccounts.data.name` | Name of the data service account to use or create | `{{ elasticsearch.data.fullname }}` | 180 | `testFramework.image` | `test-framework` image repository. | `dduportal/bats` | 181 | `testFramework.tag` | `test-framework` image tag. 
# Client and Coordinating Nodes

Elasticsearch v5 terminology has been updated, and it now refers to a `Client Node` as a `Coordinating Node`.

More info: https://www.elastic.co/guide/en/elasticsearch/reference/5.5/modules-node.html#coordinating-node

## Enabling Elasticsearch internal monitoring

This requires version 6.3+ and the standard, non-`oss`, image repository. Starting with 6.3, X-Pack is partially free and enabled by default. You need to set a new config option to enable the collection of these internal metrics (https://www.elastic.co/guide/en/elasticsearch/reference/6.3/monitoring-settings.html).

To do this through this Helm chart, override the following three values:

```
image.repository: docker.elastic.co/elasticsearch/elasticsearch
cluster.xpackEnable: true
cluster.env.XPACK_MONITORING_ENABLED: true
```

Note: to see these changes you will also need to update your Kibana deployment to `image.repository: docker.elastic.co/kibana/kibana` instead of the `oss` version.
## Select the right storage class for SSD volumes

### GCE + Kubernetes 1.5

Create a StorageClass for SSD persistent disks (SSD-PD):

```bash
$ kubectl create -f - <<EOF
kind: StorageClass
apiVersion: storage.k8s.io/v1beta1
metadata:
  name: ssd
provisioner: kubernetes.io/gce-pd
parameters:
  type: pd-ssd
EOF
```

Then create the cluster with storage class `ssd` on Kubernetes 1.5+:

```bash
$ helm install stable/elasticsearch --name my-release --set data.persistence.storageClass=ssd,data.persistence.size=100Gi
```

### Usage of the `tpl` Function

The `tpl` function allows us to pass string values from `values.yaml` through the templating engine. It is used for the following values:

* `extraInitContainers`

It is important that these values be configured as strings. Otherwise, installation will fail.
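For instance, a minimal sketch of a string-valued `extraInitContainers` entry in `values.yaml` (the init container itself is hypothetical, and referencing `.Values` assumes the chart renders this string with the root chart context):

```yaml
# Note the "|" block scalar: extraInitContainers is a string, not a YAML list,
# and only becomes manifest YAML after being rendered through tpl.
extraInitContainers: |
  - name: wait-for-dns   # hypothetical init container
    image: "busybox:{{ .Values.initImage.tag }}"
    command: ['sh', '-c', 'until nslookup kubernetes.default; do sleep 2; done']
```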