# Installing Helm

There are two parts to Helm: The Helm client (`helm`) and the Helm
server (Tiller). This guide shows how to install the client, and then
proceeds to show two ways to install the server.

## Installing the Helm Client

The Helm client can be installed either from source, or from pre-built binary
releases.

### From the Binary Releases

Every [release](https://github.com/kubernetes/helm/releases) of Helm
provides binary releases for a variety of OSes. These binary versions
can be manually downloaded and installed.

1. Download your [desired version](https://github.com/kubernetes/helm/releases)
2. Unpack it (`tar -zxvf helm-v2.0.0-linux-amd64.tgz`)
3. Find the `helm` binary in the unpacked directory, and move it to its
   desired destination (`mv linux-amd64/helm /usr/local/bin/helm`)

From there, you should be able to run the client: `helm help`.

### From Homebrew (macOS)

Members of the Kubernetes community have contributed a Helm formula build to
Homebrew. This formula is generally up to date.

```
brew install kubernetes-helm
```

(Note: There is also a formula for emacs-helm, which is a different
project.)

### From Script

Helm now has an installer script that will automatically grab the latest version
of the Helm client and [install it locally](https://raw.githubusercontent.com/kubernetes/helm/master/scripts/get).

You can fetch that script, and then execute it locally. It's well documented so
that you can read through it and understand what it is doing before you run it.

```
$ curl https://raw.githubusercontent.com/kubernetes/helm/master/scripts/get > get_helm.sh
$ chmod 700 get_helm.sh
$ ./get_helm.sh
```

Yes, you can `curl https://raw.githubusercontent.com/kubernetes/helm/master/scripts/get | bash` that if you want to live on the edge.

### From Canary Builds

"Canary" builds are versions of the Helm software that are built from
the latest master branch. They are not official releases, and may not be
stable. However, they offer the opportunity to test the cutting edge
features.

Canary Helm binaries are stored in the [Kubernetes Helm GCS bucket](https://kubernetes-helm.storage.googleapis.com).
Here are links to the common builds:

- [Linux AMD64](https://kubernetes-helm.storage.googleapis.com/helm-canary-linux-amd64.tar.gz)
- [macOS AMD64](https://kubernetes-helm.storage.googleapis.com/helm-canary-darwin-amd64.tar.gz)
- [Experimental Windows AMD64](https://kubernetes-helm.storage.googleapis.com/helm-canary-windows-amd64.zip)

### From Source (Linux, macOS)

Building Helm from source is slightly more work, but is the best way to
go if you want to test the latest (pre-release) Helm version.

You must have a working Go environment with
[glide](https://github.com/Masterminds/glide) and Mercurial installed.

```console
$ cd $GOPATH
$ mkdir -p src/k8s.io
$ cd src/k8s.io
$ git clone https://github.com/kubernetes/helm.git
$ cd helm
$ make bootstrap build
```

The `bootstrap` target will attempt to install dependencies, rebuild the
`vendor/` tree, and validate configuration.

The `build` target will compile `helm` and place it in `bin/helm`.
Tiller is also compiled, and is placed in `bin/tiller`.
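
If you want to sanity-check the freshly built binaries before copying them
anywhere, something like the following should work (paths are relative to the
source checkout above, and the install location is only an example):

```console
$ bin/helm version --client         # print the client version; Tiller is not contacted
$ cp bin/helm /usr/local/bin/helm   # optional: put the client on your PATH
```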

## Installing Tiller

Tiller, the server portion of Helm, typically runs inside of your
Kubernetes cluster. But for development, it can also be run locally, and
configured to talk to a remote Kubernetes cluster.

### Easy In-Cluster Installation

The easiest way to install `tiller` into the cluster is simply to run
`helm init`. This will validate that `helm`'s local environment is set
up correctly (and set it up if necessary). Then it will connect to
whatever cluster `kubectl` connects to by default (`kubectl config
view`). Once it connects, it will install `tiller` into the
`kube-system` namespace.

After `helm init`, you should be able to run `kubectl get pods --namespace
kube-system` and see Tiller running.

You can explicitly tell `helm init` to...

- Install the canary build with the `--canary-image` flag
- Install a particular image (version) with `--tiller-image`
- Install to a particular cluster with `--kube-context`
- Install into a particular namespace with `--tiller-namespace`

Once Tiller is installed, running `helm version` should show you both
the client and server version. (If it shows only the client version,
`helm` cannot yet connect to the server. Use `kubectl` to see if any
`tiller` pods are running.)

Helm will look for Tiller in the `kube-system` namespace unless
`--tiller-namespace` or `TILLER_NAMESPACE` is set.

### Installing Tiller Canary Builds

Canary images are built from the `master` branch. They may not be
stable, but they offer you the chance to test out the latest features.

The easiest way to install a canary image is to use `helm init` with the
`--canary-image` flag:

```console
$ helm init --canary-image
```

This will use the most recently built container image. You can always
uninstall Tiller by deleting the Tiller deployment from the
`kube-system` namespace using `kubectl`.

### Running Tiller Locally

For development, it is sometimes easier to work on Tiller locally, and
configure it to connect to a remote Kubernetes cluster.

The process of building Tiller is explained above.

Once `tiller` has been built, simply start it:

```console
$ bin/tiller
Tiller running on :44134
```

When Tiller is running locally, it will attempt to connect to the
Kubernetes cluster that is configured by `kubectl`. (Run `kubectl config
view` to see which cluster that is.)

You must tell `helm` to connect to this new local Tiller host instead of
connecting to the one in-cluster. There are two ways to do this. The
first is to specify the `--host` option on the command line. The second
is to set the `$HELM_HOST` environment variable.

```console
$ export HELM_HOST=localhost:44134
$ helm version # Should connect to localhost.
Client: &version.Version{SemVer:"v2.0.0-alpha.4", GitCommit:"db...", GitTreeState:"dirty"}
Server: &version.Version{SemVer:"v2.0.0-alpha.4", GitCommit:"a5...", GitTreeState:"dirty"}
```

Importantly, even when running locally, Tiller will store release
configuration in ConfigMaps inside of Kubernetes.

## Upgrading Tiller

As of Helm 2.2.0, Tiller can be upgraded using `helm init --upgrade`.
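
For example, a typical upgrade from the client side looks roughly like this
(the second command simply confirms that the client and server versions now
line up):

```console
$ helm init --upgrade   # upgrade the in-cluster Tiller to match this client
$ helm version          # Client and Server should now report matching versions
```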

For older versions of Helm, or for manual upgrades, you can use `kubectl` to modify
the Tiller image:

```console
$ export TILLER_TAG=v2.0.0-beta.1 # Or whatever version you want
$ kubectl --namespace=kube-system set image deployments/tiller-deploy tiller=gcr.io/kubernetes-helm/tiller:$TILLER_TAG
deployment "tiller-deploy" image updated
```

Setting `TILLER_TAG=canary` will get the latest snapshot of master.

## Deleting or Reinstalling Tiller

Because Tiller stores its data in Kubernetes ConfigMaps, you can safely
delete and re-install Tiller without worrying about losing any data. The
recommended way of deleting Tiller is with `kubectl delete deployment
tiller-deploy --namespace kube-system`, or more concisely `helm reset`.

Tiller can then be re-installed from the client with:

```console
$ helm init
```

## Advanced Usage

`helm init` provides additional flags for modifying Tiller's deployment
manifest before it is installed.

### Using `--node-selectors`

The `--node-selectors` flag allows us to specify the node labels required
for scheduling the Tiller pod.

The example below will create the specified label under the nodeSelector
property.

```
helm init --node-selectors "beta.kubernetes.io/os"="linux"
```

The installed deployment manifest will contain our node selector label.

```
...
spec:
  template:
    spec:
      nodeSelector:
        beta.kubernetes.io/os: linux
...
```

### Using `--override`

`--override` allows you to specify properties of Tiller's
deployment manifest. Unlike the `--set` command used elsewhere in Helm,
`helm init --override` manipulates the specified properties of the final
manifest (there is no "values" file). Therefore you may specify any valid
value for any valid property in the deployment manifest.

#### Override annotation

In the example below we use `--override` to add the revision property and set
its value to 1.

```
helm init --override metadata.annotations."deployment\.kubernetes\.io/revision"="1"
```

Output:

```
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  annotations:
    deployment.kubernetes.io/revision: "1"
...
```

#### Override affinity

In the example below we set properties for node affinity. Multiple
`--override` commands may be combined to modify different properties of the
same list item.

```
helm init --override "spec.template.spec.affinity.nodeAffinity.preferredDuringSchedulingIgnoredDuringExecution[0].weight"="1" --override "spec.template.spec.affinity.nodeAffinity.preferredDuringSchedulingIgnoredDuringExecution[0].preference.matchExpressions[0].key"="e2e-az-name"
```

The specified properties are combined into the
"preferredDuringSchedulingIgnoredDuringExecution" property's first
list item.

```
...
spec:
  strategy: {}
  template:
    ...
    spec:
      affinity:
        nodeAffinity:
          preferredDuringSchedulingIgnoredDuringExecution:
          - preference:
              matchExpressions:
              - key: e2e-az-name
                operator: ""
            weight: 1
...
```

### Using `--output`

The `--output` flag allows us to skip the installation of Tiller's deployment
manifest and simply output the deployment manifest to stdout in either
JSON or YAML format. The output may then be modified with tools like `jq`
and installed manually with `kubectl`.
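
As a rough sketch of that workflow, the pipeline below renders the manifest,
uses `jq` to inject the node selector label from the earlier example, and
feeds the result straight to `kubectl`; the particular jq filter and the use
of `kubectl apply -f -` are illustrative rather than the only way to do this:

```console
$ helm init --output json \
    | jq '.spec.template.spec.nodeSelector = {"beta.kubernetes.io/os": "linux"}' \
    | kubectl apply -f -
```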

In the example below we execute `helm init` with the `--output json` flag.

```
helm init --output json
```

The Tiller installation is skipped and the manifest is output to stdout
in JSON format.

```
"apiVersion": "extensions/v1beta1",
"kind": "Deployment",
"metadata": {
    "creationTimestamp": null,
    "labels": {
        "app": "helm",
        "name": "tiller"
    },
    "name": "tiller-deploy",
    "namespace": "kube-system"
},
...
```

## Conclusion

In most cases, installation is as simple as getting a pre-built `helm` binary
and running `helm init`. This document covers additional cases for those
who want to do more sophisticated things with Helm.

Once you have the Helm Client and Tiller successfully installed, you can
move on to using Helm to manage charts.