
# Installing Helm

There are two parts to Helm: The Helm client (`helm`) and the Helm
server (Tiller). This guide shows how to install the client, and then
proceeds to show two ways to install the server.

**IMPORTANT**: If you are responsible for ensuring your cluster is a controlled environment, especially when resources are shared, it is strongly recommended that you install Tiller using a secured configuration. For guidance, see [Securing your Helm Installation](securing_installation.md).

## Installing the Helm Client

The Helm client can be installed either from source, or from pre-built binary
releases.

### From the Binary Releases

Every [release](https://github.com/helm/helm/releases) of Helm
provides binary releases for a variety of OSes. These binary versions
can be manually downloaded and installed.

1. Download your [desired version](https://github.com/helm/helm/releases)
2. Unpack it (`tar -zxvf helm-v2.0.0-linux-amd64.tgz`)
3. Find the `helm` binary in the unpacked directory, and move it to its
   desired destination (`mv linux-amd64/helm /usr/local/bin/helm`)

From there, you should be able to run the client: `helm help`.
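
For example, on Linux the unpack-and-move steps look like this (the version in the filename is only an illustration, and you may need `sudo` for the move):

```console
$ tar -zxvf helm-v2.12.1-linux-amd64.tgz
$ sudo mv linux-amd64/helm /usr/local/bin/helm
$ helm help
```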

### From Snap (Linux)

The Snap package for Helm is maintained by
[Snapcrafters](https://github.com/snapcrafters/helm).

```
$ sudo snap install helm --classic
```

### From Homebrew (macOS)

Members of the Kubernetes community have contributed a Helm formula build to
Homebrew. This formula is generally up to date.

```
brew install kubernetes-helm
```

(Note: There is also a formula for emacs-helm, which is a different
project.)

### From Chocolatey (Windows)

Members of the Kubernetes community have contributed a [Helm package](https://chocolatey.org/packages/kubernetes-helm) build to
[Chocolatey](https://chocolatey.org/). This package is generally up to date.

```
choco install kubernetes-helm
```

### From Script

Helm now has an installer script that will automatically grab the latest version
of the Helm client and [install it locally](https://raw.githubusercontent.com/helm/helm/master/scripts/get).

You can fetch that script, and then execute it locally. It's well documented so
that you can read through it and understand what it is doing before you run it.

```
$ curl https://raw.githubusercontent.com/helm/helm/master/scripts/get > get_helm.sh
$ chmod 700 get_helm.sh
$ ./get_helm.sh
```

Yes, you can `curl https://raw.githubusercontent.com/helm/helm/master/scripts/get | bash` that if you want to live on the edge.
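
After the script completes, you can verify the client without contacting a cluster (the output shown here is illustrative):

```console
$ helm version --client
Client: &version.Version{SemVer:"v2.12.1", GitCommit:"...", GitTreeState:"clean"}
```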

### From Canary Builds

"Canary" builds are versions of the Helm software that are built from
the latest master branch. They are not official releases, and may not be
stable. However, they offer the opportunity to test cutting-edge
features.

Canary Helm binaries are stored in the [Kubernetes Helm GCS bucket](https://kubernetes-helm.storage.googleapis.com).
Here are links to the common builds:

- [Linux AMD64](https://kubernetes-helm.storage.googleapis.com/helm-canary-linux-amd64.tar.gz)
- [macOS AMD64](https://kubernetes-helm.storage.googleapis.com/helm-canary-darwin-amd64.tar.gz)
- [Experimental Windows AMD64](https://kubernetes-helm.storage.googleapis.com/helm-canary-windows-amd64.zip)

### From Source (Linux, macOS)

Building Helm from source is slightly more work, but is the best way to
go if you want to test the latest (pre-release) Helm version.

You must have a working Go environment with
[glide](https://github.com/Masterminds/glide) installed.

```console
$ cd $GOPATH
$ mkdir -p src/k8s.io
$ cd src/k8s.io
$ git clone https://github.com/helm/helm.git
$ cd helm
$ make bootstrap build
```

The `bootstrap` target will attempt to install dependencies, rebuild the
`vendor/` tree, and validate configuration.

The `build` target will compile `helm` and place it in `bin/helm`.
Tiller is also compiled, and is placed in `bin/tiller`.

## Installing Tiller

Tiller, the server portion of Helm, typically runs inside of your
Kubernetes cluster. But for development, it can also be run locally, and
configured to talk to a remote Kubernetes cluster.

### Special Note for RBAC Users

Most cloud providers enable a feature called Role-Based Access Control (RBAC). If your cloud provider enables this feature, you will need to create a service account for Tiller with the right roles and permissions to access resources.

Check the [Kubernetes Distribution Guide](kubernetes_distros.md) to see if there are any further points of interest on using Helm with your cloud provider. Also check out the guide on [Tiller and Role-Based Access Control](rbac.md) for more information on how to run Tiller in an RBAC-enabled Kubernetes cluster.
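
As a minimal sketch (covered in more depth in the RBAC guide), one common pattern is to create a dedicated service account, grant it a role, and then point `helm init` at it. The account and binding names below are just examples, and binding `cluster-admin` is deliberately broad:

```console
$ kubectl create serviceaccount tiller --namespace kube-system
$ kubectl create clusterrolebinding tiller-cluster-rule \
    --clusterrole=cluster-admin --serviceaccount=kube-system:tiller
$ helm init --service-account tiller
```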

### Easy In-Cluster Installation

The easiest way to install `tiller` into the cluster is simply to run
`helm init`. This will validate that `helm`'s local environment is set
up correctly (and set it up if necessary). Then it will connect to
whatever cluster `kubectl` connects to by default (`kubectl config
view`). Once it connects, it will install `tiller` into the
`kube-system` namespace.

After `helm init`, you should be able to run `kubectl get pods --namespace
kube-system` and see Tiller running.
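
For example (the pod name, age, and other `kube-system` pods are omitted or illustrative here):

```console
$ helm init
$ kubectl get pods --namespace kube-system
NAME                             READY     STATUS    RESTARTS   AGE
tiller-deploy-6fd8d857bc-7s9xn   1/1       Running   0          1m
```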

You can explicitly tell `helm init` to...

- Install the canary build with the `--canary-image` flag
- Install a particular image (version) with `--tiller-image`
- Install to a particular cluster with `--kube-context`
- Install into a particular namespace with `--tiller-namespace`
- Install Tiller with a Service Account with `--service-account` (for [RBAC enabled clusters](securing_installation.md#rbac))
- Install Tiller without mounting a service account with `--automount-service-account false`
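
For instance, several of these flags can be combined in a single invocation (the context and namespace names here are placeholders):

```console
$ helm init --kube-context my-cluster --tiller-namespace tiller-world
```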

Once Tiller is installed, running `helm version` should show you both
the client and server version. (If it shows only the client version,
`helm` cannot yet connect to the server. Use `kubectl` to see if any
`tiller` pods are running.)

Helm will look for Tiller in the `kube-system` namespace unless
`--tiller-namespace` or `TILLER_NAMESPACE` is set.

### Installing Tiller Canary Builds

Canary images are built from the `master` branch. They may not be
stable, but they offer you the chance to test out the latest features.

The easiest way to install a canary image is to use `helm init` with the
`--canary-image` flag:

```console
$ helm init --canary-image
```

This will use the most recently built container image. You can always
uninstall Tiller by deleting the Tiller deployment from the
`kube-system` namespace using `kubectl`.
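
For example, assuming the default deployment name and namespace:

```console
$ kubectl delete deployment tiller-deploy --namespace kube-system
```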

### Running Tiller Locally

For development, it is sometimes easier to work on Tiller locally, and
configure it to connect to a remote Kubernetes cluster.

The process of building Tiller is explained above.

Once `tiller` has been built, simply start it:

```console
$ bin/tiller
Tiller running on :44134
```

When Tiller is running locally, it will attempt to connect to the
Kubernetes cluster that is configured by `kubectl`. (Run `kubectl config
view` to see which cluster that is.)

You must tell `helm` to connect to this new local Tiller host instead of
connecting to the one in-cluster. There are two ways to do this. The
first is to specify the `--host` option on the command line. The second
is to set the `$HELM_HOST` environment variable.

```console
$ export HELM_HOST=localhost:44134
$ helm version # Should connect to localhost.
Client: &version.Version{SemVer:"v2.0.0-alpha.4", GitCommit:"db...", GitTreeState:"dirty"}
Server: &version.Version{SemVer:"v2.0.0-alpha.4", GitCommit:"a5...", GitTreeState:"dirty"}
```
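
Alternatively, the same connection can be made per command with the `--host` flag mentioned above:

```console
$ helm version --host localhost:44134
```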

Importantly, even when running locally, Tiller will store release
configuration in ConfigMaps inside of Kubernetes.
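
You can list those ConfigMaps with `kubectl`. This sketch assumes Tiller's default `kube-system` namespace and the `OWNER=TILLER` label that Tiller applies to its release ConfigMaps:

```console
$ kubectl get configmaps --namespace kube-system -l OWNER=TILLER
```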

## Upgrading Tiller

As of Helm 2.2.0, Tiller can be upgraded using `helm init --upgrade`.
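
For example, after upgrading you can run `helm version` again; the client and server versions should now match:

```console
$ helm init --upgrade
$ helm version
```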

For older versions of Helm, or for manual upgrades, you can use `kubectl` to modify
the Tiller image:

```console
$ export TILLER_TAG=v2.0.0-beta.1        # Or whatever version you want
$ kubectl --namespace=kube-system set image deployments/tiller-deploy tiller=gcr.io/kubernetes-helm/tiller:$TILLER_TAG
deployment "tiller-deploy" image updated
```

Setting `TILLER_TAG=canary` will get the latest snapshot of master.

## Deleting or Reinstalling Tiller

Because Tiller stores its data in Kubernetes ConfigMaps, you can safely
delete and re-install Tiller without worrying about losing any data. The
recommended way of deleting Tiller is with `kubectl delete deployment
tiller-deploy --namespace kube-system`, or more concisely `helm reset`.

Tiller can then be re-installed from the client with:

```console
$ helm init
```

## Advanced Usage

`helm init` provides additional flags for modifying Tiller's deployment
manifest before it is installed.

### Using `--node-selectors`

The `--node-selectors` flag allows us to specify the node labels required
for scheduling the Tiller pod.

The example below will create the specified label under the nodeSelector
property.

```
helm init --node-selectors "beta.kubernetes.io/os"="linux"
```

The installed deployment manifest will contain our node selector label.

```
...
spec:
  template:
    spec:
      nodeSelector:
        beta.kubernetes.io/os: linux
...
```

### Using `--override`

`--override` allows you to specify properties of Tiller's
deployment manifest. Unlike the `--set` command used elsewhere in Helm,
`helm init --override` manipulates the specified properties of the final
manifest (there is no "values" file). Therefore you may specify any valid
value for any valid property in the deployment manifest.

#### Override annotation

In the example below we use `--override` to add the revision property and set
its value to 1.

```
helm init --override metadata.annotations."deployment\.kubernetes\.io/revision"="1"
```

Output:

```
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  annotations:
    deployment.kubernetes.io/revision: "1"
...
```

#### Override affinity

In the example below we set properties for node affinity. Multiple
`--override` commands may be combined to modify different properties of the
same list item.

```
helm init --override "spec.template.spec.affinity.nodeAffinity.preferredDuringSchedulingIgnoredDuringExecution[0].weight"="1" --override "spec.template.spec.affinity.nodeAffinity.preferredDuringSchedulingIgnoredDuringExecution[0].preference.matchExpressions[0].key"="e2e-az-name"
```

The specified properties are combined into the
"preferredDuringSchedulingIgnoredDuringExecution" property's first
list item.

```
...
spec:
  strategy: {}
  template:
    ...
    spec:
      affinity:
        nodeAffinity:
          preferredDuringSchedulingIgnoredDuringExecution:
          - preference:
              matchExpressions:
              - key: e2e-az-name
                operator: ""
            weight: 1
...
```

### Using `--output`

The `--output` flag allows us to skip the installation of Tiller's deployment
manifest and simply output the deployment manifest to stdout in either
JSON or YAML format. The output may then be modified with tools like `jq`
and installed manually with `kubectl`.

In the example below we execute `helm init` with the `--output json` flag.

```
helm init --output json
```

The Tiller installation is skipped and the manifest is output to stdout
in JSON format.

```
"apiVersion": "extensions/v1beta1",
"kind": "Deployment",
"metadata": {
    "creationTimestamp": null,
    "labels": {
        "app": "helm",
        "name": "tiller"
    },
    "name": "tiller-deploy",
    "namespace": "kube-system"
},
...
```
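
As a sketch of that manual route, the JSON can then be edited in a pipeline and fed to `kubectl` (the `jq` expression here is only an illustration, setting the nodeSelector shown earlier):

```
helm init --output json | jq '.spec.template.spec.nodeSelector = {"beta.kubernetes.io/os": "linux"}' | kubectl apply -f -
```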

### Storage backends

By default, `tiller` stores release information in `ConfigMaps` in the namespace
where it is running. As of Helm 2.7.0, there is now a beta storage backend that
uses `Secrets` for storing release information. This was added for additional
security in protecting charts in conjunction with the release of `Secret`
encryption in Kubernetes.

To enable the secrets backend, you'll need to init Tiller with the following
options:

```shell
helm init --override 'spec.template.spec.containers[0].command'='{/tiller,--storage=secret}'
```
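
If you later want to confirm that new releases are being recorded as Secrets rather than ConfigMaps, you can list them (assuming Tiller runs in `kube-system` and applies its usual `OWNER=TILLER` label):

```shell
kubectl get secrets --namespace kube-system -l OWNER=TILLER
```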

Currently, if you want to switch from the default backend to the secrets
backend, you'll have to do the migration for this on your own. When this backend
graduates from beta, there will be a more official path of migration.

## Conclusion

In most cases, installation is as simple as getting a pre-built `helm` binary
and running `helm init`. This document covers additional cases for those
who want to do more sophisticated things with Helm.

Once you have the Helm Client and Tiller successfully installed, you can
move on to using Helm to manage charts.