
# Installing Helm

There are two parts to Helm: The Helm client (`helm`) and the Helm
server (Tiller). This guide shows how to install the client, and then
proceeds to show two ways to install the server.

**IMPORTANT**: If you are responsible for ensuring your cluster is a controlled environment, especially when resources are shared, it is strongly recommended to install Tiller using a secured configuration. For guidance, see [Securing your Helm Installation](securing_installation.md).

## Installing the Helm Client

The Helm client can be installed either from source or from pre-built binary
releases.

### From the Binary Releases

Every [release](https://github.com/kubernetes/helm/releases) of Helm
provides binary releases for a variety of OSes. These binary versions
can be manually downloaded and installed.

1. Download your [desired version](https://github.com/kubernetes/helm/releases)
2. Unpack it (`tar -zxvf helm-v2.0.0-linux-amd64.tgz`)
3. Find the `helm` binary in the unpacked directory, and move it to its
   desired destination (`mv linux-amd64/helm /usr/local/bin/helm`)

From there, you should be able to run the client: `helm help`.

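Putting those three steps together on a Linux amd64 host might look like the sketch below; the archive name is illustrative, so substitute the file you actually downloaded from the releases page.

```console
$ tar -zxvf helm-v2.8.0-linux-amd64.tar.gz      # the archive downloaded in step 1
$ mv linux-amd64/helm /usr/local/bin/helm       # may need elevated permissions for this destination
$ helm help
```
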
### From Homebrew (macOS)

Members of the Kubernetes community have contributed a Helm formula to
Homebrew. This formula is generally up to date.

```
brew install kubernetes-helm
```

(Note: There is also a formula for emacs-helm, which is a different
project.)

### From Script

Helm now has an installer script that will automatically grab the latest version
of the Helm client and [install it locally](https://raw.githubusercontent.com/kubernetes/helm/master/scripts/get).

You can fetch that script, and then execute it locally. It's well documented so
that you can read through it and understand what it is doing before you run it.

```
$ curl https://raw.githubusercontent.com/kubernetes/helm/master/scripts/get > get_helm.sh
$ chmod 700 get_helm.sh
$ ./get_helm.sh
```

Yes, you can `curl https://raw.githubusercontent.com/kubernetes/helm/master/scripts/get | bash` that if you want to live on the edge.

### From Canary Builds

"Canary" builds are versions of the Helm software that are built from
the latest master branch. They are not official releases, and may not be
stable. However, they offer the opportunity to test the cutting edge
features.

Canary Helm binaries are stored in the [Kubernetes Helm GCS bucket](https://kubernetes-helm.storage.googleapis.com).
Here are links to the common builds:

- [Linux AMD64](https://kubernetes-helm.storage.googleapis.com/helm-canary-linux-amd64.tar.gz)
- [macOS AMD64](https://kubernetes-helm.storage.googleapis.com/helm-canary-darwin-amd64.tar.gz)
- [Experimental Windows AMD64](https://kubernetes-helm.storage.googleapis.com/helm-canary-windows-amd64.zip)

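As a sketch, fetching and unpacking the Linux canary build follows the same pattern as the official releases; this assumes the canary archive unpacks into a `linux-amd64/` directory like the release archives do.

```console
$ curl -LO https://kubernetes-helm.storage.googleapis.com/helm-canary-linux-amd64.tar.gz
$ tar -zxvf helm-canary-linux-amd64.tar.gz
$ ./linux-amd64/helm version --client
```
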
### From Source (Linux, macOS)

Building Helm from source is slightly more work, but is the best way to
go if you want to test the latest (pre-release) Helm version.

You must have a working Go environment with
[glide](https://github.com/Masterminds/glide) and Mercurial installed.

```console
$ cd $GOPATH
$ mkdir -p src/k8s.io
$ cd src/k8s.io
$ git clone https://github.com/kubernetes/helm.git
$ cd helm
$ make bootstrap build
```

The `bootstrap` target will attempt to install dependencies, rebuild the
`vendor/` tree, and validate configuration.

The `build` target will compile `helm` and place it in `bin/helm`.
Tiller is also compiled, and is placed in `bin/tiller`.

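As a quick sanity check, the freshly built client can be run straight out of `bin/`; no Tiller is needed for client-only commands.

```console
$ bin/helm version --client
$ bin/helm help
```
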
## Installing Tiller

Tiller, the server portion of Helm, typically runs inside of your
Kubernetes cluster. But for development, it can also be run locally, and
configured to talk to a remote Kubernetes cluster.

### Easy In-Cluster Installation

The easiest way to install `tiller` into the cluster is simply to run
`helm init`. This will validate that `helm`'s local environment is set
up correctly (and set it up if necessary). Then it will connect to
whatever cluster `kubectl` connects to by default (`kubectl config
view`). Once it connects, it will install `tiller` into the
`kube-system` namespace.

After `helm init`, you should be able to run `kubectl get pods --namespace
kube-system` and see Tiller running.

You can explicitly tell `helm init` to...

- Install the canary build with the `--canary-image` flag
- Install a particular image (version) with `--tiller-image`
- Install to a particular cluster with `--kube-context`
- Install into a particular namespace with `--tiller-namespace`

Once Tiller is installed, running `helm version` should show you both
the client and server version. (If it shows only the client version,
`helm` cannot yet connect to the server. Use `kubectl` to see if any
`tiller` pods are running.)

Helm will look for Tiller in the `kube-system` namespace unless
`--tiller-namespace` or `TILLER_NAMESPACE` is set.

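For example, the flags above can be combined; `my-cluster` and `tiller-world` below are placeholders for your own context and namespace names.

```console
$ helm init --kube-context my-cluster --tiller-namespace tiller-world
$ kubectl get pods --namespace tiller-world       # the Tiller pod should appear here
$ helm version --tiller-namespace tiller-world    # should report both client and server versions
```
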
### Installing Tiller Canary Builds

Canary images are built from the `master` branch. They may not be
stable, but they offer you the chance to test out the latest features.

The easiest way to install a canary image is to use `helm init` with the
`--canary-image` flag:

```console
$ helm init --canary-image
```

This will use the most recently built container image. You can always
uninstall Tiller by deleting the Tiller deployment from the
`kube-system` namespace using `kubectl`.

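For example (the same command appears again under "Deleting or Reinstalling Tiller" below):

```console
$ kubectl delete deployment tiller-deploy --namespace kube-system
```
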
### Running Tiller Locally

For development, it is sometimes easier to work on Tiller locally, and
configure it to connect to a remote Kubernetes cluster.

The process of building Tiller is explained above.

Once `tiller` has been built, simply start it:

```console
$ bin/tiller
Tiller running on :44134
```

When Tiller is running locally, it will attempt to connect to the
Kubernetes cluster that is configured by `kubectl`. (Run `kubectl config
view` to see which cluster that is.)

You must tell `helm` to connect to this new local Tiller host instead of
connecting to the one in-cluster. There are two ways to do this. The
first is to specify the `--host` option on the command line. The second
is to set the `$HELM_HOST` environment variable.

```console
$ export HELM_HOST=localhost:44134
$ helm version # Should connect to localhost.
Client: &version.Version{SemVer:"v2.0.0-alpha.4", GitCommit:"db...", GitTreeState:"dirty"}
Server: &version.Version{SemVer:"v2.0.0-alpha.4", GitCommit:"a5...", GitTreeState:"dirty"}
```

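The `--host` flag does the same thing for a single command, without exporting the variable:

```console
$ helm version --host localhost:44134
```
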
Importantly, even when running locally, Tiller will store release
configuration in ConfigMaps inside of Kubernetes.

## Upgrading Tiller

As of Helm 2.2.0, Tiller can be upgraded using `helm init --upgrade`.

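For example, running this against an existing installation upgrades the in-cluster Tiller (normally to the version matching your client):

```console
$ helm init --upgrade
```
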
For older versions of Helm, or for manual upgrades, you can use `kubectl` to modify
the Tiller image:

```console
$ export TILLER_TAG=v2.0.0-beta.1        # Or whatever version you want
$ kubectl --namespace=kube-system set image deployments/tiller-deploy tiller=gcr.io/kubernetes-helm/tiller:$TILLER_TAG
deployment "tiller-deploy" image updated
```

Setting `TILLER_TAG=canary` will get the latest snapshot of master.

## Deleting or Reinstalling Tiller

Because Tiller stores its data in Kubernetes ConfigMaps, you can safely
delete and re-install Tiller without worrying about losing any data. The
recommended way of deleting Tiller is with `kubectl delete deployment
tiller-deploy --namespace kube-system`, or more concisely `helm reset`.

Tiller can then be re-installed from the client with:

```console
$ helm init
```

## Advanced Usage

`helm init` provides additional flags for modifying Tiller's deployment
manifest before it is installed.

### Using `--node-selectors`

The `--node-selectors` flag allows us to specify the node labels required
for scheduling the Tiller pod.

The example below will create the specified label under the nodeSelector
property.

```
helm init --node-selectors "beta.kubernetes.io/os"="linux"
```

The installed deployment manifest will contain our node selector label.

```
...
spec:
  template:
    spec:
      nodeSelector:
        beta.kubernetes.io/os: linux
...
```

### Using `--override`

`--override` allows you to specify properties of Tiller's
deployment manifest. Unlike the `--set` flag used elsewhere in Helm,
`helm init --override` manipulates the specified properties of the final
manifest (there is no "values" file). Therefore you may specify any valid
value for any valid property in the deployment manifest.

#### Override annotation

In the example below we use `--override` to add the revision property and set
its value to 1.

```
helm init --override metadata.annotations."deployment\.kubernetes\.io/revision"="1"
```

Output:

```
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  annotations:
    deployment.kubernetes.io/revision: "1"
...
```

#### Override affinity

In the example below we set properties for node affinity. Multiple
`--override` flags may be combined to modify different properties of the
same list item.

```
helm init --override "spec.template.spec.affinity.nodeAffinity.preferredDuringSchedulingIgnoredDuringExecution[0].weight"="1" --override "spec.template.spec.affinity.nodeAffinity.preferredDuringSchedulingIgnoredDuringExecution[0].preference.matchExpressions[0].key"="e2e-az-name"
```

The specified properties are combined into the
"preferredDuringSchedulingIgnoredDuringExecution" property's first
list item.

```
...
spec:
  strategy: {}
  template:
    ...
    spec:
      affinity:
        nodeAffinity:
          preferredDuringSchedulingIgnoredDuringExecution:
          - preference:
              matchExpressions:
              - key: e2e-az-name
                operator: ""
            weight: 1
...
```

### Using `--output`

The `--output` flag allows us to skip the installation of Tiller's deployment
manifest and simply output the deployment manifest to stdout in either
JSON or YAML format. The output may then be modified with tools like `jq`
and installed manually with `kubectl`.

In the example below we execute `helm init` with the `--output json` flag.

```
helm init --output json
```

The Tiller installation is skipped and the manifest is output to stdout
in JSON format.

```
"apiVersion": "extensions/v1beta1",
"kind": "Deployment",
"metadata": {
    "creationTimestamp": null,
    "labels": {
        "app": "helm",
        "name": "tiller"
    },
    "name": "tiller-deploy",
    "namespace": "kube-system"
},
...
```

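A minimal sketch of the manual route; the file name is arbitrary, and you would edit the manifest before applying it.

```console
$ helm init --output yaml > tiller-deploy.yaml
$ # ...edit tiller-deploy.yaml as needed...
$ kubectl apply -f tiller-deploy.yaml
```
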
### Storage backends

By default, `tiller` stores release information in `ConfigMaps` in the namespace
where it is running. As of Helm 2.7.0, there is now a beta storage backend that
uses `Secrets` for storing release information. This was added for additional
security in protecting charts in conjunction with the release of `Secret`
encryption in Kubernetes.

To enable the secrets backend, you'll need to init Tiller with the following
options:

```shell
helm init --override 'spec.template.spec.containers[0].command'='{/tiller,--storage=secret}'
```

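Once Tiller is running with this setting, release data for newly installed charts should appear as `Secrets` in Tiller's namespace rather than as `ConfigMaps`. A quick, informal way to check (assuming the default `kube-system` namespace) is to list the secrets and look for entries named after your releases:

```console
$ kubectl get secrets --namespace kube-system
```
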
Currently, if you want to switch from the default backend to the secrets
backend, you'll have to do the migration for this on your own. When this backend
graduates from beta, there will be a more official migration path.

## Conclusion

In most cases, installation is as simple as getting a pre-built `helm` binary
and running `helm init`. This document covers additional cases for those
who want to do more sophisticated things with Helm.

Once you have the Helm Client and Tiller successfully installed, you can
move on to using Helm to manage charts.