
# Installing Helm

There are two parts to Helm: The Helm client (`helm`) and the Helm
server (Tiller). This guide shows how to install the client, and then
proceeds to show two ways to install the server.

**IMPORTANT**: If you are responsible for ensuring your cluster is a controlled environment, especially when resources are shared, it is strongly recommended that you install Tiller using a secured configuration. For guidance, see [Securing your Helm Installation](securing_installation.md).

## Installing the Helm Client

The Helm client can be installed either from source, or from pre-built binary
releases.

### From The Helm Project

The Helm project provides two ways to fetch and install Helm. These are the
official methods to get Helm releases. In addition to them, the Helm community
provides ways to install Helm through various package managers; those methods
are documented below the official ones.

#### From the Binary Releases

Every [release](https://github.com/helm/helm/releases) of Helm
provides binary releases for a variety of OSes. These binary versions
can be manually downloaded and installed.

1. Download your [desired version](https://github.com/helm/helm/releases)
2. Unpack it (`tar -zxvf helm-v2.0.0-linux-amd64.tgz`)
3. Find the `helm` binary in the unpacked directory, and move it to its
   desired destination (`mv linux-amd64/helm /usr/local/bin/helm`)

From there, you should be able to run the client: `helm help`.
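
For example, the three steps above as a single shell session (the version and platform shown are illustrative; substitute the release you actually downloaded):

```console
$ curl -LO https://get.helm.sh/helm-v2.17.0-linux-amd64.tar.gz
$ tar -zxvf helm-v2.17.0-linux-amd64.tar.gz
$ sudo mv linux-amd64/helm /usr/local/bin/helm
$ helm help
```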

#### From Script

Helm now has an installer script that will automatically grab the latest version
of the Helm client and [install it locally](https://git.io/get_helm.sh).

You can fetch that script, and then execute it locally. It's well documented so
that you can read through it and understand what it is doing before you run it.

```console
$ curl -LO https://git.io/get_helm.sh
$ chmod 700 get_helm.sh
$ ./get_helm.sh
```

Yes, you can `curl -L https://git.io/get_helm.sh | bash` that if you want to live on the edge.

### Through Package Managers

The Helm community provides the ability to install Helm through operating system
package managers. These packages are not supported by the Helm project, and their
maintainers are not considered trusted third parties.

#### From Snap (Linux)

The Snap package for Helm is maintained by
[Snapcrafters](https://github.com/snapcrafters/helm).

```
sudo snap install helm --classic
```

#### From Homebrew (macOS)

Members of the Helm community have contributed a Helm formula to
Homebrew. This formula is generally up to date.

```
brew install kubernetes-helm
```

(Note: There is also a formula for emacs-helm, which is a different
project.)

#### From Chocolatey or Scoop (Windows)

Members of the Helm community have contributed a [Helm package](https://chocolatey.org/packages/kubernetes-helm) to
[Chocolatey](https://chocolatey.org/). This package is generally up to date.

```
choco install kubernetes-helm
```

The binary can also be installed via the [`scoop`](https://scoop.sh) command-line installer.

```
scoop install helm
```

#### From Apt (Debian/Ubuntu)

Members of the Helm community have contributed a [Helm
package](https://helm.baltorepo.com/stable/debian/) for Apt. This package is generally up to date.

```
curl https://baltocdn.com/helm/signing.asc | sudo apt-key add -
sudo apt-get install apt-transport-https --yes
echo "deb https://baltocdn.com/helm/stable/debian/ all main" | sudo tee /etc/apt/sources.list.d/helm-stable-debian.list
sudo apt-get update
sudo apt-get install helm2
```

### Development Builds

In addition to releases you can download or install development snapshots of Helm.

#### From Canary Builds

"Canary" builds are versions of the Helm software that are built from
the latest master branch. They are not official releases, and may not be
stable. However, they offer the opportunity to test the cutting edge
features.

Canary Helm binaries are stored at [get.helm.sh](https://get.helm.sh).
Here are links to the common builds:

- [Linux AMD64](https://get.helm.sh/helm-canary-linux-amd64.tar.gz)
- [macOS AMD64](https://get.helm.sh/helm-canary-darwin-amd64.tar.gz)
- [Experimental Windows AMD64](https://get.helm.sh/helm-canary-windows-amd64.zip)
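
For example, to fetch and unpack the Linux canary build from a shell (the URL is taken from the list above):

```console
$ curl -LO https://get.helm.sh/helm-canary-linux-amd64.tar.gz
$ tar -zxvf helm-canary-linux-amd64.tar.gz
```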

#### From Source (Linux, macOS)

Building Helm from source is slightly more work, but is the best way to
go if you want to test the latest (pre-release) Helm version.

You must have a working Go environment with
[glide](https://github.com/Masterminds/glide) installed.

```console
$ cd $GOPATH
$ mkdir -p src/k8s.io
$ cd src/k8s.io
$ git clone https://github.com/helm/helm.git
$ cd helm
$ make bootstrap build
```

The `bootstrap` target will attempt to install dependencies, rebuild the
`vendor/` tree, and validate configuration.

The `build` target will compile `helm` and place it in `bin/helm`.
Tiller is also compiled, and is placed in `bin/tiller`.
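
To sanity-check the freshly built client without contacting a cluster, you can ask it for just the client version (`--client` restricts `helm version` to the local binary):

```console
$ bin/helm version --client
```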

## Installing Tiller

Tiller, the server portion of Helm, typically runs inside of your
Kubernetes cluster. But for development, it can also be run locally, and
configured to talk to a remote Kubernetes cluster.

### Special Note for RBAC Users

Most cloud providers enable a feature called Role-Based Access Control (RBAC for short). If your cloud provider enables this feature, you will need to create a service account for Tiller with the right roles and permissions to access resources.

Check the [Kubernetes Distribution Guide](kubernetes_distros.md) to see if there are any further points of interest on using Helm with your cloud provider. Also check out the guide on [Tiller and Role-Based Access Control](rbac.md) for more information on how to run Tiller in an RBAC-enabled Kubernetes cluster.
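
As a minimal sketch of that setup, using the broad `cluster-admin` role for brevity (production clusters should prefer the narrower roles described in [rbac.md](rbac.md)):

```console
$ kubectl create serviceaccount tiller --namespace kube-system
$ kubectl create clusterrolebinding tiller --clusterrole cluster-admin --serviceaccount=kube-system:tiller
$ helm init --service-account tiller
```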

### Easy In-Cluster Installation

The easiest way to install `tiller` into the cluster is simply to run
`helm init`. This will validate that `helm`'s local environment is set
up correctly (and set it up if necessary). Then it will connect to
whatever cluster `kubectl` connects to by default (`kubectl config
view`). Once it connects, it will install `tiller` into the
`kube-system` namespace.

After `helm init`, you should be able to run `kubectl get pods --namespace
kube-system` and see Tiller running.

You can explicitly tell `helm init` to...

- Install the canary build with the `--canary-image` flag
- Install a particular image (version) with `--tiller-image`
- Install to a particular cluster with `--kube-context`
- Install into a particular namespace with `--tiller-namespace`
- Install Tiller with a Service Account with `--service-account` (for [RBAC enabled clusters](securing_installation.md#rbac))
- Install Tiller without mounting a service account with `--automount-service-account false`

Once Tiller is installed, running `helm version` should show you both
the client and server version. (If it shows only the client version,
`helm` cannot yet connect to the server. Use `kubectl` to see if any
`tiller` pods are running.)

Helm will look for Tiller in the `kube-system` namespace unless
`--tiller-namespace` or `TILLER_NAMESPACE` is set.
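
For instance, several of these flags can be combined into one invocation (the image tag, namespace, context, and service account names below are illustrative):

```console
$ helm init \
    --tiller-image ghcr.io/helm/tiller:v2.17.0 \
    --tiller-namespace tiller-system \
    --kube-context my-cluster \
    --service-account tiller
```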

### Installing Tiller Canary Builds

Canary images are built from the `master` branch. They may not be
stable, but they offer you the chance to test out the latest features.

The easiest way to install a canary image is to use `helm init` with the
`--canary-image` flag:

```console
$ helm init --canary-image
```

This will use the most recently built container image. You can always
uninstall Tiller by deleting the Tiller deployment from the
`kube-system` namespace using `kubectl`.

### Running Tiller Locally

For development, it is sometimes easier to work on Tiller locally, and
configure it to connect to a remote Kubernetes cluster.

The process of building Tiller is explained above.

Once `tiller` has been built, simply start it:

```console
$ bin/tiller
Tiller running on :44134
```

When Tiller is running locally, it will attempt to connect to the
Kubernetes cluster that is configured by `kubectl`. (Run `kubectl config
view` to see which cluster that is.)

You must tell `helm` to connect to this new local Tiller host instead of
connecting to the one in-cluster. There are two ways to do this. The
first is to specify the `--host` option on the command line. The second
is to set the `$HELM_HOST` environment variable.

```console
$ export HELM_HOST=localhost:44134
$ helm version # Should connect to localhost.
Client: &version.Version{SemVer:"v2.0.0-alpha.4", GitCommit:"db...", GitTreeState:"dirty"}
Server: &version.Version{SemVer:"v2.0.0-alpha.4", GitCommit:"a5...", GitTreeState:"dirty"}
```
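
Alternatively, skip the environment variable and pass the address on each invocation with the global `--host` flag:

```console
$ helm version --host localhost:44134
```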

Importantly, even when running locally, Tiller will store release
configuration in ConfigMaps inside of Kubernetes.

## Upgrading Tiller

As of Helm 2.2.0, Tiller can be upgraded using `helm init --upgrade`.
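
For example (adding `--tiller-image` is optional and pins the upgrade to a specific image; the tag shown is illustrative):

```console
$ helm init --upgrade --tiller-image ghcr.io/helm/tiller:v2.17.0
```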

For older versions of Helm, or for manual upgrades, you can use `kubectl` to modify
the Tiller image:

```console
$ export TILLER_TAG=v2.0.0-beta.1        # Or whatever version you want
$ kubectl --namespace=kube-system set image deployments/tiller-deploy tiller=ghcr.io/helm/tiller:$TILLER_TAG
deployment "tiller-deploy" image updated
```

Setting `TILLER_TAG=canary` will get the latest snapshot of master.

## Deleting or Reinstalling Tiller

Because Tiller stores its data in Kubernetes ConfigMaps, you can safely
delete and re-install Tiller without worrying about losing any data. The
recommended way of deleting Tiller is with `kubectl delete deployment
tiller-deploy --namespace kube-system`, or more concisely `helm reset`.
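
For example (`--force` is optional; it makes `helm reset` proceed even if releases are still installed, so use it with care):

```console
$ helm reset --force
```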

Tiller can then be re-installed from the client with:

```console
$ helm init
```

## Advanced Usage

`helm init` provides additional flags for modifying Tiller's deployment
manifest before it is installed.

### Using `--node-selectors`

The `--node-selectors` flag allows us to specify the node labels required
for scheduling the Tiller pod.

The example below will create the specified label under the nodeSelector
property.

```
helm init --node-selectors "beta.kubernetes.io/os"="linux"
```

The installed deployment manifest will contain our node selector label.

```
...
spec:
  template:
    spec:
      nodeSelector:
        beta.kubernetes.io/os: linux
...
```

### Using `--override`

`--override` allows you to specify properties of Tiller's
deployment manifest. Unlike the `--set` command used elsewhere in Helm,
`helm init --override` manipulates the specified properties of the final
manifest (there is no "values" file). Therefore you may specify any valid
value for any valid property in the deployment manifest.

#### Override annotation

In the example below we use `--override` to add the revision property and set
its value to 1.

```
helm init --override metadata.annotations."deployment\.kubernetes\.io/revision"="1"
```

Output:

```
apiVersion: apps/v1
kind: Deployment
metadata:
  annotations:
    deployment.kubernetes.io/revision: "1"
...
```

#### Override affinity

In the example below we set properties for node affinity. Multiple
`--override` commands may be combined to modify different properties of the
same list item.

```
helm init --override "spec.template.spec.affinity.nodeAffinity.preferredDuringSchedulingIgnoredDuringExecution[0].weight"="1" --override "spec.template.spec.affinity.nodeAffinity.preferredDuringSchedulingIgnoredDuringExecution[0].preference.matchExpressions[0].key"="e2e-az-name"
```

The specified properties are combined into the
"preferredDuringSchedulingIgnoredDuringExecution" property's first
list item.

```
...
spec:
  strategy: {}
  template:
    ...
    spec:
      affinity:
        nodeAffinity:
          preferredDuringSchedulingIgnoredDuringExecution:
          - preference:
              matchExpressions:
              - key: e2e-az-name
                operator: ""
            weight: 1
...
```

### Using `--output`

The `--output` flag allows us to skip the installation of Tiller's deployment
manifest and simply output the deployment manifest to stdout in either
JSON or YAML format. The output may then be modified with tools like `jq`
and installed manually with `kubectl`.

In the example below we execute `helm init` with the `--output json` flag.

```
helm init --output json
```

The Tiller installation is skipped and the manifest is output to stdout
in JSON format.

```
"apiVersion": "apps/v1",
"kind": "Deployment",
"metadata": {
    "creationTimestamp": null,
    "labels": {
        "app": "helm",
        "name": "tiller"
    },
    "name": "tiller-deploy",
    "namespace": "kube-system"
},
...
```
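
Because the output is plain JSON, it composes well with `jq`. For instance, to extract a single field from the rendered manifest (a small illustrative filter; any `jq` program works here):

```console
$ helm init --output json | jq -r '.metadata.name'
tiller-deploy
```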

### Storage backends

By default, `tiller` stores release information in `ConfigMaps` in the namespace
where it is running.

#### Secret storage backend

As of Helm 2.7.0, there is now a beta storage backend that
uses `Secrets` for storing release information. This was added for additional
security in protecting charts in conjunction with the release of `Secret`
encryption in Kubernetes.

To enable the secrets backend, you'll need to init Tiller with the following
options:

```shell
helm init --override 'spec.template.spec.containers[0].command'='{/tiller,--storage=secret}'
```
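
After installing a release, you can verify that its record landed in a Secret rather than a ConfigMap; `OWNER=TILLER` is the label Tiller applies to its release records (this assumes Tiller runs in `kube-system`):

```console
$ kubectl get secrets --namespace kube-system -l OWNER=TILLER
```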

Currently, if you want to switch from the default backend to the secrets
backend, you'll have to do the migration for this on your own. When this backend
graduates from beta, there will be a more official migration path.

#### SQL storage backend

As of Helm 2.14.0 there is now a beta SQL storage backend that stores release
information in an SQL database (only PostgreSQL has been tested so far).

Using such a storage backend is particularly useful if your release information
weighs more than 1MB (in which case, it can't be stored in ConfigMaps/Secrets
because of internal limits in Kubernetes' underlying etcd key-value store).

To enable the SQL backend, you'll need to deploy a SQL database and init Tiller
with the following options:

```shell
helm init \
  --override \
    'spec.template.spec.containers[0].args'='{--storage=sql,--sql-dialect=postgres,--sql-connection-string=postgresql://tiller-postgres:5432/helm?user=helm&password=changeme}'
```
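
If you just want to experiment with this backend, one way to stand up a matching Postgres is sketched below with plain `kubectl`; the deployment/service name, image tag, and credentials are illustrative assumptions chosen to mirror the connection string above:

```console
$ kubectl create deployment tiller-postgres --namespace kube-system --image=postgres:9.6
$ kubectl set env deployment/tiller-postgres --namespace kube-system POSTGRES_USER=helm POSTGRES_PASSWORD=changeme
$ kubectl expose deployment tiller-postgres --namespace kube-system --port=5432
```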

**PRODUCTION NOTES**: it's recommended to change the username and password of
the SQL database in production deployments. Enabling SSL is also a good idea.
Last, but not least, perform regular backups/snapshots of your SQL database.

Currently, if you want to switch from the default backend to the SQL backend,
you'll have to do the migration for this on your own. When this backend
graduates from beta, there will be a more official migration path.

## Conclusion

In most cases, installation is as simple as getting a pre-built `helm` binary
and running `helm init`. This document covers additional cases for those
who want to do more sophisticated things with Helm.

Once you have the Helm Client and Tiller successfully installed, you can
move on to using Helm to manage charts.