# Updating the Kubernetes cluster

## Types of cluster update

There are two distinct categories of cluster update.

* **Parameter-level update**: Only changes to `cluster.yaml` and/or the TLS assets in the `credentials/` folder are reflected; modifications to CloudFormation or cloud-config userdata templates will not be. In this case, you do not have to re-render. To enact this type of update:

  ```sh
  kube-aws apply
  ```

* **Full update**: Any change (besides changes made to the etcd cluster; more on that later) will be enacted, including structural changes to CloudFormation and cloudinit templates. This is the type of update that must be run when installing a new version of kube-aws, or more generally whenever cloudinit or CloudFormation templates are modified:

  ```sh
  kube-aws render stack
  kube-aws render credentials
  git diff # view changes to rendered assets
  kube-aws apply
  ```

## Certificate and access token rotation

The parameter-level update mechanism can be used to rotate in new TLS credentials and access tokens.

More concretely, the steps to rotate the certificates on your nodes are:

* Optionally modify the `externalDNSName` attribute in `cluster.yaml`.
* Remove all the `credentials/*.enc` files, which are cached encrypted certs/keys/tokens, to prevent unnecessary node replacement when there is actually no update. See #107 and #237 for more context.
* Render new credentials:

  ```sh
  kube-aws render credentials
  ```

* Execute the update command:

  ```sh
  kube-aws apply
  ```

There are cases where the service account tokens used by the system pods become invalid after a credentials update, and some of your system pods will break (especially `kube-dns`).
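The cache-clearing step in the rotation procedure above can be rehearsed safely before touching real assets. This is a minimal sketch, assuming the default kube-aws asset layout; the `demo` directory and file names are illustrative stand-ins, not real cluster assets:

```shell
# Illustrative sketch: simulate a kube-aws asset directory and clear the
# cached encrypted credentials, as in the rotation steps above. "demo"
# stands in for your real cluster directory (the one holding cluster.yaml
# and credentials/).
set -eu
demo=$(mktemp -d)
mkdir -p "$demo/credentials"
touch "$demo/credentials/ca.pem" \
      "$demo/credentials/ca.pem.enc" \
      "$demo/credentials/apiserver-key.pem.enc"
# Remove only the cached encrypted copies; the other credential files
# stay in place.
rm -f "$demo/credentials/"*.enc
ls "$demo/credentials"
```

In a real rotation, the equivalent step is `rm -f credentials/*.enc` from your cluster's asset directory, followed by `kube-aws render credentials` and `kube-aws apply`.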
Deleting the affected service-account secrets, so that Kubernetes recreates them, will solve the issue (see https://github.com/kubernetes-incubator/kube-aws/issues/1057).

## The etcd caveat

There is no solution for hosting an etcd cluster in a way that is easily updateable in this fashion, so updates are automatically masked for the etcd instances. This means that, after the cluster is created, nothing about the etcd EC2 instances can be updated.

Fortunately, the Flatcar update engine will take care of keeping the members of the etcd cluster up to date, but you as the operator will not be able to modify them after creation via the update mechanism.

In the (near) future, etcd will be hosted on Kubernetes and this problem will no longer be relevant. Rather than concocting an overly complex band-aid, we've decided to "punt" on this issue for the time being.

Once you have successfully updated your cluster, you are ready to [add node pools to your cluster][getting-started-step-5].

[getting-started-step-1]: step-1-configure.md
[getting-started-step-2]: step-2-render.md
[getting-started-step-3]: step-3-launch.md
[getting-started-step-4]: step-4-update.md
[getting-started-step-5]: step-5-add-node-pool.md
[getting-started-step-6]: step-6-configure-add-ons.md
[getting-started-step-7]: step-7-destroy.md