
     1  # Configure your Kubernetes cluster on AWS
     2  
     3  This is the second step of [running Kubernetes on AWS](README.md). Before we launch our cluster, let's define a few parameters that the cluster requires.
     4  
     5  ## Cluster parameters
     6  
     7  ### EC2 key pair
     8  
The key pair that will authenticate SSH access to your EC2 instances. The public half of this key pair will be configured on each Flatcar node.
    10  
    11  After creating a key pair, you will use the name you gave the keys to configure the cluster. Key pairs are only available to EC2 instances in the same region. More info in the [EC2 Keypair docs](http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ec2-key-pairs.html).
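
If you don't already have a key pair, one way to create it is with the `aws` CLI; the key name `my-key-pair` below is only an example:

```sh
# Create a new key pair and save the private half locally
$ aws ec2 create-key-pair --region <your-region> --key-name my-key-pair \
    --query 'KeyMaterial' --output text > my-key-pair.pem
$ chmod 400 my-key-pair.pem
```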
    12  
    13  ### KMS key
    14  
[Amazon KMS](http://docs.aws.amazon.com/kms/latest/developerguide/overview.html) keys are used to encrypt and decrypt cluster TLS assets. If you already have a KMS key that you would like to use, you can skip creating a new key and provide the ARN of your existing key.
    16  
    17  You can create a KMS key in the [AWS console](http://docs.aws.amazon.com/kms/latest/developerguide/create-keys.html), or with the `aws` command line tool:
    18  
    19  ```sh
    20  $ aws kms --region=<your-region> create-key --description="kube-aws assets"
    21  {
    22      "KeyMetadata": {
    23          "CreationDate": 1458235139.724,
    24          "KeyState": "Enabled",
    25          "Arn": "arn:aws:kms:us-west-1:xxxxxxxxx:key/xxxxxxxxxxxxxxxxxxx",
    26          "AWSAccountId": "xxxxxxxxxxxxx",
    27          "Enabled": true,
    28          "KeyUsage": "ENCRYPT_DECRYPT",
    29          "KeyId": "xxxxxxxxx",
    30          "Description": "kube-aws assets"
    31      }
    32  }
    33  ```
    34  
    35  You will use the `KeyMetadata.Arn` string to identify your KMS key in the init step.
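
If you need to look the ARN up again later, or want a friendlier name for the key, the `aws kms` commands below can help; the alias name is only an example:

```sh
# Retrieve the key's ARN from its key ID (or from an alias you have created)
$ aws kms --region=<your-region> describe-key --key-id <key-id> \
    --query 'KeyMetadata.Arn' --output text

# Optionally give the key an easier-to-remember alias
$ aws kms --region=<your-region> create-alias \
    --alias-name alias/kube-aws-assets --target-key-id <key-id>
```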
    36  
    37  ### External DNS name
    38  
    39  Select a DNS hostname where the cluster API will be accessible. Typically this hostname is available over the internet ("external"), so end users can connect from different networks. This hostname will be used to provision the TLS certificate for the API server, which encrypts traffic between your users and the API. Optionally, you can provide the certificates yourself, which is recommended for production clusters.
    40  
When the cluster is created, it will expose the TLS-secured API on an internet-facing ELB. kube-aws can automatically create an ALIAS record for the ELB in an *existing* [Route 53][route53] hosted zone specified via the `--hosted-zone-id` flag. If you have a DNS zone hosted in Route 53, you can configure it below.
    42  
You can also omit `--hosted-zone-id` by specifying the `--no-record-set` flag. In that case, you will need to create a Route 53 ALIAS record or a CNAME record yourself for the external DNS hostname you want to point to this ELB. You can find the public DNS name of the ELB after the cluster is created by invoking `kube-aws status`.
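
For example, if your DNS zone is in Route 53, a CNAME record pointing the external DNS name at the ELB could be created roughly as follows; the zone ID, hostname and ELB DNS name are placeholders, and the ELB DNS name comes from `kube-aws status`:

```sh
# Create (or update) a CNAME record pointing the API endpoint at the ELB
$ aws route53 change-resource-record-sets --hosted-zone-id <your-hosted-zone-id> \
    --change-batch '{
      "Changes": [{
        "Action": "UPSERT",
        "ResourceRecordSet": {
          "Name": "my-cluster-endpoint.example.com",
          "Type": "CNAME",
          "TTL": 300,
          "ResourceRecords": [{"Value": "<elb-public-dns-name>"}]
        }
      }]
    }'
```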
    44  
    45  ### S3 bucket
    46  
[Amazon S3](http://docs.aws.amazon.com/AmazonS3/latest/dev/Welcome.html) buckets are used to transfer large stack templates generated by kube-aws to CloudFormation. If you already have an S3 bucket that you would like to use, you can skip creating a new bucket and provide the URI for your existing bucket.
    48  
You can create an S3 bucket in the [AWS console](http://docs.aws.amazon.com/AmazonS3/latest/dev/UsingBucket.html#create-bucket-intro), or with the `aws` command line tool.
    50  
    51  The command varies among AWS regions.
    52  
    53  For the us-east-1 region:
    54  
    55  ```sh
    56  $ aws s3api --region=us-east-1 create-bucket --bucket <your-bucket-name>
    57  {
    58      "Location": "/<your-bucket-name>"
    59  }
    60  ```
    61  
    62  For other regions:
    63  
    64  ```sh
    65  $ aws s3api create-bucket --bucket my-bucket --region eu-west-1 --create-bucket-configuration LocationConstraint=eu-west-1
    66  ```
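
Either way, you can confirm that the bucket exists and that your credentials can reach it; the `s3://` form of the bucket name is what you will pass to `kube-aws init` later:

```sh
# List the (initially empty) bucket to confirm access
$ aws s3 ls s3://<your-bucket-name>
```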
    67  
    68  ## Initialize an asset directory
    69  
    70  Create a directory on your local machine to hold the generated assets:
    71  
    72  ```sh
    73  $ mkdir my-cluster
    74  $ cd my-cluster
    75  ```
    76  
    77  Initialize the cluster CloudFormation stack with the KMS Arn, key pair name, and DNS name from the previous step:
    78  
    79  ```sh
    80  $ kube-aws init \
    81  --cluster-name=my-cluster-name \
    82  --external-dns-name=my-cluster-endpoint \
    83  --hosted-zone-id=hosted-zone-xxxxx \
    84  --region=us-west-1 \
    85  --availability-zone=us-west-1c \
    86  --key-name=key-pair-name \
    87  --kms-key-arn="arn:aws:kms:us-west-1:xxxxxxxxxx:key/xxxxxxxxxxxxxxxxxxx" \
    88  --s3-uri=s3://my-kube-aws-assets-bucket
    89  ```
    90  
Here `us-west-1c` is used for the `--availability-zone` parameter, but the supported availability zones vary among AWS accounts.
Check whether `us-west-1c` is supported with `aws ec2 --region us-west-1 describe-availability-zones`; if not, switch to another supported availability zone (e.g., `us-west-1a` or `us-west-1b`).
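
For example, the following prints the zones your account may use in `us-west-1` (the output differs per account):

```sh
# Show which availability zones this account can use in us-west-1
$ aws ec2 --region us-west-1 describe-availability-zones \
    --query 'AvailabilityZones[].ZoneName' --output text
```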
    93  
    94  There will now be a `cluster.yaml` file in the asset directory. This is the main configuration file for your cluster.
    95  
    96  ### Render contents of the asset directory
    97  
    98  #### TLS certificates
    99  
   100  * In the simplest case, you can have kube-aws generate both your TLS identities and certificate authority for you.
   101  
   102    ```sh
   103    $ kube-aws render credentials --generate-ca
   104    ```
   105  
   106    This is not recommended for production, but is fine for development or testing purposes.
   107  
* It is recommended that you supply your own intermediate certificate signing authority and let kube-aws take care of generating the cluster TLS credentials.
   109  
   110    ```sh
   111    $ kube-aws render credentials --ca-cert-path=/path/to/ca-cert.pem --ca-key-path=/path/to/ca-key.pem
   112    ```
   113  
  If the CA key is encrypted (which it should be), you will be prompted for the key passphrase. Although not recommended, the `KUBE_AWS_CA_KEY_PASSPHRASE` environment variable can be set to automate this process.

  For more information on operating your own CA, check out this [awesome guide](https://jamielinux.com/docs/openssl-certificate-authority/). A minimal `openssl` sketch for bootstrapping a CA of your own follows this list.
   117  
* In certain cases, such as when you have advanced pre-existing PKI infrastructure, you may wish to pre-generate all cluster TLS assets. In this case, you can run `kube-aws render stack` and copy your TLS assets into the `credentials/` folder before running `kube-aws apply`.
   119  
  ```sh
  $ ls -R credentials/
   123    admin-key.pem                   encryption-config.yaml          kiam-ca.pem                     service-account-key.pem
   124    admin.pem                       etcd-client-key.pem             kiam-server-key.pem             tokens.csv
   125    apiserver-aggregator-key.pem    etcd-client.pem                 kiam-server.pem                 worker-ca-key.pem
   126    apiserver-aggregator.pem        etcd-key.pem                    kube-controller-manager-key.pem worker-ca.pem
   127    apiserver-key.pem               etcd-trusted-ca.pem             kube-controller-manager.pem     worker-key.pem
   128    apiserver.pem                   etcd.pem                        kube-scheduler-key.pem          worker.pem
   129    ca-key.pem                      kiam-agent-key.pem              kube-scheduler.pem
   130    ca.pem                          kiam-agent.pem                  kubelet-tls-bootstrap-token
   131    ```
   132  
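If you don't already operate a CA but still want to supply your own as described above, a minimal self-signed CA can be bootstrapped with `openssl`. This is only a sketch with illustrative file names and a one-year validity; it is not a substitute for a properly managed CA:

```sh
# Generate the CA private key (keep this file secret; ideally encrypt it)
$ openssl genrsa -out ca-key.pem 4096

# Create a self-signed CA certificate valid for one year
$ openssl req -x509 -new -nodes -key ca-key.pem -days 365 \
    -out ca-cert.pem -subj "/CN=kube-ca"

# Hand both files to kube-aws
$ kube-aws render credentials --ca-cert-path=ca-cert.pem --ca-key-path=ca-key.pem
```
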
   133  ### Render cluster assets
   134  
   135  The next command generates the default set of cluster assets in your asset directory.
   136  
```sh
$ kube-aws render stack
```
   140  
   141  Here's what the directory structure looks like:
   142  
   143  ```sh
   144  $ tree
   145  .
   146  ├── cluster.yaml
   147  ├── credentials
   148  │   ├── admin-key.pem
   149  │   ├── admin.pem
   150  │   ├── apiserver-aggregator-key.pem
   151  │   ├── apiserver-aggregator.pem
   152  │   ├── apiserver-key.pem
   153  │   ├── apiserver.pem
   154  │   ├── ca-key.pem
   155  │   ├── ca.pem
   156  │   ├── encryption-config.yaml
   157  │   ├── etcd-client-key.pem
   158  │   ├── etcd-client.pem
   159  │   ├── etcd-key.pem
   160  │   ├── etcd-trusted-ca.pem -> ca.pem
   161  │   ├── etcd.pem
   162  │   ├── kiam-agent-key.pem
   163  │   ├── kiam-agent.pem
   164  │   ├── kiam-ca.pem
   165  │   ├── kiam-server-key.pem
   166  │   ├── kiam-server.pem
   167  │   ├── kube-controller-manager-key.pem
   168  │   ├── kube-controller-manager.pem
   169  │   ├── kube-scheduler-key.pem
   170  │   ├── kube-scheduler.pem
   171  │   ├── kubelet-tls-bootstrap-token
   172  │   ├── service-account-key.pem
   173  │   ├── tokens.csv
   174  │   ├── worker-ca-key.pem -> ca-key.pem
   175  │   ├── worker-ca.pem -> ca.pem
   176  │   ├── worker-key.pem
   177  │   └── worker.pem
   178  ├── kubeconfig
   179  ├── plugins
   180  │   └── aws-iam-authenticator
   181  │       ├── files
   182  │       │   ├── authentication-token-webhook-config.yaml
   183  │       │   ├── controller-kubeconfig.yaml
   184  │       │   └── worker-kubeconfig.yaml
   185  │       ├── manifests
   186  │       │   ├── aws-auth-cm.yaml
   187  │       │   └── daemonset.yaml
   188  │       └── plugin.yaml
   189  ├── stack-templates
   190  │   ├── control-plane.json.tmpl
   191  │   ├── etcd.json.tmpl
   192  │   ├── network.json.tmpl
   193  │   ├── node-pool.json.tmpl
   194  │   └── root.json.tmpl
   195  └── userdata
   196      ├── cloud-config-controller
   197      ├── cloud-config-etcd
   198      └── cloud-config-worker
   199  ```
   200  
   201  These assets (templates and credentials) are used to create, update and interact with your Kubernetes cluster.
   202  
At this point you should be ready to create your cluster. You can also now check the `my-cluster` asset directory into version control if you desire. The contents of this directory are your reproducible cluster assets. Please take care not to commit the `my-cluster/credentials` directory, but rather to encrypt it and/or move it to more secure storage, as it contains your TLS secrets and access tokens. If you're using git, the `credentials` directory will already be ignored for you.
   204  
   205  **PRODUCTION NOTE**: the TLS keys and certificates generated by `kube-aws` should *not* be used to deploy a production Kubernetes cluster.
   206  Each component certificate is only valid for 90 days, while the CA is valid for 365 days.
   207  If deploying a production Kubernetes cluster, consider establishing PKI independently of this tool first. [Read more below.][tls-note]
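
You can check the validity window of any generated certificate with `openssl`, for example:

```sh
# Print the expiry dates of the generated API server and CA certificates
$ openssl x509 -in credentials/apiserver.pem -noout -enddate
$ openssl x509 -in credentials/ca.pem -noout -enddate
```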
   208  
   209  **Did everything render correctly?**
   210  If you are familiar with Flatcar and the AWS platform, you may want to include some additional customizations or optional features. Read on below to explore more.
   211  
   212  [Yes, ready to launch the cluster][getting-started-step-3]
   213  
[View optional features & customizations below](#customizations-to-your-cluster)
   215  
   216  ## Customizations to your cluster
   217  
   218  You can now customize your cluster by editing asset files. Any changes to these files will require a `render` and `validate` workflow, covered below.
   219  
   220  ### Customize infrastructure
   221  
   222  * **cluster.yaml**
   223  
   224    This is the configuration file for your cluster. It contains the configuration parameters that are templated into your userdata and CloudFormation stack.
   225  
   226    Some common customizations are:
   227  
   228    - change the number of workers
   229    - specify tags applied to all created resources
   230    - create cluster inside an existing VPC
   231    - change controller and worker volume sizes
   232    <br/><br/>
   233  
   234  * **userdata/**
   235  
   236    * `cloud-config-worker`
   237    * `cloud-config-controller`
   238  
   239    This directory contains the [cloud-init](https://github.com/coreos/coreos-cloudinit) cloud-config userdata files. The Flatcar operating system supports automated provisioning via cloud-config files, which describe the various files, scripts and systemd actions necessary to produce a working cluster machine. These files are templated with your cluster configuration parameters and embedded into the CloudFormation stack template.
   240  
   241    Some common customizations are:
   242  
   243    - [mounting ephemeral disks][mount-disks]
  - [allow pods to mount RBD][rdb] or [iSCSI volumes][iscsi]
   245    - [allowing access to insecure container registries][insecure-registry]
   246    - [use host DNS configuration instead of a public DNS server][host-dns]
   247    - [changing your Flatcar auto-update settings][update]
   248    <br/><br/>
   249  
* **stack-templates/**

  These files describe the [AWS CloudFormation](https://aws.amazon.com/cloudformation/) stacks which encompass all the AWS resources associated with your cluster. These JSON templates are rendered with configuration parameters, as well as the encoded userdata files.
   253  
   254    Some common customizations are:
   255  
   256    - tweak AutoScaling rules and timing
   257    - instance IAM roles
   258    - customize security groups beyond the initial configuration
   259    <br/><br/>
   260  
   261  * **credentials/**
   262  
  This directory contains both encrypted and **unencrypted** TLS assets for your cluster, along with a pre-configured `kubeconfig` file which provides access to your cluster API via kubectl.
   264  
  You can also specify additional access tokens in `tokens.csv` as shown in the [official docs](https://kubernetes.io/docs/admin/authentication/#static-token-file); a sketch of the file format follows this list.
   266  
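As a sketch of that format, each line of the static token file is `token,user,uid,"group1,group2"`; the values below are placeholders, not working credentials:

```sh
# Append an extra static token for a hypothetical user (placeholder values)
$ echo '<token>,alice,1001,"developers"' >> credentials/tokens.csv
```
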
   267  [mount-disks]: https://coreos.com/os/docs/latest/mounting-storage.html
   268  [insecure-registry]: https://coreos.com/os/docs/latest/registry-authentication.html#using-a-registry-without-ssl-configured
   269  [update]: https://coreos.com/os/docs/latest/cloud-config.html#update
   270  
   271  ### Kubernetes Container Runtime
   272  
The kube-aws tool now optionally supports using rkt as the Kubernetes container runtime. To configure rkt as the container runtime you must use a Flatcar version >= `v1151.0.0` and configure the runtime flag.
   274  
   275  Edit the `cluster.yaml` file:
   276  
   277  ```yaml
   278  containerRuntime: rkt
   279  releaseChannel: stable
   280  ```
   281  
   282  Note that while using rkt as the runtime is now supported, it is still a new option as of the Kubernetes v1.4 release and has a few [known issues](http://kubernetes.io/docs/getting-started-guides/rkt/notes/).
   283  
   284  ### Calico network policy
   285  
The cluster can be optionally configured to use Calico to provide [network policy](http://kubernetes.io/docs/user-guide/networkpolicies/). These policies limit and control how different pods, namespaces, etc. can communicate with each other. These rules can be managed after the cluster is launched, but the feature needs to be turned on beforehand.
   287  
   288  Edit the `cluster.yaml` file:
   289  
   290  ```yaml
   291  useCalico: true
   292  ```
   293  
   294  ### Route53 Host Record
   295  
   296  `kube-aws` can create an ALIAS record for the controller's ELB in an existing Route53 hosted zone.
   297  
   298  Just run `kube-aws init` with the flag `--hosted-zone-id` to specify the id of the hosted zone in which the record is created.
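
If you don't know the zone ID offhand, it can be looked up by name; the domain below is a placeholder, and the returned ID includes a `/hostedzone/` prefix that you drop when passing it to kube-aws:

```sh
# Look up the hosted zone ID of an existing Route 53 zone by its name
$ aws route53 list-hosted-zones-by-name --dns-name example.com. \
    --query 'HostedZones[0].Id' --output text
```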
   299  
If you've run `kube-aws init` without the flag but with `--no-record-set`, edit the `cluster.yaml` file to add `loadBalancer.hostedZone.id` under the first item of `apiEndpoints`, and either set `recordSetManaged` to `true` or remove it:
   301  
   302  ```yaml
   303  apiEndpoints:
   304  - name: default
   305    dNSName: kubernetes.staging.example.com
   306    loadBalancer:
   307      hostedZone:
   308        id: A12B3CDE4FG5HI
   309  
   310  # DEPRECATED: use hostedZoneId instead
   311  #hostedZone: staging.example.com
   312  
   313  # DEPRECATED: use loadBalancer.hostedZone.id instead
   314  #hostedZoneId: A12B3CDE4FG5HI
   315  
   316  # DEPRECATED: use loadBalancer.createRecordSet instead
   317  # This is even implied to be true when loadBalancer.hostedZone.id is specified
   318  #createRecordSet: true
   319  
   320  ```
   321  
If `createRecordSet` is not set to true, the deployer will be responsible for making `externalDNSName` routable to the ELB managing the controller nodes after the cluster is created.
   323  
   324  ### Multi-AZ Clusters
   325  
   326  Kube-aws supports "spreading" a cluster across any number of Availability Zones in a given region.
   327  
   328  __A word of caution about EBS and Persistent Volumes__: Any pods deployed to a Multi-AZ cluster must mount EBS volumes via [Persistent Volume Claims](http://kubernetes.io/docs/user-guide/persistent-volumes/#persistentvolumeclaims). Specifying the ID of the EBS volume directly in the pod spec will not work consistently if nodes are spread across multiple zones.
   329  
   330  Read more about Kubernetes Multi-AZ cluster support [here](https://kubernetes.io/docs/setup/best-practices/multiple-zones/).
   331  
   332  #### A common pitfall when deploying multi-AZ clusters in combination with cluster-autoscaler
   333  
   334  [cluster-autoscaler](https://github.com/kubernetes/contrib/tree/master/cluster-autoscaler) is a tool that automatically adjusts the number of Kubernetes worker nodes when:
   335  
   336  > * there is a pod that doesn’t have enough space to run in the cluster
   337  > * some nodes in the cluster are so underutilized, for an extended period of time, that they can be deleted and their pods will be easily placed on some other, existing nodes.
   338  > https://github.com/kubernetes/contrib/tree/master/cluster-autoscaler
   339  
A common pitfall when deploying cluster-autoscaler to a multi-AZ cluster is that you have to instruct each Auto Scaling Group not to spread over multiple availability zones; otherwise cluster-autoscaler becomes unstable while scaling out nodes, i.e. it takes unnecessarily long to finally bring up a node in the zone that lacks capacity.
   341  
   342  > The autoscaling group should span 1 availability zone for the cluster autoscaler to work. If you want to distribute workloads evenly across zones, set up multiple ASGs, with a cluster autoscaler for each ASG. At the time of writing this, cluster autoscaler is unaware of availability zones and although autoscaling groups can contain instances in multiple availability zones when configured so, the cluster autoscaler can't reliably add nodes to desired zones. That's because AWS AutoScaling determines which zone to add nodes which is out of the control of the cluster autoscaler. For more information, see https://github.com/kubernetes/contrib/pull/1552#discussion_r75533090.
   343  > https://github.com/kubernetes/autoscaler/blob/master/cluster-autoscaler/cloudprovider/aws/README.md#deployment-specification
   344  
Please read the following guides carefully and select the appropriate deployment according to your auto-scaling requirements.
   346  
#### For production clusters not requiring cluster-autoscaler
   348  
If you don't need auto-scaling at all, or you need only AWS-native auto-scaling (i.e. a combination of Auto Scaling Groups, scaling policies, and CloudWatch alarms), then theoretically you can safely go this way.
   351  
Edit the `cluster.yaml` file to define multiple subnets, each in a different availability zone, to make the cluster multi-AZ:
   353  
   354  ```yaml
   355   subnets:
   356     - availabilityZone: us-west-1a
   357       instanceCIDR: "10.0.0.0/24"
   358     - availabilityZone: us-west-1b
   359       instanceCIDR: "10.0.1.0/24"
   360  ```
   361  
This implies that you rely on AWS Auto Scaling to select which subnet, and hence which availability zone, to add a node to when an Auto Scaling Group's `DesiredCapacity` is increased.
   363  
   364  Please read [the AWS documentation for more details about AWS Auto Scaling](http://docs.aws.amazon.com/autoscaling/latest/userguide/WhatIsAutoScaling.html).
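
Once the cluster is up, one way to confirm how your Auto Scaling Groups ended up spread across zones is the following (group names depend on your stack):

```sh
# Show each Auto Scaling Group together with the availability zones it spans
$ aws autoscaling describe-auto-scaling-groups \
    --query 'AutoScalingGroups[].[AutoScalingGroupName,AvailabilityZones]' --output table
```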
   365  
   366  ### Certificates and Keys
   367  
`kube-aws render` begins by initializing the TLS infrastructure needed to securely operate Kubernetes. If you have your own key/certificate management system, you can overwrite the generated TLS assets after `kube-aws render`. More information on [Kubernetes certificate generation][k8s-openssl].
   369  
   370  When `kube-aws apply` creates the cluster stack, it will use whatever TLS assets it finds in the `credentials` folder at the time.
   371  
   372  This includes the certificate authority, signed server certificates for the Kubernetes API server and workers, and a signed client certificate for administrative use.
   373  
   374  * **APIServerCert, APIServerKey**
   375  
  The API server certificate will be valid for the value of `externalDNSName`, as well as the DNS names used to route Kubernetes API requests inside the cluster.
   377  
  `kube-aws` does *not* manage a DNS zone for the cluster unless `--hosted-zone-id` is given.
  This means that otherwise the deployer is responsible for ensuring the routability of the external DNS name to the ELB fronting the Kubernetes API.
   380  
   381    The certificate and key granted to the kube-apiserver.
   382    This certificate will be presented to external clients of the Kubernetes cluster, so it should be valid for external DNS names, if necessary.
   383  
   384    Additionally, the certificate must have the following Subject Alternative Names (SANs).
   385    These IPs and DNS names are used within the cluster to route from applications to the Kubernetes API:
   386  
   387      - 127.0.0.1
   388      - 10.0.0.50
   389      - 10.3.0.1
   390      - kubernetes
   391      - kubernetes.default
   392      - kubernetes.default.svc
   393      - kubernetes.default.svc.cluster.local
   394  
   395  
   396  * **WorkerCert, WorkerKey**
   397  
   398    The certificate and key granted to the kubelets on worker instances.
   399    The certificate is shared across all workers, so it must be valid for all worker hostnames.
  This is achievable with the Subject Alternative Name (SAN) `*.*.compute.internal`, or `*.ec2.internal` if using the us-east-1 AWS region. (A quick way to inspect the SANs on a generated certificate is shown after this list.)
   401  
   402  * **CACert**
   403  
   404    The certificate authority's TLS certificate is used to sign other certificates in the cluster.
   405  
  These assets are stored unencrypted in your `credentials` folder, but are encrypted using Amazon KMS before being embedded in the CloudFormation template.
   407  
   408    All keys and certs must be PEM-formatted and base64-encoded.
   409  
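One way to inspect the SANs (and other details) on any of the generated certificates is with `openssl`, for example:

```sh
# Show the Subject Alternative Names embedded in the worker certificate
$ openssl x509 -in credentials/worker.pem -noout -text | grep -A1 'Subject Alternative Name'
```
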
   410  ## Render and validate cluster assets
   411  
   412  After you have completed your customizations, re-render your assets with the new settings:
   413  
   414  ```sh
   415  $ kube-aws render credentials
   416  $ kube-aws render stack
   417  ```
   418  
The `validate` command checks the validity of your changes to the cloud-config userdata files and the CloudFormation stack description.
   420  
   421  This is an important step to make sure your stack will launch successfully:
   422  
   423  ```sh
   424  $ kube-aws validate
   425  ```
   426  
   427  If your files are valid, you are ready to [launch your cluster][getting-started-step-3].
   428  
   429  [getting-started-step-1]: step-1-configure.md
   430  [getting-started-step-2]: step-2-render.md
   431  [getting-started-step-3]: step-3-launch.md
   432  [getting-started-step-4]: step-4-update.md
   433  [getting-started-step-5]: step-5-add-node-pool.md
   434  [getting-started-step-6]: step-6-configure-add-ons.md
   435  [getting-started-step-7]: step-7-destroy.md
   436  [k8s-openssl]: openssl.md
   437  [tls-note]: #certificates-and-keys
   438  [route53]: https://aws.amazon.com/route53/
   439  [rdb]: https://github.com/coreos/coreos-kubernetes/blob/master/Documentation/kubelet-wrapper.md#allow-pods-to-use-rbd-volumes
   440  [iscsi]: https://github.com/coreos/coreos-kubernetes/blob/master/Documentation/kubelet-wrapper.md#allow-pods-to-use-iscsi-mounts
   441  [host-dns]: https://github.com/coreos/coreos-kubernetes/blob/master/Documentation/kubelet-wrapper.md#use-the-hosts-dns-configuration
   442  [node-pool]: kubernetes-on-aws-node-pool.md