
# Configuring Kubernetes Add-ons

kube-aws has built-in support for several Kubernetes add-ons that are known to require additional configuration beforehand.

## cluster-autoscaler

[cluster-autoscaler](https://github.com/kubernetes/autoscaler/tree/master/cluster-autoscaler) is an add-on which automatically
scales your k8s cluster in/out by removing/adding worker nodes according to per-node resource utilization.

To enable cluster-autoscaler, add the following settings to your cluster.yaml:

```yaml
addons:
  clusterAutoscaler:
    enabled: true
worker:
  nodePools:
  - name: scaled
    autoScalingGroup:
      minSize: 1
      maxSize: 10
    autoscaling:
      clusterAutoscaler:
        enabled: true
  - name: notScaled
    autoScalingGroup:
      minSize: 2
      maxSize: 4
```

The above example configuration would:

* By `addons.clusterAutoscaler.enabled`:
  * Provide controller nodes with the IAM permissions CA needs to call the necessary AWS APIs
  * Create a k8s deployment to run CA on one of the controller nodes, so that CA can utilize those IAM permissions
* By `worker.nodePools[0].autoscaling.clusterAutoscaler.enabled`:
  * If there are unschedulable, pending pod(s) requesting more capacity, CA will add more nodes to the `scaled` node pool, up to the max size of `10` (see the example deployment below)
  * If there are no unschedulable, pending pod(s) waiting for more capacity and one or more nodes are under-utilized, CA will remove node(s), down to the min size of `1`
* The second node pool, `notScaled`, is scaled manually by YOU, because autoscaling is not enabled for it (it is missing `autoscaling.clusterAutoscaler.enabled`)
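
For example, a deployment like the following is enough to exercise the scale-out path: when its replicas request more CPU/memory than the `scaled` pool currently has free, the excess pods stay pending and CA adds nodes. This is a minimal sketch; the name, image, and resource figures are illustrative and not part of kube-aws.

```yaml
# Illustrative workload: any pods whose resource requests exceed the free
# capacity of the "scaled" pool will stay pending and trigger a scale-out.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: capacity-demo   # hypothetical name
spec:
  replicas: 20
  selector:
    matchLabels:
      app: capacity-demo
  template:
    metadata:
      labels:
        app: capacity-demo
    spec:
      containers:
      - name: pause
        image: k8s.gcr.io/pause:3.1
        resources:
          requests:
            cpu: 500m       # per-pod request; 20 replicas may not fit on the current nodes
            memory: 256Mi
```

Once the extra pods are gone and node utilization drops, CA scales the pool back down toward `minSize`.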

## kube2iam / kiam

[kube2iam](https://github.com/jtblin/kube2iam) and [kiam](https://github.com/uswitch/kiam) are add-ons which provide IAM credentials for target IAM roles to pods running inside a Kubernetes cluster, based on pod annotations (an example annotation is shown at the end of this section).
To allow kube2iam or kiam deployed to worker and controller nodes to assume the target roles, you need the following configurations.

1. IAM roles associated with worker and controller nodes require an IAM policy:

  ```json
  {
    "Action": "sts:AssumeRole",
    "Resource": "*",
    "Effect": "Allow"
  }
  ```

  To add the policy to controller nodes, set `kubeAwsPlugins.kube2iam.enabled` or `kubeAwsPlugins.kiam.enabled` to `true` in your `cluster.yaml` (but not both).

2. Target IAM roles need their trust relationships changed so that the kube-aws worker/controller IAM roles are allowed to assume them.

  As CloudFormation generates unpredictable role names containing random IDs by default, it is recommended to make them predictable first so that you can easily automate configuring trust relationships afterwards.
  To make worker/controller role names predictable, set `controller.iam.role.name` for controller nodes and `worker.nodePools[].iam.role.name` for worker nodes.
  Each `iam.role.name` becomes a suffix of the resulting worker/controller role name.

  Please beware that configuring the target roles' trust relationships is out of scope for kube-aws.
  Please see [the relevant part of the kube2iam doc](https://github.com/jtblin/kube2iam#iam-roles) or [the relevant part of the kiam doc](https://github.com/uswitch/kiam/blob/master/docs/IAM.md) for more information.
  Basically, you need to point `Principal` to the ARN of a resulting worker/controller IAM role, which would look like `arn:aws:iam::<your aws account id>:role/<stack-name>-<managed iam role name>`, as sketched below.
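
For example, a target role's trust relationship (its assume role policy document) allowing one of those kube-aws managed roles to assume it would look roughly like the following sketch; the placeholders follow the ARN pattern above and must be replaced with your actual account ID, stack name, and role name:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "sts:AssumeRole",
      "Principal": {
        "AWS": "arn:aws:iam::<your aws account id>:role/<stack-name>-<managed iam role name>"
      }
    }
  ]
}
```

The CloudFormation template at the end of this page achieves the same thing by importing the role ARNs exported by the kube-aws stack.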

Finally, an example `cluster.yaml` usable with kube2iam would look like:

```yaml
# for controller nodes
controller:
  iam:
    role:
      name: mycontrollerrole

kubeAwsPlugins:
  kube2iam:
    enabled: true

# for worker nodes
worker:
  nodePools:
  - name: mypool
    iam:
      role:
        name: myworkerrole
```

See the relevant GitHub issues for [kube2iam](https://github.com/kubernetes-incubator/kube-aws/issues/253) and [kiam](https://github.com/kubernetes-incubator/kube-aws/issues/1055) for more information.
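
With the configuration above in place and kube2iam or kiam deployed, a pod requests credentials for a target role via the `iam.amazonaws.com/role` annotation, which both add-ons read. A minimal sketch, assuming a target role named `my-app-role` that already trusts the worker role:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: aws-client                         # hypothetical pod name
  annotations:
    # kube2iam/kiam serve this pod temporary credentials for the annotated role
    # in place of the node's own instance profile credentials.
    iam.amazonaws.com/role: my-app-role    # assumed target role name
spec:
  containers:
  - name: app
    image: amazon/aws-cli                  # illustrative image
    command: ["aws", "sts", "get-caller-identity"]
```

Note that kiam additionally requires the pod's namespace to whitelist the permitted roles via a namespace annotation; see the kiam IAM docs linked above for details.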

You can reference controller and worker IAM Roles in a separate CloudFormation stack that provides roles to assume:

```yaml
...
Parameters:
  KubeAWSStackName:
    Type: String
Resources:
  IAMRole:
    Type: AWS::IAM::Role
    Properties:
      AssumeRolePolicyDocument:
        Statement:
          - Effect: Allow
            Action: sts:AssumeRole
            Principal:
              Service: ec2.amazonaws.com
          - Effect: Allow
            Action: sts:AssumeRole
            Principal:
              AWS:
                Fn::ImportValue: !Sub "${KubeAWSStackName}-ControllerIAMRoleArn"
          - Effect: Allow
            Action: sts:AssumeRole
            Principal:
              AWS:
                Fn::ImportValue: !Sub "${KubeAWSStackName}-NodePool<Node Pool Name>WorkerIAMRoleArn"
      ...
```

When you are done with your cluster, [destroy your cluster][getting-started-step-7].

[getting-started-step-1]: step-1-configure.md
[getting-started-step-2]: step-2-render.md
[getting-started-step-3]: step-3-launch.md
[getting-started-step-4]: step-4-update.md
[getting-started-step-5]: step-5-add-node-pool.md
[getting-started-step-6]: step-6-configure-add-ons.md
[getting-started-step-7]: step-7-destroy.md