# Node Pool

Node Pools allow you to bring up additional pools of worker nodes, each with a separate configuration including:

* Instance Type
* Storage Type/Size/IOPS
* Instance Profile
* Additional, User-Provided Security Group(s)
* Spot Price
* AWS service to manage your EC2 instances: [Auto Scaling](http://docs.aws.amazon.com/autoscaling/latest/userguide/WhatIsAutoScaling.html) or [Spot Fleet](http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/spot-fleet.html)
* [Node labels](http://kubernetes.io/docs/user-guide/node-selection/)
* [Taints](https://github.com/kubernetes/kubernetes/issues/17190)

## Deploying a Multi-AZ cluster with cluster-autoscaler support with Node Pools

kube-aws creates a node pool in a single AZ by default.
To achieve a Multi-AZ deployment, you can add one or more node pools in other AZs.

Assuming you already have a subnet and a node pool in that subnet:

```yaml
subnets:
- name: managedPublicSubnetIn1a
  availabilityZone: us-west-1a
  instanceCIDR: 10.0.0.0/24

worker:
  nodePools:
  - name: pool1
    subnets:
    - name: managedPublicSubnetIn1a
```

Edit the `cluster.yaml` file to add the second node pool:

```yaml
subnets:
- name: managedPublicSubnetIn1a
  availabilityZone: us-west-1a
  instanceCIDR: 10.0.0.0/24
- name: managedPublicSubnetIn1c
  availabilityZone: us-west-1c
  instanceCIDR: 10.0.1.0/24

worker:
  nodePools:
  - name: pool1
    subnets:
    - name: managedPublicSubnetIn1a
  - name: pool2
    subnets:
    - name: managedPublicSubnetIn1c
```

Launch the second node pool by running `kube-aws apply`:

```
$ kube-aws apply
```

Beware that you should associate only one AZ with each node pool, or cluster-autoscaler may fail to reliably add nodes on demand. This is because cluster-autoscaler works by increasing or decreasing an auto scaling group's desired capacity, so it has no way to selectively add nodes in a particular AZ.

Also note that deployment of cluster-autoscaler is currently out of the scope of this documentation.
Please read [cluster-autoscaler's documentation](https://github.com/kubernetes/autoscaler/blob/master/cluster-autoscaler/cloudprovider/aws/README.md#deployment-specification) for instructions.

## Customizing min/max size of the auto scaling group

If you've chosen to power your worker nodes in a node pool with an auto scaling group, you can customize `MinSize`, `MaxSize`, and `RollingUpdateMinInstancesInService` in `cluster.yaml`:

```yaml
worker:
  nodePools:
  - name: pool1
    autoScalingGroup:
      minSize: 1
      maxSize: 3
      rollingUpdateMinInstancesInService: 2
```

Please read [the AWS documentation](http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-as-group.html#aws-properties-as-group-prop) for more information on `MinSize`, `MaxSize`, and `MinInstancesInService` for ASGs.

See [the detailed comments in `cluster.yaml.tmpl`](https://github.com/kubernetes-incubator/kube-aws/blob/master/builtin/files/cluster.yaml.tmpl) for further information.

## Deploying a node pool powered by Spot Fleet

Utilizing Spot Fleet can dramatically reduce the cost of the EC2 instances powering your Kubernetes worker nodes while maintaining reasonable availability.
AWS advertises cost reductions of up to 90%, though the actual savings vary by instance type and other users' bids.

Spot Fleet support is still an experimental feature and may change in backward-incompatible ways, so please use it at your own risk.
However, feedback is greatly appreciated, as it accelerates improvements in this area!
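To make the savings arithmetic concrete, here is a minimal sketch; the prices below are hypothetical examples for illustration, not current AWS rates:

```python
# Hypothetical prices in USD per hour -- NOT current AWS rates.
on_demand_price = 0.100
spot_price = 0.015  # an example winning spot bid

# Fractional savings relative to the on-demand price.
savings = 1 - spot_price / on_demand_price
print(f"savings: {savings:.0%}")  # prints "savings: 85%"
```

The actual spot price fluctuates with market demand, which is why the realized savings vary among instance types and over time.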
### Known Limitations

* Running `kube-aws apply` to increase or decrease the `targetCapacity` of a Spot Fleet results in a complete replacement of the Spot Fleet, and hence some downtime. [This is due to how CloudFormation updates a Spot Fleet](http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-ec2-spotfleet.html#d0e60520).
* It is recommended to temporarily bring up a spare node pool to maintain overall cluster capacity while the Spot Fleet is being replaced.

### Pre-requisites

This feature assumes you already have an IAM role with an ARN like `arn:aws:iam::youraccountid:role/aws-ec2-spot-fleet-role` in your AWS account, which implies you've visited the "Spot Requests" page of the EC2 Dashboard in the AWS console at least once.
See [the AWS documentation describing pre-requisites for Spot Fleet](http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/spot-fleet-requests.html#spot-fleet-prerequisites) for details.

### Steps

To add a node pool powered by Spot Fleet, edit the node pool's settings in `cluster.yaml`:

```yaml
worker:
  nodePools:
  - name: pool1
    spotFleet:
      targetCapacity: 3
```

To diversify your pool among instance types other than the defaults, customize the launch specifications in `cluster.yaml`:

```yaml
worker:
  nodePools:
  - name: pool1
    spotFleet:
      targetCapacity: 5
      launchSpecifications:
      - weightedCapacity: 1
        instanceType: t2.medium
      - weightedCapacity: 2
        instanceType: m3.large
      - weightedCapacity: 2
        instanceType: m4.large
```

This configuration would normally result in Spot Fleet bringing up three instances to meet your target capacity of 5:

* 1x t2.medium = 1 capacity
* 1x m3.large = 2 capacity
* 1x m4.large = 2 capacity

This is achieved by the `diversified` allocation strategy of Spot Fleet.
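The capacity arithmetic behind the breakdown above can be sketched as follows: the fulfilled capacity is the sum, over all launch specifications, of the instance count times its `weightedCapacity`. This is an illustration only, not kube-aws or AWS code:

```python
# Launch specifications and instance counts from the example above.
launch_specs = {
    "t2.medium": {"weightedCapacity": 1, "count": 1},
    "m3.large":  {"weightedCapacity": 2, "count": 1},
    "m4.large":  {"weightedCapacity": 2, "count": 1},
}
target_capacity = 5

# Fulfilled capacity = sum of (instance count x weightedCapacity).
fulfilled = sum(s["count"] * s["weightedCapacity"] for s in launch_specs.values())
assert fulfilled >= target_capacity
print(fulfilled)  # prints 5
```

In other words, three instances of mixed types satisfy a target capacity of 5 because the two `large` types each contribute a weight of 2.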
Please read [the AWS documentation describing the Spot Fleet allocation strategy](http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/spot-fleet.html#spot-fleet-allocation-strategy) for more details.

Please also see [the detailed comments in `cluster.yaml.tmpl`](https://github.com/kubernetes-incubator/kube-aws/blob/master/builtin/files/cluster.yaml.tmpl) and [the GitHub issue summarizing the initial implementation](https://github.com/kubernetes-incubator/kube-aws/issues/112) of this feature for further information.

You can optionally [configure various Kubernetes add-ons][getting-started-step-6] according to your requirements.
When you are done with your cluster, [destroy it][getting-started-step-7].

[getting-started-step-1]: step-1-configure.md
[getting-started-step-2]: step-2-render.md
[getting-started-step-3]: step-3-launch.md
[getting-started-step-4]: step-4-update.md
[getting-started-step-5]: step-5-add-node-pool.md
[getting-started-step-6]: step-6-configure-add-ons.md
[getting-started-step-7]: step-7-destroy.md