## Introduction

This Terraform project creates the infrastructure needed to run a
cluster on top of AWS using EC2 instances.

## Machine access

Your public SSH key must be present in `authorized_keys`; this is handled by
`cloud-init` through a Terraform variable called `authorized_keys`.

All the instances have a `root` and an `ec2-user` user. The `ec2-user` user can
perform `sudo` without specifying a password.
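
For example, assuming `3.121.219.168` is the public IP of one of your master
nodes (a placeholder value, in the style of the sample Terraform output shown
later), you can log in as `ec2-user` and become `root` without a password
prompt:

```console
$ ssh ec2-user@3.121.219.168
$ sudo -i
```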

Only the master nodes have a public IP associated with them. All the worker
nodes are located on a private subnet.

The network structure resembles the one described in this
[AWS document](https://docs.aws.amazon.com/vpc/latest/userguide/VPC_Scenario2.html).

## Load balancer

The deployment will also create an AWS load balancer sitting in front of the
Kubernetes API server. Its FQDN is the control plane endpoint to use when
defining the cluster.
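
Once the infrastructure has been created (see the next section), you can read
this address back from the Terraform state, for example:

```console
$ terraform output elb_address
```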

## Starting the cluster

### Configuration

You can use the `terraform.tfvars.example` as a template for configuring
the Terraform variables, or create your own file like `my-cluster.auto.tfvars`:

```sh
# customize any variables defined in variables.tf
stack_name = "my-k8s-cluster"

access_key = "<KEY>"

secret_key = "<SECRET>"

authorized_keys = ["ssh-rsa AAAAB3NzaC1y..."]
```

The Terraform files will create a new dedicated VPC for the Kubernetes cluster.
It is possible to peer this VPC with other existing ones by specifying their IDs
in the `peer_vpc_ids` variable.
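
For example, to peer with a single existing VPC (the VPC ID below is a
placeholder), you could add the following to your `tfvars` file:

```sh
# IDs of existing VPCs to peer with the cluster VPC (placeholder ID)
peer_vpc_ids = ["vpc-0123456789abcdef0"]
```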

### Creating the infrastructure

You can create the infrastructure by initializing and then _applying_ the
Terraform configuration:

```sh
$ terraform init
$ terraform apply
```

Alternatively, you can pass some of the configuration through environment
variables on the command line, like this:

```sh
$ TF_VAR_authorized_keys=\[\"`cat ~/.ssh/id_rsa.pub`\"\] terraform plan
```

### Creating the Kubernetes cluster

Once the infrastructure has been created, you can obtain the details with
`terraform output`:

```console
$ terraform output
control_plane.private_dns = {
    i-1234567890abcdef0 = ip-10-1-1-55.eu-central-1.compute.internal
}
control_plane.public_ip = {
    i-1234567890abcdef1 = 3.121.219.168
}
elb_address = k8s-elb-1487845812.eu-central-1.elb.amazonaws.com
nodes.private_dns = {
    i-1234567890abcdef2 = ip-10-1-1-157.eu-central-1.compute.internal
}
```

Then you can initialize the cluster with `skuba cluster init`, using the
Load Balancer (`elb_address` in the Terraform output) as the control plane
endpoint:

```console
$ skuba cluster init --control-plane k8s-elb-1487845812.eu-central-1.elb.amazonaws.com --cloud-provider aws my-devenv-cluster
** This is a BETA release and NOT intended for production usage. **
[init] configuration files written to /home/user/my-devenv-cluster
```

At this point we can bootstrap the first master:

```console
$ cd my-devenv-cluster
$ skuba node bootstrap --user ec2-user --sudo --target ip-10-1-1-55.eu-central-1.compute.internal ip-10-1-1-55.eu-central-1.compute.internal
```

And then you can add a worker node with:

```console
$ cd my-devenv-cluster
$ skuba node join --role worker --user ec2-user --sudo --target ip-10-1-1-157.eu-central-1.compute.internal ip-10-1-1-157.eu-central-1.compute.internal
```

### Using the cluster

You must first point the `KUBECONFIG` environment variable to the `admin.conf`
file created in the cluster configuration directory:

```console
$ export KUBECONFIG=/home/user/my-devenv-cluster/admin.conf
```

And then you are ready to run `kubectl` commands like:

```console
$ kubectl get nodes
```

## Enable Cloud Provider Interface

### Requirements

Before proceeding you must have created IAM policies matching the ones described
[here](https://github.com/kubernetes/cloud-provider-aws#iam-policy), one
for the master nodes and one for the worker nodes.

By default these Terraform files **do not** create these policies, because some
corporate users are entitled to create AWS resources but, for security reasons,
do not have the privileges to create new IAM policies.

If you do not have the privileges to create IAM policies, ask your organization
to create them. Once this is done, specify their names in the following
Terraform variables (see the example below):

  * `iam_profile_master`
  * `iam_profile_worker`

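A minimal sketch of the corresponding `tfvars` entries, assuming your
organization created profiles named `k8s-master-profile` and
`k8s-worker-profile` (placeholder names):

```sh
# names of the pre-created IAM profiles (placeholder values)
iam_profile_master = "k8s-master-profile"
iam_profile_worker = "k8s-worker-profile"
```
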
**Note well:** this must be done before the infrastructure is created.

On the other hand, if you have the privileges to create IAM policies, you can
let these Terraform files take care of that for you: this is done automatically
when the `iam_profile_master` and `iam_profile_worker` variables are left
unspecified.

### Cluster creation

The cloud provider integration must be enabled when creating the cluster
definition:

```
skuba cluster init --control-plane <ELB created by terraform> --cloud-provider aws my-cluster
```

**WARNING:** nodes must be bootstrapped/joined using their FQDN so that the
CPI can find them. For example:

```
$ skuba node bootstrap -u ec2-user -s -t ip-172-28-1-225.eu-central-1.compute.internal ip-172-28-1-225.eu-central-1.compute.internal
$ skuba node join --role worker -u ec2-user -s -t ip-172-28-1-15.eu-central-1.compute.internal ip-172-28-1-15.eu-central-1.compute.internal
...
```

Refer to the [AWS Cloud Provider](https://kubernetes.io/docs/concepts/cluster-administration/cloud-providers/#aws)
documentation for details on how to use these features in your cluster.

## Known limitations

### IP addresses

`skuba` cannot currently access nodes through a bastion host, so all
the nodes in the cluster must be directly reachable from the machine where
`skuba` is being run. Also note that `skuba` must use the external IPs as
`--target`s when bootstrapping or joining nodes, while the internal DNS names
must be used to register the nodes in the cluster.
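
A sketch of what that looks like, reusing placeholder addresses in the style
of the sample Terraform output above (the public IP is the `--target`, the
internal DNS name is what registers the node):

```console
$ skuba node bootstrap --user ec2-user --sudo --target 3.121.219.168 ip-10-1-1-55.eu-central-1.compute.internal
```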

### Availability zones

Currently, all the nodes are created in the same availability zone.