# Accessing cluster instances

## Overview

After running `clusterctl generate cluster` to generate the configuration for a new workload cluster (and then redirecting that output to a file for use with `kubectl apply`, or piping it directly to `kubectl apply`), the new workload cluster will be deployed. This document explains how to access the new workload cluster's nodes.
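
For example, a minimal sketch of that workflow might look like the following; the cluster name, Kubernetes version, and machine counts are illustrative placeholders, not values this document prescribes:

```bash
# Generate the workload cluster manifest and save it to a file
# (cluster name and flag values below are placeholders).
clusterctl generate cluster my-cluster \
  --kubernetes-version v1.22.0 \
  --control-plane-machine-count=3 \
  --worker-machine-count=3 \
  > my-cluster.yaml

# Apply the manifest to the management cluster to create the workload cluster.
kubectl apply -f my-cluster.yaml
```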

## Prerequisites

1. `clusterctl generate cluster` was successfully executed to generate the configuration for a new workload cluster.
2. The configuration for the new workload cluster was applied to the management cluster using `kubectl apply`, and the cluster is up and running in an AWS environment.
3. The SSH key referenced by `clusterctl` in step 1 exists in AWS and is stored in the correct location locally for use by SSH (on macOS/Linux systems, this is typically `$HOME/.ssh`). This document will refer to this key as `cluster-api-provider-aws.sigs.k8s.io`.
4. _(If using AWS Session Manager)_ The AWS CLI and the Session Manager plugin have been installed and configured; a quick verification sketch follows this list.
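
If you want to confirm both tools are in place before proceeding, the checks below are a reasonable smoke test; the exact output of `session-manager-plugin` varies by version, but it prints a confirmation message when installed correctly:

```bash
# Verify the AWS CLI is installed and on the PATH.
aws --version

# Running the plugin with no arguments prints a confirmation
# message if it is installed correctly.
session-manager-plugin
```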

## Methods for accessing nodes

There are two ways to access cluster nodes once the workload cluster is up and running:

* via SSH
* via AWS Session Manager

### Accessing nodes via SSH

By default, workload clusters created in AWS will _not_ support access via SSH apart from AWS Session Manager (see the section titled "Accessing nodes via AWS Session Manager"). However, the manifest for a workload cluster can be modified to include an SSH bastion host, created and managed by the management cluster, to enable SSH access to cluster nodes. The bastion node is created in a public subnet and provides SSH access from the world. It runs the official Ubuntu Linux image.

#### Enabling the bastion host

To configure the Cluster API Provider for AWS to create an SSH bastion host, add this line to the AWSCluster spec:

```yaml
spec:
  bastion:
    enabled: true
```

If this field is set and a specific AMI ID is not provided for the bastion (via `spec.bastion.ami`), the CAPA controller looks up the latest Ubuntu 20.04 LTS AMI from [Ubuntu cloud images](https://ubuntu.com/server/docs/cloud-images/amazon-ec2) by default and uses it to create the bastion host.
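
If you would rather pin the bastion to a specific image than rely on the automatic lookup, set `spec.bastion.ami` yourself; the AMI ID below is a placeholder to be replaced with an image that exists in your region:

```yaml
spec:
  bastion:
    enabled: true
    # Placeholder AMI ID -- substitute an image available in your region.
    ami: ami-0123456789abcdef0
```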

#### Obtain public IP address of the bastion node

Once the workload cluster is up and running after being configured for an SSH bastion host, you can use the `kubectl get awscluster` command to look up the public IP address of the bastion host (make sure the `kubectl` context is set to the management cluster). The output will look something like this:

```bash
NAME   CLUSTER   READY   VPC                     BASTION IP
test   test      true    vpc-1739285ed052be7ad   1.2.3.4
```
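
If you would like that address in an environment variable for the SSH commands later in this document, a sketch like the following should work; it assumes the bastion's public IP is published at `.status.bastion.publicIp` on the AWSCluster object (the field backing the `BASTION IP` column above) and uses the cluster name `test` from the example output:

```bash
# Capture the bastion's public IP from the AWSCluster status
# (cluster name "test" is taken from the example output above).
export BASTION_HOST=$(kubectl get awscluster test \
  -o jsonpath='{.status.bastion.publicIp}')
```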

#### Setting up the SSH key path

Assuming that the `cluster-api-provider-aws.sigs.k8s.io` SSH key is stored in
`$HOME/.ssh/cluster-api-provider-aws`, use this command to set up an environment variable for use in a later command:

```bash
export CLUSTER_SSH_KEY=$HOME/.ssh/cluster-api-provider-aws
```

#### Get private IP addresses of nodes in the cluster

To get the private IP addresses of nodes in the cluster (nodes may be control plane nodes or worker nodes), use this `kubectl` command with the context set to the workload cluster:

```bash
kubectl get nodes -o custom-columns=NAME:.metadata.name,\
IP:"{.status.addresses[?(@.type=='InternalIP')].address}"
```

This will produce output that looks like this:

```bash
NAME                                         IP
ip-10-0-0-16.us-west-2.compute.internal   10.0.0.16
ip-10-0-0-68.us-west-2.compute.internal   10.0.0.68
```

The above command returns IP addresses of the nodes in the cluster. In this
case, the values returned are `10.0.0.16` and `10.0.0.68`.

### Connecting to the nodes via SSH

To access one of the nodes (either a control plane node or a worker node) via the SSH bastion host, use this command if you are using a non-EKS cluster:

```bash
ssh -i ${CLUSTER_SSH_KEY} ubuntu@<NODE_IP> \
	-o "ProxyCommand ssh -W %h:%p -i ${CLUSTER_SSH_KEY} ubuntu@${BASTION_HOST}"
```

And use this command if you are using an EKS-based cluster:

```bash
ssh -i ${CLUSTER_SSH_KEY} ec2-user@<NODE_IP> \
	-o "ProxyCommand ssh -W %h:%p -i ${CLUSTER_SSH_KEY} ubuntu@${BASTION_HOST}"
```

If you followed the steps above, the value of `<NODE_IP>` will be either
`10.0.0.16` or `10.0.0.68`.

Alternatively, users can add a configuration stanza to their SSH configuration file (typically found on macOS/Linux systems as `$HOME/.ssh/config`):

```text
Host 10.0.*
  User ubuntu
  IdentityFile <CLUSTER_SSH_KEY>
  ProxyCommand ssh -W %h:%p ubuntu@<BASTION_HOST>
```
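
With that stanza in place (and the placeholders replaced with your key path and bastion address), connecting to a node collapses to a plain `ssh` invocation, for example:

```bash
# The Host pattern above matches the node's private IP, so the bastion
# ProxyCommand and identity file are applied automatically.
ssh 10.0.0.16
```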

### Accessing nodes via AWS Session Manager

All CAPA-published AMIs based on Ubuntu have the AWS SSM Agent pre-installed (as a Snap package; this was added in June 2018 to the base Ubuntu Server image for all 16.04 and later AMIs). This allows users to access cluster nodes directly, without the need for an SSH bastion host, using the AWS CLI and the Session Manager plugin.
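
To confirm that the agents on your nodes have registered with Session Manager before connecting, you can list the managed instances; this is a general AWS CLI check rather than anything CAPA-specific:

```bash
# List instances whose SSM agent has registered with Session Manager;
# cluster nodes should appear here once their agents come online.
aws ssm describe-instance-information \
  --query 'InstanceInformationList[].{Id:InstanceId,Ping:PingStatus}' \
  --output table
```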

To access a cluster node (control plane node or worker node), you'll need the instance ID. You can retrieve the instance ID using this `kubectl` command with the context set to the management cluster:

```bash
kubectl get awsmachines -o custom-columns=NAME:.metadata.name,INSTANCEID:.spec.providerID
```

This will produce output similar to this:

```bash
NAME                      INSTANCEID
test-controlplane-52fhh   aws:////i-112bac41a19da1819
test-controlplane-lc5xz   aws:////i-99aaef2381ada9228
```
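
To strip that prefix in a script rather than by eye, taking the last `/`-separated segment of the provider ID works regardless of how many slashes precede it; the machine name below is taken from the example output above:

```bash
# Extract the instance ID as the last "/"-separated field of the provider ID.
export INSTANCE_ID=$(kubectl get awsmachine test-controlplane-52fhh \
  -o jsonpath='{.spec.providerID}' | awk -F/ '{print $NF}')
```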

Users can then use the instance ID (everything after the `aws:////` prefix) to connect to the cluster node with this command:

```bash
aws ssm start-session --target <INSTANCE_ID>
```

This will log you into the cluster node as the `ssm-user` user ID.

## Additional Notes

### Using the AWS CLI instead of `kubectl`

It is also possible to use AWS CLI commands instead of `kubectl` to gather information about the cluster nodes.

For example, to use the AWS CLI to get the public IP address of the SSH bastion host, use this AWS CLI command:

```bash
export BASTION_HOST=$(aws ec2 describe-instances --filters='Name=tag:Name,Values=<CLUSTER_NAME>-bastion' \
	| jq '.Reservations[].Instances[].PublicIpAddress' -r)
```

You should substitute the correct cluster name for `<CLUSTER_NAME>` in the above command. (**NOTE**: If `make manifests` was used to generate manifests, by default the `<CLUSTER_NAME>` is set to `test1`.)

Similarly, to obtain the list of private IP addresses of the cluster nodes, use this AWS CLI command:

```bash
# Query instances by their CAPA role tag and print each private IP.
for type in control-plane node
do
	aws ec2 describe-instances \
		--filters="Name=tag:sigs.k8s.io/cluster-api-provider-aws/role,Values=${type}" \
		| jq '.Reservations[].Instances[].PrivateIpAddress' -r
done
10.0.0.16
10.0.0.68
```

Finally, to obtain AWS instance IDs for cluster nodes, you can use this AWS CLI command:

```bash
# Query instances by their CAPA role tag and print each instance ID.
for type in control-plane node
do
	aws ec2 describe-instances \
		--filters="Name=tag:sigs.k8s.io/cluster-api-provider-aws/role,Values=${type}" \
		| jq '.Reservations[].Instances[].InstanceId' -r
done
i-112bac41a19da1819
i-99aaef2381ada9228
```

Note that your AWS CLI must be configured with credentials that enable you to query the AWS EC2 API.
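
A quick way to confirm that credentials are wired up at all is to ask AWS who you are; valid credentials return your account ID and ARN:

```bash
# Verify that the AWS CLI has working credentials before running
# the EC2 queries above.
aws sts get-caller-identity
```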