
     1  # Install a cluster in AWS extending nodes to AWS Local Zones
     2  
     3  The steps below describe how to install a cluster in AWS extending worker nodes to Local Zones.
     4  
     5  This document is split into the following sections:
     6  
- [Prerequisites](#prerequisites-for-edge-zones)
     8  - [Local Zones](#local-zones)
     9    - [Install a cluster extending nodes to the Local Zone [new VPC]](#ipi-localzones) (4.14+)
    10    - [Install a cluster into existing VPC with Local Zone subnets](#ipi-localzones-existing-vpc) (4.13+)
    11    - [Extend worker nodes to AWS Local Zones in existing clusters [Day 2]](#day2-localzones)
    12  - [Wavelength Zones](#wavelength-zones)
    13    - [Install a cluster extending nodes to the Wavelength Zone [new VPC]](#ipi-wavelength-auto)
    14    - [Install a cluster on AWS in existing VPC with subnets in Wavelength Zone](#ipi-wavelength-byovpc)
    15  - [Use Cases](#use-cases)
    16  
    17  ## Prerequisites for edge zones
    18  
    19  ### Additional IAM permissions <a name="pre-iam-opt-in"></a>
    20  
The AWS Local Zone deployment described in this document requires an additional permission for the user creating the cluster, allowing Local Zone group modification: `ec2:ModifyAvailabilityZoneGroup`.

Example of a permissive IAM policy that can be attached to the User or Role:
    24  
    25  ```json
    26  {
    27    "Version": "2012-10-17",
    28    "Statement": [
    29      {
    30        "Action": [
    31          "ec2:ModifyAvailabilityZoneGroup"
    32        ],
    33        "Effect": "Allow",
    34        "Resource": "*"
    35      }
    36    ]
    37  }
    38  ```
    39  
    40  ___
    41  ___
    42  
    43  # Local Zones
    44  
    45  ## Install a cluster extending nodes to Local Zone <a name="ipi-localzones"></a>
    46  
Starting in 4.14, you can install an OCP cluster on AWS extending nodes to AWS Local Zones,
letting the installation process automate all the steps from subnet creation to
node provisioning through MachineSet manifests.
    50  
    51  There are some design considerations when using the fully automated process:
    52  
    53  - Read the [AWS Local Zones limitations](ocp-aws-localzone-limitations)
- Cluster-wide network MTU: the Maximum Transmission Unit for the overlay network is automatically adjusted when the edge pool configuration is set
- Machine Network CIDR block allocation: the Machine CIDR blocks used to create the cluster are sharded into smaller blocks, depending on the number of zones provided in install-config.yaml, to create the public and private subnets.
- Internet egress traffic for private subnets: when using the installer automation to create subnets in Local Zones, egress traffic from private subnets in AWS Local Zones uses the NAT Gateway from the parent zone when the parent zone's route table is present; otherwise it uses the first route table for private subnets found in the region.
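For illustration only (the installer's exact allocation algorithm may differ), sharding a `10.0.0.0/16` machine network into equally sized `/20` blocks, one per zone, could look like:

```shell
# Illustrative only: shard a 10.0.0.0/16 machine CIDR into /20 blocks,
# one per zone. The installer's real allocation may differ.
subnet_bits=20
step=$(( 2 ** (32 - subnet_bits) / 256 ))   # third-octet increment: 16
for i in 0 1 2; do
  echo "10.0.$(( i * step )).0/${subnet_bits}"
done
```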
    57  
The sections below describe how to create a cluster using a basic example with a single Local Zone, and a full example that retrieves all Local Zones in the region.
    59  
    60  ### Prerequisites
    61  
The prerequisite for installing a cluster using AWS Local Zones is to opt in to every Local Zone group you plan to use.

For Local Zones, the group name is the zone name without the trailing letter (the zone identifier). Example: for Local Zone `us-east-1-bos-1a`, the zone group is `us-east-1-bos-1`.
    65  
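Because the rule is purely textual, the group name can also be derived with shell parameter expansion; a small sketch:

```shell
# The Local Zone group name is the zone name minus the trailing zone letter.
ZONE_NAME="us-east-1-bos-1a"
ZONE_GROUP="${ZONE_NAME%?}"   # drop the last character
echo "${ZONE_GROUP}"          # us-east-1-bos-1
```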
It's also possible to query the group name by reading the zone attributes:
    67  
    68  ```bash
    69  $ aws --region us-east-1 ec2 describe-availability-zones \
    70    --all-availability-zones \
    71    --filters Name=zone-name,Values=us-east-1-bos-1a \
    72    --query "AvailabilityZones[].GroupName" --output text
    73  us-east-1-bos-1
    74  ```
    75  
    76  ### Option 1. Steps to create a cluster with a single Local Zone
    77  
    78  <!-- > Note: this example preferably goes to the product documentation. -->
    79  
    80  Create a cluster in the region `us-east-1` extending worker nodes to AWS Local Zone `us-east-1-bos-1a`:
    81  
- Opt in to the Zone Group:
    83  
```bash
aws ec2 modify-availability-zone-group \
    --region us-east-1 \
    --group-name us-east-1-bos-1 \
    --opt-in-status opted-in
```
    90  
AWS processes the request in the background; it can take a few minutes. Check that the field `OptInStatus` has the value `opted-in` before proceeding:

```bash
aws --region us-east-1 ec2 describe-availability-zones \
  --all-availability-zones \
  --filters Name=zone-name,Values=us-east-1-bos-1a \
  --query "AvailabilityZones[].OptInStatus"
```
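If you want to script the wait, a sketch of a polling loop (the helper names are illustrative, not part of any tooling):

```shell
# Sketch with hypothetical helper names: poll until the zone reports
# "opted-in", giving up after 30 attempts (~5 minutes).
check_opt_in_status() {
  aws --region us-east-1 ec2 describe-availability-zones \
    --all-availability-zones \
    --filters "Name=zone-name,Values=$1" \
    --query "AvailabilityZones[].OptInStatus" --output text
}

wait_opted_in() {
  local zone="$1" tries=0
  until [ "$(check_opt_in_status "${zone}")" = "opted-in" ]; do
    tries=$(( tries + 1 ))
    if [ "${tries}" -ge 30 ]; then
      return 1
    fi
    sleep 10
  done
}
```

Usage: `wait_opted_in us-east-1-bos-1a`.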
    99  
   100  - Create the `install-config.yaml`:
   101  
   102  ```yaml
   103  apiVersion: v1
   104  publish: External
   105  baseDomain: devcluster.openshift.com
   106  metadata:
   107    name: "cluster-name"
   108  pullSecret: ...
   109  sshKey: ...
   110  platform:
   111    aws:
   112      region: us-east-1
   113  compute:
   114  - name: edge
   115    platform:
   116      aws:
   117        zones:
   118        - us-east-1-bos-1a
   119  ```
   120  
   121  - Create the cluster
   122  
   123  ```bash
   124  ./openshift-install create cluster
   125  ```
   126  
   127  ### Option 2. Steps to create a cluster with many zones
   128  
Steps to create a cluster selecting all Local Zones in the Region, using `us-east-1` as a reference.
   130  
   131  - Build the lists for zone groups and names:
   132  
```bash
mapfile -t local_zone_names < <(aws --region us-east-1 ec2 describe-availability-zones \
  --all-availability-zones \
  --filters Name=zone-type,Values=local-zone \
  --query "AvailabilityZones[].ZoneName" | jq -r '.[]')
mapfile -t local_zone_groups < <(aws --region us-east-1 ec2 describe-availability-zones \
  --all-availability-zones \
  --filters Name=zone-type,Values=local-zone \
  --query "AvailabilityZones[].GroupName" | jq -r '.[]')
```
   137  
- Opt in to the zone groups:

```bash
for zone_group in "${local_zone_groups[@]}"; do
  aws ec2 modify-availability-zone-group \
    --region us-east-1 \
    --group-name "${zone_group}" \
    --opt-in-status opted-in
done
```
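If the group list contains duplicates, the loop above simply repeats the call for them; the list can also be de-duplicated first. A sketch with sample data (the group names are illustrative):

```shell
# Sketch with sample data: de-duplicate zone group names before opting in.
local_zone_groups=( "us-east-1-bos-1" "us-east-1-nyc-1" "us-east-1-bos-1" )
mapfile -t unique_zone_groups < <(printf '%s\n' "${local_zone_groups[@]}" | sort -u)
echo "${#unique_zone_groups[@]}"   # 2
```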
   148  
- Print the zone list in the `install-config.yaml` format:
   150  
   151  ```bash
   152  $ for zone_name in ${local_zone_names[@]}; do echo "      - $zone_name"; done
   153        - us-east-1-atl-1a
   154        - us-east-1-bos-1a
   155        - us-east-1-bue-1a
   156        - us-east-1-chi-1a
   157        - us-east-1-dfw-1a
   158        - us-east-1-iah-1a
   159        - us-east-1-lim-1a
   160        - us-east-1-mci-1a
   161        - us-east-1-mia-1a
   162        - us-east-1-msp-1a
   163        - us-east-1-nyc-1a
   164        - us-east-1-phl-1a
   165        - us-east-1-qro-1a
   166        - us-east-1-scl-1a
   167  ```
   168  
   169  - Create the `install-config.yaml` with the local zone list:
   170  
   171  ```yaml
   172  apiVersion: v1
   173  publish: External
   174  baseDomain: devcluster.openshift.com
   175  metadata:
   176    name: "cluster-name"
   177  pullSecret: ...
   178  sshKey: ...
   179  platform:
   180    aws:
   181      region: us-east-1
   182  compute:
   183  - name: edge
   184    platform:
   185      aws:
   186        zones:
   187        - us-east-1-atl-1a
   188        - us-east-1-bos-1a
   189        - us-east-1-bue-1a
   190        - us-east-1-chi-1a
   191        - us-east-1-dfw-1a
   192        - us-east-1-iah-1a
   193        - us-east-1-lim-1a
   194        - us-east-1-mci-1a
   195        - us-east-1-mia-1a
   196        - us-east-1-msp-1a
   197        - us-east-1-nyc-1a
   198        - us-east-1-phl-1a
   199        - us-east-1-qro-1a
   200        - us-east-1-scl-1a
   201  ```
   202  
   203  For each specified zone, a CIDR block range will be allocated, and subnets created.
   204  
   205  - Create the cluster
   206  
   207  ```bash
   208  ./openshift-install create cluster
   209  ```
   210  
   211  ## Install a cluster into existing VPC with Local Zone subnets <a name="ipi-localzones-existing-vpc"></a>
   212  
The steps below describe how to install a cluster in an existing VPC with AWS Local Zone subnets using the Edge Machine Pool, introduced in 4.12.

The Edge Machine Pool was created to run workers in AWS Local Zone locations. This pool differs from the default compute pool in the following ways (edge workers were not designed to run regular cluster workloads):
- Resources in AWS Local Zones are more expensive than in the normal availability zones
- Latency between applications and end-users is lower in Local Zones and may vary by location; mixing latency-sensitive workloads such as routers between Local Zones and the normal availability zones results in unbalanced latency
- Network Load Balancers do not support subnets in Local Zones
- The total time to connect to applications running in Local Zones, for end-users close to the metropolitan region running the workload, can be almost 10x faster than connecting to the parent region
   220  
   221  Table of Contents:
   222  
   223  - [Prerequisites](#prerequisites)
   224      - [Additional IAM permissions](#prerequisites-iam)
   225  - [Create the Network stack](#create-network)
   226      - [Create the VPC](#create-network-vpc)
   227      - [Create the Local Zone subnet](#create-network-subnet)
   228          - [Opt-in zone group](#create-network-subnet-optin)
   229          - [Creating the Subnet using AWS CloudFormation](#create-network-subnet-cfn)
   230  - [Install](#install-cluster)
   231      - [Create the install-config.yaml](#create-config)
    - [Setting up the Edge Machine Pool](#create-config-edge-pool)
        - [Example edge pool created without customization](#create-config-edge-pool-example-def)
        - [Example edge pool with custom Instance type](#create-config-edge-pool-example-ec2)
   235          - [Example edge pool with custom EBS type](#create-config-edge-pool-example-ebs)
   236      - [Create the cluster](#create-cluster-run)
   237  - [Uninstall](#uninstall)
   238      - [Destroy the cluster](#uninstall-destroy-cluster)
   239      - [Destroy the Local Zone subnet](#uninstall-destroy-subnet)
   240      - [Destroy the VPC](#uninstall-destroy-vpc)
   241  - [Use Cases](#use-cases)
   242      - [Example of a sample application deployment](#uc-deployment)
   243      - [User-workload ingress traffic](#uc-exposing-ingress)
   244  
   245  To install a cluster in an existing VPC with Local Zone subnets, you should provision the network resources and then add the subnet IDs to the `install-config.yaml`.
   246  
   247  ## Prerequisites <a name="prerequisites"></a>
   248  
   249  - [AWS Command Line Interface](aws-cli)
   250  - [openshift-install >= 4.12](openshift-install)
   251  - environment variables exported:
   252  
   253  ```bash
   254  export CLUSTER_NAME="ipi-localzones"
   255  
   256  # AWS Region and extra Local Zone group Information
   257  export AWS_REGION="us-west-2"
   258  export ZONE_GROUP_NAME="us-west-2-lax-1"
   259  export ZONE_NAME="us-west-2-lax-1a"
   260  
   261  # VPC Information
   262  export VPC_CIDR="10.0.0.0/16"
   263  export VPC_SUBNETS_BITS="10"
   264  export VPC_SUBNETS_COUNT="3"
   265  
   266  # Local Zone Subnet information
   267  export SUBNET_CIDR="10.0.192.0/22"
   268  export SUBNET_NAME="${CLUSTER_NAME}-public-usw2-lax-1a"
   269  ```
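Before creating the stacks, you can sanity-check that `SUBNET_CIDR` falls inside `VPC_CIDR`; a minimal bash sketch (these helpers are illustrative, not part of the installer):

```shell
# Illustrative helpers: check that the Local Zone subnet CIDR
# falls inside the VPC CIDR before creating the stacks.
ip_to_int() {
  local IFS=.
  set -- $1
  echo $(( ($1 << 24) + ($2 << 16) + ($3 << 8) + $4 ))
}

cidr_contains() {   # usage: cidr_contains OUTER_CIDR INNER_CIDR
  local outer_ip="${1%/*}" outer_len="${1#*/}"
  local inner_ip="${2%/*}" inner_len="${2#*/}"
  [ "${inner_len}" -ge "${outer_len}" ] || return 1
  local mask=$(( (0xFFFFFFFF << (32 - outer_len)) & 0xFFFFFFFF ))
  [ $(( $(ip_to_int "${outer_ip}") & mask )) -eq $(( $(ip_to_int "${inner_ip}") & mask )) ]
}

cidr_contains "${VPC_CIDR:-10.0.0.0/16}" "${SUBNET_CIDR:-10.0.192.0/22}" \
  && echo "subnet CIDR fits inside the VPC CIDR"
```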
   270  
   271  ### Additional IAM permissions
   272  
The AWS Local Zone deployment described in this document requires an additional permission for the user creating the cluster to modify the Local Zone group: `ec2:ModifyAvailabilityZoneGroup`.

Example of a permissive IAM policy that can be attached to the User or Role:
   276  
   277  ```json
   278  {
   279    "Version": "2012-10-17",
   280    "Statement": [
   281      {
   282        "Action": [
   283          "ec2:ModifyAvailabilityZoneGroup"
   284        ],
   285        "Effect": "Allow",
   286        "Resource": "*"
   287      }
   288    ]
   289  }
   290  ```
   291  
   292  ## Create the Network Stack <a name="create-network"></a>
   293  
   294  ### Create the VPC <a name="create-network-vpc"></a>
   295  
   296  The steps to install a cluster in an existing VPC are [detailed in the official documentation](aws-install-vpc). You can alternatively use [the CloudFormation templates to create the Network resources](aws-install-cloudformation), which will be used in this document.
   297  
   298  - Create the Stack
   299  
   300  ```bash
   301  INSTALLER_URL="https://raw.githubusercontent.com/openshift/installer/master"
   302  TPL_URL="${INSTALLER_URL}/upi/aws/cloudformation/01_vpc.yaml"
   303  
   304  aws cloudformation create-stack \
   305      --region ${AWS_REGION} \
   306      --stack-name ${CLUSTER_NAME}-vpc \
   307      --template-body ${TPL_URL} \
   308      --parameters \
   309          ParameterKey=VpcCidr,ParameterValue=${VPC_CIDR} \
   310          ParameterKey=SubnetBits,ParameterValue=${VPC_SUBNETS_BITS} \
   311          ParameterKey=AvailabilityZoneCount,ParameterValue=${VPC_SUBNETS_COUNT}
   312  ```
   313  
   314  - Wait for the stack to be created: `StackStatus=CREATE_COMPLETE`
   315  
   316  ```bash
   317  aws cloudformation wait stack-create-complete \
   318      --region ${AWS_REGION} \
   319      --stack-name ${CLUSTER_NAME}-vpc 
   320  ```
   321  
   322  - Export the VPC ID:
   323  
   324  ```bash
   325  export VPC_ID=$(aws cloudformation describe-stacks \
   326    --region ${AWS_REGION} --stack-name ${CLUSTER_NAME}-vpc \
   327    --query 'Stacks[0].Outputs[?OutputKey==`VpcId`].OutputValue' --output text)
   328  ```
   329  
- Extract the subnet IDs into the environment variable list `SUBNETS`:
   331  
   332  ```bash
   333  mapfile -t SUBNETS < <(aws cloudformation describe-stacks \
   334    --region ${AWS_REGION} \
   335    --stack-name ${CLUSTER_NAME}-vpc \
   336    --query 'Stacks[0].Outputs[?OutputKey==`PrivateSubnetIds`].OutputValue' \
   337    --output text | tr ',' '\n')
   338  mapfile -t -O "${#SUBNETS[@]}" SUBNETS < <(aws cloudformation describe-stacks \
   339    --region ${AWS_REGION} \
   340    --stack-name ${CLUSTER_NAME}-vpc  \
   341    --query 'Stacks[0].Outputs[?OutputKey==`PublicSubnetIds`].OutputValue' \
   342    --output text | tr ',' '\n')
   343  ```
   344  
   345  - Export the Public Route Table ID:
   346  
```bash
export PUBLIC_RTB_ID=$(aws cloudformation describe-stacks \
  --region ${AWS_REGION} \
  --stack-name ${CLUSTER_NAME}-vpc \
  --query 'Stacks[0].Outputs[?OutputKey==`PublicRouteTableId`].OutputValue' --output text)
```
   353  
   354  - Make sure all variables have been correctly set:
   355  
   356  ```bash
   357  echo "SUBNETS=${SUBNETS[*]}
   358  VPC_ID=${VPC_ID}
   359  PUBLIC_RTB_ID=${PUBLIC_RTB_ID}"
   360  ```
   361  
   362  ### Create the Local Zone subnet <a name="create-network-subnet"></a>
   363  
The following actions are required to create subnets in Local Zones:
- choose the zone group to be enabled
- opt in to the zone group

#### Opt-in Zone groups <a name="create-network-subnet-optin"></a>

Opt in to the zone group:
   371  
   372  ```bash
   373  aws ec2 modify-availability-zone-group \
   374      --region ${AWS_REGION} \
   375      --group-name ${ZONE_GROUP_NAME} \
   376      --opt-in-status opted-in
   377  ```
   378  
   379  #### Creating the Subnet using AWS CloudFormation <a name="create-network-subnet-cfn"></a>
   380  
   381  - Create the Stack for Local Zone subnet `us-west-2-lax-1a`
   382  
   383  ```bash
   384  INSTALLER_URL="https://raw.githubusercontent.com/openshift/installer/master"
   385  TPL_URL="${INSTALLER_URL}/upi/aws/cloudformation/01.99_net_local-zone.yaml"
   386  
   387  aws cloudformation create-stack \
   388      --region ${AWS_REGION} \
   389      --stack-name ${SUBNET_NAME} \
   390      --template-body ${TPL_URL} \
   391      --parameters \
   392          ParameterKey=VpcId,ParameterValue=${VPC_ID} \
   393          ParameterKey=ZoneName,ParameterValue=${ZONE_NAME} \
   394          ParameterKey=SubnetName,ParameterValue=${SUBNET_NAME} \
   395          ParameterKey=PublicSubnetCidr,ParameterValue=${SUBNET_CIDR} \
   396          ParameterKey=PublicRouteTableId,ParameterValue=${PUBLIC_RTB_ID}
   397  ```
   398  
   399  - Wait for the stack to be created `StackStatus=CREATE_COMPLETE`
   400  
   401  ```bash
   402  aws cloudformation wait stack-create-complete \
   403    --region ${AWS_REGION} \
   404    --stack-name ${SUBNET_NAME}
   405  ```
   406  
   407  - Export the Local Zone subnet ID
   408  
   409  ```bash
   410  export SUBNET_ID=$(aws cloudformation describe-stacks \
   411    --region ${AWS_REGION} \
   412    --stack-name ${SUBNET_NAME} \
   413    --query 'Stacks[0].Outputs[?OutputKey==`PublicSubnetIds`].OutputValue' --output text)
   414  
   415  # Append the Local Zone Subnet ID to the Subnet List
   416  SUBNETS+=(${SUBNET_ID})
   417  ```
   418  
- Check the total number of subnets. If you chose 3 AZs when creating the VPC stack, this list should contain 7 subnets (3 private, 3 public, and 1 Local Zone):
   420  
   421  ```bash
   422  $ echo ${#SUBNETS[*]}
   423  7
   424  ```
   425  
   426  ## Install the cluster <a name="install-cluster"></a>
   427  
To install the cluster in an existing VPC with subnets in Local Zones, you should:
   429  - generate the `install-config.yaml`, or provide yours
   430  - add the subnet IDs by setting the option `platform.aws.subnets`
   431  - (optional) customize the `edge` compute pool
   432  
   433  ### Create the install-config.yaml <a name="create-config"></a>
   434  
Create the `install-config.yaml`, providing the subnet IDs created above:
   436  
   437  - create the `install-config`
   438  
   439  ```bash
   440  $ ./openshift-install create install-config --dir ${CLUSTER_NAME}
   441  ? SSH Public Key /home/user/.ssh/id_rsa.pub
   442  ? Platform aws
   443  ? Region us-west-2
   444  ? Base Domain devcluster.openshift.com
   445  ? Cluster Name ipi-localzone
   446  ? Pull Secret [? for help] **
   447  INFO Install-Config created in: ipi-localzone     
   448  ```
   449  
   450  - Append the subnets to the `platform.aws.subnets`:
   451  
   452  ```bash
   453  $ echo "    subnets:"; for SB in ${SUBNETS[*]}; do echo "    - $SB"; done
   454      subnets:
   455      - subnet-0fc845d8e30fdb431
   456      - subnet-0a2675b7cbac2e537
   457      - subnet-01c0ac400e1920b47
   458      - subnet-0fee60966b7a93da6
   459      - subnet-002b48c0a91c8c641
   460      - subnet-093f00deb44ce81f4
   461      - subnet-0f85ae65796e8d107
   462  ```
   463  
   464  ### Setting up the Edge Machine Pool <a name="create-config-edge-pool"></a>
   465  
Version 4.12 introduced a new compute pool named `edge`, designed for
remote zones. The `edge` compute pool configuration is common across
AWS Local Zone locations, but due to the limited resources (instance types
and sizes) of a Local Zone, the default instance type created may differ
from the traditional worker pool.

The default EBS type for Local Zone locations is `gp2`, unlike the default worker pool.

The preferred list of instance types follows the same order as the worker pools;
depending on availability in the location, one of these instances is chosen:
> Note: This list can be updated over time
- `m6i.xlarge`
- `m5.xlarge`
- `c5d.2xlarge`
   480  
The `edge` compute pool also creates new labels to help developers
deploy their applications onto those locations. The new labels introduced are:

- `node-role.kubernetes.io/edge=''`
- `zone_type=local-zone`
- `zone_group=<Local Zone Group>`

Finally, the MachineSets created by the `edge` compute pool carry a `NoSchedule` taint to keep
regular workloads from spreading onto those machines; only user workloads are allowed to run
when matching tolerations are defined in the pod spec (you can see the example in the following sections).
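For reference, a Deployment fragment that tolerates the edge taint and targets Local Zone nodes could look like the following sketch (the name and image are illustrative; the taint key and labels are those described above):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: app-edge-example   # illustrative name
spec:
  selector:
    matchLabels:
      app: app-edge-example
  template:
    metadata:
      labels:
        app: app-edge-example
    spec:
      nodeSelector:
        node-role.kubernetes.io/edge: ""
      tolerations:
      - key: "node-role.kubernetes.io/edge"
        operator: "Equal"
        value: ""
        effect: "NoSchedule"
      containers:
      - name: app
        image: quay.io/example/app:latest   # illustrative image
```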
   490  
By default, the `edge` compute pool is created only when AWS Local Zone subnet IDs are added
to the list of `platform.aws.subnets`.
   493  
   494  See below some examples of `install-config.yaml` with `edge` compute pool.
   495  
   496  #### Example edge pool created without customization <a name="create-config-edge-pool-example-def"></a>
   497  
   498  ```yaml
   499  apiVersion: v1
   500  baseDomain: devcluster.openshift.com
   501  metadata:
   502    name: ipi-localzone
   503  platform:
   504    aws:
   505      region: us-west-2
   506      subnets:
   507      - subnet-0fc845d8e30fdb431
   508      - subnet-0a2675b7cbac2e537
   509      - subnet-01c0ac400e1920b47
   510      - subnet-0fee60966b7a93da6
   511      - subnet-002b48c0a91c8c641
   512      - subnet-093f00deb44ce81f4
   513      - subnet-0f85ae65796e8d107
   514  pullSecret: '{"auths": ...}'
   515  sshKey: ssh-ed25519 AAAA...
   516  ```
   517  
   518  #### Example edge pool with custom Instance type <a name="create-config-edge-pool-example-ec2"></a>
   519  
The instance type may differ between locations. Check the AWS documentation for availability in the Local Zone where the cluster will run.
   521  
   522  `install-config.yaml` example customizing the Instance Type for the Edge Machine Pool:
   523  
   524  ```yaml
   525  apiVersion: v1
   526  baseDomain: devcluster.openshift.com
   527  metadata:
   528    name: ipi-localzone
   529  compute:
   530  - name: edge
   531    platform:
   532      aws:
   533        type: m5.4xlarge
   534  platform:
   535    aws:
   536      region: us-west-2
   537      subnets:
   538      - subnet-0fc845d8e30fdb431
   539      - subnet-0a2675b7cbac2e537
   540      - subnet-01c0ac400e1920b47
   541      - subnet-0fee60966b7a93da6
   542      - subnet-002b48c0a91c8c641
   543      - subnet-093f00deb44ce81f4
   544      - subnet-0f85ae65796e8d107
   545  pullSecret: '{"auths": ...}'
   546  sshKey: ssh-ed25519 AAAA...
   547  ```
   548  
   549  #### Example edge pool with custom EBS type <a name="create-config-edge-pool-example-ebs"></a>
   550  
The EBS type may differ between locations. Check the AWS documentation for availability in the Local Zone where the cluster will run.
   552  
   553  `install-config.yaml` example customizing the EBS Type for the Edge Machine Pool:
   554  
   555  ```yaml
   556  apiVersion: v1
   557  baseDomain: devcluster.openshift.com
   558  metadata:
   559    name: ipi-localzone
   560  compute:
   561  - name: edge
   562    platform:
   563      aws:
   564        rootVolume:
   565          type: gp3
   566          size: 120
   567  platform:
   568    aws:
   569      region: us-west-2
   570      subnets:
   571      - subnet-0fc845d8e30fdb431
   572      - subnet-0a2675b7cbac2e537
   573      - subnet-01c0ac400e1920b47
   574      - subnet-0fee60966b7a93da6
   575      - subnet-002b48c0a91c8c641
   576      - subnet-093f00deb44ce81f4
   577      - subnet-0f85ae65796e8d107
   578  pullSecret: '{"auths": ...}'
   579  sshKey: ssh-ed25519 AAAA...
   580  ```
   581  
   582  ### Create the cluster <a name="create-cluster-run"></a>
   583  
   584  ```bash
   585  ./openshift-install create cluster --dir ${CLUSTER_NAME}
   586  ```
   587  
   588  ### Uninstall the cluster <a name="uninstall"></a>
   589  
   590  #### Destroy the cluster <a name="uninstall-destroy-cluster"></a>
   591  
   592  ```bash
   593  ./openshift-install destroy cluster --dir ${CLUSTER_NAME}
   594  ```
   595  
   596  #### Destroy the Local Zone subnets <a name="uninstall-destroy-subnet"></a>
   597  
   598  ```bash
   599  aws cloudformation delete-stack \
   600      --region ${AWS_REGION} \
   601      --stack-name ${SUBNET_NAME}
   602  ```
   603  
   604  #### Destroy the VPC <a name="uninstall-destroy-vpc"></a>
   605  
   606  ```bash
   607  aws cloudformation delete-stack \
   608      --region ${AWS_REGION} \
   609      --stack-name ${CLUSTER_NAME}-vpc
   610  ```
   611  
## Extend worker nodes to AWS Local Zones in existing clusters [Day 2] <a name="day2-localzones"></a>
   613  
   614  The following steps are required to create worker nodes in AWS Local Zones:
   615  
   616  - Make sure the overlay network MTU is set correctly to support the AWS Local Zone limitations
   617  - Create subnets in AWS Local Zones, and dependencies (subnet association)
   618  - Create MachineSet to deploy compute nodes in Local Zone subnets
   619  
When the cluster is installed using the edge compute pool, the MTU for the overlay network is automatically adjusted, depending on the network plugin used.

If the cluster was installed without the edge compute pool and without Local Zone support, the required dependencies must be satisfied first. The steps below cover both scenarios.
   623  
   624  ### Adjust the MTU of the overlay network
   625  
   626  > You can skip this section if the cluster is already installed with Local Zone support.
   627  
This [KCS article](https://access.redhat.com/solutions/6996487) covers the required steps to change the MTU of the overlay network.

***Example changing the default MTU (9001) to the maximum allowed in Local Zones for the OVN-Kubernetes network plugin (1200)***:
   631  
   632  ```bash
   634  $ CLUSTER_MTU_CUR=$(oc get network.config.openshift.io/cluster --output=jsonpath={.status.clusterNetworkMTU})
   635  $ CLUSTER_MTU_NEW=1200
   636  
   637  $ oc patch Network.operator.openshift.io cluster --type=merge \
   638    --patch "{
   639      \"spec\":{
   640        \"migration\":{
   641          \"mtu\":{
   642            \"network\":{
   643              \"from\":${CLUSTER_MTU_CUR},
   644              \"to\":${CLUSTER_MTU_NEW}
   645            },
   646            \"machine\":{\"to\":9001}
   647          }}}}"
   648  ```
   649  
   650  Wait for the deployment to be finished, then remove the migration config:
   651  
   652  ```bash
   653  $ oc patch network.operator.openshift.io/cluster --type=merge \
   654    --patch "{
   655      \"spec\":{
   656        \"migration\":null,
   657        \"defaultNetwork\":{
   658          \"ovnKubernetesConfig\":{\"mtu\":${CLUSTER_MTU_NEW}}
   659          }}}"
   660  ```
   661  
   662  ### Setup subnet for Local Zone
   663  
Prerequisites:

- Check the free CIDR blocks available in the VPC
- Only CloudFormation templates for public subnets are provided; you must adapt them if you need a more advanced configuration
   668  
   669  Steps:
   670  
   671  - [Opt-in the Zone group](https://docs.openshift.com/container-platform/4.12/installing/installing_aws/installing-aws-localzone.html#installation-aws-add-local-zone-locations_installing-aws-localzone)
   672  - [Create the Local Zone subnet](https://docs.openshift.com/container-platform/4.12/installing/installing_aws/installing-aws-localzone.html#installation-creating-aws-vpc-localzone_installing-aws-localzone)
   673  
   674  
   675  ### Create the MachineSet
   676  
   677  The steps below describe how to create the MachineSet manifests for the AWS Local Zone node:
   678  
   679  - [Create the MachineSet manifest: Step 3](https://docs.openshift.com/container-platform/4.12/installing/installing_aws/installing-aws-localzone.html#installation-localzone-generate-k8s-manifest_installing-aws-localzone)
   680  
   681  Once it is created you can apply the configuration to the cluster:
   682  
   683  ***Example:***
   684  
   685  ```bash
   686  oc create -f <installation_directory>/openshift/99_openshift-cluster-api_worker-machineset-nyc1.yaml
   687  ```
   688  
   689  ___
   690  ___
   691  
   692  # Wavelength Zones
   693  
   694  ## Prerequisites
   695  
   696  ### Review Wavelength Zone limitations
   697  
   698  There are some design considerations when using the fully automated process in OpenShift:
   699  
   700  - Review the AWS Wavelength Zones documentation for [Overview](https://docs.aws.amazon.com/wavelength/latest/developerguide/what-is-wavelength.html) and [Quotas and considerations](https://docs.aws.amazon.com/wavelength/latest/developerguide/wavelength-quotas.html)
- Cluster-wide network MTU: the Maximum Transmission Unit for the overlay network is automatically adjusted when the edge pool configuration is set
- Machine Network CIDR block allocation: the Machine CIDR blocks used to create the cluster are sharded into smaller blocks, depending on the number of zones provided in install-config.yaml, to create the public and private subnets.
- Internet egress traffic for private subnets: when using the installer automation to create subnets in Wavelength Zones, egress traffic from private subnets in AWS Wavelength Zones uses the NAT Gateway from the parent zone when the parent zone's route table is present; otherwise it uses the first route table for private subnets found in the region.
   704  
   705  ### Opt-into AWS Wavelength Zone
   706  
To use a Wavelength Zone, you must first opt in to its zone group.
   708  
   709  Check the zone group name for the target zone (`us-east-1-wl1-bos-wlz-1`):
   710  
   711  ```sh
   712  $ aws --region us-east-1 ec2 describe-availability-zones \
   713    --all-availability-zones \
   714    --filters Name=zone-name,Values=us-east-1-wl1-bos-wlz-1 \
   715    --query "AvailabilityZones[].GroupName" --output text
   716  us-east-1-wl1
   717  ```
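The group name can also be derived textually; a sketch, assuming the field layout of this example zone name (the `GroupName` query above remains the authoritative source):

```shell
# For this zone name, the group is the zone name truncated after the
# carrier component (the first four hyphen-separated fields).
WLZ_NAME="us-east-1-wl1-bos-wlz-1"
WLZ_GROUP="$(echo "${WLZ_NAME}" | cut -d- -f1-4)"
echo "${WLZ_GROUP}"   # us-east-1-wl1
```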
   718  
Opt in to the Zone Group:
   720  
   721  ```bash
   722  aws ec2 modify-availability-zone-group \
   723      --region us-east-1 \
   724      --group-name us-east-1-wl1 \
   725      --opt-in-status opted-in
   726  ```
   727  
The request is processed in the background and can take a few minutes. Check that the field `OptInStatus` has the value `opted-in` before proceeding:
   729  
   730  ```bash
   731  aws --region us-east-1 ec2 describe-availability-zones \
   732    --all-availability-zones \
   733    --filters Name=zone-name,Values=us-east-1-wl1-bos-wlz-1 \
   734    --query "AvailabilityZones[].OptInStatus"
   735  ```
   736  
## Install a cluster extending nodes to the Wavelength Zone [new VPC] <a name="ipi-wavelength-auto"></a>
   738  
   739  ### Prerequisites
   740  
   741  #### Additional AWS Permissions
   742  
Additional IAM permissions are required when the installer fully automates the creation and deletion of subnets in Wavelength Zones:

- [Opt-in permissions](#pre-iam-opt-in)
   746  
   747  - Permissions to create and delete the Carrier Gateway:
   748  
   749  ```json
   750  {
   751    "Version": "2012-10-17",
   752    "Statement": [
   753      {
   754        "Effect": "Allow",
   755        "Action": [
   756          "ec2:DeleteCarrierGateway",
   757          "ec2:CreateCarrierGateway"
   758        ],
   759        "Resource": "*"
   760      }
   761    ]
   762  }
   763  ```
   764  
   765  ### Create cluster
   766  
Create a cluster in the region `us-east-1` extending worker nodes to AWS Wavelength Zone `us-east-1-wl1-bos-wlz-1`:
   768  
   769  - Create the `install-config.yaml`:
   770  
   771  ```sh
   772  CLUSTER_NAME=aws-wlz
   773  INSTALL_DIR=${PWD}/installdir-${CLUSTER_NAME}
   774  mkdir $INSTALL_DIR
   775  cat << EOF > $INSTALL_DIR/install-config.yaml
   776  apiVersion: v1
   777  metadata:
   778    name: $CLUSTER_NAME
   779  publish: External
   780  pullSecret: '$(cat ~/.openshift/pull-secret-latest.json)'
   781  sshKey: |
   782    $(cat ~/.ssh/id_rsa.pub)
   783  baseDomain: devcluster.openshift.com
   784  platform:
   785    aws:
   786      region: us-east-1
   787  compute:
   788  - name: edge
   789    platform:
   790      aws:
   791        zones:
   792        - us-east-1-wl1-bos-wlz-1
   793  EOF
   794  ```
   795  
   796  - Create the cluster
   797  
   798  ```bash
   799  ./openshift-install create cluster --dir ${INSTALL_DIR}
   800  ```
   801  
   802  - Destroy the cluster
   803  
   804  ```bash
   805  ./openshift-install destroy cluster --dir ${INSTALL_DIR}
   806  ```
   807  
   808  ## Install a cluster on AWS in existing VPC with subnets in Wavelength Zone <a name="ipi-wavelength-byovpc"></a>
   809  
   810  This section describes how to create the CloudFormation stack to provision VPC and subnets in Wavelength Zones, and then install an OpenShift cluster into an existing network.
   811  
   812  ### Prerequisites
   813  
   814  - [Opt in to the AWS Wavelength Zone](#opt-into-aws-wavelength-zone)
   815  
   816  ### Create the Network Stack (VPC and subnets)
   817  
   818  Steps:
   819  
   820  - Export the general variables for the cluster, and adapt them according to your environment:
   821  
   822  ```sh
   823  export CLUSTER_REGION=us-east-1
   824  export CLUSTER_NAME=wlz-byovpc
   825  export PULL_SECRET_FILE=${HOME}/path/to/pull-secret.json
   826  export BASE_DOMAIN=example.com
   827  export SSH_PUB_KEY_FILE=$HOME/.ssh/id_rsa.pub
   828  
   829  export CIDR_VPC="10.0.0.0/16"
   830  
   831  # Set the Wavelength Zone to create subnets
   832  export ZONE_NAME="us-east-1-wl1-nyc-wlz-1"
   833  export SUBNET_CIDR_PUB="10.0.128.0/24"
   834  export SUBNET_CIDR_PVT="10.0.129.0/24"
   835  ```
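
Before creating the stacks, it can be worth sanity-checking that both subnet CIDRs fall inside the VPC CIDR. A minimal pure-bash sketch (the `ip_to_int` and `in_cidr` helpers are illustrative, not part of the installer):

```shell
# Convert a dotted-quad IPv4 address to a 32-bit integer.
ip_to_int() {
  local IFS=. a b c d
  read -r a b c d <<< "$1"
  echo $(( (a << 24) | (b << 16) | (c << 8) | d ))
}

# in_cidr IP CIDR -- succeed when the IP falls inside the CIDR block.
in_cidr() {
  local net=${2%/*} bits=${2#*/}
  local mask=$(( (0xFFFFFFFF << (32 - bits)) & 0xFFFFFFFF ))
  [ $(( $(ip_to_int "$1") & mask )) -eq $(( $(ip_to_int "$net") & mask )) ]
}

# Check the example values used in this guide.
in_cidr "10.0.128.0" "10.0.0.0/16" && echo "public subnet CIDR is inside the VPC"
in_cidr "10.0.129.0" "10.0.0.0/16" && echo "private subnet CIDR is inside the VPC"
```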
   836  
   837  - Export the CloudFormation template path (assuming you are in the root of the installer repository):
   838  
   839  ```sh
   840  TEMPLATE_NAME_VPC="upi/aws/cloudformation/01_vpc.yaml"
   841  TEMPLATE_NAME_CARRIER_GW="upi/aws/cloudformation/01_vpc_01_carrier_gateway.yaml"
   842  TEMPLATE_NAME_SUBNET="upi/aws/cloudformation/01_vpc_99_subnet.yaml"
   843  ```
   844  
   845  - Create the CloudFormation stack for VPC:
   846  
   847  ```sh
   848  export STACK_VPC=${CLUSTER_NAME}-vpc
   849  aws cloudformation create-stack \
   850    --region ${CLUSTER_REGION} \
   851    --stack-name ${STACK_VPC} \
   852    --template-body file://$TEMPLATE_NAME_VPC \
   853    --parameters \
   854      ParameterKey=VpcCidr,ParameterValue="${CIDR_VPC}" \
   855      ParameterKey=AvailabilityZoneCount,ParameterValue=3 \
   856      ParameterKey=SubnetBits,ParameterValue=12
   857  
   858  aws --region $CLUSTER_REGION cloudformation wait stack-create-complete --stack-name ${STACK_VPC}
   859  aws --region $CLUSTER_REGION cloudformation describe-stacks --stack-name ${STACK_VPC}
   860  
   861  export VPC_ID=$(aws --region $CLUSTER_REGION cloudformation describe-stacks \
   862    --stack-name ${STACK_VPC} \
   863    | jq -r '.Stacks[0].Outputs[] | select(.OutputKey=="VpcId").OutputValue' )
   864  ```
   865  
   866  - Create the Carrier Gateway:
   867  
   868  ```sh
   869  export STACK_CAGW=${CLUSTER_NAME}-cagw
   870  aws cloudformation create-stack \
   871    --region ${CLUSTER_REGION} \
   872    --stack-name ${STACK_CAGW} \
   873    --template-body file://$TEMPLATE_NAME_CARRIER_GW \
   874    --parameters \
   875      ParameterKey=VpcId,ParameterValue="${VPC_ID}" \
   876      ParameterKey=ClusterName,ParameterValue="${CLUSTER_NAME}"
   877  
   878  aws --region $CLUSTER_REGION cloudformation wait stack-create-complete --stack-name ${STACK_CAGW}
   879  aws --region $CLUSTER_REGION cloudformation describe-stacks --stack-name ${STACK_CAGW}
   880  ```
   881  
   882  - Extract the variables to create the subnets
   883  
   884  ```sh
   885  export ZONE_SUFFIX=$(echo ${ZONE_NAME/${CLUSTER_REGION}-/})
   886  
   887  export ROUTE_TABLE_PUB=$(aws --region $CLUSTER_REGION cloudformation describe-stacks \
   888    --stack-name ${STACK_CAGW} \
   889    | jq -r '.Stacks[0].Outputs[] | select(.OutputKey=="PublicRouteTableId").OutputValue' )
   890  
   891  export ROUTE_TABLE_PVT=$(aws --region $CLUSTER_REGION cloudformation describe-stacks \
   892    --stack-name ${STACK_VPC} \
   893    | jq -r '.Stacks[0].Outputs[]
   894      | select(.OutputKey=="PrivateRouteTableIds").OutputValue
   895      | split(",")[0] | split("=")[1]' \
   896  )
   897  
   898  # Review the variables (optional)
   899  cat <<EOF
   900  CLUSTER_REGION=$CLUSTER_REGION
   901  VPC_ID=$VPC_ID
   902  ZONE_NAME=$ZONE_NAME
   903  ZONE_SUFFIX=$ZONE_SUFFIX
   904  ZONE_GROUP_NAME=$ZONE_GROUP_NAME
   905  ROUTE_TABLE_PUB=$ROUTE_TABLE_PUB
   906  ROUTE_TABLE_PVT=$ROUTE_TABLE_PVT
   907  SUBNET_CIDR_PUB=$SUBNET_CIDR_PUB
   908  SUBNET_CIDR_PVT=$SUBNET_CIDR_PVT
   909  EOF
   910  ```
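
For reference, the parameter expansion and route-table parsing above can be illustrated with fixed sample values (the route-table IDs below are hypothetical):

```shell
CLUSTER_REGION=us-east-1
ZONE_NAME=us-east-1-wl1-nyc-wlz-1

# ${ZONE_NAME/${CLUSTER_REGION}-/} removes the leading "<region>-" prefix.
ZONE_SUFFIX=${ZONE_NAME/${CLUSTER_REGION}-/}
echo "$ZONE_SUFFIX"    # wl1-nyc-wlz-1

# PrivateRouteTableIds is a comma-separated "zone=rtb-id" list; the jq filter
# above keeps the first entry and strips the "zone=" part. Pure-bash equivalent:
IDS="us-east-1a=rtb-0aaa,us-east-1b=rtb-0bbb"
FIRST=${IDS%%,*}       # first comma-separated entry: us-east-1a=rtb-0aaa
echo "${FIRST#*=}"     # rtb-0aaa
```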
   911  
   912  - Create the CloudFormation stack to provision the public and private subnets:
   913  
   914  ```sh
   915  export STACK_SUBNET=${CLUSTER_NAME}-subnets-${ZONE_SUFFIX}
   916  aws cloudformation create-stack \
   917    --region ${CLUSTER_REGION} \
   918    --stack-name ${STACK_SUBNET} \
   919    --template-body file://$TEMPLATE_NAME_SUBNET \
   920    --parameters \
   921      ParameterKey=VpcId,ParameterValue="${VPC_ID}" \
   922      ParameterKey=ClusterName,ParameterValue="${CLUSTER_NAME}" \
   923      ParameterKey=ZoneName,ParameterValue="${ZONE_NAME}" \
   924      ParameterKey=PublicRouteTableId,ParameterValue="${ROUTE_TABLE_PUB}" \
   925      ParameterKey=PublicSubnetCidr,ParameterValue="${SUBNET_CIDR_PUB}" \
   926      ParameterKey=PrivateRouteTableId,ParameterValue="${ROUTE_TABLE_PVT}" \
   927      ParameterKey=PrivateSubnetCidr,ParameterValue="${SUBNET_CIDR_PVT}"
   928  
   929  aws --region $CLUSTER_REGION cloudformation wait stack-create-complete --stack-name ${STACK_SUBNET}
   930  aws --region $CLUSTER_REGION cloudformation describe-stacks --stack-name ${STACK_SUBNET}
   931  ```
   932  
   933  ### Create the cluster
   934  
   935  - Extract the subnets to be used in the install-config.yaml:
   936  
   937  ```sh
   938  # Regular Availability Zones (public and private) from VPC CloudFormation Stack
   939  mapfile -t SUBNETS < <(aws --region $CLUSTER_REGION cloudformation describe-stacks   --stack-name "${STACK_VPC}" --query "Stacks[0].Outputs[?OutputKey=='PrivateSubnetIds'].OutputValue" --output text | tr ',' '\n')
   940  
   941  mapfile -t -O "${#SUBNETS[@]}" SUBNETS < <(aws --region $CLUSTER_REGION cloudformation describe-stacks   --stack-name "${STACK_VPC}" --query "Stacks[0].Outputs[?OutputKey=='PublicSubnetIds'].OutputValue" --output text | tr ',' '\n')
   942  
   943  # Private subnet for Wavelength Zones from subnets CloudFormation Stack
   944  mapfile -t -O "${#SUBNETS[@]}" SUBNETS < <(aws --region $CLUSTER_REGION cloudformation describe-stacks --stack-name "${STACK_SUBNET}" --query "Stacks[0].Outputs[?OutputKey=='PrivateSubnetIds'].OutputValue" --output text | tr ',' '\n')
   945  ```
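
The append behavior of the three `mapfile` calls can be seen with stand-in values (the subnet IDs below are hypothetical):

```shell
# Hypothetical stand-ins for the three CloudFormation query results.
PRIVATE_IDS="subnet-aaa,subnet-bbb,subnet-ccc"
PUBLIC_IDS="subnet-ddd,subnet-eee,subnet-fff"
WLZ_PRIVATE_IDS="subnet-wlz"

mapfile -t SUBNETS < <(echo "$PRIVATE_IDS" | tr ',' '\n')
# -O "${#SUBNETS[@]}" starts writing at the current array length, so each
# call appends to SUBNETS instead of overwriting it.
mapfile -t -O "${#SUBNETS[@]}" SUBNETS < <(echo "$PUBLIC_IDS" | tr ',' '\n')
mapfile -t -O "${#SUBNETS[@]}" SUBNETS < <(echo "$WLZ_PRIVATE_IDS" | tr ',' '\n')

echo "${#SUBNETS[@]} subnets: ${SUBNETS[*]}"
```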
   946  
   947  - Create install-config.yaml:
   948  
   949  ```sh
   950  cat <<EOF > ./install-config.yaml
   951  apiVersion: v1
   952  publish: External
   953  baseDomain: ${BASE_DOMAIN}
   954  metadata:
   955    name: "${CLUSTER_NAME}"
   956  platform:
   957    aws:
   958      region: ${CLUSTER_REGION}
   959      subnets:
   960  $(for SB in ${SUBNETS[*]}; do echo "    - $SB"; done)
   961  pullSecret: '$(cat ${PULL_SECRET_FILE} | awk -v ORS= -v OFS= '{$1=$1}1')'
   962  sshKey: |
   963    $(cat ${SSH_PUB_KEY_FILE})
   964  EOF
   965  ```
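
The `$(for SB in ...)` substitution in the heredoc renders the array as an indented YAML list. With sample IDs (hypothetical) it expands as follows:

```shell
SUBNETS=(subnet-aaa subnet-bbb subnet-wlz)

# Each element becomes one "    - <id>" line under platform.aws.subnets.
SUBNET_YAML=$(for SB in "${SUBNETS[@]}"; do echo "    - $SB"; done)
printf '%s\n' "$SUBNET_YAML"
```

Quoting the expansion as `"${SUBNETS[@]}"` is slightly safer than the unquoted `${SUBNETS[*]}` used in the heredoc, although subnet IDs never contain whitespace, so both forms produce the same output here.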
   966  
   967  - Create the cluster:
   968  
   969  ```sh
   970  ./openshift-install create cluster
   971  ```
   972  
   973  ### Destroy the cluster and network dependencies
   974  
   975  - Destroy the cluster:
   976  
   977  ```sh
   978  ./openshift-install destroy cluster
   979  ```
   980  
   981  - Destroy the subnet stack:
   982  
   983  ```sh
   984  aws cloudformation delete-stack \
   985      --region ${CLUSTER_REGION} \
   986      --stack-name ${STACK_SUBNET}
   987  ```
   988  
   989  - Destroy the Carrier Gateway stack:
   990  
   991  ```sh
   992  aws cloudformation delete-stack \
   993      --region ${CLUSTER_REGION} \
   994      --stack-name ${STACK_CAGW}
   995  ```
   996  
   997  - Destroy the VPC Stack:
   998  
   999  ```sh
  1000  aws cloudformation delete-stack \
  1001      --region ${CLUSTER_REGION} \
  1002      --stack-name ${STACK_VPC}
  1003  ```
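
The three deletions above must run in reverse order of creation (subnets, then Carrier Gateway, then VPC), since each stack depends on resources from the previous one. A sketch wrapping them into a single helper that also waits for each deletion to finish (`delete_stacks` is illustrative, not an installer command):

```shell
# Delete CloudFormation stacks in the order given, waiting for each deletion
# to complete before starting the next.
delete_stacks() {
  local region=$1 stack
  shift
  for stack in "$@"; do
    aws cloudformation delete-stack --region "$region" --stack-name "$stack"
    aws cloudformation wait stack-delete-complete --region "$region" --stack-name "$stack"
  done
}

# Reverse order of creation: subnets -> carrier gateway -> VPC.
# delete_stacks "$CLUSTER_REGION" "$STACK_SUBNET" "$STACK_CAGW" "$STACK_VPC"
```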
  1004  
  1005  ___
  1006  ___
  1007  
  1008  # Use Cases <a name="use-cases"></a>
  1009  
  1010  > Note: part of this document was added to the official documentation: [Post-installation configuration / Cluster tasks / Creating user workloads in AWS Local Zones][ocp-aws-localzones-day2-user-workloads]
  1011  
  1012  ## Example of a sample application deployment <a name="uc-deployment"></a>
  1013  
  1014  The example below deploys a sample application to a node running in the Local Zone, setting the tolerations needed to pin the pod to the correct node:
  1015  
  1016  ```bash
  1017  cat << EOF | oc create -f -
  1018  apiVersion: v1
  1019  kind: Namespace
  1020  metadata:
  1021    name: local-zone-demo
  1022  ---
  1023  apiVersion: apps/v1
  1024  kind: Deployment
  1025  metadata:
  1026    name: local-zone-demo-app-nyc-1
  1027    namespace: local-zone-demo
  1028  spec:
  1029    selector:
  1030      matchLabels:
  1031        app: local-zone-demo-app-nyc-1
  1032    replicas: 1
  1033    template:
  1034      metadata:
  1035        labels:
  1036          app: local-zone-demo-app-nyc-1
  1037          machine.openshift.io/zone-group: ${ZONE_GROUP_NAME}
  1038      spec:
  1039        nodeSelector:
  1040          machine.openshift.io/zone-group: ${ZONE_GROUP_NAME}
  1041        tolerations:
  1042        - key: "node-role.kubernetes.io/edge"
  1043          operator: "Equal"
  1044          value: ""
  1045          effect: "NoSchedule"
  1046        containers:
  1047          - image: openshift/origin-node
  1048            command:
  1049             - "/bin/socat"
  1050            args:
  1051              - TCP4-LISTEN:8080,reuseaddr,fork
  1052              - EXEC:'/bin/bash -c \"printf \\\"HTTP/1.0 200 OK\r\n\r\n\\\"; sed -e \\\"/^\r/q\\\"\"'
  1053            imagePullPolicy: Always
  1054            name: echoserver
  1055            ports:
  1056              - containerPort: 8080
  1057  ---
  1058  apiVersion: v1
  1059  kind: Service
  1060  metadata:
  1061    name: local-zone-demo-app-nyc-1
  1062    namespace: local-zone-demo
  1063  spec:
  1064    ports:
  1065      - port: 80
  1066        targetPort: 8080
  1067        protocol: TCP
  1068    type: NodePort
  1069    selector:
  1070      app: local-zone-demo-app-nyc-1
  1071  EOF
  1072  ```
  1073  
  1074  ## User-workload ingress traffic <a name="uc-exposing-ingress"></a>
  1075  
  1076  To expose applications to the internet from AWS Local Zones, application developers
  1077  must use an external load balancer, for example an AWS Application Load Balancer (ALB). The
  1078  [ALB Operator](https://docs.openshift.com/container-platform/4.11/networking/aws_load_balancer_operator/install-aws-load-balancer-operator.html) is available through OLM on 4.11+.
  1079  
  1080  To get the full benefit of deploying applications in AWS Local Zone locations, at least one new
  1081  ALB `Ingress` must be provisioned per location to expose the services deployed in the
  1082  zones.
  1083  
  1084  If the cluster-admin decides to share the ALB `Ingress` subnets between different locations,
  1085  latency for end users will increase drastically whenever traffic is routed to
  1086  backends (compute nodes) placed in a different zone than the one where the traffic entered through the Ingress/Load Balancer.
  1087  
  1088  The ALB deployment is not covered by this documentation.
  1089  
  1090  ___
  1091  
  1092  [openshift-install]: https://docs.openshift.com/container-platform/4.11/installing/index.html
  1093  [aws-cli]: https://aws.amazon.com/cli/
  1094  [aws-install-vpc]: https://docs.openshift.com/container-platform/4.11/installing/installing_aws/installing-aws-vpc.html
  1095  [aws-install-cloudformation]: https://docs.openshift.com/container-platform/4.11/installing/installing_aws/installing-aws-user-infra.html
  1096  [aws-local-zones]: https://aws.amazon.com/about-aws/global-infrastructure/localzones
  1097  [aws-local-zones-features]: https://aws.amazon.com/about-aws/global-infrastructure/localzones/features
  1098  [ocp-aws-localzone-limitations]: https://docs.openshift.com/container-platform/4.13/installing/installing_aws/installing-aws-localzone.html#cluster-limitations-local-zone_installing-aws-localzone
  1099  [ocp-aws-localzones-day2-user-workloads]: https://docs.openshift.com/container-platform/4.13/post_installation_configuration/cluster-tasks.html#installation-extend-edge-nodes-aws-local-zones_post-install-cluster-tasks