     1  # Install: User-Provided Infrastructure on Azure Stack Hub
     2  
     3  The steps for performing a user-provided infrastructure install are outlined here. Several
     4  [Azure Resource Manager][azuretemplates] templates are provided to assist in
     5  completing these steps or to help model your own. You are also free to create
     6  the required resources through other methods; the templates are just an
     7  example.
     8  
     9  ## Prerequisites
    10  
    11  * all prerequisites from [README](README.md)
    12  * the following binaries installed and in $PATH:
    13    * [openshift-install][openshiftinstall]
    * It is recommended that the OpenShift installer CLI version match the version of the cluster being deployed. The version used in this example is 4.9 GA (a quick version check for these tools is sketched after this list).
  * [az (Azure CLI)][azurecli] installed and authenticated
    16      * `az` should be [configured to connect to the Azure Stack Hub instance][configurecli]
    * Command flags and structure may vary between `az` versions. The version used in this example is 2.26.1.
    18    * python3
    19    * [jq][jqjson]
    20    * [yq][yqyaml] (N.B. there are multiple versions of `yq`, some with different syntaxes.)
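
A quick, optional check of the installed tool versions before starting:

```sh
# Confirm the tools listed above are on $PATH and report their versions
openshift-install version
az version
python3 --version
jq --version
yq --version
```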
    21  
    22  ## Create an install config
    23  
    24  Create an install config, `install-config.yaml`. Here is a minimal example:
    25  
    26  ```yaml
    27  apiVersion: v1
    28  baseDomain: <example.com>
    29  compute:
    30  - name: worker
    31    platform: {}
    32    replicas: 0
    33  metadata:
    34    name: padillon
    35  platform:
    36    azure:
    37      armEndpoint: <azurestack-arm-endpoint>
    38      baseDomainResourceGroupName: <resource-group-for-example.com>
    39      cloudName: AzureStackCloud
    40      region: <azurestack-region>
    41  pullSecret: <redacted>
    42  sshKey: |
    43    <pubkey>
    44  ```
    45  
    46  We'll be providing the compute machines ourselves, so we set compute replicas to 0.
    47  
    48  Azure Stack is not supported by the interactive wizard, but you can use public Azure credentials to create an install config with [the usual approach](install.md#create-configuration) and then edit according to the example above.
    49  
    50  ### Additional Trust Bundle for Internal Certificate Authorities (Optional)
    51  
    52  If your Azure Stack environment uses an internal CA, add the necessary certificate bundle in .pem format to the [`additionalTrustBundle`](../customization.md#additional-trust-bundle). You will also need to [update the cluster proxy
    53  manifest][proxy-ca] and [add the CA to the ignition shim][ign-ca] in later steps.
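
For example, assuming your CA bundle is in a local file called `CA.pem` (the same file name used later for the ignition shim), a minimal sketch for appending it to the install config with the indentation YAML requires:

```sh
# Sketch: append the CA bundle to install-config.yaml as a block scalar.
# Assumes the bundle is in CA.pem; sed indents each line by two spaces.
cat >> install-config.yaml << EOF
additionalTrustBundle: |
$(sed 's/^/  /' CA.pem)
EOF
```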
    54  
    55  ## Credentials
    56  
    57  Both Azure and Azure Stack credentials are stored by the installer at `~/.azure/osServicePrincipal.json`. The installer will request the required information if no credentials are found.
    58  
    59  ```console
    60  $ openshift-install create manifests
    61  ? azure subscription id xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx
    62  ? azure tenant id xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx
    63  ? azure service principal client id xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx
    64  ? azure service principal client secret xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx
    65  INFO Saving user credentials to "/home/user_id/.azure/osServicePrincipal.json"
    66  ```
    67  
    68  ### Extract data from install config
    69  
Some data from the install configuration file will be used in later steps. Export these values as environment variables with:
    71  
    72  ```sh
    73  export CLUSTER_NAME=$(yq -r .metadata.name install-config.yaml)
    74  export AZURE_REGION=$(yq -r .platform.azure.region install-config.yaml)
    75  export SSH_KEY=$(yq -r .sshKey install-config.yaml | xargs)
    76  export BASE_DOMAIN=$(yq -r .baseDomain install-config.yaml)
    77  export BASE_DOMAIN_RESOURCE_GROUP=$(yq -r .platform.azure.baseDomainResourceGroupName install-config.yaml)
    78  ```
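
A quick sanity check that the values were extracted as expected:

```sh
# Print the extracted values before continuing
echo "cluster: $CLUSTER_NAME  region: $AZURE_REGION  domain: $BASE_DOMAIN"
echo "base domain resource group: $BASE_DOMAIN_RESOURCE_GROUP"
```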
    79  
    80  ## Create manifests
    81  
    82  Create manifests to enable customizations that are not exposed via the install configuration.
    83  
    84  ```console
    85  $ openshift-install create manifests
    86  INFO Credentials loaded from file "/home/user_id/.azure/osServicePrincipal.json"
    87  INFO Consuming "Install Config" from target directory
    88  WARNING Making control-plane schedulable by setting MastersSchedulable to true for Scheduler cluster settings
    89  ```
    90  
    91  ### Remove control plane machines and machinesets
    92  
    93  Remove the control plane machines and compute machinesets from the manifests.
    94  We'll be providing those ourselves and don't want to involve the [machine-API operator][machine-api-operator].
    95  
    96  ```sh
    97  rm -f openshift/99_openshift-cluster-api_master-machines-*.yaml
    98  rm -f openshift/99_openshift-cluster-api_worker-machineset-*.yaml
    99  rm -f openshift/99_openshift-machine-api_master-control-plane-machine-set.yaml
   100  ```
   101  
   102  ### Make control-plane nodes unschedulable
   103  
Currently, emptying the compute pool (setting compute `replicas` to 0 [in the install config](#create-an-install-config)) makes control-plane nodes schedulable.
   105  But due to a [Kubernetes limitation][kubernetes-service-load-balancers-exclude-masters], router pods running on control-plane nodes will not be reachable by the ingress load balancer.
   106  Update the scheduler configuration to keep router pods and other workloads off the control-plane nodes:
   107  
   108  ```sh
   109  python3 -c '
   110  import yaml;
   111  path = "manifests/cluster-scheduler-02-config.yml";
   112  data = yaml.full_load(open(path));
   113  data["spec"]["mastersSchedulable"] = False;
   114  open(path, "w").write(yaml.dump(data, default_flow_style=False))'
   115  ```
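
To verify the change took effect, the following should now print `false` (using the same `yq` as before):

```sh
# Confirm control-plane nodes are no longer schedulable
yq -r .spec.mastersSchedulable manifests/cluster-scheduler-02-config.yml
```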
   116  
   117  ### Remove DNS Zones
   118  
We don't want [the ingress operator][ingress-operator] to create DNS records (we're going to do it manually), so we need to remove
the `publicZone` section from the DNS configuration in the manifests.
   121  
   122  ```sh
   123  python3 -c '
   124  import yaml;
   125  path = "manifests/cluster-dns-02-config.yml";
   126  data = yaml.full_load(open(path));
   127  del data["spec"]["publicZone"];
   128  open(path, "w").write(yaml.dump(data, default_flow_style=False))'
   129  ```
   130  
   131  ### Resource Group Name and Infra ID
   132  
The OpenShift cluster has been assigned an identifier in the form of `<cluster_name>-<random_string>`. This identifier, called "Infra ID", will be used as
the base name of most resources that will be created in this example. Export the Infra ID as an environment variable for use in later steps:
   135  
   136  ```sh
   137  export INFRA_ID=$(yq -r .status.infrastructureName manifests/cluster-infrastructure-02-config.yml)
   138  ```
   139  
   140  Also, all resources created in this Azure deployment will exist as part of a [resource group][azure-resource-group]. The resource group name is also
   141  based on the Infra ID, in the form of `<cluster_name>-<random_string>-rg`. Export the resource group name to an environment variable that will be used later:
   142  
   143  ```sh
   144  export RESOURCE_GROUP=$(yq -r .status.platformStatus.azure.resourceGroupName manifests/cluster-infrastructure-02-config.yml)
   145  ```
   146  
**Optional:** it's possible to choose any other name for the Infra ID and/or the resource group, but in that case some adjustments to the manifests are needed.
   148  A Python script is provided to help with these adjustments. Export the `INFRA_ID` and the `RESOURCE_GROUP` environment variables with the desired names, copy the
   149  [`setup-manifests.py`](../../../upi/azure/setup-manifests.py) script locally and invoke it with:
   150  
   151  ```sh
   152  python3 setup-manifests.py $RESOURCE_GROUP $INFRA_ID
   153  ```
   154  
   155  ### Create Cluster Credentials
   156  
Azure Stack Hub can only operate in `Manual` credentials mode, so the Cloud Credential Operator must be set to manual mode:
   158  
   159  ```shell
   160  cat >> manifests/cco-configmap.yaml << EOF
   161  apiVersion: v1
   162  kind: ConfigMap
   163  metadata:
   164    name: cloud-credential-operator-config
   165    namespace: openshift-cloud-credential-operator
   166    annotations:
   167      release.openshift.io/create-only: "true"
   168  data:
   169    disabled: "true"
   170  EOF
   171  ```
   172  
Then follow the [official documentation for creating manual credentials][manual-credentials].

The result should be similar to the example below (note that the file names are arbitrary):
   176  
```console
$ cat manifests/*credentials-secret.yaml
apiVersion: v1
kind: Secret
metadata:
  name: azure-cloud-credentials
  namespace: openshift-cloud-controller-manager
stringData:
  azure_subscription_id: <subscription-id>
  azure_client_id: <client-id>
  azure_client_secret: <secret>
  azure_tenant_id: <tenant>
  azure_resource_prefix: <$INFRA_ID>
  azure_resourcegroup: <$RESOURCE_GROUP>
  azure_region: <$AZURE_REGION>
apiVersion: v1
kind: Secret
metadata:
  name: installer-cloud-credentials
  namespace: openshift-image-registry
stringData:
  azure_subscription_id: <subscription-id>
  azure_client_id: <client-id>
  azure_client_secret: <secret>
  azure_tenant_id: <tenant>
  azure_resource_prefix: <$INFRA_ID>
  azure_resourcegroup: <$RESOURCE_GROUP>
  azure_region: <$AZURE_REGION>
apiVersion: v1
kind: Secret
metadata:
  name: cloud-credentials
  namespace: openshift-ingress-operator
stringData:
  azure_subscription_id: <subscription-id>
  azure_client_id: <client-id>
  azure_client_secret: <secret>
  azure_tenant_id: <tenant>
  azure_resource_prefix: <$INFRA_ID>
  azure_resourcegroup: <$RESOURCE_GROUP>
  azure_region: <$AZURE_REGION>
apiVersion: v1
kind: Secret
metadata:
  name: azure-cloud-credentials
  namespace: openshift-machine-api
stringData:
  azure_subscription_id: <subscription-id>
  azure_client_id: <client-id>
  azure_client_secret: <secret>
  azure_tenant_id: <tenant>
  azure_resource_prefix: <$INFRA_ID>
  azure_resourcegroup: <$RESOURCE_GROUP>
  azure_region: <$AZURE_REGION>
```
   232  
NOTE: Any credentials for a credential request in Tech Preview must be excluded, or they will cause the
installation to fail. As of 4.10, there is one such credential request, for the `capi-operator`, which is in
Tech Preview. Any credential requests gated behind a feature gate can simply be removed before you create the credentials:
   236  
```console
$ grep "release.openshift.io/feature-set" *
0000_30_capi-operator_00_credentials-request.yaml:    release.openshift.io/feature-set: TechPreviewNoUpgrade
$ rm 0000_30_capi-operator_00_credentials-request.yaml
```
   242  
   243  ### Set Cluster to use the Internal Certificate Authority (Optional)
   244  
If your Azure Stack environment uses an internal CA, update `.spec.trustedCA.name` to `user-ca-bundle` in `./manifests/cluster-proxy-01-config.yaml`, so that it looks like this:
   246  
```console
$ cat manifests/cluster-proxy-01-config.yaml
apiVersion: config.openshift.io/v1
kind: Proxy
metadata:
  creationTimestamp: null
  name: cluster
spec:
  trustedCA:
    name: user-ca-bundle
status: {}
```
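
One way to produce this result is with the same `python3` one-liner pattern used earlier (a sketch; you can also simply edit the file by hand):

```sh
python3 -c '
import yaml;
path = "manifests/cluster-proxy-01-config.yaml";
data = yaml.full_load(open(path));
data["spec"]["trustedCA"] = {"name": "user-ca-bundle"};
open(path, "w").write(yaml.dump(data, default_flow_style=False))'
```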
   259  
You will also need to [add the CA to the ignition shim][ign-ca] in a later step.

## Create ignition configs
   262  
   263  Now we can create the bootstrap ignition configs:
   264  
   265  ```console
   266  $ openshift-install create ignition-configs
   267  INFO Consuming Openshift Manifests from target directory
   268  INFO Consuming Worker Machines from target directory
   269  INFO Consuming Common Manifests from target directory
   270  INFO Consuming Master Machines from target directory
   271  ```
   272  
   273  After running the command, several files will be available in the directory.
   274  
   275  ```console
   276  $ tree
   277  .
   278  ├── auth
   279  │   └── kubeconfig
   280  ├── bootstrap.ign
   281  ├── master.ign
   282  ├── metadata.json
   283  └── worker.ign
   284  ```
   285  
   286  ## Create The Resource Group
   287  
   288  Use the command below to create the resource group in the selected Azure region:
   289  
   290  ```sh
   291  az group create --name $RESOURCE_GROUP --location $AZURE_REGION
   292  ```
   293  
   294  ## Upload the files to a Storage Account
   295  
   296  The deployment steps will read the Red Hat Enterprise Linux CoreOS virtual hard disk (VHD) image and the bootstrap ignition config file
   297  from a blob. Create a storage account that will be used to store them and export its key as an environment variable.
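
Note that Azure storage account names must be 3 to 24 characters long and may contain only lowercase letters and numbers, so make sure `${CLUSTER_NAME}sa` fits those rules; a quick check before creating the account:

```sh
# Storage account names: 3-24 characters, lowercase letters and digits only
echo "${CLUSTER_NAME}sa" | grep -Eq '^[a-z0-9]{3,24}$' \
  || echo "WARNING: ${CLUSTER_NAME}sa is not a valid storage account name"
```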
   298  
   299  ```sh
   300  az storage account create -g $RESOURCE_GROUP --location $AZURE_REGION --name ${CLUSTER_NAME}sa --kind Storage --sku Standard_LRS
   301  export ACCOUNT_KEY=`az storage account keys list -g $RESOURCE_GROUP --account-name ${CLUSTER_NAME}sa --query "[0].value" -o tsv`
   302  ```
   303  
   304  ### Copy the cluster image
   305  
   306  In order to create VMs, the RHCOS VHD must be available in the Azure Stack
   307  environment. The VHD should be downloaded locally, decompressed, and uploaded to a
   308  storage blob.
   309  
First, download and decompress the VHD. Note that the decompressed file is 16 GB:
   311  
```console
$ export COMPRESSED_VHD_URL=$(openshift-install coreos print-stream-json | jq -r '.architectures.x86_64.artifacts.azurestack.formats."vhd.gz".disk.location')
$ curl -O -L $COMPRESSED_VHD_URL
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100  983M  100  983M    0     0  9416k      0  0:01:46  0:01:46 --:--:-- 10.5M
$ gunzip rhcos-49.84.202107010027-0-azurestack.x86_64.vhd.gz
```
   320  
   321  Next, create a container for the VHD:
   322  
   323  ```sh
   324  az storage container create --name vhd --account-name ${CLUSTER_NAME}sa
   325  ```
   326  
As mentioned above, the decompressed VHD is quite large. If you have a fast upload speed,
you can simply upload the whole 16 GB file:
   329  
   330  ```sh
   331  az storage blob upload --account-name ${CLUSTER_NAME}sa --account-key $ACCOUNT_KEY -c vhd -n "rhcos.vhd" -f rhcos-49.84.202107010027-0-azurestack.x86_64.vhd
   332  ```
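
Optionally, confirm the upload by checking the blob size, which should match the decompressed VHD (roughly 16 GB); the `--query` path here is an assumption and may vary slightly by CLI version:

```sh
# Print the uploaded blob's size in bytes
az storage blob show --account-name ${CLUSTER_NAME}sa --account-key $ACCOUNT_KEY \
  -c vhd -n "rhcos.vhd" --query properties.contentLength -o tsv
```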
   333  
   334  If your local connection speed is too slow to upload a 16 GB file, consider using a VM in your Azure Stack Hub instance or another cloud provider.
   335  
   336  ### Upload the bootstrap ignition
   337  
   338  Create a blob storage container and upload the generated `bootstrap.ign` file:
   339  
   340  ```sh
   341  az storage container create --name files --account-name "${CLUSTER_NAME}sa" --public-access blob --account-key "$ACCOUNT_KEY"
   342  az storage blob upload --account-name "${CLUSTER_NAME}sa" --account-key "$ACCOUNT_KEY" -c "files" -f "bootstrap.ign" -n "bootstrap.ign"
   343  ```
   344  
   345  ## Create the DNS zones
   346  
A few DNS records are required for clusters that use user-provisioned infrastructure. Feel free to choose the DNS strategy that fits your needs best.

This example adds records to an Azure Stack public zone. For external (internet) visibility, your authoritative DNS zone, such as [Azure's own DNS solution][azure-dns],
can delegate to the DNS nameservers of your Azure Stack environment.
Note that the public zone doesn't necessarily need to exist in the same resource group as the
cluster deployment itself and may even already exist in your organization for the desired base domain. If that's the case, you can skip the public DNS
zone creation step, but make sure the install config generated earlier [reflects that scenario](customization.md#cluster-scoped-properties).
   354  
   355  Create the new *public* DNS zone in the resource group exported in the `BASE_DOMAIN_RESOURCE_GROUP` environment variable, or just skip this step if you're going
   356  to use one that already exists in your organization:
   357  
   358  ```sh
   359  az network dns zone create -g "$BASE_DOMAIN_RESOURCE_GROUP" -n "${CLUSTER_NAME}.${BASE_DOMAIN}"
   360  ```
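
If you are delegating `${CLUSTER_NAME}.${BASE_DOMAIN}` from an authoritative zone elsewhere (as described above), you can list the new zone's name servers to build the delegation records, for example:

```sh
# List the name servers assigned to the new public zone
az network dns zone show -g "$BASE_DOMAIN_RESOURCE_GROUP" \
  -n "${CLUSTER_NAME}.${BASE_DOMAIN}" --query nameServers -o tsv
```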
   361  
   362  ## Deployment
   363  
   364  The key parts of this UPI deployment are the [Azure Resource Manager][azuretemplates] templates, which are responsible
for deploying most resources. They're provided as a few JSON files following the `NN_name.json` pattern. In the
   366  next steps we're going to deploy each one of them in order, using [az (Azure CLI)][azurecli] and providing the expected parameters.
   367  
   368  ## Deploy the Virtual Network
   369  
In this example we're going to create a Virtual Network and subnets specifically for the OpenShift cluster. You can skip this step
if the cluster is going to live in a VNet that already exists in your organization, or you can edit the `01_vnet.json` file to suit your
own needs (e.g. change the subnets' address prefixes in CIDR format).
   373  
   374  Copy the [`01_vnet.json`](../../../upi/azurestack/01_vnet.json) ARM template locally.
   375  
   376  Create the deployment using the `az` client:
   377  
   378  ```sh
   379  az deployment group create -g $RESOURCE_GROUP \
   380    --template-file "01_vnet.json" \
   381    --parameters baseName="$INFRA_ID"
   382  ```
   383  
   384  ## Deploy the image
   385  
   386  Copy the [`02_storage.json`](../../../upi/azurestack/02_storage.json) ARM template locally.
   387  
   388  Create the deployment using the `az` client:
   389  
   390  ```sh
   391  export VHD_BLOB_URL=`az storage blob url --account-name ${CLUSTER_NAME}sa --account-key $ACCOUNT_KEY -c vhd -n "rhcos.vhd" -o tsv`
   392  
   393  az deployment group create -g $RESOURCE_GROUP \
   394    --template-file "02_storage.json" \
   395    --parameters vhdBlobURL="$VHD_BLOB_URL" \
   396    --parameters baseName="$INFRA_ID"
   397  ```
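
If you want to confirm that a template deployment succeeded, you can query its provisioning state. This sketch assumes the default deployment name, which `az` derives from the template file name (here `02_storage`):

```sh
# Should print "Succeeded" once the deployment has finished
az deployment group show -g $RESOURCE_GROUP -n 02_storage \
  --query properties.provisioningState -o tsv
```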
   398  
   399  ## Deploy the load balancers
   400  
   401  Copy the [`03_infra.json`](../../../upi/azurestack/03_infra.json) ARM template locally.
   402  
   403  Deploy the load balancer and public IP addresses using the `az` client:
   404  
   405  ```sh
   406  az deployment group create -g $RESOURCE_GROUP \
   407    --template-file "03_infra.json" \
   408    --parameters baseName="$INFRA_ID"
   409  ```
   410  
Create `api` and `api-int` DNS records in the *public* zone for the API load balancers. Note that `BASE_DOMAIN_RESOURCE_GROUP` must point to the resource group where the public DNS zone exists.
   412  
```sh
export PUBLIC_IP=$(az network public-ip list -g "$RESOURCE_GROUP" --query "[?name=='${INFRA_ID}-master-pip'] | [0].ipAddress" -o tsv)
export PRIVATE_IP=$(az network lb frontend-ip show -g "$RESOURCE_GROUP" --lb-name "${INFRA_ID}-internal" -n internal-lb-ip --query "privateIpAddress" -o tsv)
az network dns record-set a add-record -g "$BASE_DOMAIN_RESOURCE_GROUP" -z "${CLUSTER_NAME}.${BASE_DOMAIN}" -n api -a "$PUBLIC_IP" --ttl 60
az network dns record-set a add-record -g "$BASE_DOMAIN_RESOURCE_GROUP" -z "${CLUSTER_NAME}.${BASE_DOMAIN}" -n api-int -a "$PRIVATE_IP" --ttl 60
```
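
You can list the A records in the zone to confirm both records were created:

```sh
# Show all A record sets in the cluster's public zone
az network dns record-set a list -g "$BASE_DOMAIN_RESOURCE_GROUP" \
  -z "${CLUSTER_NAME}.${BASE_DOMAIN}" -o table
```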
   419  
   420  ## Launch the temporary cluster bootstrap
   421  
   422  Copy the [`04_bootstrap.json`](../../../upi/azurestack/04_bootstrap.json) ARM template locally.
   423  
First create the bootstrap ignition shim appropriate for your environment, then deploy the bootstrap VM using the `az` client.
   425  
   426  ### Create the Bootstrap Ignition Shim
   427  
   428  If your Azure Stack environment uses a public certificate authority, you can create the ignition shim like this:
   429  
```sh
export BOOTSTRAP_URL=$(az storage blob url --account-name "${CLUSTER_NAME}sa" --account-key "$ACCOUNT_KEY" -c "files" -n "bootstrap.ign" -o tsv)
export BOOTSTRAP_IGNITION=$(jq -rcnM --arg v "3.2.0" --arg url "$BOOTSTRAP_URL" '{ignition:{version:$v,config:{replace:{source:$url}}}}' | base64 | tr -d '\n')
```
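
The shim is just a small base64-encoded Ignition config that points at the full bootstrap config in the storage account; you can inspect it with:

```sh
# Decode and pretty-print the generated shim
echo "$BOOTSTRAP_IGNITION" | base64 -d | jq .
```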
   434  
   435  ### Create the Bootstrap Ignition Shim with an Internal Certificate Authority (Optional)
   436  
If your Azure Stack environment uses an internal CA, you will need to add the PEM-encoded bundle to the bootstrap ignition
   438  shim so that your bootstrap VM will be able to pull the bootstrap ignition from the storage account. Assuming your CA
   439  is in a file called `CA.pem` you can add the bundle to the shim like this:
   440  
```sh
export CA="data:text/plain;charset=utf-8;base64,$(cat CA.pem | base64 | tr -d '\n')"
export BOOTSTRAP_URL=$(az storage blob url --account-name "${CLUSTER_NAME}sa" --account-key "$ACCOUNT_KEY" -c "files" -n "bootstrap.ign" -o tsv)
export BOOTSTRAP_IGNITION=$(jq -rcnM --arg v "3.2.0" --arg url "$BOOTSTRAP_URL" --arg cert "$CA" '{ignition:{version:$v,security:{tls:{certificateAuthorities:[{source:$cert}]}},config:{replace:{source:$url}}}}' | base64 | tr -d '\n')
```
   446  
### Deploy the Bootstrap VM

Create the deployment using the `az` client:

```sh
az deployment group create --verbose -g "$RESOURCE_GROUP" \
  --template-file "04_bootstrap.json" \
  --parameters bootstrapIgnition="$BOOTSTRAP_IGNITION" \
  --parameters sshKeyData="$SSH_KEY" \
  --parameters baseName="$INFRA_ID" \
  --parameters diagnosticsStorageAccountName="${CLUSTER_NAME}sa"
```
   456  
   457  ## Launch the permanent control plane
   458  
   459  Copy the [`05_masters.json`](../../../upi/azurestack/05_masters.json) ARM template locally.
   460  
   461  Create the deployment using the `az` client:
   462  
```sh
export MASTER_IGNITION=$(cat master.ign | base64 | tr -d '\n')

az deployment group create -g "$RESOURCE_GROUP" \
  --template-file "05_masters.json" \
  --parameters masterIgnition="$MASTER_IGNITION" \
  --parameters sshKeyData="$SSH_KEY" \
  --parameters baseName="$INFRA_ID" \
  --parameters masterVMSize="Standard_DS4_v2" \
  --parameters diskSizeGB="1023" \
  --parameters diagnosticsStorageAccountName="${CLUSTER_NAME}sa"
```
   475  
## Wait for the bootstrap to complete
   477  
   478  Wait until cluster bootstrapping has completed:
   479  
   480  ```console
   481  $ openshift-install wait-for bootstrap-complete --log-level debug
   482  DEBUG OpenShift Installer v4.n
   483  DEBUG Built from commit 6b629f0c847887f22c7a95586e49b0e2434161ca
   484  INFO Waiting up to 30m0s for the Kubernetes API at https://api.cluster.basedomain.com:6443...
   485  DEBUG Still waiting for the Kubernetes API: the server could not find the requested resource
   486  DEBUG Still waiting for the Kubernetes API: the server could not find the requested resource
   487  DEBUG Still waiting for the Kubernetes API: Get https://api.cluster.basedomain.com:6443/version?timeout=32s: dial tcp: connect: connection refused
   488  INFO API v1.14.n up
   489  INFO Waiting up to 30m0s for bootstrapping to complete...
   490  DEBUG Bootstrap status: complete
   491  INFO It is now safe to remove the bootstrap resources
   492  ```
   493  
Once the bootstrapping process is complete, you can deallocate and delete the bootstrap resources:
   495  
   496  ```sh
   497  az network nsg rule delete -g $RESOURCE_GROUP --nsg-name ${INFRA_ID}-nsg --name bootstrap_ssh_in
   498  az vm stop -g $RESOURCE_GROUP --name ${INFRA_ID}-bootstrap
   499  az vm deallocate -g $RESOURCE_GROUP --name ${INFRA_ID}-bootstrap
   500  az vm delete -g $RESOURCE_GROUP --name ${INFRA_ID}-bootstrap --yes
   501  az disk delete -g $RESOURCE_GROUP --name ${INFRA_ID}-bootstrap_OSDisk --no-wait --yes
   502  az network nic delete -g $RESOURCE_GROUP --name ${INFRA_ID}-bootstrap-nic --no-wait
   503  az storage blob delete --account-key $ACCOUNT_KEY --account-name ${CLUSTER_NAME}sa --container-name files --name bootstrap.ign
   504  az network public-ip delete -g $RESOURCE_GROUP --name ${INFRA_ID}-bootstrap-ssh-pip
   505  ```
   506  
   507  ## Access the OpenShift API
   508  
   509  You can now use the `oc` or `kubectl` commands to talk to the OpenShift API. The admin credentials are in `auth/kubeconfig`. For example:
   510  
   511  ```sh
   512  export KUBECONFIG="$PWD/auth/kubeconfig"
   513  oc get nodes
   514  oc get clusteroperator
   515  ```
   516  
   517  Note that only the API will be up at this point. The OpenShift web console will run on the compute nodes.
   518  
   519  ## Launch compute nodes
   520  
   521  You may create compute nodes by launching individual instances discretely or by automated processes outside the cluster (e.g. Auto Scaling Groups).
You can also take advantage of the built-in cluster scaling mechanisms and the machine API in OpenShift.
   523  
   524  In this example, we'll manually launch three instances via the provided ARM template. Additional instances can be launched by editing the `06_workers.json` file.
   525  
   526  Copy the [`06_workers.json`](../../../upi/azurestack/06_workers.json) ARM template locally.
   527  
   528  Create the deployment using the `az` client:
   529  
```sh
export WORKER_IGNITION=$(cat worker.ign | base64 | tr -d '\n')

az deployment group create -g $RESOURCE_GROUP \
  --template-file "06_workers.json" \
  --parameters workerIgnition="$WORKER_IGNITION" \
  --parameters sshKeyData="$SSH_KEY" \
  --parameters baseName="$INFRA_ID" \
  --parameters diagnosticsStorageAccountName="${CLUSTER_NAME}sa"
```
   540  
   541  ### Approve the worker CSRs
   542  
   543  Even after they've booted up, the workers will not show up in `oc get nodes`.
   544  
Instead, they will create certificate signing requests (CSRs) that need to be approved. Eventually, you should see `Pending` entries looking like the ones below.
You can use `watch oc get csr -A` to watch until the pending CSRs appear.
   547  
   548  ```console
   549  $ oc get csr -A
   550  NAME        AGE    REQUESTOR                                                                   CONDITION
   551  csr-8bppf   2m8s   system:serviceaccount:openshift-machine-config-operator:node-bootstrapper   Pending
   552  csr-dj2w4   112s   system:serviceaccount:openshift-machine-config-operator:node-bootstrapper   Pending
   553  csr-ph8s8   11s    system:serviceaccount:openshift-machine-config-operator:node-bootstrapper   Pending
   554  csr-q7f6q   19m    system:node:master01                                                        Approved,Issued
   555  csr-5ztvt   19m    system:node:master02                                                        Approved,Issued
   556  csr-576l2   19m    system:node:master03                                                        Approved,Issued
   557  csr-htmtm   19m    system:serviceaccount:openshift-machine-config-operator:node-bootstrapper   Approved,Issued
   558  csr-wpvxq   19m    system:serviceaccount:openshift-machine-config-operator:node-bootstrapper   Approved,Issued
   559  csr-xpp49   19m    system:serviceaccount:openshift-machine-config-operator:node-bootstrapper   Approved,Issued
   560  ```
   561  
You should inspect each pending CSR with the `oc describe csr <name>` command and verify that it comes from a node you recognise. If it does, the CSRs can be approved:
   563  
   564  ```console
   565  $ oc adm certificate approve csr-8bppf csr-dj2w4 csr-ph8s8
   566  certificatesigningrequest.certificates.k8s.io/csr-8bppf approved
   567  certificatesigningrequest.certificates.k8s.io/csr-dj2w4 approved
   568  certificatesigningrequest.certificates.k8s.io/csr-ph8s8 approved
   569  ```
   570  
Approved nodes should now show up in `oc get nodes`, but they will be in the `NotReady` state. They will create a second CSR which must also be reviewed and approved.
Repeat the process of inspecting the pending CSRs and approving them.
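
Once you have verified the pending requests, here is a sketch for approving all currently pending CSRs in a single command:

```sh
# Approve every CSR that does not yet have a status (i.e. is still pending)
oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' \
  | xargs --no-run-if-empty oc adm certificate approve
```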
   573  
Once all CSRs are approved, the nodes should switch to `Ready` and pods will be scheduled on them.
   575  
   576  ```console
   577  $ oc get nodes
   578  NAME       STATUS   ROLES    AGE     VERSION
   579  master01   Ready    master   23m     v1.14.6+cebabbf7a
   580  master02   Ready    master   23m     v1.14.6+cebabbf7a
   581  master03   Ready    master   23m     v1.14.6+cebabbf7a
   582  node01     Ready    worker   2m30s   v1.14.6+cebabbf7a
   583  node02     Ready    worker   2m35s   v1.14.6+cebabbf7a
   584  node03     Ready    worker   2m34s   v1.14.6+cebabbf7a
   585  ```
   586  
   587  ### Add the Ingress DNS Records
   588  
   589  Create DNS records in the public zone pointing at the ingress load balancer. Use A, CNAME, etc. records, as you see fit.
You can create either a wildcard record, `*.apps.{baseDomain}.`, or [specific records](#specific-route-records) for every route.
   591  
   592  First, wait for the ingress default router to create a load balancer and populate the `EXTERNAL-IP` column:
   593  
   594  ```console
   595  $ oc -n openshift-ingress get service router-default
   596  NAME             TYPE           CLUSTER-IP      EXTERNAL-IP     PORT(S)                      AGE
   597  router-default   LoadBalancer   172.30.20.10   35.130.120.110   80:32288/TCP,443:31215/TCP   20
   598  ```
   599  
   600  Add a `*.apps` record to the *public* DNS zone:
   601  
   602  ```sh
   603  export PUBLIC_IP_ROUTER=`oc -n openshift-ingress get service router-default --no-headers | awk '{print $4}'`
   604  az network dns record-set a add-record -g $BASE_DOMAIN_RESOURCE_GROUP -z ${CLUSTER_NAME}.${BASE_DOMAIN} -n *.apps -a $PUBLIC_IP_ROUTER --ttl 300
   605  ```
   606  
Or, if you are adding this cluster to an already existing public zone, use this instead:
   608  
   609  ```sh
   610  export PUBLIC_IP_ROUTER=`oc -n openshift-ingress get service router-default --no-headers | awk '{print $4}'`
   611  az network dns record-set a add-record -g $BASE_DOMAIN_RESOURCE_GROUP -z ${BASE_DOMAIN} -n *.apps.${CLUSTER_NAME} -a $PUBLIC_IP_ROUTER --ttl 300
   612  ```
   613  
   614  #### Specific route records
   615  
   616  If you prefer to add explicit domains instead of using a wildcard, you can create entries for each of the cluster's current routes. Use the command below to check what they are:
   617  
   618  ```console
   619  $ oc get --all-namespaces -o jsonpath='{range .items[*]}{range .status.ingress[*]}{.host}{"\n"}{end}{end}' routes
   620  oauth-openshift.apps.cluster.basedomain.com
   621  console-openshift-console.apps.cluster.basedomain.com
   622  downloads-openshift-console.apps.cluster.basedomain.com
   623  alertmanager-main-openshift-monitoring.apps.cluster.basedomain.com
   624  grafana-openshift-monitoring.apps.cluster.basedomain.com
   625  prometheus-k8s-openshift-monitoring.apps.cluster.basedomain.com
   626  ```
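
The following sketch creates one A record per route, reusing the `PUBLIC_IP_ROUTER` value exported above; the parameter expansion strips the zone suffix to obtain the relative record name:

```sh
# Create an A record for each route host in the cluster's public zone
for host in $(oc get --all-namespaces -o jsonpath='{range .items[*]}{range .status.ingress[*]}{.host}{"\n"}{end}{end}' routes); do
  az network dns record-set a add-record -g "$BASE_DOMAIN_RESOURCE_GROUP" -z "${CLUSTER_NAME}.${BASE_DOMAIN}" \
    -n "${host%.${CLUSTER_NAME}.${BASE_DOMAIN}}" -a "$PUBLIC_IP_ROUTER" --ttl 300
done
```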
   627  
## Wait for the installation to complete

Wait until the cluster is ready:
   631  
   632  ```console
   633  $ openshift-install wait-for install-complete --log-level debug
   634  DEBUG Built from commit 6b629f0c847887f22c7a95586e49b0e2434161ca
   635  INFO Waiting up to 30m0s for the cluster at https://api.cluster.basedomain.com:6443 to initialize...
   636  DEBUG Still waiting for the cluster to initialize: Working towards 4.2.12: 99% complete, waiting on authentication, console, monitoring
   637  DEBUG Still waiting for the cluster to initialize: Working towards 4.2.12: 100% complete
   638  DEBUG Cluster is initialized
   639  INFO Waiting up to 10m0s for the openshift-console route to be created...
   640  DEBUG Route found in openshift-console namespace: console
   641  DEBUG Route found in openshift-console namespace: downloads
   642  DEBUG OpenShift console route is created
   643  INFO Install complete!
   644  INFO To access the cluster as the system:admin user when using 'oc', run
   645      export KUBECONFIG=${PWD}/auth/kubeconfig
   646  INFO Access the OpenShift web-console here: https://console-openshift-console.apps.cluster.basedomain.com
   647  INFO Login to the console with user: kubeadmin, password: REDACTED
   648  ```
   649  
   650  [azuretemplates]: https://docs.microsoft.com/en-us/azure/azure-resource-manager/template-deployment-overview
   651  [openshiftinstall]: https://github.com/openshift/installer
   652  [azurecli]: https://docs.microsoft.com/en-us/cli/azure/
   653  [configurecli]: https://docs.microsoft.com/en-us/azure-stack/user/azure-stack-version-profiles-azurecli2?view=azs-2102&tabs=ad-lin#connect-with-azure-cli
   654  [jqjson]: https://stedolan.github.io/jq/
   655  [yqyaml]: https://kislyuk.github.io/yq/
   656  [ingress-operator]: https://github.com/openshift/cluster-ingress-operator
   657  [machine-api-operator]: https://github.com/openshift/machine-api-operator
   658  [azure-identity]: https://docs.microsoft.com/en-us/azure/architecture/framework/security/identity
   659  [azure-resource-group]: https://docs.microsoft.com/en-us/azure/azure-resource-manager/management/overview#resource-groups
   660  [azure-dns]: https://docs.microsoft.com/en-us/azure/dns/dns-overview
   661  [kubernetes-service-load-balancers-exclude-masters]: https://github.com/kubernetes/kubernetes/issues/65618
   662  [manual-credentials]: https://docs.openshift.com/container-platform/4.8/installing/installing_azure/manually-creating-iam-azure.html
   663  [azure-vhd-utils]: https://github.com/microsoft/azure-vhd-utils
   664  [proxy-ca]: #set-cluster-to-use-the-internal-certificate-authority-optional
   665  [ign-ca]: #create-the-bootstrap-ignition-shim-with-an-internal-certificate-authority-optional