     1  # Install: User Provided Infrastructure (UPI)
     2  
     3  The steps for performing a user-provided infrastructure install are outlined here. Several
     4  [Azure Resource Manager][azuretemplates] templates are provided to assist in
     5  completing these steps or to help model your own. You are also free to create
     6  the required resources through other methods; the templates are just an
     7  example.
     8  
     9  ## Prerequisites
    10  
    11  * all prerequisites from [README](README.md)
    12  * the following binaries installed and in $PATH:
    13    * [openshift-install][openshiftinstall]
    * It is recommended that the OpenShift installer CLI version match the version of the cluster being deployed. The version used in this example is 4.3.0 GA. You can check the installed tool versions with the commands shown after this list.
    15    * [az (Azure CLI)][azurecli] installed and authenticated
    * Command flags and structure may vary between `az` versions. The version used in this example is 2.0.80.
    17    * python3
    18    * [jq][jqjson]
    19    * [yq][yqyaml]
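
Optionally, you can confirm that the tools above are available in `$PATH` and check their versions before starting. The exact output format varies between releases:

```sh
openshift-install version
az --version
python3 --version
jq --version
```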
    20  
    21  ## Create an install config
    22  
Create an install configuration as you would for [the usual approach](install.md#create-configuration):
    24  
    25  ```console
    26  $ openshift-install create install-config
    27  ? SSH Public Key /home/user_id/.ssh/id_rsa.pub
    28  ? Platform azure
    29  ? azure subscription id xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx
    30  ? azure tenant id xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx
    31  ? azure service principal client id xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx
    32  ? azure service principal client secret xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx
    33  INFO Saving user credentials to "/home/user_id/.azure/osServicePrincipal.json"
    34  ? Region centralus
    35  ? Base Domain example.com
    36  ? Cluster Name test
    37  ? Pull Secret [? for help]
    38  ```
    39  
Note that a new Virtual Network and subnets will be created specifically for this deployment, but it is also possible to use networking
infrastructure that already exists in your organization. Please refer to the [customization instructions](customization.md) for more details about setting
up an install config for that scenario; a sketch of the relevant fields is shown below.
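
For example, to point the install config at a pre-existing VNet, you could set the relevant `platform.azure` fields with a one-liner in the same style used elsewhere in this guide. This is only a sketch: the resource names below are placeholders, and the authoritative list of fields is in the [customization instructions](customization.md).

```sh
# Sketch: reuse an existing VNet. "example-network-rg", "example-vnet" and the
# subnet names are placeholders for resources that already exist in your organization.
python3 -c '
import yaml;
path = "install-config.yaml";
data = yaml.full_load(open(path));
azure = data["platform"]["azure"];
azure["networkResourceGroupName"] = "example-network-rg";
azure["virtualNetwork"] = "example-vnet";
azure["controlPlaneSubnet"] = "example-master-subnet";
azure["computeSubnet"] = "example-worker-subnet";
open(path, "w").write(yaml.dump(data, default_flow_style=False))'
```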
    43  
    44  ### Extract data from install config
    45  
Some data from the install configuration file will be used in later steps. Export the values as environment variables:
    47  
    48  ```sh
    49  export CLUSTER_NAME=`yq -r .metadata.name install-config.yaml`
    50  export AZURE_REGION=`yq -r .platform.azure.region install-config.yaml`
    51  export BASE_DOMAIN=`yq -r .baseDomain install-config.yaml`
    52  export BASE_DOMAIN_RESOURCE_GROUP=`yq -r .platform.azure.baseDomainResourceGroupName install-config.yaml`
    53  ```
    54  
    55  ### Empty the compute pool
    56  
    57  We'll be providing the compute machines ourselves, so edit the resulting `install-config.yaml` to set `replicas` to 0 for the `compute` pool:
    58  
    59  ```sh
    60  python3 -c '
    61  import yaml;
    62  path = "install-config.yaml";
    63  data = yaml.full_load(open(path));
    64  data["compute"][0]["replicas"] = 0;
    65  open(path, "w").write(yaml.dump(data, default_flow_style=False))'
    66  ```
    67  
    68  ## Create manifests
    69  
    70  Create manifests to enable customizations that are not exposed via the install configuration.
    71  
    72  ```console
    73  $ openshift-install create manifests
    74  INFO Credentials loaded from file "/home/user_id/.azure/osServicePrincipal.json"
    75  INFO Consuming "Install Config" from target directory
    76  WARNING Making control-plane schedulable by setting MastersSchedulable to true for Scheduler cluster settings
    77  ```
    78  
    79  ### Remove control plane machines and machinesets
    80  
    81  Remove the control plane machines and compute machinesets from the manifests.
    82  We'll be providing those ourselves and don't want to involve the [machine-API operator][machine-api-operator].
    83  
    84  ```sh
    85  rm -f openshift/99_openshift-cluster-api_master-machines-*.yaml
    86  rm -f openshift/99_openshift-cluster-api_worker-machineset-*.yaml
    87  rm -f openshift/99_openshift-machine-api_master-control-plane-machine-set.yaml
    88  ```
    89  
    90  ### Make control-plane nodes unschedulable
    91  
Currently [emptying the compute pool](#empty-the-compute-pool) makes control-plane nodes schedulable.
    93  But due to a [Kubernetes limitation][kubernetes-service-load-balancers-exclude-masters], router pods running on control-plane nodes will not be reachable by the ingress load balancer.
    94  Update the scheduler configuration to keep router pods and other workloads off the control-plane nodes:
    95  
    96  ```sh
    97  python3 -c '
    98  import yaml;
    99  path = "manifests/cluster-scheduler-02-config.yml";
   100  data = yaml.full_load(open(path));
   101  data["spec"]["mastersSchedulable"] = False;
   102  open(path, "w").write(yaml.dump(data, default_flow_style=False))'
   103  ```
   104  
   105  ### Remove DNS Zones
   106  
We don't want [the ingress operator][ingress-operator] to create DNS records (we're going to do that manually), so we need to remove
the `privateZone` and `publicZone` sections from the DNS configuration in the manifests.
   109  
   110  ```sh
   111  python3 -c '
   112  import yaml;
   113  path = "manifests/cluster-dns-02-config.yml";
   114  data = yaml.full_load(open(path));
   115  del data["spec"]["publicZone"];
   116  del data["spec"]["privateZone"];
   117  open(path, "w").write(yaml.dump(data, default_flow_style=False))'
   118  ```
   119  
   120  ### Resource Group Name and Infra ID
   121  
The OpenShift cluster has been assigned an identifier in the form of `<cluster_name>-<random_string>`. This identifier, called the "Infra ID", will be used as
the base name of most of the resources created in this example. Export the Infra ID as an environment variable for use in later steps:
   124  
   125  ```sh
   126  export INFRA_ID=`yq -r '.status.infrastructureName' manifests/cluster-infrastructure-02-config.yml`
   127  ```
   128  
   129  Also, all resources created in this Azure deployment will exist as part of a [resource group][azure-resource-group]. The resource group name is also
based on the Infra ID, in the form of `<cluster_name>-<random_string>-rg`. Export the resource group name to an environment variable that will be used later:
   131  
   132  ```sh
   133  export RESOURCE_GROUP=`yq -r '.status.platformStatus.azure.resourceGroupName' manifests/cluster-infrastructure-02-config.yml`
   134  ```
   135  
**Optional:** it's possible to choose any other name for the Infra ID and/or the resource group, but in that case some adjustments to the manifests are needed.
   137  A Python script is provided to help with these adjustments. Export the `INFRA_ID` and the `RESOURCE_GROUP` environment variables with the desired names, copy the
   138  [`setup-manifests.py`](../../../upi/azure/setup-manifests.py) script locally and invoke it with:
   139  
   140  ```sh
   141  python3 setup-manifests.py $RESOURCE_GROUP $INFRA_ID
   142  ```
   143  
   144  ## Create ignition configs
   145  
   146  Now we can create the bootstrap ignition configs:
   147  
   148  ```console
   149  $ openshift-install create ignition-configs
   150  INFO Consuming Openshift Manifests from target directory
   151  INFO Consuming Worker Machines from target directory
   152  INFO Consuming Common Manifests from target directory
   153  INFO Consuming Master Machines from target directory
   154  ```
   155  
   156  After running the command, several files will be available in the directory.
   157  
   158  ```console
   159  $ tree
   160  .
   161  ├── auth
   162  │   └── kubeconfig
   163  ├── bootstrap.ign
   164  ├── master.ign
   165  ├── metadata.json
   166  └── worker.ign
   167  ```
   168  
## Create the resource group and identity
   170  
   171  Use the command below to create the resource group in the selected Azure region:
   172  
   173  ```sh
   174  az group create --name $RESOURCE_GROUP --location $AZURE_REGION
   175  ```
   176  
   177  Also, create an identity which will be used to grant the required access to cluster operators:
   178  
   179  ```sh
   180  az identity create -g $RESOURCE_GROUP -n ${INFRA_ID}-identity
   181  ```
   182  
   183  ## Upload the files to a Storage Account
   184  
   185  The deployment steps will read the Red Hat Enterprise Linux CoreOS virtual hard disk (VHD) image and the bootstrap ignition config file
   186  from a blob. Create a storage account that will be used to store them and export its key as an environment variable.
   187  
   188  ```sh
   189  az storage account create -g $RESOURCE_GROUP --location $AZURE_REGION --name ${CLUSTER_NAME}sa --kind Storage --sku Standard_LRS
   190  export ACCOUNT_KEY=`az storage account keys list -g $RESOURCE_GROUP --account-name ${CLUSTER_NAME}sa --query "[0].value" -o tsv`
   191  ```
   192  
   193  ### Copy the cluster image
   194  
   195  Given the size of the RHCOS VHD, it's not possible to run the deployments with this file stored locally on your machine.
   196  We must copy and store it in a storage container instead. To do so, first create a blob storage container and then copy the VHD.
   197  
   198  ```sh
   199  export OCP_ARCH="x86_64" # or "aarch64"
   200  az storage container create --name vhd --account-name ${CLUSTER_NAME}sa
   201  export VHD_URL=$(openshift-install coreos print-stream-json | jq -r --arg arch "$OCP_ARCH" '.architectures[$arch]."rhel-coreos-extensions"."azure-disk".url')
   202  az storage blob copy start --account-name ${CLUSTER_NAME}sa --account-key $ACCOUNT_KEY --destination-blob "rhcos.vhd" --destination-container vhd --source-uri "$VHD_URL"
   203  ```
   204  
   205  To track the progress, you can use:
   206  
   207  ```sh
status="unknown"
while [ "$status" != "success" ]
do
  status=`az storage blob show --container-name vhd --name "rhcos.vhd" --account-name ${CLUSTER_NAME}sa --account-key $ACCOUNT_KEY -o tsv --query properties.copy.status`
  echo $status
  sleep 10  # pause briefly between polls
done
   214  ```
   215  
   216  ### Upload the bootstrap ignition
   217  
   218  Create a blob storage container and upload the generated `bootstrap.ign` file:
   219  
   220  ```sh
   221  az storage container create --name files --account-name ${CLUSTER_NAME}sa
   222  az storage blob upload --account-name ${CLUSTER_NAME}sa --account-key $ACCOUNT_KEY -c "files" -f "bootstrap.ign" -n "bootstrap.ign"
   223  ```
   224  
   225  ## Create the DNS zones
   226  
A few DNS records are required for clusters that use user-provisioned infrastructure. Feel free to choose the DNS strategy that fits your needs best.

In this example we're going to use [Azure's own DNS solution][azure-dns], so we're going to create a new public DNS zone for external (internet) visibility, and
a private DNS zone for internal cluster resolution. Note that the public zone doesn't necessarily need to exist in the same resource group as the
cluster deployment itself and may even already exist in your organization for the desired base domain. If that's the case, you can skip the public DNS
zone creation step, but make sure the install config generated earlier [reflects that scenario](customization.md#cluster-scoped-properties).
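
If you are not sure whether your organization already has a public zone for the base domain, you can look it up first. A quick check, assuming such a zone would live in the resource group exported as `BASE_DOMAIN_RESOURCE_GROUP`:

```sh
az network dns zone show -g $BASE_DOMAIN_RESOURCE_GROUP -n ${BASE_DOMAIN} --query nameServers -o tsv
```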
   233  
   234  Create the new *public* DNS zone in the resource group exported in the `BASE_DOMAIN_RESOURCE_GROUP` environment variable, or just skip this step if you're going
   235  to use one that already exists in your organization:
   236  
   237  ```sh
   238  az network dns zone create -g $BASE_DOMAIN_RESOURCE_GROUP -n ${CLUSTER_NAME}.${BASE_DOMAIN}
   239  ```
   240  
Create the *private* zone in the same resource group as the rest of this deployment:
   242  
   243  ```sh
   244  az network private-dns zone create -g $RESOURCE_GROUP -n ${CLUSTER_NAME}.${BASE_DOMAIN}
   245  ```
   246  
   247  ## Grant access to the identity
   248  
   249  Grant the *Contributor* role to the Azure identity so that the Ingress Operator can create a public IP and its load balancer. You can do that with:
   250  
   251  ```sh
   252  export PRINCIPAL_ID=`az identity show -g $RESOURCE_GROUP -n ${INFRA_ID}-identity --query principalId --out tsv`
   253  export RESOURCE_GROUP_ID=`az group show -g $RESOURCE_GROUP --query id --out tsv`
   254  az role assignment create --assignee "$PRINCIPAL_ID" --role 'Contributor' --scope "$RESOURCE_GROUP_ID"
   255  ```
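
Optionally, verify the assignment afterwards (a quick check using the values exported above):

```sh
az role assignment list --assignee "$PRINCIPAL_ID" --scope "$RESOURCE_GROUP_ID" -o table
```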
   256  
   257  ## Deployment
   258  
The key parts of this UPI deployment are the [Azure Resource Manager][azuretemplates] templates, which are responsible
for deploying most resources. They're provided as a few JSON files named following the `NN_name.json` pattern. In the
next steps we're going to deploy each one of them in order, using [az (Azure CLI)][azurecli] and providing the expected parameters. A quick sanity check of the environment variables used by these deployments is sketched below.
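
Before starting, it can help to confirm that the environment variables used by the deployment steps are all set. A minimal sanity check, assuming a bash-compatible shell:

```sh
# Print each variable exported in the earlier steps; empty values indicate a missed step.
for v in CLUSTER_NAME AZURE_REGION BASE_DOMAIN BASE_DOMAIN_RESOURCE_GROUP INFRA_ID RESOURCE_GROUP ACCOUNT_KEY; do
  echo "$v=${!v}"
done
```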
   262  
   263  ## Deploy the Virtual Network
   264  
In this example we're going to create a Virtual Network and subnets specifically for the OpenShift cluster. You can skip this step
if the cluster is going to live in a VNet that already exists in your organization, or you can edit the `01_vnet.json` file to suit your
own needs (e.g. change the subnet address prefixes in CIDR format).
   268  
   269  Copy the [`01_vnet.json`](../../../upi/azure/01_vnet.json) ARM template locally.
   270  
   271  Create the deployment using the `az` client:
   272  
   273  ```sh
   274  az deployment group create -g $RESOURCE_GROUP \
   275    --template-file "01_vnet.json" \
   276    --parameters baseName="$INFRA_ID"
   277  ```
   278  
   279  Link the VNet just created to the private DNS zone:
   280  
   281  ```sh
   282  az network private-dns link vnet create -g $RESOURCE_GROUP -z ${CLUSTER_NAME}.${BASE_DOMAIN} -n ${INFRA_ID}-network-link -v "${INFRA_ID}-vnet" -e false
   283  ```
   284  
   285  ## Deploy the image
   286  
   287  Copy the [`02_storage.json`](../../../upi/azure/02_storage.json) ARM template locally.
   288  
   289  Create the deployment using the `az` client:
   290  
   291  ```sh
   292  export VHD_BLOB_URL=`az storage blob url --account-name ${CLUSTER_NAME}sa --account-key $ACCOUNT_KEY -c vhd -n "rhcos.vhd" -o tsv`
   293  export STORAGE_ACCOUNT_ID=`az storage account show -g ${RESOURCE_GROUP} --name ${CLUSTER_NAME}sa --query id -o tsv`
   294  export AZ_ARCH=`echo $OCP_ARCH | sed 's/x86_64/x64/;s/aarch64/Arm64/'`
   295  
   296  az deployment group create -g $RESOURCE_GROUP \
   297    --template-file "02_storage.json" \
   298    --parameters vhdBlobURL="$VHD_BLOB_URL" \
   299    --parameters baseName="$INFRA_ID" \
   300    --parameters storageAccount="${CLUSTER_NAME}sa" \
   301    --parameters architecture="$AZ_ARCH"
   302  ```
   303  
   304  ## Deploy the load balancers
   305  
   306  Copy the [`03_infra.json`](../../../upi/azure/03_infra.json) ARM template locally.
   307  
   308  Deploy the load balancers and public IP addresses using the `az` client:
   309  
   310  ```sh
   311  az deployment group create -g $RESOURCE_GROUP \
   312    --template-file "03_infra.json" \
   313    --parameters privateDNSZoneName="${CLUSTER_NAME}.${BASE_DOMAIN}" \
   314    --parameters baseName="$INFRA_ID"
   315  ```
   316  
   317  Create an `api` DNS record in the *public* zone for the API public load balancer. Note that the `BASE_DOMAIN_RESOURCE_GROUP` must point to the resource group where the public DNS zone exists.
   318  
   319  ```sh
   320  export PUBLIC_IP=`az network public-ip list -g $RESOURCE_GROUP --query "[?name=='${INFRA_ID}-master-pip'] | [0].ipAddress" -o tsv`
   321  az network dns record-set a add-record -g $BASE_DOMAIN_RESOURCE_GROUP -z ${CLUSTER_NAME}.${BASE_DOMAIN} -n api -a $PUBLIC_IP --ttl 60
   322  ```
   323  
Or, if you are adding this cluster to an already existing public zone, use this instead:
   325  
   326  ```sh
   327  export PUBLIC_IP=`az network public-ip list -g $RESOURCE_GROUP --query "[?name=='${INFRA_ID}-master-pip'] | [0].ipAddress" -o tsv`
   328  az network dns record-set a add-record -g $BASE_DOMAIN_RESOURCE_GROUP -z ${BASE_DOMAIN} -n api.${CLUSTER_NAME} -a $PUBLIC_IP --ttl 60
   329  ```
   330  
   331  ## Launch the temporary cluster bootstrap
   332  
   333  Copy the [`04_bootstrap.json`](../../../upi/azure/04_bootstrap.json) ARM template locally.
   334  
   335  Create the deployment using the `az` client:
   336  
   337  ```sh
   338  bootstrap_url_expiry=`date -u -d "10 hours" '+%Y-%m-%dT%H:%MZ'`
   339  export BOOTSTRAP_URL=`az storage blob generate-sas -c 'files' -n 'bootstrap.ign' --https-only --full-uri --permissions r --expiry $bootstrap_url_expiry --account-name ${CLUSTER_NAME}sa --account-key $ACCOUNT_KEY -o tsv`
   340  export BOOTSTRAP_IGNITION=`jq -rcnM --arg v "3.1.0" --arg url $BOOTSTRAP_URL '{ignition:{version:$v,config:{replace:{source:$url}}}}' | base64 | tr -d '\n'`
   341  
   342  az deployment group create -g $RESOURCE_GROUP \
   343    --template-file "04_bootstrap.json" \
   344    --parameters bootstrapIgnition="$BOOTSTRAP_IGNITION" \
   345    --parameters baseName="$INFRA_ID"
   346  ```
   347  
   348  ## Launch the permanent control plane
   349  
   350  Copy the [`05_masters.json`](../../../upi/azure/05_masters.json) ARM template locally.
   351  
   352  Create the deployment using the `az` client:
   353  
   354  ```sh
   355  export MASTER_IGNITION=`cat master.ign | base64 | tr -d '\n'`
   356  
   357  az deployment group create -g $RESOURCE_GROUP \
   358    --template-file "05_masters.json" \
   359    --parameters masterIgnition="$MASTER_IGNITION" \
   360    --parameters baseName="$INFRA_ID"
   361  ```
   362  
## Wait for the bootstrap to complete
   364  
   365  Wait until cluster bootstrapping has completed:
   366  
   367  ```console
   368  $ openshift-install wait-for bootstrap-complete --log-level debug
   369  DEBUG OpenShift Installer v4.n
   370  DEBUG Built from commit 6b629f0c847887f22c7a95586e49b0e2434161ca
   371  INFO Waiting up to 30m0s for the Kubernetes API at https://api.cluster.basedomain.com:6443...
   372  DEBUG Still waiting for the Kubernetes API: the server could not find the requested resource
   373  DEBUG Still waiting for the Kubernetes API: the server could not find the requested resource
   374  DEBUG Still waiting for the Kubernetes API: Get https://api.cluster.basedomain.com:6443/version?timeout=32s: dial tcp: connect: connection refused
   375  INFO API v1.14.n up
   376  INFO Waiting up to 30m0s for bootstrapping to complete...
   377  DEBUG Bootstrap status: complete
   378  INFO It is now safe to remove the bootstrap resources
   379  ```
   380  
Once the bootstrapping process is complete, you can deallocate and delete the bootstrap resources:
   382  
   383  ```sh
   384  az network nsg rule delete -g $RESOURCE_GROUP --nsg-name ${INFRA_ID}-nsg --name bootstrap_ssh_in
   385  az vm stop -g $RESOURCE_GROUP --name ${INFRA_ID}-bootstrap
   386  az vm deallocate -g $RESOURCE_GROUP --name ${INFRA_ID}-bootstrap
   387  az vm delete -g $RESOURCE_GROUP --name ${INFRA_ID}-bootstrap --yes
   388  az disk delete -g $RESOURCE_GROUP --name ${INFRA_ID}-bootstrap_OSDisk --no-wait --yes
   389  az network nic delete -g $RESOURCE_GROUP --name ${INFRA_ID}-bootstrap-nic --no-wait
   390  az storage blob delete --account-key $ACCOUNT_KEY --account-name ${CLUSTER_NAME}sa --container-name files --name bootstrap.ign
   391  az network public-ip delete -g $RESOURCE_GROUP --name ${INFRA_ID}-bootstrap-ssh-pip
   392  ```
   393  
   394  ## Access the OpenShift API
   395  
   396  You can now use the `oc` or `kubectl` commands to talk to the OpenShift API. The admin credentials are in `auth/kubeconfig`. For example:
   397  
   398  ```sh
   399  export KUBECONFIG="$PWD/auth/kubeconfig"
   400  oc get nodes
   401  oc get clusteroperator
   402  ```
   403  
   404  Note that only the API will be up at this point. The OpenShift web console will run on the compute nodes.
   405  
   406  ## Launch compute nodes
   407  
You may create compute nodes by launching individual instances discretely or by automated processes outside the cluster (e.g. virtual machine scale sets).
You can also take advantage of the built-in cluster scaling mechanisms and the machine API in OpenShift.
   410  
   411  In this example, we'll manually launch three instances via the provided ARM template. Additional instances can be launched by editing the `06_workers.json` file.
   412  
   413  Copy the [`06_workers.json`](../../../upi/azure/06_workers.json) ARM template locally.
   414  
   415  Create the deployment using the `az` client:
   416  
   417  ```sh
   418  export WORKER_IGNITION=`cat worker.ign | base64 | tr -d '\n'`
   419  
   420  az deployment group create -g $RESOURCE_GROUP \
   421    --template-file "06_workers.json" \
   422    --parameters workerIgnition="$WORKER_IGNITION" \
   423    --parameters baseName="$INFRA_ID"
   424  ```
   425  
   426  ### Approve the worker CSRs
   427  
   428  Even after they've booted up, the workers will not show up in `oc get nodes`.
   429  
Instead, they will create certificate signing requests (CSRs), which need to be approved. Eventually, you should see `Pending` entries looking like the ones below.
You can use `watch oc get csr -A` to watch until the pending CSRs appear.
   432  
   433  ```console
   434  $ oc get csr -A
   435  NAME        AGE    REQUESTOR                                                                   CONDITION
   436  csr-8bppf   2m8s   system:serviceaccount:openshift-machine-config-operator:node-bootstrapper   Pending
   437  csr-dj2w4   112s   system:serviceaccount:openshift-machine-config-operator:node-bootstrapper   Pending
   438  csr-ph8s8   11s    system:serviceaccount:openshift-machine-config-operator:node-bootstrapper   Pending
   439  csr-q7f6q   19m    system:node:master01                                                        Approved,Issued
   440  csr-5ztvt   19m    system:node:master02                                                        Approved,Issued
   441  csr-576l2   19m    system:node:master03                                                        Approved,Issued
   442  csr-htmtm   19m    system:serviceaccount:openshift-machine-config-operator:node-bootstrapper   Approved,Issued
   443  csr-wpvxq   19m    system:serviceaccount:openshift-machine-config-operator:node-bootstrapper   Approved,Issued
   444  csr-xpp49   19m    system:serviceaccount:openshift-machine-config-operator:node-bootstrapper   Approved,Issued
   445  ```
   446  
You should inspect each pending CSR with the `oc describe csr <name>` command and verify that it comes from a node you recognize. If it does, the CSR can be approved:
   448  
   449  ```console
   450  $ oc adm certificate approve csr-8bppf csr-dj2w4 csr-ph8s8
   451  certificatesigningrequest.certificates.k8s.io/csr-8bppf approved
   452  certificatesigningrequest.certificates.k8s.io/csr-dj2w4 approved
   453  certificatesigningrequest.certificates.k8s.io/csr-ph8s8 approved
   454  ```
   455  
Approved nodes should now show up in `oc get nodes`, but they will be in the `NotReady` state. They will create a second CSR, which must also be reviewed and approved.
Repeat the process of inspecting the pending CSRs and approving them (see the bulk-approval sketch below).
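
If there are many pending CSRs and you have confirmed that they come from nodes you expect, you can approve them in bulk. This is a convenience sketch: it selects CSRs that have no status yet (i.e. are still pending) and pipes their names to the approve command:

```sh
oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve
```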
   458  
Once all CSRs are approved, the nodes should switch to `Ready` and pods will be scheduled on them.
   460  
   461  ```console
   462  $ oc get nodes
   463  NAME       STATUS   ROLES    AGE     VERSION
   464  master01   Ready    master   23m     v1.14.6+cebabbf7a
   465  master02   Ready    master   23m     v1.14.6+cebabbf7a
   466  master03   Ready    master   23m     v1.14.6+cebabbf7a
   467  node01     Ready    worker   2m30s   v1.14.6+cebabbf7a
   468  node02     Ready    worker   2m35s   v1.14.6+cebabbf7a
   469  node03     Ready    worker   2m34s   v1.14.6+cebabbf7a
   470  ```
   471  
   472  ### Add the Ingress DNS Records
   473  
Create DNS records in the *public* and *private* zones pointing at the ingress load balancer. Use A, CNAME, or other record types as you see fit.
You can create either a wildcard record, `*.apps.{baseDomain}.`, or [specific records](#specific-route-records) for every route (more on the specific records below).
   476  
   477  First, wait for the ingress default router to create a load balancer and populate the `EXTERNAL-IP` column:
   478  
   479  ```console
   480  $ oc -n openshift-ingress get service router-default
   481  NAME             TYPE           CLUSTER-IP      EXTERNAL-IP     PORT(S)                      AGE
   482  router-default   LoadBalancer   172.30.20.10   35.130.120.110   80:32288/TCP,443:31215/TCP   20
   483  ```
   484  
   485  Add a `*.apps` record to the *public* DNS zone:
   486  
   487  ```sh
   488  export PUBLIC_IP_ROUTER=`oc -n openshift-ingress get service router-default --no-headers | awk '{print $4}'`
   489  az network dns record-set a add-record -g $BASE_DOMAIN_RESOURCE_GROUP -z ${CLUSTER_NAME}.${BASE_DOMAIN} -n *.apps -a $PUBLIC_IP_ROUTER --ttl 300
   490  ```
   491  
Or, if you are adding this cluster to an already existing public zone, use this instead:
   493  
   494  ```sh
   495  export PUBLIC_IP_ROUTER=`oc -n openshift-ingress get service router-default --no-headers | awk '{print $4}'`
   496  az network dns record-set a add-record -g $BASE_DOMAIN_RESOURCE_GROUP -z ${BASE_DOMAIN} -n *.apps.${CLUSTER_NAME} -a $PUBLIC_IP_ROUTER --ttl 300
   497  ```
   498  
   499  Finally, add a `*.apps` record to the *private* DNS zone:
   500  
   501  ```sh
   502  export PUBLIC_IP_ROUTER=`oc -n openshift-ingress get service router-default --no-headers | awk '{print $4}'`
   503  az network private-dns record-set a create -g $RESOURCE_GROUP -z ${CLUSTER_NAME}.${BASE_DOMAIN} -n *.apps --ttl 300
   504  az network private-dns record-set a add-record -g $RESOURCE_GROUP -z ${CLUSTER_NAME}.${BASE_DOMAIN} -n *.apps -a $PUBLIC_IP_ROUTER
   505  ```
   506  
   507  #### Specific route records
   508  
   509  If you prefer to add explicit domains instead of using a wildcard, you can create entries for each of the cluster's current routes. Use the command below to check what they are:
   510  
   511  ```console
   512  $ oc get --all-namespaces -o jsonpath='{range .items[*]}{range .status.ingress[*]}{.host}{"\n"}{end}{end}' routes
   513  oauth-openshift.apps.cluster.basedomain.com
   514  console-openshift-console.apps.cluster.basedomain.com
   515  downloads-openshift-console.apps.cluster.basedomain.com
   516  alertmanager-main-openshift-monitoring.apps.cluster.basedomain.com
   517  grafana-openshift-monitoring.apps.cluster.basedomain.com
   518  prometheus-k8s-openshift-monitoring.apps.cluster.basedomain.com
   519  ```
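
To create the specific records, you could loop over the hosts returned above and add an entry in both the public and the private zone. This is only a sketch, reusing the `PUBLIC_IP_ROUTER` value exported earlier and assuming the cluster-specific public zone; adjust the zone and record names if you added this cluster to an already existing public zone:

```sh
# Sketch: create one A record per route host in both zones.
for host in $(oc get --all-namespaces -o jsonpath='{range .items[*]}{range .status.ingress[*]}{.host}{"\n"}{end}{end}' routes); do
  name="${host%.${CLUSTER_NAME}.${BASE_DOMAIN}}"
  az network dns record-set a add-record -g $BASE_DOMAIN_RESOURCE_GROUP -z ${CLUSTER_NAME}.${BASE_DOMAIN} -n "$name" -a $PUBLIC_IP_ROUTER --ttl 300
  az network private-dns record-set a create -g $RESOURCE_GROUP -z ${CLUSTER_NAME}.${BASE_DOMAIN} -n "$name" --ttl 300
  az network private-dns record-set a add-record -g $RESOURCE_GROUP -z ${CLUSTER_NAME}.${BASE_DOMAIN} -n "$name" -a $PUBLIC_IP_ROUTER
done
```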
   520  
## Wait for the installation to complete

Wait until the cluster is ready:
   524  
   525  ```console
   526  $ openshift-install wait-for install-complete --log-level debug
   527  DEBUG Built from commit 6b629f0c847887f22c7a95586e49b0e2434161ca
   528  INFO Waiting up to 30m0s for the cluster at https://api.cluster.basedomain.com:6443 to initialize...
   529  DEBUG Still waiting for the cluster to initialize: Working towards 4.2.12: 99% complete, waiting on authentication, console, monitoring
   530  DEBUG Still waiting for the cluster to initialize: Working towards 4.2.12: 100% complete
   531  DEBUG Cluster is initialized
   532  INFO Waiting up to 10m0s for the openshift-console route to be created...
   533  DEBUG Route found in openshift-console namespace: console
   534  DEBUG Route found in openshift-console namespace: downloads
   535  DEBUG OpenShift console route is created
   536  INFO Install complete!
   537  INFO To access the cluster as the system:admin user when using 'oc', run
   538      export KUBECONFIG=${PWD}/auth/kubeconfig
   539  INFO Access the OpenShift web-console here: https://console-openshift-console.apps.cluster.basedomain.com
   540  INFO Login to the console with user: kubeadmin, password: REDACTED
   541  ```
   542  
   543  [azuretemplates]: https://docs.microsoft.com/en-us/azure/azure-resource-manager/template-deployment-overview
   544  [openshiftinstall]: https://github.com/openshift/installer
   545  [azurecli]: https://docs.microsoft.com/en-us/cli/azure/
   546  [jqjson]: https://stedolan.github.io/jq/
   547  [yqyaml]: https://yq.readthedocs.io/en/latest/
   548  [ingress-operator]: https://github.com/openshift/cluster-ingress-operator
   549  [machine-api-operator]: https://github.com/openshift/machine-api-operator
   550  [azure-identity]: https://docs.microsoft.com/en-us/azure/architecture/framework/security/identity
   551  [azure-resource-group]: https://docs.microsoft.com/en-us/azure/azure-resource-manager/management/overview#resource-groups
   552  [azure-dns]: https://docs.microsoft.com/en-us/azure/dns/dns-overview
   553  [kubernetes-service-load-balancers-exclude-masters]: https://github.com/kubernetes/kubernetes/issues/65618