     1  # Installing OpenShift on OpenStack User-Provisioned Infrastructure
     2  
     3  The User-Provisioned Infrastructure (UPI) process installs OpenShift in stages, providing opportunities for modifications or integrating with existing infrastructure.
     4  
     5  It contrasts with the fully-automated Installer-Provisioned Infrastructure (IPI) which creates everything in one go.
     6  
     7  With UPI, creating the cloud (OpenStack) resources (e.g. Nova servers, Neutron ports, security groups) is the responsibility of the person deploying OpenShift.
     8  
     9  The installer is still used to generate the ignition files and monitor the installation process.
    10  
     11  This provides greater flexibility at the cost of a more explicit and interactive process.
    12  
    13  Below is a step-by-step guide to a UPI installation that mimics an automated IPI installation; prerequisites and steps described below should be adapted to the constraints of the target infrastructure.
    14  
    15  Please be aware of the [Known Issues](known-issues.md#known-issues-specific-to-user-provisioned-installations)
    16  of this method of installation.
    17  
    18  ## Table of Contents
    19  
    20  
    21  - [Installing OpenShift on OpenStack User-Provisioned Infrastructure](#installing-openshift-on-openstack-user-provisioned-infrastructure)
    22    - [Table of Contents](#table-of-contents)
    23    - [Prerequisites](#prerequisites)
    24    - [Install Ansible](#install-ansible)
    25      - [RHEL](#rhel)
    26      - [Fedora](#fedora)
    27    - [OpenShift Configuration Directory](#openshift-configuration-directory)
    28    - [Red Hat Enterprise Linux CoreOS (RHCOS)](#red-hat-enterprise-linux-coreos-rhcos)
    29    - [API and Ingress Floating IP Addresses](#api-and-ingress-floating-ip-addresses)
    30    - [Network identifier](#network-identifier)
    31    - [Create network, API and ingress ports](#create-network-api-and-ingress-ports)
    32    - [Install Config](#install-config)
    33      - [Configure the machineNetwork.CIDR apiVIP and ingressVIP](#configure-the-machinenetworkcidr-apivip-and-ingressvip)
    34      - [Empty Compute Pools](#empty-compute-pools)
    35    - [Edit Manifests](#edit-manifests)
    36      - [Remove Machines and MachineSets](#remove-machines-and-machinesets)
    37      - [Set control-plane nodes to desired schedulable state](#set-control-plane-nodes-to-desired-schedulable-state)
    38    - [Ignition Config](#ignition-config)
    39      - [Infra ID](#infra-id)
    40      - [Bootstrap Ignition](#bootstrap-ignition)
    41        - [Edit the Bootstrap Ignition](#edit-the-bootstrap-ignition)
     42        - [Upload the Bootstrap Ignition](#upload-the-bootstrap-ignition)
    43          - [Example: Glance image service](#example-glance-image-service)
    44      - [Create the Bootstrap Ignition Shim](#create-the-bootstrap-ignition-shim)
    45        - [Ignition file served by server using self-signed certificate](#ignition-file-served-by-server-using-self-signed-certificate)
    46      - [Master Ignition](#master-ignition)
    47    - [Network Topology](#network-topology)
    48      - [Security Groups](#security-groups)
    49      - [Update Network, Subnet, Router and ports](#update-network-subnet-router-and-ports)
    50      - [Subnet DNS (optional)](#subnet-dns-optional)
    51    - [Bootstrap](#bootstrap)
    52    - [Control Plane](#control-plane)
    53      - [Wait for the Control Plane to Complete](#wait-for-the-control-plane-to-complete)
    54      - [Access the OpenShift API](#access-the-openshift-api)
    55      - [Delete the Bootstrap Resources](#delete-the-bootstrap-resources)
    56    - [Compute Nodes](#compute-nodes)
    57      - [Approve the worker CSRs](#approve-the-worker-csrs)
    58      - [Wait for the OpenShift Installation to Complete](#wait-for-the-openshift-installation-to-complete)
    59      - [Compute Nodes with SR-IOV NICs](#compute-nodes-with-sr-iov-nics)
    60    - [Destroy the OpenShift Cluster](#destroy-the-openshift-cluster)
    61  
    62  ## Prerequisites
    63  
    64  The `inventory.yaml` file contains variables which should be reviewed and adjusted if needed.
    65  
    66  > **Note**
     67  > Some of the default pods (e.g. the `openshift-router`) require at least two nodes, so the effective minimum is two compute nodes.
    68  
    69  The requirements for UPI are broadly similar to the [ones for OpenStack IPI][ipi-reqs]:
    70  
    71  [ipi-reqs]: ./README.md#openstack-requirements
    72  
    73  - OpenStack account with `clouds.yaml`
    74    - input in the `openshift-install` wizard
    75  - Nova flavors
    76    - inventory: `os_flavor_master` and `os_flavor_worker`
     77  - An external network for external connectivity. Required if any of the floating IPs is set in the inventory.
    78    - inventory: `os_external_network`
    79  - The `openshift-install` binary
    80  - A subnet range for the Nova servers / OpenShift Nodes, that does not conflict with your existing network
    81    - inventory: `os_subnet_range`
     82  - The cluster name you want to use
    83    - input in the `openshift-install` wizard
    84  - A base domain
    85    - input in the `openshift-install` wizard
    86  - OpenShift Pull Secret
    87    - input in the `openshift-install` wizard
    88  - A DNS zone you can configure
    89    - it must be the resolver for the base domain, for the installer and for the end-user machines
    90    - it will host two records: for API and apps access
    91  
    92  ## Install Ansible
    93  
    94  This repository contains [Ansible playbooks][ansible-upi] to deploy OpenShift on OpenStack.
    95  
    96  They can be downloaded from Github with this script:
    97  
    98  ```sh
    99  RELEASE="release-4.14"; xargs -n 1 curl -O <<< "
   100          https://raw.githubusercontent.com/openshift/installer/${RELEASE}/upi/openstack/bootstrap.yaml
   101          https://raw.githubusercontent.com/openshift/installer/${RELEASE}/upi/openstack/common.yaml
   102          https://raw.githubusercontent.com/openshift/installer/${RELEASE}/upi/openstack/compute-nodes.yaml
   103          https://raw.githubusercontent.com/openshift/installer/${RELEASE}/upi/openstack/control-plane.yaml
   104          https://raw.githubusercontent.com/openshift/installer/${RELEASE}/upi/openstack/down-bootstrap.yaml
   105          https://raw.githubusercontent.com/openshift/installer/${RELEASE}/upi/openstack/down-compute-nodes.yaml
   106          https://raw.githubusercontent.com/openshift/installer/${RELEASE}/upi/openstack/down-control-plane.yaml
   107          https://raw.githubusercontent.com/openshift/installer/${RELEASE}/upi/openstack/down-network.yaml
   108          https://raw.githubusercontent.com/openshift/installer/${RELEASE}/upi/openstack/down-security-groups.yaml
   109          https://raw.githubusercontent.com/openshift/installer/${RELEASE}/upi/openstack/down-containers.yaml
   110          https://raw.githubusercontent.com/openshift/installer/${RELEASE}/upi/openstack/inventory.yaml
   111          https://raw.githubusercontent.com/openshift/installer/${RELEASE}/upi/openstack/network.yaml
    112          https://raw.githubusercontent.com/openshift/installer/${RELEASE}/upi/openstack/security-groups.yaml
                 https://raw.githubusercontent.com/openshift/installer/${RELEASE}/upi/openstack/update-network-resources.yaml"
   113  ```
   114  
   115  For installing a different version, change the branch (`release-4.14`)
   116  accordingly (e.g. `release-4.12`).
   117  
   118  **Requirements:**
   119  
    120  * Python (>=3.8 for ansible-core, which is what CI currently tests; older versions are only sufficient with Ansible 2.9)
    121  * Ansible (>=2.10, which is what CI currently tests; 2.9 is approaching end of life)
   122  * Python modules required in the playbooks. Namely:
   123    * openstackclient
   124    * openstacksdk
   125    * netaddr
   126  * Ansible collections required in the playbooks. Namely:
   127    * openstack.cloud
   128    * ansible.utils
   129    * community.general
   130  
   131  ### RHEL
   132  
   133  From a RHEL box, make sure that the repository origins are all set:
   134  
   135  ```sh
   136  sudo subscription-manager register # if not done already
   137  sudo subscription-manager attach --pool=$YOUR_POOLID # if not done already
    138  sudo subscription-manager repos --disable='*' # if not done already
    139  sudo subscription-manager repos \
    140    --enable=rhel-8-for-x86_64-baseos-rpms \
    141    --enable=rhel-8-for-x86_64-appstream-rpms # change the RHEL version in these repo IDs if needed
   142  ```
   143  
   144  Then install the package:
   145  ```sh
   146  sudo dnf install ansible-core
   147  ```
   148  
   149  Make sure that `python` points to Python3:
   150  ```sh
   151  sudo alternatives --set python /usr/bin/python3
   152  ```
   153  
    154  To avoid missing packages or version mismatches, we use pip to install the dependencies:
   155  ```sh
   156  python3 -m pip install --upgrade pip
   157  python3 -m pip install yq openstackclient openstacksdk netaddr
   158  ```
   159  
   160  ### Fedora
   161  
   162  This command installs all required dependencies on Fedora:
   163  
   164  ```sh
   165  sudo dnf install python3-openstackclient ansible-core python3-openstacksdk python3-netaddr
   166  ```
   167  
   168  [ansible-upi]: ../../../upi/openstack "Ansible Playbooks for Openstack UPI"
   169  
   170  ## Ansible Collections
   171  
   172  The Ansible Collections are not packaged (yet) on recent versions of OSP and RHEL when `ansible-core` is
   173  installed instead of Ansible 2.9. So the collections need to be installed from `ansible-galaxy`.
   174  
   175  ```sh
   176  ansible-galaxy collection install "openstack.cloud:<2.0.0" ansible.utils community.general
   177  ```
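
To confirm that the collections and Python modules needed by the playbooks are in place, you can run a quick, optional sanity check:

```sh
# list the installed Ansible collections
$ ansible-galaxy collection list
# openstacksdk is imported as "openstack"
$ python3 -c 'import openstack, netaddr'
```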
   178  
   179  ## OpenShift Configuration Directory
   180  
   181  All the configuration files, logs and installation state are kept in a single directory:
   182  
   183  ```sh
   184  $ mkdir -p openstack-upi
   185  $ cd openstack-upi
   186  ```
   187  
   188  ## Red Hat Enterprise Linux CoreOS (RHCOS)
   189  
   190  A proper [RHCOS][rhcos] image in the OpenStack cluster or project is required for successful installation.
   191  
    192  Get the RHCOS image for your OpenShift version [here][rhcos-image]. Download the image with the highest version that is less than or equal to the OpenShift version you are installing; if an image matching your OpenShift version exactly is available, use it.
   193  
    194  The OpenStack RHCOS image corresponding to a given `openshift-install` binary can be extracted from the binary itself. If the `jq` tool is available, extraction can be done programmatically:
   195  <!--- e2e-openstack-upi: INCLUDE START --->
   196  ```sh
   197  $ curl -sSL --remote-name "$(openshift-install coreos print-stream-json | jq --raw-output '.architectures.x86_64.artifacts.openstack.formats."qcow2.gz".disk.location')"
   198  $ export RHCOSVERSION="$(openshift-install coreos print-stream-json | jq --raw-output '.architectures.x86_64.artifacts.openstack.release')"
   199  ```
   200  <!--- e2e-openstack-upi: INCLUDE END --->
   201  
   202  The OpenStack QCOW2 image is only available in a compressed format with the `.gz` extension; it must be decompressed locally before uploading it to Glance. The following command will unpack the image into `rhcos-${RHCOSVERSION}-openstack.x86_64.qcow2`:
   203  
   204  <!--- e2e-openstack-upi: INCLUDE START --->
   205  ```sh
   206  $ gunzip rhcos-${RHCOSVERSION}-openstack.x86_64.qcow2.gz
   207  ```
   208  <!--- e2e-openstack-upi: INCLUDE END --->
   209  
    210  The next step is to create a Glance image.
   211  
   212  > **Note**
   213  > This document will use `rhcos-${CLUSTER_NAME}` as the Glance image name. The name of the Glance image must be the one configured as `os_image_rhcos` in `inventory.yaml`.
   214  
   215  <!--- e2e-openstack-upi: INCLUDE START --->
   216  ```sh
   217  $ openstack image create --container-format=bare --disk-format=qcow2 --file rhcos-${RHCOSVERSION}-openstack.x86_64.qcow2 "rhcos-${CLUSTER_NAME}"
   218  ```
   219  <!--- e2e-openstack-upi: INCLUDE END --->
   220  
   221  > **Note**
   222  > Depending on your OpenStack environment you can upload the RHCOS image as `raw` or `qcow2`.
   223  > See [Disk and container formats for images](https://docs.openstack.org/image-guide/introduction.html#disk-and-container-formats-for-images) for more information.
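
If you prefer to upload the image in `raw` format instead (for example because your Glance back end handles raw images better), you could convert it first; a minimal sketch, assuming `qemu-img` is installed locally:

```sh
# convert the QCOW2 image to raw
$ qemu-img convert -f qcow2 -O raw rhcos-${RHCOSVERSION}-openstack.x86_64.qcow2 rhcos-${RHCOSVERSION}-openstack.x86_64.raw
# upload it with the matching disk format
$ openstack image create --container-format=bare --disk-format=raw --file rhcos-${RHCOSVERSION}-openstack.x86_64.raw "rhcos-${CLUSTER_NAME}"
```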
   224  
   225  [qemu_guest_agent]: https://docs.openstack.org/nova/latest/admin/configuration/hypervisor-kvm.html
    226  If the RHCOS image being used supports it, the [KVM QEMU Guest Agent][qemu_guest_agent] may be used to enable optional
   227  access between OpenStack KVM hypervisors and the cluster nodes.
   228  
   229  To enable this feature, you must add the `hw_qemu_guest_agent=yes` property to the image:
   230  
    231  ```sh
   232  $ openstack image set --property hw_qemu_guest_agent=yes "rhcos-${CLUSTER_NAME}"
   233  ```
   234  
   235  Finally validate that the image was successfully created:
   236  
   237  ```sh
   238  $ openstack image show "rhcos-${CLUSTER_NAME}"
   239  ```
   240  
   241  [rhcos]: https://www.openshift.com/learn/coreos/
   242  [rhcos-image]: https://mirror.openshift.com/pub/openshift-v4/dependencies/rhcos/
   243  
   244  ## API and Ingress Floating IP Addresses
   245  
   246  If the variables `os_api_fip`, `os_ingress_fip` and `os_bootstrap_fip` are found in `inventory.yaml`, the corresponding floating IPs will be attached to the API load balancer, to the worker nodes load balancer and to the temporary machine used for the install process, respectively. Note that `os_external_network` is a requirement for those.
   247  
   248  > **Note**
   249  > Throughout this document, we will use `203.0.113.23` as the public IP address
   250  > for the OpenShift API endpoint and `203.0.113.19` as the public IP for the
   251  > ingress (`*.apps`) endpoint. `203.0.113.20` will be the public IP used for
   252  > the bootstrap machine.
   253  
   254  ```sh
   255  $ openstack floating ip create --description "OpenShift API" <external>
   256  => 203.0.113.23
   257  $ openstack floating ip create --description "OpenShift Ingress" <external>
   258  => 203.0.113.19
   259  $ openstack floating ip create --description "bootstrap machine" <external>
   260  => 203.0.113.20
   261  ```
   262  
   263  The OpenShift API (for the OpenShift administrators and app developers) will be at `api.<cluster name>.<cluster domain>` and the Ingress (for the apps' end users) at `*.apps.<cluster name>.<cluster domain>`.
   264  
   265  Create these two records in your DNS zone:
   266  
   267  ```plaintext
   268  api.openshift.example.com.    A 203.0.113.23
   269  *.apps.openshift.example.com. A 203.0.113.19
   270  ```
   271  
    272  They will need to be resolvable by your developers and end users, as well as by the OpenShift installer process later in this guide.
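
Once the records are created, you can optionally check that they resolve from the machine where you will run the installer (using the example addresses of this guide):

```sh
$ dig +short api.openshift.example.com
203.0.113.23
$ dig +short test.apps.openshift.example.com
203.0.113.19
```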
   273  
   274  ## Network identifier
   275  
    276  Resources like the network, subnet (or subnets), router, and the API and ingress ports need unique names so they do not interfere with other deployments running on the same OpenStack cloud.
    277  Keep in mind that these OpenStack resources follow a different naming scheme than all the other resources created in the next steps, although they will be tagged with the infraID later on.
    278  Let's create the `OS_NET_ID` environment variable and a `netid.json` file, which the Ansible playbooks will use later on.
   279  
   280  <!--- e2e-openstack-upi: INCLUDE START --->
   281  ```sh
   282  $ export OS_NET_ID="openshift-$(dd if=/dev/urandom count=4 bs=1 2>/dev/null |hexdump -e '"%02x"')"
   283  $ echo "{\"os_net_id\": \"$OS_NET_ID\"}" | tee netid.json
   284  ```
   285  <!--- e2e-openstack-upi: INCLUDE END --->
   286  
   287  Make sure your shell session has the `$OS_NET_ID` environment variable set when you run the commands later in this document.
   288  
    289  Note that this identifier has nothing in common with the OpenShift `infraID` defined later on.
   290  
   291  ## Create network, API and ingress ports
   292  
    293  Please note that the values of the API and Ingress VIP fields in `inventory.yaml` will be overwritten with the respective addresses assigned to the ports. Run the following playbook to create the necessary resources:
   294  
   295  <!--- e2e-openstack-upi: INCLUDE START --->
   296  ```sh
   297  $ ansible-playbook -i inventory.yaml network.yaml
   298  ```
   299  <!--- e2e-openstack-upi: INCLUDE END --->
   300  
   301  > **Note**
   302  > These OpenStack resources will be deleted by the `down-network.yaml` playbook.
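
To see what the playbook created, you can list the resources whose names contain the network identifier (the exact names depend on the templates in `common.yaml`):

```sh
$ openstack network list -f value -c Name | grep "$OS_NET_ID"
$ openstack port list -f value -c Name | grep "$OS_NET_ID"
```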
   303  
   304  ## Install Config
   305  
   306  Run the `create install-config` subcommand and fill in the desired entries:
   307  
   308  ```sh
   309  $ openshift-install create install-config
   310  ? SSH Public Key </home/user/.ssh/id_rsa.pub>
   311  ? Platform <openstack>
   312  ? Cloud <openstack>
   313  ? ExternalNetwork <external>
   314  ? APIFloatingIPAddress <203.0.113.23>
   315  ? FlavorName <m1.xlarge>
   316  ? Base Domain <example.com>
   317  ? Cluster Name <openshift>
   318  ```
   319  
   320  Most of these are self-explanatory. `Cloud` is the cloud name in your `clouds.yaml` i.e. what's set as your `OS_CLOUD` environment variable.
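
For example, if your `clouds.yaml` defines a cloud named `openstack` (substitute the name your file actually uses), you can make the `openstack` CLI commands in this guide target it by exporting:

```sh
$ export OS_CLOUD=openstack
```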
   321  
    322  *Cluster Name* and *Base Domain* together form the fully qualified domain name which the API interface will expect to be called, and the default domain under which OpenShift will expose newly created applications.
   323  
   324  Given the values above, the OpenShift API will be available at:
   325  
   326  ```plaintext
   327  https://api.openshift.example.com:6443/
   328  ```
   329  
   330  Afterwards, you should have `install-config.yaml` in your current directory:
   331  
   332  ```sh
   333  $ tree
   334  .
   335  └── install-config.yaml
   336  ```
   337  
   338  ### Configure the machineNetwork.CIDR apiVIP and ingressVIP
   339  
   340  The `machineNetwork` represents the OpenStack network which will be used to connect all the OpenShift cluster nodes.
   341  The `machineNetwork.CIDR` defines the IP range, in CIDR notation, from which the installer will choose what IP addresses
   342  to assign the nodes.  The `apiVIPs` and `ingressVIPs` are the IP addresses the installer will assign to the cluster API and
   343  ingress VIPs, respectively.
   344  
    345  In the previous step, the Ansible playbook added a default value for
    346  `machineNetwork.CIDR` and assigned the IP addresses selected by Neutron out of
    347  `machineNetwork.CIDR` for `apiVIPs` and `ingressVIPs` to the appropriate inventory
    348  fields: `os_apiVIP` and `os_ingressVIP` for a single-stack installation, plus
    349  `os_apiVIP6` and `os_ingressVIP6` for dual-stack.
   350  
    351  The following script fills into `install-config.yaml` the values for `machineNetwork`, `apiVIPs`, `ingressVIPs` and `controlPlanePort`
    352  for both single-stack and dual-stack, plus `networkType`, `clusterNetwork` and `serviceNetwork` for dual-stack only, using the `inventory.yaml`
    353  values:
   354  
   355  <!--- e2e-openstack-upi: INCLUDE START --->
   356  ```sh
   357  $ python -c 'import os
   358  import sys
   359  import yaml
   360  import re
   361  re_os_net_id = re.compile(r"{{\s*os_net_id\s*}}")
   362  os_net_id = os.getenv("OS_NET_ID")
   363  path = "common.yaml"
   364  facts = None
   365  for _dict in yaml.safe_load(open(path))[0]["tasks"]:
   366      if "os_network" in _dict.get("set_fact", {}):
   367          facts = _dict["set_fact"]
   368          break
   369  if not facts:
   370      print("Cannot find `os_network` in common.yaml file. Make sure OpenStack resource names are defined in one of the tasks.")
   371      sys.exit(1)
   372  os_network = re_os_net_id.sub(os_net_id, facts["os_network"])
   373  os_subnet = re_os_net_id.sub(os_net_id, facts["os_subnet"])
   374  path = "install-config.yaml"
   375  data = yaml.safe_load(open(path))
   376  inventory = yaml.safe_load(open("inventory.yaml"))["all"]["hosts"]["localhost"]
   377  machine_net = [{"cidr": inventory["os_subnet_range"]}]
   378  api_vips = [inventory["os_apiVIP"]]
   379  ingress_vips = [inventory["os_ingressVIP"]]
   380  ctrl_plane_port = {"network": {"name": os_network}, "fixedIPs": [{"subnet": {"name": os_subnet}}]}
   381  if inventory.get("os_subnet6_range"):
   382      os_subnet6 = re_os_net_id.sub(os_net_id, facts["os_subnet6"])
   383      machine_net.append({"cidr": inventory["os_subnet6_range"]})
   384      api_vips.append(inventory["os_apiVIP6"])
   385      ingress_vips.append(inventory["os_ingressVIP6"])
   386      data["networking"]["networkType"] = "OVNKubernetes"
   387      data["networking"]["clusterNetwork"].append({"cidr": inventory["cluster_network6_cidr"], "hostPrefix": inventory["cluster_network6_prefix"]})
   388      data["networking"]["serviceNetwork"].append(inventory["service_subnet6_range"])
   389      ctrl_plane_port["fixedIPs"].append({"subnet": {"name": os_subnet6}})
   390  data["networking"]["machineNetwork"] = machine_net
   391  data["platform"]["openstack"]["apiVIPs"] = api_vips
   392  data["platform"]["openstack"]["ingressVIPs"] = ingress_vips
   393  data["platform"]["openstack"]["controlPlanePort"] = ctrl_plane_port
   394  del data["platform"]["openstack"]["externalDNS"]
   395  open(path, "w").write(yaml.dump(data, default_flow_style=False))'
   396  ```
   397  <!--- e2e-openstack-upi: INCLUDE END --->
   398  
   399  > **Note**
   400  > All the scripts in this guide work only with Python 3.
   401  > You can also choose to edit the `install-config.yaml` file by hand.
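
For reference, if you edit `install-config.yaml` by hand, the OpenStack-specific part of a single-stack configuration should end up with roughly the shape sketched below; the bracketed values are placeholders for your `inventory.yaml` values and the resource names created by `network.yaml`:

```yaml
networking:
  machineNetwork:
  - cidr: <os_subnet_range>
platform:
  openstack:
    apiVIPs:
    - <os_apiVIP>
    ingressVIPs:
    - <os_ingressVIP>
    controlPlanePort:
      fixedIPs:
      - subnet:
          name: <subnet created by network.yaml>
      network:
        name: <network created by network.yaml>
```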
   402  
   403  ### Empty Compute Pools
   404  
   405  UPI will not rely on the Machine API for node creation. Instead, we will create the compute nodes ("workers") manually.
   406  
   407  We will set their count to `0` in `install-config.yaml`. Look under `compute` -> (first entry) -> `replicas`.
   408  
   409  This command will do it for you:
   410  <!--- e2e-openstack-upi: INCLUDE START --->
   411  ```sh
   412  $ python -c '
   413  import yaml
   414  path = "install-config.yaml"
   415  data = yaml.safe_load(open(path))
   416  data["compute"][0]["replicas"] = 0
   417  open(path, "w").write(yaml.dump(data, default_flow_style=False))'
   418  ```
   419  <!--- e2e-openstack-upi: INCLUDE END --->
   420  
   421  ### Modify NetworkType (Required for OpenShift SDN)
   422  
    423  By default, the `networkType` is set to `OVNKubernetes` in the `install-config.yaml`.
   424  
    425  If an installation with OpenShift SDN is desired, you must modify the `networkType` field. Note that dual-stack only supports the `OVNKubernetes` network type.
   426  
   427  This command will do it for you:
   428  
   429  ```sh
   430  $ python -c '
   431  import yaml
   432  path = "install-config.yaml"
   433  data = yaml.safe_load(open(path))
   434  data["networking"]["networkType"] = "OpenShiftSDN"
   435  open(path, "w").write(yaml.dump(data, default_flow_style=False))'
   436  ```
   437  
   438  ## Edit Manifests
   439  
   440  We are not relying on the Machine API so we can delete the control plane Machines and compute MachineSets from the manifests.
   441  
   442  **WARNING**: The `install-config.yaml` file will be automatically deleted in the next section. If you want to keep it around, copy it elsewhere now!
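
For example, to keep a copy outside the assets directory:

```sh
$ cp install-config.yaml ../install-config.yaml.backup
```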
   443  
   444  First, let's turn the install config into manifests:
   445  <!--- e2e-openstack-upi: INCLUDE START --->
   446  ```sh
   447  $ openshift-install create manifests
   448  ```
   449  <!--- e2e-openstack-upi: INCLUDE END --->
   450  
   451  ```sh
   452  $ tree
   453  .
   454  ├── manifests
   455  │   ├── 04-openshift-machine-config-operator.yaml
   456  │   ├── cloud-provider-config.yaml
   457  │   ├── cluster-config.yaml
   458  │   ├── cluster-dns-02-config.yml
   459  │   ├── cluster-infrastructure-02-config.yml
   460  │   ├── cluster-ingress-02-config.yml
   461  │   ├── cluster-network-01-crd.yml
   462  │   ├── cluster-network-02-config.yml
   463  │   ├── cluster-proxy-01-config.yaml
   464  │   ├── cluster-scheduler-02-config.yml
   465  │   ├── cvo-overrides.yaml
   466  │   ├── kube-cloud-config.yaml
   467  │   ├── kube-system-configmap-root-ca.yaml
   468  │   ├── machine-config-server-tls-secret.yaml
   469  │   └── openshift-config-secret-pull-secret.yaml
   470  └── openshift
   471      ├── 99_cloud-creds-secret.yaml
   472      ├── 99_kubeadmin-password-secret.yaml
   473      ├── 99_openshift-cluster-api_master-machines-0.yaml
   474      ├── 99_openshift-cluster-api_master-machines-1.yaml
   475      ├── 99_openshift-cluster-api_master-machines-2.yaml
   476      ├── 99_openshift-cluster-api_master-user-data-secret.yaml
   477      ├── 99_openshift-cluster-api_worker-machineset-0.yaml
   478      ├── 99_openshift-cluster-api_worker-user-data-secret.yaml
   479      ├── 99_openshift-machineconfig_master.yaml
   480      ├── 99_openshift-machineconfig_worker.yaml
   481      ├── 99_rolebinding-cloud-creds-secret-reader.yaml
   482      └── 99_role-cloud-creds-secret-reader.yaml
   483  
   484  2 directories, 38 files
   485  ```
   486  
   487  ### Remove Machines and MachineSets
   488  
   489  Remove the control-plane Machines and compute MachineSets, because we'll be providing those ourselves and don't want to involve the
   490  [machine-API operator][mao] and [cluster-control-plane-machine-set operator][ccpmso]:
   491  <!--- e2e-openstack-upi: INCLUDE START --->
   492  ```sh
   493  $ rm -f openshift/99_openshift-cluster-api_master-machines-*.yaml openshift/99_openshift-cluster-api_worker-machineset-*.yaml openshift/99_openshift-machine-api_master-control-plane-machine-set.yaml
   494  ```
   495  <!--- e2e-openstack-upi: INCLUDE END --->
    496  Leave the compute MachineSets in if you want to create compute machines via the machine API. However, some references must be updated in the machineset spec (`openshift/99_openshift-cluster-api_worker-machineset-0.yaml`) to match your environment, as in the sketch below the list:
   497  
   498  * The OS image: `spec.template.spec.providerSpec.value.image`
   499  
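If you keep the MachineSet, here is a minimal sketch for pointing it at the Glance image uploaded earlier, assuming `CLUSTER_NAME` is set in your shell and the image is named `rhcos-${CLUSTER_NAME}` as in this guide:

```sh
$ python -c '
import os
import yaml
path = "openshift/99_openshift-cluster-api_worker-machineset-0.yaml"
data = yaml.safe_load(open(path))
# point the providerSpec at the RHCOS Glance image created earlier
data["spec"]["template"]["spec"]["providerSpec"]["value"]["image"] = "rhcos-" + os.environ["CLUSTER_NAME"]
open(path, "w").write(yaml.dump(data, default_flow_style=False))'
```
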
   500  [mao]: https://github.com/openshift/machine-api-operator
   501  [ccpmso]: https://github.com/openshift/cluster-control-plane-machine-set-operator
   502  
   503  ### Set control-plane nodes to desired schedulable state
   504  
    505  Currently [emptying the compute pools][empty-compute-pools] makes control-plane nodes schedulable. Let's update the scheduler configuration to match the desired configuration defined in `inventory.yaml`:
   506  <!--- e2e-openstack-upi: INCLUDE START --->
   507  ```sh
   508  $ python -c '
   509  import yaml
   510  inventory = yaml.safe_load(open("inventory.yaml"))
   511  inventory_os_compute_nodes_number = inventory["all"]["hosts"]["localhost"]["os_compute_nodes_number"]
   512  path = "manifests/cluster-scheduler-02-config.yml"
   513  data = yaml.safe_load(open(path))
   514  if not inventory_os_compute_nodes_number:
   515     data["spec"]["mastersSchedulable"] = True
   516  else:
   517     data["spec"]["mastersSchedulable"] = False
   518  open(path, "w").write(yaml.dump(data, default_flow_style=False))'
   519  ```
   520  <!--- e2e-openstack-upi: INCLUDE END --->
   521  
   522  [empty-compute-pools]: #empty-compute-pools
   523  
   524  ## Ignition Config
   525  
    526  Next, we will turn these manifests into [Ignition][ignition] files. These will be used to configure the Nova servers on boot (Ignition performs a function similar to cloud-init).
   527  <!--- e2e-openstack-upi: INCLUDE START --->
   528  ```sh
   529  $ openshift-install create ignition-configs
   530  ```
   531  <!--- e2e-openstack-upi: INCLUDE END --->
   532  ```sh
   533  $ tree
   534  .
   535  ├── auth
   536  │   ├── kubeadmin-password
   537  │   └── kubeconfig
   538  ├── bootstrap.ign
   539  ├── master.ign
   540  ├── metadata.json
   541  └── worker.ign
   542  ```
   543  
   544  [ignition]: https://coreos.com/ignition/docs/latest/
   545  
   546  ### Infra ID
   547  
    548  The OpenShift cluster has been assigned an identifier in the form of `<cluster name>-<random string>`. You will need it later in this guide, so it is a good idea to keep it around.
   549  You can see the various metadata about your future cluster in `metadata.json`.
   550  
   551  The Infra ID is under the `infraID` key:
   552  <!--- e2e-openstack-upi: INCLUDE START --->
   553  ```sh
   554  $ export INFRA_ID=$(jq -r .infraID metadata.json)
   555  ```
   556  <!--- e2e-openstack-upi: INCLUDE END --->
   557  ```sh
   558  $ echo $INFRA_ID
   559  openshift-qlvwv
   560  ```
   561  
   562  We'll use the `infraID` as the prefix for all the OpenStack resources we'll create. That way, you'll be able to have multiple deployments in the same OpenStack project without name conflicts.
   563  
   564  Make sure your shell session has the `$INFRA_ID` environment variable set when you run the commands later in this document.
   565  
   566  ### Bootstrap Ignition
   567  
   568  #### Edit the Bootstrap Ignition
   569  
    570  We need to set the bootstrap hostname explicitly, and, in the case of an OpenStack endpoint that uses a self-signed certificate, the CA cert file. The IPI installer does this automatically, but for now UPI does not.
   571  
   572  We will update the ignition file (`bootstrap.ign`) to create the following files:
   573  
   574  **`/etc/hostname`**:
   575  
   576  ```plaintext
   577  openshift-qlvwv-bootstrap
   578  ```
   579  
   580  (using the `infraID`)
   581  
   582  **`/opt/openshift/tls/cloud-ca-cert.pem`** (if applicable).
   583  
   584  > **Note**
   585  > We recommend you back up the Ignition files before making any changes!
   586  
   587  You can edit the Ignition file manually or run this Python script:
   588  
   589  <!--- e2e-openstack-upi: INCLUDE START --->
   590  ```python
   591  import base64
   592  import json
   593  import os
   594  
   595  with open('bootstrap.ign', 'r') as f:
   596      ignition = json.load(f)
   597  
   598  storage = ignition.get('storage', {})
   599  files = storage.get('files', [])
   600  
   601  infra_id = os.environ.get('INFRA_ID', 'openshift').encode()
   602  hostname_b64 = base64.standard_b64encode(infra_id + b'-bootstrap\n').decode().strip()
   603  files.append(
   604  {
   605      'path': '/etc/hostname',
   606      'mode': 420,
   607      'contents': {
   608          'source': 'data:text/plain;charset=utf-8;base64,' + hostname_b64,
   609      },
   610  })
   611  
   612  ca_cert_path = os.environ.get('OS_CACERT', '')
   613  if ca_cert_path:
   614      with open(ca_cert_path, 'r') as f:
   615          ca_cert = f.read().encode().strip()
   616          ca_cert_b64 = base64.standard_b64encode(ca_cert).decode().strip()
   617  
   618      files.append(
   619      {
   620          'path': '/opt/openshift/tls/cloud-ca-cert.pem',
   621          'mode': 420,
   622          'contents': {
   623              'source': 'data:text/plain;charset=utf-8;base64,' + ca_cert_b64,
   624          },
   625      })
   626  
   627  storage['files'] = files
   628  ignition['storage'] = storage
   629  
   630  with open('bootstrap.ign', 'w') as f:
   631      json.dump(ignition, f)
   632  ```
   633  <!--- e2e-openstack-upi: INCLUDE END --->
   634  
   635  Feel free to make any other changes.
   636  
    637  #### Upload the Bootstrap Ignition
   638  
    639  The generated bootstrap ignition file tends to be quite large (around 300KB -- it contains all the manifests, master and worker ignitions, etc.). This is generally too big to be passed to the server directly (the OpenStack Nova user data limit is 64KB).
   640  
    641  To work around this, we will create a smaller Ignition file that will be passed to Nova as user data and that will download the main Ignition file upon execution.
   642  
   643  The main file needs to be uploaded to an HTTP(S) location the Bootstrap node will be able to access.
   644  
   645  Choose the storage that best fits your needs and availability.
   646  
    647  **IMPORTANT**: The `bootstrap.ign` contains sensitive information such as your `clouds.yaml` credentials. It should not be accessible to the public! It will only be used once, during the Nova boot of the Bootstrap server. We strongly recommend you restrict access to that server only and delete the file afterwards.
   648  
   649  Possible choices include:
   650  
   651  * Glance (see the example below);
   652  * Swift;
   653  * Amazon S3;
   654  * Internal web server inside your organization;
    655  * A throwaway Nova server on the cluster's nodes subnet (`$OS_NET_ID-nodes`) hosting a static web server exposing the file.
   656  
   657  In this guide, we will assume the file is at the following URL:
   658  
   659  https://static.example.com/bootstrap.ign
   660  
   661  ##### Example: Glance image service
   662  
   663  Create the `bootstrap-ign-${INFRA_ID}` image and upload the `bootstrap.ign` file:
   664  
   665  <!--- e2e-openstack-upi: INCLUDE START --->
   666  ```sh
   667  $ openstack image create --disk-format=raw --container-format=bare --file bootstrap.ign "bootstrap-ign-${INFRA_ID}"
   668  ```
   669  <!--- e2e-openstack-upi: INCLUDE END --->
   670  
   671  > **Note**
   672  > Make sure the created image has `active` status.
   673  
    674  Copy and save the `file` value of the output; it should look like `/v2/images/<image_id>/file`.
   675  
   676  Get Glance public URL:
   677  
   678  ```sh
   679  $ openstack catalog show image
   680  ```
   681  
    682  By default, the Glance service doesn't allow anonymous access to the data. So, if you use Glance to store the ignition config, you also need to provide a valid auth token in the `ignition.config.merge.httpHeaders` field.
   683  
   684  The token can be obtained with this command:
   685  <!--- e2e-openstack-upi: INCLUDE START --->
   686  ```sh
   687  $ export GLANCE_TOKEN="$(openstack token issue -c id -f value)"
   688  ```
   689  <!--- e2e-openstack-upi: INCLUDE END --->
   690  
   691  Note that this token can be generated as any OpenStack user with Glance read access; this particular token will only be used for downloading the Ignition file.
   692  
   693  The command will return the token to be added to the `ignition.config.merge[0].httpHeaders` property in the Bootstrap Ignition Shim (see [below](#create-the-bootstrap-ignition-shim)).
   694  
   695  Combine the public URL with the `file` value to get the link to your bootstrap ignition, in the format `<glance_public_url>/v2/images/<image_id>/file`:
   696  <!--- e2e-openstack-upi: INCLUDE START --->
   697  ```sh
   698  $ export BOOTSTRAP_URL="$(openstack catalog show glance -f json | jq -r '.endpoints[] | select(.interface=="public").url')$(openstack image show -f value -c file bootstrap-ign-${INFRA_ID})"
   699  ```
   700  <!--- e2e-openstack-upi: INCLUDE END --->
   701  
   702  Example of the link to be put in the `source` property of the Ignition Shim (see below): `https://public.glance.example.com:9292/v2/images/b7e2b84e-15cf-440a-a113-3197518da024/file`.
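
As an optional sanity check that the bootstrap node will be able to fetch the file, you can request it yourself with the token; this should print `200`:

```sh
$ curl -s -o /dev/null -w '%{http_code}\n' -H "X-Auth-Token: $GLANCE_TOKEN" "$BOOTSTRAP_URL"
```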
   703  
   704  ### Create the Bootstrap Ignition Shim
   705  
    706  As mentioned before, due to the Nova user data size limit, we need to create a new Ignition file that will load the bulk of the Bootstrap node configuration. This will be similar to the existing `master.ign` and `worker.ign` files.
   707  
   708  Create a file called `$INFRA_ID-bootstrap-ignition.json` (fill in your `infraID`) with the following contents:
   709  <!--- e2e-openstack-upi: INCLUDE START --->
   710  ```${INFRA_ID}-bootstrap-ignition.json
   711  {
   712    "ignition": {
   713      "config": {
   714        "merge": [
   715          {
   716            "httpHeaders": [
   717              {
   718                "name": "X-Auth-Token",
   719                "value": "${GLANCE_TOKEN}"
   720              }
   721            ],
   722            "source": "${BOOTSTRAP_URL}"
   723          }
   724        ]
   725      },
   726      "version": "3.1.0"
   727    }
   728  }
   729  ```
   730  <!--- e2e-openstack-upi: INCLUDE END --->
   731  
    732  Replace the `ignition.config.merge.source` value with the URL hosting the `bootstrap.ign` file you've uploaded previously and, if using Glance, the `X-Auth-Token` value with the Glance token.
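
If `GLANCE_TOKEN` and `BOOTSTRAP_URL` are already exported in your shell (as in the Glance example above), one way to generate the shim with the values filled in is a here-document:

```sh
$ cat > "$INFRA_ID-bootstrap-ignition.json" <<EOF
{
  "ignition": {
    "config": {
      "merge": [
        {
          "httpHeaders": [
            {
              "name": "X-Auth-Token",
              "value": "$GLANCE_TOKEN"
            }
          ],
          "source": "$BOOTSTRAP_URL"
        }
      ]
    },
    "version": "3.1.0"
  }
}
EOF
```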
   733  
   734  #### Ignition file served by server using self-signed certificate
   735  
    736  In order for the bootstrap node to retrieve the ignition file when it is served by a server using a self-signed certificate, it is necessary to add the CA certificate to `ignition.security.tls.certificateAuthorities` in the ignition shim. Here is how you might do it.
   737  
   738  Encode the certificate to base64:
   739  ```sh
   740  $ openssl x509 -in "$OS_CACERT" | base64 -w0
   741  ```
   742  
   743  Add the base64-encoded certificate to the ignition shim:
   744  ```json
   745  {
   746    "ignition": {
   747      "security": {
   748        "tls": {
   749          "certificateAuthorities": [
   750            {
   751              "source": "data:text/plain;charset=utf-8;base64,<base64_encoded_certificate>"
   752            }
   753          ]
   754        }
   755      },
   756      "version": "3.1.0"
   757    }
   758  }
   759  ```
   760  
   761  Or programmatically add the certificate to the bootstrap ignition shim with Python:
   762  <!--- e2e-openstack-upi: INCLUDE START --->
   763  ```python
   764  import base64
   765  import json
   766  import os
   767  
   768  ca_cert_path = os.environ.get('OS_CACERT', '')
   769  if ca_cert_path:
   770      with open(ca_cert_path, 'r') as f:
   771          ca_cert = f.read().encode().strip()
   772          ca_cert_b64 = base64.standard_b64encode(ca_cert).decode().strip()
   773  
   774      certificateAuthority = {
   775          'source': 'data:text/plain;charset=utf-8;base64,'+ca_cert_b64,
   776      }
   777  else:
   778      exit()
   779  
   780  infra_id = os.environ.get('INFRA_ID', 'openshift')
   781  
   782  bootstrap_ignition_shim = infra_id + '-bootstrap-ignition.json'
   783  
   784  with open(bootstrap_ignition_shim, 'r') as f:
   785      ignition_data = json.load(f)
   786  
   787  ignition = ignition_data.get('ignition', {})
   788  security = ignition.get('security', {})
   789  tls = security.get('tls', {})
   790  certificateAuthorities = tls.get('certificateAuthorities', [])
   791  
   792  certificateAuthorities.append(certificateAuthority)
   793  tls['certificateAuthorities'] = certificateAuthorities
   794  security['tls'] = tls
   795  ignition['security'] = security
   796  ignition_data['ignition'] = ignition
   797  
   798  with open(bootstrap_ignition_shim, 'w') as f:
   799      json.dump(ignition_data, f)
   800  ```
   801  <!--- e2e-openstack-upi: INCLUDE END --->
   802  
   803  ### Master Ignition
   804  
   805  Similar to bootstrap, we need to make sure the hostname is set to the expected value (it must match the name of the Nova server exactly).
   806  
   807  Since that value will be different for each master node, we need to create one Ignition file per master node.
   808  
   809  We will deploy three Control plane (master) nodes. Their Ignition configs can be created like so:
   810  
   811  <!--- e2e-openstack-upi: INCLUDE START --->
   812  ```sh
   813  $ for index in $(seq 0 2); do
   814      MASTER_HOSTNAME="$INFRA_ID-master-$index\n"
   815      python -c "import base64, json, sys
   816  ignition = json.load(sys.stdin)
   817  storage = ignition.get('storage', {})
   818  files = storage.get('files', [])
   819  files.append({'path': '/etc/hostname', 'mode': 420, 'contents': {'source': 'data:text/plain;charset=utf-8;base64,' + base64.standard_b64encode(b'$MASTER_HOSTNAME').decode().strip()}})
   820  storage['files'] = files
   821  ignition['storage'] = storage
   822  json.dump(ignition, sys.stdout)" <master.ign >"$INFRA_ID-master-$index-ignition.json"
   823  done
   824  ```
   825  <!--- e2e-openstack-upi: INCLUDE END --->
   826  
   827  This should create files `openshift-qlvwv-master-0-ignition.json`, `openshift-qlvwv-master-1-ignition.json` and `openshift-qlvwv-master-2-ignition.json`.
   828  
   829  If you look inside, you will see that they contain very little. In fact, most of the master Ignition is served by the Machine Config Server running on the bootstrap node and the masters contain only enough to know where to look for the rest.
   830  
   831  You can make your own changes here.
   832  
   833  > **Note**
   834  > The worker nodes do not require any changes to their Ignition, but you can make your own by editing `worker.ign`.
   835  
   836  ## Network Topology
   837  
   838  In this section we'll create all the networking pieces necessary to host the OpenShift cluster: security groups, network, subnet, router, ports.
   839  
   840  ### Security Groups
   841  <!--- e2e-openstack-upi: INCLUDE START --->
   842  ```sh
   843  $ ansible-playbook -i inventory.yaml security-groups.yaml
   844  ```
   845  <!--- e2e-openstack-upi: INCLUDE END --->
   846  
   847  The playbook creates one Security group for the Control Plane and one for the Compute nodes, then attaches rules for enabling communication between the nodes.
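
You can optionally confirm that the groups were created; with the default names from `inventory.yaml`/`common.yaml` they are prefixed with the infra ID:

```sh
$ openstack security group list -f value -c Name | grep "$INFRA_ID"
```
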
   848  ### Update Network, Subnet, Router and ports
   849  <!--- e2e-openstack-upi: INCLUDE START --->
   850  ```sh
   851  $ ansible-playbook -i inventory.yaml update-network-resources.yaml
   852  ```
   853  <!--- e2e-openstack-upi: INCLUDE END --->
   854  
    855  The playbook sets tags on the network, subnets, ports and router. It also attaches the floating IPs to the API and Ingress ports and sets the security group on those ports.
   856  
   857  ### Subnet DNS (optional)
   858  
   859  > **Note**
   860  > This step is optional and only necessary if you want to control the default resolvers your Nova servers will use.
   861  
   862  During deployment, the OpenShift nodes will need to be able to resolve public name records to download the OpenShift images and so on. They will also need to resolve the OpenStack API endpoint.
   863  
   864  The default resolvers are often set up by the OpenStack administrator in Neutron. However, some deployments do not have default DNS servers set, meaning the servers are not able to resolve any records when they boot.
   865  
    866  If you are in this situation, you can add resolvers to your Neutron subnet (`$OS_NET_ID-nodes`). These will be put into `/etc/resolv.conf` on your servers post-boot.
   867  
   868  For example, if you want to add the following nameservers: `198.51.100.86` and `198.51.100.87`, you can run this command:
   869  
   870  ```sh
    871  $ openstack subnet set --dns-nameserver <198.51.100.86> --dns-nameserver <198.51.100.87> "$OS_NET_ID-nodes"
   872  ```
   873  
   874  ## Bootstrap
   875  <!--- e2e-openstack-upi: INCLUDE START --->
   876  ```sh
   877  $ ansible-playbook -i inventory.yaml bootstrap.yaml
   878  ```
   879  <!--- e2e-openstack-upi: INCLUDE END --->
   880  
   881  The playbook sets the *allowed address pairs* on each port attached to our OpenShift nodes.
   882  
   883  Since the keepalived-managed IP addresses are not attached to any specific server, Neutron would block their traffic by default. By passing them to `--allowed-address` the traffic can flow freely through.
   884  
   885  An additional Floating IP is also attached to the bootstrap port. This is not necessary for the deployment (and we will delete the bootstrap resources afterwards). However, if the bootstrapping phase fails for any reason, the installer will try to SSH in and download the bootstrap log. That will only succeed if the node is reachable (which in general means a floating IP).
   886  
   887  After the bootstrap server is active, you can check the console log to see that it is getting the ignition correctly:
   888  
   889  ```sh
   890  $ openstack console log show "$INFRA_ID-bootstrap"
   891  ```
   892  
   893  You can also SSH into the server (using its floating IP address) and check on the bootstrapping progress:
   894  
   895  ```sh
    896  $ ssh core@203.0.113.20
   897  [core@openshift-qlvwv-bootstrap ~]$ journalctl -b -f -u bootkube.service
   898  ```
   899  
   900  ## Control Plane
   901  <!--- e2e-openstack-upi: INCLUDE START --->
   902  ```sh
   903  $ ansible-playbook -i inventory.yaml control-plane.yaml
   904  ```
   905  <!--- e2e-openstack-upi: INCLUDE END --->
   906  
   907  Our control plane will consist of three nodes. The servers will be passed the `master-?-ignition.json` files prepared earlier.
   908  
   909  The playbook places the Control Plane in a Server Group with "soft anti-affinity" policy.
   910  
   911  The master nodes should load the initial Ignition and then keep waiting until the bootstrap node stands up the Machine Config Server which will provide the rest of the configuration.
   912  
   913  ### Wait for the Control Plane to Complete
   914  
   915  When that happens, the masters will start running their own pods, run etcd and join the "bootstrap" cluster. Eventually, they will form a fully operational control plane.
   916  
   917  You can monitor this via the following command:
   918  <!--- e2e-openstack-upi: INCLUDE START --->
   919  ```sh
   920  $ openshift-install wait-for bootstrap-complete
   921  ```
   922  <!--- e2e-openstack-upi: INCLUDE END --->
   923  
   924  Eventually, it should output the following:
   925  
   926  ```plaintext
   927  INFO API v1.14.6+f9b5405 up
   928  INFO Waiting up to 30m0s for bootstrapping to complete...
   929  ```
   930  
   931  This means the masters have come up successfully and are joining the cluster.
   932  
   933  Eventually, the `wait-for` command should end with:
   934  
   935  ```plaintext
   936  INFO It is now safe to remove the bootstrap resources
   937  ```
   938  
   939  ### Access the OpenShift API
   940  
   941  You can use the `oc` or `kubectl` commands to talk to the OpenShift API. The admin credentials are in `auth/kubeconfig`:
   942  <!--- e2e-openstack-upi: INCLUDE START --->
   943  ```sh
   944  $ export KUBECONFIG="$PWD/auth/kubeconfig"
   945  ```
   946  <!--- e2e-openstack-upi: INCLUDE END --->
   947  ```sh
   948  $ oc get nodes
   949  $ oc get pods -A
   950  ```
   951  
   952  > **Note**
   953  > Only the API will be up at this point. The OpenShift UI will run on the compute nodes.
   954  
   955  ### Delete the Bootstrap Resources
   956  <!--- e2e-openstack-upi: INCLUDE START --->
   957  ```sh
   958  $ ansible-playbook -i inventory.yaml down-bootstrap.yaml
   959  ```
   960  <!--- e2e-openstack-upi: INCLUDE END --->
   961  
   962  The teardown playbook deletes the bootstrap port and server.
   963  
   964  Now the bootstrap floating IP can also be destroyed.
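
For example, using the bootstrap floating IP from this guide:

```sh
$ openstack floating ip delete 203.0.113.20
```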
   965  
   966  If you haven't done so already, you should also disable the bootstrap Ignition URL. If you are following the Glance example:
   967  
   968  <!--- e2e-openstack-upi: INCLUDE START --->
   969  ```sh
   970  $ openstack image delete "bootstrap-ign-${INFRA_ID}"
   971  $ openstack token revoke "$GLANCE_TOKEN"
   972  ```
   973  <!--- e2e-openstack-upi: INCLUDE END --->
   974  
   975  
   976  ## Compute Nodes
   977  <!--- e2e-openstack-upi: INCLUDE START --->
   978  ```sh
   979  $ ansible-playbook -i inventory.yaml compute-nodes.yaml
   980  ```
   981  <!--- e2e-openstack-upi: INCLUDE END --->
   982  
    983  This process is similar to the one for the control plane, but the workers need to be approved before they're allowed to join the cluster.
   984  
   985  The workers need no ignition override.
   986  
   987  ### Compute Nodes with SR-IOV NICs
   988  
    989  Using single root I/O virtualization (SR-IOV) networking as an additional network in OpenShift can be beneficial for applications that require high bandwidth and low latency. To enable this in your cluster, you will need to install the [SR-IOV Network Operator](https://docs.openshift.com/container-platform/4.6/networking/hardware_networks/installing-sriov-operator.html). If you are not sure whether your cluster supports this feature, please refer to the [SR-IOV hardware networks documentation](https://docs.openshift.com/container-platform/4.6/networking/hardware_networks/about-sriov.html). If you are planning an OpenStack deployment with SR-IOV networks and need additional resources, check the [OpenStack SR-IOV deployment docs](https://access.redhat.com/documentation/en-us/red_hat_openstack_platform/16.1/html-single/network_functions_virtualization_planning_and_configuration_guide/index#assembly_sriov_parameters). Once you meet these requirements, you can start provisioning an SR-IOV network and subnet in OpenStack.
   990  
   991  ```sh
   992  openstack network create radio --provider-physical-network radio --provider-network-type vlan --provider-segment 120
   993  
   994  openstack subnet create radio --network radio --subnet-range <your CIDR range> --dhcp
   995  ```
   996  
   997  Your compute nodes will need to have two types of ports for this feature to work. One port needs to connect the node to your OpenShift network so that it can join the cluster and communicate with the other nodes. The other type of port is for your SR-IOV traffic. The OpenShift networking port should be created the same way we normally create ports for compute nodes.
   998  
   999  ```sh
  1000  openstack port create os_port_worker_0 --network <infraID>-network --security-group <infraID>-worker --fixed-ip subnet=<infraID>-nodes,ip-address=<a fixed IP> --allowed-address ip-address=<infraID>-ingress-port
  1001  ```
  1002  
  1003  The SR-IOV port(s) must be created explicitly by the user and passed as a NIC during instance creation, otherwise the `vnic-type` will not be `direct` and it will not work.
  1004  
  1005  ```sh
  1006  openstack port create radio_port --vnic-type direct --network radio --fixed-ip subnet=radio,ip-address=<a fixed ip> --tag=radio --disable-port-security
  1007  ```
  1008  
  1009  When you create your instance, make sure that the SR-IOV port and the OCP port you created for it are added as NICs.
  1010  
  1011  ```sh
  1012  openstack server create --image "rhcos-${CLUSTER_NAME}" --flavor ocp --user-data <ocp project>/build-artifacts/worker.ign --nic port-id=<os_port_worker_0 ID> --nic port-id=<radio_port ID> --config-drive true worker-<worker_id>.<cluster_name>.<cluster_domain>
  1013  ```
  1014  
  1015  ### Approve the worker CSRs
  1016  
  1017  Even after they've booted up, the workers will not show up in `oc get nodes`.
  1018  
  1019  Instead, they will create certificate signing requests (CSRs) which need to be approved. You can watch for the CSRs here:
  1020  
  1021  ```sh
  1022  $ watch oc get csr -A
  1023  ```
  1024  
   1025  Eventually, you should see `Pending` entries looking like this:
  1026  
  1027  ```sh
  1028  $ oc get csr -A
  1029  NAME        AGE    REQUESTOR                                                                   CONDITION
  1030  csr-2scwb   16m    system:serviceaccount:openshift-machine-config-operator:node-bootstrapper   Approved,Issued
  1031  csr-5jwqf   16m    system:node:openshift-qlvwv-master-0                                        Approved,Issued
  1032  csr-88jp8   116s   system:serviceaccount:openshift-machine-config-operator:node-bootstrapper   Pending
  1033  csr-9dt8f   15m    system:node:openshift-qlvwv-master-1                                        Approved,Issued
  1034  csr-bqkw5   16m    system:serviceaccount:openshift-machine-config-operator:node-bootstrapper   Approved,Issued
  1035  csr-dpprd   6s     system:serviceaccount:openshift-machine-config-operator:node-bootstrapper   Pending
  1036  csr-dtcws   24s    system:serviceaccount:openshift-machine-config-operator:node-bootstrapper   Pending
  1037  csr-lj7f9   16m    system:node:openshift-qlvwv-master-2                                        Approved,Issued
  1038  csr-lrtlk   15m    system:serviceaccount:openshift-machine-config-operator:node-bootstrapper   Approved,Issued
  1039  csr-wkm94   16m    system:serviceaccount:openshift-machine-config-operator:node-bootstrapper   Approved,Issued
  1040  ```
  1041  
  1042  You should inspect each pending CSR and verify that it comes from a node you recognize:
  1043  
  1044  ```sh
  1045  $ oc describe csr csr-88jp8
  1046  Name:               csr-88jp8
  1047  Labels:             <none>
  1048  Annotations:        <none>
  1049  CreationTimestamp:  Wed, 23 Oct 2019 13:22:51 +0200
  1050  Requesting User:    system:serviceaccount:openshift-machine-config-operator:node-bootstrapper
  1051  Status:             Pending
  1052  Subject:
  1053           Common Name:    system:node:openshift-qlvwv-worker-0
  1054           Serial Number:
  1055           Organization:   system:nodes
  1056  Events:  <none>
  1057  ```
  1058  
  1059  If it does (this one is for `openshift-qlvwv-worker-0` which we've created earlier), you can approve it:
  1060  
  1061  ```sh
  1062  $ oc adm certificate approve csr-88jp8
  1063  ```
  1064  
  1065  Approved nodes should now show up in `oc get nodes`, but they will be in the `NotReady` state. They will create a second CSR which you should also review:
  1066  
  1067  ```sh
  1068  $ oc get csr -A
  1069  NAME        AGE     REQUESTOR                                                                   CONDITION
  1070  csr-2scwb   17m     system:serviceaccount:openshift-machine-config-operator:node-bootstrapper   Approved,Issued
  1071  csr-5jwqf   17m     system:node:openshift-qlvwv-master-0                                         Approved,Issued
  1072  csr-7mv4d   13s     system:node:openshift-qlvwv-worker-1                                         Pending
  1073  csr-88jp8   3m29s   system:serviceaccount:openshift-machine-config-operator:node-bootstrapper   Approved,Issued
  1074  csr-9dt8f   17m     system:node:openshift-qlvwv-master-1                                         Approved,Issued
  1075  csr-bqkw5   18m     system:serviceaccount:openshift-machine-config-operator:node-bootstrapper   Approved,Issued
  1076  csr-bx7p4   28s     system:node:openshift-qlvwv-worker-0                                         Pending
  1077  csr-dpprd   99s     system:serviceaccount:openshift-machine-config-operator:node-bootstrapper   Approved,Issued
  1078  csr-dtcws   117s    system:serviceaccount:openshift-machine-config-operator:node-bootstrapper   Approved,Issued
  1079  csr-lj7f9   17m     system:node:openshift-qlvwv-master-2                                         Approved,Issued
  1080  csr-lrtlk   17m     system:serviceaccount:openshift-machine-config-operator:node-bootstrapper   Approved,Issued
  1081  csr-wkm94   18m     system:serviceaccount:openshift-machine-config-operator:node-bootstrapper   Approved,Issued
  1082  csr-wqpfd   21s     system:node:openshift-qlvwv-worker-2                                         Pending
  1083  ```
  1084  
  1085  (we see the CSR approved earlier as well as a new `Pending` one for the same node: `openshift-qlvwv-worker-0`)
  1086  
  1087  And approve:
  1088  
  1089  ```sh
  1090  $ oc adm certificate approve csr-bx7p4
  1091  ```
  1092  
  1093  Once this CSR is approved, the node should switch to `Ready` and pods will be scheduled on it.
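
Once you have verified that every pending CSR belongs to a node you created, you could also approve them in bulk; a sketch using `jq` (pending CSRs have an empty `status`):

```sh
$ oc get csr -o json | jq -r '.items[] | select(.status == {}) | .metadata.name' | xargs oc adm certificate approve
```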
  1094  
  1095  ### Wait for the OpenShift Installation to Complete
  1096  
  1097  Run the following command to verify the OpenShift cluster is fully deployed:
  1098  
  1099  ```sh
  1100  $ openshift-install --log-level debug wait-for install-complete
  1101  ```
  1102  
   1103  Upon success, it will print the URL to the OpenShift Console (the web UI) as well as the admin username and password to log in.
  1104  
  1105  ## Destroy the OpenShift Cluster
  1106  
  1107  <!--- e2e-openstack-upi(deprovision): INCLUDE START --->
  1108  ```sh
  1109  $ ansible-playbook -i inventory.yaml  \
  1110  	down-bootstrap.yaml      \
  1111  	down-control-plane.yaml  \
  1112  	down-compute-nodes.yaml  \
  1113  	down-containers.yaml     \
  1114  	down-network.yaml        \
  1115  	down-security-groups.yaml
  1116  ```
  1117  <!--- e2e-openstack-upi(deprovision): INCLUDE END --->
  1118  
  1119  Delete the RHCOS image if it's no longer useful.
  1120  
  1121  <!--- e2e-openstack-upi(deprovision): INCLUDE START --->
  1122  ```sh
  1123  openstack image delete "rhcos-${CLUSTER_NAME}"
  1124  ```
  1125  <!--- e2e-openstack-upi(deprovision): INCLUDE END --->
  1126  
  1127  Then, remove the `api` and `*.apps` DNS records.
  1128  
   1129  The floating IPs can also be deleted if they are no longer useful.
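
For example, using the API and Ingress floating IPs from this guide:

```sh
$ openstack floating ip delete 203.0.113.23 203.0.113.19
```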