
     1  # Install: oVirt/RHV User-Provided Infrastructure
     2  
This User-Provisioned Infrastructure (UPI) process is based on several customizable steps
that allow the user to integrate OpenShift into an existing infrastructure.
     5  
     6  Creating and configuring oVirt/RHV resources is the responsibility of the user
     7  deploying OpenShift.
     8  
     9  The OpenShift installer will still be used in several steps of the process to generate
    10  mandatory ignition files and to monitor the installation process itself.
    11  
    12  ## Table of Contents
    13  
    14  * [Prerequisites](#prerequisites)
    15    * [Ansible and oVirt roles](#ansible-and-ovirt-roles)
    16    * [Network Requirements](#network-requirements)
    17      * [Load Balancers](#load-balancers)
    18      * [DNS](#dns)
    19    * [RHCOS image](#rhcos-image)
    20  * [Getting Ansible playbooks](#getting-ansible-playbooks)
    21  * [Assets directory](#assets-directory)
    22  * [Inventory explained](#inventory-explained)
    23    * [General section](#general-section)
    24    * [RHCOS section](#rhcos-section)
    25    * [Profiles and VMs](#profiles-and-vms)
    26      * [Profiles section](#profiles-section)
    27      * [VMs section](#vms-section)
    28  * [Install Config](#install-config)
    29    * [Set compute replicas to zero](#set-compute-replicas-to-zero)
    30    * [Set machine network](#set-machine-network)
    31    * [Set platform to none](#set-platform-to-none)
    32  * [Manifests](#manifests)
    33    * [Set control-plane nodes unschedulable](#set-control-plane-nodes-unschedulable)
    34  * [Ignition configs](#ignition-configs)
    35  * [Create templates and VMs](#create-templates-and-vms)
    36  * [Bootstrap](#bootstrap)
    37  * [Master nodes](#master-nodes)
    38  * [Wait for Control Plane](#wait-for-control-plane)
    39  * [OpenShift API](#openshift-api)
    40  * [Retire Bootstrap](#retire-bootstrap)
    41  * [Worker nodes](#worker-nodes)
    42      * [Approve CSRs](#approve-csrs)
    43  * [Wait for Installation Complete](#wait-for-installation-complete)
    44  * [Destroy OpenShift cluster](#destroy-openshift-cluster)
    45  
    46  # Prerequisites
The [inventory.yml](../../../upi/ovirt/inventory.yml) file contains all the variables used by this installation and can be customized to the
user's needs.
    49  The requirements for the UPI in terms of minimum resources are broadly the same as 
    50  the [IPI](./install_ipi.md#minimum-resources).
    51  
    52  - oVirt/RHV account stored in the [ovirt-config.yaml](https://github.com/openshift/installer/blob/master/docs/user/ovirt/install_ipi.md#ovirt-configyaml)
    53    - this file is generated by the `openshift-install` binary installer following the CLI wizard.
- Name of the oVirt/RHV cluster to use
    - contained in the [inventory.yml](../../../upi/ovirt/inventory.yml) and provided as input to `openshift-install`.
- A base domain name of the OpenShift cluster
    - provided as input to `openshift-install`.
- A name of the OpenShift cluster
    - provided as input to `openshift-install`.
- OpenShift Pull Secret
    - provided as input to `openshift-install`.
    62  - A DNS zone
    63    - to configure the resolution names for the OpenShift cluster base domain.
    64  - LoadBalancers
    65    - for bootstrap and control-plane machines.
    66    - for machines running the ingress router (usually compute nodes).
    67  
    68  ## Ansible and oVirt roles
    69  To use the UPI process described here the following are required:
    70  
    71  - Python3
    72  
    73  **Note**:  
Currently most Linux distributions provide Python 3 by default.
    75  
    76  - Ansible 
    77  
    78  **Note for CentOS users**:  
Depending on which version the system is running, the [epel-release](https://fedoraproject.org/wiki/EPEL) repository may need to be enabled.
    80  
    81  ```
    82    $ sudo dnf install ansible
    83  ```
    84  - python3-ovirt-engine-sdk4
    85  
    86  ```
    87    $ sudo dnf install python3-ovirt-engine-sdk4
    88  ```
    89  - ovirt.image-template Ansible role (distributed as ovirt-ansible-image-template package on oVirt Manager)
    90  - ovirt.vm-infra Ansible role (distributed as ovirt-ansible-vm-infra package on oVirt Manager)
    91  
    92  ```
    93    $ sudo ansible-galaxy install ovirt-ansible-vm-infra ovirt-ansible-image-template
    94  ```
    95  
    96  
To follow the UPI installation process, the Ansible playbooks and the `openshift-install` binary
should be executed from the oVirt/RHV Manager or from a machine with access to the REST API of the
oVirt/RHV Manager and with all the oVirt roles available (installed by default on the Manager
machine).
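
As a quick sanity check (not part of the official procedure), you can verify that the required components are available on the machine you'll run the playbooks from; the commands below are a minimal sketch and assume a Fedora/RHEL-like environment:

```sh
# Hypothetical prerequisite check; adjust to your distribution.
ansible --version                                        # Ansible itself
python3 -c 'import ovirtsdk4; print("oVirt SDK OK")'     # python3-ovirt-engine-sdk4
ansible-galaxy list 2>/dev/null | grep -i ovirt          # ovirt.image-template / ovirt.vm-infra roles
```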
   101  
   102  ## Network Requirements
The UPI installation process assumes that the user satisfies some network requirements by providing them through the
existing infrastructure.
During boot the RHCOS-based machines require an IP address in `initramfs` in order to establish a network connection to get their
ignition config files.
One of the recommended ways is to use a DHCP server to manage the machines for the long term, possibly configuring the DHCP server
itself to provide persistent IP addresses and host names to the cluster machines.
   109  
   110  Network connectivity between machines should be configured to allow cluster components to communicate:
   111  
   112  - Kubernetes NodePort
   113    Machines require connectivity to every other machine for OpenShift platform components through the port range `30000`-`32767` .
   114  
   115  - OpenShift reserved
   116    Connectivity to reserved port ranges `10250`-`10259` and `9000`-`9999` should be granted on every machine.
   117  
   118  - Machines to control-plane
   119    Connectivity to ports on ranges `2379`-`2380` (for etcd, peer and metrics) is required for control-plane machines and on
   120    port `6443` for Kubernetes API.
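
Once the machines exist, these requirements can be spot-checked with a simple port probe; the host names below are placeholders for illustration only:

```sh
# Hypothetical connectivity probes between cluster machines; replace the host names.
nc -vz <other-node> 10250          # OpenShift reserved range (kubelet)
nc -vz <other-node> 30000          # NodePort range
nc -vz <control-plane-node> 6443   # Kubernetes API
nc -vz <control-plane-node> 2379   # etcd
```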
   121  
   122  
   123  ### Load Balancers
   124  Before installing the OpenShift Container Platform, two load balancers (layer-4) must be provided by the user infrastructure,
   125  one for the API and one for the Ingress Controller (to allow ingress to applications).
   126  
- Load balancer for ports `6443` and `22623` on control-plane and bootstrap machines (the bootstrap can be removed after control-plane
  initialization completes).
  Port `6443` must be reachable both internally and externally and is needed by the Kubernetes API server.
  Port `22623` must be accessible to nodes within the cluster.
   131  
- Load balancer for ports `443` and `80` for machines running the ingress router (usually worker nodes in the default configuration).
   133    Both ports must be accessible from within and outside the cluster.
   134  
   135  **NOTE**: the rules above can also be set on the same load balancer server. 
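
Assuming the example FQDNs used later in this document (`api.ocp4.example.org` and the `*.apps.ocp4.example.org` wildcard), a quick way to confirm that the layer-4 listeners are in place is:

```sh
# Hypothetical listener checks; the names follow the examples used later in this guide.
for port in 6443 22623; do nc -vz api.ocp4.example.org "$port"; done
for port in 80 443; do nc -vz console-openshift-console.apps.ocp4.example.org "$port"; done
```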
   136  
   137  
   138  ### DNS
The UPI installation process requires the user to set up the DNS provided by the existing infrastructure to allow the correct resolution of
the main components and services:
   141  
   142  - Kubernetes API
   143    DNS records `api.<cluster_name>.<base_domain>` (internal and external resolution) and `api-int.<cluster_name>.<base_domain>` 
   144    (internal resolution) must be added to point to the Load balancer targeting the control plane machines. 
   145  
   146  - OpenShift routes
   147    A DNS record `*.apps.<cluster_name>.<base_domain>` must be provided to point to the Load balancer configured to manage the
   148    traffic for the ingress router (ports `443` and `80` of the compute machines).
   149  
   150  **NOTE**: the DNS records above may also point to the same IP in case you are using only one load balancer configured with the rules described
   151  in the [previous section](#load-balancers).
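
Using the same example names, resolution of the required records can be verified with `dig` (or any equivalent tool) before starting the installation:

```sh
# DNS resolution checks; any host name should match the *.apps wildcard record.
dig +short api.ocp4.example.org
dig +short api-int.ocp4.example.org
dig +short anything.apps.ocp4.example.org
```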
   152  
   153  ## RHCOS image
   154  This UPI installation process requires a proper RHCOS (Red Hat Enterprise Linux CoreOS) image URL to be set in the [inventory.yml](../../../upi/ovirt/inventory.yml) file.
   155  
   156  The RHCOS images can be found [here](https://mirror.openshift.com/pub/openshift-v4/dependencies/rhcos/) and you have to choose
   157  the URL related to the `OpenStack` qcow2 image type, like in the example below
   158  
   159  ```
   160  https://mirror.openshift.com/pub/openshift-v4/dependencies/rhcos/pre-release/4.6.0-0.nightly-2020-07-16-122837/rhcos-4.6.0-0.nightly-2020-07-16-122837-x86_64-openstack.x86_64.qcow2.gz
   161  ```
   162  
The version of the image should be chosen according to the OpenShift version you're about to install (in general less than or equal to the OCP
version).
Once you have the URL set in the [inventory.yml](../../../upi/ovirt/inventory.yml), a dedicated Ansible playbook will take care of downloading the `qcow2.gz` file, uncompressing it
into a specified folder and using it to create oVirt/RHV templates.
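
For reference, what the playbook does with these settings is roughly equivalent to the following manual steps (URL and paths are the example values used in the inventory shown later):

```sh
# Rough manual equivalent of the download/uncompress step performed by the playbook.
curl -L -o /tmp/rhcos.qcow2.gz \
  "https://mirror.openshift.com/pub/openshift-v4/dependencies/rhcos/pre-release/4.6.0-0.nightly-2020-07-16-122837/rhcos-4.6.0-0.nightly-2020-07-16-122837-x86_64-openstack.x86_64.qcow2.gz"
gunzip -k /tmp/rhcos.qcow2.gz      # keeps the .gz and produces /tmp/rhcos.qcow2
```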
   167  
   168  ## Getting Ansible playbooks
   169  All the Ansible playbooks used in this UPI installation process are available [here](https://github.com/openshift/installer/tree/master/upi/ovirt)
   170  and can be downloaded with the following utility script
   171  
   172  ```sh
   173  RELEASE="release-4.6"; \
   174  curl -L -X GET https://api.github.com/repos/openshift/installer/contents/upi/ovirt\?ref\=${RELEASE} | 
   175  grep 'download_url.*\.yml' | 
   176  awk '{ print $2 }' | sed -r 's/("|",)//g' | 
   177  xargs -n 1 curl -O
   178  ```
   179  
Different versions of the oVirt UPI playbooks can be downloaded by changing the `RELEASE` environment variable to the desired branch
(please be aware that this UPI work started with `release-4.6`).
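
Alternatively, the playbooks can be obtained by cloning the installer repository at the desired branch and copying the contents of `upi/ovirt` locally:

```sh
# Equivalent alternative to the curl snippet above.
git clone --branch release-4.6 --depth 1 https://github.com/openshift/installer.git
cp installer/upi/ovirt/*.yml .
```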
   182  
   183  ## Assets directory
Before proceeding with the installation it is **required** to set an environment variable with the path (absolute or relative according to your preferences)
of the directory in which the `openshift-install` command will put all the artifacts and that we'll also refer to in the [inventory.yml](../../../upi/ovirt/inventory.yml).
   186  
   187  ```sh
   188  $ export ASSETS_DIR=./wrk
   189  ```
   190  
   191  ## Inventory Explained
   192  This section shows an example of [inventory.yml](../../../upi/ovirt/inventory.yml), used to specify the variables needed for the UPI installation process, with a brief explanation of the sections included.
   193  
   194  ```YAML
   195  ---
   196  all:
   197    vars:
   198  
   199      # ---
   200      # General section
   201      # ---
   202      ovirt_cluster: "Default" 
   203      ocp:
   204        assets_dir: "{{ lookup('env', 'ASSETS_DIR') }}"
   205        ovirt_config_path: "{{ lookup('env', 'HOME') }}/.ovirt/ovirt-config.yaml"
   206  
   207      # ---
   208      # RHCOS section
   209      # ---
   210      rhcos:
   211        image_url: "https://mirror.openshift.com/pub/openshift-v4/dependencies/rhcos/pre-release/latest-4.6/rhcos-4.6.0-0.nightly-2020-07-16-122837-x86_64-openstack.x86_64.qcow2.gz"
   212        local_cmp_image_path: "/tmp/rhcos.qcow2.gz"
   213        local_image_path: "/tmp/rhcos.qcow2"
   214  
   215      # ---
   216      # Profiles section
   217      # ---
   218      control_plane:
   219        cluster: "{{ ovirt_cluster }}"
   220        memory: 16GiB
   221        sockets: 4
   222        cores: 1
   223        template: rhcos_tpl
   224        operating_system: "rhcos_x64"
   225        type: high_performance
   226        graphical_console:
   227          headless_mode: false
   228          protocol:
   229          - spice
   230          - vnc
   231        disks:
   232        - size: 120GiB
   233          name: os
   234          interface: virtio_scsi
   235          storage_domain: depot_nvme
   236        nics:
   237        - name: nic1
   238          network: lab
   239          profile: lab
   240  
   241      compute:
   242        cluster: "{{ ovirt_cluster }}"
   243        memory: 16GiB
   244        sockets: 4
   245        cores: 1
   246        template: worker_rhcos_tpl
   247        operating_system: "rhcos_x64"
   248        type: high_performance
   249        graphical_console:
   250          headless_mode: false
   251          protocol:
   252          - spice
   253          - vnc
   254        disks:
   255        - size: 120GiB
   256          name: os
   257          interface: virtio_scsi
   258          storage_domain: depot_nvme
   259        nics:
   260        - name: nic1
   261          network: lab
   262          profile: lab
   263  
   264      # ---
   265      # VMs section
   266      # ---
   267      vms:
   268      - name: "{{ metadata.infraID }}-bootstrap"
   269        ocp_type: bootstrap
   270        profile: "{{ control_plane }}"
   271        type: server
   272      - name: "{{ metadata.infraID }}-master0"
   273        ocp_type: master
   274        profile: "{{ control_plane }}"
   275      - name: "{{ metadata.infraID }}-master1"
   276        ocp_type: master
   277        profile: "{{ control_plane }}"
   278      - name: "{{ metadata.infraID }}-master2"
   279        ocp_type: master
   280        profile: "{{ control_plane }}"
   281      - name: "{{ metadata.infraID }}-worker0"
   282        ocp_type: worker
   283        profile: "{{ compute }}"
   284      - name: "{{ metadata.infraID }}-worker1"
   285        ocp_type: worker
   286        profile: "{{ compute }}"
   287      - name: "{{ metadata.infraID }}-worker2"
   288        ocp_type: worker
   289        profile: "{{ compute }}"
   290  ```
   291  
   292  ### General section
   293  Variables in this section are mandatory and allow the user to specify 
   294  
   295  * `ovirt_cluster`: the name of the ovirt cluster in which you'll install the OCP cluster.
   296  * `ocp.assets_dir`: is the path of the folder in which the `openshift-install` command will put all the files built in different stages.
   297  * `ocp.ovirt_config_path`: path of the `ovirt-config.yaml`, generated by the `openshift-install` in the [first stage](#install-config), containing the ovirt credentials 
   298  (necessary to interact with the oVirt/RHV Manager REST API).
   299  
   300  ### RHCOS section
The `rhcos` variable contains the public RHCOS URL (`image_url`) used to download the image to the specified local path (`local_cmp_image_path`) and
to uncompress it (into the file described by `local_image_path`) before it can be used.
   303  
   304  ```YAML
   305    rhcos:
   306      image_url: "https://mirror.openshift.com/pub/openshift-v4/dependencies/rhcos/pre-release/4.6.0-0.nightly-2020-07-16-122837/rhcos-4.6.0-0.nightly-2020-07-16-122837-x86_64-openstack.x86_64.qcow2.gz"
   307      local_cmp_image_path: "/tmp/rhcos.qcow2.gz"
   308      local_image_path: "/tmp/rhcos.qcow2"
   309  ```
   310  
   311  Please refer to the [specific paragraph](#rhcos-image) to learn more about which version of RHCOS image to choose and where to find it.
   312  
   313  ### Profiles and VMs
The last and most important part of the [inventory.yml](../../../upi/ovirt/inventory.yml) is the `profiles` and `vms` definition, which uses the capabilities
offered by the ovirt.vm-infra role, whose documentation can be found [here](https://github.com/oVirt/ovirt-ansible-vm-infra).
In the following paragraphs we'll explain the meaning of the basic parameters from the OCP point of view.
   317  
   318  #### Profiles section
This section is mainly composed of two variables, `control_plane` and `compute`, that define the two different profiles used respectively
for masters (and bootstrap) and for workers.
   321  
   322  ```YAML
   323      control_plane:
   324        cluster: "{{ ovirt_cluster }}"
   325        memory: 16GiB
   326        sockets: 4
   327        cores: 1
   328        template: rhcos_tpl
   329        operating_system: "rhcos_x64"
   330        type: high_performance
   331        disks:
   332        - size: 120GiB
   333          name: os
   334          interface: virtio_scsi
   335          storage_domain: depot_nvme
   336        nics:
   337        - name: nic1
   338          network: lab
   339          profile: lab
   340  
   341      compute:
   342        cluster: "{{ ovirt_cluster }}"
   343        memory: 16GiB
   344        sockets: 4
   345        cores: 1
   346        template: worker_rhcos_tpl
   347        operating_system: "rhcos_x64"
   348        type: high_performance
   349        disks:
   350        - size: 120GiB
   351          name: os
   352          interface: virtio_scsi
   353          storage_domain: depot_nvme
   354        nics:
   355        - name: nic1
   356          network: lab
   357          profile: lab
   358  ```
   359  
The user can customize the parameters of both profiles according to their needs and the minimum requirements.
   361  
   362  * `cluster`: it's already set according to the value of the variable in the [General Section](#general-section).
   363  * `memory, sockets, cores`: mandatory parameters necessary to define the common specs of the VMs generated from this profile.
   364  * `template`: name of the template that the virtual machine will be based on (refer to [this](#create-templates-and-vms) for further information about templates).
* `operating_system`: sets the VM OS type. With oVirt/RHV 4.4 it's mandatory to use the value `rhcos_x64` to allow the ignition config
to be correctly passed to the VM.
* `type`: the oVirt/RHV VM type that the VM will have once created (`high_performance` in the examples above).
* `disks`: in this section the specs of the disks must be set according to the basic requirements of OCP in terms of capacity and storage
performance. It's possible to choose different storage domains for control_plane and compute nodes.
* `nics`: defines specs like the name of the NIC and the network that the VMs will use. The virtual network interface profile can also be specified.
The MAC address will be taken from the oVirt/RHV MAC pool.
   372  
   373  #### VMs section
This last section of the [inventory.yml](../../../upi/ovirt/inventory.yml) defines the `vms` variable, containing all the node instances that the user plans to create
to deploy the OCP cluster (remember that there are minimum requirements in terms of the number of master and worker nodes).

It lists all the VMs that will be created and their role, expressed by the `ocp_type`.
   378  
   379  ```YAML
   380    vms:
   381      - name: "{{ metadata.infraID }}-bootstrap"
   382        ocp_type: bootstrap
   383        profile: "{{ control_plane }}"
   384        type: server
   385      - name: "{{ metadata.infraID }}-master0"
   386        ocp_type: master
   387        profile: "{{ control_plane }}"
   388      - name: "{{ metadata.infraID }}-master1"
   389        ocp_type: master
   390        profile: "{{ control_plane }}"
   391      - name: "{{ metadata.infraID }}-master2"
   392        ocp_type: master
   393        profile: "{{ control_plane }}"
   394      - name: "{{ metadata.infraID }}-worker0"
   395        ocp_type: worker
   396        profile: "{{ compute }}"
   397      - name: "{{ metadata.infraID }}-worker1"
   398        ocp_type: worker
   399        profile: "{{ compute }}"
   400  ```
   401  
As you can see above, the `vms` variable is basically defined by a list of elements, each one with at least three mandatory attributes:

* `name`: name of the virtual machine to create.
* `ocp_type`: the role of the virtual machine in the OCP cluster (possible values are `bootstrap`, `master`, `worker`).
* `profile`: name of the profile (`control_plane` or `compute`) from which to inherit common specs.

Additional attributes can be specified to override the ones defined in the inherited profile:

* `type`: re-defined as `server` for the `bootstrap` VM.
   411  
It's also possible to use all the attributes documented in the [oVirt.vm-infra role](https://github.com/oVirt/ovirt-ansible-vm-infra)
(e.g. a fixed MAC address for each VM, which could help to assign permanent IPs through a DHCP server).
   414  
   415  **Note**:
Looking at the `vms` attribute `name`, you can see that we are using the variable `metadata.infraID`, whose value is
obtained by parsing the `metadata.json` file generated by the command `openshift-install create ignition-configs` (read more about it
[here](#ignition-configs)).
There's a specific set of Ansible tasks ([common-auth.yml](https://github.com/openshift/installer/blob/master/upi/ovirt/common-auth.yml)) included in all
the UPI playbooks that contains the code to read the `infraID` from the specific file located in the `ocp.assets_dir`:
   421  
   422  ```YAML
   423  ---
   424  - name: include metadata.json vars
   425    include_vars:
   426      file: "{{ ocp.assets_dir }}/metadata.json"
   427      name: metadata
   428    
   429    ...
   430  
   431  
   432  ```
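
If you want to check the value yourself, once the ignition configs have been generated (see [Ignition configs](#ignition-configs)) the same `infraID` can be read from the assets directory, for example with `jq` (assumed to be installed) or with plain Python:

```sh
# Read the infraID that the playbooks will use as prefix for VM and template names.
jq -r .infraID "$ASSETS_DIR/metadata.json"
# or, without jq:
python3 -c 'import json, os; print(json.load(open(os.environ["ASSETS_DIR"] + "/metadata.json"))["infraID"])'
```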
   433  
   434  ## Install config
Run `openshift-install` to create the initial `install-config`, using as assets directory the same one specified in
the [inventory.yml](../../../upi/ovirt/inventory.yml) (`ocp.assets_dir`).
   437  
   438  ```sh
   439  $ openshift-install create install-config --dir $ASSETS_DIR
   440  ? SSH Public Key /home/user/.ssh/id_dsa.pub
   441  ? Platform <ovirt>
   442  ? Engine FQDN[:PORT] [? for help] <engine.fqdn>
   443  ? Enter ovirt-engine username <admin@internal>
   444  ? Enter password <******>
   445  ? oVirt cluster <cluster>
   446  ? oVirt storage <storage>
   447  ? oVirt network <net> 
   448  ? Internal API virtual IP <172.16.0.252>
   449  ? Ingress virtual IP <172.16.0.251>
   450  ? Base Domain <example.org>
   451  ? Cluster Name <ocp4>
   452  ? Pull Secret [? for help] <********>
   453  ```
   454  
*Internal API* and *Ingress* are the IPs added following the DNS instructions above:
   456  
   457  - `api.ocp4.example.org`: 172.16.0.252
   458  - `*.apps.ocp4.example.org`: 172.16.0.251 
   459  
*Cluster Name* (`ocp4`) and *Base Domain* (`example.org`) joined together form the FQDN of the OCP cluster,
used to expose the API interface (`https://api.ocp4.example.org:6443/`)
and the newly created applications (e.g. `https://console-openshift-console.apps.ocp4.example.org`).
   463  
   464  You can obtain a new Pull secret from [here](https://console.redhat.com/openshift/install/pull-secret).
   465  
The result of this first step is the creation of an `install-config.yaml` in the specified assets directory:
   467  
   468  ```sh
   469  $ tree
   470  .
   471  └── wrk
   472      └── install-config.yaml
   473  ```
   474  
The file `$HOME/.ovirt/ovirt-config.yaml` was also created for you by `openshift-install`; it contains all the connection
parameters needed to reach the oVirt/RHV engine and use its REST API.
   477  
   478  **NOTE:**
Some of the parameters added during the `openshift-install` workflow, in particular the `Internal API virtual IP` and
`Ingress virtual IP`, will not be used because they are already configured in your infrastructure DNS (see the [DNS](#dns) section).
Other parameters like `oVirt cluster`, `oVirt storage` and `oVirt network` will be used as specified in the [inventory.yml](../../../upi/ovirt/inventory.yml)
and removed from the `install-config.yaml`, together with the previously mentioned virtual IPs, using a script shown in a
[section below](#set-platform-to-none).
   484  
   485  ### Set compute replicas to zero
The Machine API will not be used by the UPI to create nodes; we'll create compute nodes explicitly with Ansible playbooks.
Therefore we'll set the number of compute replicas to zero using the following Python script:
   488  
   489  ```sh
   490  $ python3 -c 'import os, yaml
   491  path = "%s/install-config.yaml" % os.environ["ASSETS_DIR"]
   492  conf = yaml.safe_load(open(path))
   493  conf["compute"][0]["replicas"] = 0
   494  open(path, "w").write(yaml.dump(conf, default_flow_style=False))'
   495  ```
   496  
   497  **NOTE**: All the Python snippets in this document work with both Python 3 and Python 2.
   498  
   499  ### Set machine network
The OpenShift installer sets a default IP range for nodes and we need to change it according to our infrastructure.
We'll set the range to `172.16.0.0/16` using the following Python script:
   502  
   503  ```sh
   504  $ python3 -c 'import os, yaml
   505  path = "%s/install-config.yaml" % os.environ["ASSETS_DIR"]
   506  conf = yaml.safe_load(open(path))
   507  conf["networking"]["machineNetwork"][0]["cidr"] = "172.16.0.0/16"
   508  open(path, "w").write(yaml.dump(conf, default_flow_style=False))'
   509  ```
   510  
   511  ### Set platform to none
The UPI for oVirt/RHV is similar to the bare-metal installation process and for now we don't need the specific ovirt
platform section in the `install-config.yaml`; all the settings needed are specified in the [inventory.yml](../../../upi/ovirt/inventory.yml).
We'll remove the section:
   515  
   516  ```sh
   517  $ python3 -c 'import os, yaml
   518  path = "%s/install-config.yaml" % os.environ["ASSETS_DIR"]
   519  conf = yaml.safe_load(open(path))
   520  platform = conf["platform"]
   521  del platform["ovirt"]
   522  platform["none"] = {}
   523  open(path, "w").write(yaml.dump(conf, default_flow_style=False))'
   524  ```
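
After the three edits above, the relevant fields of the `install-config.yaml` can be double-checked with a similar (purely optional) one-liner:

```sh
$ python3 -c 'import os, yaml
path = "%s/install-config.yaml" % os.environ["ASSETS_DIR"]
conf = yaml.safe_load(open(path))
print(conf["compute"][0]["replicas"])                   # expected: 0
print(conf["networking"]["machineNetwork"][0]["cidr"])  # expected: 172.16.0.0/16
print(list(conf["platform"].keys()))                    # expected: ["none"]
'
```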
   525  
   526  ## Manifests
Editing manifests is required for the UPI, and to generate them we can again use the binary installer:
   528  
   529  ```sh
   530  $ openshift-install create manifests --dir $ASSETS_DIR
   531  ```
   532  ```sh
   533  $ tree
   534  .
   535  └── wrk
   536      ├── manifests
   537      │   ├── 04-openshift-machine-config-operator.yaml
   538      │   ├── cluster-config.yaml
   539      │   ├── cluster-dns-02-config.yml
   540      │   ├── cluster-infrastructure-02-config.yml
   541      │   ├── cluster-ingress-02-config.yml
   542      │   ├── cluster-network-01-crd.yml
   543      │   ├── cluster-network-02-config.yml
   544      │   ├── cluster-proxy-01-config.yaml
   545      │   ├── cluster-scheduler-02-config.yml
   546      │   ├── cvo-overrides.yaml
   547      │   ├── kube-cloud-config.yaml
   548      │   ├── kube-system-configmap-root-ca.yaml
   549      │   ├── machine-config-server-tls-secret.yaml
   550      │   └── openshift-config-secret-pull-secret.yaml
   551      └── openshift
   552          ├── 99_kubeadmin-password-secret.yaml
   553          ├── 99_openshift-cluster-api_master-user-data-secret.yaml
   554          ├── 99_openshift-cluster-api_worker-user-data-secret.yaml
   555          ├── 99_openshift-machineconfig_99-master-ssh.yaml
   556          ├── 99_openshift-machineconfig_99-worker-ssh.yaml
   557          └── openshift-install-manifests.yaml
   558  ```
   559  
   560  The command above will write manifests consuming the `install-config.yaml` and will show a warning message.
   561  If you plan on reusing the `install-config.yaml` file, back it up before you generate manifests.
   562  
   563  ```sh
   564  INFO Consuming Install Config from target directory 
   565  WARNING Making control-plane schedulable by setting MastersSchedulable to true for Scheduler cluster settings 
   566  ```
   567  
   568  ### Set control-plane nodes unschedulable
Setting compute replicas to zero makes control-plane nodes schedulable, which is something we don't want for now
(router pods can also run on control-plane nodes, but there are some Kubernetes limitations that would prevent those pods
from being reachable by the ingress load balancer).
Setting the control plane as unschedulable means modifying `manifests/cluster-scheduler-02-config.yml`, setting
`mastersSchedulable` to `False`.
   574  
   575  ```sh 
   576  $ python3 -c 'import os, yaml
   577  path = "%s/manifests/cluster-scheduler-02-config.yml" % os.environ["ASSETS_DIR"]
   578  data = yaml.safe_load(open(path))
   579  data["spec"]["mastersSchedulable"] = False
   580  open(path, "w").write(yaml.dump(data, default_flow_style=False))'
   581  ```
   582  
   583  ## Ignition configs
The next step builds the [Ignition](https://coreos.com/ignition/docs/latest/) files from the manifests just modified.
Ignition files are fetched by the RHCOS machines' initramfs to perform the configuration that brings up a live final node.
We will again use the binary installer:
   587  
   588  ```sh 
   589  $ openshift-install create ignition-configs --dir $ASSETS_DIR
   590  ```
   591  
   592  ```sh
   593  $ tree
   594  .
   595  └── wrk
   596      ├── auth
   597      │   ├── kubeadmin-password
   598      │   └── kubeconfig
   599      ├── bootstrap.ign
   600      ├── master.ign
   601      ├── metadata.json
   602      └── worker.ign
   603  ```
Besides the ignition files, the installer generated:
   605  
   606  - `auth` folder containing the admin credentials necessary to
   607    connect to the cluster via the `oc` or `kubectl` CLI utilities.
   608  - `metadata.json` with information like the OCP cluster name, OCP cluster ID and the `infraID`
   609    (generated for the current running installation).
   610  
The `infraID` will be used by the UPI Ansible playbooks as a prefix for the VMs created during the installation
process, avoiding name clashes in case of multiple installations in the same oVirt/RHV cluster.
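
Optionally, the generated ignition configs can be quickly inspected to confirm they are well-formed JSON (`jq` is assumed to be installed):

```sh
# Optional: print the ignition spec version of each generated config.
jq .ignition.version "$ASSETS_DIR/bootstrap.ign" "$ASSETS_DIR/master.ign" "$ASSETS_DIR/worker.ign"
```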
   613  
**Note:** certificates contained in the ignition config files expire after 24 hours. You must complete the cluster installation
and keep the cluster running for 24 hours in a non-degraded state to ensure that the first certificate rotation has finished.
   616  
   617  
   618  ## Create templates and VMs
After checking that all the variables in the [inventory.yml](../../../upi/ovirt/inventory.yml) fit our needs, we can run the first of our Ansible provisioning
playbooks:
   621  
   622  ```sh
   623  $ ansible-playbook -i inventory.yml create-templates-and-vms.yml
   624  ```
   625  
This playbook uses the connection parameters for the oVirt/RHV engine stored in `$HOME/.ovirt/ovirt-config.yaml`,
and also reads the `metadata.json` from the assets directory.
   628  
   629  According to the variables
   630  
   631  ```YAML
   632    rhcos:
   633      image_url: "https://mirror.openshift.com/pub/openshift-v4/dependencies/rhcos/pre-release/4.6.0-0.nightly-2020-07-16-122837/rhcos-4.6.0-0.nightly-2020-07-16-122837-x86_64-openstack.x86_64.qcow2.gz"
   634      local_cmp_image_path: "/tmp/rhcos.qcow2.gz"
   635      local_image_path: "/tmp/rhcos.qcow2"
   636  ```
   637  
the RHCOS image will be downloaded (if not already existing locally), stored locally and extracted, to be uploaded to the oVirt/RHV node
and used for template creation.
The user can check the RHCOS image for the OpenShift version they want to use [here](https://mirror.openshift.com/pub/openshift-v4/dependencies/rhcos/).
   641  
Templates will be created according to the names specified in the [inventory.yml](../../../upi/ovirt/inventory.yml) for the `control_plane` and `compute` profiles (if the
names differ, two different templates will be created).
If the user wants to have different templates for each OCP installation in the same cluster, it is possible to customize the template name
in the [inventory.yml](../../../upi/ovirt/inventory.yml) by prepending the `infraID` (as is done for the VMs' names).
   646  
   647  ```YAML
   648    control_plane:
   649      cluster: "{{ ovirt_cluster }}"
   650      memory: 16GiB
   651      sockets: 4
   652      cores: 1
   653      template: "{{ metadata.infraID }}-rhcos_tpl"
   654      operating_system: "rhcos_x64"
   655      ...
   656  ```
   657  
At the end of the execution the specified VMs will be created and left in stopped mode. This allows the user to fetch from the
VMs any information that can help in configuring other infrastructure elements (e.g. getting MAC addresses to feed a DHCP server to assign permanent IPs).
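
As an example of such a check, the newly created (still stopped) VMs can be listed through the oVirt/RHV REST API; the engine FQDN and credentials below are placeholders and `jq` is assumed to be installed:

```sh
# Hypothetical example: list the newly created VMs through the engine REST API.
INFRA_ID=$(jq -r .infraID "$ASSETS_DIR/metadata.json")
curl -ks -u 'admin@internal:<password>' \
  "https://<engine.fqdn>/ovirt-engine/api/vms?search=name%3D${INFRA_ID}-*" | grep '<name>'
```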
   660  
   661  ## Bootstrap
   662  
   663  ```sh
   664  $ ansible-playbook -i inventory.yml bootstrap.yml
   665  ```
The playbook starts the bootstrap VM, passing it the `bootstrap.ign` ignition file contained in the assets directory. That will allow the bootstrap
node to configure itself and be ready to serve the ignition files for the master nodes.
The user can check the console inside the oVirt/RHV UI or connect to the VM via SSH.
Running the following command from inside the bootstrap VM makes it possible to closely monitor the bootstrap process:
   670  
   671  ```sh
$ ssh core@<bootstrap.ip>
   673  [core@ocp4-lk6b4-bootstrap ~]$ journalctl -b -f -u release-image.service -u bootkube.service
   674  ```
   675  
   676  ## Master nodes
   677  
   678  ```sh
   679  $ ansible-playbook -i inventory.yml masters.yml
   680  ```
   681  
The `masters.yml` playbook will start our control plane, made of three masters (but it can be customized), passing the `master.ign` ignition file to each of the VMs.
The `master.ign` ignition file contains a directive that instructs the masters to fetch their ignition config from the URL
   684  
   685  ```sh
   686  https://api-int.ocp4.example.org:22623/config/master
   687  ```
   688  
which is targeted by the load balancer that manages the traffic on port `22623` (accessible only inside the cluster), driving it to the masters and the bootstrap.
   690  
   691  ## Wait for Control Plane
   692  The user can monitor the control-plane bootstrap process with the following command:
   693  
   694  ```sh
   695  $ openshift-install wait-for bootstrap-complete --dir $ASSETS_DIR
   696  ```
   697  
   698  After some time the output of the command will be the following
   699  
   700  ```sh
   701  INFO API v1.18.3+b74c5ed up
   702  INFO Waiting up to 40m0s for bootstrapping to complete... 
   703  ```
   704  
Once all the pods on the master nodes and etcd are up and running, the installer will show the following output:
   706  
   707  ```sh
   708  INFO It is now safe to remove the bootstrap resources
   709  ```
   710  
   711  ## OpenShift API
The OpenShift API can be accessed via `oc` or `kubectl` using the admin credentials contained in the assets directory
in the file `auth/kubeconfig`:
   714  
   715  ```sh
   716  $ export KUBECONFIG=$ASSETS_DIR/auth/kubeconfig
   717  $ oc get nodes
   718  $ oc get pods -A
   719  ```
   720  ## Retire Bootstrap
   721  After the `wait-for` command says that the bootstrap process is complete, it is possible to remove the bootstrap VM
   722  
   723  ```sh
   724  $ ansible-playbook -i inventory.yml retire-bootstrap.yml
   725  ```
   726  
and the user can also remove it from the load balancer directives.
   728  
   729  ## Worker nodes
   730  ```sh
   731  $ ansible-playbook -i inventory.yml workers.yml
   732  ```
   733  
This is similar to what we did for the masters, but in this case the workers won't automatically join the cluster: we'll need to
approve their respective pending CSRs (Certificate Signing Requests).
   736  
   737  ### Approve CSRs
CSRs for nodes joining the cluster will need to be approved by the administrator. The following command helps to list
the pending requests:
   740  
   741  ```sh
   742  $ oc get csr -A
   743  ```
   744  
   745  Eventually one pending CSR per node will be shown
   746  
   747  ```sh
   748  NAME        AGE    SIGNERNAME                                    REQUESTOR                                                                   CONDITION
   749  csr-2lnxd   63m    kubernetes.io/kubelet-serving                 system:node:ocp4-lk6b4-master0.ocp4.example.org                             Approved,Issued
   750  csr-hff4q   64m    kubernetes.io/kube-apiserver-client-kubelet   system:serviceaccount:openshift-machine-config-operator:node-bootstrapper   Approved,Issued
   751  csr-hsn96   60m    kubernetes.io/kubelet-serving                 system:node:ocp4-lk6b4-master2.ocp4.example.org                             Approved,Issued
   752  csr-m724n   6m2s   kubernetes.io/kube-apiserver-client-kubelet   system:serviceaccount:openshift-machine-config-operator:node-bootstrapper   Pending
   753  csr-p4dz2   60m    kubernetes.io/kube-apiserver-client-kubelet   system:serviceaccount:openshift-machine-config-operator:node-bootstrapper   Approved,Issued
   754  csr-t9vfj   60m    kubernetes.io/kubelet-serving                 system:node:ocp4-lk6b4-master1.ocp4.example.org                             Approved,Issued
   755  csr-tggtr   61m    kubernetes.io/kube-apiserver-client-kubelet   system:serviceaccount:openshift-machine-config-operator:node-bootstrapper   Approved,Issued
   756  csr-wcbrf   7m6s   kubernetes.io/kube-apiserver-client-kubelet   system:serviceaccount:openshift-machine-config-operator:node-bootstrapper   Pending
   757  ```
   758  
   759  To filter and watch pending CSRs the following command can be used
   760  
   761  ```sh
   762  $ watch "oc get csr -A | grep pending -i"
   763  ```
which refreshes the output every two seconds:
   765  ```sh
   766  Every 2.0s: oc get csr -A | grep pending -i
   767  
   768  csr-m724n   10m   kubernetes.io/kube-apiserver-client-kubelet   system:serviceaccount:openshift-machine-config-operator:node-bootstrapper   Pending
   769  csr-wcbrf   11m   kubernetes.io/kube-apiserver-client-kubelet   system:serviceaccount:openshift-machine-config-operator:node-bootstrapper   Pending
   770  ```
   771  
   772  Every `Pending` request should be inspected
   773  
   774  ```sh
   775  $ oc describe csr csr-m724n
   776  Name:               csr-m724n
   777  Labels:             <none>
   778  Annotations:        <none>
   779  CreationTimestamp:  Sun, 19 Jul 2020 15:59:37 +0200
   780  Requesting User:    system:serviceaccount:openshift-machine-config-operator:node-bootstrapper
   781  Signer:             kubernetes.io/kube-apiserver-client-kubelet
   782  Status:             Pending
   783  Subject:
   784           Common Name:    system:node:ocp4-lk6b4-worker1.ocp4.example.org
   785           Serial Number:  
   786           Organization:   system:nodes
   787  Events:  <none>
   788  ```
   789  and finally approved
   790  
   791  ```sh
   792  $ oc adm certificate approve csr-m724n
   793  ```
   794  
After approving the first requests, another CSR for each worker will be issued; these must also be approved for the worker nodes not only
to join the cluster, but also to become `Ready` and have pods scheduled on them.
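
Since this has to be done once per worker and then again for the serving certificates, a commonly used shortcut is to approve every currently pending CSR in one pass (only do this if you expect all pending requests to be legitimate):

```sh
# Approve all currently pending CSRs in a single pass.
oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' \
  | xargs --no-run-if-empty oc adm certificate approve
```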
   797  
   798  ## Wait for Installation Complete
The following command can now be run to follow the installation process until it completes:
   800  
   801  ```sh
   802  $ openshift-install wait-for install-complete --dir $ASSETS_DIR --log-level debug
   803  ```
   804  
Eventually it will output:
   806  
   807  - URL to reach the OpenShift Console (web UI).
   808  - Username and password for the admin login.
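
The same information can also be retrieved afterwards from the cluster and from the assets directory, for example:

```sh
# Console URL and the kubeadmin password generated by the installer.
oc whoami --show-console
cat "$ASSETS_DIR/auth/kubeadmin-password"
```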
   809  
   810  
   811  ## Destroy OpenShift cluster
   812  
```sh
   814  $ ansible-playbook -i inventory.yml \
   815      retire-bootstrap.yml \
   816      retire-masters.yml   \
   817      retire-workers.yml
   818  ```
   819  
Removing the added DNS records, the load balancers and any other infrastructure configuration is left to the user.