# OpenStack IPI Networking Infrastructure

The `OpenStack` platform installer uses an internal networking solution identical to
the [baremetal networking infrastructure](../baremetal/networking-infrastructure.md).
For an overview of the quotas required, and the entrypoints created when
you build an OpenStack IPI cluster, see the [user docs](../../user/openstack/README.md).

## Load-balanced control plane access

Access to the Kubernetes API (port 6443) from clients both external
and internal to the cluster, and access to Ignition configs (port 22623) from
clients within the cluster, are load-balanced across the control plane machines.
These services are initially hosted by the bootstrap node until the control
plane is up; control then pivots to the control plane machines. That process
is described in further detail in the [Virtual IPs section](#virtual-ips).
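
To make this concrete, the sketch below shows roughly what such an HAProxy
configuration could look like. This is a hand-written illustration, not the
configuration the MCO actually renders; the bind addresses, server names, and
IPs are all hypothetical.

```
# Hypothetical HAProxy sketch of the load balancing described above.
# 192.0.2.5 stands in for the API VIP; master addresses are made up.
defaults
    mode    tcp
    timeout connect 10s
    timeout client  1m
    timeout server  1m

# Kubernetes API (6443): reachable from inside and outside the cluster.
frontend api
    bind 192.0.2.5:6443
    default_backend masters-api

backend masters-api
    balance roundrobin
    option  httpchk GET /readyz HTTP/1.0
    server master-0 192.0.2.10:6443 check check-ssl verify none
    server master-1 192.0.2.11:6443 check check-ssl verify none
    server master-2 192.0.2.12:6443 check check-ssl verify none

# Machine config server (22623): serves Ignition configs to machines
# within the cluster.
frontend machine-config
    bind 192.0.2.5:22623
    default_backend masters-mcs

backend masters-mcs
    balance roundrobin
    server master-0 192.0.2.10:22623 check
    server master-1 192.0.2.11:22623 check
    server master-2 192.0.2.12:22623 check
```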

## Virtual IPs

We use virtual IP addresses (VIPs), managed by Keepalived, to provide highly
available access to essential APIs and services. For more information on how this
works, please read about what [Keepalived is](https://www.keepalived.org/) and
about the underlying [VRRP
protocol](https://en.wikipedia.org/wiki/Virtual_Router_Redundancy_Protocol)
that it runs. In our current implementation, we manage two highly available
VIPs: the Ingress VIP handles requests to services managed by OpenShift,
and the API VIP handles requests to the OpenShift API. The VIP addresses are
chosen and validated against the nodes' subnet in the OpenShift installer; however,
the services we run to manage the internal networking infrastructure, such as
Keepalived, DNS, and load balancers, are managed by the
[machine-config-operator (MCO)](https://github.com/openshift/machine-config-operator/tree/master/docs).
The MCO is configured to run static pods on the bootstrap, master, and worker
nodes; these pods run our internal networking infrastructure. Files run on the
bootstrap node can be found
[here](https://github.com/openshift/machine-config-operator/tree/master/manifests/on-prem).
Files run on both master and worker nodes can be found
[here](https://github.com/openshift/machine-config-operator/tree/master/templates/common/openstack/files).
Files run only on master nodes can be found
[here](https://github.com/openshift/machine-config-operator/tree/master/templates/master).
Lastly, files run only on worker nodes can be found
[here](https://github.com/openshift/machine-config-operator/tree/master/templates/worker).
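
As a rough illustration, a Keepalived configuration with one VRRP instance per
VIP could look like the sketch below. The instance names, router IDs, interface,
and addresses are hypothetical, and the real configuration is rendered from the
MCO templates linked above; the `chk_api` and `chk_ingress` health-check scripts
it references are sketched in the walkthrough section below.

```
# Hypothetical sketch: one VRRP instance per VIP.
vrrp_instance cluster_API {
    state BACKUP             # every node starts as BACKUP; VRRP elects the holder
    interface ens3
    virtual_router_id 50
    priority 40              # base priority; health checks add weight on top
    advert_int 1
    virtual_ipaddress {
        192.0.2.5/24         # API VIP (made-up address)
    }
    track_script {
        chk_api
    }
}

vrrp_instance cluster_INGRESS {
    state BACKUP
    interface ens3
    virtual_router_id 51
    priority 40
    advert_int 1
    virtual_ipaddress {
        192.0.2.7/24         # Ingress VIP (made-up address)
    }
    track_script {
        chk_ingress
    }
}
```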

## Infrastructure Walkthrough

The bootstrap node is responsible for running temporary networking infrastructure
while the master nodes are still coming up. The bootstrap node runs a CoreDNS
instance as well as Keepalived. While the bootstrap node is up, it has the highest
priority for hosting the API VIP.
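
One plausible way to express that initial priority, continuing the hypothetical
Keepalived sketch from the previous section, is simply a higher base priority on
the bootstrap node than on the masters; the exact values here are made up.

```
# Hypothetical bootstrap-side counterpart of the cluster_API instance above.
# Its base priority beats the masters' base priority (40), so the bootstrap
# node holds the API VIP until a master's passing health check outweighs it.
vrrp_instance cluster_API {
    state BACKUP
    interface ens3
    virtual_router_id 50
    priority 50              # higher than the masters' base priority
    advert_int 1
    virtual_ipaddress {
        192.0.2.5/24         # API VIP (made-up address)
    }
}
```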

The master nodes run DHCP, HAProxy, CoreDNS, and Keepalived. HAProxy load-balances
incoming API requests across all running masters, and also runs a stats and
health-check server. Keepalived manages both VIPs on the masters, where each
master has an equal chance of initially being assigned one of the VIPs.

Initially, the bootstrap node has the highest priority for hosting the API VIP,
so API requests are directed there at startup. Meanwhile, the master nodes work
to bring up the control plane and the OpenShift API. Keepalived implements
periodic health checks for each VIP that are used to determine the weight
assigned to each server; the server with the highest weight is assigned the VIP.
Keepalived has two separate health checks that attempt to reach the OpenShift
API and CoreDNS on the localhost of each master node. When the API on a master
node is reachable, Keepalived substantially increases that node's weight for the
VIP, making its priority higher than that of the bootstrap node and of any node
that does not yet have that service running. This ensures that nodes incapable
of serving DNS records or the OpenShift API are never assigned the corresponding
VIP. The Ingress VIP is also managed by a health check, one that queries the OCP
router's HAProxy health check rather than the HAProxy we stand up in static pods
for the API. This makes sure that the Ingress VIP points to a server that is
running the OpenShift Ingress Operator resources needed to enable external
access to the node.
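
The health checks described above could be expressed as Keepalived
`vrrp_script` blocks like the following sketch, which defines the `chk_api` and
`chk_ingress` scripts referenced by the `track_script` blocks in the earlier
sketch. The commands, intervals, and weights are made up for illustration (a
similar script would probe CoreDNS on localhost); the key point is that a
passing check adds enough weight to outbid the bootstrap node's base priority.

```
# Hypothetical health checks. A passing script adds `weight` to the node's
# priority for every VRRP instance that tracks it.
vrrp_script chk_api {
    # Is the OpenShift API answering on this node?
    script "/usr/bin/curl -o /dev/null -kLfs https://localhost:6443/readyz"
    interval 2
    weight 20                # 40 + 20 outbids the bootstrap's base priority
    rise 3
    fall 2
}

vrrp_script chk_ingress {
    # Is the OCP router's HAProxy healthy on this node? (1936 is a
    # conventional HAProxy stats/health port; the real port may differ.)
    script "/usr/bin/curl -o /dev/null -Lfs http://localhost:1936/healthz"
    interval 2
    weight 20
    rise 2
    fall 2
}
```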

The worker nodes run DHCP, CoreDNS, and Keepalived. On workers, Keepalived is
only responsible for managing the Ingress VIP. Its algorithm is the same as the
one run on the masters.