     1  # Install: User-Provided Infrastructure
     2  The steps for performing a user-provided infrastructure install are outlined here. Several
     3  [Deployment Manager][deploymentmanager] templates are provided to assist in
     4  completing these steps or to help model your own. You are also free to create
     5  the required resources through other methods; the templates are just an
     6  example.
     7  
     8  ## Prerequisites
     9  * all prerequisites from [README](README.md)
    10  * the following binaries installed and in $PATH:
    11    * gcloud
    12    * gsutil
    13  * gcloud authenticated to an account with [additional](iam.md) roles:
    14    * Deployment Manager Editor
    15    * Service Account Key Admin
    16  * the following API Services enabled:
    17    * Cloud Deployment Manager V2 API (deploymentmanager.googleapis.com)
    18  
    19  ## Create Ignition configs
    20  The machines will be started manually. Therefore, it is required to generate
    21  the bootstrap and machine Ignition configs and store them for later steps.
    22  Use a [staged install](../overview.md#multiple-invocations) to enable desired customizations.
    23  
    24  ### Create an install config
    25  Create an install configuration as for [the usual approach](install.md#create-configuration).
    26  
    27  If you are installing into a [Shared VPC (XPN)][sharedvpc],
    28  skip this step and create the `install-config.yaml` manually using the documentation references/examples.
    29  The installer will not be able to access the public DNS zone in the host project for the base domain prompt.
    30  
    31  ```console
    32  $ openshift-install create install-config
    33  ? SSH Public Key /home/user_id/.ssh/id_rsa.pub
    34  ? Platform gcp
    35  ? Project ID example-project
    36  ? Region us-east1
    37  ? Base Domain example.com
    38  ? Cluster Name openshift
    39  ? Pull Secret [? for help]
    40  ```
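If you are creating `install-config.yaml` by hand (for example, for a [Shared VPC (XPN)][sharedvpc] install where the installer cannot list the host project's public DNS zones), a minimal sketch is shown below; every value is a placeholder you must replace, and additional fields (such as pre-existing network settings) are described in the install-config documentation.

```console
$ cat <<EOF >install-config.yaml
apiVersion: v1
baseDomain: example.com
metadata:
  name: openshift
controlPlane:
  name: master
  replicas: 3
compute:
- name: worker
  replicas: 3
platform:
  gcp:
    projectID: example-project
    region: us-east1
pullSecret: '<your pull secret>'
sshKey: |
  ssh-rsa AAAA... user@example.com
EOF
```

The later `create manifests` step consumes this file directly.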
    41  
    42  ### Empty the compute pool (optional)
    43  If you do not want the cluster to provision compute machines, edit the resulting `install-config.yaml` to set `replicas` to 0 for the `compute` pool.
    44  
    45  ```sh
    46  python -c '
    47  import yaml;
    48  path = "install-config.yaml";
    49  data = yaml.full_load(open(path));
    50  data["compute"][0]["replicas"] = 0;
    51  open(path, "w").write(yaml.dump(data, default_flow_style=False))'
    52  ```
    53  
    54  ```console
    55  compute:
    56  - architecture: amd64
    57    hyperthreading: Enabled
    58    name: worker
    59    platform: {}
    60    replicas: 0
    61  ```
    62  
    63  ### Enable private cluster setting (optional)
    64  If you want to provision a private cluster, edit the resulting `install-config.yaml` to set `publish` to `Internal`.
    65  
    66  If you are installing into a [Shared VPC (XPN)][sharedvpc],
    67  `publish` must be set to `Internal`.
The installer will not be able to access the public DNS zone for the base domain in the host project, which is required for External clusters.
This can be reversed in a step [below](#enable-external-ingress-optional).
    70  
    71  ```sh
    72  python -c '
    73  import yaml;
    74  path = "install-config.yaml";
    75  data = yaml.full_load(open(path));
    76  data["publish"] = "Internal";
    77  open(path, "w").write(yaml.dump(data, default_flow_style=False))'
    78  ```
    79  
    80  ```console
    81  publish: Internal
    82  ```
    83  
    84  ### Create manifests
Create manifests to enable customizations that are not exposed via the install configuration.
    86  
    87  ```console
    88  $ openshift-install create manifests
    89  INFO Consuming "Install Config" from target directory
    90  ```
    91  
    92  ### Remove control plane machines
    93  Remove the control plane machines and machinesets from the manifests.
    94  We'll be providing those ourselves and don't want to involve [the machine-API operator][machine-api-operator].
    95  
    96  ```sh
    97  rm -f openshift/99_openshift-cluster-api_master-machines-*.yaml
    98  rm -f openshift/99_openshift-machine-api_master-control-plane-machine-set.yaml
    99  ```
   100  
   101  ### Remove compute machinesets (optional)
   102  If you do not want the cluster to provision compute machines, remove the compute machinesets from the manifests as well.
   103  
   104  ```sh
   105  rm -f openshift/99_openshift-cluster-api_worker-machineset-*.yaml
   106  ```
   107  
   108  ### Make control-plane nodes unschedulable
Currently, [emptying the compute pool](#empty-the-compute-pool-optional) makes control-plane nodes schedulable.
   110  But due to a [Kubernetes limitation][kubernetes-service-load-balancers-exclude-masters], router pods running on control-plane nodes will not be reachable by the ingress load balancer.
   111  Update the scheduler configuration to keep router pods and other workloads off the control-plane nodes:
   112  
   113  ```sh
   114  python -c '
   115  import yaml;
   116  path = "manifests/cluster-scheduler-02-config.yml";
   117  data = yaml.full_load(open(path));
   118  data["spec"]["mastersSchedulable"] = False;
   119  open(path, "w").write(yaml.dump(data, default_flow_style=False))'
   120  ```
   121  
   122  ```console
   123  spec:
   124    mastersSchedulable: false
   125  ```
   126  
   127  ### Remove DNS Zones (optional)
   128  If you don't want [the ingress operator][ingress-operator] to create DNS records on your behalf, remove the `privateZone` and `publicZone` sections from the DNS configuration.
   129  If you do so, you'll need to [add ingress DNS records manually](#add-the-ingress-dns-records) later on.
   130  
   131  If you are installing into a [Shared VPC (XPN)][sharedvpc],
   132  remove the `privateZone` section from the DNS configuration.
   133  The `publicZone` will not exist because of `publish: Internal` in `install-config.yaml`.
   134  Remove the `publicZone` line from the command to avoid an error.
   135  
   136  ```sh
   137  python -c '
   138  import yaml;
   139  path = "manifests/cluster-dns-02-config.yml";
   140  data = yaml.full_load(open(path));
   141  del data["spec"]["publicZone"];
   142  del data["spec"]["privateZone"];
   143  open(path, "w").write(yaml.dump(data, default_flow_style=False))'
   144  ```
   145  
   146  ```console
   147  spec:
   148    baseDomain: example.com
   149  ```
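If you are installing into a [Shared VPC (XPN)][sharedvpc] and followed the note above, `publicZone` does not exist and the command reduces to deleting only the `privateZone` section:

```sh
python -c '
import yaml;
path = "manifests/cluster-dns-02-config.yml";
data = yaml.full_load(open(path));
del data["spec"]["privateZone"];
open(path, "w").write(yaml.dump(data, default_flow_style=False))'
```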
   150  
   151  ### Update the cloud-provider manifest ([Shared VPC (XPN)][sharedvpc] only)
   152  If you are installing into a [Shared VPC (XPN)][sharedvpc],
   153  update the cloud provider configuration so it understands the network and subnetworks are in a different project (host project).
   154  Otherwise skip this step.
   155  
   156  ```sh
   157  export HOST_PROJECT="example-shared-vpc"
   158  export HOST_PROJECT_NETWORK_NAME="example-network"
   159  export HOST_PROJECT_COMPUTE_SUBNET_NAME="example-worker-subnet"
   160  
   161  sed -i "s/    subnetwork-name.*/    network-project-id = ${HOST_PROJECT}\\n    network-name    = ${HOST_PROJECT_NETWORK_NAME}\\n    subnetwork-name = ${HOST_PROJECT_COMPUTE_SUBNET_NAME}/" manifests/cloud-provider-config.yaml
   162  ```
   163  
   164  ```console
   165    config: |+
   166      [global]
   167      project-id      = example-project
   168      regional        = true
   169      multizone       = true
   170      node-tags       = opensh-ptzzx-master
   171      node-tags       = opensh-ptzzx-worker
   172      node-instance-prefix = opensh-ptzzx
   173      external-instance-groups-prefix = opensh-ptzzx
   174      network-project-id = example-shared-vpc
   175      network-name    = example-network
   176      subnetwork-name = example-worker-subnet
   177  ```
   178  
   179  ### Enable external ingress (optional)
If you are installing into a [Shared VPC (XPN)][sharedvpc]
and you set `publish: Internal` in the `install-config.yaml` but actually want `publish: External`,
then edit the `cluster-ingress-default-ingresscontroller.yaml` manifest to enable external ingress.
   183  
   184  ```sh
   185  python -c '
   186  import yaml;
   187  path = "manifests/cluster-ingress-default-ingresscontroller.yaml";
   188  data = yaml.full_load(open(path));
   189  data["spec"]["endpointPublishingStrategy"]["loadBalancer"]["scope"] = "External";
   190  open(path, "w").write(yaml.dump(data, default_flow_style=False))'
   191  ```
   192  
   193  ```console
   194   spec:
   195    endpointPublishingStrategy:
   196      loadBalancer:
   197        scope: External
   198  ```
   199  
   200  ### Create Ignition configs
   201  Now we can create the bootstrap Ignition configs.
   202  
   203  ```console
   204  $ openshift-install create ignition-configs
   205  ```
   206  
   207  After running the command, several files will be available in the directory.
   208  
   209  ```console
   210  $ tree
   211  .
   212  ├── auth
   213  │   └── kubeconfig
   214  ├── bootstrap.ign
   215  ├── master.ign
   216  ├── metadata.json
   217  └── worker.ign
   218  ```
   219  
   220  ### Extract infrastructure name from Ignition metadata
By default, the installer generates a unique cluster identifier composed of the
cluster name specified during the invocation of the installer and a short
string known internally as the infrastructure name. These values are seeded
in the initial manifests within the Ignition configuration. To use the
default generated `ignition-configs`, you must extract the infrastructure name.

For example, using `jq` on `metadata.json`:
   229  
   230  ```console
   231  $ jq -r .infraID metadata.json
   232  openshift-vw9j6
   233  ```
   234  
## Export variables to be used in the examples below
   236  
   237  ```sh
   238  export BASE_DOMAIN='example.com'
   239  export BASE_DOMAIN_ZONE_NAME='example'
   240  export NETWORK_CIDR='10.0.0.0/16'
   241  export MASTER_SUBNET_CIDR='10.0.0.0/17'
   242  export WORKER_SUBNET_CIDR='10.0.128.0/17'
   243  
   244  export KUBECONFIG=auth/kubeconfig
   245  export CLUSTER_NAME=$(jq -r .clusterName metadata.json)
   246  export INFRA_ID=$(jq -r .infraID metadata.json)
   247  export PROJECT_NAME=$(jq -r .gcp.projectID metadata.json)
   248  export REGION=$(jq -r .gcp.region metadata.json)
   249  export ZONE_0=$(gcloud compute regions describe ${REGION} --format=json | jq -r .zones[0] | cut -d "/" -f9)
   250  export ZONE_1=$(gcloud compute regions describe ${REGION} --format=json | jq -r .zones[1] | cut -d "/" -f9)
   251  export ZONE_2=$(gcloud compute regions describe ${REGION} --format=json | jq -r .zones[2] | cut -d "/" -f9)
   252  
   253  export MASTER_IGNITION=$(cat master.ign)
   254  export WORKER_IGNITION=$(cat worker.ign)
   255  ```
   256  
   257  ## Create the VPC
   258  Create the VPC, network, and subnets for the cluster.
   259  This step can be skipped if installing into a pre-existing VPC, such as a [Shared VPC (XPN)][sharedvpc].
   260  
   261  Copy [`01_vpc.py`](../../../upi/gcp/01_vpc.py) locally.
   262  
   263  Create a resource definition file: `01_vpc.yaml`
   264  
   265  ```console
   266  $ cat <<EOF >01_vpc.yaml
   267  imports:
   268  - path: 01_vpc.py
   269  resources:
   270  - name: cluster-vpc
   271    type: 01_vpc.py
   272    properties:
   273      infra_id: '${INFRA_ID}'
   274      region: '${REGION}'
   275      master_subnet_cidr: '${MASTER_SUBNET_CIDR}'
   276      worker_subnet_cidr: '${WORKER_SUBNET_CIDR}'
   277  EOF
   278  ```
   279  - `infra_id`: the infrastructure name (INFRA_ID above)
   280  - `region`: the region to deploy the cluster into (for example us-east1)
   281  - `master_subnet_cidr`: the CIDR for the master subnet (for example 10.0.0.0/17)
   282  - `worker_subnet_cidr`: the CIDR for the worker subnet (for example 10.0.128.0/17)
   283  
   284  Create the deployment using gcloud.
   285  
   286  ```sh
   287  gcloud deployment-manager deployments create ${INFRA_ID}-vpc --config 01_vpc.yaml
   288  ```
   289  
   290  ## Configure VPC variables
   291  Configure the variables based on the VPC created with `01_vpc.yaml`.
   292  If you are using a pre-existing VPC, such as a [Shared VPC (XPN)][sharedvpc], set these to the `.selfLink` of the targeted resources.
   293  
   294  ```sh
   295  export CLUSTER_NETWORK=$(gcloud compute networks describe ${INFRA_ID}-network --format json | jq -r .selfLink)
   296  export CONTROL_SUBNET=$(gcloud compute networks subnets describe ${INFRA_ID}-master-subnet --region=${REGION} --format json | jq -r .selfLink)
   297  export COMPUTE_SUBNET=$(gcloud compute networks subnets describe ${INFRA_ID}-worker-subnet --region=${REGION} --format json | jq -r .selfLink)
   298  ```
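If you are using a pre-existing VPC, such as a [Shared VPC (XPN)][sharedvpc], the same variables can instead point at the host project's resources. A sketch, reusing the variables exported in the cloud-provider step above; `HOST_PROJECT_CONTROL_SUBNET_NAME` is a hypothetical variable naming the control-plane subnet in the host project:

```sh
export CLUSTER_NETWORK=$(gcloud compute networks describe ${HOST_PROJECT_NETWORK_NAME} --project ${HOST_PROJECT} --format json | jq -r .selfLink)
export CONTROL_SUBNET=$(gcloud compute networks subnets describe ${HOST_PROJECT_CONTROL_SUBNET_NAME} --project ${HOST_PROJECT} --region=${REGION} --format json | jq -r .selfLink)
export COMPUTE_SUBNET=$(gcloud compute networks subnets describe ${HOST_PROJECT_COMPUTE_SUBNET_NAME} --project ${HOST_PROJECT} --region=${REGION} --format json | jq -r .selfLink)
```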
   299  
   300  ## Create DNS entries and load balancers
   301  Create the DNS zone and load balancers for the cluster.
   302  You can exclude the DNS zone or external load balancer by removing their associated section(s) from the `02_infra.yaml`.
   303  If you choose to exclude the DNS zone, you will need to create it some other way and ensure it is populated with the necessary records as documented below.
   304  
   305  If you are installing into a [Shared VPC (XPN)][sharedvpc],
   306  exclude the DNS section as it must be created in the host project.
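If you exclude the DNS section, or must create the zone in the Shared VPC host project, the private zone can be created directly with `gcloud`. A sketch, assuming the zone name used elsewhere in this document (for Shared VPC, add `--project` for the host project):

```sh
gcloud dns managed-zones create ${INFRA_ID}-private-zone \
  --description "Private zone for ${CLUSTER_NAME}" \
  --dns-name "${CLUSTER_NAME}.${BASE_DOMAIN}." \
  --visibility=private \
  --networks="${CLUSTER_NETWORK}"
```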
   307  
   308  Copy [`02_dns.py`](../../../upi/gcp/02_dns.py) locally.
   309  Copy [`02_lb_ext.py`](../../../upi/gcp/02_lb_ext.py) locally.
   310  Copy [`02_lb_int.py`](../../../upi/gcp/02_lb_int.py) locally.
   311  
   312  Create a resource definition file: `02_infra.yaml`
   313  
   314  ```console
   315  $ cat <<EOF >02_infra.yaml
   316  imports:
   317  - path: 02_dns.py
   318  - path: 02_lb_ext.py
   319  - path: 02_lb_int.py
   320  resources:
   321  - name: cluster-dns
   322    type: 02_dns.py
   323    properties:
   324      infra_id: '${INFRA_ID}'
   325      cluster_domain: '${CLUSTER_NAME}.${BASE_DOMAIN}'
   326      cluster_network: '${CLUSTER_NETWORK}'
   327  - name: cluster-lb-ext
   328    type: 02_lb_ext.py
   329    properties:
   330      infra_id: '${INFRA_ID}'
   331      region: '${REGION}'
   332  - name: cluster-lb-int
   333    type: 02_lb_int.py
   334    properties:
   335      cluster_network: '${CLUSTER_NETWORK}'
   336      control_subnet: '${CONTROL_SUBNET}'
   337      infra_id: '${INFRA_ID}'
   338      region: '${REGION}'
   339      zones:
   340      - '${ZONE_0}'
   341      - '${ZONE_1}'
   342      - '${ZONE_2}'
   343  EOF
   344  ```
   345  - `infra_id`: the infrastructure name (INFRA_ID above)
   346  - `region`: the region to deploy the cluster into (for example us-east1)
   347  - `cluster_domain`: the domain for the cluster (for example openshift.example.com)
   348  - `cluster_network`: the URI to the cluster network
   349  - `control_subnet`: the URI to the control subnet
   350  - `zones`: the zones to deploy the control plane instances into (for example us-east1-b, us-east1-c, us-east1-d)
   351  
   352  Create the deployment using gcloud.
   353  
   354  ```sh
   355  gcloud deployment-manager deployments create ${INFRA_ID}-infra --config 02_infra.yaml
   356  ```
   357  
   358  ## Configure infra variables
   359  If you excluded the `cluster-lb-ext` section above, then skip `CLUSTER_PUBLIC_IP`.
   360  
   361  ```sh
   362  export CLUSTER_IP=$(gcloud compute addresses describe ${INFRA_ID}-cluster-ip --region=${REGION} --format json | jq -r .address)
   363  export CLUSTER_PUBLIC_IP=$(gcloud compute addresses describe ${INFRA_ID}-cluster-public-ip --region=${REGION} --format json | jq -r .address)
   364  ```
   365  
   366  ## Add DNS entries
   367  The templates do not create DNS entries due to limitations of Deployment Manager, so we must create them manually.
   368  
   369  If you are installing into a [Shared VPC (XPN)][sharedvpc],
   370  use the `--account` and `--project` parameters to perform these actions in the host project.
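For example, append the flags to each `gcloud dns` command below; `HOST_PROJECT_ACCOUNT` is an assumed account authorized in the host project, and the zone name must match whatever was created there:

```sh
gcloud dns record-sets transaction start --zone ${INFRA_ID}-private-zone \
  --project ${HOST_PROJECT} --account ${HOST_PROJECT_ACCOUNT}
```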
   371  
   372  ### Add internal DNS entries
   373  
   374  ```sh
   375  if [ -f transaction.yaml ]; then rm transaction.yaml; fi
   376  gcloud dns record-sets transaction start --zone ${INFRA_ID}-private-zone
   377  gcloud dns record-sets transaction add ${CLUSTER_IP} --name api.${CLUSTER_NAME}.${BASE_DOMAIN}. --ttl 60 --type A --zone ${INFRA_ID}-private-zone
   378  gcloud dns record-sets transaction add ${CLUSTER_IP} --name api-int.${CLUSTER_NAME}.${BASE_DOMAIN}. --ttl 60 --type A --zone ${INFRA_ID}-private-zone
   379  gcloud dns record-sets transaction execute --zone ${INFRA_ID}-private-zone
   380  ```
   381  
   382  ### Add external DNS entries (optional)
   383  If you deployed external load balancers with `02_infra.yaml`, you can deploy external DNS entries.
   384  
   385  ```sh
   386  if [ -f transaction.yaml ]; then rm transaction.yaml; fi
   387  gcloud dns record-sets transaction start --zone ${BASE_DOMAIN_ZONE_NAME}
   388  gcloud dns record-sets transaction add ${CLUSTER_PUBLIC_IP} --name api.${CLUSTER_NAME}.${BASE_DOMAIN}. --ttl 60 --type A --zone ${BASE_DOMAIN_ZONE_NAME}
   389  gcloud dns record-sets transaction execute --zone ${BASE_DOMAIN_ZONE_NAME}
   390  ```
   391  
   392  ## Create firewall rules and IAM roles
   393  Create the firewall rules and IAM roles for the cluster.
You can exclude either of these by removing their associated section(s) from `03_security.yaml`.
If you choose to do so, you will need to create the required resources some other way.
Details about these resources can be found in the imported Python templates.
   397  
   398  If you are installing into a [Shared VPC (XPN)][sharedvpc],
exclude the firewall section, as those rules must be created in the host project.
   400  
   401  Copy [`03_firewall.py`](../../../upi/gcp/03_firewall.py) locally.
   402  Copy [`03_iam.py`](../../../upi/gcp/03_iam.py) locally.
   403  
   404  Create a resource definition file: `03_security.yaml`
   405  
   406  ```console
   407  $ cat <<EOF >03_security.yaml
   408  imports:
   409  - path: 03_firewall.py
   410  - path: 03_iam.py
   411  resources:
   412  - name: cluster-firewall
   413    type: 03_firewall.py
   414    properties:
   415      allowed_external_cidr: '0.0.0.0/0'
   416      infra_id: '${INFRA_ID}'
   417      cluster_network: '${CLUSTER_NETWORK}'
   418      network_cidr: '${NETWORK_CIDR}'
   419  - name: cluster-iam
   420    type: 03_iam.py
   421    properties:
   422      infra_id: '${INFRA_ID}'
   423  EOF
   424  ```
- `allowed_external_cidr`: the CIDR allowed to access the cluster API and to SSH to the bootstrap host (for example, External: 0.0.0.0/0; Internal: ${NETWORK_CIDR})
- `infra_id`: the infrastructure name (INFRA_ID above)
- `cluster_network`: the URI to the cluster network
- `network_cidr`: the CIDR of the VPC network (for example 10.0.0.0/16)
   430  
   431  Create the deployment using gcloud.
   432  
   433  ```sh
   434  gcloud deployment-manager deployments create ${INFRA_ID}-security --config 03_security.yaml
   435  ```
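You can list what the deployment created, which is useful when deciding which sections to exclude or recreate by hand:

```sh
gcloud deployment-manager resources list --deployment ${INFRA_ID}-security
```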
   436  
   437  ## Configure security variables
   438  Configure the variables based on the `03_security.yaml` deployment.
   439  If you excluded the IAM section, ensure these are set to the `.email` of their associated resources.
   440  
   441  ```sh
   442  export MASTER_SERVICE_ACCOUNT=$(gcloud iam service-accounts list --filter "email~^${INFRA_ID}-m@${PROJECT_NAME}." --format json | jq -r '.[0].email')
   443  export WORKER_SERVICE_ACCOUNT=$(gcloud iam service-accounts list --filter "email~^${INFRA_ID}-w@${PROJECT_NAME}." --format json | jq -r '.[0].email')
   444  ```
   445  
   446  ## Add required roles to IAM service accounts
   447  The templates do not create the policy bindings due to limitations of Deployment Manager, so we must create them manually.
   448  
   449  If you are installing into a [Shared VPC (XPN)][sharedvpc],
   450  ensure these service accounts have `roles/compute.networkUser` access to each of the host project subnets used by the cluster so the instances can use the networks.
Also ensure the master service account has `roles/compute.networkViewer` access to the host project itself so the GCP cloud provider can look up firewall settings as part of ingress controller operations; a sketch of these host-project bindings follows the project-level commands below.
   452  
   453  ```sh
   454  gcloud projects add-iam-policy-binding ${PROJECT_NAME} --member "serviceAccount:${MASTER_SERVICE_ACCOUNT}" --role "roles/compute.instanceAdmin"
   455  gcloud projects add-iam-policy-binding ${PROJECT_NAME} --member "serviceAccount:${MASTER_SERVICE_ACCOUNT}" --role "roles/compute.networkAdmin"
   456  gcloud projects add-iam-policy-binding ${PROJECT_NAME} --member "serviceAccount:${MASTER_SERVICE_ACCOUNT}" --role "roles/compute.securityAdmin"
   457  gcloud projects add-iam-policy-binding ${PROJECT_NAME} --member "serviceAccount:${MASTER_SERVICE_ACCOUNT}" --role "roles/iam.serviceAccountUser"
   458  gcloud projects add-iam-policy-binding ${PROJECT_NAME} --member "serviceAccount:${MASTER_SERVICE_ACCOUNT}" --role "roles/storage.admin"
   459  
   460  gcloud projects add-iam-policy-binding ${PROJECT_NAME} --member "serviceAccount:${WORKER_SERVICE_ACCOUNT}" --role "roles/compute.viewer"
   461  gcloud projects add-iam-policy-binding ${PROJECT_NAME} --member "serviceAccount:${WORKER_SERVICE_ACCOUNT}" --role "roles/storage.admin"
   462  ```
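For a [Shared VPC (XPN)][sharedvpc] install, a sketch of the additional host-project bindings described above; the subnet-level binding is shown for the compute subnet and should be repeated for each host-project subnet the cluster uses:

```sh
# Grant networkUser on each host-project subnet used by the cluster
gcloud compute networks subnets add-iam-policy-binding ${HOST_PROJECT_COMPUTE_SUBNET_NAME} --project ${HOST_PROJECT} --region ${REGION} --member "serviceAccount:${WORKER_SERVICE_ACCOUNT}" --role "roles/compute.networkUser"
gcloud compute networks subnets add-iam-policy-binding ${HOST_PROJECT_COMPUTE_SUBNET_NAME} --project ${HOST_PROJECT} --region ${REGION} --member "serviceAccount:${MASTER_SERVICE_ACCOUNT}" --role "roles/compute.networkUser"

# Let the GCP cloud provider inspect firewall settings in the host project
gcloud projects add-iam-policy-binding ${HOST_PROJECT} --member "serviceAccount:${MASTER_SERVICE_ACCOUNT}" --role "roles/compute.networkViewer"
```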
   463  
## Generate a service account key for signing the bootstrap.ign URL
   465  
   466  ```sh
   467  gcloud iam service-accounts keys create service-account-key.json --iam-account=${MASTER_SERVICE_ACCOUNT}
   468  ```
   469  
## Create the cluster image
   471  Locate the RHCOS image source and create a cluster image.
   472  
   473  ```sh
export IMAGE_SOURCE=$(curl https://raw.githubusercontent.com/openshift/installer/master/data/data/coreos/rhcos.json | jq -r '.architectures.x86_64.images.gcp')
   475  export IMAGE_NAME=$(echo "${IMAGE_SOURCE}" | jq -r '.name')
   476  export IMAGE_PROJECT=$(echo "${IMAGE_SOURCE}" | jq -r '.project')
   477  export CLUSTER_IMAGE=$(gcloud compute images describe ${IMAGE_NAME} --project ${IMAGE_PROJECT} --format json | jq -r .selfLink)
   478  ```
   479  
   480  ## Upload the bootstrap.ign to a new bucket
   481  Create a bucket and upload the bootstrap.ign file.
   482  
   483  ```sh
   484  gsutil mb gs://${INFRA_ID}-bootstrap-ignition
   485  gsutil cp bootstrap.ign gs://${INFRA_ID}-bootstrap-ignition/
   486  ```
   487  
   488  Create a signed URL for the bootstrap instance to use to access the Ignition
   489  config. Export the URL from the output as a variable.
   490  
   491  ```sh
   492  export BOOTSTRAP_IGN=$(gsutil signurl -d 1h service-account-key.json gs://${INFRA_ID}-bootstrap-ignition/bootstrap.ign | grep "^gs:" | awk '{print $5}')
   493  ```
   494  
   495  ## Launch temporary bootstrap resources
   496  
   497  Copy [`04_bootstrap.py`](../../../upi/gcp/04_bootstrap.py) locally.
   498  
   499  Create a resource definition file: `04_bootstrap.yaml`
   500  
   501  ```console
   502  $ cat <<EOF >04_bootstrap.yaml
   503  imports:
   504  - path: 04_bootstrap.py
   505  resources:
   506  - name: cluster-bootstrap
   507    type: 04_bootstrap.py
   508    properties:
   509      infra_id: '${INFRA_ID}'
   510      region: '${REGION}'
   511      zone: '${ZONE_0}'
   512      cluster_network: '${CLUSTER_NETWORK}'
   513      control_subnet: '${CONTROL_SUBNET}'
   514      image: '${CLUSTER_IMAGE}'
   515      machine_type: 'n1-standard-4'
   516      root_volume_size: '128'
   517      bootstrap_ign: '${BOOTSTRAP_IGN}'
   518  EOF
   519  ```
   520  - `infra_id`: the infrastructure name (INFRA_ID above)
   521  - `region`: the region to deploy the cluster into (for example us-east1)
   522  - `zone`: the zone to deploy the bootstrap instance into (for example us-east1-b)
   523  - `cluster_network`: the URI to the cluster network
   524  - `control_subnet`: the URI to the control subnet
   525  - `image`: the URI to the RHCOS image
   526  - `machine_type`: the machine type of the instance (for example n1-standard-4)
   527  - `bootstrap_ign`: the URL output when creating a signed URL above.
   528  
You can add custom tags to `04_bootstrap.py` as needed:
   530  
   531  ```console
   532              'tags': {
   533                  'items': [
   534                      context.properties['infra_id'] + '-master',
   535                      context.properties['infra_id'] + '-bootstrap',
   536                      'my-custom-tag-example'
   537                  ]
   538              },
   539  ```
   540  
   541  Create the deployment using gcloud.
   542  
   543  ```sh
   544  gcloud deployment-manager deployments create ${INFRA_ID}-bootstrap --config 04_bootstrap.yaml
   545  ```
   546  
   547  ## Add the bootstrap instance to the load balancers
   548  The templates do not manage load balancer membership due to limitations of Deployment
   549  Manager, so we must add the bootstrap node manually.
   550  
   551  ### Add bootstrap instance to internal load balancer instance group
   552  
   553  ```sh
   554  gcloud compute instance-groups unmanaged add-instances ${INFRA_ID}-bootstrap-ig --zone=${ZONE_0} --instances=${INFRA_ID}-bootstrap
   555  ```
   556  
   557  ### Add bootstrap instance group to the internal load balancer backend service
   558  
   559  ```sh
   560  gcloud compute backend-services add-backend ${INFRA_ID}-api-internal --region=${REGION} --instance-group=${INFRA_ID}-bootstrap-ig --instance-group-zone=${ZONE_0}
   561  ```
   562  
   563  ## Launch permanent control plane
   564  
   565  Copy [`05_control_plane.py`](../../../upi/gcp/05_control_plane.py) locally.
   566  
   567  Create a resource definition file: `05_control_plane.yaml`
   568  
   569  ```console
   570  $ cat <<EOF >05_control_plane.yaml
   571  imports:
   572  - path: 05_control_plane.py
   573  resources:
   574  - name: cluster-control-plane
   575    type: 05_control_plane.py
   576    properties:
   577      infra_id: '${INFRA_ID}'
   578      zones:
   579      - '${ZONE_0}'
   580      - '${ZONE_1}'
   581      - '${ZONE_2}'
   582      control_subnet: '${CONTROL_SUBNET}'
   583      image: '${CLUSTER_IMAGE}'
   584      machine_type: 'n1-standard-4'
   585      root_volume_size: '128'
   586      service_account_email: '${MASTER_SERVICE_ACCOUNT}'
   587      ignition: '${MASTER_IGNITION}'
   588  EOF
   589  ```
   590  - `infra_id`: the infrastructure name (INFRA_ID above)
   591  - `region`: the region to deploy the cluster into (for example us-east1)
   592  - `zones`: the zones to deploy the control plane instances into (for example us-east1-b, us-east1-c, us-east1-d)
   593  - `control_subnet`: the URI to the control subnet
   594  - `image`: the URI to the RHCOS image
   595  - `machine_type`: the machine type of the instance (for example n1-standard-4)
   596  - `service_account_email`: the email address for the master service account created above
   597  - `ignition`: the contents of the master.ign file
   598  
You can add custom tags to `05_control_plane.py` as needed:
   600  
   601  ```console
   602              'tags': {
   603                  'items': [
   604                      context.properties['infra_id'] + '-master',
   605                      'my-custom-tag-example'
   606                  ]
   607              },
   608  ```
   609  
   610  Create the deployment using gcloud.
   611  
   612  ```sh
   613  gcloud deployment-manager deployments create ${INFRA_ID}-control-plane --config 05_control_plane.yaml
   614  ```
   615  
   616  ## Add control plane instances to load balancers
   617  The templates do not manage load balancer membership due to limitations of Deployment
   618  Manager, so we must add the control plane nodes manually.
   619  
   620  ### Add control plane instances to internal load balancer instance groups
   621  
   622  ```sh
   623  gcloud compute instance-groups unmanaged add-instances ${INFRA_ID}-master-${ZONE_0}-ig --zone=${ZONE_0} --instances=${INFRA_ID}-master-0
   624  gcloud compute instance-groups unmanaged add-instances ${INFRA_ID}-master-${ZONE_1}-ig --zone=${ZONE_1} --instances=${INFRA_ID}-master-1
   625  gcloud compute instance-groups unmanaged add-instances ${INFRA_ID}-master-${ZONE_2}-ig --zone=${ZONE_2} --instances=${INFRA_ID}-master-2
   626  ```
   627  
   628  ### Add control plane instances to external load balancer target pools (optional)
   629  If you deployed external load balancers with `02_infra.yaml`, add the control plane instances to the target pool.
   630  
   631  ```sh
   632  gcloud compute target-pools add-instances ${INFRA_ID}-api-target-pool --instances-zone="${ZONE_0}" --instances=${INFRA_ID}-master-0
   633  gcloud compute target-pools add-instances ${INFRA_ID}-api-target-pool --instances-zone="${ZONE_1}" --instances=${INFRA_ID}-master-1
   634  gcloud compute target-pools add-instances ${INFRA_ID}-api-target-pool --instances-zone="${ZONE_2}" --instances=${INFRA_ID}-master-2
   635  ```
   636  
   637  ## Launch additional compute nodes
You may create compute nodes by launching individual instances discretely
or by automated processes outside the cluster (e.g. managed instance groups). You
can also take advantage of the built-in cluster scaling mechanisms and the
machine API in OpenShift, as mentioned [above](#create-ignition-configs). In
this example, we'll manually launch instances via the Deployment Manager
template. Additional instances can be launched by including additional
resources of type `06_worker.py` in the file.
   645  
   646  Copy [`06_worker.py`](../../../upi/gcp/06_worker.py) locally.
   647  
   648  Create a resource definition file: `06_worker.yaml`
   649  ```console
   650  $ cat <<EOF >06_worker.yaml
   651  imports:
   652  - path: 06_worker.py
   653  resources:
   654  - name: 'worker-0'
   655    type: 06_worker.py
   656    properties:
   657      infra_id: '${INFRA_ID}'
   658      zone: '${ZONE_0}'
   659      compute_subnet: '${COMPUTE_SUBNET}'
   660      image: '${CLUSTER_IMAGE}'
   661      machine_type: 'n1-standard-4'
   662      root_volume_size: '128'
   663      service_account_email: '${WORKER_SERVICE_ACCOUNT}'
   664      ignition: '${WORKER_IGNITION}'
   665  - name: 'worker-1'
   666    type: 06_worker.py
   667    properties:
   668      infra_id: '${INFRA_ID}'
   669      zone: '${ZONE_1}'
   670      compute_subnet: '${COMPUTE_SUBNET}'
   671      image: '${CLUSTER_IMAGE}'
   672      machine_type: 'n1-standard-4'
   673      root_volume_size: '128'
   674      service_account_email: '${WORKER_SERVICE_ACCOUNT}'
   675      ignition: '${WORKER_IGNITION}'
   676  EOF
   677  ```
   678  - `name`: the name of the compute node (for example worker-0)
   679  - `infra_id`: the infrastructure name (INFRA_ID above)
   680  - `zone`: the zone to deploy the worker node into (for example us-east1-b)
   681  - `compute_subnet`: the URI to the compute subnet
   682  - `image`: the URI to the RHCOS image
- `machine_type`: the machine type of the instance (for example n1-standard-4)
   684  - `service_account_email`: the email address for the worker service account created above
   685  - `ignition`: the contents of the worker.ign file
   686  
You can add custom tags to `06_worker.py` as needed:
   688  
   689  ```console
   690              'tags': {
   691                  'items': [
   692                      context.properties['infra_id'] + '-worker',
   693                      'my-custom-tag-example'
   694                  ]
   695              },
   696  ```
   697  
   698  Create the deployment using gcloud.
   699  
   700  ```sh
   701  gcloud deployment-manager deployments create ${INFRA_ID}-worker --config 06_worker.yaml
   702  ```
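To add more compute nodes later, append additional `06_worker.py` resources (for example a `worker-2` entry) to `06_worker.yaml` and update the existing deployment:

```sh
gcloud deployment-manager deployments update ${INFRA_ID}-worker --config 06_worker.yaml
```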
   703  
   704  ## Monitor for `bootstrap-complete`
   705  
   706  ```console
   707  $ openshift-install wait-for bootstrap-complete
   708  INFO Waiting up to 30m0s for the Kubernetes API at https://api.test.example.com:6443...
   709  INFO API v1.12.4+c53f462 up
   710  INFO Waiting up to 30m0s for the bootstrap-complete event...
   711  ```
   712  
   713  ## Destroy bootstrap resources
   714  At this point, you should delete the bootstrap resources.
   715  
   716  If you are installing into a [Shared VPC (XPN)][sharedvpc],
   717  it is safe to remove any bootstrap-specific firewall rules at this time.
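For example (a sketch; the rule name is a placeholder for whatever bootstrap-only SSH rule you created in the host project):

```sh
gcloud compute firewall-rules delete ${INFRA_ID}-bootstrap-in-ssh --project ${HOST_PROJECT}
```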
   718  
   719  ```sh
gcloud compute backend-services remove-backend ${INFRA_ID}-api-internal --region=${REGION} --instance-group=${INFRA_ID}-bootstrap-ig --instance-group-zone=${ZONE_0}
   721  gsutil rm gs://${INFRA_ID}-bootstrap-ignition/bootstrap.ign
   722  gsutil rb gs://${INFRA_ID}-bootstrap-ignition
   723  gcloud deployment-manager deployments delete ${INFRA_ID}-bootstrap
   724  ```
   725  
   726  ## Approving the CSR requests for nodes
   727  The CSR requests for client and server certificates for nodes joining the cluster will need to be approved by the administrator.
   728  Nodes that have not been provisioned by the cluster need their associated `system:serviceaccount` certificate approved to join the cluster.
   729  You can view them with:
   730  
   731  ```console
   732  $ oc get csr
   733  NAME        AGE     REQUESTOR                                                                   CONDITION
   734  csr-8b2br   15m     system:serviceaccount:openshift-machine-config-operator:node-bootstrapper   Approved,Issued
   735  csr-8vnps   15m     system:serviceaccount:openshift-machine-config-operator:node-bootstrapper   Approved,Issued
   736  csr-b96j4   25s     system:node:ip-10-0-52-215.us-east-2.compute.internal                       Approved,Issued
   737  csr-bfd72   5m26s   system:node:ip-10-0-50-126.us-east-2.compute.internal                       Pending
   738  csr-c57lv   5m26s   system:node:ip-10-0-95-157.us-east-2.compute.internal                       Pending
   739  ...
   740  ```
   741  
Administrators should carefully examine each CSR and approve only those that belong to nodes they created.
   743  CSRs can be approved by name, for example:
   744  
   745  ```sh
   746  oc adm certificate approve csr-bfd72
   747  ```
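If many CSRs are pending and you have verified that they all belong to your nodes, a convenience sketch like the following approves every currently pending request:

```sh
oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve
```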
   748  
   749  ## Add the Ingress DNS Records
If you removed the DNS Zone configuration [earlier](#remove-dns-zones-optional), you'll need to manually create some DNS records pointing at the ingress load balancer.
   751  You can create either a wildcard `*.apps.{baseDomain}.` or specific records (more on the specific records below).
   752  You can use A, CNAME, etc. records, as you see fit.
   753  
   754  If you are installing into a [Shared VPC (XPN)][sharedvpc],
   755  use the `--account` and `--project` parameters to perform these actions in the host project.
   756  
   757  ### Wait for the ingress-router to create a load balancer and populate the `EXTERNAL-IP`
   758  
   759  ```console
   760  $ oc -n openshift-ingress get service router-default
   761  NAME             TYPE           CLUSTER-IP      EXTERNAL-IP      PORT(S)                      AGE
   762  router-default   LoadBalancer   172.30.18.154   35.233.157.184   80:32288/TCP,443:31215/TCP   98
   763  ```
   764  
   765  ### Add the internal *.apps DNS record
   766  
   767  ```sh
   768  export ROUTER_IP=$(oc -n openshift-ingress get service router-default --no-headers | awk '{print $4}')
   769  
   770  if [ -f transaction.yaml ]; then rm transaction.yaml; fi
   771  gcloud dns record-sets transaction start --zone ${INFRA_ID}-private-zone
   772  gcloud dns record-sets transaction add ${ROUTER_IP} --name \*.apps.${CLUSTER_NAME}.${BASE_DOMAIN}. --ttl 300 --type A --zone ${INFRA_ID}-private-zone
   773  gcloud dns record-sets transaction execute --zone ${INFRA_ID}-private-zone
   774  ```
   775  
   776  ### Add the external *.apps DNS record (optional)
   777  
   778  ```sh
   779  if [ -f transaction.yaml ]; then rm transaction.yaml; fi
   780  gcloud dns record-sets transaction start --zone ${BASE_DOMAIN_ZONE_NAME}
   781  gcloud dns record-sets transaction add ${ROUTER_IP} --name \*.apps.${CLUSTER_NAME}.${BASE_DOMAIN}. --ttl 300 --type A --zone ${BASE_DOMAIN_ZONE_NAME}
   782  gcloud dns record-sets transaction execute --zone ${BASE_DOMAIN_ZONE_NAME}
   783  ```
   784  
   785  If you prefer to add explicit domains instead of using a wildcard, you can create entries for each of the cluster's current routes:
   786  
   787  ```console
   788  $ oc get --all-namespaces -o jsonpath='{range .items[*]}{range .status.ingress[*]}{.host}{"\n"}{end}{end}' routes
   789  oauth-openshift.apps.your.cluster.domain.example.com
   790  console-openshift-console.apps.your.cluster.domain.example.com
   791  downloads-openshift-console.apps.your.cluster.domain.example.com
   792  alertmanager-main-openshift-monitoring.apps.your.cluster.domain.example.com
   793  grafana-openshift-monitoring.apps.your.cluster.domain.example.com
   794  prometheus-k8s-openshift-monitoring.apps.your.cluster.domain.example.com
   795  ```
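For example, to add a single explicit record for the OAuth route instead of the wildcard, follow the same pattern as the wildcard commands above:

```sh
if [ -f transaction.yaml ]; then rm transaction.yaml; fi
gcloud dns record-sets transaction start --zone ${INFRA_ID}-private-zone
gcloud dns record-sets transaction add ${ROUTER_IP} --name oauth-openshift.apps.${CLUSTER_NAME}.${BASE_DOMAIN}. --ttl 300 --type A --zone ${INFRA_ID}-private-zone
gcloud dns record-sets transaction execute --zone ${INFRA_ID}-private-zone
```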
   796  
   797  ## Add the Ingress firewall rules (optional)
   798  If you are installing into a [Shared VPC (XPN)][sharedvpc],
   799  you'll need to manually create some firewall rules for the ingress services.
These rules would normally be created by the ingress controller via the GCP cloud provider.
When the cloud provider detects Shared VPC (XPN), it instead emits cluster events describing which firewall rules need to be created.
   802  Either create each rule as requested by the events (option A), or create cluster-wide firewall rules for all services (option B).
   803  
   804  Use the `--account` and `--project` parameters to perform these actions in the host project.
   805  
   806  ### Add firewall rules based on cluster events (option A)
When the cluster is first provisioned, and as services are later created and modified, the GCP cloud provider may generate events describing firewall rules that must be created manually in order to allow access to these services.
   808  
   809  ```console
Firewall change required by security admin: `gcloud compute firewall-rules create k8s-fw-a26e631036a3f46cba28f8df67266d55 --network example-network --description "{\"kubernetes.io/service-name\":\"openshift-ingress/router-default\", \"kubernetes.io/service-ip\":\"35.237.236.234\"}" --allow tcp:443,tcp:80 --source-ranges 0.0.0.0/0 --target-tags exampl-fqzq7-master,exampl-fqzq7-worker --project example-project`
   811  ```
   812  
   813  Create the firewall rules as instructed.
   814  
### Add a cluster-wide health check firewall rule (option B)
Add a single firewall rule to allow the GCE health checks to access all of the services.
   817  This enables the ingress load balancers to determine the health status of their instances.
   818  
   819  ```sh
   820  gcloud compute firewall-rules create --allow='tcp:30000-32767,udp:30000-32767' --network="${CLUSTER_NETWORK}" --source-ranges='130.211.0.0/22,35.191.0.0/16,209.85.152.0/22,209.85.204.0/22' --target-tags="${INFRA_ID}-master,${INFRA_ID}-worker" ${INFRA_ID}-ingress-hc
   821  ```
   822  
### Add a cluster-wide service firewall rule (option B)
   824  Add a single firewall rule to allow access to all cluster services.
   825  If you want your cluster to be private, you can use `--source-ranges=${NETWORK_CIDR}`.
   826  This rule may need to be updated accordingly when adding services on ports other than `tcp:80,tcp:443`.
   827  
   828  ```sh
   829  gcloud compute firewall-rules create --allow='tcp:80,tcp:443' --network="${CLUSTER_NETWORK}" --source-ranges="0.0.0.0/0" --target-tags="${INFRA_ID}-master,${INFRA_ID}-worker" ${INFRA_ID}-ingress
   830  ```
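If you later expose services on additional ports, the same rule can be updated in place; a sketch, where port 8080 is only an example:

```sh
gcloud compute firewall-rules update ${INFRA_ID}-ingress --allow='tcp:80,tcp:443,tcp:8080'
```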
   831  
   832  ## Monitor for cluster completion
   833  
   834  ```console
   835  $ openshift-install wait-for install-complete
   836  INFO Waiting up to 30m0s for the cluster to initialize...
   837  ```
   838  
   839  Also, you can observe the running state of your cluster pods:
   840  
   841  ```console
   842  $ oc get clusterversion
   843  NAME      VERSION   AVAILABLE   PROGRESSING   SINCE   STATUS
   844  version             False       True          24m     Working towards 4.2.0-0.okd-2019-08-05-204819: 99% complete
   845  
   846  $ oc get clusteroperators
   847  NAME                                       VERSION                         AVAILABLE   PROGRESSING   DEGRADED   SINCE
   848  authentication                             4.2.0-0.okd-2019-08-05-204819   True        False         False      6m18s
   849  cloud-credential                           4.2.0-0.okd-2019-08-05-204819   True        False         False      17m
   850  cluster-autoscaler                         4.2.0-0.okd-2019-08-05-204819   True        False         False      80s
   851  console                                    4.2.0-0.okd-2019-08-05-204819   True        False         False      3m57s
   852  dns                                        4.2.0-0.okd-2019-08-05-204819   True        False         False      22m
   853  image-registry                             4.2.0-0.okd-2019-08-05-204819   True        False         False      5m4s
   854  ingress                                    4.2.0-0.okd-2019-08-05-204819   True        False         False      4m38s
   855  insights                                   4.2.0-0.okd-2019-08-05-204819   True        False         False      21m
   856  kube-apiserver                             4.2.0-0.okd-2019-08-05-204819   True        False         False      12m
   857  kube-controller-manager                    4.2.0-0.okd-2019-08-05-204819   True        False         False      12m
   858  kube-scheduler                             4.2.0-0.okd-2019-08-05-204819   True        False         False      11m
   859  machine-api                                4.2.0-0.okd-2019-08-05-204819   True        False         False      18m
   860  machine-config                             4.2.0-0.okd-2019-08-05-204819   True        False         False      22m
   861  marketplace                                4.2.0-0.okd-2019-08-05-204819   True        False         False      5m38s
   862  monitoring                                 4.2.0-0.okd-2019-08-05-204819   True        False         False      86s
   863  network                                    4.2.0-0.okd-2019-08-05-204819   True        False         False      14m
   864  node-tuning                                4.2.0-0.okd-2019-08-05-204819   True        False         False      6m8s
   865  openshift-apiserver                        4.2.0-0.okd-2019-08-05-204819   True        False         False      6m48s
   866  openshift-controller-manager               4.2.0-0.okd-2019-08-05-204819   True        False         False      12m
   867  openshift-samples                          4.2.0-0.okd-2019-08-05-204819   True        False         False      67s
   868  operator-lifecycle-manager                 4.2.0-0.okd-2019-08-05-204819   True        False         False      15m
   869  operator-lifecycle-manager-catalog         4.2.0-0.okd-2019-08-05-204819   True        False         False      15m
   870  operator-lifecycle-manager-packageserver   4.2.0-0.okd-2019-08-05-204819   True        False         False      6m48s
   871  service-ca                                 4.2.0-0.okd-2019-08-05-204819   True        False         False      17m
   872  service-catalog-apiserver                  4.2.0-0.okd-2019-08-05-204819   True        False         False      6m18s
   873  service-catalog-controller-manager         4.2.0-0.okd-2019-08-05-204819   True        False         False      6m19s
   874  storage                                    4.2.0-0.okd-2019-08-05-204819   True        False         False      6m20s
   875  
   876  $ oc get pods --all-namespaces
   877  NAMESPACE                                               NAME                                                                READY     STATUS      RESTARTS   AGE
   878  kube-system                                             etcd-member-ip-10-0-3-111.us-east-2.compute.internal                1/1       Running     0          35m
   879  kube-system                                             etcd-member-ip-10-0-3-239.us-east-2.compute.internal                1/1       Running     0          37m
   880  kube-system                                             etcd-member-ip-10-0-3-24.us-east-2.compute.internal                 1/1       Running     0          35m
   881  openshift-apiserver-operator                            openshift-apiserver-operator-6d6674f4f4-h7t2t                       1/1       Running     1          37m
   882  openshift-apiserver                                     apiserver-fm48r                                                     1/1       Running     0          30m
   883  openshift-apiserver                                     apiserver-fxkvv                                                     1/1       Running     0          29m
   884  openshift-apiserver                                     apiserver-q85nm                                                     1/1       Running     0          29m
   885  ...
   886  openshift-service-ca-operator                           openshift-service-ca-operator-66ff6dc6cd-9r257                      1/1       Running     0          37m
   887  openshift-service-ca                                    apiservice-cabundle-injector-695b6bcbc-cl5hm                        1/1       Running     0          35m
   888  openshift-service-ca                                    configmap-cabundle-injector-8498544d7-25qn6                         1/1       Running     0          35m
   889  openshift-service-ca                                    service-serving-cert-signer-6445fc9c6-wqdqn                         1/1       Running     0          35m
   890  openshift-service-catalog-apiserver-operator            openshift-service-catalog-apiserver-operator-549f44668b-b5q2w       1/1       Running     0          32m
   891  openshift-service-catalog-controller-manager-operator   openshift-service-catalog-controller-manager-operator-b78cr2lnm     1/1       Running     0          31m
   892  ```
   893  
   894  [deploymentmanager]: https://cloud.google.com/deployment-manager/docs
   895  [ingress-operator]: https://github.com/openshift/cluster-ingress-operator
   896  [kubernetes-service-load-balancers-exclude-masters]: https://github.com/kubernetes/kubernetes/issues/65618
   897  [machine-api-operator]: https://github.com/openshift/machine-api-operator
   898  [sharedvpc]: https://cloud.google.com/vpc/docs/shared-vpc