![PODMAN logo](../../logo/podman-logo-source.svg)

# Cirrus-CI

Similar to other integrated GitHub CI/CD services, Cirrus utilizes a simple
YAML-based configuration/description file: ``.cirrus.yml``.  Ref: https://cirrus-ci.org/


## Workflow

All tasks execute in parallel, unless there are conditions or dependencies
which alter this behavior.  Within each task, each script executes in sequence,
so long as the previous script exited successfully.  The overall state of each
task (pass or fail) is set based on the exit status of the last script to execute.
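
For orientation, a minimal sketch of this structure (the task and script
names here are hypothetical, not the actual contents of ``.cirrus.yml``):

```yaml
# Hypothetical sketch only; see this repository's .cirrus.yml for the real tasks.
example_task:
    depends_on:
        - gating                      # runs only after the gating task passes
    build_script: make                # scripts execute in sequence...
    unit_test_script: make localunit  # ...and only while each exits successfully
```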

### ``gating`` Task

***N/B: Steps below are performed by automation***

1. Launch a purpose-built container in Cirrus's community cluster.
   For container image details, please see
   [the contributors guide](https://github.com/containers/libpod/blob/master/CONTRIBUTING.md#go-format-and-lint).

2. ``validate``: Perform standard `make validate` source verification.
   Should run for less than a minute or two.

3. ``lint``: Execute regular `make lint` to check for any code cruft.
   Should also run for less than a few minutes.

4. ``vendor``: Run `make vendor-in-container` followed by `./hack/tree_status.sh` to check
   whether the git tree is clean.  The reasoning for that is to make sure that
   vendor.conf, the code, and the vendored packages in ./vendor are in sync
   at all times.
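
The three checks above can also be run locally before submitting a PR; a
sketch, assuming a normal development checkout:

```
$ make validate
$ make lint
$ make vendor-in-container && ./hack/tree_status.sh
```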

### ``meta`` Task

***N/B: Steps below are performed by automation***

1. Launch a container built from the definition in ``./contrib/imgts``.

2. Update VM Image metadata to help track usage across all automation.

3. Always exits successfully unless there's a major problem.
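
As a rough illustration only (the actual logic lives in the ``./contrib/imgts``
container, and the label name here is an assumption), GCE image metadata can
be updated via labels:

```
$ gcloud compute images update fedora-30-libpod-5664838702858240 \
    --update-labels=last-used=$(date +%s)
```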


### ``testing`` Task

***N/B: Steps below are performed by automation***

1. After `gating` passes, spin up one VM per
   `matrix: image_name` item.  Once accessible, ``ssh``
   into each VM as the `root` user.

2. ``setup_environment.sh``: Configure root's `.bash_profile`
    for all subsequent scripts (each run in a new shell).  Any
    distribution-specific environment variables are also defined
    here, for example build tags/flags to use when compiling (see
    the sketch after this list).

3. ``integration_test.sh``: Execute integration-testing.  This is
   much more involved, and relies on access to external
   resources like container images and code from other repositories.
   Total execution time is capped at 2 hours (includes all the above)
   but this script normally completes in less than an hour.
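
A minimal sketch of the kind of configuration `setup_environment.sh` might
write (the variable and values are illustrative assumptions, not necessarily
what the script actually sets):

```
# Appended to root's .bash_profile so every later script (a new shell) inherits it.
$ echo "export BUILDTAGS='seccomp selinux'" >> /root/.bash_profile
```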


### ``special_testing_cross`` Task

Confirms that cross-compilation of podman-remote works for both `windows`
and `darwin` targets.


### ``special_testing_cgroupv2`` Task

Use the latest Fedora release with the required kernel options pre-set for
exercising cgroups v2 with Podman integration tests.  Also depends on
having `SPECIALMODE` set to `cgroupv2`.


### ``test_build_cache_images_task`` Task

Modifying the contents of cache-images is tested by making changes to
one or more of the ``./contrib/cirrus/packer/*_setup.sh`` files.  Then
in the PR description, add the magic string:  ``[CI:IMG]``
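
Cirrus-CI decides whether to run the image-building tasks by matching that
string; a sketch of the kind of condition involved (an assumption, the exact
expression lives in ``.cirrus.yml``):

```yaml
# Assumed shape of the gate; CIRRUS_CHANGE_MESSAGE holds the PR title/description.
test_build_cache_images_task:
    only_if: $CIRRUS_CHANGE_MESSAGE =~ '.*CI:IMG.*'
```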

***N/B: Steps below are performed by automation***

1. ``setup_environment.sh``: Same as for other tasks.

2. ``build_vm_images.sh``: Utilize [the packer tool](http://packer.io/docs/)
   to produce new VM images.  Create a new VM from each base-image, connect
   to them with ``ssh``, and perform the steps as defined by the
   ``$PACKER_BASE/libpod_images.yml`` file:

    1. On a base-image VM, as root, copy the current state of the repository
       into ``/tmp/libpod``.
    2. Execute distribution-specific scripts to prepare the image for
       use.  For example, ``fedora_setup.sh``.
    3. If successful, shut down each VM and record the names and dates
       into a JSON manifest file.
    4. Move the manifest file into a Google Storage bucket object.
       This is retained as a secondary method for tracking/auditing
       creation of VM images, should it ever be needed.
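
For reference, uploading such a manifest amounts to a single object copy; a
sketch with a hypothetical bucket and object path:

```
$ gsutil cp packer-manifest.json gs://<bucket-name>/manifests/$CIRRUS_BUILD_ID.json
```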

### ``verify_test_built_images`` Task

Only runs following a successful ``test_build_cache_images_task`` task.  Uses
images following the standard naming format; ***however, only runs a limited
subset of automated tests***.  Fully validating newly built images requires
updating ``.cirrus.yml``.

***N/B: Steps below are performed by automation***

1. Using the just-built VM images, launch VMs and wait for them to boot.

2. Execute `setup_environment.sh` as in the `testing` task.

3. Execute `integration_test.sh` as in the `testing` task.


***Manual Steps:***  Assuming the automated steps pass, then
you'll find the new image names displayed at the end of the
`test_build_cache_images` task output.  For example:


```
...cut...

[+0747s] ==> Builds finished. The artifacts of successful builds are:
[+0747s] --> ubuntu-18: A disk image was created: ubuntu-18-libpod-5664838702858240
[+0747s] --> fedora-29: A disk image was created: fedora-29-libpod-5664838702858240
[+0747s] --> fedora-30: A disk image was created: fedora-30-libpod-5664838702858240
[+0747s] --> ubuntu-19: A disk image was created: ubuntu-19-libpod-5664838702858240
```

Notice the suffix on all the image names comes from the environment variable
set in *.cirrus.yml*: `BUILT_IMAGE_SUFFIX: "-${CIRRUS_REPO_NAME}-${CIRRUS_BUILD_ID}"`.
Edit `.cirrus.yml`: in the top-level `env` section, update the suffix variable
used at runtime to launch VMs for testing:


```yaml
env:
    ...cut...
    ####
    #### Cache-image names to test with (double-quotes around names are critical)
    ####
    _BUILT_IMAGE_SUFFIX: "libpod-5664838702858240"
    FEDORA_CACHE_IMAGE_NAME: "fedora-30-${_BUILT_IMAGE_SUFFIX}"
    PRIOR_FEDORA_CACHE_IMAGE_NAME: "fedora-29-${_BUILT_IMAGE_SUFFIX}"
    ...cut...
```

***NOTES:***
* If re-using the same PR with new images in `.cirrus.yml`,
  take care to also *update the PR description* to remove
  the magic ``[CI:IMG]`` string.  Keeping it and
  `--force` pushing would needlessly cause Cirrus-CI to build
  and test images again.
* In the future, if you need to review the log from the build that produced
  the referenced image:

  * Note the Build ID from the image name (for example `5664838702858240`).
  * Go to that build in the Cirrus-CI WebUI, using the build ID in the URL
    (for example `https://cirrus-ci.com/build/5664838702858240`).
  * Choose the *test_build_cache_images* task.
  * Open the *build_vm_images* script section.

### `release` Task

Gathers up zip files uploaded by other tasks from the local Cirrus-CI caching service.
Depending on the execution context (a PR or a branch), this task uploads the files
found to storage buckets at:

* [https://storage.cloud.google.com/libpod-pr-releases](https://storage.cloud.google.com/libpod-pr-releases)
* [https://storage.cloud.google.com/libpod-master-releases](https://storage.cloud.google.com/libpod-master-releases)

***Note:*** Repeated builds from the same PR or branch will clobber previous archives
            *by design*.  This is intended so that the "latest" archive is always
            available at a consistent URL.  The precise details regarding a particular
            build are encoded within the zip-archive comment.
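
To check those details after downloading an archive, a sketch (the archive
name is a placeholder, and this assumes the bucket object is publicly readable):

```
$ curl -sO https://storage.googleapis.com/libpod-pr-releases/<archive-name>.zip
$ unzip -z <archive-name>.zip   # print the embedded archive comment
```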


## Base-images

Base-images are VM disk-images specially prepared for executing as GCE VMs.
In particular, they run services on startup similar in purpose/function
to the standard 'cloud-init' services.

*  The Google services are required for full support of ssh-key management
   and GCE OAuth capabilities.  Google provides native images in GCE
   with services pre-installed for many platforms, for example
   RHEL, CentOS, and Ubuntu.

*  Google does ***not*** provide any images for Fedora (as of 5/2019), nor do
   they provide a base-image prepared to run packer for creating other images
   in the ``test_build_vm_images`` Task (above).

*  Base-images do not need to be produced often, but doing so completely
   manually would be time-consuming and error-prone.  Therefore a special
   semi-automatic *Makefile* target is provided to assist with producing
   all the base-images: ``libpod_base_images``

To produce new base-images, including an `image-builder-image` (used by
the ``cache_images`` Task), some input parameters are required:

* ``GCP_PROJECT_ID``: The complete GCP project ID string, e.g. ``foobar-12345``,
  identifying where the images will be stored.

* ``GOOGLE_APPLICATION_CREDENTIALS``: A *JSON* file containing
  credentials for a GCE service account.  This can be [a service
  account](https://cloud.google.com/docs/authentication/production#obtaining_and_providing_service_account_credentials_manually)
  or [end-user
  credentials](https://cloud.google.com/docs/authentication/end-user#creating_your_client_credentials).

*  Optionally, a comma-separated list of values may be given in ``PACKER_BUILDS``
   to limit the base-images produced.  For example,
   ``PACKER_BUILDS=fedora,image-builder-image``.

If there is no existing 'image-builder-image' within GCE, a new
one may be bootstrapped by creating a CentOS 7 VM with support for
nested-virtualization, and with elevated cloud privileges (to access
GCE, from within the GCE VM).  For example:

```
$ alias pgcloud='sudo podman run -it --rm -e AS_ID=$UID -e AS_USER=$USER -v $HOME:$HOME:z quay.io/cevich/gcloud_centos:latest'

$ URL=https://www.googleapis.com/auth
$ SCOPES=$URL/userinfo.email,$URL/compute,$URL/devstorage.full_control

# The --min-cpu-platform is critical for nested-virt.
$ pgcloud compute instances create $USER-image-builder \
    --image-family centos-7 \
    --boot-disk-size "200GB" \
    --min-cpu-platform "Intel Haswell" \
    --machine-type n1-standard-2 \
    --scopes $SCOPES
```

Then, from that VM, execute the
``contrib/cirrus/packer/image-builder-image_base_setup.sh`` script.
Shut down the VM, and convert it into a new image-builder-image.
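
A sketch of that conversion, using the `pgcloud` alias from above (the
timestamp-suffix naming mirrors the example below; the zone is a placeholder):

```
$ pgcloud compute instances stop $USER-image-builder
$ pgcloud compute images create image-builder-image-$(date +%s) \
    --source-disk $USER-image-builder \
    --source-disk-zone <zone>
```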

Building new base-images is done by first creating a VM from an
image-builder-image and copying the credentials JSON file to it.

```
$ hack/get_ci_vm.sh image-builder-image-1541772081
...in another terminal...
$ pgcloud compute scp /path/to/gac.json $USER-image-builder-image-1541772081:.
```

Then, on the VM, change to the ``packer`` sub-directory, and build the images:

```
$ cd libpod/contrib/cirrus/packer
$ make libpod_base_images GCP_PROJECT_ID=<VALUE> \
    GOOGLE_APPLICATION_CREDENTIALS=/path/to/gac.json \
    PACKER_BUILDS=<OPTIONAL>
```

Assuming this is successful (hence the semi-automatic part), packer will
produce a ``packer-manifest.json`` output file.  This contains the base-image
names suitable for updating the ``*_BASE_IMAGE`` `env` keys in ``.cirrus.yml``.

On failure, it should be possible to determine the problem from the packer
output.  Sometimes that means setting `PACKER_LOG=1` and troubleshooting
the nested-virt calls.  It's also possible to observe the (nested) qemu-kvm
console output.  Simply set the ``TTYDEV`` parameter, for example:

```
$ make libpod_base_images ... TTYDEV=$(tty)
  ...
```

## `$SPECIALMODE`

Some tasks alter their behavior based on this value.  A summary of supported
values follows:

* `none`: Operate as normal; this is the default value if unspecified.
* `rootless`: Causes a random, ordinary user account to be created
              and utilized for testing.
* `in_podman`: Causes testing to occur within a container executed by
               podman.
* `windows`: See **darwin**.
* `darwin`: Signals the ``special_testing_cross`` task to cross-compile the remote client.
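
A rough sketch of how a test script might branch on this value (illustrative
only; the commands here are assumptions, the real handling lives in the
``contrib/cirrus`` scripts):

```
# Hypothetical sketch; the actual handling lives in the contrib/cirrus scripts.
case "${SPECIALMODE:-none}" in
    none)           ;;                                  # operate as normal
    rootless)       useradd -m testuser$RANDOM ;;       # hypothetical: random ordinary user
    in_podman)      podman run --rm <test-image> ;;     # hypothetical: test inside a container
    windows|darwin) make podman-remote-$SPECIALMODE ;;  # hypothetical cross-compile target
esac
```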