
# Continuous Integration Setup

This document describes setting up a CI pipeline that can be used to prepare releases for studio go runner.

The steps described in this document can also be used by individual developers to perform build and release tasks on locally checked out code.

studio go runner is intended to run in diverse hardware environments using GPU enabled machines.  As a result, providing a free, publicly hosted CI/CD platform is cost prohibitive.  As an alternative the studio go runner CI and CD pipeline has been designed to center around the use of container images and is flexible about the hosting choices for specific steps in the pipeline.  Steps within the pipeline use image registries to demarcate the boundaries between pipeline steps.  Pipeline steps are typically implemented as jobs within a Kubernetes cluster, allowing the pipeline to be hosted using Kubernetes deployed anywhere from laptops through to fully productionized clusters.

Triggering steps in the pipeline can be predicated on local/remote git commits, or any form of image publishing/releasing action against the image registry.  Image registries may be hosted on a self provisioned Kubernetes cluster either within the cloud, or on private infrastructure.  This allows testing to be done using the CI pipeline on local laptops and workstations as well as in cloud or data center environments.  docker.io was chosen as the registry for the resulting build images because it supports selectively exposing only public repositories from github accounts, preserving privacy.

Pipelines can also be entirely self hosted, for example upon the microk8s Kubernetes distribution.  This style of pipeline is intended for circumstances where individuals have access to a single machine, have limited internet bandwidth, do not wish to host images on external services or hosts, or do not wish to incur costs for cloud resources, and might for example have a local GPU that can be used for testing.

These instructions first detail how a docker.io or local microk8s registry can be set up to trigger builds on github commits.  Instructions then detail how to make use of Keel, https://keel.sh/, to pull CI images into a cluster and run the pipeline.  Finally this document describes the use of Uber's Makisu to deliver production images to the docker.io / quay.io image hosting service(s).  docker hub is used as it is the most reliable of the image registries that Makisu supports; quay.io could not be made to work for this step.

<!--ts-->

Table of Contents
=================

* [Continuous Integration Setup](#continuous-integration-setup)
* [Table of Contents](#table-of-contents)
* [Pipeline Overview](#pipeline-overview)
* [Prerequisites](#prerequisites)
  * [duat tools](#duat-tools)
  * [docker and the microk8s Kubernetes distribution installation](#docker-and-the-microk8s-kubernetes-distribution-installation)
  * [Optional tooling and Image Registries](#optional-tooling-and-image-registries)
* [A word about privacy](#a-word-about-privacy)
* [Build step Images (CI)](#build-step-images-ci)
  * [CUDA and Compilation builder image preparation](#cuda-and-compilation-builder-image-preparation)
  * [Internet based registry build images](#internet-based-registry-build-images)
    * [quay.io account](#quayio-account)
    * [quay.io release configuration](#quayio-release-configuration)
  * [Development and local build image bootstrapping](#development-and-local-build-image-bootstrapping)
* [Continuous Integration](#continuous-integration)
  * [CI front office setup](#ci-front-office-setup)
  * [Triggering builds](#triggering-builds)
    * [Local source image builds](#local-source-image-builds)
    * [git based source image builds](#git-based-source-image-builds)
  * [Locally deployed keel testing and CI](#locally-deployed-keel-testing-and-ci)
* [Monitoring and fault checking](#monitoring-and-fault-checking)
  * [Bootstrapping](#bootstrapping)
  * [microk8s Registry](#microk8s-registry)
  * [Image Builder](#image-builder)
  * [Keel components](#keel-components)
<!--te-->

# Pipeline Overview

The CI pipeline for the studio go runner project uses docker images as inputs to a series of processing steps that make up the pipeline.  The following sections describe the pipeline components, with an additional section describing build failure diagnosis and tracking.  This pipeline is designed for use by engineers with Kubernetes familiarity, without requiring a complex CI/CD platform and the chrome that typically accompanies the domain specific platforms and languages employed by dedicated build-engineer roles.

The pipeline is initiated through the creation of a builder docker image that contains a copy of the source code and the tooling needed to perform the builds and testing.

The first stage in the pipeline is to execute the build and test steps using a builder image.  If these are successful the pipeline will then trigger a production image creation step that will also push the resulting production image to an image registry.

As described above the major portions of the pipeline can be illustrated by the following figure:

```console
+---------------+       +---------------------+      +---------------+        +-------------------+      +----------------------+      +-----------------+
|               |       |                     |      |               |        |                   |      |                      |      |                 |
|   Reference   |       |      git-watch      |      |     Makisu    |        |                   +----> |    Keel Triggers     |      | Container Based |
|    Builder    +-----> |    Bootstrapping    +----> |               +------> |  Image Registry   |      |                      +----> |  CI Build Test  |
|     Image     |       |      Copy Pod       |      | Image Builder |        |                   | <----+ Build, Test, Release |      |                 |
|               |       |                     |      |               |        |                   |      |                      |      |                 |
+---------------+       +---------------------+      +---------------+        +-------------------+      +----------------------+      +-----------------+
```

Inputs and outputs to pipeline steps consist of images that, when pushed to a registry, trigger downstream build steps.

Before using the pipeline users and developers should be familiar with several technologies.

1. Kubernetes

   A good technical and working knowledge is needed, including the Kubernetes resource abstractions as well as operational know-how
   of how to navigate between and within clusters, how to use pods, and how to extract logs and pod descriptions to locate and diagnose failures.

   Kubernetes forms a base level skill for developers and users of studio go runner open source code.

   This does not exclude users that wish to use or deploy Kubernetes free installations of studio go runner binary releases.

2. Docker and Image registry functionality

   Experience with image registries, an understanding of image tagging, and knowledge of semantic versioning 2.0.

3. git and github.com

   Awareness of the release, tagging, and branching features of github.

Other software systems used include:

1. keel.sh
2. Makisu from Uber
3. Go from Google
4. Kustomize from the Kubernetes sigs

Monitoring the progress of tasks within the pipeline can be done by inspecting pod states, and extracting logs of pods responsible for various processing steps.  The monitoring and diagnosis section at the end of this document contains further information.

# Prerequisites

## duat tools

Instructions within this document make use of the go based duat tools, including stencil.  These tools can be obtained for Linux from the github release point, for example https://github.com/karlmutch/duat/releases/download/0.13.0/stencil-linux-amd64.

```console
$ mkdir -p ~/bin
$ wget -O ~/bin/semver https://github.com/karlmutch/duat/releases/download/0.13.0/semver-linux-amd64
$ chmod +x ~/bin/semver
$ wget -O ~/bin/stencil https://github.com/karlmutch/duat/releases/download/0.13.0/stencil-linux-amd64
$ chmod +x ~/bin/stencil
$ wget -O ~/bin/git-watch https://github.com/karlmutch/duat/releases/download/0.13.0/git-watch-linux-amd64
$ chmod +x ~/bin/git-watch
$ export PATH=~/bin:$PATH
```

For self hosted images using microk8s the additional git-watch tool is used to trigger CI/CD image bootstrapping as the alternative to using docker.io based image builds.

Some tools such as petname are installed by the build scripts using 'go get' commands.

## docker and the microk8s Kubernetes distribution installation

You will also need to install docker and microk8s using Ubuntu snap.  When using docker installs only the snap distribution for docker is compatible with the microk8s deployment.

```console
sudo snap install docker --classic
sudo snap install microk8s --classic
```

When using microk8s during development builds the setup involves simply enabling the services that you need to run under microk8s, in order to support a docker registry and also to enable any GPU resources you have present to aid in testing.

```console
export LOGXI='*=DBG'
export LOGXI_FORMAT='happy,maxcol=1024'

export SNAP=/snap
export PATH=$SNAP/bin:$PATH

export KUBE_CONFIG=~/.kube/microk8s.config
export KUBECONFIG=~/.kube/microk8s.config

microk8s.stop
microk8s.start
microk8s.config > $KUBECONFIG
microk8s.enable registry:size=30Gi storage dns gpu
```
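
After enabling the addons it can be worth confirming that microk8s came back up before continuing.  A minimal, hedged check, assuming microk8s was installed via snap as above; the fallback message is purely illustrative:

```console
# Report addon status if microk8s is present, otherwise note its absence.
microk8s.status 2>/dev/null || echo "microk8s not installed on this machine"
```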

Now we need to perform some customization.  The first step is to locate the IP address for the host that can be used, and then define an environment variable to reference the registry.

```console
export RegistryIP=`microk8s.kubectl --namespace container-registry get pod --selector=app=registry -o jsonpath="{.items[*].status.hostIP}"`
export RegistryPort=32000
echo $RegistryIP
172.31.39.52
```
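
With the address in hand, a quick way to confirm the registry is answering is to query the Docker Registry HTTP API v2 catalog endpoint.  This is a hedged sketch, assuming the registry addon enabled earlier is listening on $RegistryIP:$RegistryPort; the fallback message is illustrative only:

```console
# Query the registry catalog; print a message when it is unreachable.
curl -s --max-time 5 "http://${RegistryIP}:${RegistryPort}/v2/_catalog" || echo "registry not reachable"
```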

Now that we have an IP address for our unsecured microk8s registry we need to add it to the containerd configuration file used by microk8s, marking this specific endpoint as permitted for use with HTTP rather than HTTPS, as follows:

```console
sudo vim /var/snap/microk8s/current/args/containerd-template.toml
```

Add the last two lines in the following example to the file, substituting in the IP address we selected:

```console
    [plugins.cri.registry]
      [plugins.cri.registry.mirrors]
        [plugins.cri.registry.mirrors."docker.io"]
          endpoint = ["https://registry-1.docker.io"]
        [plugins.cri.registry.mirrors."local.insecure-registry.io"]
          endpoint = ["http://localhost:32000"]
        [plugins.cri.registry.mirrors."172.31.39.52:32000"]
          endpoint = ["http://172.31.39.52:32000"]
```

```console
sudo vim /var/snap/docker/current/config/daemon.json
```

Add the insecure-registries line in the following example to the file, substituting in the IP address we obtained from $RegistryIP:

```console
{
    "log-level":        "error",
    "storage-driver":   "overlay2",
    "insecure-registries" : ["172.31.39.52:32000"]
}
```

The services then need restarting.  Note that the image registry will be cleared of any existing images in this step:

```console
microk8s.disable registry
microk8s.stop
sudo snap disable docker
sudo snap enable docker
microk8s.start
microk8s.enable registry:size=30Gi
```

## Optional tooling and Image Registries

There are some optional steps that you should complete prior to using the build system, depending upon your goal, such as releasing the build.

If you intend on marking a tagged github version of the build once successful you will need to export a GITHUB\_TOKEN environment variable.  Without this defined the build will not write any release tags etc to github.

If you intend on releasing the container images then you will need to populate docker login credentials for the quay.io repository:

```console
$ docker login quay.io
Username: [Your quay.io user name]
Password: [Your quay.io password]
WARNING! Your password will be stored unencrypted in /home/kmutch/snap/docker/423/.docker/config.json.
Configure a credential helper to remove this warning. See
https://docs.docker.com/engine/reference/commandline/login/#credentials-store

Login Succeeded
```

# A word about privacy

Many of the services that provide image hosting use Single Sign On and credentials management with your source code control platform of choice.  As a consequence these services will often gain access to any and all repositories, private or otherwise, that you might have access to within your account.  In order to preserve privacy and maintain fine grained control over the visibility of your private repositories it is recommended that when using docker hub and other services you create a service account that has the minimal level of access to repositories necessary to implement your CI/CD features.

If the choice is made to use self hosted microk8s, a container registry is deployed on your laptop or desktop that is not secured and relies on listening only to the local host network interface.  If a network is used in conjunction with this you will need to secure your equipment and access to networks to prevent exposing the images produced by the build, and also to prevent other actors from placing docker images onto your machine.

# Build step Images (CI)

The studio go runner project uses Docker images to completely encapsulate build steps, including a full git clone of the source comprising releases or development source tree(s) copied into the image.  Using image registries, or alternatively the duat git-watch tool, it is possible to build an image from the git repository as commits occur and to then host the resulting image.  A local registry can be used to host builder images using microk8s, while Internet registries offer hosting for open source projects for free, and also offer paid hosted plans for users requiring privacy.

If you intend on using this pipeline to compile locally modified code then this can be done by creating the build step images and then running the containers using volume mounts that point at your locally checked-out source code, or in the case of the pipeline by updating the build step images with code and pushing them to a docker registry that the pipeline is observing.

The git-watch option serves on-premise users, individual contributors, and small teams that do not have large financial resources to employ cloud hosted subscription services, or for whom the latency of moving images and data through residential internet connections is prohibitive.

Before commencing a build of the runner a reference, or base, image is created that contains all of the build tooling needed.  This image changes only when the build tooling needs upgrading or changing.  The reason for doing this is that this image is both time consuming to create and quite large due to dependencies on NVidia CUDA, Python, and Tensorflow.  Because of this the base image build is done manually and then propagated to image registries that your build environment can access.  Typically, unless there is a major upgrade, most developers will be able to simply perform a docker pull from the docker.io registry to get a copy of this image.  The first set of instructions details building the base image.

## CUDA and Compilation builder image preparation

In order to prepare for producing product specific build images, a base image is employed that contains the infrequently changing build software on which the StudioML and AI frameworks used depend.

If you wish to simply use an existing build configuration then you can pull the prebuilt image into your local docker registry from docker hub using the following command:

```console
docker pull leafai/studio-go-runner-dev-base:0.0.5
```

For situations involving an on-premise or single developer machine the base image can be built with the `Dockerfile_base` file using the following command:

```console
docker build -t studio-go-runner-dev-base:working -f Dockerfile_base .
export RepoImage=`docker inspect studio-go-runner-dev-base:working --format '{{ index .Config.Labels "registry.repo" }}:{{ index .Config.Labels "registry.version"}}'`
docker tag studio-go-runner-dev-base:working $RepoImage
docker rmi studio-go-runner-dev-base:working
docker push $RepoImage
```
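
The docker inspect template above composes the image reference from two labels, registry.repo and registry.version, baked into the base image by Dockerfile\_base.  As a rough sketch of what it produces, using stand-in label values rather than values read from your build:

```console
# Compose repo:version the same way the inspect template does.
repo="leafai/studio-go-runner-dev-base"    # stand-in for the registry.repo label
version="0.0.5"                            # stand-in for the registry.version label
RepoImage="${repo}:${version}"
echo "$RepoImage"
```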

If you are performing a build of a new version of the base image you can push the new version for others to use, provided you have the credentials needed to access the leafai account:

```console
$ docker tag $RepoImage $DockerUsername/$RepoImage
$ docker login docker.io
Authenticating with existing credentials...
WARNING! Your password will be stored unencrypted in /home/kmutch/.docker/config.json.
Configure a credential helper to remove this warning. See
https://docs.docker.com/engine/reference/commandline/login/#credentials-store

Login Succeeded
$ docker push $RepoImage
c7125c35d2a0: Pushing [>                                                  ]  25.01MB/2.618GB
1a5dc4559fc9: Pushing [===================>                               ]  62.55MB/163MB
150f158a1cca: Pushing [=====>                                             ]   72.4MB/721.3MB
e9fe4eadf101: Pushed
7499c2deaea7: Pushing [====>                                              ]  67.39MB/705.3MB
5e0543625ca3: Pushing [====>                                              ]  61.79MB/660.9MB
fb88fc3593c5: Waiting
5f6ee5ba06b5: Waiting
3249250da32f: Waiting
31d600707965: Waiting
b67f23c2fd52: Waiting
297fd071ca2f: Waiting
2f0d1e8214b2: Waiting
7dd604ffa87f: Waiting
aa54c2bc1229: Waiting
```

The next section gives a summary of what needs to be done in order to use the docker hub service, or a local docker registry, to provision an image repository that auto-builds builder images from the studio go runner project and pushes these to the docker hub image registry.  The second section covers use cases for secured environments, along with developer workstations and laptops.

## Internet based registry build images

### quay.io account

The first step is to create or login to an account on quay.io.  When creating an account it is best to ensure before starting that you have a browser window open to github.com using the account that you wish to use for accessing code on github, to prevent any unintended accesses to private repositories from other github accounts.  As you create the account on quay.io you can choose to link it automatically to github, granting application access from quay.io to your github authorized applications.  This is needed so that quay.io can poll your projects for any pushed git commit changes in order to trigger image building, if you choose to use that feature.

Having logged in you can now create a repository using the "Create Repository +" button at the top right corner of your web page, underneath the account related drop down menu.

The first screen will allow you to specify that you wish to create an image repository and assign it a name, also set the visibility to public, and to 'Link to a GitHub Repository Push'.  This indicates that any push of a commit or tag will result in a container build being triggered.

Depending on access permissions you may need to fork the studio-go-runner repository to your personal github account for quay.io to be able to find the github repository.

Pushing the next button will then cause the browser to request github to authorize access from quay.io and will prompt you to allow this authorization to be set up for future interactions between the two platforms.  Again, be sure you are assuming the role of the most recently logged in github user and that the one being authorized is the one you intend to allow Quay to obtain access to.

After the authorization is enabled, the next web page is displayed which allows the organization and account to be chosen from which the image will be built.  Step through the next two screens to select the repository that will be used and then push the continue button.

You can then specify the branch(es) that can be used for the builds to meet your own needs.  Pushing continue will then allow you to select the Dockerfile that will act as your source for the new image.  When using studio go runner a Dockerfile called Dockerfile\_standalone is versioned in the source code repository that will allow a fully standalone container to be created that can perform the entire build, test, release life cycle for the software.  Using a slash indicates the top level of the go runner repo.

Using continue will then prompt for the Context of the build, which should be set to '/'.  You can now click through the rest of the selections and will end up with a fully populated trigger for the repository.

You can now trigger the first build and test cycle for the repository.  Once the repository has been built you can proceed to setting up a Kubernetes test cluster that can pull the image(s) from the repository as they are updated via git commits followed by a git push.

### quay.io release configuration

Now that we have a quay.io account to be used for software releases we can configure the local build environment to use this account.  This is done by using the Registry environment variable to store a yaml block with the account details in it.

When adding a password for quay.io to your registry.yaml you should generate an encrypted password and place that into the file.  To do this go into the 'Account Settings' menu at the top right of the quay.io screen; the bottom menu item "User Settings" has at the top a "Docker CLI Password" entry with a highlighted link 'Generate Encrypted Password', which will generate a long encrypted string that is then used in your file.

```console
cat registry.yaml
quay.io:
  .*:
    security:
      tls:
        client:
          disabled: false
      basic:
        username: [account_name]
        password: [account_password]
```

The next step is to store the registry yaml settings into an environment variable for use with the rest of these instructions:

```console
export Registry=`cat registry.yaml`
```
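
Because the variable must carry the whole multi-line yaml block, it can be useful to see the round trip in isolation.  A self-contained sketch, where the file path and account values are placeholders rather than real credentials:

```console
# Write a placeholder registry.yaml and load it into the variable whole.
cat > /tmp/registry.yaml <<'EOF'
quay.io:
  .*:
    security:
      basic:
        username: [account_name]
        password: [account_password]
EOF
export Registry=`cat /tmp/registry.yaml`
echo "$Registry" | head -n 1
```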

## Development and local build image bootstrapping

This use case uses git commits to trigger builds of CI/CD workflow images occurring within a locally deployed Kubernetes cluster.  In order to support local Kubernetes clusters the microk8s tool is used, https://microk8s.io/.

Use cases for local clusters include secured environments; snap based installation of the microk8s tool can be done by downloading the snap file.  Another option is to download a git export of the microk8s tool and build it within your secured environment.  If you are using a secured environment adequate preparations should also be made for obtaining copies of any images that you will need for running your applications and also reference images needed by the microk8s install, such as the images for the DNS server, the container registry, the Makisu image from docker hub, and other images that will be used.  In order to be able to do this you can pre-pull images for the build and push them to a private registry.  If you need access to multiple registries, you can create one secret for each registry.  Kubelet will merge any imagePullSecrets into a single virtual .docker/config.json.  For more information please see, https://kubernetes.io/docs/concepts/containers/images/#using-a-private-registry.

While you can run within a walled garden secured network environment, the microk8s cluster does use an unsecured registry, which means that the machine and any accounts on which builds are running should be secured independently.  If you wish to secure images that are produced by your pipeline then you should modify your ci\_containerize\_microk8s.yaml file, or a copy of the same, to point at a private secured registry, such as a self hosted https://trow.io/ instance.

The CI bootstrap step is the name given to the initial CI pipeline image creation step.  The purpose of this step is to generate a docker image containing all of the source code needed for a build and test.

When using container based pipelines the image registry being used becomes a critical part of the pipeline, for storing the images that are pulled into processing steps and also for acting as a repository of images produced during pipeline execution.  When using microk8s two registries will exist within the local system: one provisioned by docker in the host system, and a second hosted by microk8s that acts as your kubernetes registry.

Images moving within the pipeline will generally be handled by the Kubernetes registry, however the pipeline can access this registry in two ways: the first using the Kubernetes APIs, and the second by treating the registry as a server openly available outside of the cluster.  These requirements can be met by using the internal Kubernetes registry via the microk8s IP addresses and also via the address of the host, all referencing the same registry.

The first step is the loading of the base image containing the needed build tooling.  The base image can be loaded into your local docker environment and then subsequently pushed to the cluster registry.  If you have followed the instructions in the 'CUDA and Compilation builder image preparation' section then this image when pulled will come from the locally stored image, alternatively the image will be pulled from the docker.io repository.

```console
docker pull leafai/studio-go-runner-dev-base:0.0.5
docker tag leafai/studio-go-runner-dev-base:0.0.5 localhost:$RegistryPort/leafai/studio-go-runner-dev-base:0.0.5
docker tag leafai/studio-go-runner-dev-base:0.0.5 $RegistryIP:$RegistryPort/leafai/studio-go-runner-dev-base:0.0.5
docker push localhost:$RegistryPort/leafai/studio-go-runner-dev-base:0.0.5
docker push $RegistryIP:$RegistryPort/leafai/studio-go-runner-dev-base:0.0.5
```
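
The same tag-and-push sequence can be expressed as a loop over the two registry endpoints.  A sketch that only prints the registry-qualified names being produced, assuming RegistryIP and RegistryPort as exported earlier; the fallback IP is illustrative:

```console
RegistryIP=${RegistryIP:-172.31.39.52}
RegistryPort=${RegistryPort:-32000}
base=leafai/studio-go-runner-dev-base:0.0.5
for host in localhost "$RegistryIP"; do
    # In a live environment the docker tag and docker push commands
    # above would be run against each of these names.
    printf '%s:%s/%s\n' "$host" "$RegistryPort" "$base"
done
```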
   353  
   354  Once the base image is loaded and has been pushed into the kubernetes container registry, git-watch is used to initiate image builds inside the cluster that, use the base image, git clone source code from fresh commits, and build scripts etc to create an entirely encapsulated CI image.
   355  
   356  # Continuous Integration
   357  
   358  ## CI front office setup
   359  
   360  Having an image repository store build images with everything needed to build will allow a suitably configured Kubernetes cluster to query for bootstrapped build images output by manually building the source image, or using a tool like git-watch and to use these for triggering a building, testing, and integration.
   361  
   362  The git-watch tool monitors a git repository and polls looking for pushed commits.  Once a change is detected the code is cloned to be built a Makisu pod is started for creating images within the Kubernetes cluster.  The Makisu build then pushes build images to a user nominated repository which becomes the triggering point for the CI/CD downstream steps.
   363  
   364  Because localized images are intended to assist in conditions where image transfers are expensive time wise it is recommended that the first step be to deploy the redis cache as a Kubernetes service.  This cache will be employed by Makisu when container images builds are performed by Makisu. The cache pods can be started by using the following commands:
   365  
   366  ```console
   367  $ microk8s.kubectl apply -f ci_containerize_cache.yaml
   368  namespace/makisu-cache created
   369  pod/redis created
   370  service/redis created
   371  ```
   372  
   373  Because we can run both in a local developer mode to build images inside the Kubernetes cluster running on our local machine or as a fully automatted CI pipeline in an unsupervised manner the git-watch can be run both using a shell inside a checked-out code based, or as a pod inside a Kubernetes cluster in an unattended fashion.
   374  
   375  The studio go runner standalone build image can be used within a go runner deployment to perform testing and validation against a live minio (s3 server) and a RabbitMQ (queue server) instances deployed within a single Kubernetes namespace.  The definition of the deployment is stored within the source code repository, in the ci\_keel.yaml.
   376  
   377  The build deployment contains an annotated kubernetes deployment of the build image that when deployed alongside a keel Kubernetes instance can react to fresh build images to cycle automatically through build, test, release image cycles.
   378  
   379  Keel is documented at https://keel.sh/, installation instruction can also be found at, https://keel.sh/guide/installation.html.  Once deployed keel can be left to run as a background service observing Kubernetes deployments that contain annotations it is designed to react to.  Keel will watch for changes to image repositories and will automatically upgrade the Deployment pods as new images are seen causing the CI/CD build logic encapsulated inside the images to be triggered as they they are launched as part of a pod.
   380  
   381  The commands used to deploy keel into an existing Kubernetes cluster might appear as follows:
   382  
   383  ```
   384  mkdir -p ~/project/src/github.com/keel-hq
   385  cd ~/project/src/github.com/keel-hq
   386  git clone https://github.com/keel-hq/keel.git
   387  microk8s.kubectl create -f ~/project/src/github.com/keel-hq/keel/deployment/deployment-rbac.yaml
   388  mkdir -p ~/project/src/github.com/leaf-ai
   389  cd ~/project/src/github.com/leaf-ai
   390  git clone https://github.com/leaf-ai/studio-go-runner.git
   391  cd studio-go-runner
   392  git checkout [branch name]
   393  
   394  export GIT_BRANCH=`echo '{{.duat.gitBranch}}'|stencil -supress-warnings - | tr '_' '-' | tr '\/' '-'`
   395  
   396  # Follow the instructions for setting up the Prerequisites for compilation in the main README.md file
   397  ```
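
The GIT\_BRANCH export above relies on the duat stencil tool.  As a rough illustration of what that step produces, the same substitution can be applied with tr alone; the branch name below is only an example:

```
# Illustrative only: make a branch name safe for use as an image tag and
# as a DNS label by mapping '_' and '/' characters to '-'
branch="feature/212_kops_1_11"
GIT_BRANCH=$(printf '%s' "${branch}" | tr '_/' '--')
echo "${GIT_BRANCH}"    # feature-212-kops-1-11
```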
   398  
   399  The image name for the build Deployment is used by keel to watch for updates, as defined in the ci\_keel.yaml Kubernetes configuration file(s).  The keel yaml file is supplied as part of the service code inside the Deployment resource definition. The keel labels within the ci\_keel.yaml file dictate under what circumstances the keel server will trigger a new pod for the build and test to be created in response to the reference build image changing as git commit and push operations are performed.  Information about these labels can be found at, https://keel.sh/v1/guide/documentation.html#Policies.
   400  
   401  ## Triggering builds
   402  
   403  In order for the CI to run, a source build image is generated as the very first step.  Generally creation of the source build image is either done manually, or triggered via git commit activity.
   404  
   405  The appearance of the source build image within a docker image repository will then act as a trigger for the CI build to occur.
   406  
   407  ### Local source image builds
   408  
   409  One of the options that exists for build and release is to make use of a locally checked out source tree and to perform the build, test, release cycle locally.  Local source builds make use of Kubernetes, typically a locally deployed cluster using microk8s, a Kubernetes distribution for Ubuntu that runs on a single physical host.  A typical full build is initiated using the build.sh script found in the root directory, assuming you have already completed the installation steps documented above.
   410  
   411  ```console
   412  $ ./build.sh
   413  ```
   414  
   415  A faster development cycle option is to perform the build directly at the command line, which shortens the cycle to a refresh of the source code before pushing the build to the docker image registry being monitored by the CI pipeline.
   416  
   417  ```
   418  working_file=$$.studio-go-runner-working
   419  stencil -input Dockerfile_standalone > ${working_file}
   420  docker build -t leafai/studio-go-runner-standalone-build:$GIT_BRANCH -f ${working_file} .
   421  rm ${working_file}
   422  ```
   423  
   424  The next step then is to push the image to the docker image registry being used by the CI pipeline.
   425  
   426  ```
   427  docker tag leafai/studio-go-runner-standalone-build:$GIT_BRANCH $RegistryIP:32000/leafai/studio-go-runner-standalone-build:$GIT_BRANCH
   428  docker push $RegistryIP:32000/leafai/studio-go-runner-standalone-build:$GIT_BRANCH
   429  ```
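
If you wish to confirm the push succeeded, the registry's standard v2 HTTP API can be queried directly.  The commands below are a suggested check against the microk8s registry, listing the repositories it holds and the tags present for the build image:

```console
$ curl -s http://$RegistryIP:32000/v2/_catalog
$ curl -s http://$RegistryIP:32000/v2/leafai/studio-go-runner-standalone-build/tags/list
```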
   430  
   431  Remember that each time you wish to run the CI pipeline you simply rebuild the source build image, and the CI pipeline will restart itself and run.  You are now ready to deploy the [CI pipeline](#locally-deployed-keel-testing-and-ci) using keel.
   432  
   433  ### git based source image builds
   434  
   435  Triggering builds can also be done via a locally checked out git repository, or a reference to a remote repository.  In both cases git-watch is used to monitor for changes.
   436  
   437  git-watcher is a tool from the duat toolset used to initiate builds on detecting git commit events.  Commits need not be pushed when performing a locally triggered build.
   438  
   439  Once git-watcher detects changes to the code base it will use a microk8s Kubernetes job to dispatch the build, in the form of a container image, to an instance of keel running inside a Kubernetes cluster.
   440  
   441  git-watcher uses the first argument as the git repository location to be polled, with the branch name of interest denoted by the '^' character.  The downstream actions taken by git-watcher once a change is registered are configured using the ci\_containerize\_microk8s.yaml or ci\_containerize\_local.yaml files.  The yaml file contains references to the location of the container registry that will receive the image once it has been built.  The intent is that a Kubernetes task such as keel.sh will further process the image as part of a CI/CD pipeline after the Makisu step has completed; please see the section describing Continuous Integration.
   442  
   443  The following shows an example of running git-watch locally, specifying a remote git origin:
   444  
   445  ```console
   446  $ git-watch -v --job-template ci_containerize_microk8s.yaml https://github.com/leaf-ai/studio-go-runner.git^`git rev-parse --abbrev-ref HEAD`
   447  ```
   448  
   449  In cases where a locally checked-out copy of the source repository is used and commits are made locally, the following can be used to watch for commits, without pushes, and trigger builds from them using the local file path as the source code location:
   450  
   451  ```console
   452  $ git-watch -v --ignore-aws-errors --job-template ci_containerize_local.yaml `pwd`^`git rev-parse --abbrev-ref HEAD`
   453  ```
   454  
   455  You are now ready to deploy the [CI pipeline](#locally-deployed-keel-testing-and-ci) using keel.
   456  
   457  ## Locally deployed keel testing and CI
   458  
   459  The next step is to modify the ci\_keel.yaml or use the duat stencil templating tool to inject the branch name on which the development is being performed or the release prepared, and then deploy the continuous integration (CI) stack.
   460  
   461  The $Registry environment variable is used to pass your image registry username and password to any keel containers, and to the release image builder, Makisu, using a Kubernetes secret.  An example of how to set this value is included in the [quay.io release configuration](#quay.io-release-configuration) section above.
   462  
   463  You will also need the K8S\_NAMESPACE environment variable defined to create a sandbox for the builds to occur in.
   464  
   465  ```
   466  export K8S_NAMESPACE=ci-go-runner-$USER
   467  ```
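
If a previous CI sandbox already exists under the same name it can be removed before redeploying; deleting the namespace tears down all of the CI pods and services it contains:

```console
$ microk8s.kubectl delete namespace $K8S_NAMESPACE
```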
   468  
   469  The $RegistryIP and $RegistryPort are defined in the [docker and the microk8s Kubernetes distribution installation](#docker-and-the-microk8s-Kubernetes-distribution-installation) section.
   470  
   471  The $Registry environment variable is used to define the repository into which any released images will be pushed.  Before using the registry setting you should copy registry-template.yaml to registry.yaml, modify the contents, and set environment variables as detailed in the [Internet based registra build images](#Internet-based-registra-build-images) section.
   472  
   473  When a CI build is initiated multiple containers will be created to cover the two dependencies of the runner, the rabbitMQ and minio servers, along with the source builder container.  These will run together to replicate a production system for building, testing and validating the runner system.
   474  
   475  As a CI build finishes the stack will scale down the testing dependencies it uses for queuing and storage and will keep the build container alive so that logs can be examined.  At this point the CI will spin off container image builds, using Makisu, for the various platforms the runner is deployed to.  Makisu will push images that are released to quay.io.
   476  
   477  If the environment variable GITHUB\_TOKEN is present when deploying an integration stack it will be placed as a Kubernetes secret into the integration stack.  If the secret is present then upon successful build and test cycles the running container will attempt to create and deploy a release using the github release pages.
   478  
   479  ```console
   480  export GITHUB_TOKEN=[Place a github personal account token here]
   481  ```
   482  
   483  Any changes to the build source images are ignored while builds are running; only once a build completes will image upgrades be used to trigger new builds, in order to prevent premature termination.  When the build, testing, and image releases have completed, and pushed commits have been seen for the code base, the pod for the latest build will be shut down and a new pod created.
   484  
   485  When the build completes, the pods that are only useful during the actual build and test steps will be scaled back to 0 instances.  The CI script, ci.sh, will automatically spin up and down specific Kubernetes jobs and deployments when they are needed, using the microk8s.kubectl command.  Because of this your development and build cluster will need access to the Kubernetes API server to complete these tasks.  The Kubernetes API access is enabled by the ci\_keel.yaml file when the standalone build container is initialized.
   486  
   487  In the case that microk8s is being used to host images moving through the pipeline, the $Registry setting must contain the IP address and port number that the microk8s registry is using and that is accessible across the system, that is $RegistryIP and $RegistryPort.  The $Image value can be used to specify the name of the container image being used; its host name will differ because the image gets pushed from a localhost development machine and is therefore denoted by the localhost host name rather than the IP address of the registry.
   488  
   489  The following example configures build images to come from a localhost registry.
   490  
   491  ```console
   492  stencil -input ci_keel.yaml -values Registry=${Registry},Image=$RegistryIP:$RegistryPort/leafai/studio-go-runner-standalone-build:${GIT_BRANCH},Namespace=${K8S_NAMESPACE}| microk8s.kubectl apply -f -
   493  export K8S_POD_NAME=`microk8s.kubectl --namespace=$K8S_NAMESPACE get pods -o json | jq '.items[].metadata.name | select ( startswith("build-"))' --raw-output`
   494  microk8s.kubectl --namespace $K8S_NAMESPACE logs -f $K8S_POD_NAME
   495  ```
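
While waiting for the build pod to start, the pods in the sandbox namespace can be observed as they are created and scaled up and down, for example:

```console
$ watch -n 10 microk8s.kubectl --namespace=$K8S_NAMESPACE get pods
```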
   496  
   497  These instructions will be useful to those using a locally deployed Kubernetes distribution such as microk8s.  If you wish to use microk8s you should first deploy using the workstation instructions found in this source code repository at docs/workstation.md.  You can then return to this section for further information on deploying the keel based CI/CD within your microk8s environment.
   498  
   499  In the case that a test of a locally pushed docker image is needed you can build your image locally and then when the build.sh is run it will do a docker push to a microk8s cluster instance running on your workstation or laptop.  In order for the keel deployment to select the locally hosted image registry you set the Image variable for stencil to substitute into the ci\_keel.yaml file.
   500  
   501  When the release features are used the CI/CD system will make use of the Makisu image builder, authored by Uber.  Makisu allows docker containers to build images entirely within an existing container with no specialized dependencies and also without needing dind (docker in docker), or access to a docker server socket.
   502  
   503  ```console
   504  $ ./build.sh
   505  $ stencil -input ci_keel.yaml -values Registry=${Registry},Image=localhost:32000/leafai/studio-go-runner-standalone-build:${GIT_BRANCH},Namespace=${K8S_NAMESPACE}| microk8s.kubectl apply -f -
   506  ```
   507  
   508  If you are using the Image bootstrapping features of git-watch the commands would appear as follows:
   509  
   510  ```console
   511  $ stencil -input ci_keel.yaml -values Registry=$Registry,Image=$RegistryIP:$RegistryPort/leafai/studio-go-runner-standalone-build:latest,Namespace=${K8S_NAMESPACE} | microk8s.kubectl apply -f -
   512  ```
   513  
   514  In the above case the branch you are currently on dictates which bootstrapped images, based on their image tag, will be collected and used for CI/CD operations.
   515  
   516  
   517  If you wish to watch the build as it proceeds within the CI pipeline the containers output, or log, can be examined.  For interactive monitoring of the build process kubebox can be used, [c.f. github.com/astefanutti/kubebox](https://github.com/astefanutti/kubebox).
   518  
   519  
   520  # Monitoring and fault checking
   521  
   522  This section contains a description of the CI pipeline using the microk8s deployment model.  The pod related portions of the pipeline can be translated directly to cases where a full Kubernetes cluster is being used, typically when GPU testing is being undertaken.  The principal differences will be in how the image registry portions of the pipeline present.
   523  
   524  As described above the major portions of the pipeline can be illustrated by the following figure:
   525  
   526  ```console
   527  +---------------------+      +---------------+        +-------------------+      +----------------------+
   528  |                     |      |               |        |                   |      |                      |
   529  |                     |      |     Makisu    |        |                   +----> |    Keel Deployed     |
   530  |    Bootstrapping    +----> |               +------> |  Image Registry   |      |                      |
   531  |      Copy Pod       |      | Image Builder |        |                   | <----+ Build, Test, Release |
   532  |                     |      |               |        |                   |      |                      |
   533  +---------------------+      +---------------+        +-------------------+      +----------------------+
   534  ```
   535  
   536  ## Bootstrapping
   537  
   538  The first two steps of the pipeline are managed via the duat git-watch tool.  As documented within these instructions, git-watch is run using a local shell, but it can also be containerized and deployed as a docker container or pod.  The git-watch tool will output logging directly on the console and can be monitored either directly via the shell, via a docker log command, or via a microk8s.kubectl log [pod name] command, depending on the method chosen to start it.
   539  
   540  The logging for git-watch is controlled via environment variables documented at https://github.com/karlmutch/duat/blob/master/README.md.  It can be a good choice to run the git-watch tool in debug mode all the time, as this allows the last known namespaces used for builds to be retained after the build is complete for examination of logs and the like, at the expense of some extra Kubernetes resource consumption.
   541  
   542  ```console
   543  $ export LOGXI='*=DBG'
   544  $ export LOGXI_FORMAT='happy,maxcol=1024'
   545  $ git-watch -v --debug --job-template ci_containerize_microk8s.yaml https://github.com/leaf-ai/studio-go-runner.git^feature/212_kops_1_11
   546  10:33:05.219071 DBG git-watch git-watch-linux-amd64 built at 2019-04-16_13:30:30-0700, against commit id 7b7ba25c05061692e3a907a2f42a302f68f3a2cf
   547  15:02:35.519322 DBG git-watch git-watch-linux-amd64 built at 2019-04-22_11:41:41-0700, against commit id 5ff93074afd789ed8ae24d79d1bd3004daeeba86
   548  15:03:12.667279 INF git-watch task update id: d962a116-6ccb-4c56-89c8-5081e7172cbe text: volume update volume: d962a116-6ccb-4c56-89c8-5081e7172cbe phase: (v1.PersistentVolumeClaimPhase) (len=5) "Bound" namespace: gw-0-9-14-feature-212-kops-1-11-aaaagjhioon
   549  15:03:25.612810 INF git-watch task update id: d962a116-6ccb-4c56-89c8-5081e7172cbe text: pod update id: d962a116-6ccb-4c56-89c8-5081e7172cbe namespace: gw-0-9-14-feature-212-kops-1-11-aaaagjhioon phase: Pending
   550  15:03:32.427939 INF git-watch task update id: d962a116-6ccb-4c56-89c8-5081e7172cbe text: pod update id: d962a116-6ccb-4c56-89c8-5081e7172cbe namespace: gw-0-9-14-feature-212-kops-1-11-aaaagjhioon phase: Failed
   551  15:03:46.553206 INF git-watch task update id: d962a116-6ccb-4c56-89c8-5081e7172cbe text: running dir: /tmp/git-watcher/9qvdLJYmoCmquvDfjv7rbVF7BETblcb3hBBw50vUgp id: d962a116-6ccb-4c56-89c8-5081e7172cbe namespace: gw-0-9-14-feature-212-kops-1-11-aaaagjhioon
   552  15:03:46.566524 INF git-watch task completed id: d962a116-6ccb-4c56-89c8-5081e7172cbe dir: /tmp/git-watcher/9qvdLJYmoCmquvDfjv7rbVF7BETblcb3hBBw50vUgp namespace: gw-0-9-14-feature-212-kops-1-11-aaaagjhioon
   553  15:38:54.655816 INF git-watch task update id: 8d1da39a-c7f7-45ad-b332-b09750b9dd8c text: volume update namespace: gw-0-9-14-feature-212-kops-1-11-aaaagjhioon volume: 8d1da39a-c7f7-45ad-b332-b09750b9dd8c phase: (v1.PersistentVolumeClaimPhase) (len=5) "Bound"
   554  15:39:06.145428 INF git-watch task update id: 8d1da39a-c7f7-45ad-b332-b09750b9dd8c text: pod update id: 8d1da39a-c7f7-45ad-b332-b09750b9dd8c namespace: gw-0-9-14-feature-212-kops-1-11-aaaagjhioon phase: Pending
   555  15:39:07.735691 INF git-watch task update id: 8d1da39a-c7f7-45ad-b332-b09750b9dd8c text: pod update id: 8d1da39a-c7f7-45ad-b332-b09750b9dd8c namespace: gw-0-9-14-feature-212-kops-1-11-aaaagjhioon phase: Running
   556  ```
   557  
   558  Logging records Kubernetes operations that will first create a persistent volume and then copy the source code for the present commit to the volume using a proxy pod and SSH.  SSH is used to tunnel data across a socket and to the persistent volume via a terminal session streaming the data.  Once the copy operation has completed the git-watch then initiates the second step using the Kubernetes core APIs.
   559  
   560  In order to observe the copy-pod the following commands are useful:
   561  
   562  ```console
   563  $ export KUBE_CONFIG=~/.kube/microk8s.config
   564  $ export KUBECONFIG=~/.kube/microk8s.config
   565  $ microk8s.kubectl get ns
   566  ci-go-runner                                  Active   2d18h
   567  container-registry                            Active   6d1h
   568  default                                       Active   6d18h
   569  gw-0-9-14-feature-212-kops-1-11-aaaagjhioon   Active   1s
   570  keel                                          Active   3d
   571  kube-node-lease                               Active   6d1h
   572  kube-public                                   Active   6d18h
   573  kube-system                                   Active   6d18h
   574  makisu-cache                                  Active   4d19h
   575  
   576  $ microk8s.kubectl --namespace gw-0-9-14-feature-212-kops-1-11-aaaagjhioon get pods
   577  NAME                 READY   STATUS      RESTARTS   AGE
   578  copy-pod             0/1     Completed   0          2d15h
   579  imagebuilder-ts669   0/1     Completed   0          2d15h
   580  ```
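
If the copy step fails, the copy-pod can be examined within the same namespace; the describe output and logs are usually enough to locate volume binding or SSH tunneling issues.  For example, using the namespace shown above:

```console
$ microk8s.kubectl --namespace gw-0-9-14-feature-212-kops-1-11-aaaagjhioon describe pod copy-pod
$ microk8s.kubectl --namespace gw-0-9-14-feature-212-kops-1-11-aaaagjhioon logs copy-pod
```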
   581  
   582  ## microk8s Registry
   583  
   584  The microk8s registry can become large as the number of builds mounts up.  To perform a garbage collection on the registry use the following command:
   585  
   586  ```
   587  microk8s.kubectl exec --namespace container-registry -it $(microk8s.kubectl get pods --namespace="container-registry" --field-selector=status.phase=Running -o jsonpath={.items..metadata.name}) -- bin/registry garbage-collect /etc/docker/registry/config.yml
   588  ```
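
To gauge how much space a garbage collection reclaimed, the registry's storage directory can be inspected before and after the collection.  This assumes the registry image's default storage path of /var/lib/registry:

```console
$ microk8s.kubectl exec --namespace container-registry -it $(microk8s.kubectl get pods --namespace="container-registry" --field-selector=status.phase=Running -o jsonpath={.items..metadata.name}) -- du -sh /var/lib/registry
```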
   589  
   590  ## Image Builder
   591  
   592  Using the image building pod ID you may now extract logs from within the pipeline, using the -f option to follow the log until completion.
   593  
   594  ```console
   595  $ microk8s.kubectl --namespace gw-0-9-14-feature-212-kops-1-11-aaaagjhioon logs -f imagebuilder-qc429
   596  {"level":"warn","ts":1555972746.9400618,"msg":"Blacklisted /var/run because it contains a mountpoint inside. No changes of that directory will be reflected in the final image."}
   597  {"level":"info","ts":1555972746.9405785,"msg":"Starting Makisu build (version=v0.1.9)"}
   598  {"level":"info","ts":1555972746.9464102,"msg":"Using build context: /makisu-context"}
   599  {"level":"info","ts":1555972746.9719934,"msg":"Using redis at makisu-cache:6379 for cacheID storage"}
   600  {"level":"error","ts":1555972746.9831564,"msg":"Failed to fetch intermediate layer with cache ID 276f9a51: find layer 276f9a51: layer not found in cache"}
   601  {"level":"info","ts":1555972746.9832165,"msg":"* Stage 1/1 : (alias=0,latestfetched=-1)"}
   602  {"level":"info","ts":1555972746.983229,"msg":"* Step 1/19 (commit,modifyfs) : FROM microk8s-registry:5000/leafai/studio-go-runner-dev-base:0.0.5  (96902554)"}
   603  ...
   604  {"level":"info","ts":1555973113.7649434,"msg":"Stored cacheID mapping to KVStore: c5c81535 => MAKISU_CACHE_EMPTY"}
   605  {"level":"info","ts":1555973113.7652907,"msg":"Stored cacheID mapping to KVStore: a0dcd605 => MAKISU_CACHE_EMPTY"}
   606  {"level":"info","ts":1555973113.766166,"msg":"Computed total image size 7079480773","total_image_size":7079480773}
   607  {"level":"info","ts":1555973113.7661939,"msg":"Successfully built image leafai/studio-go-runner-standalone-build:feature_212_kops_1_11"}
   608  {"level":"info","ts":1555973113.7662325,"msg":"* Started pushing image 10.1.1.46:5000/leafai/studio-go-runner-standalone-build:feature_212_kops_1_11"}
   609  {"level":"info","ts":1555973113.9430845,"msg":"* Started pushing layer sha256:d18d76a881a47e51f4210b97ebeda458767aa6a493b244b4b40bfe0b1ddd2c42"}
   610  {"level":"info","ts":1555973113.9432425,"msg":"* Started pushing layer sha256:34667c7e4631207d64c99e798aafe8ecaedcbda89fb9166203525235cc4d72b9"}
   611  {"level":"info","ts":1555973114.0487752,"msg":"* Started pushing layer sha256:119c7358fbfc2897ed63529451df83614c694a8abbd9e960045c1b0b2dc8a4a1"}
   612  {"level":"info","ts":1555973114.4315908,"msg":"* Finished pushing layer sha256:d18d76a881a47e51f4210b97ebeda458767aa6a493b244b4b40bfe0b1ddd2c42"}
   613  {"level":"info","ts":1555973114.5885575,"msg":"* Finished pushing layer sha256:119c7358fbfc2897ed63529451df83614c694a8abbd9e960045c1b0b2dc8a4a1"}
   614  ...
   615  {"level":"info","ts":1555973479.759059,"msg":"* Finished pushing image 10.1.1.46:5000/leafai/studio-go-runner-standalone-build:feature_212_kops_1_11 in 6m5.99280605s"}
   616  {"level":"info","ts":1555973479.7590847,"msg":"Successfully pushed 10.1.1.46:5000/leafai/studio-go-runner-standalone-build:feature_212_kops_1_11 to 10.1.1.46:5000"}
   617  {"level":"info","ts":1555973479.759089,"msg":"Finished building leafai/studio-go-runner-standalone-build:feature_212_kops_1_11"}
   618  ```
   619  
   620  The last action of pushing the built image from the Makisu pod into our local docker registry can be seen above.  The image pushed is now available, in this case to a keel.sh namespace, and to any pods waiting on new images for performing the product build and test steps.
   621  
   622  ## Keel components
   623  
   624  The CI portion of the pipeline will seek to run the tests in a real deployment.  If you look below you will see three pods that are running within keel.  Two pods are support pods for testing: the minio pod runs a blob server that mimics the AWS S3 protocols, while the rabbitMQ server provides the queuing capability of a production deployment.  The two support pods will run with either 0 or 1 replicas and will be scaled up and down by the main build pod as the test is started and stopped.
   625  
   626  ```console
   627  $ microk8s.kubectl get ns
   628  ci-go-runner         Active   5s
   629  container-registry   Active   39m
   630  default              Active   6d23h
   631  kube-node-lease      Active   47m
   632  kube-public          Active   6d23h
   633  kube-system          Active   6d23h
   634  makisu-cache         Active   17m
   635  $ microk8s.kubectl --namespace ci-go-runner get pods                      
   636  NAME                                READY   STATUS              RESTARTS   AGE
   637  build-5f6c54b658-8grpm              0/1     ContainerCreating   0          82s
   638  minio-deployment-7f49449779-2s9d7   1/1     Running             0          82s
   639  rabbitmq-controller-dbgc7           0/1     ContainerCreating   0          82s
   640  $ microk8s.kubectl --namespace ci-go-runner logs -f build-5f6c54b658-8grpm
   641  Warning : env variable azure_registry_name not set
   642  Mon Apr 22 23:03:27 UTC 2019 - building ...
   643  2019-04-22T23:03:27+0000 DBG stencil stencil built at 2019-04-12_17:28:28-0700, against commit id 2842db335d8e7d3b4ca97d9ace7d729754032c59
   644  2019-04-22T23:03:27+0000 DBG stencil leaf-ai/studio-go-runner/studio-go-runner:0.9.14-feature-212-kops-1-11-aaaagjhioon
   645  declare -x AMQP_URL="amqp://\${RABBITMQ_DEFAULT_USER}:\${RABBITMQ_DEFAULT_PASS}@\${RABBITMQ_SERVICE_SERVICE_HOST}:\${RABBITMQ_SERVICE_SERVICE_PORT}/%2f?connection_attempts=2&retry_delay=.5&socket_timeout=5"
   646  declare -x CUDA_8_DEB="https://developer.nvidia.com/compute/cuda/8.0/Prod2/local_installers/cuda-repo-ubuntu1604-8-0-local-ga2_8.0.61-1_amd64-deb"
   647  declare -x CUDA_9_DEB="https://developer.nvidia.com/compute/cuda/9.0/Prod/local_installers/cuda-repo-ubuntu1604-9-0-local_9.0.176-1_amd64-deb"
   648  ...
   649  --- PASS: TestStrawMan (0.00s)
   650  === RUN   TestS3MinioAnon
   651  2019-04-22T23:04:31+0000 INF s3_anon_access Alive checked _: [addr 10.152.183.12:9000 host build-5f6c54b658-8grpm]
   652  --- PASS: TestS3MinioAnon (7.33s)
   653  PASS
   654  ok      github.com/leaf-ai/studio-go-runner/internal/runner     7.366s
   655  2019-04-22T23:04:33+0000 INF build.go building internal/runner
   656  ...
   657  i2019-04-22T23:10:44+0000 WRN runner stopping k8sStateLogger _: [host build-5f6c54b658-8grpm] in:
   658  2019-04-22T23:10:44+0000 INF runner forcing test mode server down _: [host build-5f6c54b658-8grpm]
   659  2019-04-22T23:10:44+0000 WRN runner http: Server closedstack[monitor.go:69] _: [host build-5f6c54b658-8grpm] in:
   660  ok      github.com/leaf-ai/studio-go-runner/cmd/runner  300.395s
   661  2019-04-22T23:10:46+0000 INF build.go building cmd/runner
   662  2019-04-22T23:11:07+0000 INF build.go renaming ./bin/runner-linux-amd64 to ./bin/runner-linux-amd64-cpu
   663  2019-04-22T23:11:27+0000 INF build.go github releasing [/project/src/github.com/leaf-ai/studio-go-runner/cmd/runner/bin/runner-linux-amd64 /project/src/github.com/leaf-ai/studio-go-runner/cmd/runner/bin/runner-linux-amd64-cpu /project/src/github.com/leaf-ai/studio-go-runner/build-.log]
   664  imagebuild-mounted starting build-5f6c54b658-8grpm
   665  2019-04-22T23:12:00+0000 DBG stencil stencil built at 2019-04-12_17:28:28-0700, against commit id 2842db335d8e7d3b4ca97d9ace7d729754032c59
   666  2019-04-22T23:12:00+0000 DBG stencil leaf-ai/studio-go-runner/studio-go-runner:0.9.14--aaaagjihjms
   667  job.batch/imagebuilder created
   668  ```
   669  
   670  You can now head over to github and, if you had the github token loaded as a secret, you will be able to see the production binaries released.
   671  
   672  The next step if enabled is for the keel build to dispatch a production container build within the Kubernetes cluster and then for the image to be pushed using the credentials supplied as a part of the original command line that deployed the keel driven CI.  Return to the first section of the continuous integration for more information.
   673  
   674  
   675  Copyright © 2019-2020 Cognizant Digital Business, Evolutionary AI. All rights reserved. Issued under the Apache 2.0 license.