
# How to turn up a new cluster

Prow should run anywhere that Kubernetes runs. Here are the steps required to
set up a basic prow cluster on [GKE](https://cloud.google.com/container-engine/).
Prow will work on any Kubernetes cluster, so feel free to turn up a cluster
some other way and skip the first step. You can set up a project on GCP using
the [cloud console](https://console.cloud.google.com/).

## Create the cluster

I'm assuming that the `PROJECT` and `ZONE` environment variables are set.

```sh
export PROJECT=your-project
export ZONE=us-west1-a
```

Run the following to create the cluster. This will also set up `kubectl` to
point to the new cluster.

```sh
gcloud container --project "${PROJECT}" clusters create prow \
  --zone "${ZONE}" --machine-type n1-standard-4 --num-nodes 2
```

## Create the GitHub secrets

You will need two secrets to talk to GitHub. The `hmac-token` is the token that
you give to GitHub for validating webhooks. Generate it using any reasonable
randomness-generator. I like [random.org][1]. The `oauth-token` is an OAuth2 token
that has read and write access to the bot account. Generate it from the
[account's settings -> Personal access tokens -> Generate new token][2].

```sh
kubectl create secret generic hmac-token --from-file=hmac=/path/to/hook/secret
kubectl create secret generic oauth-token --from-file=oauth=/path/to/oauth/secret
```
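
One way to generate the `hmac-token` file is with `openssl` (a sketch; any
strong source of randomness works, and the path matches the placeholder above):

```sh
# Write 20 random bytes, hex-encoded, into the webhook secret file.
openssl rand -hex 20 > /path/to/hook/secret
```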

Note that GitHub events triggered by the account above are ignored by some
prow plugins. It is prudent to use a different bot account for performing
merges or rerunning tests, whether the deployment that drives the second
account is `tide` or the `submit-queue` munger.

## Run the prow components in the cluster

Run the following command to start up a basic set of prow components.

```sh
kubectl apply -f cluster/starter.yaml
```

After a moment, the cluster components will be running.

```sh
$ kubectl get deployments
NAME         DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
deck         2         2         2            2           1m
hook         2         2         2            2           1m
horologium   1         1         1            1           1m
plank        1         1         1            1           1m
sinker       1         1         1            1           1m
```

Find out your external address. It might take a couple of minutes for the IP to
show up.

```sh
$ kubectl get ingress ing
NAME      HOSTS     ADDRESS          PORTS     AGE
ing       *         an.ip.addr.ess   80        3m
```

Go to that address in a web browser and verify that the "echo-test" job has a
green check-mark next to it. At this point you have a prow cluster that is ready
to start receiving GitHub events!

## Add the webhook to GitHub

On the GitHub repo you would like to use, go to Settings -> Webhooks -> Add
webhook. You can also add org-level webhooks.

Set the payload URL to `http://<IP-FROM-INGRESS>/hook`, the content type to
`application/json`, the secret to your HMAC secret, and ask it to send everything.
After you've created your webhook, GitHub will indicate that it successfully
sent an event by putting a green checkmark under "Recent Deliveries."
    87  # Next steps
    88  
    89  ## Enable some plugins by modifying `plugins.yaml`
    90  
    91  Create a file called `plugins.yaml` and add the following to it:
    92  
    93  ```yaml
    94  plugins:
    95    YOUR_ORG/YOUR_REPO:
    96    - size
    97  ```
    98  
    99  Replace `YOUR_ORG/YOUR_REPO:` with the appropriate values. If you want, you can
   100  instead just say `YOUR_ORG:` and the plugin will run for every repo in the org.
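
For example, enabling the plugin org-wide looks like this:

```yaml
plugins:
  YOUR_ORG:
  - size
```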

Run the following to test the file, replacing the path as necessary:

```sh
bazel run //prow/cmd/config -- --plugin-path=path/to/plugins.yaml
```

There should be no errors. You can run this as a part of your presubmit testing
so that any errors are caught before you try to update.

Now run the following to update the configmap, replacing the path as necessary:

```sh
kubectl create configmap plugins --from-file=plugins=path/to/plugins.yaml --dry-run -o yaml | kubectl replace configmap plugins -f -
```

We added a make rule to do this for us:

```Make
get-cluster-credentials:
	gcloud container clusters get-credentials "$(CLUSTER)" --project="$(PROJECT)" --zone="$(ZONE)"

update-plugins: get-cluster-credentials
	kubectl create configmap plugins --from-file=plugins=plugins.yaml --dry-run -o yaml | kubectl replace configmap plugins -f -
```

Now when you open a PR, it will automatically be labelled with a `size/*`
label. When you make a change to the plugin config and push it with `make
update-plugins`, you do not need to redeploy any of your cluster components.
They will pick up the change within a few minutes.

## Add more jobs by modifying `config.yaml`

Create a file called `config.yaml`, and add the following to it:

```yaml
periodics:
- interval: 10m
  agent: kubernetes
  name: echo-test
  spec:
    containers:
    - image: alpine
      command: ["/bin/date"]
postsubmits:
  YOUR_ORG/YOUR_REPO:
  - name: test-postsubmit
    agent: kubernetes
    spec:
      containers:
      - image: alpine
        command: ["/bin/printenv"]
presubmits:
  YOUR_ORG/YOUR_REPO:
  - name: test-presubmit
    trigger: "(?m)^/test this"
    rerun_command: "/test this"
    context: test-presubmit
    always_run: true
    skip_report: true
    agent: kubernetes
    spec:
      containers:
      - image: alpine
        command: ["/bin/printenv"]
```

Run the following to test the file, replacing the path as necessary:

```sh
bazel run //prow/cmd/config -- --config-path=path/to/config.yaml
```

Now run the following to update the configmap.

```sh
kubectl create configmap config --from-file=config=path/to/config.yaml --dry-run -o yaml | kubectl replace configmap config -f -
```

We use a make rule:

```Make
update-config: get-cluster-credentials
	kubectl create configmap config --from-file=config=config.yaml --dry-run -o yaml | kubectl replace configmap config -f -
```

Presubmits and postsubmits are triggered by the `trigger` plugin. Be sure to
enable that plugin by adding it to the list you created in the last section.
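
For example, the `plugins.yaml` from the previous section would become:

```yaml
plugins:
  YOUR_ORG/YOUR_REPO:
  - size
  - trigger
```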

Now when you open a PR it will automatically run the presubmit that you added
to this file. You can see it on your prow dashboard. Once you are happy that it
is stable, switch `skip_report` to `false`. Then, it will post a status on the
PR. When you make a change to the config and push it with `make update-config`,
you do not need to redeploy any of your cluster components. They will pick up
the change within a few minutes.

When you push a new change, the postsubmit job will run.

For more information on the job environment, see [How to add new jobs][3].

## Run test pods in a different namespace or a different cluster

You may choose to keep prowjobs or run tests in a different namespace. First
create the namespace by `kubectl create -f`ing this:

```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: prow
```

Now, in `config.yaml`, set `prowjob_namespace` or `pod_namespace` to the
name from the YAML file. You can then use RBAC roles to limit what test pods
can do.
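
As a sketch, a minimal RBAC policy for the namespace above might look like the
following. The role name and rule set are hypothetical; grant only what your
tests actually need:

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: test-pods   # hypothetical name
  namespace: prow
rules:
- apiGroups: [""]
  resources: ["pods", "pods/log"]
  verbs: ["get", "list", "watch"]
```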

You may choose to run test pods in a separate cluster entirely. Create a secret
containing the following:

```yaml
endpoint: https://<master-ip>
clientCertificate: <base64-encoded cert>
clientKey: <base64-encoded key>
clusterCaCertificate: <base64-encoded cert>
```
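
If you have the raw PEM files on hand rather than already-encoded values, the
base64 fields can be produced with the `base64` tool. A sketch; the file names
are hypothetical:

```sh
# -w0 disables line wrapping (GNU coreutils) so each value stays on one line.
base64 -w0 client.crt   # value for clientCertificate
base64 -w0 client.key   # value for clientKey
base64 -w0 ca.crt       # value for clusterCaCertificate
```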

You can learn these by running `gcloud container clusters describe` on your
cluster. Then, mount this secret into the prow components that need it and set
the `--build-cluster` flag to the location you mount it at. For instance, you
will need to merge the following into the plank deployment:

```yaml
spec:
  containers:
  - name: plank
    args:
    - --build-cluster=/etc/cluster/cluster
    volumeMounts:
    - mountPath: /etc/cluster
      name: cluster
      readOnly: true
  volumes:
  - name: cluster
    secret:
      defaultMode: 420
      secretName: build-cluster
```

## Configure SSL

I suggest using [kube-lego][4] for automatic Let's Encrypt integration. If you
already have a cert then follow the [official docs][5] to set up HTTPS
termination. Promote your ingress IP to a static IP. On GKE, run:

```sh
gcloud compute addresses create [ADDRESS_NAME] --addresses [IP_ADDRESS] --region [REGION]
```

Point the DNS record for your domain at that ingress IP. The convention
for naming is `prow.org.io`, but of course that's not a requirement.

Then, install kube-lego as described in its readme. You don't need to run it in
a separate namespace.
[1]: https://www.random.org/strings/?num=1&len=16&digits=on&upperalpha=on&loweralpha=on&unique=on&format=html&rnd=new
[2]: https://github.com/settings/tokens
[3]: ./README.md#how-to-add-new-jobs
[4]: https://github.com/jetstack/kube-lego
[5]: https://kubernetes.io/docs/concepts/services-networking/ingress/#tls