github.com/SUSE/skuba@v1.4.17/ci/infra/testrunner/README.md

# Testrunner

## Contents

- [Summary](#summary)
- [Design](#design)
- [Configuration](#configuration-parameters)
  - [Packages](#packages)
  - [Utils](#utils)
  - [Platform](#platform)
    - [Terraform](#terraform)
    - [Openstack](#openstack)
    - [VMware](#vmware)
  - [Skuba](#skuba)
  - [Kubectl](#kubectl)
  - [Log](#log)
  - [Test](#test)
- [Environment setup](#environment-setup)
  - [Local setup](#local-setup)
  - [CI setup](#jenkins-setup)
- [Usage](#usage)
  - [General CLI options](#general-cli-options)
  - [Provision](#provision-command)
  - [Node commands](#node-commands)
    - [Node Upgrade](#node-upgrade-command)
    - [Ssh](#ssh-command)
  - [Test](#test-command)
- [Examples](#examples)
  - [Create K8s Cluster](#create-k8s-cluster)
  - [Collect logs](#collect-logs)
  - [Install using registration code](#install-using-registration-code)
  - [Install packages from mirror](#install-packages-from-mirror)

## Summary

Testrunner is a CLI tool for setting up an environment for running e2e tests, abstracting away the mechanism used to provide the test infrastructure. It can be used as a stand-alone tool, running either locally or as part of a CI pipeline. It provides commands for provisioning the infrastructure, deploying a k8s cluster using `skuba`, and running tests. It also provides [a library](tests/README.md) for developing `pytest`-based tests.


## Design

The `testrunner` is composed of multiple components, as described in the figure below:
* The `testrunner.py` frontend CLI application, which receives CLI options, loads a configuration, and executes the selected command. The configuration is loaded from a yaml file (by default, [vars.yaml](vars.yaml)) and merged with environment variables.
* A set of supporting classes which offer functionality for setting up and interacting with a cluster and executing tests.
  These classes wrap external tools such as `skuba`, `terraform`, `kubectl` and `pytest`. The commands offered by the `testrunner.py` CLI application mostly expose the functionality of these classes, adding only the required glue code.
* In the case of the test command, the `testrunner` not only wraps the `pytest` testing tool ([tests/driver.py](tests/driver.py)), but also offers a test library ([tests/conftest.py](tests/conftest.py)) which implements reusable test functionality as well as `fixtures` to facilitate [test development](tests/README.md). These fixtures use the libraries offered by testrunner for setting up the test infrastructure, deploying the cluster, interacting with the cluster and the nodes, cleaning up after execution, and executing common validation checks, among other tasks.

The objective of this design is to maintain a clear separation between the following concerns:
* User interface, including configuration (`testrunner.py` and the `BaseConfig` class)
* Wrapping external tools (supporting libraries for `skuba`, `terraform`, `kubectl`)
* Reusable test functionality (`conftest.py`)
* Test logic (diverse test suites)

Note that tests can be executed directly using `pytest`, but it is more convenient to execute them using the testrunner, resulting in a consistent user experience.

```
                                            | env variables
                                            v
            +--------------+        +----------------+     +---------------+
CLI options |              |<-------|                |     | Configuration |
----------->|  Testrunner  |        | Initialization |<----|  (vars.yaml)  |
            |              |----+   |                |     |               |
            +--------------+    |   +----------------+     +---------------+
                    |           |
     Test command   |           |  Setup commands
                    v           v  (provision, bootstrap, ...)
            +--------------+   +--------------+
            |    pytest    |   |  Supporting  |   Wrap skuba,
            |   wrapper    |   |  libraries   |   terraform,
            |              |   |              |   kubectl
            +--------------+   +--------------+
                    |                  ^
     Invokes with   |                  |  Use
     cli options    v                  |
            +--------------+   +--------------+
            |              |   |              |   reusable test
            |    pytest    |   | Test library |   fixtures
            |              |   |              |
            +--------------+   +--------------+
                    |                  ^
           Execute  |                  |
                    v                  |
            +--------------+           |  Use
            |              |           |
            |    Tests     |-----------+
            |              |
            +--------------+
```

## Configuration parameters

Testrunner can be configured by means of:

- A yaml configuration file (defaults to `vars.yaml` in the current directory)
- Environment variables that override the configuration. Every configuration option of the form `<section>.<variable>` can be overridden by an environment variable `SECTION_VARIABLE`. For example, `skuba.binpath` is overridden by `SKUBA_BINPATH`.
- CLI options which override configuration parameters such as the logging level (see [Usage](#usage))

The following sections document the configuration options. The CLI arguments are described in the [Usage section](#usage).

### Packages

The `packages` section configures the source of the packages to be installed in the nodes:

* additional_pkgs: list of additional packages to be installed in the nodes. For example, for installing SUSE certificates for self-signed packages in development environments:
```
packages:
  additional_pkgs:
    - "ca-certificates-suse"
```
* additional_repos: repositories to be added to the nodes, for example, for installing maintenance updates. It takes the form of a map:
```
packages:
  additional_repos:
    repo1: url/to/repo1
    repo2: url/to/repo2
```
* mirror: URL of the repository mirror to be used when setting up the skuba nodes, replacing the URL of the repositories defined in terraform.
  Used, for instance, to switch to development repositories or internal repositories when running in the CI pipeline.
* registry_code: code used for registering the CaaSP product. If specified, the registries from the tfvars are ignored. Additional repositories can still be defined using the `maintenance` configuration parameter.


### Utils

This section configures the utils module used for executing commands.

* ssh_key: specifies the location of the key used to access nodes. The default is to use the user's key located at `$HOME/.ssh/id_rsa`.
* ssh_sock: name of the socket used to communicate with the ssh-agent. Default is `/tmp/testrunner_ssh_sock`

Example:
```
utils:
  ssh_sock: "/path/to/ssh-agent/socket"
```

### Platform

This section configures general platform-independent parameters. Platform-dependent parameters are defined in the corresponding sections ([Terraform](#terraform), [Openstack](#openstack), [VMware](#vmware)).

- log_dir: path to the directory where platform logs are collected. Defaults to `$WORKSPACE/platform_logs`

```
log_dir: "/path/to/log/dir"
```

#### Terraform

General settings for terraform-based platforms such as [Openstack](#openstack) and [VMware](#vmware).

* internal_net: name of the network used when provisioning the platform. Defaults to `stack_name`
* lb: specifications for the load balancer(s)
* nodeuser: the user name used to log in to the platform nodes. Optional.
* master: specifications for the master(s)
* plugin_dir: directory used for retrieving terraform plugins.
  If not set, plugins are installed using terraform's [discovery mechanism](https://www.terraform.io/docs/extend/how-terraform-works.html#discovery)
* retries: maximum number of attempts to recover from failures during terraform provisioning
* stack_name: the unique name of the platform stack on the shared infrastructure, used as a prefix by many resources such as networks and nodes, among others. Default is "$USER"
* tfdir: path to the terraform files. Testrunner must have write permissions to this directory. Defaults to `$WORKSPACE/ci/infra`.
* tfvars: name of the terraform variables file to be used. Defaults to "terraform.tfvars.json.ci.example"
* workdir: working directory in which the tfout file will be generated. Default is `$WORKSPACE`
* worker: specifications for the worker(s)

Example:
```
terraform:
  stack_name: "my-test-stack"
```

#### Openstack

* openrc: path to the environment setup script

Example:
```
openstack:
  openrc: "/home/myuser/my-openrc.sh"
```

#### VMware

* env_file: path to environment variables file

Example:
```
vmware:
  env_file: "/path/to/env/file"
```

### Skuba

The Skuba section defines the location and execution options for the `skuba` command. As `testrunner` can be used either from a local development or testing environment or from a CI pipeline, the configuration allows defining the location of the binary.

* binpath: path to the skuba binary. Default is "$WORKSPACE/go/bin/skuba"
* cluster: name of the cluster. Default is "test-cluster"
* verbosity: verbosity level for skuba command execution
* workdir: working directory in which the cluster is initialized. Default is "$WORKSPACE"

Example:
```
skuba:
  binpath: "/usr/bin/skuba"
  verbosity: 10
```

### Kubectl

The kubectl section defines the configuration of the kubectl tool.
* binpath: path to the kubectl binary. Defaults to `/usr/bin/kubectl`
* kubeconf: path to the kubeconfig file. Defaults to `<workspace>/test-cluster/admin.conf`

### Log

Testrunner sends output to both console and file log handlers, configured using the following `log` variables:

* file: path to the file that receives a copy of the log at `DEBUG` verbosity. Default is "$WORKSPACE/testrunner.log"
* level: verbosity level for console output. Can be any of `DEBUG`, `INFO`, `WARNING`, `ERROR`. Defaults to `INFO`.
* overwrite: boolean that indicates whether the content of the log file must be overwritten (`True`) or log entries must be appended at the end of the file if it exists. Defaults to `False` (do not overwrite)
* quiet: boolean that indicates whether `testrunner` will send output to the console (`False`) or execute silently (`True`). Quiet mode is useful when `testrunner` is used as a library. Defaults to `False`.

Example:
```
log:
  level: DEBUG
```

### Test

* no_destroy: boolean that indicates whether provisioned resources should be kept when the test ends (`True`) or destroyed (`False`). Defaults to `False`

```
no_destroy: True  # keep resources after test ends
```

## Environment Setup

This section details how to set up `testrunner`.

### Local setup

Copy `vars.yaml` to `/path/to/myvars.yaml`, set the variables according to your environment and needs, and use the `--vars /path/to/myvars.yaml` CLI argument when running `testrunner`.

#### Work Environment

Several components of `testrunner` require a working directory; in particular, `skuba` uses it to maintain the test cluster configuration. The content of this directory can be erased or overwritten by `testrunner`. Be sure to create a directory to be used as workspace which is NOT located under your local working copy of the `skuba` project.
By default, the working directory is taken from the environment variable `WORKSPACE`:

```
export WORKSPACE="/path/to/workspace"
```


#### skuba and platform

Set the `skuba` and `terraform` parameters depending on how you are testing `skuba`:
* If testing from local source:
```
skuba:
  binpath: "/path/to/go/bin/directory"
```

Be sure to set the `terraform.tfdir` directory to point to the `ci/infra` directory in the local `skuba` repo:
```
terraform:
  tfdir: "/path/to/local/skuba/repo/ci/infra"
```


* If testing from an installed package:

Use the skuba binary installed from the package:
```
skuba:
  binpath: "/usr/bin/"
```

You must copy the terraform files installed from the package to a work directory and set the `tfdir` directory accordingly:
```
terraform:
  tfdir: "/path/to/terraform/files"
```

You must provide the ssh key to connect to the cluster nodes, as the `shared_id` key used in development is not available. By default, your `id_rsa` key will be used, but you can provide any key:

```
utils:
  ssh_key: "path/to/id_rsa"
```

#### Openstack

1. Download your openrc file from openstack

2. Optionally, add your openstack password to the downloaded openrc.sh as shown below:
```
# With Keystone you pass the keystone password.
#echo "Please enter your OpenStack Password for project $OS_PROJECT_NAME as user $OS_USERNAME: "
#read -sr OS_PASSWORD_INPUT
#export OS_PASSWORD=$OS_PASSWORD_INPUT
export OS_PASSWORD="YOUR PASSWORD"
```
3. Set the path to the openrc file in the testrunner's vars file:

```
openstack:
  openrc: "/path/to/openrc.sh"
```

or as an environment variable: `export OPENSTACK_OPENRC=/path/to/openrc.sh`

#### VMware

1. Create an environment file, e.g.
   `vmware-env.sh`, with the following content:
```
#!/usr/bin/env bash

export VSPHERE_SERVER="vsphere.cluster.endpoint.hostname"
export VSPHERE_USER="username@vsphere.cluster.endpoint.hostname"
export VSPHERE_PASSWORD="password"
export VSPHERE_ALLOW_UNVERIFIED_SSL="true"
```

2. Set the path to the VMware environment file in the testrunner's vars file:

```
vmware:
  env_file: "/path/to/vmware-env.sh"
```
or as an environment variable: `export VMWARE_ENV_FILE=/path/to/vmware-env.sh`

3. Be sure to use the `-p` or `--platform` argument when invoking `testrunner` and set it to `vmware`, otherwise `openstack` is used.

#### Libvirt

`testrunner` can provision a cluster of virtual machines using the terraform libvirt provider. The only noticeable difference from the other platforms is the dependency on the terraform libvirt provider plugin, which is neither available from the official terraform plugin site nor delivered as part of the CaaSP packages. However, it is available from the development [CaaSP repositories](http://download.suse.de/ibs/Devel:/CaaSP:/4.0/SLE_15_SP1/) for SLE15-SP1 or from the public [openSUSE repositories](https://build.opensuse.org/package/show/systemsmanagement:terraform:unstable/terraform-provider-libvirt) for non SLE15-SP1 hosts. Note that it requires an updated version of libvirt (4.1.0 or above).

There are three configuration variables required for libvirt operations, all configurable from the configuration yaml file or by environment variables like any other variable in the yaml file:

```yaml
libvirt:
  uri: "qemu:///system"  #os.getenv("LIBVIRT_URI")
  keyfile: ""            #os.getenv("LIBVIRT_KEYFILE")
  image_uri: ""          #os.getenv("LIBVIRT_IMAGE_URI")
```

`uri` is the URI used by the libvirt client to connect to the libvirt host; use `qemu:///system` for local libvirt services.
`keyfile` is the path of the keyfile used to connect to the libvirt `uri`.
This path is appended to the `uri` as a query parameter. Given a remote ssh URI such as `qemu+ssh://<user>@<libvirt_host>/system`, defining a `keyfile` turns it into `qemu+ssh://<user>@<libvirt_host>/system?keyfile=<keyfile_path>`.
`image_uri` is the URI that will be used to pull the image for the VM deployment; usually this points to some JeOS image. Note that the image is expected to include cloud-init.

### Jenkins Setup

In your Jenkins file, you need to set up environment variables which will replace options in the yaml file. This is more convenient than having to edit the yaml file in the CI pipeline.

#### Work environment

By default, Jenkins has a `WORKSPACE` environment variable, so the workspace will be picked up automatically.

#### Skuba

Jenkins checks out the `skuba` repository under the `workspace` directory and generates the binaries also under the `workspace`, which are the default locations. Therefore, there is no need to specify any location:

```
skuba:
  binpath: ""
```

#### Terraform

It is advisable to use a unique id related to the job execution as the terraform stack name, one that does not contain slashes and is at most 70 bytes long:

```
TERRAFORM_STACK_NAME = "${BUILD_NUMBER}-${JOB_NAME.replaceAll("/","-")}".take(70)
```

#### Openstack

Set the path to the `openrc` file using Jenkins's built-in `credentials` directive.

```
OPENSTACK_OPENRC = credentials('openrc')
```

#### VMware

Set the path to the `env_file` using Jenkins's built-in `credentials` directive.
```
VMWARE_ENV_FILE = credentials('vmware-env')
```

#### Example

```
environment {
    OPENSTACK_OPENRC = credentials('openrc')
    TERRAFORM_STACK_NAME = "${JOB_NAME}-${BUILD_NUMBER}"
    GITHUB_TOKEN = credentials('github-token')
    PLATFORM = 'openstack'
}
```

## Usage

### General CLI options

```
./testrunner --help
usage:
    This script is meant to be run manually on test servers, developer desktops, or Jenkins.
    This script supposed to run on python virtualenv from testrunner. Requires root privileges.
    Warning: it removes docker containers, VMs, images, and network configuration.

       [-h] [-v YAML_PATH] [-p {openstack,vmware,bare-metal,libvirt}] [-c]
       [-l {DEBUG,INFO,WARNING,ERROR}]
       {info,config,get_logs,cleanup,provision,bootstrap,deploy,status,cluster-upgrade-plan,join-node,remove-node,node-upgrade,join-nodes,ssh,test,inhibit_kured}
       ...

positional arguments:
  {info,get_logs,cleanup,provision,bootstrap,deploy,status,cluster-upgrade-plan,join-node,remove-node,node-upgrade,join-nodes,ssh,test,inhibit_kured}
                        command
    info                ip info
    config              print configuration
    get_logs            gather logs from nodes
    cleanup             cleanup created skuba environment
    provision           provision nodes for cluster in your configured
                        platform e.g: openstack, vmware.
    bootstrap           bootstrap k8s cluster
    deploy              initializes, bootstrap and join all nodes k8s
    status              check K8s cluster status
    cluster-upgrade-plan
                        Cluster upgrade plan
    check-node          check node health
    join-node           add node in k8s cluster with the given role.
    remove-node         remove node from k8s cluster.
    node-upgrade        upgrade kubernetes version in node
    join-nodes          add multiple provisioned nodes k8s.
    ssh                 Execute command in node via ssh.
    test                execute tests
    inhibit_kured       Prevent kured to reboot nodes


optional arguments:
  -h, --help            show this help message and exit
  -v YAML_PATH, --vars YAML_PATH
                        path for platform yaml file. Default is vars.yaml. eg:
                        -v myconfig.yaml
  -p {openstack,vmware,bare-metal,libvirt}, --platform {openstack,vmware,bare-metal,libvirt}
                        The platform you're targeting. Default is openstack
  -l {DEBUG,INFO,WARNING,ERROR}, --log-level {DEBUG,INFO,WARNING,ERROR}
                        log level
  -c, --print-conf      prints the configuration

```

### Provision command

```
optional arguments:
  -h, --help            show this help message and exit
  -m MASTER_COUNT, -master-count MASTER_COUNT
                        number of masters nodes to be deployed. eg: -m 2
  -w WORKER_COUNT, --worker-count WORKER_COUNT
                        number of workers nodes to be deployed. eg: -w 2
```

### Bootstrap

```
optional arguments:
  -h, --help            show this help message and exit
  -k KUBERNETES_VERSION, --kubernetes-version KUBERNETES_VERSION
                        kubernetes version
  -c, --cloud-provider  Use cloud provider integration
  -t TIMEOUT, --timeout TIMEOUT
                        timeout for waiting a node to become ready (seconds)
  -m R M, --registry-mirror R M
                        Add to the registry R a mirror M. If an image is
                        available at the mirror it will be preferred, otherwise
                        the image in the original registry is used. This
                        argument can be used multiple times, then mirrors will
                        be tried in that order.
                        Example:
                        --registry-mirror registry.example.com/path test-registry.example.com/path
```

### Deploy

```
optional arguments:
  -h, --help            show this help message and exit
  -k KUBERNETES_VERSION, --kubernetes-version KUBERNETES_VERSION
                        kubernetes version
  -c, --cloud-provider  Use cloud provider integration
  -t TIMEOUT, --timeout TIMEOUT
                        timeout for waiting a node to become ready (seconds)
  -m R M, --registry-mirror R M
                        Add to the registry R a mirror M. If an image is
                        available at the mirror it will be preferred, otherwise
                        the image in the original registry is used. This
                        argument can be used multiple times, then mirrors will
                        be tried in that order. Example:
                        --registry-mirror registry.example.com/path test-registry.example.com/path
```

### Join nodes

```
  -h, --help            show this help message and exit
  -m MASTERS, --masters MASTERS
                        Specify how many masters to join. Default is all
  -w WORKERS, --workers WORKERS
                        Specify how many workers to join. Default is all
  -t TIMEOUT, --timeout TIMEOUT
                        timeout for waiting the master nodes to become ready (seconds)
```

### Check cluster

Checks the status of the cluster. If no check is specified, all checks that apply to the stage are executed.

```
  -c CHECKS [CHECKS ...], --check CHECKS [CHECKS ...]
                        check to be executed (multiple checks can be specified)
  -s STAGE, -stage STAGE
                        only execute checks that apply to this stage
```

### Node commands

Common parameters

```
  -h, --help            show this help message and exit
  -r {master,worker}, --role {master,worker}
                        role of the node to be added or deleted. eg: --role
                        master
  -n NODE, --node NODE  node to be added or deleted.
                        eg: -n 0

```

#### Join Node

Joins a node to the cluster with the given role.

```
  -t TIMEOUT, --timeout TIMEOUT
                        timeout for waiting a node to become ready (seconds)
```

#### Node Upgrade command

Upgrades a node.

```
  -h, --help            show this help message and exit
  -a {plan,apply}, --action {plan,apply}
                        action: plan or apply upgrade
```

#### Ssh command

Executes a command in a node.

```
  -c ..., --cmd ...     remote command and its arguments. e.g ls -al. Must be
                        last argument for ssh command
```

#### Check command

Checks the status of a node. If no check is specified, all checks that apply to the node's role are executed.

```
  -c CHECKS [CHECKS ...], --check CHECKS [CHECKS ...]
                        check to be executed (multiple checks can be specified)
  -s STAGE, -stage STAGE
                        only execute checks that apply to this stage
```

### Test command

```
optional arguments:
  -h, --help            show this help message and exit
  -f MARK, --filter MARK
                        Filter the tests based on markers
  -j JUNIT, --junit JUNIT
                        Name of the xml file to record the results to.
  -m MODULE, --module MODULE
                        folder with the tests
  -s TEST_SUITE, --suite TEST_SUITE
                        test file name
  -t TEST, --test TEST  test to execute
  -l, --list            only list tests to be executed
  -v, --verbose         show all output from testrunner libraries
  --skip-setup {provisioned,bootstrapped,deployed}
                        Skip the given setup step. 'provisioned' For when you
                        have already provisioned the nodes. 'bootstrapped' For
                        when you have already bootstrapped the cluster.
                        'deployed' For when you already have a fully deployed
                        cluster.
  --traceback {long,short,line,no}
                        level of detail in traceback for test failure

```

## Examples

### Create K8s Cluster

1. Deploy nodes to openstack

   ```./testrunner provision```

2.
   Initialize the control plane

   ```./testrunner bootstrap```

3. Join nodes

   ```./testrunner join-node --role worker --node 0```

4. Use K8s

   Once your nodes are bootstrapped, the `$WORKSPACE/test-cluster` folder will be created. Your kubeconfig file, named `admin.conf`, will be located inside this test-cluster folder.

   ```
   chang@~/Workspace/vNext/test-cluster$ kubectl get pods --all-namespaces --kubeconfig=./admin.conf
   NAMESPACE     NAME                                  READY   STATUS    RESTARTS   AGE
   kube-system   cilium-6mnrh                          1/1     Running   0          3m
   kube-system   cilium-z9rqm                          1/1     Running   0          3m
   kube-system   coredns-559fbd6bb4-gw7cn              1/1     Running   0          4m
   kube-system   coredns-559fbd6bb4-jqt4r              1/1     Running   0          4m
   kube-system   etcd-my-master-0                      1/1     Running   0          3m
   kube-system   kube-apiserver-my-master-0            1/1     Running   0          3m
   kube-system   kube-controller-manager-my-master-0   1/1     Running   0          3m
   kube-system   kube-proxy-782z2                      1/1     Running   0          4m
   kube-system   kube-proxy-kf7g5                      1/1     Running   0          3m
   kube-system   kube-scheduler-my-master-0            1/1     Running   0          3m
   ```

### Collect logs

```./testrunner get_logs```

All collected logs are stored at `path/to/workspace/platform_logs/`

The logs currently being collected are the cloud-init logs for each of the nodes:

    /var/run/cloud-init/status.json
    /var/log/cloud-init-output.log
    /var/log/cloud-init.log

These are each stored in their own folder, named `path/to/workspace/platform_logs/{master|worker}_ip_address/`

### Install using registration code

1. Configure the registration code to be passed to the nodes:

   `vars.yaml`
   ```
   packages:
     registry_code: "<registry code>"
   ```
2.
   Configure `testrunner` to use a `skuba` binary compatible with the version installed in the nodes:

   `vars.yaml`
   ```
   skuba:
     binpath: "/path/to/skuba"
   ```

### Install packages from mirror

Specify the mirror and enable the installation of the certificates package:

`vars.yaml`
```
packages:
  mirror: "my.mirror.site"
  certificates: "certificates-package"
```
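### Override configuration with environment variables

Any `section.variable` option can also be supplied through the environment instead of editing the vars file, following the `SECTION_VARIABLE` naming rule described in the [Configuration parameters](#configuration-parameters) section; this is convenient in CI pipelines. The sketch below illustrates that substitution rule in Python. It is an illustrative approximation only, not testrunner's actual implementation:

```python
import os

def apply_env_overrides(config, environ=os.environ):
    """Replace each `section.variable` option with the value of the
    SECTION_VARIABLE environment variable, when one is defined."""
    for section, variables in config.items():
        if not isinstance(variables, dict):
            continue
        for name in variables:
            env_name = f"{section}_{name}".upper()
            if env_name in environ:
                variables[name] = environ[env_name]
    return config

# skuba.binpath is overridden by SKUBA_BINPATH; skuba.cluster is untouched
config = {"skuba": {"binpath": "$WORKSPACE/go/bin/skuba", "cluster": "test-cluster"}}
apply_env_overrides(config, {"SKUBA_BINPATH": "/usr/bin/skuba"})
print(config["skuba"]["binpath"])  # /usr/bin/skuba
```

For example, exporting `SKUBA_BINPATH=/usr/bin/skuba` before running `./testrunner` has the same effect as setting `skuba.binpath` in the vars file.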