# Mantle: Gluing Container Linux together

This repository is a collection of utilities for developing Container Linux. Most of the
tools are for uploading, running, and interacting with Container Linux instances running
locally or in a cloud.

## Overview
Mantle is composed of many utilities:
 - `cork` for handling the Container Linux SDK
 - `gangue` for downloading from Google Storage
 - `kola` for launching instances and running tests
 - `kolet`, an agent for kola that runs on instances
 - `ore` for interfacing with cloud providers
 - `plume` for releasing Container Linux

All of the utilities support the `help` command to get a full listing of their subcommands
and options.

## Tools

### cork
Cork is a tool for working with Container Linux images and the SDK.

#### cork create
Download and unpack the Container Linux SDK.

`cork create`

#### cork enter
Enter the SDK chroot, and optionally run a command. The command and its
arguments can be given after `--`.

`cork enter -- repo sync`

#### cork download-image
Download a Container Linux image into `$PWD/.cache/images`.

`cork download-image --platform=qemu`

#### Building Container Linux with cork
See [Modifying Container Linux](https://coreos.com/os/docs/latest/sdk-modifying-coreos.html) for
an example of using cork to build a Container Linux image.

### gangue
Gangue is a tool for downloading and verifying files from Google Storage with authenticated requests.
It is primarily used by the SDK.

#### gangue get
Get a file from Google Storage and verify it using GPG.

### kola
Kola is a framework for testing software integration in Container Linux instances
across multiple platforms. It is primarily designed to operate within
the Container Linux SDK for testing software that has landed in the OS image.
Ideally, all software needed for a test should be included by building
it into the image from the SDK.

Kola supports running tests on multiple platforms, currently QEMU, GCE,
AWS, VMware vSphere, Packet, and OpenStack. In the future, systemd-nspawn and other
platforms may be added.
As a design principle, local platforms do not rely on access to the
Internet, minimizing external dependencies. Any network
services required are built directly into kola itself. Machines on cloud
platforms do not have direct access to kola itself, so tests may depend on
Internet services such as discovery.etcd.io or quay.io instead.

Kola outputs assorted logs and test data to `_kola_temp` for later
inspection.

Kola is still under heavy development, and it is expected that its
interface will continue to change.

By default, kola uses the `qemu` platform with the most recently built image
(assuming it is run from within the SDK).

#### kola run
The run command invokes the main kola test harness. It
runs any tests whose registered names match a glob pattern.

`kola run <glob pattern>`

#### kola list
The list command lists all of the available tests.

#### kola spawn
The spawn command launches Container Linux instances.

#### kola mkimage
The mkimage command creates a copy of the input image with its primary console set
to the serial port (`/dev/ttyS0`). This causes more output to be logged on the console,
which is also logged in `_kola_temp`. This can only be used with QEMU images and must
be used with the `coreos_*_image.bin` image, *not* the `coreos_*_qemu_image.img`.

#### kola bootchart
The bootchart command launches an instance, then generates an SVG of the boot process
using `systemd-analyze`.

#### kola updatepayload
The updatepayload command launches a Container Linux instance, then updates it by
sending an update to its update_engine. The update is the `coreos_*_update.gz` in the
latest build directory.

#### kola subtest parallelization
Subtests can be parallelized by adding `c.H.Parallel()` at the top of the inline function
given to `c.Run`. It is not recommended to use the `FailFast` flag in tests that use
this functionality, as it can have unintended results.

#### kola test namespacing
The top-level namespace of tests should fit into one of the following categories:
1. Groups of tests targeting specific packages/binaries may use that
namespace (ex: `docker.*`)
2. Tests that target multiple supported distributions may use the
`coreos` namespace.
3. Tests that target singular distributions may use the distribution's
namespace.

#### kola test registration
Registering kola tests currently requires that the tests are registered
under the kola package and that the test function itself lives within
the mantle codebase.

Groups of similar tests are registered in an init() function inside the
kola package. `Register(*Test)` is called per test. A kola `Test`
struct requires a unique name, and a single function that is the entry
point into the test. Additionally, userdata (such as a Container Linux
Config) can be supplied. See the `Test` struct in
[kola/register/register.go](https://github.com/coreos/mantle/tree/master/kola/register/register.go)
for a complete list of options.

#### kola test writing
A kola test is a Go function that is passed a `platform.TestCluster` to
run code against. Its signature is `func(platform.TestCluster)`,
and it must be registered and built into the kola binary.

A `TestCluster` implements the `platform.Cluster` interface and will
give you access to a running cluster of Container Linux machines. A test writer
can interact with these machines through this interface.

To see test examples, look under
[kola/tests](https://github.com/coreos/mantle/tree/master/kola/tests) in the
mantle codebase.

For a quickstart, see [kola/README.md](/kola/README.md).

#### kola native code
For some tests, the `Cluster` interface is limited and it is desirable to
run native Go code directly on one of the Container Linux machines. This is
currently possible by using the `NativeFuncs` field of a kola `Test`
struct. This works like a limited RPC interface.

`NativeFuncs` is used similarly to the `Run` field of a registered kola
test. It registers and names functions in nearby packages. These
functions, unlike the `Run` entry point, must be manually invoked inside
a kola test using a `TestCluster`'s `RunNative` method. The function
itself is then run natively on the specified running Container Linux instances.

For more examples, look at the
[coretest](https://github.com/coreos/mantle/tree/master/kola/tests/coretest)
suite of tests under kola. These tests were ported into kola and make
heavy use of the native code interface.

#### Manhole
The `platform.Manhole()` function creates an interactive SSH session which can
be used to inspect a machine during a test.

### kolet
kolet is run on kola instances to run native functions in tests. Generally kolet
is not invoked manually.

### ore
Ore provides a low-level interface for each cloud provider. It has commands
related to launching instances on a variety of platforms (gcloud, aws,
azure, esx, and packet) within the latest SDK image. Ore closely mimics the underlying
API for each cloud provider, so the interface for each provider
is different. See each provider's `help` command for the available actions.

Note: when uploading to some cloud providers (e.g. gce), the image may need to be packaged
with a different `--format` (e.g. `--format=gce`) when running `image_to_vm.sh`.

### plume
Plume is the Container Linux release utility. Releases are done in two stages,
each with its own command: pre-release and release. Both of these commands are idempotent.

#### plume pre-release
The pre-release command does as much of the release process as possible without making anything public.
This includes uploading images to cloud providers (except those like gce which don't allow us to upload
images without making them public).

#### plume release
Publish a new Container Linux release. This makes the images uploaded by pre-release public and uploads
images that pre-release could not. It copies the release artifacts to public storage buckets and updates
the directory index.

#### plume index
Generate and upload index.html objects to turn a Google Cloud Storage
bucket into a publicly browsable file tree. Useful if you want something
like Apache's directory index for your software download repository.
Plume release handles this as well, so it does not need to be run as part of
the release process.

## Platform Credentials
Each platform reads the credentials it uses from different files. The `aws`, `azure`, `do`, `esx` and `packet`
platforms support selecting from multiple configured credentials, called "profiles". The examples below
are for the "default" profile, but other profiles can be specified in the credentials files and selected
via the `--<platform-name>-profile` flag:
```
kola spawn -p aws --aws-profile other_profile
```

### aws
`aws` reads the `~/.aws/credentials` file used by Amazon's aws command-line tool.
It can be created using the `aws` command:
```
$ aws configure
```
To configure a different profile, use the `--profile` flag:
```
$ aws configure --profile other_profile
```

The `~/.aws/credentials` file can also be populated manually:
```
[default]
aws_access_key_id = ACCESS_KEY_ID_HERE
aws_secret_access_key = SECRET_ACCESS_KEY_HERE
```

To install the `aws` command in the SDK, run:
```
sudo emerge --ask awscli
```

### azure
`azure` uses `~/.azure/azureProfile.json`. This can be created using the `az` [command](https://docs.microsoft.com/en-us/cli/azure/install-azure-cli):
```
$ az login
```
It also requires that the environment variable `AZURE_AUTH_LOCATION` points to a JSON file (this can also be set via the `--azure-auth` parameter). The JSON file will require a service provider active directory account to be created.

Service provider accounts can be created via the `az` command (the output will contain an `appId` field which is used as the `clientId` variable in the `AZURE_AUTH_LOCATION` JSON):
```
az ad sp create-for-rbac
```

The client secret can be created inside of the Azure portal when looking at the service provider account under the `Azure Active Directory` service on the `App registrations` tab.

You can find your subscriptionId and tenantId in `~/.azure/azureProfile.json` via:
```
cat ~/.azure/azureProfile.json | jq '{subscriptionId: .subscriptions[].id, tenantId: .subscriptions[].tenantId}'
```

The JSON file exported to the variable `AZURE_AUTH_LOCATION` should be generated by hand and have the following contents:
```
{
  "clientId": "<service provider id>",
  "clientSecret": "<service provider secret>",
  "subscriptionId": "<subscription id>",
  "tenantId": "<tenant id>",
  "activeDirectoryEndpointUrl": "https://login.microsoftonline.com",
  "resourceManagerEndpointUrl": "https://management.azure.com/",
  "activeDirectoryGraphResourceId": "https://graph.windows.net/",
  "sqlManagementEndpointUrl": "https://management.core.windows.net:8443/",
  "galleryEndpointUrl": "https://gallery.azure.com/",
  "managementEndpointUrl": "https://management.core.windows.net/"
}
```

### do
`do` uses `~/.config/digitalocean.json`. This can be configured manually:
```
{
    "default": {
        "token": "token goes here"
    }
}
```

### esx
`esx` uses `~/.config/esx.json`. This can be configured manually:
```
{
    "default": {
        "server": "server.address.goes.here",
        "user": "user.goes.here",
        "password": "password.goes.here"
    }
}
```

### gce
`gce` uses the `~/.boto` file. When the `gce` platform is first used, it will print
a link that can be used to log into your account with gce and get a verification code
you can paste in. This will populate the `.boto` file.

See [Google Cloud Platform's Documentation](https://cloud.google.com/storage/docs/boto-gsutil)
for more information about the `.boto` file.

### openstack
`openstack` uses `~/.config/openstack.json`. This can be configured manually:
```
{
    "default": {
        "auth_url": "auth url here",
        "tenant_id": "tenant id here",
        "tenant_name": "tenant name here",
        "username": "username here",
        "password": "password here",
        "user_domain": "domain id here",
        "floating_ip_pool": "floating ip pool here",
        "region_name": "region here"
    }
}
```

`user_domain` is required on some newer versions of OpenStack using Keystone V3 but is optional on older versions. `floating_ip_pool` and `region_name` can be optionally specified here to be used as a default if not specified on the command line.

### packet
`packet` uses `~/.config/packet.json`. This can be configured manually:
```
{
    "default": {
        "api_key": "your api key here",
        "project": "project id here"
    }
}
```

### qemu
`qemu` is run locally and needs no credentials, but does need to be run as root.

### qemu-unpriv
`qemu-unpriv` is run locally and needs no credentials. It has a restricted set of functionality compared to the `qemu` platform, such as:

- Single node only, no machine to machine networking
- DHCP provides no data (forces several tests to be disabled)
- No [Local cluster](platform/local/)