# The Docker executor

AlloyCI Runner can use Docker to run builds on user-provided images. This is
possible with the use of the **Docker** executor.

The **Docker** executor, when used with AlloyCI, connects to [Docker Engine]
and runs each build in a separate and isolated container using the predefined
image that is [set up in `.alloy-ci.json`][json] and in accordance with
[`config.toml`][toml].

That way you can have a simple and reproducible build environment that can also
run on your workstation. The added benefit is that you can test all the
commands that we will explore later from your shell, rather than having to test
them on a dedicated CI server.

## Workflow

The Docker executor divides the build into multiple steps:

1. **Prepare**: Create and start the services.
1. **Pre-build**: Clone, restore cache and download artifacts from previous
   stages. This is run on a special Docker image.
1. **Build**: User build. This is run on the user-provided Docker image.
1. **Post-build**: Create cache, upload artifacts to AlloyCI. This is run on
   a special Docker image.

The special Docker image is based on [Alpine Linux] and contains all the tools
required to run the prepare step of the build: the Git binary and the Runner
binary for supporting caching and artifacts. You can find the definition of
this special image [in the official Runner repository][special-build].

## The `image` keyword

The `image` keyword is the name of the Docker image that is present in the
local Docker Engine (list all images with `docker images`) or any image that
can be found at [Docker Hub][hub]. For more information about images and Docker
Hub please read the [Docker Fundamentals][] documentation.

In short, with `image` we refer to the Docker image which will be used to
create a container in which your build will run.

If you don't specify the namespace, Docker implies `library`, which includes all
[official images](https://hub.docker.com/u/library/). That's why you'll often
see the `library` part omitted in `.alloy-ci.json` and `config.toml`.
For example, you can define an image like `image: ruby:2.1`, which is a shortcut
for `image: library/ruby:2.1`.

Then, for each Docker image there are tags, denoting the version of the image.
These are defined with a colon (`:`) after the image name. For example, for
Ruby you can see the supported tags at <https://hub.docker.com/_/ruby/>. If you
don't specify a tag (like `image: ruby`), `latest` is implied.
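
For example, a job in `.alloy-ci.json` can reference the image with an explicit
namespace and tag (the job name here is illustrative); this is equivalent to the
shorter `"image": "ruby:2.1"`:

```json
{
  "rspec": {
    "image": "library/ruby:2.1",
    "script": [
      "bundle exec rake spec"
    ]
  }
}
```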

## The `services` keyword

The `services` keyword defines just another Docker image that is run during
your build and is linked to the Docker image that the `image` keyword defines.
This allows you to access the service image during build time.

The service image can run any application, but the most common use case is to
run a database container, e.g., `mysql`. It's easier and faster to use an
existing image and run it as an additional container than to install `mysql`
every time the project is built.

### How the service is linked to the build

To better understand how container linking works, read
[Linking containers together](https://docs.docker.com/userguide/dockerlinks/).

To summarize, if you add `mysql` as a service to your application, this image
will then be used to create a container that is linked to the build container.
According to the [workflow](#workflow), this is the first step that is performed
before running the actual builds.

The service container for MySQL will be accessible under the hostname `mysql`.
So, in order to access your database service you have to connect to the host
named `mysql` instead of a socket or `localhost`.
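
For example, a minimal sketch of a job that talks to the linked MySQL service
through the `mysql` hostname (the job name, the use of the `mysql` image for
the build container, and the `MYSQL_ALLOW_EMPTY_PASSWORD` variable from the
official `mysql` image are illustrative assumptions):

```json
{
  "image": "mysql:latest",
  "services": [
    "mysql:latest"
  ],
  "variables": {
    "MYSQL_ALLOW_EMPTY_PASSWORD": "yes"
  },
  "db_check": {
    "script": [
      "mysql --host=mysql --user=root --execute='SELECT 1;'"
    ]
  }
}
```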

## Define image and services from `.alloy-ci.json`

You can simply define an image that will be used for all jobs and a list of
services that you want to use during build time.

```json
{
  "image": "ruby:2.2",
  "services": [
    "postgres:9.3"
  ],
  "before_script": [
    "bundle install"
  ],
  "test": {
    "script": [
      "bundle exec rake spec"
    ]
  }
}
```

It is also possible to define different images and services per job:

```json
{
  "before_script": [
    "bundle install"
  ],
  "test:2.1": {
    "image": "ruby:2.1",
    "services": [
      "postgres:9.3"
    ],
    "script": [
      "bundle exec rake spec"
    ]
  },
  "test:2.2": {
    "image": "ruby:2.2",
    "services": [
      "postgres:9.4"
    ],
    "script": [
      "bundle exec rake spec"
    ]
  }
}
```

## Define image and services in `config.toml`

Look for the `[runners.docker]` section:

```toml
[runners.docker]
  image = "ruby:2.1"
  services = ["mysql:latest", "postgres:latest"]
```

The image and services defined this way will be added to all builds run by
that Runner, so even if you don't define an `image` inside `.alloy-ci.json`,
the one defined in `config.toml` will be used.

## Define an image from a private Docker registry

You can also define images located on private registries that may also require
authentication.

All you have to do is be explicit on the image definition in `.alloy-ci.json`:

```json
{
  "image": "my.registry.tld:5000/namespace/image:tag"
}
```

In the example above, AlloyCI Runner will look at `my.registry.tld:5000` for the
image `namespace/image:tag`.

If the repository is private you need to authenticate your AlloyCI Runner in the
registry. Read more on [using a private Docker registry][runner-priv-reg].
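
One common approach (a sketch only; see the linked documentation for what the
Runner actually supports) is to authenticate the Docker Engine on the machine
that hosts the Runner, using the registry address from the example above:

```bash
# Run on the machine hosting the Runner's Docker Engine; you will be
# prompted for the registry credentials.
docker login my.registry.tld:5000
```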

## Accessing the services

Let's say that you need a WordPress instance to test some API integration with
your application.

You can then use, for example, the [tutum/wordpress][] image as a service in
your `.alloy-ci.json`:

```json
{
  "services": [
    "tutum/wordpress:latest"
  ]
}
```

When the build is run, `tutum/wordpress` will be started first and you will have
access to it from your build container under the hostnames `tutum__wordpress`
and `tutum-wordpress`.
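
For example (a sketch; the job name is illustrative and it assumes the build
image provides `curl`), a script could reach the service over HTTP through one
of those hostnames:

```json
{
  "services": [
    "tutum/wordpress:latest"
  ],
  "api_test": {
    "script": [
      "curl --fail http://tutum-wordpress/"
    ]
  }
}
```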

The AlloyCI Runner creates two alias hostnames for the service that you can use
alternatively. The aliases are taken from the image name following these rules:

1. Everything after `:` is stripped.
2. For the first alias, the slash (`/`) is replaced with double underscores (`__`).
3. For the second alias, the slash (`/`) is replaced with a single dash (`-`).

Using a private service image will strip any port given and apply the rules as
described above. A service `registry.alloy-wp.com:4999/tutum/wordpress` will
result in the hostnames `registry.alloy-wp.com__tutum__wordpress` and
`registry.alloy-wp.com-tutum-wordpress`.

## Configuring services

Many services accept environment variables which allow you to easily change
database names or set account names depending on the environment.

AlloyCI Runner 1.0 and up passes all JSON-defined variables to the created
service containers.

For all possible configuration variables check the documentation of each image
provided on its corresponding Docker Hub page.
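
For example, a sketch that configures the PostgreSQL service through
JSON-defined variables (the variable names come from the official `postgres`
image documentation; the values are illustrative):

```json
{
  "image": "ruby:2.2",
  "services": [
    "postgres:9.4"
  ],
  "variables": {
    "POSTGRES_DB": "my_app_test",
    "POSTGRES_USER": "runner",
    "POSTGRES_PASSWORD": "secret"
  }
}
```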

> **Note**:
> All variables will be passed to all service containers. It's not designed to
> distinguish which variable should go where.
> Secure variables are only passed to the build container.

## Mounting a directory in RAM

You can mount a path in RAM using tmpfs. This can speed up testing when there
is a lot of I/O-related work, such as with databases.
If you use the `tmpfs` and `services_tmpfs` options in the Runner configuration,
you can specify multiple paths, each with its own options. See the
[docker reference](https://docs.docker.com/engine/reference/commandline/run/#mount-tmpfs-tmpfs)
for details.
This is an example `config.toml` to mount the data directory for the official
MySQL container in RAM:

```toml
[runners.docker]
  # For the main container
  [runners.docker.tmpfs]
      "/var/lib/mysql" = "rw,noexec"

  # For services
  [runners.docker.services_tmpfs]
      "/var/lib/mysql" = "rw,noexec"
```

## Build directory in service

AlloyCI Runner mounts a `/builds` directory to all shared services.

See the related issue: https://gitlab.com/gitlab-org/gitlab-ci-multi-runner/issues/1520

### PostgreSQL service example

See the specific documentation for
[using PostgreSQL as a service](https://github.com/AlloyCI/alloy_ci/tree/master/doc/services/postgres.md).

### MySQL service example

See the specific documentation for
[using MySQL as a service](https://github.com/AlloyCI/alloy_ci/tree/master/doc/services/mysql.md).

### The services health check

After the service is started, AlloyCI Runner waits for some time for the service
to become responsive. Currently, the Docker executor tries to open a TCP
connection to the first exposed port of the service container.

## The builds and cache storage

The Docker executor by default stores all builds in
`/builds/<namespace>/<project-name>` and all caches in `/cache` (inside the
container).

You can overwrite the `/builds` and `/cache` directories by defining the
`builds_dir` and `cache_dir` options under the `[[runners]]` section in
`config.toml`. This will change where the data is stored inside the container.

If you modify the `/cache` storage path, you also need to make sure to mark this
directory as persistent by defining it in `volumes = ["/my/cache/"]` under the
`[runners.docker]` section in `config.toml`.
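
A minimal `config.toml` sketch that puts these options together (the paths are
illustrative):

```toml
[[runners]]
  executor = "docker"
  # Move the builds and cache directories inside the container
  builds_dir = "/my/builds"
  cache_dir = "/my/cache"
  [runners.docker]
    # Keep the relocated cache directory persistent between builds
    volumes = ["/my/cache/"]
```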

Read the next section on persistent storage for more information.

## The persistent storage

The Docker executor can provide persistent storage when running the containers.
All directories defined under `volumes =` will be persistent between builds.

The `volumes` directive supports two types of storage (see the sketch after
this list):

1. `<path>` - **the dynamic storage**. The `<path>` is persistent between subsequent
    runs of the same concurrent job for that project. The data is attached to a
    custom cache container: `runner-<short-token>-project-<id>-concurrent-<job-id>-cache-<unique-id>`.
2. `<host-path>:<path>[:<mode>]` - **the host-bound storage**. The `<path>` is
    bound to `<host-path>` on the host system. The optional `<mode>` can specify
    that this storage is read-only or read-write (default).
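
For example, a `config.toml` sketch combining both types (the host path is
illustrative):

```toml
[runners.docker]
  # "/cache" is dynamic storage, kept in a cache container between runs.
  # "/srv/data:/data:ro" binds /srv/data on the host read-only at /data.
  volumes = ["/cache", "/srv/data:/data:ro"]
```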

## The persistent storage for builds

If you make `/builds` **the host-bound storage** (see the sketch after this
list), your builds will be stored in
`/builds/<short-token>/<concurrent-id>/<namespace>/<project-name>`, where:

- `<short-token>` is a shortened version of the Runner's token (first 8 letters)
- `<concurrent-id>` is a unique number, identifying the local job ID on the
  particular Runner in the context of the project
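
A sketch of the corresponding `config.toml` entry (the host path is
illustrative):

```toml
[runners.docker]
  # Bind /builds in the container to a directory on the host
  volumes = ["/srv/alloy-runner/builds:/builds"]
```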

## The privileged mode

The Docker executor supports a number of options that allow fine-tuning of the
build container. One of these options is the [`privileged` mode][privileged].

### Use Docker-in-Docker with privileged mode

The configured `privileged` flag is passed to the build container and all
services, thus allowing you to easily use the Docker-in-Docker approach.

First, configure your Runner (`config.toml`) to run in `privileged` mode:

```toml
[[runners]]
  executor = "docker"
  [runners.docker]
    privileged = true
```

Then, make your build script (`.alloy-ci.json`) use the Docker-in-Docker
container:

```json
{
  "image": "docker:git",
  "services": [
    "docker:dind"
  ],
  "build": {
    "script": [
      "docker build -t my-image .",
      "docker push my-image"
    ]
  }
}
```

## The ENTRYPOINT

The Docker executor doesn't overwrite the [`ENTRYPOINT` of a Docker image][entry].

That means that if your image defines an `ENTRYPOINT` and doesn't allow running
scripts with `CMD`, the image will not work with the Docker executor.

With the use of `ENTRYPOINT` it is possible to create a special Docker image
that would run the build script in a custom environment, or in secure mode.

You may think of creating a Docker image that uses an `ENTRYPOINT` that doesn't
execute the build script, but does execute a predefined set of commands, for
example to build the Docker image from your directory. In that case, you can
run the build container in [privileged mode](#the-privileged-mode), and make
the build environment of the Runner secure.

Consider the following example:

1. Create a new Dockerfile:

    ```dockerfile
    FROM docker:dind
    COPY entrypoint.sh /entrypoint.sh
    ENTRYPOINT ["/bin/sh", "/entrypoint.sh"]
    ```

2. Create a shell script (`entrypoint.sh`) that will be used as the `ENTRYPOINT`:

    ```bash
    #!/bin/sh

    dind docker daemon \
        --host=unix:///var/run/docker.sock \
        --host=tcp://0.0.0.0:2375 \
        --storage-driver=vfs &

    docker build -t "$BUILD_IMAGE" .
    docker push "$BUILD_IMAGE"
    ```

3. Push the image to the Docker registry.

4. Run the Docker executor in `privileged` mode. In `config.toml` define:

    ```toml
    [[runners]]
      executor = "docker"
      [runners.docker]
        privileged = true
    ```

5. In your project use the following `.alloy-ci.json`:

    ```json
    {
      "variables": {
        "BUILD_IMAGE": "my.image"
      },
      "build": {
        "image": "my/docker-build:image",
        "script": [
          "Dummy Script"
        ]
      }
    }
    ```

This is just one of the examples. With this approach the possibilities are
limitless.

## How pull policies work

When using the `docker` or `docker+machine` executors, you can set the
`pull_policy` parameter which defines how the Runner will work when pulling
Docker images (for both the `image` and `services` keywords).

> **Note:**
> If you don't set any value for the `pull_policy` parameter, then the
> Runner will use the `always` pull policy as the default value.
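
The parameter is set in `config.toml`; for example (a sketch, assuming the same
`[runners.docker]` section shown earlier):

```toml
[runners.docker]
  pull_policy = "if-not-present"
```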

Now let's see how these policies work.

### Using the `never` pull policy

The `never` pull policy disables image pulling completely. If you set the
`pull_policy` parameter of a Runner to `never`, then users will be able
to use only the images that have been manually pulled on the Docker host
the Runner runs on.
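
With this policy, images have to be made available up front, for example by
pulling them manually on the Docker host (the image name below is taken from
the earlier private registry example and is illustrative):

```bash
docker pull my.registry.tld:5000/namespace/image:tag
```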

If an image cannot be found locally, then the Runner will fail the build
with an error similar to:

```
Pulling docker image local_image:latest ...
ERROR: Build failed: Error: image local_image:latest not found
```

**When to use this pull policy?**

This pull policy should be used if you want or need to have full
control over which images are used by the Runner's users. It is a good choice
for private Runners that are dedicated to a project where only specific images
can be used (not publicly available on any registries).

**When not to use this pull policy?**

This pull policy will not work properly with most [auto-scaled](../configuration/autoscale.md)
Docker executor use cases. Because of how auto-scaling works, the `never`
pull policy may be usable only when using a pre-defined cloud instance
image for the chosen cloud provider. The image needs to contain the installed
Docker Engine and a local copy of the used images.

### Using the `if-not-present` pull policy

When the `if-not-present` pull policy is used, the Runner will first check
if the image is present locally. If it is, then the local version of the
image will be used. Otherwise, the Runner will try to pull the image.

**When to use this pull policy?**

This pull policy is a good choice if you want to use images pulled from
remote registries but want to reduce the time spent analyzing image layer
differences when using heavy and rarely updated images.
In that case, you will occasionally need to manually remove the image
from the local Docker Engine store to force an update of the image.
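
To force such an update, the stale local copy can be removed manually on the
Docker host (the image name is illustrative):

```bash
docker rmi ruby:2.1
```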

It is also a good choice if you need to use images that are built and
available only locally, but at the same time need to allow pulling images
from remote registries.

**When not to use this pull policy?**

This pull policy should not be used if your builds use images that
are updated frequently and need to be used in their most recent versions.
In such a situation, the network load reduction offered by this policy may
be outweighed by the need to delete local copies of the images very
frequently.

This pull policy should also not be used if your Runner can be used by
different users who should not have access to private images used
by each other. Especially, do not use this pull policy for shared Runners.

To understand why the `if-not-present` pull policy creates security issues
when used with private images, read the
[security considerations documentation][secpull].

### Using the `always` pull policy

The `always` pull policy will ensure that the image is **always** pulled.
When `always` is used, the Runner will try to pull the image even if a local
copy is available. If the image is not found, then the build will
fail with an error similar to:

```
Pulling docker image registry.tld/my/image:latest ...
ERROR: Build failed: Error: image registry.tld/my/image:latest not found
```

**When to use this pull policy?**

This pull policy should be used if your Runner is publicly available
and configured as a shared Runner in your AlloyCI instance. It is the
only pull policy that can be considered secure when the Runner is
used with private images.

This is also a good choice if you want to force users to always use
the newest images.

Also, this will be the best solution for an [auto-scaled](../configuration/autoscale.md)
configuration of the Runner.

**When not to use this pull policy?**

This pull policy will definitely not work if you need to use locally
stored images. In this case, the Runner will skip the local copy of the image
and try to pull it from the remote registry. If the image was built locally
and doesn't exist in any public registry (and especially in the default
Docker registry), the build will fail with:

```
Pulling docker image local_image:latest ...
ERROR: Build failed: Error: image local_image:latest not found
```

## Docker vs Docker-SSH (and Docker+Machine vs Docker-SSH+Machine)

> **Note**:
> Starting with AlloyCI Runner 1.0, both the docker-ssh and docker-ssh+machine
> executors are **deprecated** and will be removed in one of the upcoming
> releases.

We provide support for a special type of Docker executor, namely Docker-SSH
(and the autoscaled version: Docker-SSH+Machine). Docker-SSH uses the same logic
as the Docker executor, but instead of executing the script directly, it uses an
SSH client to connect to the build container.

Docker-SSH then connects to the SSH server that is running inside the container
using its internal IP.

This executor is no longer maintained and will be removed in the near future.

[Docker Fundamentals]: https://docs.docker.com/engine/understanding-docker/
[docker engine]: https://www.docker.com/products/docker-engine
[hub]: https://hub.docker.com/
[linking-containers]: https://docs.docker.com/engine/userguide/networking/default_network/dockerlinks/
[tutum/wordpress]: https://registry.hub.docker.com/u/tutum/wordpress/
[postgres-hub]: https://registry.hub.docker.com/u/library/postgres/
[mysql-hub]: https://registry.hub.docker.com/u/library/mysql/
[runner-priv-reg]: ../configuration/advanced-configuration.md#using-a-private-container-registry
[json]: https://github.com/AlloyCI/alloy_ci/tree/master/doc/json/README.md
[toml]: ../commands/README.md#configuration-file
[alpine linux]: https://alpinelinux.org/
[special-build]: https://gitlab.com/AlloyCI/alloy-runner/tree/master/dockerfiles/build
[privileged]: https://docs.docker.com/engine/reference/run/#runtime-privilege-and-linux-capabilities
[entry]: https://docs.docker.com/engine/reference/run/#entrypoint-default-command-to-execute-at-runtime
[secpull]: ../security/README.md#usage-of-private-docker-images-with-if-not-present-pull-policy