<!--[metadata]>
+++
title = "Best practices for writing Dockerfiles"
description = "Hints, tips and guidelines for writing clean, reliable Dockerfiles"
keywords = ["Examples, Usage, base image, docker, documentation, dockerfile, best practices, hub,  official repo"]
[menu.main]
parent = "smn_images"
+++
<![end-metadata]-->

# Best practices for writing Dockerfiles

## Overview

Docker can build images automatically by reading the instructions from a
`Dockerfile`, a text file that contains all the commands, in order, needed to
build a given image. `Dockerfile`s adhere to a specific format and use a
specific set of instructions. You can learn the basics on the
[Dockerfile Reference](../reference/builder.md) page. If
you’re new to writing `Dockerfile`s, you should start there.

This document covers the best practices and methods recommended by Docker,
Inc. and the Docker community for creating easy-to-use, effective
`Dockerfile`s. We strongly suggest you follow these recommendations (in fact,
if you’re creating an Official Image, you *must* adhere to these practices).

You can see many of these practices and recommendations in action in the [buildpack-deps `Dockerfile`](https://github.com/docker-library/buildpack-deps/blob/master/jessie/Dockerfile).

> Note: for more detailed explanations of any of the Dockerfile commands
> mentioned here, visit the [Dockerfile Reference](../reference/builder.md) page.

## General guidelines and recommendations

### Containers should be ephemeral

The container produced by the image your `Dockerfile` defines should be as
ephemeral as possible. By “ephemeral,” we mean that it can be stopped and
destroyed and a new one built and put in place with an absolute minimum of
set-up and configuration.

### Use a .dockerignore file

In most cases, it's best to put each Dockerfile in an empty directory. Then,
add to that directory only the files needed for building the Dockerfile. To
increase the build's performance, you can exclude files and directories by
adding a `.dockerignore` file to that directory as well. This file supports
exclusion patterns similar to `.gitignore` files. For information on creating one,
see the [.dockerignore file](../reference/builder.md#dockerignore-file).
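
As a quick illustration, a `.dockerignore` along these lines (the entries are
only examples, not a recommended set) keeps version-control metadata and
scratch files out of the build context sent to the daemon:

    # example .dockerignore: exclude VCS metadata, logs, and temp files
    .git
    *.log
    tmp/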

### Avoid installing unnecessary packages

In order to reduce complexity, dependencies, file sizes, and build times, you
should avoid installing extra or unnecessary packages just because they
might be “nice to have.” For example, you don’t need to include a text editor
in a database image.

### Run only one process per container

In almost all cases, you should only run a single process in a single
container. Decoupling applications into multiple containers makes it much
easier to scale horizontally and reuse containers. If that service depends on
another service, make use of [container linking](../userguide/networking/default_network/dockerlinks.md).
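
For example, a database and an application that depends on it can run as two
linked containers rather than as two processes in one container (`my-web-app`
is a hypothetical image name used only for illustration):

    $ docker run -d --name db postgres
    $ docker run -d --name web --link db:db my-web-app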

### Minimize the number of layers

You need to find the balance between readability (and thus long-term
maintainability) of the `Dockerfile` and minimizing the number of layers it
uses. Be strategic and cautious about the number of layers you use.
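
For example, chaining related commands into one `RUN` produces a single layer
where separate instructions would produce several; this is only a sketch of
the trade-off, not a required pattern:

    # three instructions would create three layers:
    #   RUN apt-get update
    #   RUN apt-get install -y curl
    #   RUN rm -rf /var/lib/apt/lists/*
    # one chained instruction creates a single layer:
    RUN apt-get update && apt-get install -y curl \
     && rm -rf /var/lib/apt/lists/*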

### Sort multi-line arguments

Whenever possible, ease later changes by sorting multi-line arguments
alphanumerically. This will help you avoid duplication of packages and make the
list much easier to update. This also makes PRs a lot easier to read and
review. Adding a space before a backslash (`\`) helps as well.

Here’s an example from the [`buildpack-deps` image](https://github.com/docker-library/buildpack-deps):

    RUN apt-get update && apt-get install -y \
      bzr \
      cvs \
      git \
      mercurial \
      subversion

### Build cache

During the process of building an image Docker will step through the
instructions in your `Dockerfile` executing each in the order specified.
As each instruction is examined Docker will look for an existing image in its
cache that it can reuse, rather than creating a new (duplicate) image.
If you do not want to use the cache at all you can use the `--no-cache=true`
option on the `docker build` command.

However, if you do let Docker use its cache then it is very important to
understand when it will, and will not, find a matching image. The basic rules
that Docker will follow are outlined below:

* Starting with a base image that is already in the cache, the next
instruction is compared against all child images derived from that base
image to see if one of them was built using the exact same instruction. If
not, the cache is invalidated.

* In most cases simply comparing the instruction in the `Dockerfile` with one
of the child images is sufficient. However, certain instructions require
a little more examination and explanation.

* For the `ADD` and `COPY` instructions, the contents of the file(s)
in the image are examined and a checksum is calculated for each file.
The last-modified and last-accessed times of the file(s) are not considered in
these checksums. During the cache lookup, the checksum is compared against the
checksum in the existing images. If anything has changed in the file(s), such
as the contents and metadata, then the cache is invalidated.

* Aside from the `ADD` and `COPY` commands, cache checking will not look at the
files in the container to determine a cache match. For example, when processing
a `RUN apt-get -y update` command the files updated in the container
will not be examined to determine if a cache hit exists. In that case just
the command string itself will be used to find a match.

Once the cache is invalidated, all subsequent `Dockerfile` commands will
generate new images and the cache will not be used.
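
As a sketch of how these rules play out (the file and paths here are
hypothetical), editing `app.py` below changes its checksum, so its `COPY` step
and every step after it are rebuilt, while the `apt-get` layer above is still
taken from the cache because its instruction string is unchanged:

    FROM ubuntu:14.04
    # reused from cache as long as this instruction string is unchanged
    RUN apt-get update && apt-get install -y python
    # cache hit depends on the checksum of app.py, not just this line of text
    COPY app.py /opt/app/
    # rebuilt whenever the COPY above invalidates the cache
    RUN chmod +x /opt/app/app.py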

## The Dockerfile instructions

Below you'll find recommendations for the best way to write the
various instructions available for use in a `Dockerfile`.

### FROM

[Dockerfile reference for the FROM instruction](../reference/builder.md#from)

Whenever possible, use current Official Repositories as the basis for your
image. We recommend the [Debian image](https://registry.hub.docker.com/_/debian/)
since it’s very tightly controlled and kept extremely minimal (currently under
100 MB), while still being a full distribution.
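
For instance, a `Dockerfile` built on Debian would typically pin a specific
tag rather than relying on `latest` (the tag shown is just an example):

    # a minimal, well-maintained official base image, pinned to a release
    FROM debian:jessie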

### RUN

[Dockerfile reference for the RUN instruction](../reference/builder.md#run)

As always, to make your `Dockerfile` more readable, understandable, and
maintainable, split long or complex `RUN` statements on multiple lines separated
with backslashes.
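
For example, a multi-step build command is easier to review when each step
sits on its own continuation line (the commands here are placeholders):

    RUN ./configure --prefix=/usr/local \
     && make \
     && make install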

### apt-get

Probably the most common use-case for `RUN` is an application of `apt-get`. The
`RUN apt-get` command, because it installs packages, has several gotchas to look
out for.

You should avoid `RUN apt-get upgrade` or `dist-upgrade`, as many of the
“essential” packages from the base images won't upgrade inside an unprivileged
container. If a package contained in the base image is out-of-date, you should
contact its maintainers.
If you know there’s a particular package, `foo`, that needs to be updated, use
`apt-get install -y foo` to update automatically.

Always combine `RUN apt-get update` with `apt-get install` in the same `RUN`
statement, for example:

        RUN apt-get update && apt-get install -y \
            package-bar \
            package-baz \
            package-foo

Using `apt-get update` alone in a `RUN` statement causes caching issues and
subsequent `apt-get install` instructions to fail.
For example, say you have a Dockerfile:

        FROM ubuntu:14.04
        RUN apt-get update
        RUN apt-get install -y curl

After building the image, all layers are in the Docker cache. Suppose you later
modify `apt-get install` by adding an extra package:

        FROM ubuntu:14.04
        RUN apt-get update
        RUN apt-get install -y curl nginx

Docker sees the initial and modified instructions as identical and reuses the
cache from previous steps. As a result the `apt-get update` is *NOT* executed
because the build uses the cached version. Because the `apt-get update` is not
run, your build can potentially get an outdated version of the `curl` and `nginx`
packages.

Using `RUN apt-get update && apt-get install -y` ensures your Dockerfile
installs the latest package versions with no further coding or manual
intervention. This technique is known as "cache busting". You can also achieve
cache-busting by specifying a package version. This is known as version pinning,
for example:

        RUN apt-get update && apt-get install -y \
            package-bar \
            package-baz \
            package-foo=1.3.*

Version pinning forces the build to retrieve a particular version regardless of
what’s in the cache. This technique can also reduce failures due to unanticipated changes
in required packages.

Below is a well-formed `RUN` instruction that demonstrates all the `apt-get`
recommendations.

    RUN apt-get update && apt-get install -y \
        aufs-tools \
        automake \
        build-essential \
        curl \
        dpkg-sig \
        libcap-dev \
        libsqlite3-dev \
        mercurial \
        reprepro \
        ruby1.9.1 \
        ruby1.9.1-dev \
        s3cmd=1.1.* \
     && rm -rf /var/lib/apt/lists/*

The `s3cmd` line specifies the version `1.1.*`. If the image previously
used an older version, specifying the new one causes a cache bust of `apt-get
update` and ensures the installation of the new version. Listing packages on
each line can also prevent mistakes in package duplication.

In addition, cleaning up the apt cache and removing `/var/lib/apt/lists` helps
keep the image size down. Since the `RUN` statement starts with
`apt-get update`, the package cache will always be refreshed prior to
`apt-get install`.

> **Note**: The official Debian and Ubuntu images [automatically run `apt-get clean`](https://github.com/docker/docker/blob/03e2923e42446dbb830c654d0eec323a0b4ef02a/contrib/mkimage/debootstrap#L82-L105),
> so explicit invocation is not required.

### CMD

[Dockerfile reference for the CMD instruction](../reference/builder.md#cmd)

The `CMD` instruction should be used to run the software contained by your
image, along with any arguments. `CMD` should almost always be used in the
form of `CMD ["executable", "param1", "param2"…]`. Thus, if the image is for a
service (Apache, Rails, etc.), you would run something like
`CMD ["apache2","-DFOREGROUND"]`. Indeed, this form of the instruction is
recommended for any service-based image.

In most other cases, `CMD` should be given an interactive shell (bash, python,
perl, etc), for example, `CMD ["perl", "-de0"]`, `CMD ["python"]`, or
`CMD ["php", "-a"]`. Using this form means that when you execute something like
`docker run -it python`, you’ll get dropped into a usable shell, ready to go.
`CMD` should rarely be used in the manner of `CMD ["param", "param"]` in
conjunction with [`ENTRYPOINT`](../reference/builder.md#entrypoint), unless
you and your expected users are already quite familiar with how `ENTRYPOINT`
works.

### EXPOSE

[Dockerfile reference for the EXPOSE instruction](../reference/builder.md#expose)

The `EXPOSE` instruction indicates the ports on which a container will listen
for connections. Consequently, you should use the common, traditional port for
your application. For example, an image containing the Apache web server would
use `EXPOSE 80`, while an image containing MongoDB would use `EXPOSE 27017` and
so on.

For external access, your users can execute `docker run` with a flag indicating
how to map the specified port to the port of their choice.
For container linking, Docker provides environment variables for the path from
the recipient container back to the source (for example, `MYSQL_PORT_3306_TCP`).
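
For example, an Apache-based image might declare the port in its `Dockerfile`:

    EXPOSE 80

and users then choose the host-side mapping at run time (the image name here
is only a placeholder):

    $ docker run -d -p 8080:80 my-apache-image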

### ENV

[Dockerfile reference for the ENV instruction](../reference/builder.md#env)

In order to make new software easier to run, you can use `ENV` to update the
`PATH` environment variable for the software your container installs. For
example, `ENV PATH /usr/local/nginx/bin:$PATH` will ensure that `CMD ["nginx"]`
just works.

The `ENV` instruction is also useful for providing required environment
variables specific to services you wish to containerize, such as Postgres’s
`PGDATA`.

Lastly, `ENV` can also be used to set commonly used version numbers so that
version bumps are easier to maintain, as seen in the following example:

    ENV PG_MAJOR 9.3
    ENV PG_VERSION 9.3.4
    RUN curl -SL http://example.com/postgres-$PG_VERSION.tar.xz | tar -xJC /usr/src/postgres && …
    ENV PATH /usr/local/postgres-$PG_MAJOR/bin:$PATH

Similar to having constant variables in a program (as opposed to hard-coding
values), this approach lets you change a single `ENV` instruction to
auto-magically bump the version of the software in your container.

### ADD or COPY

[Dockerfile reference for the ADD instruction](../reference/builder.md#add)<br/>
[Dockerfile reference for the COPY instruction](../reference/builder.md#copy)

Although `ADD` and `COPY` are functionally similar, generally speaking, `COPY`
is preferred. That’s because it’s more transparent than `ADD`. `COPY` only
supports the basic copying of local files into the container, while `ADD` has
some features (like local-only tar extraction and remote URL support) that are
not immediately obvious. Consequently, the best use for `ADD` is local tar file
auto-extraction into the image, as in `ADD rootfs.tar.xz /`.

If you have multiple `Dockerfile` steps that use different files from your
context, `COPY` them individually, rather than all at once. This will ensure that
each step's build cache is only invalidated (forcing the step to be re-run) if the
specifically required files change.

For example:

    COPY requirements.txt /tmp/
    RUN pip install --requirement /tmp/requirements.txt
    COPY . /tmp/

This results in fewer cache invalidations for the `RUN` step than if you put the
`COPY . /tmp/` before it.

Because image size matters, using `ADD` to fetch packages from remote URLs is
strongly discouraged; you should use `curl` or `wget` instead. That way you can
delete the files you no longer need after they've been extracted and you won't
have to add another layer in your image. For example, you should avoid doing
things like:

    ADD http://example.com/big.tar.xz /usr/src/things/
    RUN tar -xJf /usr/src/things/big.tar.xz -C /usr/src/things
    RUN make -C /usr/src/things all

And instead, do something like:

    RUN mkdir -p /usr/src/things \
        && curl -SL http://example.com/big.tar.xz \
        | tar -xJC /usr/src/things \
        && make -C /usr/src/things all

For other items (files, directories) that do not require `ADD`’s tar
auto-extraction capability, you should always use `COPY`.

### ENTRYPOINT

[Dockerfile reference for the ENTRYPOINT instruction](../reference/builder.md#entrypoint)

The best use for `ENTRYPOINT` is to set the image's main command, allowing that
image to be run as though it were that command (and then use `CMD` as the
default flags).

Let's start with an example of an image for the command line tool `s3cmd`:

    ENTRYPOINT ["s3cmd"]
    CMD ["--help"]

Now the image can be run like this to show the command's help:

    $ docker run s3cmd

Or using the right parameters to execute a command:

    $ docker run s3cmd ls s3://mybucket

This is useful because the image name can double as a reference to the binary as
shown in the command above.

The `ENTRYPOINT` instruction can also be used in combination with a helper
script, allowing it to function in a similar way to the command above, even
when starting the tool may require more than one step.

For example, the [Postgres Official Image](https://registry.hub.docker.com/_/postgres/)
uses the following script as its `ENTRYPOINT`:

```bash
#!/bin/bash
set -e

if [ "$1" = 'postgres' ]; then
    chown -R postgres "$PGDATA"

    if [ -z "$(ls -A "$PGDATA")" ]; then
        gosu postgres initdb
    fi

    exec gosu postgres "$@"
fi

exec "$@"
```

> **Note**:
> This script uses [the `exec` Bash command](http://wiki.bash-hackers.org/commands/builtin/exec)
> so that the final running application becomes the container's PID 1. This allows
> the application to receive any Unix signals sent to the container.
> See the [`ENTRYPOINT`](../reference/builder.md#entrypoint)
> help for more details.

The helper script is copied into the container and run via `ENTRYPOINT` on
container start:

    COPY ./docker-entrypoint.sh /
    ENTRYPOINT ["/docker-entrypoint.sh"]

This script allows the user to interact with Postgres in several ways.

It can simply start Postgres:

    $ docker run postgres

Or, it can be used to run Postgres and pass parameters to the server:

    $ docker run postgres postgres --help

Lastly, it could also be used to start a totally different tool, such as Bash:

    $ docker run --rm -it postgres bash

### VOLUME

[Dockerfile reference for the VOLUME instruction](../reference/builder.md#volume)

The `VOLUME` instruction should be used to expose any database storage area,
configuration storage, or files/folders created by your Docker container. You
are strongly encouraged to use `VOLUME` for any mutable and/or user-serviceable
parts of your image.
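
For example, a database image along the lines of the Postgres image shown
earlier would mark its data directory as a volume (the path matches that
image's default `PGDATA`):

    VOLUME /var/lib/postgresql/data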

### USER

[Dockerfile reference for the USER instruction](../reference/builder.md#user)

If a service can run without privileges, use `USER` to change to a non-root
user. Start by creating the user and group in the `Dockerfile` with something
like `RUN groupadd -r postgres && useradd -r -g postgres postgres`.

> **Note:** Users and groups in an image are assigned a non-deterministic
> UID/GID: whichever UID/GID is “next” gets assigned, regardless of image
> rebuilds. So, if it’s critical, you should assign an explicit UID/GID.
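
A minimal sketch of that pattern, assuming a fixed UID/GID of 999 is
acceptable for your image:

    RUN groupadd -r -g 999 postgres && useradd -r -u 999 -g postgres postgres
    USER postgres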

You should avoid installing or using `sudo` since it has unpredictable TTY and
signal-forwarding behavior that can cause more problems than it solves. If
you absolutely need functionality similar to `sudo` (e.g., initializing the
daemon as root but running it as non-root), you may be able to use
[“gosu”](https://github.com/tianon/gosu).

Lastly, to reduce layers and complexity, avoid switching `USER` back
and forth frequently.

### WORKDIR

[Dockerfile reference for the WORKDIR instruction](../reference/builder.md#workdir)

For clarity and reliability, you should always use absolute paths for your
`WORKDIR`. Also, you should use `WORKDIR` instead of proliferating
instructions like `RUN cd … && do-something`, which are hard to read,
troubleshoot, and maintain.
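
For example, instead of `RUN cd /usr/src/app && make`, set the working
directory once and let later instructions inherit it (the paths here are
illustrative):

    WORKDIR /usr/src/app
    RUN make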

### ONBUILD

[Dockerfile reference for the ONBUILD instruction](../reference/builder.md#onbuild)

An `ONBUILD` command executes after the current `Dockerfile` build completes.
`ONBUILD` executes in any child image derived `FROM` the current image. Think
of the `ONBUILD` command as an instruction the parent `Dockerfile` gives
to the child `Dockerfile`.

A Docker build executes `ONBUILD` commands before any command in a child
`Dockerfile`.

`ONBUILD` is useful for images that are going to be built `FROM` a given
image. For example, you would use `ONBUILD` for a language stack image that
builds arbitrary user software written in that language within the
`Dockerfile`, as you can see in [Ruby’s `ONBUILD` variants](https://github.com/docker-library/ruby/blob/master/2.1/onbuild/Dockerfile).
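
A sketch of what such a language-stack image might put in its `Dockerfile`
(the paths are illustrative; see the Ruby `onbuild` variant linked above for
the real thing):

    WORKDIR /usr/src/app
    # these run during the child image's build, against the child's build context
    ONBUILD COPY Gemfile /usr/src/app/
    ONBUILD RUN bundle install
    ONBUILD COPY . /usr/src/app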

Images built from `ONBUILD` should get a separate tag, for example:
`ruby:1.9-onbuild` or `ruby:2.0-onbuild`.

Be careful when putting `ADD` or `COPY` in `ONBUILD`. The “onbuild” image will
fail catastrophically if the new build's context is missing the resource being
added. Adding a separate tag, as recommended above, will help mitigate this by
allowing the `Dockerfile` author to make a choice.

## Examples for Official Repositories

These Official Repositories have exemplary `Dockerfile`s:

* [Go](https://registry.hub.docker.com/_/golang/)
* [Perl](https://registry.hub.docker.com/_/perl/)
* [Hy](https://registry.hub.docker.com/_/hylang/)
* [Rails](https://registry.hub.docker.com/_/rails/)

## Additional resources

* [Dockerfile Reference](../reference/builder.md)
* [More about Base Images](baseimages.md)
* [More about Automated Builds](https://docs.docker.com/docker-hub/builds/)
* [Guidelines for Creating Official Repositories](https://docs.docker.com/docker-hub/official_repos/)