page_title: Best practices for writing Dockerfiles
page_description: Hints, tips and guidelines for writing clean, reliable Dockerfiles
page_keywords: Examples, Usage, base image, docker, documentation, dockerfile, best practices, hub, official repo

# Best practices for writing Dockerfiles

## Overview

Docker can build images automatically by reading the instructions from a
`Dockerfile`, a text file that contains all the commands, in order, needed to
build a given image. `Dockerfile`s adhere to a specific format and use a
specific set of instructions. You can learn the basics on the
[Dockerfile Reference](https://docs.docker.com/reference/builder/) page. If
you’re new to writing `Dockerfile`s, you should start there.

This document covers the best practices and methods recommended by Docker,
Inc. and the Docker community for creating easy-to-use, effective
`Dockerfile`s. We strongly suggest you follow these recommendations (in fact,
if you’re creating an Official Image, you *must* adhere to these practices).

You can see many of these practices and recommendations in action in the [buildpack-deps `Dockerfile`](https://github.com/docker-library/buildpack-deps/blob/master/jessie/Dockerfile).

> **Note**: For more detailed explanations of any of the Dockerfile commands
> mentioned here, visit the [Dockerfile Reference](https://docs.docker.com/reference/builder/) page.

## General guidelines and recommendations

### Containers should be ephemeral

The container produced by the image your `Dockerfile` defines should be as
ephemeral as possible. By “ephemeral,” we mean that it can be stopped and
destroyed and a new one built and put in place with an absolute minimum of
set-up and configuration.

### Use a .dockerignore file

In most cases, it's best to put each Dockerfile in an empty directory. Then,
add to that directory only the files needed for building your image. To
increase the build's performance, you can exclude files and directories by
adding a `.dockerignore` file to that directory as well. This file supports
exclusion patterns similar to `.gitignore` files. For information on creating one,
see the [.dockerignore file](../../reference/builder/#dockerignore-file).

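For illustration, here is a minimal `.dockerignore` that keeps version-control
data and local scratch files out of the build context (the entries shown are
hypothetical and depend entirely on your project):

    .git
    *.log
    tmp
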
### Avoid installing unnecessary packages

In order to reduce complexity, dependencies, file sizes, and build times, you
should avoid installing extra or unnecessary packages just because they
might be “nice to have.” For example, you don’t need to include a text editor
in a database image.

### Run only one process per container

In almost all cases, you should only run a single process in a single
container. Decoupling applications into multiple containers makes it much
easier to scale horizontally and reuse containers. If one service depends on
another, make use of [container linking](https://docs.docker.com/userguide/dockerlinks/).

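As a sketch of that pattern, a web application and its database could run as
two linked containers (the image and container names below are made up for
illustration):

    $ docker run -d --name db postgres
    $ docker run -d --name web --link db:db mywebapp

Docker then supplies the `web` container with environment variables and a
hosts entry describing how to reach `db`.
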
### Minimize the number of layers

You need to find the balance between readability (and thus long-term
maintainability) of the `Dockerfile` and minimizing the number of layers it
uses. Be strategic and cautious about the number of layers you use.

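Each instruction in a `Dockerfile` adds a layer, so related commands are often
combined into a single `RUN`. A rough sketch of the trade-off (the package
names are chosen arbitrarily):

    # Three RUN instructions create three layers
    RUN apt-get update
    RUN apt-get install -y curl
    RUN apt-get install -y git

    # One RUN instruction creates a single layer
    RUN apt-get update && apt-get install -y \
        curl \
        git

Keeping `apt-get update` and the corresponding `apt-get install` in one `RUN`
also matters for caching, as described under “Build cache” below.
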
### Sort multi-line arguments

Whenever possible, ease later changes by sorting multi-line arguments
alphanumerically. This will help you avoid duplication of packages and make the
list much easier to update. This also makes PRs a lot easier to read and
review. Adding a space before a backslash (`\`) helps as well.

Here’s an example from the [`buildpack-deps` image](https://github.com/docker-library/buildpack-deps):

    RUN apt-get update && apt-get install -y \
      bzr \
      cvs \
      git \
      mercurial \
      subversion

### Build cache

During the process of building an image, Docker steps through the
instructions in your `Dockerfile`, executing each in the order specified.
As each instruction is examined, Docker looks for an existing image in its
cache that it can reuse, rather than creating a new (duplicate) image.
If you do not want to use the cache at all, you can use the `--no-cache=true`
option on the `docker build` command.

However, if you do let Docker use its cache, then it is very important to
understand when it will, and will not, find a matching image. The basic rules
that Docker follows are outlined below:

* Starting with a base image that is already in the cache, the next
instruction is compared against all child images derived from that base
image to see if one of them was built using the exact same instruction. If
not, the cache is invalidated.

* In most cases simply comparing the instruction in the `Dockerfile` with one
of the child images is sufficient. However, certain instructions require
a little more examination and explanation.

* In the case of the `ADD` and `COPY` instructions, the contents of the file(s)
being put into the image are examined. Specifically, a checksum is done
of the file(s) and then that checksum is used during the cache lookup.
If anything has changed in the file(s), including their metadata,
then the cache is invalidated.

* Aside from the `ADD` and `COPY` commands, cache checking will not look at the
files in the container to determine a cache match. For example, when processing
a `RUN apt-get -y update` command, the files updated in the container
will not be examined to determine if a cache hit exists. In that case, just
the command string itself is used to find a match.

Once the cache is invalidated, all subsequent `Dockerfile` commands will
generate new images and the cache will not be used.

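One practical consequence is to order instructions so that the ones most
likely to change come last. In the hypothetical `Dockerfile` below, editing
the application source only invalidates the final `COPY` step, while the
cached `apt-get` layer is reused:

    FROM debian:jessie
    RUN apt-get update && apt-get install -y nginx
    COPY ./app /usr/share/nginx/html
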
## The Dockerfile instructions

Below you'll find recommendations for the best way to write the
various instructions available for use in a `Dockerfile`.

### [`FROM`](https://docs.docker.com/reference/builder/#from)

Whenever possible, use current Official Repositories as the basis for your
image. We recommend the [Debian image](https://registry.hub.docker.com/_/debian/)
since it’s very tightly controlled and kept extremely minimal (currently under
100 MB), while still being a full distribution.

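For example (the tag here is illustrative; pick whichever current Debian
release suits you):

    FROM debian:jessie
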
### [`RUN`](https://docs.docker.com/reference/builder/#run)

As always, to make your `Dockerfile` more readable, understandable, and
maintainable, put long or complex `RUN` statements on multiple lines separated
with backslashes.

Probably the most common use-case for `RUN` is an application of `apt-get`.
When using `apt-get`, here are a few things to keep in mind:

* Don’t do `RUN apt-get update` alone on a single line. This causes
caching issues if the referenced archive is later updated, which will make your
subsequent `apt-get install` fail without warning.

* Avoid `RUN apt-get upgrade` or `dist-upgrade`, since many of the “essential”
packages from the base images will fail to upgrade inside an unprivileged
container. If a base package is out of date, you should contact its
maintainers. If you know there’s a particular package, `foo`, that needs to be
updated, use `apt-get install -y foo` and it will update automatically.

* Do write instructions like:

    RUN apt-get update && apt-get install -y package-bar package-foo package-baz

Writing the instruction this way not only makes it easier to read
and maintain, but also, by including `apt-get update`, ensures that the cache
will naturally be busted and the latest versions will be installed with no
further coding or manual intervention required.

* Further natural cache-busting can be realized by version-pinning packages
(e.g., `package-foo=1.3.*`). This will force retrieval of that version
regardless of what’s in the cache.
Writing your `apt-get` code this way will greatly ease maintenance and reduce
failures due to unanticipated changes in required packages.

#### Example

Below is a well-formed `RUN` instruction that demonstrates the above
recommendations. Note that the last package, `s3cmd`, specifies a version
`1.1.0*`. If the image previously used an older version, specifying the new one
will cause a cache bust of `apt-get update` and ensure the installation of
the new version (which in this case had a new, required feature).

    RUN apt-get update && apt-get install -y \
        aufs-tools \
        automake \
        btrfs-tools \
        build-essential \
        curl \
        dpkg-sig \
        git \
        iptables \
        libapparmor-dev \
        libcap-dev \
        libsqlite3-dev \
        lxc=1.0* \
        mercurial \
        parallel \
        reprepro \
        ruby1.9.1 \
        ruby1.9.1-dev \
        s3cmd=1.1.0*

Writing the instruction this way also helps you avoid potential duplication of
a given package because it is much easier to read than an instruction like:

    RUN apt-get install -y package-foo && apt-get install -y package-bar

### [`CMD`](https://docs.docker.com/reference/builder/#cmd)

The `CMD` instruction should be used to run the software contained by your
image, along with any arguments. `CMD` should almost always be used in the
form of `CMD ["executable", "param1", "param2"…]`. Thus, if the image is for a
service (Apache, Rails, etc.), you would run something like
`CMD ["apache2","-DFOREGROUND"]`. Indeed, this form of the instruction is
recommended for any service-based image.

In most other cases, `CMD` should be given an interactive shell (bash, python,
perl, etc), for example, `CMD ["perl", "-de0"]`, `CMD ["python"]`, or
`CMD ["php", "-a"]`. Using this form means that when you execute something like
`docker run -it python`, you’ll get dropped into a usable shell, ready to go.
`CMD` should rarely be used in the manner of `CMD ["param", "param"]` in
conjunction with [`ENTRYPOINT`](https://docs.docker.com/reference/builder/#entrypoint), unless
you and your expected users are already quite familiar with how `ENTRYPOINT`
works.

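As a sketch, a service image and an interpreter image might end with,
respectively:

    # Service image: run Apache in the foreground
    CMD ["apache2", "-DFOREGROUND"]

    # Interpreter image: drop the user into a Python shell
    CMD ["python"]
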
### [`EXPOSE`](https://docs.docker.com/reference/builder/#expose)

The `EXPOSE` instruction indicates the ports on which a container will listen
for connections. Consequently, you should use the common, traditional port for
your application. For example, an image containing the Apache web server would
use `EXPOSE 80`, while an image containing MongoDB would use `EXPOSE 27017` and
so on.

For external access, your users can execute `docker run` with a flag indicating
how to map the specified port to the port of their choice.
For container linking, Docker provides environment variables for the path from
the recipient container back to the source (i.e., `MYSQL_PORT_3306_TCP`).

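For instance, an image built with `EXPOSE 80` can be published on any host
port at run time; the host port `8080` and the image name below are arbitrary:

    $ docker run -d -p 8080:80 mywebapp
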
### [`ENV`](https://docs.docker.com/reference/builder/#env)

In order to make new software easier to run, you can use `ENV` to update the
`PATH` environment variable for the software your container installs. For
example, `ENV PATH /usr/local/nginx/bin:$PATH` will ensure that `CMD ["nginx"]`
just works.

The `ENV` instruction is also useful for providing required environment
variables specific to services you wish to containerize, such as Postgres’s
`PGDATA`.

Lastly, `ENV` can also be used to set commonly used version numbers so that
version bumps are easier to maintain, as seen in the following example:

    ENV PG_MAJOR 9.3
    ENV PG_VERSION 9.3.4
    RUN curl -SL http://example.com/postgres-$PG_VERSION.tar.xz | tar -xJC /usr/src/postgres && …
    ENV PATH /usr/local/postgres-$PG_MAJOR/bin:$PATH

Similar to having constant variables in a program (as opposed to hard-coding
values), this approach lets you change a single `ENV` instruction to
auto-magically bump the version of the software in your container.

### [`ADD`](https://docs.docker.com/reference/builder/#add) or [`COPY`](https://docs.docker.com/reference/builder/#copy)

Although `ADD` and `COPY` are functionally similar, generally speaking, `COPY`
is preferred. That’s because it’s more transparent than `ADD`. `COPY` only
supports the basic copying of local files into the container, while `ADD` has
some features (like local-only tar extraction and remote URL support) that are
not immediately obvious. Consequently, the best use for `ADD` is local tar file
auto-extraction into the image, as in `ADD rootfs.tar.xz /`.

If you have multiple `Dockerfile` steps that use different files from your
context, `COPY` them individually, rather than all at once. This will ensure that
each step's build cache is only invalidated (forcing the step to be re-run) if the
specifically required files change.

For example:

    COPY requirements.txt /tmp/
    RUN pip install -r /tmp/requirements.txt
    COPY . /tmp/

Results in fewer cache invalidations for the `RUN` step than if you put the
`COPY . /tmp/` before it.

Because image size matters, using `ADD` to fetch packages from remote URLs is
strongly discouraged; you should use `curl` or `wget` instead. That way you can
delete the files you no longer need after they've been extracted and you won't
have to add another layer in your image. For example, you should avoid doing
things like:

    ADD http://example.com/big.tar.xz /usr/src/things/
    RUN tar -xJf /usr/src/things/big.tar.xz -C /usr/src/things
    RUN make -C /usr/src/things all

And instead, do something like:

    RUN mkdir -p /usr/src/things \
        && curl -SL http://example.com/big.tar.xz \
        | tar -xJC /usr/src/things \
        && make -C /usr/src/things all

For other items (files, directories) that do not require `ADD`’s tar
auto-extraction capability, you should always use `COPY`.

### [`ENTRYPOINT`](https://docs.docker.com/reference/builder/#entrypoint)

The best use for `ENTRYPOINT` is to set the image's main command, allowing that
image to be run as though it was that command (and then use `CMD` as the
default flags).

Let's start with an example of an image for the command line tool `s3cmd`:

    ENTRYPOINT ["s3cmd"]
    CMD ["--help"]

Now the image can be run like this to show the command's help:

    $ docker run s3cmd

Or using the right parameters to execute a command:

    $ docker run s3cmd ls s3://mybucket

This is useful because the image name can double as a reference to the binary as
shown in the command above.

The `ENTRYPOINT` instruction can also be used in combination with a helper
script, allowing it to function in a similar way to the command above, even
when starting the tool may require more than one step.

For example, the [Postgres Official Image](https://registry.hub.docker.com/_/postgres/)
uses the following script as its `ENTRYPOINT`:

```bash
#!/bin/bash
set -e

if [ "$1" = 'postgres' ]; then
    chown -R postgres "$PGDATA"

    if [ -z "$(ls -A "$PGDATA")" ]; then
        gosu postgres initdb
    fi

    exec gosu postgres "$@"
fi

exec "$@"
```

> **Note**:
> This script uses [the `exec` Bash command](http://wiki.bash-hackers.org/commands/builtin/exec)
> so that the final running application becomes the container's PID 1. This allows
> the application to receive any Unix signals sent to the container.
> See the [`ENTRYPOINT`](https://docs.docker.com/reference/builder/#entrypoint)
> help for more details.

The helper script is copied into the container and run via `ENTRYPOINT` on
container start:

    COPY ./docker-entrypoint.sh /
    ENTRYPOINT ["/docker-entrypoint.sh"]

This script allows the user to interact with Postgres in several ways.

It can simply start Postgres:

    $ docker run postgres

Or, it can be used to run Postgres and pass parameters to the server:

    $ docker run postgres postgres --help

Lastly, it could also be used to start a totally different tool, such as Bash:

    $ docker run --rm -it postgres bash

### [`VOLUME`](https://docs.docker.com/reference/builder/#volume)

The `VOLUME` instruction should be used to expose any database storage area,
configuration storage, or files/folders created by your Docker container. You
are strongly encouraged to use `VOLUME` for any mutable and/or user-serviceable
parts of your image.

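For example, a database image might declare its data directory as a volume
(the path below follows the Postgres image's convention and is used here
purely for illustration):

    VOLUME /var/lib/postgresql/data
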
### [`USER`](https://docs.docker.com/reference/builder/#user)

If a service can run without privileges, use `USER` to change to a non-root
user. Start by creating the user and group in the `Dockerfile` with something
like `RUN groupadd -r postgres && useradd -r -g postgres postgres`.

> **Note:** Users and groups in an image are assigned a non-deterministic
> UID/GID: the “next” available UID/GID is used regardless of image
> rebuilds. So, if it’s critical, you should assign an explicit UID/GID.

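If fixed IDs matter, the numeric values can be passed explicitly when creating
the group and user (the IDs below are arbitrary examples):

    RUN groupadd -r postgres --gid=999 \
        && useradd -r -g postgres --uid=999 postgres
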
You should avoid installing or using `sudo` since it has unpredictable TTY and
signal-forwarding behavior that can cause more problems than it solves. If
you absolutely need functionality similar to `sudo` (e.g., initializing the
daemon as root but running it as non-root), you may be able to use
[“gosu”](https://github.com/tianon/gosu).

Lastly, to reduce layers and complexity, avoid switching `USER` back
and forth frequently.

### [`WORKDIR`](https://docs.docker.com/reference/builder/#workdir)

For clarity and reliability, you should always use absolute paths for your
`WORKDIR`. Also, you should use `WORKDIR` instead of proliferating
instructions like `RUN cd … && do-something`, which are hard to read,
troubleshoot, and maintain.

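As a sketch (the path here is hypothetical), instead of:

    RUN cd /usr/src/app && make

prefer:

    WORKDIR /usr/src/app
    RUN make
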
### [`ONBUILD`](https://docs.docker.com/reference/builder/#onbuild)

An `ONBUILD` command executes after the current `Dockerfile` build completes.
`ONBUILD` executes in any child image derived `FROM` the current image. Think
of the `ONBUILD` command as an instruction the parent `Dockerfile` gives
to the child `Dockerfile`.

A Docker build executes `ONBUILD` commands before any command in a child
`Dockerfile`.

`ONBUILD` is useful for images that are going to be built `FROM` a given
image. For example, you would use `ONBUILD` for a language stack image that
builds arbitrary user software written in that language within the
`Dockerfile`, as you can see in [Ruby’s `ONBUILD` variants](https://github.com/docker-library/ruby/blob/master/2.1/onbuild/Dockerfile).

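A sketch of that pattern, loosely modeled on the Ruby `onbuild` image (the
paths and commands are illustrative, not the image's exact contents):

    WORKDIR /usr/src/app
    ONBUILD COPY . /usr/src/app
    ONBUILD RUN bundle install
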
Images built from `ONBUILD` should get a separate tag, for example:
`ruby:1.9-onbuild` or `ruby:2.0-onbuild`.

Be careful when putting `ADD` or `COPY` in `ONBUILD`. The “onbuild” image will
fail catastrophically if the new build's context is missing the resource being
added. Adding a separate tag, as recommended above, will help mitigate this by
allowing the `Dockerfile` author to make a choice.

## Examples for Official Repositories

These Official Repositories have exemplary `Dockerfile`s:

* [Go](https://registry.hub.docker.com/_/golang/)
* [Perl](https://registry.hub.docker.com/_/perl/)
* [Hy](https://registry.hub.docker.com/_/hylang/)
* [Rails](https://registry.hub.docker.com/_/rails/)

## Additional resources

* [Dockerfile Reference](https://docs.docker.com/reference/builder/)
* [More about Base Images](https://docs.docker.com/articles/baseimages/)
* [More about Automated Builds](https://docs.docker.com/docker-hub/builds/)
* [Guidelines for Creating Official Repositories](https://docs.docker.com/docker-hub/official_repos/)