<!--[metadata]>
+++
title = "Best practices for writing Dockerfiles"
description = "Hints, tips and guidelines for writing clean, reliable Dockerfiles"
keywords = ["Examples, Usage, base image, docker, documentation, dockerfile, best practices, hub, official repo"]
[menu.main]
parent = "smn_images"
+++
<![end-metadata]-->

# Best practices for writing Dockerfiles

## Overview

Docker can build images automatically by reading the instructions from a
`Dockerfile`, a text file that contains all the commands, in order, needed to
build a given image. `Dockerfile`s adhere to a specific format and use a
specific set of instructions. You can learn the basics on the
[Dockerfile Reference](https://docs.docker.com/reference/builder/) page. If
you’re new to writing `Dockerfile`s, you should start there.

This document covers the best practices and methods recommended by Docker,
Inc. and the Docker community for creating easy-to-use, effective
`Dockerfile`s. We strongly suggest you follow these recommendations (in fact,
if you’re creating an Official Image, you *must* adhere to these practices).

You can see many of these practices and recommendations in action in the [buildpack-deps `Dockerfile`](https://github.com/docker-library/buildpack-deps/blob/master/jessie/Dockerfile).

> Note: for more detailed explanations of any of the Dockerfile commands
> mentioned here, visit the [Dockerfile Reference](https://docs.docker.com/reference/builder/) page.

## General guidelines and recommendations

### Containers should be ephemeral

The container produced by the image your `Dockerfile` defines should be as
ephemeral as possible. By “ephemeral,” we mean that it can be stopped and
destroyed and a new one built and put in place with an absolute minimum of
set-up and configuration.

### Use a .dockerignore file

In most cases, it's best to put each Dockerfile in an empty directory. Then,
add to that directory only the files needed for building the Dockerfile. To
increase the build's performance, you can exclude files and directories by
adding a `.dockerignore` file to that directory as well. This file supports
exclusion patterns similar to `.gitignore` files. For information on creating one,
see the [.dockerignore file](../../reference/builder/#dockerignore-file).
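
For example, a minimal `.dockerignore` might exclude version-control data and
other files the build does not need (the entries below are only illustrative;
list whatever is irrelevant to your own build):

    .git
    *.log
    tmp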

### Avoid installing unnecessary packages

In order to reduce complexity, dependencies, file sizes, and build times, you
should avoid installing extra or unnecessary packages just because they
might be “nice to have.” For example, you don’t need to include a text editor
in a database image.

### Run only one process per container

In almost all cases, you should only run a single process in a single
container. Decoupling applications into multiple containers makes it much
easier to scale horizontally and reuse containers. If that service depends on
another service, make use of [container linking](https://docs.docker.com/userguide/dockerlinks/).
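
For example (the container and image names here are made up for illustration),
a web application container can be linked to a separate database container at
run time:

    $ docker run -d --name db postgres
    $ docker run -d --name web --link db:db my-web-app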

### Minimize the number of layers

You need to find the balance between readability (and thus long-term
maintainability) of the `Dockerfile` and minimizing the number of layers it
uses. Be strategic and cautious about the number of layers you use.
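
Each `RUN`, `COPY`, and `ADD` instruction adds a layer, so chaining related
commands is one common way to keep the count down. A rough sketch (the package
name is a placeholder):

    # Three layers:
    RUN apt-get update
    RUN apt-get install -y package-foo
    RUN rm -rf /var/lib/apt/lists/*

    # One layer:
    RUN apt-get update && apt-get install -y package-foo \
        && rm -rf /var/lib/apt/lists/*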

### Sort multi-line arguments

Whenever possible, ease later changes by sorting multi-line arguments
alphanumerically. This will help you avoid duplication of packages and make the
list much easier to update. This also makes PRs a lot easier to read and
review. Adding a space before a backslash (`\`) helps as well.

Here’s an example from the [`buildpack-deps` image](https://github.com/docker-library/buildpack-deps):

    RUN apt-get update && apt-get install -y \
      bzr \
      cvs \
      git \
      mercurial \
      subversion

### Build cache

During the process of building an image, Docker will step through the
instructions in your `Dockerfile`, executing each in the order specified.
As each instruction is examined, Docker will look for an existing image in its
cache that it can reuse, rather than creating a new (duplicate) image.
If you do not want to use the cache at all, you can use the `--no-cache=true`
option on the `docker build` command.
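
For example, to force a completely fresh build of an image (the tag here is
just an example):

    $ docker build --no-cache=true -t my-image .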

However, if you do let Docker use its cache, then it is very important to
understand when it will, and will not, find a matching image. The basic rules
that Docker will follow are outlined below:

* Starting with a base image that is already in the cache, the next
instruction is compared against all child images derived from that base
image to see if one of them was built using the exact same instruction. If
not, the cache is invalidated.

* In most cases simply comparing the instruction in the `Dockerfile` with one
of the child images is sufficient. However, certain instructions require
a little more examination and explanation.

* In the case of the `ADD` and `COPY` instructions, the contents of the file(s)
being put into the image are examined. Specifically, a checksum is done
of the file(s) and then that checksum is used during the cache lookup.
If anything has changed in the file(s), including their metadata,
then the cache is invalidated.

* Aside from the `ADD` and `COPY` commands, cache checking will not look at the
files in the container to determine a cache match. For example, when processing
a `RUN apt-get -y update` command, the files updated in the container
will not be examined to determine if a cache hit exists. In that case just
the command string itself will be used to find a match.

Once the cache is invalidated, all subsequent `Dockerfile` commands will
generate new images and the cache will not be used.

## The Dockerfile instructions

Below you'll find recommendations for the best way to write the
various instructions available for use in a `Dockerfile`.

### [`FROM`](https://docs.docker.com/reference/builder/#from)

Whenever possible, use current Official Repositories as the basis for your
image. We recommend the [Debian image](https://registry.hub.docker.com/_/debian/)
since it’s very tightly controlled and kept extremely minimal (currently under
100 MB), while still being a full distribution.
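
A Dockerfile built on that recommendation would start with something like the
following (the tag is only an example; pick whichever Debian release you need):

    FROM debian:jessie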

### [`RUN`](https://docs.docker.com/reference/builder/#run)

As always, to make your `Dockerfile` more readable, understandable, and
maintainable, put long or complex `RUN` statements on multiple lines separated
with backslashes.

Probably the most common use-case for `RUN` is an application of `apt-get`.
When using `apt-get`, here are a few things to keep in mind:

* Don’t do `RUN apt-get update` on a single line. This will cause
caching issues if the referenced archive gets updated, which will make your
subsequent `apt-get install` fail without comment.

* Avoid `RUN apt-get upgrade` or `dist-upgrade`, since many of the “essential”
packages from the base images will fail to upgrade inside an unprivileged
container. If a base package is out of date, you should contact its
maintainers. If you know there’s a particular package, `foo`, that needs to be
updated, use `apt-get install -y foo` and it will update automatically.

* Do write instructions like:

    RUN apt-get update && apt-get install -y package-bar package-foo package-baz

Writing the instruction this way not only makes it easier to read
and maintain, but also, by including `apt-get update`, ensures that the cache
will naturally be busted and the latest versions will be installed with no
further coding or manual intervention required.

* Further natural cache-busting can be realized by version-pinning packages
(e.g., `package-foo=1.3.*`). This will force retrieval of that version
regardless of what’s in the cache.
Writing your `apt-get` code this way will greatly ease maintenance and reduce
failures due to unanticipated changes in required packages.

#### Example

Below is a well-formed `RUN` instruction that demonstrates the above
recommendations. Note that the last package, `s3cmd`, specifies a version
`1.1.0*`. If the image previously used an older version, specifying the new one
will cause a cache bust of `apt-get update` and ensure the installation of
the new version (which in this case had a new, required feature).

    RUN apt-get update && apt-get install -y \
        aufs-tools \
        automake \
        btrfs-tools \
        build-essential \
        curl \
        dpkg-sig \
        git \
        iptables \
        libapparmor-dev \
        libcap-dev \
        libsqlite3-dev \
        lxc=1.0* \
        mercurial \
        parallel \
        reprepro \
        ruby1.9.1 \
        ruby1.9.1-dev \
        s3cmd=1.1.0*

Writing the instruction this way also helps you avoid potential duplication of
a given package because it is much easier to read than an instruction like:

    RUN apt-get install -y package-foo && apt-get install -y package-bar

### [`CMD`](https://docs.docker.com/reference/builder/#cmd)

The `CMD` instruction should be used to run the software contained by your
image, along with any arguments. `CMD` should almost always be used in the
form of `CMD ["executable", "param1", "param2"…]`. Thus, if the image is for a
service (Apache, Rails, etc.), you would run something like
`CMD ["apache2","-DFOREGROUND"]`. Indeed, this form of the instruction is
recommended for any service-based image.

In most other cases, `CMD` should be given an interactive shell (bash, python,
perl, etc), for example, `CMD ["perl", "-de0"]`, `CMD ["python"]`, or
`CMD ["php", "-a"]`. Using this form means that when you execute something like
`docker run -it python`, you’ll get dropped into a usable shell, ready to go.
`CMD` should rarely be used in the manner of `CMD ["param", "param"]` in
conjunction with [`ENTRYPOINT`](https://docs.docker.com/reference/builder/#entrypoint), unless
you and your expected users are already quite familiar with how `ENTRYPOINT`
works.
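
As an illustrative sketch (the base image and tag are assumptions, not taken
from the text above), an image meant to drop the user into a Python shell could
be as simple as:

    FROM python:2.7
    CMD ["python"]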

### [`EXPOSE`](https://docs.docker.com/reference/builder/#expose)

The `EXPOSE` instruction indicates the ports on which a container will listen
for connections. Consequently, you should use the common, traditional port for
your application. For example, an image containing the Apache web server would
use `EXPOSE 80`, while an image containing MongoDB would use `EXPOSE 27017`, and
so on.

For external access, your users can execute `docker run` with a flag indicating
how to map the specified port to the port of their choice.
For container linking, Docker provides environment variables for the path from
the recipient container back to the source (i.e., `MYSQL_PORT_3306_TCP`).
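
For example, if an image's `Dockerfile` contains `EXPOSE 80`, a user can map
that port to a host port of their choosing (the image name below is a
placeholder):

    $ docker run -d -p 8080:80 my-web-app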

### [`ENV`](https://docs.docker.com/reference/builder/#env)

In order to make new software easier to run, you can use `ENV` to update the
`PATH` environment variable for the software your container installs. For
example, `ENV PATH /usr/local/nginx/bin:$PATH` will ensure that `CMD ["nginx"]`
just works.

The `ENV` instruction is also useful for providing required environment
variables specific to services you wish to containerize, such as Postgres’s
`PGDATA`.

Lastly, `ENV` can also be used to set commonly used version numbers so that
version bumps are easier to maintain, as seen in the following example:

    ENV PG_MAJOR 9.3
    ENV PG_VERSION 9.3.4
    RUN curl -SL http://example.com/postgres-$PG_VERSION.tar.xz | tar -xJC /usr/src/postgres && …
    ENV PATH /usr/local/postgres-$PG_MAJOR/bin:$PATH

Similar to having constant variables in a program (as opposed to hard-coding
values), this approach lets you change a single `ENV` instruction to
auto-magically bump the version of the software in your container.

### [`ADD`](https://docs.docker.com/reference/builder/#add) or [`COPY`](https://docs.docker.com/reference/builder/#copy)

Although `ADD` and `COPY` are functionally similar, generally speaking, `COPY`
is preferred. That’s because it’s more transparent than `ADD`. `COPY` only
supports the basic copying of local files into the container, while `ADD` has
some features (like local-only tar extraction and remote URL support) that are
not immediately obvious. Consequently, the best use for `ADD` is local tar file
auto-extraction into the image, as in `ADD rootfs.tar.xz /`.

If you have multiple `Dockerfile` steps that use different files from your
context, `COPY` them individually, rather than all at once. This will ensure that
each step's build cache is only invalidated (forcing the step to be re-run) if the
specifically required files change.

For example:

    COPY requirements.txt /tmp/
    RUN pip install /tmp/requirements.txt
    COPY . /tmp/

This results in fewer cache invalidations for the `RUN` step than if you put the
`COPY . /tmp/` before it.

Because image size matters, using `ADD` to fetch packages from remote URLs is
strongly discouraged; you should use `curl` or `wget` instead. That way you can
delete the files you no longer need after they've been extracted and you won't
have to add another layer in your image. For example, you should avoid doing
things like:

    ADD http://example.com/big.tar.xz /usr/src/things/
    RUN tar -xJf /usr/src/things/big.tar.xz -C /usr/src/things
    RUN make -C /usr/src/things all

And instead, do something like:

    RUN mkdir -p /usr/src/things \
        && curl -SL http://example.com/big.tar.xz \
        | tar -xJC /usr/src/things \
        && make -C /usr/src/things all

For other items (files, directories) that do not require `ADD`’s tar
auto-extraction capability, you should always use `COPY`.

### [`ENTRYPOINT`](https://docs.docker.com/reference/builder/#entrypoint)

The best use for `ENTRYPOINT` is to set the image's main command, allowing that
image to be run as though it was that command (and then use `CMD` as the
default flags).

Let's start with an example of an image for the command line tool `s3cmd`:

    ENTRYPOINT ["s3cmd"]
    CMD ["--help"]

Now the image can be run like this to show the command's help:

    $ docker run s3cmd

Or using the right parameters to execute a command:

    $ docker run s3cmd ls s3://mybucket

This is useful because the image name can double as a reference to the binary as
shown in the command above.

The `ENTRYPOINT` instruction can also be used in combination with a helper
script, allowing it to function in a similar way to the command above, even
when starting the tool may require more than one step.

For example, the [Postgres Official Image](https://registry.hub.docker.com/_/postgres/)
uses the following script as its `ENTRYPOINT`:

```bash
#!/bin/bash
set -e

if [ "$1" = 'postgres' ]; then
    chown -R postgres "$PGDATA"

    if [ -z "$(ls -A "$PGDATA")" ]; then
        gosu postgres initdb
    fi

    exec gosu postgres "$@"
fi

exec "$@"
```

> **Note**:
> This script uses [the `exec` Bash command](http://wiki.bash-hackers.org/commands/builtin/exec)
> so that the final running application becomes the container's PID 1. This allows
> the application to receive any Unix signals sent to the container.
> See the [`ENTRYPOINT`](https://docs.docker.com/reference/builder/#entrypoint)
> help for more details.

The helper script is copied into the container and run via `ENTRYPOINT` on
container start:

    COPY ./docker-entrypoint.sh /
    ENTRYPOINT ["/docker-entrypoint.sh"]

This script allows the user to interact with Postgres in several ways.

It can simply start Postgres:

    $ docker run postgres

Or, it can be used to run Postgres and pass parameters to the server:

    $ docker run postgres postgres --help

Lastly, it could also be used to start a totally different tool, such as Bash:

    $ docker run --rm -it postgres bash

### [`VOLUME`](https://docs.docker.com/reference/builder/#volume)

The `VOLUME` instruction should be used to expose any database storage area,
configuration storage, or files/folders created by your docker container. You
are strongly encouraged to use `VOLUME` for any mutable and/or user-serviceable
parts of your image.
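
For example, an image for a database server might declare its data directory as
a volume; the path below is PostgreSQL's conventional data directory and is used
purely for illustration:

    VOLUME /var/lib/postgresql/data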

### [`USER`](https://docs.docker.com/reference/builder/#user)

If a service can run without privileges, use `USER` to change to a non-root
user. Start by creating the user and group in the `Dockerfile` with something
like `RUN groupadd -r postgres && useradd -r -g postgres postgres`.

> **Note:** Users and groups in an image get a non-deterministic
> UID/GID in that the “next” UID/GID gets assigned regardless of image
> rebuilds. So, if it’s critical, you should assign an explicit UID/GID.
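
If stable IDs matter, a sketch like the following pins them explicitly (the
numeric values here are arbitrary examples):

    RUN groupadd -r -g 999 postgres && useradd -r -u 999 -g postgres postgres
    USER postgres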

You should avoid installing or using `sudo` since it has unpredictable TTY and
signal-forwarding behavior that can cause more problems than it solves. If
you absolutely need functionality similar to `sudo` (e.g., initializing the
daemon as root but running it as non-root), you may be able to use
[“gosu”](https://github.com/tianon/gosu).

Lastly, to reduce layers and complexity, avoid switching `USER` back
and forth frequently.

### [`WORKDIR`](https://docs.docker.com/reference/builder/#workdir)

For clarity and reliability, you should always use absolute paths for your
`WORKDIR`. Also, you should use `WORKDIR` instead of proliferating
instructions like `RUN cd … && do-something`, which are hard to read,
troubleshoot, and maintain.
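
For example (the path and commands are placeholders), prefer the second form
below:

    # Harder to read and maintain:
    RUN cd /usr/src/app && make all

    # Clearer:
    WORKDIR /usr/src/app
    RUN make all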

### [`ONBUILD`](https://docs.docker.com/reference/builder/#onbuild)

An `ONBUILD` command executes after the current `Dockerfile` build completes.
`ONBUILD` executes in any child image derived `FROM` the current image. Think
of the `ONBUILD` command as an instruction the parent `Dockerfile` gives
to the child `Dockerfile`.

A Docker build executes `ONBUILD` commands before any command in a child
`Dockerfile`.

`ONBUILD` is useful for images that are going to be built `FROM` a given
image. For example, you would use `ONBUILD` for a language stack image that
builds arbitrary user software written in that language within the
`Dockerfile`, as you can see in [Ruby’s `ONBUILD` variants](https://github.com/docker-library/ruby/blob/master/2.1/onbuild/Dockerfile).
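
A much-simplified sketch of that pattern (the paths and commands below are
illustrative, not copied from the Ruby image) might look like this in the
parent image's `Dockerfile`:

    WORKDIR /usr/src/app
    ONBUILD COPY . /usr/src/app
    ONBUILD RUN bundle install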

Images built from `ONBUILD` should get a separate tag, for example:
`ruby:1.9-onbuild` or `ruby:2.0-onbuild`.

Be careful when putting `ADD` or `COPY` in `ONBUILD`. The “onbuild” image will
fail catastrophically if the new build's context is missing the resource being
added. Adding a separate tag, as recommended above, will help mitigate this by
allowing the `Dockerfile` author to make a choice.

## Examples for Official Repositories

These Official Repositories have exemplary `Dockerfile`s:

* [Go](https://registry.hub.docker.com/_/golang/)
* [Perl](https://registry.hub.docker.com/_/perl/)
* [Hy](https://registry.hub.docker.com/_/hylang/)
* [Rails](https://registry.hub.docker.com/_/rails/)

## Additional resources

* [Dockerfile Reference](https://docs.docker.com/reference/builder/)
* [More about Base Images](https://docs.docker.com/articles/baseimages/)
* [More about Automated Builds](https://docs.docker.com/docker-hub/builds/)
* [Guidelines for Creating Official Repositories](https://docs.docker.com/docker-hub/official_repos/)