<!--[metadata]>
+++
title = "Best practices for writing Dockerfiles"
description = "Hints, tips and guidelines for writing clean, reliable Dockerfiles"
keywords = ["Examples, Usage, base image, docker, documentation, dockerfile, best practices, hub, official repo"]
[menu.main]
parent = "smn_images"
+++
<![end-metadata]-->

# Best practices for writing Dockerfiles

## Overview

Docker can build images automatically by reading the instructions from a
`Dockerfile`, a text file that contains all the commands, in order, needed to
build a given image. `Dockerfile`s adhere to a specific format and use a
specific set of instructions. You can learn the basics on the
[Dockerfile Reference](../reference/builder.md) page. If
you’re new to writing `Dockerfile`s, you should start there.

This document covers the best practices and methods recommended by Docker,
Inc. and the Docker community for creating easy-to-use, effective
`Dockerfile`s. We strongly suggest you follow these recommendations (in fact,
if you’re creating an Official Image, you *must* adhere to these practices).

You can see many of these practices and recommendations in action in the
[buildpack-deps `Dockerfile`](https://github.com/docker-library/buildpack-deps/blob/master/jessie/Dockerfile).

> **Note**: For more detailed explanations of any of the Dockerfile commands
> mentioned here, visit the [Dockerfile Reference](../reference/builder.md) page.

## General guidelines and recommendations

### Containers should be ephemeral

The container produced by the image your `Dockerfile` defines should be as
ephemeral as possible. By “ephemeral,” we mean that it can be stopped and
destroyed and a new one built and put in place with an absolute minimum of
set-up and configuration.

### Use a .dockerignore file

In most cases, it's best to put each Dockerfile in an empty directory. Then,
add to that directory only the files needed for building the Dockerfile. To
increase the build's performance, you can exclude files and directories by
adding a `.dockerignore` file to that directory as well. This file supports
exclusion patterns similar to `.gitignore` files. For information on creating
one, see the [.dockerignore file](../reference/builder.md#dockerignore-file).
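
For example, a `.dockerignore` file that keeps version-control metadata and
local build output out of the build context might contain entries like these
(the patterns are only illustrative):

    # exclude VCS metadata, logs and temporary build output
    .git
    *.log
    tmp/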

### Avoid installing unnecessary packages

In order to reduce complexity, dependencies, file sizes, and build times, you
should avoid installing extra or unnecessary packages just because they
might be “nice to have.” For example, you don’t need to include a text editor
in a database image.

### Run only one process per container

In almost all cases, you should only run a single process in a single
container. Decoupling applications into multiple containers makes it much
easier to scale horizontally and reuse containers. If one service depends on
another service, make use of [container linking](../userguide/dockerlinks.md).
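
For example, a web application container can be linked to a separate database
container at run time (the `my-web-app` image name below is only a
placeholder):

    $ docker run -d --name db postgres
    $ docker run -d --name web --link db:db my-web-app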

### Minimize the number of layers

You need to find the balance between readability (and thus long-term
maintainability) of the `Dockerfile` and minimizing the number of layers it
uses. Be strategic and cautious about the number of layers you use.

### Sort multi-line arguments

Whenever possible, ease later changes by sorting multi-line arguments
alphanumerically. This will help you avoid duplication of packages and make the
list much easier to update. This also makes PRs a lot easier to read and
review. Adding a space before a backslash (`\`) helps as well.

Here’s an example from the [`buildpack-deps` image](https://github.com/docker-library/buildpack-deps):

    RUN apt-get update && apt-get install -y \
      bzr \
      cvs \
      git \
      mercurial \
      subversion

### Build cache

During the process of building an image, Docker steps through the
instructions in your `Dockerfile`, executing each in the order specified.
As each instruction is examined, Docker looks for an existing image in its
cache that it can reuse, rather than creating a new (duplicate) image.
If you do not want to use the cache at all, you can use the `--no-cache=true`
option on the `docker build` command.
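
For example (the `myimage` tag is only a placeholder):

    $ docker build --no-cache=true -t myimage .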

However, if you do let Docker use its cache, then it is very important to
understand when it will, and will not, find a matching image. The basic rules
that Docker follows are outlined below:

* Starting with a base image that is already in the cache, the next
instruction is compared against all child images derived from that base
image to see if one of them was built using the exact same instruction. If
not, the cache is invalidated.

* In most cases, simply comparing the instruction in the `Dockerfile` with one
of the child images is sufficient. However, certain instructions require
a little more examination and explanation.

* For the `ADD` and `COPY` instructions, the contents of the file(s)
in the image are examined and a checksum is calculated for each file.
The last-modified and last-accessed times of the file(s) are not considered in
these checksums. During the cache lookup, the checksum is compared against the
checksum in the existing images. If anything has changed in the file(s), such
as the contents and metadata, then the cache is invalidated.

* Aside from the `ADD` and `COPY` commands, cache checking will not look at the
files in the container to determine a cache match. For example, when processing
a `RUN apt-get -y update` command, the files updated in the container
will not be examined to determine if a cache hit exists. In that case, just
the command string itself is used to find a match.

Once the cache is invalidated, all subsequent `Dockerfile` commands will
generate new images and the cache will not be used.

## The Dockerfile instructions

Below you'll find recommendations for the best way to write the
various instructions available for use in a `Dockerfile`.

### FROM

[Dockerfile reference for the FROM instruction](../reference/builder.md#from)

Whenever possible, use current Official Repositories as the basis for your
image. We recommend the [Debian image](https://registry.hub.docker.com/_/debian/)
since it’s very tightly controlled and kept extremely minimal (currently under
100 MB), while still being a full distribution.
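
For example, a `Dockerfile` based on Debian can start like this (`jessie` is
just one of the tags available for the Debian image):

    # start from a small, well-maintained base image
    FROM debian:jessie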

### RUN

[Dockerfile reference for the RUN instruction](../reference/builder.md#run)

As always, to make your `Dockerfile` more readable, understandable, and
maintainable, split long or complex `RUN` statements on multiple lines separated
with backslashes.
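
For example, a build-from-source step is easier to read and review when each
command sits on its own line (the commands in this sketch are only
illustrative):

    RUN ./configure --prefix=/usr/local \
     && make \
     && make install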

### apt-get

Probably the most common use-case for `RUN` is an application of `apt-get`. The
`RUN apt-get` command, because it installs packages, has several gotchas to look
out for.

You should avoid `RUN apt-get upgrade` or `dist-upgrade`, as many of the
“essential” packages from the base images won't upgrade inside an unprivileged
container. If a package contained in the base image is out-of-date, you should
contact its maintainers.
If you know there’s a particular package, `foo`, that needs to be updated, use
`apt-get install -y foo` to update automatically.

Always combine `RUN apt-get update` with `apt-get install` in the same `RUN`
statement, for example:

        RUN apt-get update && apt-get install -y \
            package-bar \
            package-baz \
            package-foo


Using `apt-get update` alone in a `RUN` statement causes caching issues and
subsequent `apt-get install` instructions to fail.
For example, say you have a Dockerfile:

        FROM ubuntu:14.04
        RUN apt-get update
        RUN apt-get install -y curl

After building the image, all layers are in the Docker cache. Suppose you later
modify `apt-get install` by adding an extra package:

        FROM ubuntu:14.04
        RUN apt-get update
        RUN apt-get install -y curl nginx

Docker sees the initial and modified instructions as identical and reuses the
cache from previous steps. As a result, the `apt-get update` is *NOT* executed
because the build uses the cached version. Because the `apt-get update` is not
run, your build can potentially get an outdated version of the `curl` and `nginx`
packages.

Using `RUN apt-get update && apt-get install -y` ensures your Dockerfile
installs the latest package versions with no further coding or manual
intervention. This technique is known as "cache busting". You can also achieve
cache-busting by specifying a package version. This is known as version pinning,
for example:

        RUN apt-get update && apt-get install -y \
            package-bar \
            package-baz \
            package-foo=1.3.*

Version pinning forces the build to retrieve a particular version regardless of
what’s in the cache. This technique can also reduce failures due to
unanticipated changes in required packages.

Below is a well-formed `RUN` instruction that demonstrates all the `apt-get`
recommendations.

    RUN apt-get update && apt-get install -y \
        aufs-tools \
        automake \
        build-essential \
        curl \
        dpkg-sig \
        libcap-dev \
        libsqlite3-dev \
        lxc=1.0* \
        mercurial \
        reprepro \
        ruby1.9.1 \
        ruby1.9.1-dev \
        s3cmd=1.1.* \
     && apt-get clean \
     && rm -rf /var/lib/apt/lists/*

The `s3cmd` argument specifies the version `1.1.*`. If the image previously
used an older version, specifying the new one causes a cache bust of `apt-get
update` and ensures the installation of the new version. Listing packages on
each line can also prevent mistakes in package duplication.

In addition, cleaning up the apt cache and removing `/var/lib/apt/lists` helps
keep the image size down. Since the `RUN` statement starts with
`apt-get update`, the package cache will always be refreshed prior to
`apt-get install`.

### CMD

[Dockerfile reference for the CMD instruction](../reference/builder.md#cmd)

The `CMD` instruction should be used to run the software contained by your
image, along with any arguments. `CMD` should almost always be used in the
form of `CMD ["executable", "param1", "param2"…]`. Thus, if the image is for a
service (Apache, Rails, etc.), you would run something like
`CMD ["apache2","-DFOREGROUND"]`. Indeed, this form of the instruction is
recommended for any service-based image.

In most other cases, `CMD` should be given an interactive shell (bash, python,
perl, etc), for example, `CMD ["perl", "-de0"]`, `CMD ["python"]`, or
`CMD ["php", "-a"]`. Using this form means that when you execute something like
`docker run -it python`, you’ll get dropped into a usable shell, ready to go.
`CMD` should rarely be used in the manner of `CMD ["param", "param"]` in
conjunction with [`ENTRYPOINT`](../reference/builder.md#entrypoint), unless
you and your expected users are already quite familiar with how `ENTRYPOINT`
works.
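
For example, a minimal image whose purpose is to drop the user into the Python
interpreter might end like this (the fragment is only a sketch):

    FROM python:2.7
    # no ENTRYPOINT here, so `docker run -it <image>` lands straight in the REPL
    CMD ["python"]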

### EXPOSE

[Dockerfile reference for the EXPOSE instruction](../reference/builder.md#expose)

The `EXPOSE` instruction indicates the ports on which a container will listen
for connections. Consequently, you should use the common, traditional port for
your application. For example, an image containing the Apache web server would
use `EXPOSE 80`, while an image containing MongoDB would use `EXPOSE 27017` and
so on.

For external access, your users can execute `docker run` with a flag indicating
how to map the specified port to the port of their choice.
For container linking, Docker provides environment variables for the path from
the recipient container back to the source (i.e., `MYSQL_PORT_3306_TCP`).
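
For example, if an image declares `EXPOSE 80`, a user can publish that port on
port 8080 of the host like this (the image name is only a placeholder):

    $ docker run -d -p 8080:80 my-web-app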

### ENV

[Dockerfile reference for the ENV instruction](../reference/builder.md#env)

In order to make new software easier to run, you can use `ENV` to update the
`PATH` environment variable for the software your container installs. For
example, `ENV PATH /usr/local/nginx/bin:$PATH` will ensure that `CMD ["nginx"]`
just works.

The `ENV` instruction is also useful for providing required environment
variables specific to services you wish to containerize, such as Postgres’s
`PGDATA`.

Lastly, `ENV` can also be used to set commonly used version numbers so that
version bumps are easier to maintain, as seen in the following example:

    ENV PG_MAJOR 9.3
    ENV PG_VERSION 9.3.4
    RUN curl -SL http://example.com/postgres-$PG_VERSION.tar.xz | tar -xJC /usr/src/postgres && …
    ENV PATH /usr/local/postgres-$PG_MAJOR/bin:$PATH

Similar to having constant variables in a program (as opposed to hard-coding
values), this approach lets you change a single `ENV` instruction to
auto-magically bump the version of the software in your container.

### ADD or COPY

[Dockerfile reference for the ADD instruction](../reference/builder.md#add)<br/>
[Dockerfile reference for the COPY instruction](../reference/builder.md#copy)

Although `ADD` and `COPY` are functionally similar, generally speaking, `COPY`
is preferred. That’s because it’s more transparent than `ADD`. `COPY` only
supports the basic copying of local files into the container, while `ADD` has
some features (like local-only tar extraction and remote URL support) that are
not immediately obvious. Consequently, the best use for `ADD` is local tar file
auto-extraction into the image, as in `ADD rootfs.tar.xz /`.

If you have multiple `Dockerfile` steps that use different files from your
context, `COPY` them individually, rather than all at once. This will ensure that
each step's build cache is only invalidated (forcing the step to be re-run) if the
specifically required files change.

For example:

    COPY requirements.txt /tmp/
    RUN pip install -r /tmp/requirements.txt
    COPY . /tmp/

This results in fewer cache invalidations for the `RUN` step than if you put
the `COPY . /tmp/` before it.

Because image size matters, using `ADD` to fetch packages from remote URLs is
strongly discouraged; you should use `curl` or `wget` instead. That way you can
delete the files you no longer need after they've been extracted and you won't
have to add another layer in your image. For example, you should avoid doing
things like:

    ADD http://example.com/big.tar.xz /usr/src/things/
    RUN tar -xJf /usr/src/things/big.tar.xz -C /usr/src/things
    RUN make -C /usr/src/things all

And instead, do something like:

    RUN mkdir -p /usr/src/things \
        && curl -SL http://example.com/big.tar.xz \
        | tar -xJC /usr/src/things \
        && make -C /usr/src/things all

For other items (files, directories) that do not require `ADD`’s tar
auto-extraction capability, you should always use `COPY`.

### ENTRYPOINT

[Dockerfile reference for the ENTRYPOINT instruction](../reference/builder.md#entrypoint)

The best use for `ENTRYPOINT` is to set the image's main command, allowing that
image to be run as though it was that command (and then use `CMD` as the
default flags).

Let's start with an example of an image for the command line tool `s3cmd`:

    ENTRYPOINT ["s3cmd"]
    CMD ["--help"]

Now the image can be run like this to show the command's help:

    $ docker run s3cmd

Or using the right parameters to execute a command:

    $ docker run s3cmd ls s3://mybucket

This is useful because the image name can double as a reference to the binary as
shown in the command above.

The `ENTRYPOINT` instruction can also be used in combination with a helper
script, allowing it to function in a similar way to the command above, even
when starting the tool may require more than one step.

For example, the [Postgres Official Image](https://registry.hub.docker.com/_/postgres/)
uses the following script as its `ENTRYPOINT`:

```bash
#!/bin/bash
set -e

if [ "$1" = 'postgres' ]; then
    chown -R postgres "$PGDATA"

    if [ -z "$(ls -A "$PGDATA")" ]; then
        gosu postgres initdb
    fi

    exec gosu postgres "$@"
fi

exec "$@"
```

> **Note**:
> This script uses [the `exec` Bash command](http://wiki.bash-hackers.org/commands/builtin/exec)
> so that the final running application becomes the container's PID 1. This allows
> the application to receive any Unix signals sent to the container.
> See the [`ENTRYPOINT`](../reference/builder.md#entrypoint)
> help for more details.

The helper script is copied into the container and run via `ENTRYPOINT` on
container start:

    COPY ./docker-entrypoint.sh /
    ENTRYPOINT ["/docker-entrypoint.sh"]

This script allows the user to interact with Postgres in several ways.

It can simply start Postgres:

    $ docker run postgres

Or, it can be used to run Postgres and pass parameters to the server:

    $ docker run postgres postgres --help

Lastly, it could also be used to start a totally different tool, such as Bash:

    $ docker run --rm -it postgres bash

### VOLUME

[Dockerfile reference for the VOLUME instruction](../reference/builder.md#volume)

The `VOLUME` instruction should be used to expose any database storage area,
configuration storage, or files/folders created by your Docker container. You
are strongly encouraged to use `VOLUME` for any mutable and/or user-serviceable
parts of your image.
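
For example, a database image might declare its data directory as a volume
(the path follows the Postgres example used earlier and is only illustrative):

    # mark the data directory as an externally mountable volume
    VOLUME /var/lib/postgresql/data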

### USER

[Dockerfile reference for the USER instruction](../reference/builder.md#user)

If a service can run without privileges, use `USER` to change to a non-root
user. Start by creating the user and group in the `Dockerfile` with something
like `RUN groupadd -r postgres && useradd -r -g postgres postgres`.

> **Note:** Users and groups in an image get a non-deterministic
> UID/GID in that the “next” UID/GID gets assigned regardless of image
> rebuilds. So, if it’s critical, you should assign an explicit UID/GID.
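
For example, here is a sketch that creates a service account with an explicit
UID/GID and then switches to it (the name and the `999` IDs are only
illustrative):

    RUN groupadd -r -g 999 postgres \
     && useradd -r -g postgres -u 999 postgres
    # run everything from here on as the unprivileged user
    USER postgres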

You should avoid installing or using `sudo` since it has unpredictable TTY and
signal-forwarding behavior that can cause more problems than it solves. If
you absolutely need functionality similar to `sudo` (e.g., initializing the
daemon as root but running it as non-root), you may be able to use
[“gosu”](https://github.com/tianon/gosu).

Lastly, to reduce layers and complexity, avoid switching `USER` back
and forth frequently.

### WORKDIR

[Dockerfile reference for the WORKDIR instruction](../reference/builder.md#workdir)

For clarity and reliability, you should always use absolute paths for your
`WORKDIR`. Also, you should use `WORKDIR` instead of proliferating
instructions like `RUN cd … && do-something`, which are hard to read,
troubleshoot, and maintain.
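
For example, instead of `RUN cd /usr/src/app && make all`, prefer something
like this (the path and command are only illustrative):

    WORKDIR /usr/src/app
    RUN make all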

### ONBUILD

[Dockerfile reference for the ONBUILD instruction](../reference/builder.md#onbuild)

An `ONBUILD` command executes after the current `Dockerfile` build completes.
`ONBUILD` executes in any child image derived `FROM` the current image. Think
of the `ONBUILD` command as an instruction the parent `Dockerfile` gives
to the child `Dockerfile`.

A Docker build executes `ONBUILD` commands before any command in a child
`Dockerfile`.

`ONBUILD` is useful for images that are going to be built `FROM` a given
image. For example, you would use `ONBUILD` for a language stack image that
builds arbitrary user software written in that language within the
`Dockerfile`, as you can see in [Ruby’s `ONBUILD` variants](https://github.com/docker-library/ruby/blob/master/2.1/onbuild/Dockerfile).
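
For example, a language stack image might register triggers along these lines,
so that a downstream `Dockerfile` only needs a `FROM` line (this fragment is a
simplified sketch, not the exact contents of the Ruby image):

    # executed later, in the context of the child build
    ONBUILD COPY . /usr/src/app
    ONBUILD RUN bundle install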

Images built from `ONBUILD` should get a separate tag, for example:
`ruby:1.9-onbuild` or `ruby:2.0-onbuild`.

Be careful when putting `ADD` or `COPY` in `ONBUILD`. The “onbuild” image will
fail catastrophically if the new build's context is missing the resource being
added. Adding a separate tag, as recommended above, will help mitigate this by
allowing the `Dockerfile` author to make a choice.

## Examples for Official Repositories

These Official Repositories have exemplary `Dockerfile`s:

* [Go](https://registry.hub.docker.com/_/golang/)
* [Perl](https://registry.hub.docker.com/_/perl/)
* [Hy](https://registry.hub.docker.com/_/hylang/)
* [Rails](https://registry.hub.docker.com/_/rails)

## Additional resources

* [Dockerfile Reference](../reference/builder.md)
* [More about Base Images](baseimages.md)
* [More about Automated Builds](https://docs.docker.com/docker-hub/builds/)
* [Guidelines for Creating Official Repositories](https://docs.docker.com/docker-hub/official_repos/)