<!--[metadata]>
+++
aliases = ["/engine/articles/dockerfile_best-practices/"]
title = "Best practices for writing Dockerfiles"
description = "Hints, tips and guidelines for writing clean, reliable Dockerfiles"
keywords = ["Examples, Usage, base image, docker, documentation, dockerfile, best practices, hub, official repo"]
[menu.main]
parent = "engine_images"
+++
<![end-metadata]-->

# Best practices for writing Dockerfiles

Docker can build images automatically by reading the instructions from a
`Dockerfile`, a text file that contains all the commands, in order, needed to
build a given image. `Dockerfile`s adhere to a specific format and use a
specific set of instructions. You can learn the basics on the
[Dockerfile Reference](../../reference/builder.md) page. If
you’re new to writing `Dockerfile`s, you should start there.

This document covers the best practices and methods recommended by Docker,
Inc. and the Docker community for creating easy-to-use, effective
`Dockerfile`s. We strongly suggest you follow these recommendations (in fact,
if you’re creating an Official Image, you *must* adhere to these practices).

You can see many of these practices and recommendations in action in the
[buildpack-deps `Dockerfile`](https://github.com/docker-library/buildpack-deps/blob/master/jessie/Dockerfile).

> Note: for more detailed explanations of any of the Dockerfile commands
> mentioned here, visit the [Dockerfile Reference](../../reference/builder.md) page.

## General guidelines and recommendations

### Containers should be ephemeral

The container produced by the image your `Dockerfile` defines should be as
ephemeral as possible. By “ephemeral,” we mean that it can be stopped and
destroyed and a new one built and put in place with an absolute minimum of
set-up and configuration.

### Use a .dockerignore file

In most cases, it's best to put each Dockerfile in an empty directory. Then,
add to that directory only the files needed for building the Dockerfile. To
increase the build's performance, you can exclude files and directories by
adding a `.dockerignore` file to that directory as well. This file supports
exclusion patterns similar to `.gitignore` files. For information on creating one,
see the [.dockerignore file](../../reference/builder.md#dockerignore-file).

### Avoid installing unnecessary packages

In order to reduce complexity, dependencies, file sizes, and build times, you
should avoid installing extra or unnecessary packages just because they
might be “nice to have.” For example, you don’t need to include a text editor
in a database image.

### Run only one process per container

In almost all cases, you should only run a single process in a single
container. Decoupling applications into multiple containers makes it much
easier to scale horizontally and reuse containers. If that service depends on
another service, make use of [container linking](../../userguide/networking/default_network/dockerlinks.md).

### Minimize the number of layers

You need to find the balance between readability (and thus long-term
maintainability) of the `Dockerfile` and minimizing the number of layers it
uses. Be strategic and cautious about the number of layers you use.
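
As a rough sketch of that balance (the package name is purely illustrative):
each `RUN` instruction produces its own layer, so grouping related commands
with `&&` keeps the layer count down, while backslash-split lines keep the
instruction readable.

    # Three separate RUN instructions produce three layers:
    RUN apt-get update
    RUN apt-get install -y package-foo
    RUN rm -rf /var/lib/apt/lists/*

    # One combined RUN instruction produces a single layer:
    RUN apt-get update && apt-get install -y package-foo \
        && rm -rf /var/lib/apt/lists/*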
### Sort multi-line arguments

Whenever possible, ease later changes by sorting multi-line arguments
alphanumerically. This will help you avoid duplication of packages and make the
list much easier to update. This also makes PRs a lot easier to read and
review. Adding a space before a backslash (`\`) helps as well.

Here’s an example from the [`buildpack-deps` image](https://github.com/docker-library/buildpack-deps):

    RUN apt-get update && apt-get install -y \
      bzr \
      cvs \
      git \
      mercurial \
      subversion

### Build cache

During the process of building an image, Docker steps through the
instructions in your `Dockerfile`, executing each one in the order specified.
As each instruction is examined, Docker looks for an existing image in its
cache that it can reuse, rather than creating a new (duplicate) image.
If you do not want to use the cache at all, you can use the `--no-cache=true`
option on the `docker build` command.

However, if you do let Docker use its cache, then it is very important to
understand when it will, and will not, find a matching image. The basic rules
that Docker follows are outlined below:

* Starting with a base image that is already in the cache, the next
  instruction is compared against all child images derived from that base
  image to see if one of them was built using the exact same instruction. If
  not, the cache is invalidated.

* In most cases simply comparing the instruction in the `Dockerfile` with one
  of the child images is sufficient. However, certain instructions require
  a little more examination and explanation.

* For the `ADD` and `COPY` instructions, the contents of the file(s)
  in the image are examined and a checksum is calculated for each file.
  The last-modified and last-accessed times of the file(s) are not considered in
  these checksums. During the cache lookup, the checksum is compared against the
  checksum in the existing images. If anything has changed in the file(s), such
  as the contents and metadata, then the cache is invalidated.

* Aside from the `ADD` and `COPY` commands, cache checking does not look at the
  files in the container to determine a cache match. For example, when processing
  a `RUN apt-get -y update` command, the files updated in the container
  are not examined to determine if a cache hit exists. In that case just
  the command string itself is used to find a match.

Once the cache is invalidated, all subsequent `Dockerfile` commands generate
new images and the cache is not used.

## The Dockerfile instructions

Below you'll find recommendations for the best way to write the
various instructions available for use in a `Dockerfile`.

### FROM

[Dockerfile reference for the FROM instruction](../../reference/builder.md#from)

Whenever possible, use current Official Repositories as the basis for your
image. We recommend the [Debian image](https://hub.docker.com/_/debian/)
since it’s very tightly controlled and kept extremely minimal (currently under
100 MB), while still being a full distribution.

### RUN

[Dockerfile reference for the RUN instruction](../../reference/builder.md#run)

As always, to make your `Dockerfile` more readable, understandable, and
maintainable, split long or complex `RUN` statements on multiple lines separated
with backslashes.
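
For instance, a minimal sketch of such a split (the user, group, and directory
names are illustrative): each logical step sits on its own line, while the
`&&` chaining keeps the whole sequence in a single instruction and layer.

    RUN groupadd -r app \
        && useradd -r -g app app \
        && mkdir -p /usr/src/app \
        && chown -R app:app /usr/src/app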

### apt-get

Probably the most common use-case for `RUN` is an application of `apt-get`. The
`RUN apt-get` command, because it installs packages, has several gotchas to look
out for.

You should avoid `RUN apt-get upgrade` or `dist-upgrade`, as many of the
“essential” packages from the base images won't upgrade inside an unprivileged
container. If a package contained in the base image is out-of-date, you should
contact its maintainers.
If you know there’s a particular package, `foo`, that needs to be updated, use
`apt-get install -y foo` to update it automatically.

Always combine `RUN apt-get update` with `apt-get install` in the same `RUN`
statement, for example:

    RUN apt-get update && apt-get install -y \
        package-bar \
        package-baz \
        package-foo

Using `apt-get update` alone in a `RUN` statement causes caching issues, and
subsequent `apt-get install` instructions fail.
For example, say you have a Dockerfile:

    FROM ubuntu:14.04
    RUN apt-get update
    RUN apt-get install -y curl

After building the image, all layers are in the Docker cache. Suppose you later
modify `apt-get install` by adding an extra package:

    FROM ubuntu:14.04
    RUN apt-get update
    RUN apt-get install -y curl nginx

Docker sees the initial and modified instructions as identical and reuses the
cache from previous steps. As a result the `apt-get update` is *NOT* executed
because the build uses the cached version. Because the `apt-get update` is not
run, your build can potentially get an outdated version of the `curl` and `nginx`
packages.

Using `RUN apt-get update && apt-get install -y` ensures your Dockerfile
installs the latest package versions with no further coding or manual
intervention. This technique is known as "cache busting". You can also achieve
cache-busting by specifying a package version. This is known as version pinning,
for example:

    RUN apt-get update && apt-get install -y \
        package-bar \
        package-baz \
        package-foo=1.3.*

Version pinning forces the build to retrieve a particular version regardless of
what’s in the cache. This technique can also reduce failures due to unanticipated changes
in required packages.

Below is a well-formed `RUN` instruction that demonstrates all the `apt-get`
recommendations.

    RUN apt-get update && apt-get install -y \
        aufs-tools \
        automake \
        build-essential \
        curl \
        dpkg-sig \
        libcap-dev \
        libsqlite3-dev \
        mercurial \
        reprepro \
        ruby1.9.1 \
        ruby1.9.1-dev \
        s3cmd=1.1.* \
        && rm -rf /var/lib/apt/lists/*

The `s3cmd` entry specifies a version `1.1.*`. If the image previously
used an older version, specifying the new one causes a cache bust of `apt-get
update` and ensures the installation of the new version. Listing each package
on its own line also helps prevent mistakes such as package duplication.

In addition, cleaning up the apt cache and removing `/var/lib/apt/lists` helps
keep the image size down. Since the `RUN` statement starts with
`apt-get update`, the package cache will always be refreshed prior to
`apt-get install`.

> **Note**: The official Debian and Ubuntu images [automatically run `apt-get clean`](https://github.com/docker/docker/blob/03e2923e42446dbb830c654d0eec323a0b4ef02a/contrib/mkimage/debootstrap#L82-L105),
> so explicit invocation is not required.

### CMD

[Dockerfile reference for the CMD instruction](../../reference/builder.md#cmd)

The `CMD` instruction should be used to run the software contained by your
image, along with any arguments. `CMD` should almost always be used in the
form of `CMD ["executable", "param1", "param2"…]`. Thus, if the image is for a
service (Apache, Rails, etc.), you would run something like
`CMD ["apache2","-DFOREGROUND"]`. Indeed, this form of the instruction is
recommended for any service-based image.

In most other cases, `CMD` should be given an interactive shell (bash, python,
perl, etc), for example, `CMD ["perl", "-de0"]`, `CMD ["python"]`, or
`CMD ["php", "-a"]`. Using this form means that when you execute something like
`docker run -it python`, you’ll get dropped into a usable shell, ready to go.
`CMD` should rarely be used in the manner of `CMD ["param", "param"]` in
conjunction with [`ENTRYPOINT`](../../reference/builder.md#entrypoint), unless
you and your expected users are already quite familiar with how `ENTRYPOINT`
works.

### EXPOSE

[Dockerfile reference for the EXPOSE instruction](../../reference/builder.md#expose)

The `EXPOSE` instruction indicates the ports on which a container will listen
for connections. Consequently, you should use the common, traditional port for
your application. For example, an image containing the Apache web server would
use `EXPOSE 80`, while an image containing MongoDB would use `EXPOSE 27017` and
so on.

For external access, your users can execute `docker run` with a flag indicating
how to map the specified port to the port of their choice.
For container linking, Docker provides environment variables for the path from
the recipient container back to the source (i.e., `MYSQL_PORT_3306_TCP`).
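
For instance, a user of the hypothetical Apache image above could publish the
exposed port 80 on host port 8080 with the `-p` flag (the image name
`my-apache-image` is illustrative):

    $ docker run -d -p 8080:80 my-apache-image

Requests to port 8080 on the host are then forwarded to port 80 in the
container.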

### ENV

[Dockerfile reference for the ENV instruction](../../reference/builder.md#env)

In order to make new software easier to run, you can use `ENV` to update the
`PATH` environment variable for the software your container installs. For
example, `ENV PATH /usr/local/nginx/bin:$PATH` will ensure that `CMD ["nginx"]`
just works.

The `ENV` instruction is also useful for providing required environment
variables specific to services you wish to containerize, such as Postgres’s
`PGDATA`.

Lastly, `ENV` can also be used to set commonly used version numbers so that
version bumps are easier to maintain, as seen in the following example:

    ENV PG_MAJOR 9.3
    ENV PG_VERSION 9.3.4
    RUN curl -SL http://example.com/postgres-$PG_VERSION.tar.xz | tar -xJC /usr/src/postgres && …
    ENV PATH /usr/local/postgres-$PG_MAJOR/bin:$PATH

Similar to having constant variables in a program (as opposed to hard-coding
values), this approach lets you change a single `ENV` instruction to
auto-magically bump the version of the software in your container.

### ADD or COPY

[Dockerfile reference for the ADD instruction](../../reference/builder.md#add)<br/>
[Dockerfile reference for the COPY instruction](../../reference/builder.md#copy)

Although `ADD` and `COPY` are functionally similar, generally speaking, `COPY`
is preferred. That’s because it’s more transparent than `ADD`. `COPY` only
supports the basic copying of local files into the container, while `ADD` has
some features (like local-only tar extraction and remote URL support) that are
not immediately obvious. Consequently, the best use for `ADD` is local tar file
auto-extraction into the image, as in `ADD rootfs.tar.xz /`.

If you have multiple `Dockerfile` steps that use different files from your
context, `COPY` them individually, rather than all at once. This will ensure that
each step's build cache is only invalidated (forcing the step to be re-run) if the
specifically required files change.

For example:

    COPY requirements.txt /tmp/
    RUN pip install --requirement /tmp/requirements.txt
    COPY . /tmp/

This results in fewer cache invalidations for the `RUN` step than if you put the
`COPY . /tmp/` before it.

Because image size matters, using `ADD` to fetch packages from remote URLs is
strongly discouraged; you should use `curl` or `wget` instead. That way you can
delete the files you no longer need after they've been extracted and you won't
have to add another layer to your image. For example, you should avoid doing
things like:

    ADD http://example.com/big.tar.xz /usr/src/things/
    RUN tar -xJf /usr/src/things/big.tar.xz -C /usr/src/things
    RUN make -C /usr/src/things all

And instead, do something like:

    RUN mkdir -p /usr/src/things \
        && curl -SL http://example.com/big.tar.xz \
        | tar -xJC /usr/src/things \
        && make -C /usr/src/things all

For other items (files, directories) that do not require `ADD`’s tar
auto-extraction capability, you should always use `COPY`.

### ENTRYPOINT

[Dockerfile reference for the ENTRYPOINT instruction](../../reference/builder.md#entrypoint)

The best use for `ENTRYPOINT` is to set the image's main command, allowing that
image to be run as though it were that command (and then use `CMD` as the
default flags).

Let's start with an example of an image for the command line tool `s3cmd`:

    ENTRYPOINT ["s3cmd"]
    CMD ["--help"]

Now the image can be run like this to show the command's help:

    $ docker run s3cmd

Or using the right parameters to execute a command:

    $ docker run s3cmd ls s3://mybucket

This is useful because the image name can double as a reference to the binary as
shown in the command above.

The `ENTRYPOINT` instruction can also be used in combination with a helper
script, allowing it to function in a similar way to the command above, even
when starting the tool may require more than one step.

For example, the [Postgres Official Image](https://hub.docker.com/_/postgres/)
uses the following script as its `ENTRYPOINT`:

```bash
#!/bin/bash
set -e

if [ "$1" = 'postgres' ]; then
    chown -R postgres "$PGDATA"

    if [ -z "$(ls -A "$PGDATA")" ]; then
        gosu postgres initdb
    fi

    exec gosu postgres "$@"
fi

exec "$@"
```

> **Note**:
> This script uses [the `exec` Bash command](http://wiki.bash-hackers.org/commands/builtin/exec)
> so that the final running application becomes the container's PID 1. This allows
> the application to receive any Unix signals sent to the container.
> See the [`ENTRYPOINT`](../../reference/builder.md#entrypoint)
> help for more details.
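
To illustrate that note, here is a hedged sketch of an entrypoint that omits
`exec` (hypothetical, not taken from the Postgres image). Because the shell
stays around as PID 1, the `SIGTERM` sent by `docker stop` goes to bash rather
than to the application, and the stop eventually falls back to `SIGKILL`:

```bash
#!/bin/bash
set -e

# Without exec, bash remains PID 1 and the requested command runs as a
# child process, so signals delivered to the container are not received
# by the application.
"$@"
```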

The Postgres helper script is copied into the container and run via
`ENTRYPOINT` on container start:

    COPY ./docker-entrypoint.sh /
    ENTRYPOINT ["/docker-entrypoint.sh"]

This script allows the user to interact with Postgres in several ways.

It can simply start Postgres:

    $ docker run postgres

Or, it can be used to run Postgres and pass parameters to the server:

    $ docker run postgres postgres --help

Lastly, it could also be used to start a totally different tool, such as Bash:

    $ docker run --rm -it postgres bash

### VOLUME

[Dockerfile reference for the VOLUME instruction](../../reference/builder.md#volume)

The `VOLUME` instruction should be used to expose any database storage area,
configuration storage, or files/folders created by your Docker container. You
are strongly encouraged to use `VOLUME` for any mutable and/or user-serviceable
parts of your image.

### USER

[Dockerfile reference for the USER instruction](../../reference/builder.md#user)

If a service can run without privileges, use `USER` to change to a non-root
user. Start by creating the user and group in the `Dockerfile` with something
like `RUN groupadd -r postgres && useradd -r -g postgres postgres`.

> **Note:** Users and groups in an image get a non-deterministic
> UID/GID in that the “next” UID/GID gets assigned regardless of image
> rebuilds. So, if it’s critical, you should assign an explicit UID/GID.

You should avoid installing or using `sudo` since it has unpredictable TTY and
signal-forwarding behavior that can cause more problems than it solves. If
you absolutely need functionality similar to `sudo` (e.g., initializing the
daemon as root but running it as non-root), you may be able to use
[“gosu”](https://github.com/tianon/gosu).

Lastly, to reduce layers and complexity, avoid switching `USER` back
and forth frequently.

### WORKDIR

[Dockerfile reference for the WORKDIR instruction](../../reference/builder.md#workdir)

For clarity and reliability, you should always use absolute paths for your
`WORKDIR`. Also, you should use `WORKDIR` instead of proliferating
instructions like `RUN cd … && do-something`, which are hard to read,
troubleshoot, and maintain.

### ONBUILD

[Dockerfile reference for the ONBUILD instruction](../../reference/builder.md#onbuild)

An `ONBUILD` command executes after the current `Dockerfile` build completes.
`ONBUILD` executes in any child image derived `FROM` the current image. Think
of the `ONBUILD` command as an instruction the parent `Dockerfile` gives
to the child `Dockerfile`.

A Docker build executes `ONBUILD` commands before any command in a child
`Dockerfile`.

`ONBUILD` is useful for images that are going to be built `FROM` a given
image. For example, you would use `ONBUILD` for a language stack image that
builds arbitrary user software written in that language within the
`Dockerfile`, as you can see in [Ruby’s `ONBUILD` variants](https://github.com/docker-library/ruby/blob/master/2.1/onbuild/Dockerfile).

Images built from `ONBUILD` should get a separate tag, for example:
`ruby:1.9-onbuild` or `ruby:2.0-onbuild`.

Be careful when putting `ADD` or `COPY` in `ONBUILD`. The “onbuild” image will
fail catastrophically if the new build's context is missing the resource being
added. Adding a separate tag, as recommended above, will help mitigate this by
allowing the `Dockerfile` author to make a choice.
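
As a rough sketch of this pattern (loosely modeled on the Ruby `onbuild`
variants linked above; paths, base tag, and commands are illustrative), the
parent image defers copying source and installing dependencies to the child
build:

    FROM ruby:2.1
    RUN mkdir -p /usr/src/app
    WORKDIR /usr/src/app

    # These instructions run during the child image's build, against the
    # child's own build context.
    ONBUILD COPY Gemfile /usr/src/app/
    ONBUILD RUN bundle install
    ONBUILD COPY . /usr/src/app

A child `Dockerfile` built `FROM` such an image then only needs a `FROM` line
to have its source copied and its dependencies installed.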

## Examples for Official Repositories

These Official Repositories have exemplary `Dockerfile`s:

* [Go](https://hub.docker.com/_/golang/)
* [Perl](https://hub.docker.com/_/perl/)
* [Hy](https://hub.docker.com/_/hylang/)
* [Rails](https://hub.docker.com/_/rails)

## Additional resources:

* [Dockerfile Reference](../../reference/builder.md)
* [More about Base Images](baseimages.md)
* [More about Automated Builds](https://docs.docker.com/docker-hub/builds/)
* [Guidelines for Creating Official Repositories](https://docs.docker.com/docker-hub/official_repos/)