<!--[metadata]>
+++
title = "Swarm key concepts"
description = "Introducing key concepts for Docker Swarm"
keywords = ["docker, container, cluster, swarm"]
[menu.main]
identifier="swarm-concepts"
parent="engine_swarm"
weight="2"
advisory = "rc"
+++
<![end-metadata]-->

# Docker Swarm key concepts

Building upon the core features of Docker Engine, Docker Swarm enables you to
create a Swarm of Docker Engines and orchestrate services to run in the Swarm.
This topic describes key concepts to help you begin using Docker Swarm.

## Swarm

**Docker Swarm** is the name for the cluster management and orchestration
features embedded in the Docker Engine. Engines that participate in a cluster
run in **Swarm mode**.

A **Swarm** is a cluster of Docker Engines where you deploy a set of application
services. When you deploy an application to a Swarm, you specify the desired
state of the services, such as which services to run and how many instances of
those services. The Swarm takes care of all orchestration duties required to
keep the services running in the desired state.

## Node

A **node** is an active instance of the Docker Engine in the Swarm.

When you deploy your application to a Swarm, **manager nodes** accept the
service definition that describes the Swarm's desired state. Manager nodes also
perform the orchestration and cluster management functions required to maintain
the desired state of the Swarm. For example, when a manager node receives the
definition of a web server service, it dispatches the service tasks to worker
nodes.

By default the Docker Engine starts one manager node for a Swarm, but as you
scale you can add more managers to make the cluster more fault-tolerant. If you
require high availability Swarm management, Docker recommends three or five
managers in your cluster.

Because Swarm manager nodes share data using Raft, there must be an odd number
of managers. The Swarm cluster can continue functioning in the face of up to
`(N-1)/2` manager failures, where `N` is the number of manager nodes. More than
five managers is likely to degrade cluster performance and is not recommended.

**Worker nodes** receive and execute tasks dispatched from manager nodes. By
default manager nodes are also worker nodes, but you can configure managers to
be manager-only nodes.

## Services and tasks

A **service** is the definition of how to run the various tasks that make up
your application. For example, you may create a service that deploys a Redis
image in your Swarm.

A **task** is the atomic scheduling unit of Swarm. For example, a task may be to
schedule a Redis container to run on a worker node.

## Service types

For **replicated services**, Swarm deploys a specific number of replica tasks
based upon the scale you set in the desired state.

For **global services**, Swarm runs one task for the service on every available
node in the cluster.

## Load balancing

Swarm uses **ingress load balancing** to expose the services you want to make
available externally to the Swarm. Swarm can automatically assign the service a
**PublishedPort**, or you can configure a PublishedPort for the service in the
30000-32767 range. External components, such as cloud load balancers, can access
the service on the PublishedPort of any node in the cluster, even if the node is
not currently running the service.

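For example, the following sketch publishes a port for a service. The `my-web`
name, the `nginx` image, and port `8080` are only illustrative choices, not
requirements of Swarm:

```bash
# Publish port 80 of the task containers as port 8080 on the Swarm.
# The ingress load balancer routes a request to <any-node-IP>:8080 to one of
# the service's running tasks, even if that node is not running a task itself.
$ docker service create --name my-web --publish 8080:80 --replicas 2 nginx
```
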
Swarm has an internal DNS component that automatically assigns each service in
the Swarm a DNS entry. Swarm uses **internal load balancing** to distribute
requests among services within the cluster based upon the services' DNS names.

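As a minimal sketch of internal load balancing, the overlay network name
`my-network` and the service names below are illustrative:

```bash
# Create an overlay network and attach two services to it.
$ docker network create --driver overlay my-network
$ docker service create --name redis --network my-network redis:3.0.6
$ docker service create --name my-web --network my-network --replicas 3 nginx

# Tasks of the my-web service can reach the redis service by its DNS name
# "redis"; Swarm's internal load balancer distributes those requests across
# the redis tasks.
```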