# Stack to Kubernetes mapping

There are several key differences between Swarm and Kubernetes which prevent a
1:1 mapping of Swarm onto Kubernetes. An opinionated mapping can be achieved,
however, with a couple of minor caveats.

As a stack is essentially just a list of Swarm services, the mapping is done on
a per-service basis.

## Swarm service to Kubernetes objects

There are fundamentally two classes of Kubernetes objects required to map a
Swarm service: something to deploy and scale the containers, and something to
handle intra- and extra-stack networking.

### Pod deployment

In Kubernetes one does not manipulate individual containers but rather a set of
containers called a
[_pod_](https://kubernetes.io/docs/concepts/workloads/pods/pod/). Pods can be
deployed and scaled using different controllers depending on the desired
behaviour.

The following Compose snippet declares a global service:

```yaml
version: "3.6"

services:
  worker:
    image: dockersamples/examplevotingapp_worker
    deploy:
      mode: global
```

If a service is declared to be global, Compose on Kubernetes uses a
[_DaemonSet_](https://kubernetes.io/docs/concepts/workloads/controllers/daemonset/)
to deploy pods. **Note:** Such services cannot use a persistent volume.

The following Compose snippet declares a service that uses a volume for storage:

```yaml
version: "3.6"

services:
  mysql:
    volumes:
      - db-data:/var/lib/mysql

volumes:
  db-data:
```

If a service uses a volume, a
[_StatefulSet_](https://kubernetes.io/docs/concepts/workloads/controllers/statefulset/)
is used. For more information about how volumes are handled, see the
[following section](#volumes).

In all other cases, a
[_Deployment_](https://kubernetes.io/docs/concepts/workloads/controllers/deployment/)
is used.

#### Volumes

There are several different types of volumes that are handled by Compose on
Kubernetes.

The following Compose snippet declares a service that uses a persistent volume:

```yaml
version: "3.6"

services:
  mysql:
    volumes:
      - db-data:/var/lib/mysql

volumes:
  db-data:
```

A
[_PersistentVolumeClaim_](https://kubernetes.io/docs/concepts/storage/persistent-volumes/)
with an empty provider is created when one specifies a persistent volume for a
service. This requires that Kubernetes has a default storage provider
configured.

The following Compose snippet declares a service with a host bind mount:

```yaml
version: "3.6"

services:
  web:
    image: nginx:alpine
    volumes:
      - type: bind
        source: /srv/data/static
        target: /opt/app/static
```

Host bind mounts are supported, but note that an absolute source path is
required.

The following Compose snippet declares a service with a tmpfs mount:

```yaml
version: "3.6"

services:
  web:
    image: nginx:alpine
    tmpfs:
      - /tmpfs
```

Mounts of type tmpfs create an empty directory that is stored in memory for the
life of the pod.
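To make the three volume mappings above concrete, the sketch below shows roughly
how each kind of mount could surface at the pod level: a named volume backed by a
_PersistentVolumeClaim_, a bind mount backed by a `hostPath` volume, and a tmpfs
mount backed by an in-memory `emptyDir`. This is a hand-written, simplified
manifest that combines the three examples into one pod for brevity; it is not the
controller's exact output, and names, mount paths, and the claim reference are
assumptions (in practice the named-volume case is managed through a
_StatefulSet_).

```yaml
# Illustrative pod spec fragment; values are assumptions, not controller output.
apiVersion: v1
kind: Pod
metadata:
  name: volumes-example
spec:
  containers:
    - name: app
      image: nginx:alpine
      volumeMounts:
        - name: db-data            # named volume -> PersistentVolumeClaim
          mountPath: /var/lib/mysql
        - name: static             # host bind mount -> hostPath
          mountPath: /opt/app/static
        - name: tmpfs              # tmpfs -> in-memory emptyDir
          mountPath: /tmpfs
  volumes:
    - name: db-data
      persistentVolumeClaim:
        claimName: db-data         # relies on the default storage provider
    - name: static
      hostPath:
        path: /srv/data/static     # absolute source path is required
    - name: tmpfs
      emptyDir:
        medium: Memory             # kept in memory for the life of the pod
```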
#### Secrets

Secrets in Swarm are simple key-value pairs. In Kubernetes, a secret has a name
and then a map of keys to values. This means that the mapping is non-trivial.

The following Compose snippet shows a service with two secrets:

```yaml
version: "3.6"

services:
  web:
    image: nginx:alpine
    secrets:
      - mysecret
      - myexternalsecret

secrets:
  mysecret:
    file: ./my_secret.txt
  myexternalsecret:
    external: true
```

When deployed using the Docker CLI, a Kubernetes secret will be created from the
client-local file `./my_secret.txt`. The secret's name will be `mysecret` and it
will have a single key `my_secret.txt` whose value will be the contents of the
file. As expected, this secret will then be mounted to `/run/secrets/mysecret`
in the pod.

External secrets need to be created manually by the user using `kubectl` or the
relevant Kubernetes APIs. The secret name must match the Swarm secret name, its
key must be `file`, and the associated value must be the secret value:

```bash
$ echo -n 'external secret' > ./file
$ kubectl create secret generic myexternalsecret --from-file=./file
secret "myexternalsecret" created
```

#### Configs

Configs work the same way as [secrets](#secrets).

### Intra-stack networking

Kubernetes does not have the notion of a _network_ like Swarm does. Instead, all
pods that exist in a namespace can network with each other. In order for DNS
name resolution between pods to work, a
[_HeadlessService_](https://kubernetes.io/docs/concepts/services-networking/service/#headless-services)
is required.

As we are unable to determine in advance which stack services need to
communicate with each other, we create a _HeadlessService_ for each stack
service with the name of the service.

**Note:** Service names must be unique within a Kubernetes namespace.

### Extra-stack networking

In order for stack services to be accessible to the outside world, a port must
be exposed as shown in the following snippet:

```yaml
version: "3.6"

services:
  web:
    image: nginx:alpine
    ports:
      - target: 80
        published: 8080
        protocol: tcp
        mode: host
```

In this case, a published port of 8080 is specified for the target port of 80.
To do this, a
[_LoadBalancer_](https://kubernetes.io/docs/concepts/services-networking/service/#type-loadbalancer)
service is created with the service name suffixed by `-published`. This
implicitly creates
[_NodePort_](https://kubernetes.io/docs/concepts/services-networking/service/#type-nodeport)
and _ClusterIP_ services.

**Note**: For clusters that do not have a _LoadBalancer_, the controller can be
run with `--default-service-type=NodePort`. This way, a _NodePort_ service is
created instead of a _LoadBalancer_ service, with the published port being used
as the node port. This requires the published port to be within the configured
_NodePort_ range.

If only a target port is specified, a _NodePort_ service is created with a
random port.
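As an illustration of the networking mapping described above, the sketch below
shows roughly the pair of Services one could expect for the `web` service from
the last snippet: a headless Service named after the stack service for
intra-stack DNS, and a `web-published` _LoadBalancer_ Service exposing the
published port 8080 on target port 80. It is hand-written for illustration; the
selector labels and other metadata are assumptions, not the controller's exact
output.

```yaml
# Illustrative Services; label selectors and metadata are assumptions.
apiVersion: v1
kind: Service
metadata:
  name: web                # headless Service: DNS resolution between stack services
spec:
  clusterIP: None
  selector:
    app: web
---
apiVersion: v1
kind: Service
metadata:
  name: web-published      # published port -> LoadBalancer Service
spec:
  type: LoadBalancer       # or NodePort when run with --default-service-type=NodePort
  selector:
    app: web
  ports:
    - port: 8080           # published port
      targetPort: 80       # target (container) port
      protocol: TCP
```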