# Developer guide

This document provides some useful information and tips for a developer
creating an operator powered by Helm.

## Getting started with Helm Charts

Since we are interested in using Helm for the lifecycle management of our
application on Kubernetes, it is beneficial for a developer to get a good grasp
of [Helm charts][helm_charts]. Helm charts allow a developer to leverage their
existing Kubernetes resource files (written in YAML). One of the biggest
benefits of using Helm in conjunction with existing Kubernetes resource files
is the ability to use templating so that you can customize Kubernetes resources
with the simplicity of a few [Helm values][helm_values].

### Installing Helm

If you are unfamiliar with Helm, the easiest way to get started is to
[install Helm][helm_install] and test your charts using the `helm` command line
tool.

**NOTE:** Installing Helm's Tiller component in your cluster is not required,
because the Helm operator runs a Tiller component internally.

### Testing a Helm chart locally

Sometimes it is beneficial for a developer to run the Helm chart installation
from their local machine as opposed to running/rebuilding the operator each
time. To do this, initialize a new project:

```sh
$ operator-sdk new --type helm --kind Foo --api-version foo.example.com/v1alpha1 foo-operator
INFO[0000] Creating new Helm operator 'foo-operator'.
INFO[0000] Created build/Dockerfile
INFO[0000] Created watches.yaml
INFO[0000] Created deploy/service_account.yaml
INFO[0000] Created deploy/role.yaml
INFO[0000] Created deploy/role_binding.yaml
INFO[0000] Created deploy/operator.yaml
INFO[0000] Created deploy/crds/foo_v1alpha1_foo_crd.yaml
INFO[0000] Created deploy/crds/foo_v1alpha1_foo_cr.yaml
INFO[0000] Created helm-charts/foo/
INFO[0000] Run git init ...
Initialized empty Git repository in /home/joe/go/src/github.com/operator-framework/foo-operator/.git/
INFO[0000] Run git init done
INFO[0000] Project creation complete.

$ cd foo-operator
```

For this example we will use the default Nginx Helm chart scaffolded by
`operator-sdk new`.
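
The chart's defaults live in `helm-charts/foo/values.yaml`. As an illustrative
excerpt (the generated file may differ slightly between SDK versions), the
defaults that matter for this walkthrough look roughly like this; these are the
values the rendered manifests below reflect and that we will later override:

```yaml
# helm-charts/foo/values.yaml (illustrative excerpt; your generated file may differ)
replicaCount: 1

image:
  repository: nginx
  tag: stable
  pullPolicy: IfNotPresent

service:
  type: ClusterIP
  port: 80
```
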
Without making any changes, we can see what the default
release manifests are:

```sh
$ helm template --name test-release helm-charts/foo
---
# Source: foo/templates/service.yaml
apiVersion: v1
kind: Service
metadata:
  name: test-release-foo
  labels:
    app.kubernetes.io/name: foo
    helm.sh/chart: foo-0.1.0
    app.kubernetes.io/instance: test-release
    app.kubernetes.io/managed-by: Tiller
spec:
  type: ClusterIP
  ports:
    - port: 80
      targetPort: http
      protocol: TCP
      name: http
  selector:
    app.kubernetes.io/name: foo
    app.kubernetes.io/instance: test-release

---
# Source: foo/templates/deployment.yaml
apiVersion: apps/v1beta2
kind: Deployment
metadata:
  name: test-release-foo
  labels:
    app.kubernetes.io/name: foo
    helm.sh/chart: foo-0.1.0
    app.kubernetes.io/instance: test-release
    app.kubernetes.io/managed-by: Tiller
spec:
  replicas: 1
  selector:
    matchLabels:
      app.kubernetes.io/name: foo
      app.kubernetes.io/instance: test-release
  template:
    metadata:
      labels:
        app.kubernetes.io/name: foo
        app.kubernetes.io/instance: test-release
    spec:
      containers:
        - name: foo
          image: "nginx:stable"
          imagePullPolicy: IfNotPresent
          ports:
            - name: http
              containerPort: 80
              protocol: TCP
          livenessProbe:
            httpGet:
              path: /
              port: http
          readinessProbe:
            httpGet:
              path: /
              port: http
          resources:
            {}

---
# Source: foo/templates/ingress.yaml

```

Next, deploy these resource manifests to your cluster without using
Tiller:

```sh
$ helm template --name test-release helm-charts/foo | kubectl apply -f -
service/test-release-foo created
deployment.apps/test-release-foo created
```

Check that the release resources were created:

```sh
$ kubectl get all -l app.kubernetes.io/instance=test-release
NAME                                    READY     STATUS    RESTARTS   AGE
pod/test-release-foo-5554d49986-47676   1/1       Running   0          2m

NAME                       TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)   AGE
service/test-release-foo   ClusterIP   10.100.136.126   <none>        80/TCP    2m

NAME                               DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/test-release-foo   1         1         1            1           2m

NAME                                          DESIRED   CURRENT   READY     AGE
replicaset.apps/test-release-foo-5554d49986   1         1         1         2m
```

Next, let's create a simple values file that we can use to override the Helm
chart's defaults:

```sh
cat << EOF >> overrides.yaml
replicaCount: 2
service:
  port: 8080
EOF
```

Re-run the templates and re-apply them to the cluster, this time using the
`overrides.yaml` file we just created:

```sh
$ helm template -f overrides.yaml --name test-release helm-charts/foo | kubectl apply -f -
service/test-release-foo configured
deployment.apps/test-release-foo configured
```

Now you'll see that there are 2 deployment replicas and the service port
has been updated to `8080`.

```sh
$ kubectl get deployment -l app.kubernetes.io/instance=test-release
NAME               DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
test-release-foo   2         2         2            2           3m

$ kubectl get service -l app.kubernetes.io/instance=test-release
NAME               TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)    AGE
test-release-foo   ClusterIP   10.100.136.126   <none>        8080/TCP   3m
```

Lastly, delete the release:

```sh
$ helm template -f overrides.yaml --name test-release helm-charts/foo | kubectl delete -f -
service "test-release-foo" deleted
deployment.apps "test-release-foo" deleted
```

Check that the resources were deleted:

```sh
$ kubectl get all -l app.kubernetes.io/instance=test-release
No resources found.
```

## Using Helm inside of an Operator

Now that we have demonstrated using the Helm CLI, we want to trigger this Helm
chart release process when a custom resource changes. We want to map our `foo`
Helm chart to a specific Kubernetes resource that the operator will watch. This
mapping is done in a file called `watches.yaml`.

### Watches file

The operator expects a mapping file, which lists each GVK (group, version, and
kind) to watch and the corresponding path to a Helm chart, to be copied into
the container at a predefined location: `/opt/helm/watches.yaml`.

Dockerfile example:

```Dockerfile
COPY watches.yaml /opt/helm/watches.yaml
```

The watches file is written in YAML and contains an array of objects. Each
object has the following mandatory fields:

**version**: The version of the Custom Resource that you will be watching.

**group**: The group of the Custom Resource that you will be watching.

**kind**: The kind of the Custom Resource that you will be watching.

**chart**: The path to the Helm chart that you have added to the
container. For example, if your Helm charts directory is at
`/opt/helm/helm-charts/` and your Helm chart is named `busybox`, this value
will be `/opt/helm/helm-charts/busybox`.

Example specifying a Helm chart watch:

```yaml
---
- version: v1alpha1
  group: foo.example.com
  kind: Foo
  chart: /opt/helm/helm-charts/foo
```

### Custom Resource file

The Custom Resource file is a standard Kubernetes resource file. The object has
the following mandatory fields:

**apiVersion**: The version of the Custom Resource that will be created.

**kind**: The kind of the Custom Resource that will be created.

**metadata**: Kubernetes-specific metadata for the Custom Resource that will be
created.

**spec**: The spec contains the YAML values that override the Helm chart's
defaults. This corresponds to the `overrides.yaml` file we created above. This
field is optional and can be empty, which results in the default Helm chart
being released by the operator.
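
For example, a `Foo` Custom Resource whose `spec` carries the same overrides we
placed in `overrides.yaml` earlier might look like the following (an
illustrative sketch; the field names come from the scaffolded Nginx chart's
default values):

```yaml
apiVersion: foo.example.com/v1alpha1
kind: Foo
metadata:
  name: example-foo
spec:
  # Passed to Helm as override values, just like overrides.yaml above.
  replicaCount: 2
  service:
    port: 8080
```
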

### Testing a Helm operator locally

Once a developer is comfortable working with the above workflow, it will be
beneficial to test the logic inside of an operator. To accomplish this, we can
use `operator-sdk up local` from the top-level directory of our project. The
`up local` command reads from `./watches.yaml` and uses `~/.kube/config` to
communicate with a Kubernetes cluster just as the `kubectl apply` commands did
when we were testing our Helm chart locally. This section assumes the developer
has read the [Helm Operator user guide][helm_operator_user_guide] and has the
proper dependencies installed.

Since `up local` reads from `./watches.yaml`, there are a couple of options
available to the developer. If `chart` is left at its default
(`/opt/helm/helm-charts/<name>`), the Helm chart must exist at that location on
the filesystem. It is recommended that the developer create a symlink at this
location, pointing to the Helm chart in the project directory, so that changes
to the Helm chart are reflected where necessary.

```sh
sudo mkdir -p /opt/helm/helm-charts
sudo ln -s $PWD/helm-charts/<name> /opt/helm/helm-charts/<name>
```

Create a Custom Resource Definition (CRD) and proper Role-Based Access Control
(RBAC) definitions for resource Foo. `operator-sdk` autogenerates these files
inside of the `deploy` folder:

```sh
kubectl create -f deploy/crds/foo_v1alpha1_foo_crd.yaml
kubectl create -f deploy/service_account.yaml
kubectl create -f deploy/role.yaml
kubectl create -f deploy/role_binding.yaml
```

Run the `up local` command:

```sh
$ operator-sdk up local
INFO[0000] Running the operator locally.
INFO[0000] Go Version: go1.10.3
INFO[0000] Go OS/Arch: linux/amd64
INFO[0000] operator-sdk Version: v0.2.0+git
{"level":"info","ts":1543357618.0081263,"logger":"kubebuilder.controller","caller":"controller/controller.go:120","msg":"Starting EventSource","Controller":"foo-controller","Source":{"Type":{"apiVersion":"foo.example.com/v1alpha1","kind":"Foo"}}}
{"level":"info","ts":1543357618.008322,"logger":"helm.controller","caller":"controller/controller.go:73","msg":"Watching resource","apiVersion":"foo.example.com/v1alpha1","kind":"Foo","namespace":"default","resyncPeriod":"5s"}
```

Now that the operator is watching resource `Foo` for events, the creation of a
Custom Resource will trigger our Helm chart to be executed. Take a look at
`deploy/crds/foo_v1alpha1_foo_cr.yaml`. Our chart does not have a `size` value,
so let's remove it. Your CR file should look like the following:

```yaml
apiVersion: "foo.example.com/v1alpha1"
kind: "Foo"
metadata:
  name: "example-foo"
spec:
  # Add fields here
```

Since `spec` is not set, Helm is invoked with no extra override values. The
next section covers how extra values are passed from a Custom Resource to
Helm. This is why it is important for the chart to have sane defaults.

Create an instance of the `Foo` Custom Resource:

```sh
$ kubectl apply -f deploy/crds/foo_v1alpha1_foo_cr.yaml
foo.foo.example.com/example-foo created
```

The custom resource status will be updated with the release information if the
installation succeeds.
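
The release information is recorded under `status.release` on the `Foo`
resource. As a rough illustration (only the field used below is shown, and the
generated name will differ per release), the status contains something like:

```yaml
status:
  release:
    name: example-foo-4f8ay4vfr99ulx905hax3j6x1
```
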
Let's get the release name:

```sh
$ export RELEASE_NAME=$(kubectl get foos example-foo -o jsonpath={..status.release.name})
$ echo $RELEASE_NAME
example-foo-4f8ay4vfr99ulx905hax3j6x1
```

Check that the release resources were created:

```sh
$ kubectl get all -l app.kubernetes.io/instance=${RELEASE_NAME}
NAME                                                        READY     STATUS    RESTARTS   AGE
pod/example-foo-4f8ay4vfr99ulx905hax3j6x1-9dfd67fc6-s6krb   1/1       Running   0          4m

NAME                                            TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)   AGE
service/example-foo-4f8ay4vfr99ulx905hax3j6x1   ClusterIP   10.102.91.83   <none>        80/TCP    4m

NAME                                                    DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/example-foo-4f8ay4vfr99ulx905hax3j6x1   1         1         1            1           4m

NAME                                                              DESIRED   CURRENT   READY     AGE
replicaset.apps/example-foo-4f8ay4vfr99ulx905hax3j6x1-9dfd67fc6   1         1         1         4m
```

Modify `deploy/crds/foo_v1alpha1_foo_cr.yaml` to set `replicaCount` to `2`:

```yaml
apiVersion: "foo.example.com/v1alpha1"
kind: "Foo"
metadata:
  name: "example-foo"
spec:
  # Add fields here
  replicaCount: 2
```

Apply the changes to Kubernetes and confirm that the deployment has 2 replicas:

```sh
$ kubectl apply -f deploy/crds/foo_v1alpha1_foo_cr.yaml
foo.foo.example.com/example-foo configured

$ kubectl get deployment -l app.kubernetes.io/instance=${RELEASE_NAME}
NAME                                    DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
example-foo-4f8ay4vfr99ulx905hax3j6x1   2         2         2            2           6m
```

### Testing a Helm operator on a cluster

Now that the developer is confident in the operator logic, the next step is to
test the operator inside of a pod on a Kubernetes cluster. Running as a pod
inside a Kubernetes cluster is preferred for production use.

Build the `foo-operator` image and push it to a registry:

```sh
operator-sdk build quay.io/example/foo-operator:v0.0.1
docker push quay.io/example/foo-operator:v0.0.1
```

The Kubernetes deployment manifest is generated in `deploy/operator.yaml`. The
deployment image in this file needs to be changed from the placeholder
`REPLACE_IMAGE` to the previously built image. To do this, run:

```sh
sed -i 's|REPLACE_IMAGE|quay.io/example/foo-operator:v0.0.1|g' deploy/operator.yaml
```

**Note**
If you are performing these steps on macOS, use the following command instead:

```sh
sed -i "" 's|REPLACE_IMAGE|quay.io/example/foo-operator:v0.0.1|g' deploy/operator.yaml
```

Deploy the foo-operator:

```sh
kubectl create -f deploy/crds/foo_v1alpha1_foo_crd.yaml # if CRD doesn't exist already
kubectl create -f deploy/service_account.yaml
kubectl create -f deploy/role.yaml
kubectl create -f deploy/role_binding.yaml
kubectl create -f deploy/operator.yaml
```

Verify that the foo-operator is up and running:

```sh
$ kubectl get deployment
NAME           DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
foo-operator   1         1         1            1           1m
```
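
From here, the in-cluster operator reconciles `Foo` resources the same way
`operator-sdk up local` did, so the earlier local-testing steps can be reused
against it. For example (reusing the CR and the status-based release lookup
from the previous section):

```sh
# Apply the same Foo CR as before; the in-cluster operator turns it into a Helm release.
kubectl apply -f deploy/crds/foo_v1alpha1_foo_cr.yaml

# The release name is again recorded on the CR status and can be used to
# inspect the released resources.
kubectl get all -l app.kubernetes.io/instance=$(kubectl get foos example-foo -o jsonpath={..status.release.name})
```
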

## Override values sent to Helm

The override values that are sent to Helm are managed by the operator. The
contents of the `spec` section are passed along verbatim to Helm and treated
like a values file would be if the Helm CLI were used (e.g.
`helm install -f overrides.yaml ./my-chart`).

For the CR example:

```yaml
apiVersion: "app.example.com/v1alpha1"
kind: "Database"
metadata:
  name: "example"
spec:
  message: "Hello world 2"
  newParameter: "newParam"
```

The structure passed to Helm as values is:

```yaml
message: "Hello world 2"
newParameter: "newParam"
```

[helm_charts]:https://helm.sh/docs/developing_charts/
[helm_values]:https://helm.sh/docs/using_helm/#customizing-the-chart-before-installing
[helm_install]:https://helm.sh/docs/using_helm/
[helm_operator_user_guide]:../user-guide.md