github.skymusic.top/operator-framework/operator-sdk@v0.8.2/pkg/leader/doc.go

// Copyright 2018 The Operator-SDK Authors
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
//     http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.

/*
Package leader implements Leader For Life, a simple alternative to lease-based
leader election.

Both the Leader For Life and lease-based approaches to leader election are
built on the concept that each candidate will attempt to create a resource with
the same GVK, namespace, and name. Whichever candidate succeeds becomes the
leader. The rest receive "already exists" errors and wait for a new
opportunity.

Leases provide a way to indirectly observe whether the leader still exists. The
leader must periodically renew its lease, usually by updating a timestamp in
its lock record. If it fails to do so, it is presumed dead, and a new election
takes place. If the leader is in fact still alive but unreachable, it is
expected to gracefully step down. A variety of factors can cause a leader to
fail to update its lease yet continue acting as the leader before it succeeds
at stepping down.

In the "leader for life" approach, a specific Pod is the leader. Once
established (by creating a lock record), the Pod is the leader until it is
destroyed. There is no possibility of multiple Pods thinking they are the
leader at the same time. Once it becomes the leader, a Pod does not need to
renew a lease, consider stepping down, or do anything else related to election
activity.

The lock record in this case is a ConfigMap whose OwnerReference is set to the
Pod that is the leader. When the leader is destroyed, the ConfigMap gets
garbage-collected, enabling a different candidate Pod to become the leader.

Leader for Life requires that all candidate Pods be in the same Namespace. It
uses the downward API to determine the Pod name, since the hostname is not
reliable. You should run it configured with:

	env:
	  - name: POD_NAME
	    valueFrom:
	      fieldRef:
	        fieldPath: metadata.name
*/
package leader
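
A typical call site for this package, as a minimal sketch: it assumes the
package's Become helper (the entry point exposed by pkg/leader in this SDK
version), called early in an operator's main function once the POD_NAME
environment variable above is wired up. The "memcached-operator-lock" name is
only an example; any lock name unique to the operator will do.

	package main

	import (
		"context"
		"log"

		"github.com/operator-framework/operator-sdk/pkg/leader"
	)

	func main() {
		// Become returns only once this candidate holds the lock (or on
		// error); after that there is no further election work to do.
		if err := leader.Become(context.TODO(), "memcached-operator-lock"); err != nil {
			log.Fatal(err)
		}

		// ... start the manager and controllers as usual.
	}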
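
The create-or-fail locking described in the comment above can also be sketched
directly against client-go. This is an illustration of the mechanism, not this
package's actual implementation: the becomeLeader helper, the "operators"
namespace, and the "my-operator-lock" name are made up for the example, and the
Get/Create calls use the context-free signatures of client-go releases
contemporary with this SDK version.

	package main

	import (
		"fmt"
		"os"

		corev1 "k8s.io/api/core/v1"
		apierrors "k8s.io/apimachinery/pkg/api/errors"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/rest"
	)

	// becomeLeader tries to create the lock ConfigMap owned by this Pod. It
	// returns true if this Pod is now the leader, false if another Pod
	// already holds the lock. (Hypothetical helper, for illustration only.)
	func becomeLeader(clientset kubernetes.Interface, namespace, podName, lockName string) (bool, error) {
		// Look up our own Pod so the lock can carry an OwnerReference to it;
		// when the Pod is deleted, the ConfigMap is garbage-collected.
		pod, err := clientset.CoreV1().Pods(namespace).Get(podName, metav1.GetOptions{})
		if err != nil {
			return false, err
		}

		lock := &corev1.ConfigMap{
			ObjectMeta: metav1.ObjectMeta{
				Name:      lockName,
				Namespace: namespace,
				OwnerReferences: []metav1.OwnerReference{{
					APIVersion: "v1",
					Kind:       "Pod",
					Name:       pod.Name,
					UID:        pod.UID,
				}},
			},
		}

		// Whoever creates the ConfigMap first is the leader; everyone else
		// gets an "already exists" error and waits for another chance.
		_, err = clientset.CoreV1().ConfigMaps(namespace).Create(lock)
		switch {
		case err == nil:
			return true, nil
		case apierrors.IsAlreadyExists(err):
			return false, nil
		default:
			return false, err
		}
	}

	func main() {
		cfg, err := rest.InClusterConfig()
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		clientset, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}

		// POD_NAME is supplied via the downward API, as shown in the doc.
		isLeader, err := becomeLeader(clientset, "operators", os.Getenv("POD_NAME"), "my-operator-lock")
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		fmt.Println("became leader:", isLeader)
	}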