github.com/alibaba/sealer@v0.8.6-0.20220430115802-37a2bdaa8173/docs/site/src/zh/getting-started/using-clusterfile.md

# Initialize a Cluster with a Clusterfile

A Clusterfile supports overriding or merging user-defined kubeadm configuration, Helm values, plugins, and more.
     4  
```yaml
apiVersion: sealer.cloud/v2
kind: Cluster
metadata:
  name: my-cluster
spec:
  image: kubernetes:v1.19.8
  env:
    - key1=value1
    - key2=value2
    - key2=value3 #key2=[value2, value3]
  ssh:
    passwd:
    pk: xxx
    pkPasswd: xxx
    user: root
    port: "2222"
  hosts:
    - ips: [ 192.168.0.2 ]
      roles: [ master ]
      env:
        - etcd-dir=/data/etcd
      ssh:
        user: xxx
        passwd: xxx
        port: "2222"
    - ips: [ 192.168.0.3 ]
      roles: [ node,db ]
```
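A key that appears more than once in `env` (like `key2` above) merges into a list. A minimal bash sketch of that merge semantics, purely illustrative (the variable names and loop are ours, not sealer's implementation):

```shell
#!/bin/bash
# Collect env entries; a key listed more than once accumulates its values.
entries=("key1=value1" "key2=value2" "key2=value3")

declare -A merged
for e in "${entries[@]}"; do
  k=${e%%=*}; v=${e#*=}
  if [ -n "${merged[$k]}" ]; then
    merged[$k]="${merged[$k]} $v"   # key seen before: append, forming a list
  else
    merged[$k]=$v
  fi
done

echo "key2=[${merged[key2]}]"   # key2=[value2 value3]
```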
    34  
## Use Cases

### Launch a simple cluster

Three masters and one node: clear and simple.
    40  
```yaml
apiVersion: sealer.cloud/v2
kind: Cluster
metadata:
  name: default-kubernetes-cluster
spec:
  image: kubernetes:v1.19.8
  ssh:
    passwd: xxx
  hosts:
    - ips: [ 192.168.0.2,192.168.0.3,192.168.0.4 ]
      roles: [ master ]
    - ips: [ 192.168.0.5 ]
      roles: [ node ]
```
    56  
```shell script
sealer apply -f Clusterfile
```
    60  
### Override the SSH configuration (e.g. password and port)
    62  
```yaml
apiVersion: sealer.cloud/v2
kind: Cluster
metadata:
  name: default-kubernetes-cluster
spec:
  image: kubernetes:v1.19.8
  ssh:
    passwd: xxx
    port: "2222"
  hosts:
    - ips: [ 192.168.0.2 ] # this master node uses a different SSH port from the others
      roles: [ master ]
      ssh:
        passwd: yyy
        port: "22"
    - ips: [ 192.168.0.3,192.168.0.4 ]
      roles: [ master ]
    - ips: [ 192.168.0.5 ]
      roles: [ node ]
```
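A host-level `ssh` field, when set, takes precedence over the cluster-wide `spec.ssh` default. A tiny bash sketch of that fallback rule, with illustrative names only (not sealer's internals):

```shell
#!/bin/bash
# Cluster-wide default port, and one host that overrides it.
cluster_port="2222"
first_master_port="22"

# effective_port <host-specific value>: use the override if present,
# otherwise fall back to the cluster-wide default.
effective_port() {
  local override=$1
  echo "${override:-$cluster_port}"
}

echo "$(effective_port "$first_master_port")"  # 22
echo "$(effective_port "")"                    # 2222
```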
    84  
### How to set a custom kubeadm configuration

The best way is to add the kubeadm configuration directly to the Clusterfile. Every cluster image ships with its own default configuration; you can define just part of it, and sealer merges your values into the defaults.
    88  
```yaml
### Default configuration:
apiVersion: kubeadm.k8s.io/v1beta2
kind: InitConfiguration
localAPIEndpoint:
  # advertiseAddress: 192.168.2.110
  bindPort: 6443
nodeRegistration:
  criSocket: /var/run/dockershim.sock

---
apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
kubernetesVersion: v1.19.8
controlPlaneEndpoint: "apiserver.cluster.local:6443"
imageRepository: sea.hub:5000/library
networking:
  # dnsDomain: cluster.local
  podSubnet: 100.64.0.0/10
  serviceSubnet: 10.96.0.0/22
apiServer:
  certSANs:
    - 127.0.0.1
    - apiserver.cluster.local
    - 192.168.2.110
    - aliyun-inc.com
    - 10.0.0.2
    - 10.103.97.2
  extraArgs:
    etcd-servers: https://192.168.2.110:2379
    feature-gates: TTLAfterFinished=true,EphemeralContainers=true
    audit-policy-file: "/etc/kubernetes/audit-policy.yml"
    audit-log-path: "/var/log/kubernetes/audit.log"
    audit-log-format: json
    audit-log-maxbackup: '10'
    audit-log-maxsize: '100'
    audit-log-maxage: '7'
    enable-aggregator-routing: 'true'
  extraVolumes:
    - name: "audit"
      hostPath: "/etc/kubernetes"
      mountPath: "/etc/kubernetes"
      pathType: DirectoryOrCreate
    - name: "audit-log"
      hostPath: "/var/log/kubernetes"
      mountPath: "/var/log/kubernetes"
      pathType: DirectoryOrCreate
    - name: localtime
      hostPath: /etc/localtime
      mountPath: /etc/localtime
      readOnly: true
      pathType: File
controllerManager:
  extraArgs:
    feature-gates: TTLAfterFinished=true,EphemeralContainers=true
    experimental-cluster-signing-duration: 876000h
  extraVolumes:
    - hostPath: /etc/localtime
      mountPath: /etc/localtime
      name: localtime
      readOnly: true
      pathType: File
scheduler:
  extraArgs:
    feature-gates: TTLAfterFinished=true,EphemeralContainers=true
  extraVolumes:
    - hostPath: /etc/localtime
      mountPath: /etc/localtime
      name: localtime
      readOnly: true
      pathType: File
etcd:
  local:
    extraArgs:
      listen-metrics-urls: http://0.0.0.0:2381

---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
mode: "ipvs"
ipvs:
  excludeCIDRs:
    - "10.103.97.2/32"

---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
authentication:
  anonymous:
    enabled: false
  webhook:
    cacheTTL: 2m0s
    enabled: true
  x509:
    clientCAFile: /etc/kubernetes/pki/ca.crt
authorization:
  mode: Webhook
  webhook:
    cacheAuthorizedTTL: 5m0s
    cacheUnauthorizedTTL: 30s
cgroupDriver:
cgroupsPerQOS: true
clusterDomain: cluster.local
configMapAndSecretChangeDetectionStrategy: Watch
containerLogMaxFiles: 5
containerLogMaxSize: 10Mi
contentType: application/vnd.kubernetes.protobuf
cpuCFSQuota: true
cpuCFSQuotaPeriod: 100ms
cpuManagerPolicy: none
cpuManagerReconcilePeriod: 10s
enableControllerAttachDetach: true
enableDebuggingHandlers: true
enforceNodeAllocatable:
  - pods
eventBurst: 10
eventRecordQPS: 5
evictionHard:
  imagefs.available: 15%
  memory.available: 100Mi
  nodefs.available: 10%
  nodefs.inodesFree: 5%
evictionPressureTransitionPeriod: 5m0s
failSwapOn: true
fileCheckFrequency: 20s
hairpinMode: promiscuous-bridge
healthzBindAddress: 127.0.0.1
healthzPort: 10248
httpCheckFrequency: 20s
imageGCHighThresholdPercent: 85
imageGCLowThresholdPercent: 80
imageMinimumGCAge: 2m0s
iptablesDropBit: 15
iptablesMasqueradeBit: 14
kubeAPIBurst: 10
kubeAPIQPS: 5
makeIPTablesUtilChains: true
maxOpenFiles: 1000000
maxPods: 110
nodeLeaseDurationSeconds: 40
nodeStatusReportFrequency: 10s
nodeStatusUpdateFrequency: 10s
oomScoreAdj: -999
podPidsLimit: -1
port: 10250
registryBurst: 10
registryPullQPS: 5
rotateCertificates: true
runtimeRequestTimeout: 2m0s
serializeImagePulls: true
staticPodPath: /etc/kubernetes/manifests
streamingConnectionIdleTimeout: 4h0m0s
syncFrequency: 1m0s
volumeStatsAggPeriod: 1m0s
---
apiVersion: kubeadm.k8s.io/v1beta2
kind: JoinConfiguration
caCertPath: /etc/kubernetes/pki/ca.crt
discovery:
  timeout: 5m0s
nodeRegistration:
  criSocket: /var/run/dockershim.sock
controlPlane:
  localAPIEndpoint:
    # advertiseAddress: 192.168.56.7
    bindPort: 6443
```
   256  
A custom kubeadm configuration (unspecified parameters fall back to the defaults):
   258  
```yaml
apiVersion: sealer.cloud/v2
kind: Cluster
metadata:
  name: my-cluster
spec:
  image: kubernetes:v1.19.8
...
---
## a custom configuration must specify its kind
kind: ClusterConfiguration
kubernetesVersion: v1.19.8
networking:
  podSubnet: 101.64.0.0/10
  serviceSubnet: 10.96.0.0/22
---
kind: KubeletConfiguration
authentication:
  webhook:
    cacheTTL: 2m1s
```
   280  
```shell
# initialize the cluster with the custom kubeadm configuration
sealer apply -f Clusterfile
```
   285  
### Use env in configs and scripts

Use env in configs or yaml files:
   289  
```yaml
apiVersion: sealer.cloud/v2
kind: Cluster
metadata:
  name: my-cluster
spec:
  image: kubernetes:v1.19.8
  env:
    - docker_dir=/var/lib/docker
    - ips=192.168.0.1;192.168.0.2;192.168.0.3 #ips=[192.168.0.1 192.168.0.2 192.168.0.3]
  hosts:
    - ips: [ 192.168.0.2 ]
      roles: [ master ]
      env: # per-node env overrides are supported; list values are separated by semicolons
        - docker_dir=/data/docker
        - ips=192.168.0.2;192.168.0.3
    - ips: [ 192.168.0.3 ]
      roles: [ node ]
```
   309  
Use env in the init.sh script:
   311  
```shell script
#!/bin/bash
echo $docker_dir ${ips[@]}
```
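Inside the script, a semicolon-separated env value is usable as an array. A bash sketch of splitting such a value (the splitting code is illustrative; sealer sets the variables for you):

```shell
#!/bin/bash
# Split a semicolon-separated env value into a bash array.
ips_raw="192.168.0.2;192.168.0.3"
IFS=';' read -r -a ips <<< "$ips_raw"

docker_dir=/data/docker
echo "$docker_dir ${ips[@]}"   # /data/docker 192.168.0.2 192.168.0.3
echo "${#ips[@]}"              # 2
```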
   316  
When sealer runs the script, the env is set roughly like: `docker_dir=/data/docker ips=(192.168.0.2;192.168.0.3) && source init.sh`.
In this example, `docker_dir` is `/data/docker` on the master node and `/var/lib/docker` on the worker node.
   319  
### Env rendering support

The [sprig](http://masterminds.github.io/sprig/) template functions are supported.
This example shows how to use env to set the target port of the dashboard service.

dashboard.yaml.tmpl:
   326  
```yaml
...
kind: Service
apiVersion: v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
spec:
  ports:
    - port: 443
      targetPort: {{ .DashBoardPort }}
  selector:
    k8s-app: kubernetes-dashboard
...
```
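To preview what the rendered output would look like, the substitution can be approximated locally with sed. This is only a rough stand-in: sealer's real renderer uses Go templates with sprig functions, not sed.

```shell
#!/bin/bash
# Rough stand-in for template rendering: substitute one variable with sed.
DashBoardPort=8443
tmpl='targetPort: {{ .DashBoardPort }}'
rendered=$(echo "$tmpl" | sed "s/{{ \.DashBoardPort }}/$DashBoardPort/")
echo "$rendered"   # targetPort: 8443
```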
   344  
Write the Kubefile. The yaml must be copied into the `manifests`, `etc`, or `charts` directory; sealer only renders files under those directories.

sealer renders a `filename.yaml.tmpl` file and creates a new file named `filename.yaml`:
   348  
```yaml
FROM kubernetes:1.16.9
COPY dashboard.yaml.tmpl manifests/ # only files under the `manifests`, `etc`, and `charts` directories are rendered
CMD kubectl apply -f manifests/dashboard.yaml
```
   354  
For users, it is enough to specify the cluster environment variable:
   356  
```shell script
sealer run -e DashBoardPort=8443 mydashboard:latest -m xxx -n xxx -p xxx
```
   360  
Or specify the env in the Clusterfile:
   362  
```yaml
apiVersion: sealer.cloud/v2
kind: Cluster
metadata:
  name: my-cluster
spec:
  image: mydashboard:latest
  env:
    - DashBoardPort=8443
  hosts:
    - ips: [ 192.168.0.2 ]
      roles: [ master ] # add role field to specify the node role
    - ips: [ 192.168.0.3 ]
      roles: [ node ]
```
   378  
### Render the Clusterfile with env

```yaml
apiVersion: sealer.cloud/v2
kind: Cluster
metadata:
  name: my-cluster
spec:
  image: kubernetes:v1.19.8
  env:
    - podcidr=100.64.0.0/10
 ...
---
apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
kubernetesVersion: v1.19.8
controlPlaneEndpoint: "apiserver.cluster.local:6443"
imageRepository: sea.hub:5000/library
networking:
  # dnsDomain: cluster.local
  podSubnet: {{ .podcidr }}
  serviceSubnet: 10.96.0.0/22
---
apiVersion: sealer.aliyun.com/v1alpha1
kind: Config
metadata:
  name: calico
spec:
  path: etc/custom-resources.yaml
  data: |
    apiVersion: operator.tigera.io/v1
    kind: Installation
    metadata:
      name: default
    spec:
      # Configures Calico networking.
      calicoNetwork:
        # Note: The ipPools section cannot be modified post-install.
        ipPools:
        - blockSize: 26
          # Note: Must be the same as podCIDR
          cidr: {{ .podcidr }}
```
   422  
::: v-pre
`{{ .podcidr }}` in the kubeadm and calico configuration is replaced with the `podcidr` value from the Clusterfile env.
:::