
# Clusterfile v2 design

## Motivations

Clusterfile v1 does not meet some requirements:

* Different nodes may need different SSH configs, such as passwords.
* It is unclear which arguments belong in the Clusterfile.
* The infra config and the cluster config are coupled.

## Proposal

* Delete the provider field
* Add an env field
* Modify the hosts field; add per-host ssh and env overrides
* Delete all kubeadm configs
```yaml
apiVersion: sealer.io/v2
kind: Cluster
metadata:
  name: my-cluster
spec:
  image: kubernetes:v1.19.8
  env:
    - key1=value1
    - key2=value2
    - key2=value3 # a repeated key merges into a list: key2=[value2, value3]
  ssh:
    passwd:
    pk: xxx
    pkPasswd: xxx
    user: root
    port: "2222"
  hosts:
    - ips: [ 192.168.0.2 ]
      roles: [ master ] # add a role field to specify the node role
      env: # override env for nodes that need a different env config
        - etcd-dir=/data/etcd
      ssh: # override ssh config for nodes with a different passwd...
        user: xxx
        passwd: xxx
        port: "2222"
    - ips: [ 192.168.0.3 ]
      roles: [ node,db ]
```
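
The per-host override rule above (host-level `env` and `ssh` win over the cluster-level values for the same key) can be pictured with a toy shell snippet. This is an illustration, not sealer code; the variable names are hypothetical, and `etcd_dir` is used instead of `etcd-dir` because POSIX shell variable names cannot contain `-`:

```shell
#!/bin/sh
# Toy illustration (not sealer internals) of the override rule:
# a host-level value, when set, replaces the cluster-level one.
cluster_etcd_dir=/var/lib/etcd   # hypothetical cluster-wide default
host_etcd_dir=/data/etcd         # host-level override from the hosts entry
effective=${host_etcd_dir:-$cluster_etcd_dir}
echo "$effective"                # the host value wins when present
```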

## Use cases

### Apply a simple cluster by default

Three masters and one node; clear and simple:

```yaml
apiVersion: sealer.io/v2
kind: Cluster
metadata:
  name: default-kubernetes-cluster
spec:
  image: kubernetes:v1.19.8
  ssh:
    passwd: xxx
  hosts:
    - ips: [ 192.168.0.2,192.168.0.3,192.168.0.4 ]
      roles: [ master ]
    - ips: [ 192.168.0.5 ]
      roles: [ node ]
```

### Overwrite SSH config (for example, password and port)

```yaml
apiVersion: sealer.io/v2
kind: Cluster
metadata:
  name: default-kubernetes-cluster
spec:
  image: kubernetes:v1.19.8
  ssh:
    passwd: xxx
    port: "2222"
  hosts:
    - ips: [ 192.168.0.2 ]
      roles: [ master ]
      ssh:
        passwd: yyy
        port: "22"
    - ips: [ 192.168.0.3,192.168.0.4 ]
      roles: [ master ]
    - ips: [ 192.168.0.5 ]
      roles: [ node ]
```

### How to define your own kubeadm config

The better way is to add the kubeadm config directly into the Clusterfile. Every ClusterImage ships a default config, so you only need to define the parts you want to change; sealer will merge them into the default config:

```yaml
apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
localAPIEndpoint:
  # advertiseAddress: 192.168.2.110
  bindPort: 6443
nodeRegistration:
  criSocket: /var/run/dockershim.sock

---
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
kubernetesVersion: v1.19.8
controlPlaneEndpoint: "apiserver.cluster.local:6443"
imageRepository: sea.hub:5000
networking:
  # dnsDomain: cluster.local
  podSubnet: 100.64.0.0/10
  serviceSubnet: 10.96.0.0/22
apiServer:
  certSANs:
    - 127.0.0.1
    - apiserver.cluster.local
    - 192.168.2.110
    - aliyun-inc.com
    - 10.0.0.2
    - 10.103.97.2
  extraArgs:
    etcd-servers: https://192.168.2.110:2379
    feature-gates: TTLAfterFinished=true,EphemeralContainers=true
    audit-policy-file: "/etc/kubernetes/audit-policy.yml"
    audit-log-path: "/var/log/kubernetes/audit.log"
    audit-log-format: json
    audit-log-maxbackup: '10'
    audit-log-maxsize: '100'
    audit-log-maxage: '7'
    enable-aggregator-routing: 'true'
  extraVolumes:
    - name: "audit"
      hostPath: "/etc/kubernetes"
      mountPath: "/etc/kubernetes"
      pathType: DirectoryOrCreate
    - name: "audit-log"
      hostPath: "/var/log/kubernetes"
      mountPath: "/var/log/kubernetes"
      pathType: DirectoryOrCreate
    - name: localtime
      hostPath: /etc/localtime
      mountPath: /etc/localtime
      readOnly: true
      pathType: File
controllerManager:
  extraArgs:
    feature-gates: TTLAfterFinished=true,EphemeralContainers=true
    experimental-cluster-signing-duration: 876000h
  extraVolumes:
    - hostPath: /etc/localtime
      mountPath: /etc/localtime
      name: localtime
      readOnly: true
      pathType: File
scheduler:
  extraArgs:
    feature-gates: TTLAfterFinished=true,EphemeralContainers=true
  extraVolumes:
    - hostPath: /etc/localtime
      mountPath: /etc/localtime
      name: localtime
      readOnly: true
      pathType: File
etcd:
  local:
    extraArgs:
      listen-metrics-urls: http://0.0.0.0:2381

---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
mode: "ipvs"
ipvs:
  excludeCIDRs:
    - "10.103.97.2/32"

---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
authentication:
  anonymous:
    enabled: false
  webhook:
    cacheTTL: 2m0s
    enabled: true
  x509:
    clientCAFile: /etc/kubernetes/pki/ca.crt
authorization:
  mode: Webhook
  webhook:
    cacheAuthorizedTTL: 5m0s
    cacheUnauthorizedTTL: 30s
cgroupDriver:
cgroupsPerQOS: true
clusterDomain: cluster.local
configMapAndSecretChangeDetectionStrategy: Watch
containerLogMaxFiles: 5
containerLogMaxSize: 10Mi
contentType: application/vnd.kubernetes.protobuf
cpuCFSQuota: true
cpuCFSQuotaPeriod: 100ms
cpuManagerPolicy: none
cpuManagerReconcilePeriod: 10s
enableControllerAttachDetach: true
enableDebuggingHandlers: true
enforceNodeAllocatable:
  - pods
eventBurst: 10
eventRecordQPS: 5
evictionHard:
  imagefs.available: 15%
  memory.available: 100Mi
  nodefs.available: 10%
  nodefs.inodesFree: 5%
evictionPressureTransitionPeriod: 5m0s
failSwapOn: true
fileCheckFrequency: 20s
hairpinMode: promiscuous-bridge
healthzBindAddress: 127.0.0.1
healthzPort: 10248
httpCheckFrequency: 20s
imageGCHighThresholdPercent: 85
imageGCLowThresholdPercent: 80
imageMinimumGCAge: 2m0s
iptablesDropBit: 15
iptablesMasqueradeBit: 14
kubeAPIBurst: 10
kubeAPIQPS: 5
makeIPTablesUtilChains: true
maxOpenFiles: 1000000
maxPods: 110
nodeLeaseDurationSeconds: 40
nodeStatusReportFrequency: 10s
nodeStatusUpdateFrequency: 10s
oomScoreAdj: -999
podPidsLimit: -1
port: 10250
registryBurst: 10
registryPullQPS: 5
rotateCertificates: true
runtimeRequestTimeout: 2m0s
serializeImagePulls: true
staticPodPath: /etc/kubernetes/manifests
streamingConnectionIdleTimeout: 4h0m0s
syncFrequency: 1m0s
volumeStatsAggPeriod: 1m0s
---
apiVersion: kubeadm.k8s.io/v1beta3
kind: JoinConfiguration
caCertPath: /etc/kubernetes/pki/ca.crt
discovery:
  timeout: 5m0s
nodeRegistration:
  criSocket: /var/run/dockershim.sock
controlPlane:
  localAPIEndpoint:
    # advertiseAddress: 192.168.56.7
    bindPort: 6443
```

### Using KubeadmConfig to overwrite kubeadm configs

If you don't want to deal with so many kubeadm configs, you can use the `KubeadmConfig` object to overwrite (JSON merge patch) some fields.

```yaml
apiVersion: sealer.io/v2
kind: KubeadmConfig
metadata:
  name: default-kubernetes-config
spec:
  localAPIEndpoint:
    advertiseAddress: 192.168.2.110
    bindPort: 6443
  nodeRegistration:
    criSocket: /var/run/dockershim.sock
  kubernetesVersion: v1.19.8
  controlPlaneEndpoint: "apiserver.cluster.local:6443"
  imageRepository: sea.hub:5000
  networking:
    podSubnet: 100.64.0.0/10
    serviceSubnet: 10.96.0.0/22
  apiServer:
    certSANs:
      - sealer.cloud
      - 127.0.0.1
  clusterDomain: cluster.local
```
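
The field-level overwrite can be pictured as "user value wins, defaults fill the gaps". A toy shell sketch of that rule (not sealer internals; the variable names and the default values are illustrative):

```shell
#!/bin/sh
# Toy sketch of the merge rule: a field set in KubeadmConfig overrides
# the image default; an unset field falls back to the default.
default_podSubnet=10.244.0.0/16     # hypothetical image default
user_podSubnet=100.64.0.0/10        # set by the user -> wins
user_serviceSubnet=                 # left unset -> default kept
effective_podSubnet=${user_podSubnet:-$default_podSubnet}
effective_serviceSubnet=${user_serviceSubnet:-10.96.0.0/22}
echo "$effective_podSubnet $effective_serviceSubnet"
```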

### Using ENV in configs and scripts

To use ENV in configs or YAML files, [check this](https://github.com/sealerio/sealer/blob/main/docs/design/global-config.md#global-configuration).

```yaml
apiVersion: sealer.io/v2
kind: Cluster
metadata:
  name: my-cluster
spec:
  image: kubernetes:v1.19.8
  env:
    docker-dir: /var/lib/docker
  hosts:
    - ips: [ 192.168.0.2 ]
      roles: [ master ] # add a role field to specify the node role
      env: # override env for nodes that need a different env config
        docker-dir: /data/docker
    - ips: [ 192.168.0.3 ]
      roles: [ node ]
```

Using ENV in the init.sh script:

```shell
#!/bin/bash
echo $docker-dir
```

When sealer runs the script, it sets the ENV first, like this: `docker-dir=/data/docker && sh init.sh`.
In this case, the master's ENV is `/data/docker`, while the node keeps the default `/var/lib/docker`.
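
One caveat worth noting: POSIX shell variable names cannot contain `-`, so `$docker-dir` in a script actually expands `$docker` followed by the literal text `-dir`. Below is a runnable sketch of the prefix-style invocation, using an underscore key (`docker_dir`) as an illustrative stand-in and a hypothetical script path:

```shell
#!/bin/sh
# Write a stand-in init script, then invoke it with the env value
# prefixed, mirroring `docker-dir=/data/docker && sh init.sh`.
cat > /tmp/sealer-env-demo.sh <<'EOF'
#!/bin/sh
echo "$docker_dir"
EOF
docker_dir=/data/docker sh /tmp/sealer-env-demo.sh   # prints /data/docker
```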
   325  
### How to use cloud infra

If you're using a public cloud, you don't need to configure the IP field in the Cluster object. The Infra object tells sealer to
request resources from the public cloud and then render the IP list into the Cluster object.

```yaml
apiVersion: sealer.io/v2
kind: Cluster
metadata:
  name: default-kubernetes-cluster
spec:
  image: kubernetes:v1.19.8
---
apiVersion: sealer.io/v2
kind: Infra
metadata:
  name: alicloud
spec:
  provider: ALI_CLOUD
  ssh:
    passwd: xxx
    port: "2222"
  hosts:
    - count: 3
      role: [ master ]
      cpu: 4
      memory: 4
      systemDisk: 100
      dataDisk: [ 100,200 ]
    - count: 3
      role: [ node ]
      cpu: 4
      memory: 4
      systemDisk: 100
      dataDisk: [ 100, 200 ]
```

After `sealer apply -f Clusterfile`, the Cluster object will be updated:

```yaml
apiVersion: sealer.io/v2
kind: Cluster
metadata:
  name: default-kubernetes-cluster
spec:
  image: kubernetes:v1.19.8
  ssh:
    passwd: xxx
    port: "2222"
  hosts:
    - ips: [ 192.168.0.3 ]
      roles: [ master ]
...
```

### Env render support

[Env render](https://github.com/sealerio/sealer/blob/main/docs/design/global-config.md#global-configuration)