
# skuba

Tool to manage the full lifecycle of a cluster.

## Table of Contents

- [Prerequisites](#prerequisites)
- [Installation](#installation)
  * [Development](#development)
  * [Staging](#staging)
  * [Release](#release)
- [Creating a cluster](#creating-a-cluster)
  * [cluster init](#cluster-init)
  * [node bootstrap](#node-bootstrap)
- [Growing a cluster](#growing-a-cluster)
  * [node join](#node-join)
    + [master node join](#master-node-join)
    + [worker node join](#worker-node-join)
- [Shrinking a cluster](#shrinking-a-cluster)
  * [node remove](#node-remove)
- [kubectl-caasp](#kubectl-caasp)
- [Demo](#demo)
- [CI](ci/README.md)
- [Update](skuba-update/README.md)

## Prerequisites

The infrastructure required for deploying CaaSP needs to exist beforehand, and
you need SSH access to these machines from the machine you are running `skuba`
on. `skuba` also requires your SSH keys to be added to the SSH agent on this
machine, e.g.:

```sh
ssh-add ~/.ssh/id_rsa
```

The system running `skuba` must have `kubectl` available.
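
A quick sanity check on the machine that will run `skuba` can confirm both
prerequisites, for example:

```sh
# The SSH agent should list at least one identity
ssh-add -l

# kubectl should be installed and on the PATH
kubectl version --client
```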

## Installation

```sh
go get github.com/SUSE/skuba/cmd/skuba
```
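
Assuming a default Go setup, the resulting binary is placed in
`$(go env GOPATH)/bin`; make sure that directory is on your `PATH` so the
`skuba` command can be found:

```sh
# Hypothetical PATH tweak for a default GOPATH layout
export PATH="$(go env GOPATH)/bin:$PATH"
skuba version
```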

### Development

A development build will:

* Pull container images from `registry.suse.de/devel/caasp/4.0/containers/containers/caasp/v4`

To build it, run:

```sh
make
```

### Staging

A staging build will:

* Pull container images from `registry.suse.de/suse/sle-15-sp1/update/products/casp40/containers/caasp/v4`

To build it, run:

```sh
make staging
```

### Release

A release build will:

* Pull container images from `registry.suse.com/caasp/v4`

To build it, run:

```sh
make release
```

## Creating a cluster

Go to any directory on your machine, e.g. `~/clusters`. From there, execute:

### cluster init

The `init` process creates the definition of your cluster. In the general case
there should be nothing to tweak, but you can go through the generated
configuration and check that everything suits your needs.

```
skuba cluster init --control-plane load-balancer.example.com company-cluster
```

This command generates a basic project scaffold in the `company-cluster`
folder. You need to change into this new folder in order to run the rest
of the commands in this README.
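
For example, to switch into the new folder and take a look at the generated files:

```sh
cd company-cluster
ls    # review the generated configuration before bootstrapping
```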

### node bootstrap

You need to bootstrap the first master node of the cluster. For this purpose
you have to be inside the `company-cluster` folder.

```
skuba node bootstrap --user opensuse --sudo --target <IP/fqdn> my-master
```

You can check `skuba node bootstrap --help` for further options, but the
previous command means:

* Bootstrap the node using an SSH connection to target `<IP/fqdn>`
  * Use the `opensuse` user when opening the SSH session
  * Use `sudo` when executing commands inside the machine
* Name the node `my-master`: this is what Kubernetes will use to refer to your node

When this command has finished, some secrets will have been copied to your
`company-cluster` folder. Namely:

* Generated secrets will be copied inside the `pki` folder
* The administrative `admin.conf` file of the cluster will be copied to the
  root of the `company-cluster` folder
  * The `company-cluster/admin.conf` file is the `kubeconfig` configuration
    required by `kubectl` and other command line tools
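
You can use this kubeconfig right away to talk to the new cluster, for example:

```sh
# Use the generated kubeconfig to inspect the cluster
kubectl --kubeconfig=admin.conf get nodes
```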

## Growing a cluster

### node join

Joining a node allows you to grow your Kubernetes cluster. You can join master nodes as
well as worker nodes to your existing cluster. For this purpose you have to be inside the
`company-cluster` folder.

This task will automatically create a new bootstrap token on the existing cluster that will
be used for the kubelet TLS bootstrap on the new node. The token will be fed
automatically to the configuration used to join the new node.

This task will also create a configuration file inside the `kubeadm-join.conf.d` folder,
named `<IP/fqdn>.conf`, containing the join configuration used (see the example after this
list). If this file already exists it will be honored, with only a small subset of settings
overridden automatically:

* Bootstrap token to the one generated on demand
* Kubelet extra args
  * `node-ip` if the `--target` is an IP address
  * `hostname-override` to the `node-name` provided as an argument
  * `cni-bin-dir` directory location if required
* Node registration name to the `node-name` provided as an argument
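
For instance, after joining a node you can inspect the configuration that was used for it
(substituting the actual target for the placeholder):

```sh
# The per-node join configuration is kept under kubeadm-join.conf.d
cat kubeadm-join.conf.d/<IP/fqdn>.conf
```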

#### master node join

This command will join a new master node to the cluster. This will also increase the etcd
member count by one.

```
skuba node join --role master --user opensuse --sudo --target <IP/fqdn> second-master
```

#### worker node join

This command will join a new worker node to the cluster.

```
skuba node join --role worker --user opensuse --sudo --target <IP/fqdn> my-worker
```

## Shrinking a cluster

### node remove

It's possible to remove master and worker nodes from the cluster. All the tasks required to
remove the target node will be performed automatically:

* Drain the node (also cordoning it)
* Mask and disable the kubelet service
* If it's a master node:
  * Remove persisted information
    * etcd store
    * PKI secrets
  * Remove the etcd member from the etcd cluster
  * Remove the endpoint from the `kubeadm-config` config map
* Remove the node from the cluster

To remove a node, you only need to provide the name by which it is known to Kubernetes:

```
skuba node remove my-worker
```

Or, if you want to remove a master node:

```
skuba node remove second-master
```
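
Afterwards, you can confirm from the `company-cluster` folder that the node is no longer
part of the cluster:

```sh
# The removed node should no longer be listed
kubectl --kubeconfig=admin.conf get nodes
```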

## kubectl-caasp

This project also comes with a kubectl plugin that has the same layout as `skuba`. You can
call the same commands presented in `skuba` as `kubectl caasp` once the `kubectl-caasp`
binary is installed in your path.
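
kubectl discovers plugins by looking for executables named `kubectl-<name>` on the `PATH`,
so a minimal installation sketch (assuming the plugin binary was built alongside `skuba`
and that `/usr/local/bin` is on your `PATH`) could be:

```sh
# Copy the plugin binary somewhere on the PATH
sudo cp kubectl-caasp /usr/local/bin/

# kubectl should now report it as an available plugin
kubectl plugin list
```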

The purpose of the tool is to provide a quick way to see if nodes have pending
upgrades.

```
$ kubectl caasp cluster status
NAME      STATUS   ROLE     OS-IMAGE                              KERNEL-VERSION           KUBELET-VERSION   CONTAINER-RUNTIME   HAS-UPDATES   HAS-DISRUPTIVE-UPDATES   CAASP-RELEASE-VERSION
master0   Ready    master   SUSE Linux Enterprise Server 15 SP1   4.12.14-197.29-default   v1.16.2           cri-o://1.16.0      no            no                       4.1.0
master1   Ready    master   SUSE Linux Enterprise Server 15 SP1   4.12.14-197.29-default   v1.16.2           cri-o://1.16.0      no            no                       4.1.0
master2   Ready    master   SUSE Linux Enterprise Server 15 SP1   4.12.14-197.29-default   v1.16.2           cri-o://1.16.0      no            no                       4.1.0
worker0   Ready    <none>   SUSE Linux Enterprise Server 15 SP1   4.12.14-197.29-default   v1.16.2           cri-o://1.16.0      no            no                       4.1.0
worker1   Ready    <none>   SUSE Linux Enterprise Server 15 SP1   4.12.14-197.29-default   v1.16.2           cri-o://1.16.0      no            no                       4.1.0
worker2   Ready    <none>   SUSE Linux Enterprise Server 15 SP1   4.12.14-197.29-default   v1.16.2           cri-o://1.16.0      no            no                       4.1.0
```

## Demo

This is a quick screencast showing how easy it is to deploy a multi-master cluster
on top of AWS. The procedure is the same as the deployment on OpenStack or on
libvirt.

The deployment is done on AWS via the Terraform files shared inside the `infra`
repository.

Videos:

  * [infrastructure creation](https://asciinema.org/a/wy9bqNjzszRN030sUIGM7f9j6)
  * [cluster creation](https://asciinema.org/a/PjblNTwwx0Z7ujyQPEu8SNHgF)

The videos are uncut; as you will see, the whole deployment takes around 7 minutes:
4 minutes for the infrastructure, 3 minutes for the actual cluster.

The demo uses a small script to automate the sequential invocations of `skuba`.
Anything can be used to do that, including bash.
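
As an illustration only, such a wrapper could look roughly like the following sketch, which
simply chains the commands from this README (the script used in the demo may differ):

```sh
#!/bin/sh
# Hypothetical wrapper chaining the skuba invocations from this README.
set -e

skuba cluster init --control-plane load-balancer.example.com company-cluster
cd company-cluster

skuba node bootstrap --user opensuse --sudo --target <IP/fqdn> my-master
skuba node join --role master --user opensuse --sudo --target <IP/fqdn> second-master
skuba node join --role worker --user opensuse --sudo --target <IP/fqdn> my-worker
```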