## Kubernetes multi node tests

This directory contains the necessary files to set up a 2-node kubernetes
cluster.

The directory structure is composed as follows:

- `cluster/` - files that contain the kubernetes configurations
  - `certs/` - certificates used by the kubernetes components and by etcd; the
    files are already generated so there is no need to regenerate them.
  - `cilium/` - cilium daemon sets adjusted to this cluster, with a daemon set
    for the loadbalancer mode. The files are generated on the fly based on the
    `*.sed` files present.
  - `cluster-manager.bash` - the script in charge of generating the
    certificates and the kubernetes and cilium files. It is also in charge of
    setting up and deploying a full kubernetes cluster with etcd running. This
    file has several configurable options, such as the etcd and k8s versions.
- `tests/` - the directory where the tests should be stored
  - `deployments/` - yaml files to be managed for each runtime test.
  - `ipv4/` - tests that are designed to be run only in IPv4 mode.
  - `ipv6/` - tests that are designed to be run only in IPv6 mode.
  - `00-setup-kubedns.sh` - script that makes sure kube-dns is up and running.
  - `xx-test-name.sh` - all tests with this format will be run in both IPv4
    and IPv6 mode.
- `run-tests.bash` - in charge of running the runtime tests, setting up the
  IPv6 environment for the cluster, and running the runtime tests in IPv6.

### Cluster architecture

Running `vagrant up` will start 2 VMs: `k8s1` and `k8s2`.

#### `k8s1`

`k8s1` will contain the etcd server, kube-apiserver,
kube-controller-manager, kube-scheduler and a kubelet instance. All kubernetes
components are spawned by kubeadm.

All components will be running in containers **except** kubelet and etcd.
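To see which components ended up where, a quick check can be run from the host. This is a minimal sketch, not part of the existing scripts: the `kubelet`/`etcd` systemd unit names and docker as the container runtime are assumptions, not verified against the setup scripts. With `DRY_RUN=1` (the default here) the commands are only printed so the sketch can be inspected safely:

```shell
#!/usr/bin/env bash
# Sketch: inspect k8s1 to confirm kubelet and etcd run on the host while the
# other control-plane components run in containers.
# ASSUMPTIONS: the "kubelet"/"etcd" systemd unit names and docker as the
# container runtime are guesses; adjust to the actual provisioning.
set -euo pipefail

checks=(
  'systemctl is-active kubelet etcd'       # expected: host services
  'sudo docker ps --format "{{.Names}}"'   # expected: containerized components
)

for check in "${checks[@]}"; do
  if [ "${DRY_RUN:-1}" = "1" ]; then
    echo "vagrant ssh k8s1 -- -t '${check}'"   # print only, do not execute
  else
    vagrant ssh k8s1 -- -t "${check}"
  fi
done
```

Set `DRY_RUN=0` on a host where the VMs are actually up to execute the checks.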
This node will have 3 static IPs and 2 interfaces:

`enp0s8`: `192.168.36.11/24` and `fd01::b/16`

`enp0s9`: `192.168.37.11/24`

#### `k8s2`

`k8s2` will only run a kubelet instance.

This node will also have 3 static IPs and 2 interfaces:

`enp0s8`: `192.168.36.12/24` and `fd01::c/16`

`enp0s9`: `192.168.37.12/24`

### Switching between IPv4 and IPv6

After running `vagrant up`, kubernetes and etcd will be running with TLS set
up. Note that cilium **will not be set up**.

Kubernetes runs in IPv4 mode by default. To run in IPv6 mode, after the
machines are set up and running, run:

```
vagrant ssh ${vm} -- -t '/home/vagrant/go/src/github.com/cilium/cilium/tests/k8s/cluster/cluster-manager.bash reinstall --ipv6 --yes-delete-all-etcd-data'
vagrant ssh ${vm} -- -t 'sudo cp -R /root/.kube /home/vagrant'
vagrant ssh ${vm} -- -t 'sudo chown vagrant.vagrant -R /home/vagrant/.kube'
```

where `${vm}` should be replaced with `k8s1` and `k8s2`.

This will reset the kubernetes cluster to its initial state.

To revert back to IPv4, run the same commands again without the `--ipv6`
option on the first command.

### Deploying cilium

To deploy cilium after kubernetes is set up, simply run:

```
vagrant ssh k8s2 -- -t '/home/vagrant/go/src/github.com/cilium/cilium/tests/k8s/cluster/cluster-manager.bash deploy_cilium'
```

This command only needs to be executed on one of the nodes; since Cilium is
deployed as a DaemonSet, Kubernetes will deploy it on each node accordingly.

Cilium will also connect to etcd and kubernetes using TLS.

#### Loadbalancer mode (kubernetes ingress)

**Warning: the setup scripts were not tested with this mode**

In loadbalancer mode, `k8s1` will run a daemon set designed for this purpose,
with `--lb` and `--snoop-device` set to `enp0s8`.
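The IPv4/IPv6 switch described earlier has to be applied to each VM in turn. A small wrapper loop can do this; the sketch below is a convenience helper that is not part of the existing scripts. It takes the mode flag as an optional argument (`--ipv6`, or nothing for IPv4), and with `DRY_RUN=1` (the default here) it only prints the commands instead of executing them:

```shell
#!/usr/bin/env bash
# Sketch: run the per-VM IPv4/IPv6 switch commands on both nodes in sequence.
# Usage: ./switch-mode.sh --ipv6   (or no argument to go back to IPv4)
# NOTE: this wrapper is illustrative and not part of the repository's scripts.
set -euo pipefail

DRY_RUN="${DRY_RUN:-1}"   # default: only print the commands
MODE_FLAG="${1:-}"        # e.g. --ipv6, or empty for IPv4

for vm in k8s1 k8s2; do
  cmds=(
    "/home/vagrant/go/src/github.com/cilium/cilium/tests/k8s/cluster/cluster-manager.bash reinstall ${MODE_FLAG} --yes-delete-all-etcd-data"
    "sudo cp -R /root/.kube /home/vagrant"
    "sudo chown vagrant.vagrant -R /home/vagrant/.kube"
  )
  for c in "${cmds[@]}"; do
    if [ "${DRY_RUN}" = "1" ]; then
      echo "vagrant ssh ${vm} -- -t '${c}'"   # print only, do not execute
    else
      vagrant ssh "${vm}" -- -t "${c}"
    fi
  done
done
```

Set `DRY_RUN=0` on a host where both VMs are up to actually perform the switch; note that, as stated above, this wipes all etcd data and resets the cluster.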