# Sharing Clusters

This example demonstrates how to access one kubernetes cluster from another. It only works if both clusters are running on the same network, on a cloud provider that provides a private ip range per network (e.g. GCE, GKE, AWS).

## Setup

Create a cluster in the US (you don't need to do this if you already have a running kubernetes cluster):

```shell
$ cluster/kube-up.sh
```

Before creating our second cluster, let's have a look at the kubectl config:

```yaml
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: REDACTED
    server: https://104.197.84.16
  name: <clustername_us>
...
current-context: <clustername_us>
...
```

Now spin up the second cluster in Europe:

```shell
$ KUBE_GCE_ZONE=europe-west1-b KUBE_GCE_INSTANCE_PREFIX=eu ./cluster/kube-up.sh
```

Your kubectl config should contain both clusters:

```yaml
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: REDACTED
    server: https://146.148.25.221
  name: <clustername_eu>
- cluster:
    certificate-authority-data: REDACTED
    server: https://104.197.84.16
  name: <clustername_us>
...
current-context: <clustername_eu>
...
```

And `kubectl get nodes` should agree:

```
$ kubectl get nodes
NAME           LABELS                                STATUS
eu-node-0n61   kubernetes.io/hostname=eu-node-0n61   Ready
eu-node-79ua   kubernetes.io/hostname=eu-node-79ua   Ready
eu-node-7wz7   kubernetes.io/hostname=eu-node-7wz7   Ready
eu-node-loh2   kubernetes.io/hostname=eu-node-loh2   Ready

$ kubectl config use-context <clustername_us>
$ kubectl get nodes
NAME                   LABELS                                        STATUS
kubernetes-node-5jtd   kubernetes.io/hostname=kubernetes-node-5jtd   Ready
kubernetes-node-lqfc   kubernetes.io/hostname=kubernetes-node-lqfc   Ready
kubernetes-node-sjra   kubernetes.io/hostname=kubernetes-node-sjra   Ready
kubernetes-node-wul8   kubernetes.io/hostname=kubernetes-node-wul8   Ready
```

## Testing reachability

For this test to work we'll need to create a service in Europe (`/tmp/secret.json` here is the TLS secret expected by the https-nginx example):

```
$ kubectl config use-context <clustername_eu>
$ kubectl create -f /tmp/secret.json
$ kubectl create -f examples/https-nginx/nginx-app.yaml
$ kubectl exec -it my-nginx-luiln -- sh -c 'echo "Europe nginx" > /usr/share/nginx/html/index.html'
$ kubectl get ep
NAME         ENDPOINTS
kubernetes   10.240.249.92:443
nginxsvc     10.244.0.4:80,10.244.0.4:443
```

Note the `sh -c` around the `echo`: without it, the redirection would be interpreted by your local shell instead of running inside the pod.

Just to test reachability, we'll try hitting the Europe nginx from our initial US central cluster.
Create a basic curl pod in the US cluster:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: curlpod
spec:
  containers:
  - image: radial/busyboxplus:curl
    command:
    - sleep
    - "360000000"
    imagePullPolicy: IfNotPresent
    name: curlcontainer
  restartPolicy: Always
```

And test that you can actually reach the test nginx service across continents (after saving the manifest above as, say, `curlpod.yaml`):

```
$ kubectl config use-context <clustername_us>
$ kubectl create -f curlpod.yaml
$ kubectl exec -it curlpod -- /bin/sh
[ root@curlpod:/ ]$ curl http://10.244.0.4:80
Europe nginx
```

## Granting access to the remote cluster

We will grant the US cluster access to the Europe cluster. Basically, we're going to set up a secret that allows kubectl to function in a pod running in the US cluster, just like it did on our local machine in the previous step. First create a secret with the contents of the current `.kube/config`:

```shell
$ kubectl config use-context <clustername_eu>
$ go run ./make_secret.go --kubeconfig=$HOME/.kube/config > /tmp/secret.json
$ kubectl config use-context <clustername_us>
$ kubectl create -f /tmp/secret.json
```

Create a kubectl pod that uses the secret, in the US cluster:
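The output of `make_secret.go` isn't shown here. As a rough illustration only (not the actual tool, which may differ in detail), the idea is to base64-encode the kubeconfig into a v1 Secret named `kubeconfig` under the key `config`, which the pod below then mounts as `/.kube/config`:

```shell
# Illustration of what make_secret.go produces: wrap the current
# kubeconfig in a v1 Secret manifest. Not the real implementation.
KUBECONFIG_FILE="${KUBECONFIG_FILE:-$HOME/.kube/config}"
CONFIG_B64=$(base64 < "$KUBECONFIG_FILE" 2>/dev/null | tr -d '\n')
cat <<EOF > /tmp/secret.json
{
  "kind": "Secret",
  "apiVersion": "v1",
  "metadata": { "name": "kubeconfig" },
  "data": { "config": "$CONFIG_B64" }
}
EOF
```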
```json
{
  "kind": "Pod",
  "apiVersion": "v1",
  "metadata": {
    "name": "kubectl-tester"
  },
  "spec": {
    "volumes": [
      {
        "name": "secret-volume",
        "secret": {
          "secretName": "kubeconfig"
        }
      }
    ],
    "containers": [
      {
        "name": "kubectl",
        "image": "bprashanth/kubectl:0.0",
        "imagePullPolicy": "Always",
        "env": [
          {
            "name": "KUBECONFIG",
            "value": "/.kube/config"
          }
        ],
        "args": [
          "proxy", "-p", "8001"
        ],
        "volumeMounts": [
          {
            "name": "secret-volume",
            "mountPath": "/.kube"
          }
        ]
      }
    ]
  }
}
```

And check that you can access the remote cluster:

```shell
$ kubectl config use-context <clustername_us>
$ kubectl exec -it kubectl-tester -- bash

kubectl-tester $ kubectl get nodes
NAME           LABELS                                STATUS
eu-node-0n61   kubernetes.io/hostname=eu-node-0n61   Ready
eu-node-79ua   kubernetes.io/hostname=eu-node-79ua   Ready
eu-node-7wz7   kubernetes.io/hostname=eu-node-7wz7   Ready
eu-node-loh2   kubernetes.io/hostname=eu-node-loh2   Ready
```

The secret is mounted at `/.kube` and `KUBECONFIG` points at `/.kube/config`, so kubectl inside this US pod talks to the Europe apiserver. Since the container also runs `kubectl proxy -p 8001`, the remote API is additionally reachable over plain HTTP on `localhost:8001` from within the pod.

For a more advanced example of sharing clusters, see the [service-loadbalancer](https://github.com/kubernetes/contrib/tree/master/service-loadbalancer/README.md).