# Running CockroachDB across multiple Kubernetes clusters

The script and configuration files in this directory enable deploying
CockroachDB across multiple Kubernetes clusters that are spread across different
geographic regions. It deploys a CockroachDB
[StatefulSet](https://kubernetes.io/docs/concepts/workloads/controllers/statefulset/)
into each separate cluster, and links them together using DNS.

To use the configuration provided here, check out this repository (or otherwise
download a copy of this directory), fill in the constants at the top of
[setup.py](setup.py) with the relevant information about your Kubernetes
clusters, optionally make any desired modifications to
[cockroachdb-statefulset-secure.yaml](cockroachdb-statefulset-secure.yaml) as
explained in [our Kubernetes performance tuning
guide](https://www.cockroachlabs.com/docs/stable/kubernetes-performance.html),
then finally run [setup.py](setup.py).

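Filling in those constants requires the `kubectl` context name and the zone or
region of each of your clusters; the exact set of constants is documented at
the top of [setup.py](setup.py) itself. As a rough sketch, with placeholder
context names from a typical GKE setup, the workflow looks like:

```shell
# List the context names of the clusters in your kubeconfig; these are the
# values to copy into the constants at the top of setup.py.
$ kubectl config get-contexts -o name
gke_my-project_us-central1-a_my-cluster-central
gke_my-project_us-east1-b_my-cluster-east

# Once the constants are filled in, run the script from this directory.
$ python setup.py
```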
You should see a lot of output as the script runs, hopefully ending after
printing out `job "cluster-init-secure" created`. That means everything was
created successfully, and you should soon see the CockroachDB cluster
initialized with 3 pods in the "READY" state in each Kubernetes cluster. At
this point you can manage the StatefulSet in each cluster independently if you
so desire, scaling up the number of replicas, changing their resource requests,
or making other modifications as you please.

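Plain `kubectl` works as usual for managing a single cluster as long as you
pass the right context. For example, assuming the StatefulSet keeps the default
`cockroachdb` name from the provided manifest (the context name and output here
are illustrative), checking on and scaling one cluster might look like:

```shell
# Check on the CockroachDB pods in one cluster. Add --namespace if setup.py
# placed them in a non-default namespace in your deployment.
$ kubectl get pods --context=gke_my-project_us-central1-a_my-cluster-central
NAME            READY   STATUS    RESTARTS   AGE
cockroachdb-0   1/1     Running   0          5m
cockroachdb-1   1/1     Running   0          5m
cockroachdb-2   1/1     Running   0          5m

# Scale just that cluster's StatefulSet up to 4 replicas.
$ kubectl scale statefulset cockroachdb --replicas=4 \
    --context=gke_my-project_us-central1-a_my-cluster-central
```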
If anything goes wrong along the way, please let us know via any of the [normal
troubleshooting
channels](https://www.cockroachlabs.com/docs/stable/support-resources.html).
While we believe this creates a highly available, maintainable multi-region
deployment, it is still pushing the boundaries of how Kubernetes is typically
used, so feedback and issue reports are much appreciated.

## Limitations

### Pod-to-pod connectivity

The deployment outlined in this directory relies on pod IP addresses being
routable even across Kubernetes clusters and regions. This achieves optimal
performance, particularly when compared to alternative solutions that route all
packets between clusters through load balancers, but it means that the
deployment won't work in certain environments.

This requirement is satisfied by clusters deployed in cloud environments such
as Google Kubernetes Engine, and can also be satisfied by on-prem environments
depending on the [Kubernetes networking
setup](https://kubernetes.io/docs/concepts/cluster-administration/networking/)
used. If you want to test whether your clusters can communicate this way, you
can run this basic network test:

```shell
$ kubectl run network-test --image=alpine --restart=Never -- sleep 999999
pod "network-test" created
$ kubectl describe pod network-test | grep IP
IP:           THAT-PODS-IP-ADDRESS
$ kubectl config use-context YOUR-OTHER-CLUSTERS-CONTEXT-HERE
$ kubectl run -it network-test --image=alpine --restart=Never -- ping THAT-PODS-IP-ADDRESS
If you don't see a command prompt, try pressing enter.
64 bytes from 10.12.14.10: seq=1 ttl=62 time=0.570 ms
64 bytes from 10.12.14.10: seq=2 ttl=62 time=0.449 ms
64 bytes from 10.12.14.10: seq=3 ttl=62 time=0.635 ms
64 bytes from 10.12.14.10: seq=4 ttl=62 time=0.722 ms
64 bytes from 10.12.14.10: seq=5 ttl=62 time=0.504 ms
...
```

If the pods can directly connect, you should see successful ping output like
the above. If they can't, you won't see any successful ping responses. Make
sure to delete the `network-test` pod in each cluster when you're done!

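For example, using the same placeholder style as the test above, the cleanup
looks like:

```shell
# Delete the test pod in the cluster your kubectl context currently points at,
# then switch contexts and do the same in the other cluster.
$ kubectl delete pod network-test
pod "network-test" deleted
$ kubectl config use-context YOUR-OTHER-CLUSTERS-CONTEXT-HERE
$ kubectl delete pod network-test
pod "network-test" deleted
```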
### Exposing DNS servers to the Internet

As currently configured, the DNS servers from each Kubernetes cluster are
hooked together by exposing them via a load-balanced IP address that's visible
to the public Internet. This is because [Google Cloud Platform's Internal Load
Balancers do not currently support clients in one region using a load balancer
in another region](https://cloud.google.com/compute/docs/load-balancing/internal/#deploying_internal_load_balancing_with_clients_across_vpn_or_interconnect).

None of the services in your Kubernetes cluster will be made accessible, but
their names could leak out to a motivated attacker. If this is unacceptable,
please let us know and we can demonstrate other options. [Your voice could also
help convince Google to allow clients from one region to use an Internal Load
Balancer in another](https://issuetracker.google.com/issues/111021512),
eliminating the problem.

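If you want to see exactly what is exposed, you can list the services in each
cluster's `kube-system` namespace, where the cluster's DNS pods run, and look
for the load-balancer service that [setup.py](setup.py) created. The service
name and addresses below are illustrative; only the `LoadBalancer` entry's
`EXTERNAL-IP` is reachable from outside the cluster.

```shell
$ kubectl get services --namespace=kube-system --context=gke_my-project_us-central1-a_my-cluster-central
NAME          TYPE           CLUSTER-IP     EXTERNAL-IP    PORT(S)         AGE
kube-dns      ClusterIP      10.63.240.10   <none>         53/UDP,53/TCP   1d
kube-dns-lb   LoadBalancer   10.63.245.77   203.0.113.41   53/UDP          10m
```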
## Cleaning up

To remove all the resources created in your clusters by [setup.py](setup.py),
copy the parameters you provided at the top of [setup.py](setup.py) to the top
of [teardown.py](teardown.py) and run [teardown.py](teardown.py).

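For example, assuming Python is on your PATH, the teardown run mirrors the
setup run:

```shell
# With the same constants copied into teardown.py, delete everything the
# setup script created in each cluster.
$ python teardown.py
```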
## More information

For more information on running CockroachDB in Kubernetes, please see the
[README in the parent directory](../README.md).