# CockroachDB on Kubernetes as a StatefulSet

This example deploys [CockroachDB](https://cockroachlabs.com) on Kubernetes as
a StatefulSet. CockroachDB is a distributed, scalable NewSQL database. Please see
[the homepage](https://cockroachlabs.com) and the
[documentation](https://www.cockroachlabs.com/docs/) for details.

## Limitations

### StatefulSet limitations

Standard StatefulSet limitations apply: it is currently not possible to use
node-local storage (outside of single-node tests), so there is likely a
performance hit associated with running CockroachDB on external storage.
Note that CockroachDB already replicates its data, so it is unnecessary to
deploy it onto persistent volumes that also replicate internally.
For this reason, high-performance use cases on a private Kubernetes cluster
may want to consider a DaemonSet deployment until StatefulSets support
node-local storage (see #7562).

### Recovery after persistent storage failure

A persistent storage failure (e.g. losing the hard drive) is gracefully handled
by CockroachDB as long as enough replicas survive (two out of three by
default). Due to the bootstrapping in this deployment, a storage failure of the
first node is special in that the administrator must manually prepopulate the
"new" storage medium by running an instance of CockroachDB with the `--join`
parameter. If this is not done, the first node will bootstrap a new cluster,
which will lead to a lot of trouble.

### Dynamic volume provisioning

The deployment is written for a use case in which dynamic volume provisioning is
available. When that is not the case, the persistent volume claims need
to be created manually. See [minikube.sh](minikube.sh) for the necessary
steps. If you're on GCE or AWS, where dynamic provisioning is supported, no
manual work is needed to create the persistent volumes.

## Testing locally on minikube

Follow the steps in [minikube.sh](minikube.sh) (or simply run that file).

## Testing in the cloud on GCE or AWS

Once you have a Kubernetes cluster running, just run
`kubectl create -f cockroachdb-statefulset.yaml` to create your CockroachDB cluster.
This works because GCE and AWS support dynamic volume provisioning by default,
so persistent volumes will be created for the CockroachDB pods as needed.

## Accessing the database

Along with our StatefulSet configuration, we expose a standard Kubernetes service
that offers a load-balanced virtual IP for clients to access the database
with. In our example, we've called this service `cockroachdb-public`.

Start up a client pod and open up an interactive, (mostly) Postgres-flavored
SQL shell using:

```console
$ kubectl run -it --rm cockroach-client --image=cockroachdb/cockroach --restart=Never --command -- ./cockroach sql --host cockroachdb-public --insecure
```

You can see example SQL statements for inserting and querying data in the
included [demo script](demo.sh), but you can use almost any Postgres-style SQL
commands. Some more basic examples can be found within
[CockroachDB's documentation](https://www.cockroachlabs.com/docs/learn-cockroachdb-sql.html).
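For instance, a quick session in that shell might look like the sketch below. The `bank` database and `accounts` table are purely illustrative names, not resources defined by this example's manifests:

```sql
-- Create an example database and table (illustrative names only).
CREATE DATABASE IF NOT EXISTS bank;
CREATE TABLE IF NOT EXISTS bank.accounts (id INT PRIMARY KEY, balance DECIMAL);

-- Insert a couple of rows and read them back.
INSERT INTO bank.accounts VALUES (1, 1000.50), (2, 250.00);
SELECT * FROM bank.accounts;
```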
## Accessing the admin UI

If you want to see information about how the cluster is doing, you can try
pulling up the CockroachDB admin UI by port-forwarding from your local machine
to one of the pods:

```shell
kubectl port-forward cockroachdb-0 8080
```

Once you've done that, you should be able to access the admin UI by visiting
http://localhost:8080/ in your web browser.

## Simulating failures

When all (or enough) nodes are up, simulate a failure like this:

```shell
kubectl exec cockroachdb-0 -- /bin/bash -c "while true; do kill 1; done"
```

You can then reconnect to the database as demonstrated above and verify
that no data was lost. The example runs with three-fold replication, so
it can tolerate one failure of any given node at a time. Note also that
there is a brief period immediately after the creation of the cluster,
while the three-fold replication is being established, during which
killing a node may lead to unavailability.

The [demo script](demo.sh) gives an example of killing one instance of the
database and ensuring the other replicas have all data that was written.

## Scaling up or down

Scale the StatefulSet by running

```shell
kubectl scale statefulset cockroachdb --replicas=4
```

Note that you may need to create a new persistent volume claim first. If you
ran `minikube.sh`, there's a spare volume so you can immediately scale up by
one. If you're running on GCE or AWS, you can scale up by as many as you want
because new volumes will automatically be created for you. Convince yourself
that the new node immediately serves reads and writes.

## Cleaning up when you're done

Because all of the resources in this example have been tagged with the label `app=cockroachdb`,
we can clean up everything that we created in one quick command using a selector on that label:

```shell
kubectl delete statefulsets,persistentvolumes,persistentvolumeclaims,services,poddisruptionbudget -l app=cockroachdb
```
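If you want to double-check that the cleanup worked, listing the same resource types with the same label selector should report that no resources were found. This is just an optional verification step, not part of the example itself:

```shell
kubectl get statefulsets,persistentvolumes,persistentvolumeclaims,services,poddisruptionbudget -l app=cockroachdb
```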