# Persistent Storage

## Background

A container is, at its heart, just a process. When that process ends, whatever state it was managing ends along with it.

To be stateful, a container must write to a disk, and for that state to have any real value, the disk must be something that another container can find and access again later, should the original process die.

This is obviously a challenge with a container cluster, where containers are starting and stopping and scaling all over the place.

Kubernetes offers three main techniques to tie stateful disks to containers:

* Pod **[Volumes](http://kubernetes.io/docs/user-guide/volumes/)**, in which a specific network share is created and described directly in the specification of a Pod.
* Statically provisioned **[Persistent Volumes](http://kubernetes.io/docs/user-guide/persistent-volumes/)**, in which a specific network share is created and described generically to Kubernetes. Specific volumes are "claimed" by cluster users for use within a single Kubernetes namespace under a given name. Any pod within that namespace may consume a claimed PV by its name.
* Dynamically provisioned **[Persistent Volumes](http://kubernetes.io/docs/user-guide/persistent-volumes/)**, in which Kubernetes interfaces with a storage layer, building new network shares on demand.

To use Volumes, you need to have provisioned your storage with a specific use case in mind and be willing to push the storage details along with your pod spec. Since the implementation of storage is tied to the pod spec, the spec is less portable -- you can't run the same spec on another cluster.
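
For example, a minimal sketch of a Pod that mounts an NFS share directly in its spec (the server name and export path are placeholders, not values from a real cluster):

```
kind: Pod
apiVersion: v1
metadata:
  name: web-with-inline-volume
spec:
  containers:
    - name: web
      image: nginx
      volumeMounts:
        - mountPath: "/var/www/html"
          name: html
  volumes:
    - name: html
      # The NFS server and export path are baked into the pod spec,
      # which is why this spec is not portable across clusters.
      nfs:
        server: nfs01.example.com   # placeholder file server
        path: /exports/html
```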

To use statically provisioned PersistentVolumes, you still need to provision your own storage; however, this can be done more generally. For example, a storage engineer might create a big batch of 100GB shares with various performance characteristics, and engineers on various teams could make claims against these PVs without having to relay their specific needs to the storage team ahead of time.
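
A statically provisioned PV of this kind might be described to Kubernetes roughly as follows (the server, export path, and size here are illustrative):

```
kind: PersistentVolume
apiVersion: v1
metadata:
  name: batch-pv-001
spec:
  capacity:
    storage: 100Gi
  accessModes:
    - ReadWriteMany
  persistentVolumeReclaimPolicy: Retain
  # The share itself was provisioned ahead of time by the storage team;
  # Kubernetes only needs to know how to reach it.
  nfs:
    server: nfs01.example.com   # placeholder file server
    path: /exports/batch-pv-001
```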

To use dynamically provisioned PersistentVolumes, you need to grant Kubernetes permission to make new storage volumes on your behalf. Any new claims that come in and aren't covered by a statically provisioned PV will result in a new volume being created for that claim. This is likely the ideal solution for the (functionally) infinite storage available in a public cloud -- however, for most private clouds, unbounded on-demand resource allocation is effectively a run on the bank. If you have a very large amount of available storage, or very small claims, a provisioner combined with a Resource Quota should serve your needs. Those with less elastic storage growth limits will likely have more success with occasional static provisioning, and Kismatic is currently focused on making this easier.
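
As a rough sketch of that combination, a StorageClass tells Kubernetes how to build volumes on demand while a ResourceQuota caps what a namespace may claim. The provisioner and namespace below are examples, not something Kismatic configures for you:

```
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: standard
provisioner: kubernetes.io/aws-ebs    # example provisioner; depends on your storage layer
parameters:
  type: gp2
---
kind: ResourceQuota
apiVersion: v1
metadata:
  name: storage-quota
  namespace: team-a                   # placeholder namespace
spec:
  hard:
    # Cap total requested storage and number of claims so that on-demand
    # provisioning cannot become a run on the bank.
    requests.storage: 500Gi
    persistentvolumeclaims: "20"
```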

## Storage options for Kismatic-managed features

There are currently three storage options with Kismatic:

1. **None**. In this case, you won't be able to responsibly use any stateful features of Kismatic.
  * New shares may be added after creation using kubectl
2. **Bring-your-own NFS shares**. When building a Kubernetes cluster with Kismatic, you will first provision one or more NFS shares on an off-cluster file server or SAN, open access to these shares from the Kubernetes network, and provide their details.
  * New shares may be added after creation using kubectl (see the example manifest after this list)
  * Only multi-reader, multi-writer NFSv3 volumes are supported
3. **Kismatic manages a storage cluster**. When building a Kubernetes cluster with Kismatic, you will identify machines that will be used as part of a storage cluster. These may be dedicated to the task of storage or may duplicate other cluster roles, such as Worker and Ingress.
  * Kismatic will automatically create and claim one replicated NFS share of 10 GB the first time a stateful feature is included on a storage cluster. Future features will use this same volume.
  * The addition of future shares on this storage cluster is left up to cluster operators. A single command, such as `kismatic volume add 10 storage01`, can be used to provision a new storage volume and also add that volume to Kubernetes as an unclaimed PersistentVolume.
  * The storage cluster will be set up using GlusterFS
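
For the bring-your-own NFS option, a new share can be registered after the cluster is built by describing it to Kubernetes as a PersistentVolume and creating it with kubectl. A minimal sketch, with a placeholder server, export path, and size:

```
kind: PersistentVolume
apiVersion: v1
metadata:
  name: nfs-share-01
spec:
  capacity:
    storage: 100Gi
  accessModes:
    - ReadWriteMany             # NFS shares are consumed multi-reader, multi-writer
  nfs:
    server: nfs01.example.com   # placeholder off-cluster file server
    path: /exports/share-01
```

Save it to a file and create it with `kubectl create -f nfs-share-01.yaml`; the volume is then available to be claimed, as shown later in this document.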

## Using GlusterFS storage cluster for your workloads

1. Use Kismatic to configure `nodes` for GlusterFS by providing their details in the plan file, e.g.:
   ```
   ...
   storage:
     expected_count: 2
     nodes:
     - host: storage1.somehost.com
       ip: 8.8.8.1
       internalip: 8.8.8.1
     - host: storage2.somehost.com
       ip: 8.8.8.2
       internalip: 8.8.8.2
   ```

 If you have an existing Kubernetes cluster set up with Kismatic, you can still have the tool configure a GlusterFS cluster by adding a `storage` section like the one above to your plan file and running:
   ```
   kismatic install step _storage.yaml
   ```

 This will set up a 2-node GlusterFS cluster and expose it as a Kubernetes service named `kismatic-storage`.
2. To create a new GlusterFS volume and expose it in Kubernetes as a PersistentVolume, use:
   ```
   kismatic volume add 10 storage01 -r 2 -d 1 -c="durable" -a 10.10.*.*
   ```

  * `10` represents the volume size in GB. In this example, a GlusterFS volume with a `10GB` quota and a Kubernetes PersistentVolume with a capacity of `10Gi` will be created.
  * `storage01` is the name used for the GlusterFS volume, the GlusterFS brick directories, and the Kubernetes PersistentVolume. All GlusterFS bricks will be created under the `/data` directory on the node, using the logical disk mounted under `/`.
  * `-r (replica-count)` is the number of copies of each file that will be stored when writing data.
  * `-d (distribution-count)` is the degree to which files will be distributed across the cluster. A count of 1 means that all files will exist at every replica. A count of 2 means that each set of replicas will have half of all files.
  * **NOTE**: the GlusterFS cluster must have at least `replica-count * distribution-count` storage nodes available for a volume to be created. In this example, the storage cluster must have 2 or more nodes, each with at least 10GB of free disk space.
  * `-c (storage-class)` is the name of the StorageClass that will be added when creating the PersistentVolume. Use this name when creating your PersistentVolumeClaims (a sketch of the resulting PersistentVolume follows this list).
  * `-a (allow-address)` is a comma-separated list of off-cluster IP ranges that are permitted to mount and access the GlusterFS network volumes. Include any addresses you use for data management. Nodes in the Kubernetes cluster and the pod CIDR range will always have access.
  * **NOTE**: IP address is the only credential used to authorize a storage connection. All nodes and pods will be able to access these shares.
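
 Kismatic creates the PersistentVolume object for you when this command runs. For orientation only, the resulting object is roughly of the following shape; the exact fields and names Kismatic generates may differ:
   ```
   kind: PersistentVolume
   apiVersion: v1
   metadata:
     name: storage01
     annotations:
       volume.beta.kubernetes.io/storage-class: "durable"
   spec:
     capacity:
       storage: 10Gi
     accessModes:
       - ReadWriteMany
     glusterfs:
       endpoints: kismatic-storage   # assumed to match the storage service/endpoints created earlier
       path: storage01
       readOnly: false
   ```
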
3. Create a new PersistentVolumeClaim
   ```
   kind: PersistentVolumeClaim
   apiVersion: v1
   metadata:
     name: my-app-frontend-claim
     annotations:
       volume.beta.kubernetes.io/storage-class: "durable"
   spec:
     accessModes:
       - ReadWriteMany
     resources:
       requests:
         storage: 10Gi
   ```

 Use the `volume.beta.kubernetes.io/storage-class: "durable"` annotation for the PersistentVolumeClaim to bind to the newly created PersistentVolume.

4. Use the claim as a pod volume
   ```
   kind: Pod
   apiVersion: v1
   metadata:
     name: my-app-frontend
   spec:
     containers:
       - name: my-app-frontend
         image: nginx
         volumeMounts:
         - mountPath: "/var/www/html"
           name: html
     volumes:
       - name: html
         persistentVolumeClaim:
           claimName: my-app-frontend-claim
   ```

5. Your pod will now have access to the `/var/www/html` directory that is backed by a GlusterFS volume. If you scale this pod out, each instance of the pod should have access to that directory, as sketched below.
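
 Because the claim was created with `ReadWriteMany` access, multiple replicas can mount it at once. A minimal sketch of a Deployment that reuses the names from the examples above (the Deployment itself is illustrative, not something Kismatic creates for you):
   ```
   kind: Deployment
   apiVersion: apps/v1
   metadata:
     name: my-app-frontend
   spec:
     replicas: 3
     selector:
       matchLabels:
         app: my-app-frontend
     template:
       metadata:
         labels:
           app: my-app-frontend
       spec:
         containers:
           - name: my-app-frontend
             image: nginx
             volumeMounts:
             - mountPath: "/var/www/html"
               name: html
         volumes:
           - name: html
             # Every replica mounts the same GlusterFS-backed claim
             persistentVolumeClaim:
               claimName: my-app-frontend-claim
   ```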