
## Persistent Volume Provisioning

This example shows how to use dynamic persistent volume provisioning.

### Prerequisites

This example assumes that you have an understanding of Kubernetes administration and can modify the
scripts that launch kube-controller-manager.

### Admin Configuration

The admin must define `StorageClass` objects that describe named "classes" of storage offered in a cluster. Different classes might map to arbitrary levels or policies determined by the admin. When configuring a `StorageClass` object for persistent volume provisioning, the admin needs to describe the type of provisioner to use and the parameters the provisioner will use when it provisions a `PersistentVolume` belonging to the class.

The name of a `StorageClass` object is significant: it is how users request a particular class, by specifying the name in their `PersistentVolumeClaim`. The `provisioner` field must be specified, as it determines which volume plugin is used for provisioning PVs. The `parameters` field contains the parameters that describe volumes belonging to the storage class. Different parameters may be accepted depending on the `provisioner`. For example, the value `io1` for the parameter `type`, and the parameter `iopsPerGB`, are specific to EBS. When a parameter is omitted, some default is used.

See the [Kubernetes StorageClass documentation](https://kubernetes.io/docs/user-guide/persistent-volumes/#storageclasses) for a complete reference of all supported parameters.

#### AWS

```yaml
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: slow
provisioner: kubernetes.io/aws-ebs
parameters:
  type: io1
  zones: us-east-1d, us-east-1c
  iopsPerGB: "10"
  fsType: ext4
```

* `type`: `io1`, `gp2`, `sc1`, `st1`. See AWS docs for details. Default: `gp2`.
* `zone`: AWS zone. If neither `zone` nor `zones` is specified, volumes are generally round-robined across all active zones where the Kubernetes cluster has a node. Note: the `zone` and `zones` parameters must not be used at the same time.
* `zones`: a comma separated list of AWS zone(s). If neither `zone` nor `zones` is specified, volumes are generally round-robined across all active zones where the Kubernetes cluster has a node. Note: the `zone` and `zones` parameters must not be used at the same time.
* `iopsPerGB`: only for `io1` volumes. I/O operations per second per GiB. The AWS volume plugin multiplies this by the size of the requested volume to compute the IOPS of the volume, and caps the result at 20,000 IOPS (the maximum supported by AWS; see AWS docs). For example, a 100Gi volume requested with `iopsPerGB: "10"` gets 1,000 IOPS.
* `encrypted`: denotes whether the EBS volume should be encrypted or not. Valid values are `true` or `false`.
* `kmsKeyId`: optional. The full Amazon Resource Name of the key to use when encrypting the volume. If none is supplied but `encrypted` is true, a key is generated by AWS. See AWS docs for valid ARN values.
* `fsType`: a filesystem type supported by Kubernetes. Default: `"ext4"`.

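The `encrypted` and `kmsKeyId` parameters can be combined into a class for encrypted volumes. As a sketch, assuming a class name of `encrypted-gp2` and a placeholder key ARN (substitute your own):

```yaml
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: encrypted-gp2
provisioner: kubernetes.io/aws-ebs
parameters:
  type: gp2
  encrypted: "true"
  # Placeholder ARN; if kmsKeyId is omitted, AWS generates a key.
  kmsKeyId: arn:aws:kms:us-east-1:111122223333:key/00000000-0000-0000-0000-000000000000
  fsType: ext4
```
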
#### GCE

```yaml
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: slow
provisioner: kubernetes.io/gce-pd
parameters:
  type: pd-standard
  zone: us-central1-a
  fsType: ext4
```

* `type`: `pd-standard` or `pd-ssd`. Default: `pd-standard`.
* `zone`: GCE zone. If neither `zone` nor `zones` is specified, volumes are generally round-robined across all active zones where the Kubernetes cluster has a node. Note: the `zone` and `zones` parameters must not be used at the same time.
* `zones`: a comma separated list of GCE zone(s). If neither `zone` nor `zones` is specified, volumes are generally round-robined across all active zones where the Kubernetes cluster has a node. Note: the `zone` and `zones` parameters must not be used at the same time.

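As a sketch of the `zones` parameter described above, a class that restricts SSD volumes to two specific zones (the class name and zone names here are illustrative) might look like:

```yaml
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: fast-multizone
provisioner: kubernetes.io/gce-pd
parameters:
  type: pd-ssd
  # zones and zone are mutually exclusive; use only one of them.
  zones: us-central1-a, us-central1-b
```
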
#### vSphere

```yaml
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: slow
provisioner: kubernetes.io/vsphere-volume
parameters:
  diskformat: eagerzeroedthick
  fsType: ext3
```

* `diskformat`: `thin`, `zeroedthick` or `eagerzeroedthick`. See vSphere docs for details. Default: `"thin"`.
* `fsType`: a filesystem type supported by Kubernetes. Default: `"ext4"`.

#### Portworx Volume

```yaml
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: portworx-io-priority-high
provisioner: kubernetes.io/portworx-volume
parameters:
  repl: "1"
  snap_interval: "70"
  io_priority: "high"
```

* `fs`: filesystem to be laid out: [none/xfs/ext4] (default: `ext4`)
* `block_size`: block size in Kbytes (default: `32`)
* `repl`: replication factor [1..3] (default: `1`)
* `io_priority`: IO priority: [high/medium/low] (default: `low`)
* `snap_interval`: snapshot interval in minutes; 0 disables snapshots (default: `0`)
* `aggregation_level`: specifies the number of chunks the volume would be distributed into; 0 indicates a non-aggregated volume (default: `0`)
* `ephemeral`: ephemeral storage [true/false] (default: `false`)

For a complete example, refer to the [Portworx Volume docs](../volumes/portworx/README.md).

#### StorageOS

```yaml
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: sc-fast
provisioner: kubernetes.io/storageos
parameters:
  pool: default
  description: Kubernetes volume
  fsType: ext4
  adminSecretNamespace: default
  adminSecretName: storageos-secret
```

* `pool`: The name of the StorageOS distributed capacity pool to provision the volume from. If not specified, the `default` pool is used, which is normally present.
* `description`: The description to assign to dynamically created volumes. All volume descriptions will be the same for the storage class, but different storage classes can be used to allow descriptions for different use cases. Defaults to `Kubernetes volume`.
* `fsType`: The default filesystem type to request. Note that user-defined rules within StorageOS may override this value. Defaults to `ext4`.
* `adminSecretNamespace`: The namespace where the API configuration secret is located. Required if `adminSecretName` is set.
* `adminSecretName`: The name of the secret to use for obtaining the StorageOS API credentials. If not specified, default values will be attempted.

For a complete example, refer to the [StorageOS example](../../staging/volumes/storageos/README.md).

#### GlusterFS

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: fast
provisioner: kubernetes.io/glusterfs
parameters:
  resturl: "http://127.0.0.1:8081"
  clusterid: "630372ccdc720a92c681fb928f27b53f"
  restuser: "admin"
  secretNamespace: "default"
  secretName: "heketi-secret"
  gidMin: "40000"
  gidMax: "50000"
  volumetype: "replicate:3"
  volumeoptions: "client.ssl on, server.ssl on"
  volumenameprefix: "dept-dev"
  snapfactor: "10"
  customepnameprefix: "dbstorage"
```

An example storage class can be found in [glusterfs-storageclass.yaml](glusterfs/glusterfs-storageclass.yaml).

* `resturl`: Gluster REST service/Heketi service URL which provisions gluster volumes on demand. The general format should be `IPaddress:Port`, and this is a mandatory parameter for the GlusterFS dynamic provisioner. If the Heketi service is exposed as a routable service in the OpenShift/Kubernetes setup, it can have a format similar to `http://heketi-storage-project.cloudapps.mystorage.com`, where the FQDN is a resolvable Heketi service URL.

* `restauthenabled`: Gluster REST service authentication boolean that enables authentication to the REST server. If this value is `true`, either `restuser` and `restuserkey` or `secretNamespace` + `secretName` have to be filled. This option is deprecated; authentication is enabled when any of `restuser`, `restuserkey`, `secretName` or `secretNamespace` is specified.

* `restuser`: Gluster REST service/Heketi user who has access to create volumes in the Gluster Trusted Pool.

* `restuserkey`: Gluster REST service/Heketi user's password, used for authentication to the REST server. This parameter is deprecated in favor of `secretNamespace` + `secretName`.

* `secretNamespace` + `secretName`: Identification of the Secret instance that contains the user password to use when talking to the Gluster REST service. These parameters are optional; an empty password is used when both `secretNamespace` and `secretName` are omitted. The provided secret must have type "kubernetes.io/glusterfs". When both `restuserkey` and `secretNamespace` + `secretName` are specified, the secret is used.

* `clusterid`: `630372ccdc720a92c681fb928f27b53f` is the ID of the cluster which will be used by Heketi when provisioning the volume. It can also be a list of cluster IDs, for example:
"8452344e2becec931ece4e33c4674e4e,42982310de6c63381718ccfa6d8cf397". This is an optional parameter.

An example of a secret can be found in [glusterfs-secret.yaml](glusterfs/glusterfs-secret.yaml).

* `gidMin` + `gidMax`: The minimum and maximum value of the GID range for the storage class. A unique value (GID) in this range (gidMin-gidMax) will be used for dynamically provisioned volumes. These are optional values. If not specified, the volume will be provisioned with a value between 2000 and 2147483647, which are the defaults for gidMin and gidMax respectively.

* `volumetype`: The volume type and its parameters can be configured with this optional value. If the volume type is not mentioned, it's up to the provisioner to decide the volume type.
For example:

  * Replica volume: `volumetype: replicate:3`, where '3' is the replica count.
  * Disperse/EC volume: `volumetype: disperse:4:2`, where '4' is the data count and '2' is the redundancy count.
  * Distribute volume: `volumetype: none`

For available volume types and their administration options, refer to the [Administration Guide](http://docs.gluster.org/en/latest/Administrator%20Guide/Setting%20Up%20Volumes/).

* `volumeoptions`: This option allows specifying the gluster volume options to set on the dynamically provisioned GlusterFS volume. The value should be a comma separated string of options. As shown in the example, if you want to enable encryption on dynamically provisioned gluster volumes you can pass the `client.ssl on, server.ssl on` options. This is an optional parameter.

For available volume options and their administration, refer to the [Administration Guide](http://docs.gluster.org/en/latest/Administrator%20Guide/Managing%20Volumes/).

* `volumenameprefix`: By default, dynamically provisioned volumes have a naming schema of `vol_UUID` format. With this option present in the storage class, an admin can prefix the desired volume name. If the `volumenameprefix` storage class parameter is set, the dynamically provisioned volumes are created in the below format, where `_` is the field separator/delimiter:

`volumenameprefix_Namespace_PVCname_randomUUID`

Please note that the value for this parameter cannot contain `_`. This is an optional parameter.

* `snapfactor`: The dynamically provisioned volume's thinpool size can be configured with this parameter. The value should be in the range 1-100; it is taken into account while creating the thinpool for the provisioned volume. This is an optional parameter with a default value of 1.

* `customepnameprefix`: By default, dynamically provisioned volumes have an endpoint and service created with a naming schema of `glusterfs-dynamic-<PVC UUID>` format. With this option present in the storage class, an admin can prefix the desired endpoint name. If the `customepnameprefix` storage class parameter is set, the dynamically provisioned volumes will have an endpoint and service created in the following format, where `-` is the field separator/delimiter: `customepnameprefix-<PVC UUID>`

Reference: [How to configure Gluster on Kubernetes](https://github.com/gluster/gluster-kubernetes/blob/master/docs/setup-guide.md)

Reference: [How to configure Heketi](https://github.com/heketi/heketi/wiki/Setting-up-the-topology)

When persistent volumes are dynamically provisioned, the Gluster plugin automatically creates an endpoint and a headless service named `glusterfs-dynamic-<claimname>`. This dynamic endpoint and service are deleted automatically when the persistent volume claim is deleted.


#### OpenStack Cinder

```yaml
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: gold
provisioner: kubernetes.io/cinder
parameters:
  type: fast
  availability: nova
  fsType: ext4
```

* `type`: [VolumeType](http://docs.openstack.org/admin-guide/dashboard-manage-volumes.html) created in Cinder. Default is empty.
* `availability`: Availability Zone. Default is empty.
* `fsType`: a filesystem type supported by Kubernetes. Default: `"ext4"`.

#### Ceph RBD

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: fast
provisioner: kubernetes.io/rbd
parameters:
  monitors: 10.16.153.105:6789
  adminId: kube
  adminSecretName: ceph-secret
  adminSecretNamespace: kube-system
  pool: kube
  userId: kube
  userSecretName: ceph-secret-user
  fsType: ext4
  imageFormat: "2"
```

* `monitors`: Ceph monitors, comma delimited. It is required.
* `adminId`: Ceph client ID that is capable of creating images in the pool. Default is "admin".
* `adminSecretName`: Secret name for `adminId`. It is required. The provided secret must have type "kubernetes.io/rbd".
* `adminSecretNamespace`: The namespace for `adminSecretName`. Default is "default".
* `pool`: Ceph RBD pool. Default is "rbd".
* `userId`: Ceph client ID that is used to map the RBD image. Default is the same as `adminId`.
* `userSecretName`: The name of the Ceph Secret for `userId` to map the RBD image. It must exist in the same namespace as PVCs. It is required.
* `fsType`: a filesystem type supported by Kubernetes. Default: `"ext4"`.
* `imageFormat`: Ceph RBD image format, "1" or "2". Default is "2".
* `imageFeatures`: Ceph RBD image format 2 features, comma delimited. This is optional and is only used if you set `imageFormat` to "2". Currently only the `layering` feature is supported. Default is "", and no features are turned on.

NOTE: We cannot turn on the `exclusive-lock` feature for now (nor `object-map`, `fast-diff` and `journaling`, which require `exclusive-lock`), because an exclusive lock and an advisory lock cannot work together. (See [#45805](https://issue.k8s.io/45805))

#### Quobyte

<!-- BEGIN MUNGE: EXAMPLE quobyte/quobyte-storage-class.yaml -->

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: slow
provisioner: kubernetes.io/quobyte
parameters:
  quobyteAPIServer: "http://138.68.74.142:7860"
  registry: "138.68.74.142:7861"
  adminSecretName: "quobyte-admin-secret"
  adminSecretNamespace: "kube-system"
  user: "root"
  group: "root"
  quobyteConfig: "BASE"
  quobyteTenant: "DEFAULT"
```

[Download example](quobyte/quobyte-storage-class.yaml?raw=true)
<!-- END MUNGE: EXAMPLE quobyte/quobyte-storage-class.yaml -->

* **quobyteAPIServer** API server of Quobyte in the format `http(s)://api-server:7860`
* **registry** Quobyte registry to use to mount the volume. You can specify the registry as a `<host>:<port>` pair, or to specify multiple registries simply put a comma between them, e.g. `<host1>:<port>,<host2>:<port>,<host3>:<port>`. The host can be an IP address, or if you have a working DNS you can also provide DNS names.
* **adminSecretName** secret that holds information about the Quobyte user and the password to authenticate against the API server. The provided secret must have type "kubernetes.io/quobyte".
* **adminSecretNamespace** The namespace for **adminSecretName**. Default is `default`.
* **user** maps all access to this user. Default is `root`.
* **group** maps all access to this group. Default is `nfsnobody`.
* **quobyteConfig** use the specified configuration to create the volume. You can create a new configuration or modify an existing one with the Web console or the quobyte CLI. Default is `BASE`.
* **quobyteTenant** use the specified tenant ID to create/delete the volume. This Quobyte tenant has to be already present in Quobyte. For Quobyte < 1.4 use an empty string `""` as the `DEFAULT` tenant. Default is `DEFAULT`.
* **createQuota** if set, all volumes created by this storage class will get a quota for the specified size. The quota is set for the logical disk size (which can differ from the physical size, e.g. if replication is used). Default is `False`.

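As a sketch of the `createQuota` parameter described above, a quota-enforcing class might add it to the example's parameters (the class name `slow-quota` is illustrative; the other values are the placeholders from the example above):

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: slow-quota
provisioner: kubernetes.io/quobyte
parameters:
  quobyteAPIServer: "http://138.68.74.142:7860"
  registry: "138.68.74.142:7861"
  adminSecretName: "quobyte-admin-secret"
  adminSecretNamespace: "kube-system"
  # Enforce a quota equal to the logical size requested in the PVC.
  createQuota: "true"
```
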
First create the Quobyte admin's Secret in the system namespace. Here the Secret is created in `kube-system`:

```
$ kubectl create -f examples/persistent-volume-provisioning/quobyte/quobyte-admin-secret.yaml --namespace=kube-system
```

Then create the Quobyte storage class:

```
$ kubectl create -f examples/persistent-volume-provisioning/quobyte/quobyte-storage-class.yaml
```

Now create a PVC:

```
$ kubectl create -f examples/persistent-volume-provisioning/claim1.json
```

Check the created PVC:

```
$ kubectl describe pvc
Name:		claim1
Namespace:	default
Status:		Bound
Volume:		pvc-bdb82652-694a-11e6-b811-080027242396
Labels:		<none>
Capacity:	3Gi
Access Modes:	RWO
No events.

$ kubectl describe pv
Name:		pvc-bdb82652-694a-11e6-b811-080027242396
Labels:		<none>
Status:		Bound
Claim:		default/claim1
Reclaim Policy:	Delete
Access Modes:	RWO
Capacity:	3Gi
Message:
Source:
    Type:	Quobyte (a Quobyte mount on the host that shares a pod's lifetime)
    Registry:	138.68.79.14:7861
    Volume:	kubernetes-dynamic-pvc-bdb97c58-694a-11e6-91b6-080027242396
    ReadOnly:	false
No events.
```

Create a Pod to use the PVC:

```
$ kubectl create -f examples/persistent-volume-provisioning/quobyte/example-pod.yaml
```

#### <a name="azure-disk">Azure Disk</a>

```yaml
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: slow
provisioner: kubernetes.io/azure-disk
parameters:
  skuName: Standard_LRS
  location: eastus
  storageAccount: azure_storage_account_name
  fsType: ext4
```

* `skuName`: Azure storage account SKU tier. Default is empty.
* `location`: Azure storage account location. Default is empty.
* `storageAccount`: Azure storage account name. If a storage account is not provided, all storage accounts associated with the resource group are searched to find one that matches `skuName` and `location`. If a storage account is provided, it must reside in the same resource group as the cluster, and `skuName` and `location` are ignored.
* `fsType`: a filesystem type supported by Kubernetes. Default: `"ext4"`.

#### Azure File

```yaml
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: slow
provisioner: kubernetes.io/azure-file
parameters:
  skuName: Standard_LRS
  location: eastus
  storageAccount: azure_storage_account_name
```

The parameters are the same as those used by [Azure Disk](#azure-disk).

### User provisioning requests

Users request dynamically provisioned storage by including a storage class in their `PersistentVolumeClaim`, using the `spec.storageClassName` attribute.
This value must match the name of a `StorageClass` configured by the administrator.

```json
{
  "kind": "PersistentVolumeClaim",
  "apiVersion": "v1",
  "metadata": {
    "name": "claim1"
  },
  "spec": {
    "accessModes": [
      "ReadWriteOnce"
    ],
    "resources": {
      "requests": {
        "storage": "3Gi"
      }
    },
    "storageClassName": "slow"
  }
}
```

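The same claim can equivalently be written in YAML:

```yaml
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: claim1
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 3Gi
  # Must match the name of a StorageClass created by the admin.
  storageClassName: slow
```
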
### Sample output

#### GCE

This example uses GCE, but any provisioner follows the same flow.

First we note there are no persistent volumes in the cluster. After creating a storage class and a claim that references it, we see a new PV is created
and automatically bound to the claim requesting storage.

```
$ kubectl get pv

$ kubectl create -f examples/persistent-volume-provisioning/gce-pd.yaml
storageclass "slow" created

$ kubectl create -f examples/persistent-volume-provisioning/claim1.json
persistentvolumeclaim "claim1" created

$ kubectl get pv
NAME                                       CAPACITY   ACCESSMODES   STATUS    CLAIM                        REASON    AGE
pvc-bb6d2f0c-534c-11e6-9348-42010af00002   3Gi        RWO           Bound     default/claim1                         4s

$ kubectl get pvc
NAME      LABELS    STATUS    VOLUME                                     CAPACITY   ACCESSMODES   AGE
claim1    <none>    Bound     pvc-bb6d2f0c-534c-11e6-9348-42010af00002   3Gi        RWO           7s

# delete the claim to release the volume
$ kubectl delete pvc claim1
persistentvolumeclaim "claim1" deleted

# the volume is deleted in response to the release of its claim
$ kubectl get pv

```

#### Ceph RBD

This section shows how to configure and use the Ceph RBD provisioner.

##### Prerequisites

For this to work you must have a functional Ceph cluster, and the `rbd` command line utility must be installed on any host/container that `kube-controller-manager` or `kubelet` is running on.

##### Configuration

First we must identify the Ceph client admin key. This is usually found in `/etc/ceph/ceph.client.admin.keyring` on your Ceph cluster nodes. The file will look something like this:

```
[client.admin]
  key = AQBfxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx==
  auid = 0
  caps mds = "allow"
  caps mon = "allow *"
  caps osd = "allow *"
```

From the key value, we will create a secret. We must create the Ceph admin Secret in the namespace defined in our `StorageClass`. In this example we've set the namespace to `kube-system`.

```
$ kubectl create secret generic ceph-secret-admin --from-literal=key='AQBfxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx==' --namespace=kube-system --type=kubernetes.io/rbd
```

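Equivalently, the same Secret can be declared as a manifest. A sketch, using the placeholder key from the keyring above; `stringData` lets you supply the key without base64-encoding it first:

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: ceph-secret-admin
  namespace: kube-system
type: kubernetes.io/rbd
stringData:
  # Plain-text value; the API server stores it base64-encoded under data.key.
  key: AQBfxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx==
```
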
Now modify `examples/persistent-volume-provisioning/rbd/rbd-storage-class.yaml` to reflect your environment, particularly the `monitors` field. We are now ready to create our RBD storage class:

```
$ kubectl create -f examples/persistent-volume-provisioning/rbd/rbd-storage-class.yaml
```

The kube-controller-manager is now able to provision storage; however, we still need to be able to map the RBD volume to a node. Mapping should be done with a non-privileged key; if you have existing users, you can get all keys by running `ceph auth list` on your Ceph cluster with the admin key. For this example we will create a new user and pool.

```
$ ceph osd pool create kube 512
$ ceph auth get-or-create client.kube mon 'allow r' osd 'allow rwx pool=kube'
[client.kube]
    key = AQBQyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyy==
```

This key will be made into a secret, just like the admin secret. However, this user secret will need to be created in every namespace where you intend to consume RBD volumes provisioned by our example storage class. Let's create a namespace called `myns`, and create the user secret in that namespace.

```
kubectl create namespace myns
kubectl create secret generic ceph-secret-user --from-literal=key='AQBQyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyy==' --namespace=myns --type=kubernetes.io/rbd
```

You are now ready to provision and use RBD storage.

##### Usage

With the storage class configured, let's create a PVC in our example namespace, `myns`:

```
$ kubectl create -f examples/persistent-volume-provisioning/claim1.json --namespace=myns
```

Eventually the PVC creation results in a matching PV and RBD volume:

```
$ kubectl describe pvc --namespace=myns
Name:		claim1
Namespace:	myns
Status:		Bound
Volume:		pvc-1cfa23b3-664b-11e6-9eb9-90b11c09520d
Labels:		<none>
Capacity:	3Gi
Access Modes:	RWO
No events.

$ kubectl describe pv
Name:		pvc-1cfa23b3-664b-11e6-9eb9-90b11c09520d
Labels:		<none>
Status:		Bound
Claim:		myns/claim1
Reclaim Policy:	Delete
Access Modes:	RWO
Capacity:	3Gi
Message:
Source:
    Type:		RBD (a Rados Block Device mount on the host that shares a pod's lifetime)
    CephMonitors:	[127.0.0.1:6789]
    RBDImage:		kubernetes-dynamic-pvc-1cfb1862-664b-11e6-9a5d-90b11c09520d
    FSType:
    RBDPool:		kube
    RadosUser:		kube
    Keyring:		/etc/ceph/keyring
    SecretRef:		&{ceph-secret-user}
    ReadOnly:		false
No events.
```

With our storage provisioned, we can now create a Pod to use the PVC:

```
$ kubectl create -f examples/persistent-volume-provisioning/rbd/pod.yaml --namespace=myns
```

Now our pod has an RBD mount!

```
$ export PODNAME=`kubectl get pod --selector='role=server' --namespace=myns --output=template --template="{{with index .items 0}}{{.metadata.name}}{{end}}"`
$ kubectl exec -it $PODNAME --namespace=myns -- df -h | grep rbd
/dev/rbd1       2.9G  4.5M  2.8G   1% /var/lib/www/html
```
