
# Outline

This example describes how to create a web frontend server, an auto-provisioned persistent volume on GCE or Azure, and an NFS-backed persistent volume claim.

Demonstrated Kubernetes Concepts:

* [Persistent Volumes](https://kubernetes.io/docs/concepts/storage/persistent-volumes/) to
  define persistent disks (disk lifecycle not tied to the Pods).
* [Services](https://kubernetes.io/docs/concepts/services-networking/service/) to enable Pods to
  locate one another.

![alt text][nfs pv example]

As illustrated above, two persistent volumes are used in this example:

- The web frontend Pod uses a persistent volume backed by the NFS server, and
- the NFS server uses an auto-provisioned [persistent volume](https://kubernetes.io/docs/concepts/storage/persistent-volumes/) from GCE PD, AWS EBS, or Azure Disk.

Note: this example uses an NFS container that doesn't support NFSv4.

[nfs pv example]: nfs-pv.png


## Quickstart

```console
# On GCE (create the GCE PD PVC):
$ kubectl create -f examples/staging/volumes/nfs/provisioner/nfs-server-gce-pv.yaml
# On Azure (create the Azure Disk PVC):
$ kubectl create -f examples/staging/volumes/nfs/provisioner/nfs-server-azure-pv.yaml
# Common steps after creating either the GCE PD or the Azure Disk PVC:
$ kubectl create -f examples/staging/volumes/nfs/nfs-server-rc.yaml
$ kubectl create -f examples/staging/volumes/nfs/nfs-server-service.yaml
# Get the cluster IP of the server using the following command:
$ kubectl describe services nfs-server
# Use the NFS server IP to update nfs-pv.yaml, then execute the following:
$ kubectl create -f examples/staging/volumes/nfs/nfs-pv.yaml
$ kubectl create -f examples/staging/volumes/nfs/nfs-pvc.yaml
# Run a fake backend:
$ kubectl create -f examples/staging/volumes/nfs/nfs-busybox-rc.yaml
# Get a pod name from this command:
$ kubectl get pod -l name=nfs-busybox
# Use the pod name to check the test file:
$ kubectl exec nfs-busybox-jdhf3 -- cat /mnt/index.html
```

## Example of NFS based persistent volume

See the [web server replication controller](nfs-web-rc.yaml) for a quick example of how to use an NFS
volume claim in a replication controller. It relies on the
[NFS persistent volume](nfs-pv.yaml) and
[NFS persistent volume claim](nfs-pvc.yaml) in this example as well.
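
The pattern is roughly the following (a minimal sketch, not the actual manifest; the claim name `nfs` and the nginx mount path are taken from the files linked above, so double-check them there):

```yaml
# Sketch: a replication controller whose pods mount an NFS-backed PVC.
apiVersion: v1
kind: ReplicationController
metadata:
  name: nfs-web
spec:
  replicas: 2
  template:
    metadata:
      labels:
        role: web-frontend
    spec:
      containers:
      - name: web
        image: nginx
        ports:
        - containerPort: 80
        volumeMounts:
        # Serve the NFS share as nginx's document root.
        - name: nfs
          mountPath: /usr/share/nginx/html
      volumes:
      # The pod refers to the claim, not to the NFS server directly.
      - name: nfs
        persistentVolumeClaim:
          claimName: nfs
```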

## Complete setup

The example below shows how to export an NFS share from a single-pod replication
controller and import it into two replication controllers.

### NFS server part

Define the [NFS server replication controller](nfs-server-rc.yaml) and the
[NFS service](nfs-server-service.yaml).

The NFS server exports an auto-provisioned persistent volume backed by GCE PD or Azure Disk. If you are on GCE, create a GCE PD-based PVC:

```console
$ kubectl create -f examples/staging/volumes/nfs/provisioner/nfs-server-gce-pv.yaml
```

If you are on Azure, create an Azure Premium Disk-based PVC:

```console
$ kubectl create -f examples/staging/volumes/nfs/provisioner/nfs-server-azure-pv.yaml
```
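
Either way, the claim should bind to an auto-provisioned disk before you continue. A quick check (the claim name `nfs-pv-provisioning-demo` is assumed from the provisioner manifests; adjust if yours differs):

```console
# The STATUS column should show Bound once provisioning completes.
$ kubectl get pvc nfs-pv-provisioning-demo
```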

Then, using the created PVC, create an NFS server and service:

```console
$ kubectl create -f examples/staging/volumes/nfs/nfs-server-rc.yaml
$ kubectl create -f examples/staging/volumes/nfs/nfs-server-service.yaml
```

The exported directory contains a dummy `index.html`. Wait until the pod is
running by checking `kubectl get pods -l role=nfs-server`.
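
You can watch for the transition instead of polling by hand (`--watch` streams updates until you interrupt it):

```console
# Prints a new line on each status change; Ctrl-C to stop once READY shows 1/1.
$ kubectl get pods -l role=nfs-server --watch
```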

### Create the NFS based persistent volume claim

The [NFS busybox controller](nfs-busybox-rc.yaml) uses a simple script to
generate data written to the NFS server we just started. First, you'll need to
find the cluster IP of the server:

```console
$ kubectl describe services nfs-server
```

Replace the invalid IP in the [NFS PV](nfs-pv.yaml) with that cluster IP. (In
the future, we'll be able to tie these together using the service names, but
for now, you have to hardcode the IP.)
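
The relevant part of the PV looks roughly like this (a sketch; the volume name, capacity, and access mode are assumptions based on [nfs-pv.yaml](nfs-pv.yaml), so check the file itself):

```yaml
# Sketch: an NFS-backed PersistentVolume pointing at the server's cluster IP.
apiVersion: v1
kind: PersistentVolume
metadata:
  name: nfs
spec:
  capacity:
    storage: 1Mi        # nominal; NFS does not enforce a size
  accessModes:
    - ReadWriteMany     # multiple pods may mount the share read-write
  nfs:
    # Illustrative value: replace with the cluster IP reported by
    # `kubectl describe services nfs-server`.
    server: 10.0.68.1
    path: "/"
```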

Create the [persistent volume](https://kubernetes.io/docs/concepts/storage/persistent-volumes/)
and the persistent volume claim for your NFS server. The persistent volume and
claim give us an indirection that allows multiple pods to refer to the NFS
server using a symbolic name rather than the hardcoded server address.

```console
$ kubectl create -f examples/staging/volumes/nfs/nfs-pv.yaml
$ kubectl create -f examples/staging/volumes/nfs/nfs-pvc.yaml
```
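
You can confirm that the volume and claim found each other (the object name `nfs` is assumed from the manifests in this directory):

```console
# Both should report STATUS Bound once they are matched.
$ kubectl get pv nfs
$ kubectl get pvc nfs
```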

### Set up the fake backend

The [NFS busybox controller](nfs-busybox-rc.yaml) updates `index.html` on the
NFS server every 10 seconds. Let's start that now:

```console
$ kubectl create -f examples/staging/volumes/nfs/nfs-busybox-rc.yaml
```
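
The update loop inside the controller amounts to something like the following (a sketch of the shell command; see [nfs-busybox-rc.yaml](nfs-busybox-rc.yaml) for the exact version):

```sh
# Runs inside each busybox pod, with the NFS share mounted at /mnt.
while true; do
  date > /mnt/index.html       # overwrite with the current timestamp
  hostname >> /mnt/index.html  # append the name of the writing pod
  sleep 10
done
```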

Conveniently, it's also a `busybox` pod, so we can get an early check
that our mounts are working now. Find a busybox pod and exec:

```console
$ kubectl get pod -l name=nfs-busybox
NAME                READY     STATUS    RESTARTS   AGE
nfs-busybox-jdhf3   1/1       Running   0          25m
nfs-busybox-w3s4t   1/1       Running   0          25m
$ kubectl exec nfs-busybox-jdhf3 -- cat /mnt/index.html
Thu Oct 22 19:20:18 UTC 2015
nfs-busybox-w3s4t
```

You should see output similar to the above if everything is working well. If
it's not, make sure you changed the invalid IP in the [NFS PV](nfs-pv.yaml) file
and make sure the `describe services` command above had endpoints listed
(indicating the service was associated with a running pod).
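
A quick way to re-check that the service has endpoints:

```console
# An empty ENDPOINTS column means no running pod matched the service selector.
$ kubectl get endpoints nfs-server
```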

### Set up the web server

The [web server controller](nfs-web-rc.yaml) is another simple replication
controller that demonstrates reading from the NFS share exported above as an
NFS volume, and runs a simple web server on it.

Define the pod:

```console
$ kubectl create -f examples/staging/volumes/nfs/nfs-web-rc.yaml
```
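
Wait for the frontend pods to come up (the `role=web-frontend` label matches the service selector shown below):

```console
$ kubectl get pod -l role=web-frontend
```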

This creates two pods, each of which serves the `index.html` from above. We can
then use a simple service to front it:

```console
$ kubectl create -f examples/staging/volumes/nfs/nfs-web-service.yaml
```

We can then use the busybox container we launched before to check that `nginx`
is serving the data appropriately:

```console
$ kubectl get pod -l name=nfs-busybox
NAME                READY     STATUS    RESTARTS   AGE
nfs-busybox-jdhf3   1/1       Running   0          1h
nfs-busybox-w3s4t   1/1       Running   0          1h
$ kubectl get services nfs-web
NAME      LABELS    SELECTOR            IP(S)        PORT(S)
nfs-web   <none>    role=web-frontend   10.0.68.37   80/TCP
$ kubectl exec nfs-busybox-jdhf3 -- wget -qO- http://10.0.68.37
Thu Oct 22 19:28:55 UTC 2015
nfs-busybox-w3s4t
```