---
title: Container Storage Interface Proposal
authors:
  - "@edisonxiang"
approvers:
  - "@qizha"
  - "@Baoqiang-Zhang"
  - "@kevin-wangzefeng"
  - "@m1093782566"
  - "@fisherxu"
creation-date: 2019-05-30
last-updated: 2019-09-07
status: implementable
---

# Container Storage Interface Proposal

* [Container Storage Interface Proposal](#container-storage-interface-proposal)
  * [Motivation](#motivation)
  * [Goals](#goals)
  * [Non\-goals](#non-goals)
  * [Proposal](#proposal)
    * [Requirement](#requirement)
    * [Architecture](#architecture)
    * [Workflow](#workflow)
    * [Deployment](#deployment)
    * [Example](#example)
  * [Graduation Criteria](#graduation-criteria)
    * [Alpha](#alpha)
    * [Beta](#beta)
    * [GA](#ga)

## Motivation

Currently KubeEdge only supports the following Kubernetes in-tree volumes:

* [emptyDir](https://kubernetes.io/docs/concepts/storage/volumes/#emptydir)
* [hostPath](https://kubernetes.io/docs/concepts/storage/volumes/#hostpath)
* [configMap](https://kubernetes.io/docs/concepts/storage/volumes/#configmap)
* [secret](https://kubernetes.io/docs/concepts/storage/volumes/#secret)

That is not enough for KubeEdge users.
Running applications with a persistent data store at the edge is common.
For example, in a system that collects and analyzes video data,
one application stores the video data in a shared storage,
and another application reads the data from the storage for analysis.
NFS storage is well suited to this scenario but is not implemented by KubeEdge.
KubeEdge should allow users to store data using StorageClass (SC),
PersistentVolume (PV) and PersistentVolumeClaim (PVC)
so that users can deploy stateful applications at the edge.

The Container Storage Interface (CSI) is a specification that resulted from cooperation
between community members including Kubernetes, Mesos, Cloud Foundry, and Docker.
The goal of this interface is to establish a standardized mechanism
to expose arbitrary storage systems to containerized workloads.
The CSI spec has been released as v1.1, and many CSI Drivers have been released by vendors.
The CSI Volume Plugin was first introduced in Kubernetes v1.9 as Alpha,
graduated to stable (GA) in v1.13, and has been continuously improved since then.

* [CSI Spec](https://github.com/container-storage-interface/spec/blob/master/spec.md)
* [CSI Drivers](https://kubernetes-csi.github.io/docs/drivers.html)
* [CSI Volume Plugin in Kubernetes](https://github.com/kubernetes/community/blob/master/contributors/design-proposals/storage/container-storage-interface.md)

We can support the Container Storage Interface (CSI) in KubeEdge,
and give a standardized solution to users who want to use persistent storage at the edge.
KubeEdge will provide only very limited in-tree PersistentVolume (PV) implementations,
because many of the upstream PersistentVolumes (PVs) are storage services on the cloud and not quite suitable at the edge.
The key is to implement CSI and bring extensibility to users so that they can install their own storage plugins.
Users can choose any kind of CSI Driver on demand. Since KubeEdge is built upon Kubernetes,
it is possible to reuse some of the existing framework from Kubernetes in KubeEdge.

## Goals

* To support the [Basic CSI Volume Lifecycle](https://github.com/container-storage-interface/spec/blob/master/spec.md#volume-lifecycle) in KubeEdge.
   * Create Volume and Delete Volume.
   * Controller Publish Volume and Controller Unpublish Volume.
   * Node Stage Volume and Node Unstage Volume.
   * Node Publish Volume and Node Unpublish Volume.
* To be compatible with Kubernetes and CSI.
* To support storage at the edge.

## Non-goals

* To support Raw Block Volume.
* To support Volume Snapshot and Restore.
* To support Volume Topology.
* To support storage on the cloud.

## Proposal

### Requirement

* Kubernetes v1.15+
* CSI Spec v1.0.0+

### Architecture

<img src="../images/csi/csi-architecture.png">

The components added in KubeEdge include:

* External-Provisioner: list-watches the Kubernetes API Resource `PersistentVolumeClaim` (`PVC`)
   and sends messages to the edge to issue actions including `Create Volume` and `Delete Volume` at the edge.
   Users can reuse the [external-provisioner](https://github.com/kubernetes-csi/external-provisioner) from the Kubernetes-CSI community.
* External-Attacher: list-watches the Kubernetes API Resource `VolumeAttachment` and sends messages to the edge
   to issue actions including `Controller Publish Volume` and `Controller Unpublish Volume` at the edge.
   Users can reuse the [external-attacher](https://github.com/kubernetes-csi/external-attacher) from the Kubernetes-CSI community.
* CSI Driver from KubeEdge: this works like a CSI Driver proxy, and it implements all of the `Identity` and `Controller` interfaces.
  It connects with CloudHub by UNIX Domain Sockets (UDS) and sends messages to the edge.
  All of the actions in the Volume Lifecycle are actually executed by the CSI Driver from Vendor at the edge.
* UDS Server: provides the communication channel between KubeEdge and the CSI Driver from KubeEdge.
  It is hosted in the KubeEdge CloudHub, but it is only used to communicate with the CSI Driver from KubeEdge on the cloud.
  It is not used to communicate between the cloud and the edge.
  It receives messages from the CSI Driver from KubeEdge, sends them to the edge through the communication channel
  between CloudHub and EdgeHub (such as WebSocket, QUIC, and so forth),
  and then gives a response back to the CSI Driver from KubeEdge.
* Managers in Edge Controller: including the PersistentVolume Manager, PersistentVolumeClaim Manager, VolumeAttachment Manager and so on.
  These manager components poll the volume-related Kubernetes API Resources,
  such as PersistentVolume, PersistentVolumeClaim and VolumeAttachment,
  and sync the metadata of these resources to the edge.
* CSI Volume Plugin (In-Tree): issues actions including
   `Create Volume` and `Delete Volume`,
   `Controller Publish Volume` and `Controller Unpublish Volume`,
   `Node Stage Volume` and `Node Unstage Volume`,
   `Node Publish Volume` and `Node Unpublish Volume` at the edge.
* Node-Driver-Registrar: registers the CSI Driver from Vendor with `Edged` by UNIX Domain Sockets (UDS).
  Users can reuse the [node-driver-registrar](https://github.com/kubernetes-csi/node-driver-registrar) from the Kubernetes-CSI community.
* CSI Driver from Vendor: these drivers are chosen by the users; the KubeEdge team will not provide them.

### Workflow

#### Create Volume
<img src="../images/csi/csi-create-volume.png">

#### Delete Volume
<img src="../images/csi/csi-delete-volume.png">

#### Attach Volume
<img src="../images/csi/csi-attach-volume.png">

#### Detach Volume
<img src="../images/csi/csi-detach-volume.png">

#### Mount Volume
<img src="../images/csi/csi-mount-volume.png">

#### Unmount Volume
<img src="../images/csi/csi-umount-volume.png">

### Deployment

Before using CSI, the cluster admins need to deploy a `StatefulSet` or `Deployment` on the cloud
to support Create / Delete Volume and Attach / Detach Volume.
Here is a `StatefulSet` example for the CSI HostPath Driver on the cloud:

```yaml
kind: StatefulSet
apiVersion: apps/v1
metadata:
  name: csi-hostpath-controller
spec:
  serviceName: "csi-hostpath-controller"
  replicas: 1
  selector:
    matchLabels:
      app: csi-hostpath-controller
  template:
    metadata:
      labels:
        app: csi-hostpath-controller
    spec:
      serviceAccountName: csi-controller
      containers:
        - name: csi-provisioner
          image: quay.io/k8scsi/csi-provisioner:v1.2.1
          imagePullPolicy: IfNotPresent
          args:
            - -v=5
            - --csi-address=/csi/csi.sock
            - --connection-timeout=15s
          volumeMounts:
            - mountPath: /csi
              name: csi-socket-dir
        - name: csi-attacher
          image: quay.io/k8scsi/csi-attacher:v1.1.1
          imagePullPolicy: IfNotPresent
          args:
            - --v=5
            - --csi-address=/csi/csi.sock
          volumeMounts:
            - mountPath: /csi
              name: csi-socket-dir
        - name: csi-driver
          image: kubeedge/csidriver:v1.1.0
          imagePullPolicy: IfNotPresent
          args:
            - "--v=5"
            - "--drivername=$(CSI_DRIVERNAME)"
            - "--endpoint=$(CSI_ENDPOINT)"
            - "--kubeedge-endpoint=$(KUBEEDGE_ENDPOINT)"
            - "--nodeid=$(KUBE_NODE_NAME)"
          env:
            - name: CSI_DRIVERNAME
              value: csi-hostpath
            - name: CSI_ENDPOINT
              value: unix:///csi/csi.sock
            - name: KUBEEDGE_ENDPOINT
              value: unix:///kubeedge/kubeedge.sock
            - name: KUBE_NODE_NAME
              # replace this value with the name of the edge node
              # which is in charge of Create Volume and Delete Volume,
              # Controller Publish Volume and Controller Unpublish Volume.
              value: edge-node
          securityContext:
            privileged: true
          volumeMounts:
            - mountPath: /csi
              name: csi-socket-dir
            - mountPath: /kubeedge
              name: kubeedge-socket-dir
      volumes:
        - hostPath:
            path: /var/lib/kubelet/plugins/csi-hostpath
            type: DirectoryOrCreate
          name: csi-socket-dir
        - hostPath:
            path: /var/lib/kubeedge
            type: DirectoryOrCreate
          name: kubeedge-socket-dir
```

The `StatefulSet` includes:

* The following containers:

  * `csi-provisioner` (`External-Provisioner`) container

    Responsible for issuing the Create / Delete Volume calls.

  * `csi-attacher` (`External-Attacher`) container

    Responsible for issuing the Attach / Detach Volume calls.

  * `csi-driver` (`CSI Driver from KubeEdge`) container

    Responsible for forwarding the calls from `csi-provisioner` and `csi-attacher` to CloudHub.

    The value of the env `CSI_DRIVERNAME` specifies the name of the CSI Driver on the edge node,
    which will execute the calls including Create / Delete Volume and Attach / Detach Volume.

    The value of the env `CSI_ENDPOINT` specifies the address of the CSI UNIX Domain Socket.

    The value of the env `KUBEEDGE_ENDPOINT` specifies the address of the KubeEdge CloudHub UNIX Domain Socket.

    The value of the env `KUBE_NODE_NAME` specifies the name of the edge node which is in charge of
    `Create Volume` and `Delete Volume`, `Controller Publish Volume` and `Controller Unpublish Volume`.

* The following volumes:

  * `csi-socket-dir` hostPath volume

      Exposes `/var/lib/kubelet/plugins/csi-hostpath` from the host on the cloud.

      Mounted in all three containers.
      `csi-provisioner` and `csi-attacher` use this UNIX Domain Socket to communicate with `csi-driver`.

  * `kubeedge-socket-dir` hostPath volume

      Exposes `/var/lib/kubeedge` from the host.

      Mounted inside the `csi-driver` container; this is the primary means of communication between KubeEdge CloudHub and the `csi-driver` container.

A `Deployment` can also be used in place of the `StatefulSet` on the cloud.

**Currently a UNIX Domain Socket is used between the `csi-driver` container and KubeEdge CloudHub,
so the users need to deploy the above `StatefulSet` or `Deployment` on the same host as KubeEdge Cloud Core.**
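
The `StatefulSet` above runs under the `csi-controller` ServiceAccount, which must exist before deployment. Below is a minimal sketch of that account together with a ClusterRole covering the resources the sidecars list-watch; the exact rule set depends on the `external-provisioner` and `external-attacher` versions in use, so treat the rules as illustrative rather than authoritative.

```yaml
# Hypothetical RBAC sketch for the csi-controller ServiceAccount;
# align the rules with the RBAC manifests shipped with the chosen
# external-provisioner and external-attacher releases.
apiVersion: v1
kind: ServiceAccount
metadata:
  name: csi-controller
  namespace: default
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: csi-controller
rules:
  # external-provisioner: watch claims, create and delete volumes
  - apiGroups: [""]
    resources: ["persistentvolumes"]
    verbs: ["get", "list", "watch", "create", "delete"]
  - apiGroups: [""]
    resources: ["persistentvolumeclaims"]
    verbs: ["get", "list", "watch", "update"]
  - apiGroups: ["storage.k8s.io"]
    resources: ["storageclasses"]
    verbs: ["get", "list", "watch"]
  # external-attacher: watch and update volume attachments
  - apiGroups: ["storage.k8s.io"]
    resources: ["volumeattachments"]
    verbs: ["get", "list", "watch", "update"]
  - apiGroups: [""]
    resources: ["nodes"]
    verbs: ["get", "list", "watch"]
  - apiGroups: [""]
    resources: ["events"]
    verbs: ["get", "list", "watch", "create", "update", "patch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: csi-controller
subjects:
  - kind: ServiceAccount
    name: csi-controller
    namespace: default
roleRef:
  kind: ClusterRole
  name: csi-controller
  apiGroup: rbac.authorization.k8s.io
```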

At the edge, the cluster admins need to deploy a `DaemonSet`, with the `node-driver-registrar` as a sidecar,
to add support for the chosen CSI Driver from Vendor.
Here is a `DaemonSet` example for the CSI HostPath Driver at the edge:

```yaml
kind: DaemonSet
apiVersion: apps/v1
metadata:
  name: csi-hostpath-edge
spec:
  selector:
    matchLabels:
      app: csi-hostpath-edge
  template:
    metadata:
      labels:
        app: csi-hostpath-edge
    spec:
      hostNetwork: true
      containers:
        - name: node-driver-registrar
          image: quay.io/k8scsi/csi-node-driver-registrar:v1.1.0
          imagePullPolicy: IfNotPresent
          lifecycle:
            preStop:
              exec:
                command: ["/bin/sh", "-c", "rm -rf /registration/csi-hostpath /registration/csi-hostpath-reg.sock"]
          args:
            - --v=5
            - --csi-address=/csi/csi.sock
            - --kubelet-registration-path=/var/lib/edged/plugins/csi-hostpath/csi.sock
          securityContext:
            privileged: true
          env:
            - name: KUBE_NODE_NAME
              valueFrom:
                fieldRef:
                  fieldPath: spec.nodeName
          volumeMounts:
            - name: socket-dir
              mountPath: /csi
            - name: registration-dir
              mountPath: /registration
        - name: csi-hostpath-driver
          securityContext:
            privileged: true
            capabilities:
              add: ["SYS_ADMIN"]
            allowPrivilegeEscalation: true
          image: quay.io/k8scsi/hostpathplugin:v1.1.0
          imagePullPolicy: IfNotPresent
          args:
            - "--drivername=csi-hostpath"
            - "--v=5"
            - "--nodeid=$(KUBE_NODE_NAME)"
            - "--endpoint=unix:///csi/csi.sock"
          env:
            - name: KUBE_NODE_NAME
              valueFrom:
                fieldRef:
                  apiVersion: v1
                  fieldPath: spec.nodeName
          volumeMounts:
            - name: socket-dir
              mountPath: /csi
            - name: plugins-dir
              mountPath: /var/lib/edged/plugins
              mountPropagation: Bidirectional
            - name: mountpoint-dir
              mountPath: /var/lib/edged/pods
              mountPropagation: Bidirectional
      volumes:
        - name: socket-dir
          hostPath:
            path: /var/lib/edged/plugins/csi-hostpath
            type: DirectoryOrCreate
        - name: registration-dir
          hostPath:
            path: /var/lib/edged/plugins_registry
            type: DirectoryOrCreate
        - name: plugins-dir
          hostPath:
            path: /var/lib/edged/plugins
            type: DirectoryOrCreate
        - name: mountpoint-dir
          hostPath:
            path: /var/lib/edged/pods
            type: DirectoryOrCreate
```

The `DaemonSet` includes:

* The following containers:

  * `node-driver-registrar` container

    Responsible for registering the UNIX Domain Socket with Edged.

  * `csi-hostpath-driver` container

    Developed by the vendor.

* The following volumes:

  * `socket-dir` hostPath volume

      Exposes `/var/lib/edged/plugins/csi-hostpath` from the edge node as `hostPath.type: DirectoryOrCreate`.

      Mounted inside the `csi-hostpath-driver` container; this is the primary means of communication between Edged and the `csi-hostpath-driver` container.

  * `registration-dir` hostPath volume

      Exposes `/var/lib/edged/plugins_registry` from the edge node.

      Mounted only in the `node-driver-registrar` container at `/registration`.

      `node-driver-registrar` uses this UNIX Domain Socket to register the CSI Driver with Edged.

  * `plugins-dir` hostPath volume

      Exposes `/var/lib/edged/plugins` from the edge node as `hostPath.type: DirectoryOrCreate`.

      Mounted inside the `csi-hostpath-driver` container.

  * `mountpoint-dir` hostPath volume

      Exposes `/var/lib/edged/pods` from the edge node.

      Mounted only in the `csi-hostpath-driver` container at `/var/lib/edged/pods`.
      Ensure [bi-directional mount propagation](https://kubernetes.io/docs/concepts/storage/volumes/#mount-propagation) is enabled,
      so that any mounts set up inside this container are propagated back to the edge node host machine.

If it is not possible to support `DaemonSet` in some KubeEdge release versions,
the cluster admins need to deploy a `Deployment` in place of the `DaemonSet`.
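
When a `Deployment` has to stand in for the `DaemonSet`, it can be pinned to a single edge node and created once per node. Below is a minimal sketch under that assumption; the node name `edge-node` is illustrative, and only the vendor driver container with its socket volume is shown — the `node-driver-registrar` container and the remaining volumes should be carried over unchanged from the `DaemonSet` example above.

```yaml
kind: Deployment
apiVersion: apps/v1
metadata:
  name: csi-hostpath-edge-node
spec:
  replicas: 1
  selector:
    matchLabels:
      app: csi-hostpath-edge
  template:
    metadata:
      labels:
        app: csi-hostpath-edge
    spec:
      # Pin the pod to one edge node; create one such Deployment per node.
      nodeName: edge-node
      hostNetwork: true
      containers:
        - name: csi-hostpath-driver
          image: quay.io/k8scsi/hostpathplugin:v1.1.0
          imagePullPolicy: IfNotPresent
          securityContext:
            privileged: true
          args:
            - "--drivername=csi-hostpath"
            - "--v=5"
            - "--nodeid=$(KUBE_NODE_NAME)"
            - "--endpoint=unix:///csi/csi.sock"
          env:
            - name: KUBE_NODE_NAME
              valueFrom:
                fieldRef:
                  fieldPath: spec.nodeName
          volumeMounts:
            - name: socket-dir
              mountPath: /csi
      volumes:
        - name: socket-dir
          hostPath:
            path: /var/lib/edged/plugins/csi-hostpath
            type: DirectoryOrCreate
```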

### Example

#### Static Provisioning Example

Static Provisioning means the volume has already been created,
so there is no need for the `External-Provisioner` to create the volume.
Users can request persistent storage using a PersistentVolume and a PersistentVolumeClaim.
The CSI NFS Driver is listed here as an example.

PersistentVolume Example:
```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: csi-nfs-pv
  labels:
    name: csi-nfs-pv
spec:
  accessModes:
  - ReadWriteMany
  capacity:
    storage: 1Gi
  csi:
    driver: csi-nfs
    volumeHandle: data-id
    volumeAttributes:
      server: 127.0.0.1
      share: /export
```
Create a PersistentVolume that refers to the NFS share information.
`server: 127.0.0.1` and `share: /export` in the `volumeAttributes` specify the NFS share information.

PersistentVolumeClaim Example:
```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: csi-nfs-pvc
spec:
  accessModes:
  - ReadWriteMany
  resources:
    requests:
      storage: 1Gi
  selector:
    matchExpressions:
    - key: name
      operator: In
      values: ["csi-nfs-pv"]
```
This example defines a PersistentVolumeClaim based on the PersistentVolume above.
Once the PersistentVolumeClaim is created successfully,
the PersistentVolume will be bound to the PersistentVolumeClaim.

Pod binding PersistentVolumeClaim Example:
```yaml
kind: Pod
apiVersion: v1
metadata:
  name: my-csi-app
spec:
  containers:
    - name: my-frontend
      image: busybox
      volumeMounts:
      - mountPath: "/data"
        name: my-csi-volume
      command: [ "sleep", "1000000" ]
  volumes:
    - name: my-csi-volume
      persistentVolumeClaim:
        claimName: csi-nfs-pvc
```
`claimName: csi-nfs-pvc` is the name of the PersistentVolumeClaim.
Before the pod starts up, the volume will be attached to the edge node and then mounted inside the pod.

#### Dynamic Provisioning Example

Dynamic Provisioning means the volume will first be created by the `External-Provisioner`,
then attached to the edge node which runs the pod,
and finally mounted into the pod.
The CSI HostPath Driver is listed here as an example.

StorageClass Example:
```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: csi-hostpath-sc
provisioner: csi-hostpath
reclaimPolicy: Delete
volumeBindingMode: Immediate
```
`provisioner: csi-hostpath` specifies which CSI Driver will be used.

PersistentVolumeClaim Example:
```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: csi-hostpath-pvc
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
  storageClassName: csi-hostpath-sc
```
`storageClassName: csi-hostpath-sc` specifies the storage class name
which is defined in the StorageClass Example.
Once the PersistentVolumeClaim is created, KubeEdge will create a volume using
the CSI Driver which is specified in the StorageClass, and the PersistentVolume
will be created in Kubernetes and bound to the PersistentVolumeClaim.

Pod binding PersistentVolumeClaim Example:
```yaml
kind: Pod
apiVersion: v1
metadata:
  name: my-csi-app
spec:
  containers:
    - name: my-frontend
      image: busybox
      volumeMounts:
      - mountPath: "/data"
        name: my-csi-volume
      command: [ "sleep", "1000000" ]
  volumes:
    - name: my-csi-volume
      persistentVolumeClaim:
        claimName: csi-hostpath-pvc
```
`claimName: csi-hostpath-pvc` is the name of the PersistentVolumeClaim.
Before the pod starts up, the volume will be attached to the edge node and then mounted inside the pod.

## Graduation Criteria

### Alpha

#### Work Items

* Managers in Edge Controller.
  * PersistentVolume Manager.
  * PersistentVolumeClaim Manager.
  * VolumeAttachment Manager.
  * Node Manager.
* CSI Driver from KubeEdge.
  * Create Volume and Delete Volume.
  * Controller Publish Volume and Controller Unpublish Volume.
* UNIX Domain Socket Server in CloudHub.
* CSI Volume Plugin (In-Tree) in Edged.
  * Create Volume and Delete Volume.
  * Controller Publish Volume and Controller Unpublish Volume.
  * Node Stage Volume and Node Unstage Volume.
  * Node Publish Volume and Node Unpublish Volume.
* Resource Message Exchange in MetaManager.
  * PersistentVolume
  * PersistentVolumeClaim
  * VolumeAttachment
  * Node

#### Graduation Criteria

* Fully compatible with the interfaces defined by the [CSI Spec](https://github.com/container-storage-interface/spec/blob/master/spec.md) from v1.0.0.
* Fully compatible with Kubernetes from v1.15.
* Some limitations are allowed in Alpha:
  * The StatefulSet or Deployment including `External-Provisioner`, `External-Attacher` and `CSI Driver from KubeEdge`
    needs to be deployed on the same host as KubeEdge Cloud Core, since a UNIX Domain Socket is used between them.
  * A single Edge Node is in charge of `Create Volume` and `Delete Volume`,
    `Controller Publish Volume` and `Controller Unpublish Volume`.
    The name of this Edge Node needs to be manually specified in the parameters of the `CSI Driver from KubeEdge`.

#### Release Plan

KubeEdge 1.1

### Beta

#### Work Items

* Support cross-host communication between CloudHub and the CSI Driver from KubeEdge.
* Edge Node Election in the CSI Driver from KubeEdge.
* Unit Tests for CSI Support.
* Integration Tests for CSI Support.
* E2E Tests for CSI Support.

#### Graduation Criteria

* All of the tests are done and passed, including Unit Tests, Integration Tests and E2E Tests.
* All of the limitations existing in Alpha are solved.

#### Release Plan

KubeEdge 1.3

### GA

#### Work Items

* Failure Recovery Mechanism.
* CSI User Guide in KubeEdge.

#### Graduation Criteria

* Failure Recovery Mechanism for High Availability.
* The User Guide is clear and makes it easy for users to use CSI in the production environment.

#### Release Plan

TBD