# Ceph CSI Plugin

The configuration here is for the Ceph RBD driver, migrated from the k8s config
[documentation](https://github.com/ceph/ceph-csi/blob/master/docs/deploy-rbd.md). It
can be modified for the CephFS driver, as used
[here](https://github.com/ceph/ceph-csi/blob/master/docs/deploy-cephfs.md).

## Deployment

The Ceph CSI Node task requires that
[`privileged = true`](https://www.nomadproject.io/docs/drivers/docker#privileged)
be set. This is not needed for the Controller task.

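As a minimal sketch (the job files in this directory are authoritative; the
task name and image tag here are only illustrative), the Node plugin's Docker
task sets `privileged` in its `config` block and registers itself with a
`csi_plugin` stanza:

```hcl
task "ceph-node" {
  driver = "docker"

  config {
    image      = "quay.io/cephcsi/cephcsi:canary" # illustrative image tag
    privileged = true                             # required for the Node task only
  }

  csi_plugin {
    id        = "cephrbd" # must match the plugin_id referenced by volumes
    type      = "node"    # the Controller task uses type = "controller"
    mount_dir = "/csi"    # where Nomad mounts the CSI socket in the container
  }
}
```
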
### Plugin Arguments

Refer to the official plugin
[guide](https://github.com/ceph/ceph-csi/blob/master/docs/deploy-rbd.md).

* `--type=rbd`: driver type `rbd` (or alternately `cephfs`).

* `--endpoint=${CSI_ENDPOINT}`: if you don't use the `CSI_ENDPOINT`
  environment variable, this option must match the `mount_dir` specified in
  the `csi_plugin` stanza for the task.

* `--nodeid=${node.unique.id}`: a unique ID for the node the task is running
  on.

* `--instanceid=${NOMAD_ALLOC_ID}`: a unique ID distinguishing this instance
  of Ceph CSI among other instances when sharing Ceph clusters across CSI
  instances for provisioning. Used for topology-aware deployments.

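A sketch of how these flags might be wired into the node task's `config`
block (the exact arguments in the job files in this directory may differ):

```hcl
config {
  image = "quay.io/cephcsi/cephcsi:canary" # illustrative image tag

  args = [
    "--type=rbd",                     # or "cephfs" for the CephFS driver
    "--endpoint=${CSI_ENDPOINT}",     # socket path provided by Nomad
    "--nodeid=${node.unique.id}",     # unique ID of the client node
    "--instanceid=${NOMAD_ALLOC_ID}", # distinguishes this CSI instance
  ]
}
```
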
### Run the Plugins

Run the plugins:

```
$ nomad job run -var-file=nomad.vars ./plugin-cephrbd-controller.nomad
==> Monitoring evaluation "c8e65575"
    Evaluation triggered by job "plugin-cephrbd-controller"
==> Monitoring evaluation "c8e65575"
    Evaluation within deployment: "b15b6b2b"
    Allocation "1955d2ab" created: node "8dda4d46", group "cephrbd"
    Evaluation status changed: "pending" -> "complete"
==> Evaluation "c8e65575" finished with status "complete"

$ nomad job run -var-file=nomad.vars ./plugin-cephrbd-node.nomad
==> Monitoring evaluation "5e92c5dc"
    Evaluation triggered by job "plugin-cephrbd-node"
==> Monitoring evaluation "5e92c5dc"
    Allocation "5bb9e57a" created: node "8dda4d46", group "cephrbd"
    Evaluation status changed: "pending" -> "complete"
==> Evaluation "5e92c5dc" finished with status "complete"

$ nomad plugin status cephrbd
ID                   = cephrbd
Provider             = rbd.csi.ceph.com
Version              = canary
Controllers Healthy  = 1
Controllers Expected = 1
Nodes Healthy        = 1
Nodes Expected       = 1

Allocations
ID        Node ID   Task Group  Version  Desired  Status   Created    Modified
1955d2ab  8dda4d46  cephrbd     0        run      running  3m47s ago  3m37s ago
5bb9e57a  8dda4d46  cephrbd     0        run      running  3m44s ago  3m43s ago
```

### Create a Volume

The `secrets` block for the volume must be populated with the `userID` and
`userKey` values pulled from `/etc/ceph/ceph.client.<user>.keyring`.

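A sketch of what such a volume specification might contain (this is not
necessarily the exact `volume.hcl` in this directory; the user, key, cluster
ID, and pool below are placeholders you must replace with your own values):

```hcl
id        = "testvolume"
name      = "testvolume"
type      = "csi"
plugin_id = "cephrbd"

capacity_min = "1GiB"
capacity_max = "1GiB"

capability {
  access_mode     = "single-node-writer"
  attachment_mode = "file-system"
}

secrets {
  userID  = "<user>"                 # from /etc/ceph/ceph.client.<user>.keyring
  userKey = "<key from the keyring>" # see the keyring extraction example below
}

parameters {
  clusterID = "<ceph cluster fsid>"  # placeholder
  pool      = "<rbd pool name>"      # placeholder
}
```
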
```
$ nomad volume create ./volume.hcl
Created external volume 0001-0024-e9ba69fa-67ff-5920-b374-84d5801edd19-0000000000000002-3603408d-a9ca-11eb-8ace-080027c5bc64 with ID testvolume
```

### Register a Volume

You can register a volume that already exists in Ceph. In this case, you'll
need to provide the `external_id` field. The `ceph-csi-id.tf` Terraform file
in this directory can be used to generate the correctly-formatted ID. This is
based on the [Ceph-CSI ID
Format](https://github.com/ceph/ceph-csi/blob/71ddf51544be498eee03734573b765eb04480bb9/internal/util/volid.go#L27)
(see
[examples](https://github.com/ceph/ceph-csi/blob/71ddf51544be498eee03734573b765eb04480bb9/internal/util/volid_test.go#L33)).

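A sketch of a registration spec, assuming the RBD image already exists in Ceph
(the `external_id` shown here is just the value returned by the earlier
`nomad volume create` example; yours must be generated for your own cluster
and image, and the user and key are placeholders):

```hcl
id          = "testvolume"
name        = "testvolume"
type        = "csi"
plugin_id   = "cephrbd"
external_id = "0001-0024-e9ba69fa-67ff-5920-b374-84d5801edd19-0000000000000002-3603408d-a9ca-11eb-8ace-080027c5bc64"

capability {
  access_mode     = "single-node-writer"
  attachment_mode = "file-system"
}

secrets {
  userID  = "<user>"                 # placeholder
  userKey = "<key from the keyring>" # placeholder
}
```

A spec like this would be passed to `nomad volume register` rather than
`nomad volume create`.
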
## Running Ceph in Vagrant

For demonstration purposes only, you can run Ceph as a single-container Nomad
job on the Vagrant VM managed by the `Vagrantfile` at the top level of this
repo.

The `./run-ceph.sh` script in this directory will deploy the demo container
and wait for it to be ready. The data served by this container is entirely
ephemeral and will be destroyed once it stops; you should not use this as an
example of how to run production Ceph workloads!

```sh
$ ./run-ceph.sh

nomad job run -var-file=nomad.vars ./ceph.nomad
==> Monitoring evaluation "68dde586"
    Evaluation triggered by job "ceph"
==> Monitoring evaluation "68dde586"
    Evaluation within deployment: "79e23968"
    Allocation "77fd50fb" created: node "ca3ee034", group "ceph"
    Evaluation status changed: "pending" -> "complete"
==> Evaluation "68dde586" finished with status "complete"

waiting for Ceph to be ready..............................
ready!
```

The setup script in the Ceph container configures a key, which you'll need for
creating volumes. You can extract the key from the keyring via `nomad alloc
exec`:

```
$ nomad alloc exec 77f cat /etc/ceph/ceph.client.admin.keyring | awk '/key/{print $3}'
AQDsIoxgHqpeBBAAtmd9Ndu4m1xspTbvwZdIzA==
```

To run the Controller plugin against this Ceph deployment, you'll need to use
the plugin job in the file `plugin-cephrbd-controller-vagrant.nomad` so that
it can reach the correct ports.

## Ceph CSI Driver Source

- https://github.com/ceph/ceph-csi