# ForgeRock Directory Services Helm chart

Deploy one or more ForgeRock Directory Server instances using persistent volume claims
and StatefulSets.

## Sample Usage

To deploy to a Kubernetes cluster:

`helm install --set "instance=userstore" ds`

This will install a sample DS userstore.

The instance will be available in the cluster as userstore-0.

If you wish to connect an LDAP browser on your local machine to this instance, you can use:

`kubectl port-forward userstore-0 1389:1389`

and then open a connection to `ldap://localhost:1389`.

The default password is "password".
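
As a quick smoke test, you can also query the forwarded port with a command-line client such as `ldapsearch`. This is a sketch: the bind DN `cn=Directory Manager` and base DN `dc=example,dc=com` are assumptions, so substitute the values for your deployment.

```bash
# Query the root entry of the assumed base DN over the forwarded port.
# The bind DN, password, and base DN below are illustrative defaults.
ldapsearch -H ldap://localhost:1389 \
  -D "cn=Directory Manager" -w password \
  -b "dc=example,dc=com" -s base "(objectClass=*)"
```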

## Persistent Disks

The StatefulSet uses a persistent volume claim template to allocate storage for each directory server pod. Persistent volume claims are not deleted when the StatefulSet is deleted. In other words, performing a `helm delete ds-release` will *not* delete the underlying storage. If you want to reclaim the storage, delete the PVC:

```bash
kubectl get pvc
kubectl delete pvc userstore-0
```

## Values.yaml

Please refer to `values.yaml`. There are a number of variables you can set on the Helm command line, or
in your own `custom.yaml`, to control the behavior of the deployment. The features described below
are all controlled by variables in `values.yaml`.
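
For example, a minimal sketch of the two styles of override (the file name `custom.yaml` is arbitrary, and any keys it contains must match those defined in `values.yaml`):

```bash
# Override a single value directly on the command line:
helm install --set "instance=userstore" ds

# Or collect overrides in a file and pass it at install time:
helm install -f custom.yaml ds
```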

## Diagnostics and Troubleshooting

Use `kubectl exec` to get a shell into the running container. For example:

`kubectl exec -it userstore-0 -- bash`

There are a number of utility scripts found under `/opt/opendj/scripts`, as well as the
directory server commands in `/opt/opendj/bin`.

Use `kubectl logs` to see the pod logs:

`kubectl logs userstore-0 -f`
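
For a quick health check from inside the container, you can run the `status` command from `/opt/opendj/bin`. A sketch, assuming the shell session shown above; `status` prompts for administrator credentials if they are not supplied:

```bash
# From a shell inside the pod (see kubectl exec above):
cd /opt/opendj
./bin/status
```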

## Scaling and replication

To scale a deployment, set the number of replicas in `values.yaml`. See `values.yaml`
for the various options. Each node in the StatefulSet is a combined directory and replication server. Note that the topology of the set cannot be changed after installation by scaling the StatefulSet. You cannot add or remove DS nodes without reinitializing the cluster from scratch or from a backup. Plan the desired number of DS/RS instances in advance.
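
As a sketch, assuming the chart exposes the replica count as a `replicas` value (check `values.yaml` for the exact key), the topology would be fixed at install time like this:

```bash
# Choose the replica count up front; the StatefulSet cannot be safely
# resized after the first install.
helm install --set "instance=userstore" --set "replicas=3" ds
```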

## Backup

If backup is enabled, each pod in the StatefulSet mounts a shared backup
volume claim (PVC) at `bak/`. This PVC holds the contents of the backups. You must size this PVC according
to the amount of backup data you wish to retain. Old backups must be purged manually. The backup PVC must
be a ReadWriteMany volume type (NFS, for example).
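
A sketch of enabling backup at install time, assuming the chart exposes a `backup.enabled` value mirroring the `restore.enabled` value described below (verify the key in `values.yaml`):

```bash
# The shared backup PVC must already exist and be ReadWriteMany.
helm install --set "instance=userstore" --set "backup.enabled=true" ds
```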

A backup can be initiated manually by exec-ing into the container and running the `scripts/backup.sh` command. For example:

```bash
kubectl exec -it userstore-0 -- bash
./scripts/backup.sh
```

The backups can be listed using `scripts/list-backup.sh`.

## Restore

The chart can restore the state of the directory from a previous backup. Set the value `restore.enabled=true` at deployment time. The restore process will not overwrite a `data/` PVC that already contains data.
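
For example (a sketch; `restore.enabled` is the value named above, and the backup PVC must already hold the backup to restore from):

```bash
# Restore only populates an empty data/ PVC; it will not overwrite
# a claim that already contains data.
helm install --set "instance=userstore" --set "restore.enabled=true" ds
```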

## Benchmarking

If you are benchmarking on a cloud provider, make sure you use an SSD storage class, as the directory server is very sensitive to disk performance.
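
To see which storage classes your cluster offers, and which one is the default:

```bash
# List available storage classes; pick an SSD-backed class for DS data.
kubectl get storageclass
```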