github.com/verrazzano/verrazzano-monitoring-operator@v0.0.30/docs/usage.md

# Usage

This document describes how to use the VMO in a standalone context, outside of a full Verrazzano installation.

## Prerequisites

The following are required to run the operator:

* Kubernetes v1.13.x or later
* [Helm v2.9.1](https://github.com/kubernetes/helm/releases/tag/v2.9.1) or later

## Installation

### Install the CRDs required by the VMO

```
kubectl apply -f k8s/crds/verrazzano-monitoring-operator-crds.yaml --validate=false
```

### Install the NGINX Ingress Controller

```
helm upgrade ingress-controller stable/nginx-ingress --install --version 1.27.0 \
  --set controller.service.enableHttp=false \
  --set controller.scope.enabled=true
```

### Install the VMO

```
kubectl apply -f k8s/manifests/verrazzano-monitoring-operator.yaml
```

This deploys the latest VMO image; alternatively, edit the manifest to reference a specific VMO image.

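To pin a specific image, one option is to rewrite the image tag with `sed` before applying the manifest. The sketch below demonstrates the transform on an inline stand-in manifest, since the image repository and tag shown here are illustrative placeholders — check `k8s/manifests/verrazzano-monitoring-operator.yaml` for the real image line:

```
# Demonstrate pinning an image tag with sed before "kubectl apply -f -".
# The image name and tag below are illustrative placeholders.
cat > /tmp/vmo-demo.yaml <<'EOF'
    spec:
      containers:
      - name: verrazzano-monitoring-operator
        image: example.io/verrazzano-monitoring-operator:latest
EOF
sed 's|\(image: .*verrazzano-monitoring-operator\):.*|\1:v0.0.30|' /tmp/vmo-demo.yaml
# Against the real manifest, pipe the same sed output into: kubectl apply -f -
```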
## VMI Examples

#### Simple VMI using NodePort access

To deploy a simple VMI (Verrazzano Monitoring Instance), first prepare a secret with the VMI username/password:

```
kubectl create secret generic vmi-secrets \
      --from-literal=username=vmo \
      --from-literal=password=changeme
```
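
Kubernetes stores secret values base64-encoded, which is useful to know when inspecting what was created. A quick local sketch of the encoding (the `kubectl` line in the comment shows how the value could be read back from the cluster):

```
# Secret values are stored base64-encoded; encode and decode locally:
printf 'changeme' | base64          # -> Y2hhbmdlbWU=
printf 'Y2hhbmdlbWU=' | base64 -d   # -> changeme
# To read the value back from the cluster, something like:
#   kubectl get secret vmi-secrets -o jsonpath='{.data.password}' | base64 -d
```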

Then:
```
kubectl apply -f k8s/examples/simple-vmi.yaml
```

Now, view the artifacts that the VMO created:

```
kubectl get deployments
NAME                             READY   UP-TO-DATE   AVAILABLE   AGE
vmi-vmi-1-api                    1/1     1            0           35s
vmi-vmi-1-es-data-0              1/1     1            0           35s
vmi-vmi-1-es-exporter            1/1     1            0           35s
vmi-vmi-1-es-ingest              1/1     1            0           35s
vmi-vmi-1-grafana                1/1     1            0           35s
vmi-vmi-1-kibana                 1/1     1            0           35s
vmi-vmi-1-prometheus-0           1/1     1            0           35s

kubectl get services
NAME                             TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)                         AGE
vmi-vmi-1-alertmanager           NodePort    10.96.120.46    <none>        9093:31685/TCP                  58s
vmi-vmi-1-alertmanager-cluster   ClusterIP   None            <none>        9094/TCP                        58s
vmi-vmi-1-api                    NodePort    10.96.83.126    <none>        9097:32645/TCP                  57s
vmi-vmi-1-es-data                NodePort    10.96.249.102   <none>        9100:32535/TCP                  58s
vmi-vmi-1-es-exporter            NodePort    10.96.95.21     <none>        9114:30699/TCP                  57s
vmi-vmi-1-es-ingest              NodePort    10.96.22.40     <none>        9200:30090/TCP                  58s
vmi-vmi-1-es-master              ClusterIP   None            <none>        9300/TCP                        58s
vmi-vmi-1-grafana                NodePort    10.96.125.142   <none>        3000:30634/TCP                  59s
vmi-vmi-1-kibana                 NodePort    10.96.142.26    <none>        5601:30604/TCP                  57s
vmi-vmi-1-prometheus             NodePort    10.96.187.224   <none>        9090:30053/TCP,9100:32382/TCP   59s
```

Now, access the endpoints for the various components, based on the example output above. Note that this works only
on a Kubernetes cluster whose worker nodes have public IP addresses.
* Grafana: http://worker_external_ip:30634
* Prometheus: http://worker_external_ip:30053
* Alertmanager: http://worker_external_ip:31685
* Kibana: http://worker_external_ip:30604
* Elasticsearch: http://worker_external_ip:30090

#### VMI with Data Volumes

This example specifies persistent storage for the various VMI components, allowing their data to
survive pod restarts and node failures.

```
kubectl apply -f k8s/examples/vmi-with-data-volumes.yaml
```

In addition to the artifacts created by the Simple VMI example, this also results in the creation of PVCs:

```
kubectl get pvc
NAME                   STATUS   VOLUME                                                                                      CAPACITY   ACCESS MODES   STORAGECLASS   AGE
vmi-vmi-1-es-data      Bound    ocid1.volume.oc1.uk-london-1.abwgiljrpmpozhpi554dcybqvwzjhxyje2pkhc74fiotuvdkids424ywne3a   50Gi       RWO            oci            30s
vmi-vmi-1-grafana      Bound    ocid1.volume.oc1.uk-london-1.abwgiljtupi46mdohk4hhnpy2laipwpfk3p44pizkrwdyft3p2vukkh2p2yq   50Gi       RWO            oci            30s
vmi-vmi-1-prometheus   Bound    ocid1.volume.oc1.uk-london-1.abwgiljtqe3v3zzyo7hwgeq4f3la5j44cxum6353rpzw55xocxvtaxuz5gqa   50Gi       RWO            oci            30s
```

#### VMI with Ingress, manually created cert, no DNS

This example requires that the ingress controller deployed above has succeeded in creating a LoadBalancer:

```
kubectl get svc
NAME                                          TYPE           CLUSTER-IP     EXTERNAL-IP      PORT(S)         AGE
ingress-controller-nginx-ingress-controller   LoadBalancer   10.96.203.26   140.238.80.114   443:31379/TCP   91s
```

Using an ingress controller _without_ a separate cert-manager requires that we create a TLS secret manually for this VMI. We'll
create a self-signed cert for this example:

```
export DNSDOMAINNAME=dev.vmi1.verrazzano.io
# NOTE - double-check your operating system's openssl.cnf location...
cp /etc/ssl/openssl.cnf /tmp/
echo '[ subject_alt_name ]' >> /tmp/openssl.cnf
echo "subjectAltName = DNS:*.$DNSDOMAINNAME, DNS:api.$DNSDOMAINNAME, DNS:grafana.$DNSDOMAINNAME, DNS:help.$DNSDOMAINNAME, DNS:kibana.$DNSDOMAINNAME, DNS:prometheus.$DNSDOMAINNAME, DNS:elasticsearch.$DNSDOMAINNAME" >> /tmp/openssl.cnf
openssl req -x509 -nodes -newkey rsa:2048 \
  -config /tmp/openssl.cnf \
  -extensions subject_alt_name \
  -keyout tls.key \
  -out tls.crt \
  -subj "/C=US/ST=Oregon/L=Portland/O=VMO/OU=PDX/CN=*.$DNSDOMAINNAME/emailAddress=postmaster@$DNSDOMAINNAME"
kubectl create secret tls vmi-1-tls --key=tls.key --cert=tls.crt
```
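
It's worth confirming the SANs actually made it into the cert before wiring it to the ingress. For a self-contained illustration, the snippet below generates a throwaway cert using openssl's `-addext` shortcut (OpenSSL 1.1.1+) and inspects it; to check the cert created above, point the `openssl x509` command at `tls.crt` instead:

```
# Generate a throwaway self-signed cert with one SAN, then inspect its SANs.
# Run the same "openssl x509" inspection against tls.crt from the step above.
openssl req -x509 -nodes -newkey rsa:2048 -days 1 \
  -keyout /tmp/demo.key -out /tmp/demo.crt \
  -subj "/CN=*.dev.vmi1.verrazzano.io" \
  -addext "subjectAltName=DNS:grafana.dev.vmi1.verrazzano.io"
openssl x509 -in /tmp/demo.crt -noout -ext subjectAltName
```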

And create the VMI:
```
kubectl apply -f k8s/examples/vmi-with-ingress.yaml
```

Now, we can access our VMI endpoints through the LoadBalancer. In the above example, our LoadBalancer IP is 140.238.80.114, and our VMI
base URI is dev.vmi1.verrazzano.io. We'll use host headers:

```
curl -k --user vmo:changeme https://140.238.80.114 --header "Host: grafana.dev.vmi1.verrazzano.io"
curl -k --user vmo:changeme https://140.238.80.114 --header "Host: prometheus.dev.vmi1.verrazzano.io"
curl -k --user vmo:changeme https://140.238.80.114 --header "Host: kibana.dev.vmi1.verrazzano.io"
curl -k --user vmo:changeme https://140.238.80.114 --header "Host: elasticsearch.dev.vmi1.verrazzano.io"
curl -k --user vmo:changeme https://140.238.80.114 --header "Host: api.dev.vmi1.verrazzano.io"
```
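
`curl --user` simply adds an HTTP Basic `Authorization` header, so the same requests work from any client that can set headers. A small sketch of the equivalence:

```
# --user vmo:changeme is equivalent to sending this header explicitly:
AUTH="Basic $(printf 'vmo:changeme' | base64)"
echo "$AUTH"    # -> Basic dm1vOmNoYW5nZW1l
# e.g.: curl -k https://140.238.80.114 -H "Authorization: $AUTH" \
#         -H "Host: grafana.dev.vmi1.verrazzano.io"
```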

#### VMI with Ingress, external-dns and cert-manager

The VMO was designed to work with [external-dns](https://github.com/helm/charts/tree/master/stable/external-dns) and
[cert-manager](https://github.com/jetstack/cert-manager); it adds the appropriate annotations to
the ingresses it creates so that external-dns and cert-manager take effect.

If cert-manager is installed via Helm before running the above example, the manual step of creating the TLS cert isn't necessary.

Similarly, if external-dns is installed via Helm before running the above example, passing host headers to the curl
commands isn't necessary.
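
As an illustration of the general mechanism only (the exact annotation keys depend on the VMO, cert-manager, and external-dns versions in use — check the VMO source for the ones it actually emits), an ingress wired up for both tools typically carries annotations along these lines:

```
# Illustrative config fragment -- annotation keys vary across versions.
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: vmi-vmi-1-grafana
  annotations:
    kubernetes.io/ingress.class: nginx
    cert-manager.io/cluster-issuer: my-issuer    # hypothetical issuer; tells cert-manager to issue the TLS cert
    external-dns.alpha.kubernetes.io/ttl: "60"   # optional record TTL honored by external-dns
spec:
  tls:
  - hosts:
    - grafana.dev.vmi1.verrazzano.io             # external-dns creates DNS records from the ingress hosts
    secretName: vmi-1-tls
```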