# Export

See the [ArgoCDExport Reference][argocdexport_reference] for the full list of properties to configure the export process for an Argo CD cluster.

## Requirements

The following sections assume that an existing Argo CD cluster named `example-argocd` has been deployed by the operator using the existing basic `ArgoCD` example.

``` bash
kubectl apply -f examples/argocd-basic.yaml
```

If an `ArgoCDExport` resource is created that references an Argo CD cluster that does not exist, the operator will simply wait until the Argo CD cluster exists before taking any further action in the export process.

## ArgoCDExport

The following example shows the most minimal valid manifest to export (backup) an Argo CD cluster that was provisioned using the Argo CD Operator.

``` yaml
apiVersion: argoproj.io/v1alpha1
kind: ArgoCDExport
metadata:
  name: example-argocdexport
  labels:
    example: basic
spec:
  argocd: example-argocd
```

This creates a new `ArgoCDExport` resource named `example-argocdexport`. The operator will provision a
Kubernetes Job to run the built-in Argo CD export utility on the specified Argo CD cluster.

If the `Schedule` property is set using valid Cron syntax, the operator will provision a CronJob to run the export on
a recurring schedule. Each time the CronJob executes, the export data will be overwritten by the operator, keeping only
the most recent version.
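
For example, the following sketch sets up a recurring export; it assumes the `Schedule` property described above is expressed as `spec.schedule` in standard Cron syntax (see the [ArgoCDExport Reference][argocdexport_reference] for the authoritative field name and format).

``` yaml
apiVersion: argoproj.io/v1alpha1
kind: ArgoCDExport
metadata:
  name: example-argocdexport
  labels:
    example: schedule
spec:
  argocd: example-argocd
  schedule: "0 1 * * *"  # run the export every day at 01:00
```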

The data that is exported by the Job is owned by the `ArgoCDExport` resource, not the Argo CD cluster. The cluster can
therefore come and go, starting up each time by importing the same backup data, if desired.

See the `ArgoCD` [Import Reference][argocd_import] documentation for more information on importing the backup data when starting a new
Argo CD cluster.

## Export Data

The Argo CD export data consists of a series of Kubernetes manifests representing the various cluster resources, stored as YAML in a single file. This exported YAML file is then `AES` encrypted before being saved to the storage backend of choice.

See the Argo CD [Disaster Recovery][argocd_dr] documentation for more information on the Argo CD export data.

## Export Secrets

An export Secret is used by the operator to hold the backup encryption key, as well as credentials if using a cloud
provider storage backend. The operator will create the Secret if it does not already exist, using the naming convention
`[EXPORT NAME]-export`. For example, for the `ArgoCDExport` resource named `example-argocdexport` from above, the
name of the generated Secret would be `example-argocdexport-export`.

The `SecretName` property on the `ArgoCDExport` Storage Spec can be used to change the name of the Secret.

``` yaml
apiVersion: argoproj.io/v1alpha1
kind: ArgoCDExport
metadata:
  name: example-argocdexport
  labels:
    example: secret-name
spec:
  argocd: example-argocd
  storage:
    secretName: my-backup-secret
```

The following property is common across all storage backends. See the sections below for additional properties that are
required for the different cloud provider backends.

**backup.key**

The `backup.key` is the encryption key used by the operator when encrypting or decrypting the exported data. This key
will be generated automatically if not provided.
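
If the key was generated automatically, it can be read back from the export Secret; a quick sketch, assuming the generated Secret name `example-argocdexport-export` from above.

``` bash
# Decode the auto-generated backup encryption key from the export Secret
kubectl get secret example-argocdexport-export -n argocd \
  -o jsonpath='{.data.backup\.key}' | base64 -d
```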

## Storage Backend

The exported data can be saved on a variety of backend storage locations. This can be persisted locally in the
Kubernetes cluster or remotely using a cloud provider.

See the `ArgoCDExport` [Storage Reference][storage_reference] for information on controlling the underlying storage
options.

### Local

By default, the operator will use a `local` storage backend for the export process. The operator will provision a
PersistentVolumeClaim using the defaults below to store the export data locally in the cluster on a PersistentVolume.

``` yaml
apiVersion: argoproj.io/v1alpha1
kind: ArgoCDExport
metadata:
  name: example-argocdexport
  labels:
    example: pvc
spec:
  argocd: example-argocd
  storage:
    backend: local
    pvc:
      accessModes:
        - ReadWriteOnce
      resources:
        requests:
          storage: 2Gi
      storageClassName: standard
```

#### Local Example

Create an `ArgoCDExport` resource in the `argocd` namespace using the basic example.

``` bash
kubectl apply -n argocd -f examples/argocdexport-basic.yaml
```

You can view the list of `ArgoCDExport` resources.

``` bash
kubectl get argocdexports -n argocd
```
```
NAME                   AGE
example-argocdexport   15m
```
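
You can also inspect an individual export to see any status reported by the operator.

``` bash
kubectl get argocdexport example-argocdexport -n argocd -o yaml
```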

Creating the resource will result in the operator provisioning a Kubernetes Job to perform the export process. The Job
should not take long to complete.

``` bash
kubectl get pods -n argocd -l job-name=example-argocdexport
```
```
NAME                         READY   STATUS      RESTARTS   AGE
example-argocdexport-q92qm   0/1     Completed   0          1m
```

If the Job fails for some reason, view the logs of the Pod to help in troubleshooting.

``` bash
kubectl logs example-argocdexport-q92qm -n argocd
```

Output similar to what is shown below indicates a successful export.

```
exporting argo-cd
creating argo-cd backup
encrypting argo-cd backup
argo-cd export complete
```

View the PersistentVolumeClaim created by the operator for the export data.

``` bash
kubectl get pvc -n argocd
```
```
NAME                   STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
example-argocdexport   Bound    pvc-6d15143d-184a-4e5a-a185-6b86924af8bd   2Gi        RWO            gp2            39s
```

There should also be a corresponding PersistentVolume if dynamic volume support is enabled on the Kubernetes cluster.

``` bash
kubectl get pv
```
```
NAME                                       CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                         STORAGECLASS   REASON   AGE
pvc-6d15143d-184a-4e5a-a185-6b86924af8bd   2Gi        RWO            Delete           Bound    argocd/example-argocdexport   gp2                     34s
```
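
The exported (and encrypted) backup file lives on this PersistentVolumeClaim. If you want to look at it directly, one
option is a throwaway Pod that mounts the claim; a minimal sketch, assuming the claim name `example-argocdexport` from
above and an arbitrary mount path of `/backups`.

``` yaml
apiVersion: v1
kind: Pod
metadata:
  name: export-inspector
  namespace: argocd
spec:
  restartPolicy: Never
  containers:
    - name: inspect
      image: busybox
      # List the contents of the export volume, then exit
      command: ["ls", "-l", "/backups"]
      volumeMounts:
        - name: export-data
          mountPath: /backups
  volumes:
    - name: export-data
      persistentVolumeClaim:
        claimName: example-argocdexport
```

Check the result with `kubectl logs export-inspector -n argocd` and delete the Pod when finished.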

### AWS

The operator can use an Amazon Web Services S3 bucket to store the export data.

``` yaml
apiVersion: argoproj.io/v1alpha1
kind: ArgoCDExport
metadata:
  name: example-argocdexport
  labels:
    example: aws
spec:
  argocd: example-argocd
  storage:
    backend: aws
    secretName: aws-backup-secret
```

#### AWS Secrets

The storage `SecretName` property should reference an existing Secret that contains the AWS credentials and bucket information.

``` yaml
apiVersion: v1
kind: Secret
metadata:
  name: aws-backup-secret
  labels:
    example: aws
type: Opaque
data:
  aws.bucket.name: ...
  aws.bucket.region: ...
  aws.access.key.id: ...
  aws.secret.access.key: ...
```

The following properties must exist on the Secret referenced in the `ArgoCDExport` resource when using `aws` as the storage backend.

**aws.bucket.name**

The name of the AWS S3 bucket. This should be the name of the bucket only; do not prefix the value with `s3://`, as the operator will handle this automatically.

**aws.bucket.region**

The region of the AWS S3 bucket.

**aws.access.key.id**

The AWS IAM Access Key ID.

**aws.secret.access.key**

The AWS IAM Secret Access Key.
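
As an alternative to writing the manifest by hand (values under `data` must be base64 encoded), the Secret can be created with `kubectl`; a sketch, assuming hypothetical bucket and credential values.

``` bash
kubectl create secret generic aws-backup-secret -n argocd \
  --from-literal=aws.bucket.name=my-argocd-backups \
  --from-literal=aws.bucket.region=us-east-1 \
  --from-literal=aws.access.key.id=AKIA... \
  --from-literal=aws.secret.access.key=...
```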

#### AWS Example

Once the required AWS credentials are set on the export Secret, create the `ArgoCDExport` resource in the `argocd`
namespace using the included AWS example.

``` bash
kubectl apply -n argocd -f examples/argocdexport-aws.yaml
```

Creating the resource will result in the operator provisioning a Kubernetes Job to perform the export process.

``` bash
kubectl get pods -n argocd -l job-name=example-argocdexport
```

The Job should not take long to complete.

```
NAME                         READY   STATUS      RESTARTS   AGE
example-argocdexport-q92qm   0/1     Completed   0          1m
```

If the Job fails for some reason, view the logs of the Pod to help in troubleshooting.

``` bash
kubectl logs example-argocdexport-q92qm -n argocd
```

Output similar to what is shown below indicates a successful export.

```
exporting argo-cd
creating argo-cd backup
encrypting argo-cd backup
pushing argo-cd backup to aws
make_bucket: example-argocdexport
upload: ../../backups/argocd-backup.yaml to s3://example-argocdexport/argocd-backup.yaml
argo-cd export complete
```

#### AWS IAM Configuration

TODO: Add the required Role and Service Account configuration needed through AWS.

### Azure

The operator can use a Microsoft Azure Storage Container to store the export data as a Blob.

``` yaml
apiVersion: argoproj.io/v1alpha1
kind: ArgoCDExport
metadata:
  name: example-argocdexport
  labels:
    example: azure
spec:
  argocd: example-argocd
  storage:
    backend: azure
    secretName: azure-backup-secret
```

#### Azure Secrets

The storage `SecretName` property should reference an existing Secret that contains the Azure credentials and container information.

``` yaml
apiVersion: v1
kind: Secret
metadata:
  name: azure-backup-secret
  labels:
    example: azure
type: Opaque
data:
  azure.container.name: ...
  azure.service.id: ...
  azure.service.cert: |
    ...
  azure.storage.account: ...
  azure.tenant.id: ...
```

The following properties must exist on the Secret referenced in the `ArgoCDExport` resource when using `azure` as the storage backend.

**azure.container.name**

The name of the Azure Storage Container. This should be the name of the container only. If the container does not
already exist, the operator will attempt to create it.

**azure.service.id**

The ID for the Service Principal that will be used to access Azure Storage.

**azure.service.cert**

The combination of certificate and private key for authenticating the Service Principal that will be used to access Azure Storage.

**azure.storage.account**

The name of the Azure Storage Account that owns the Container.

**azure.tenant.id**

The ID for the Azure Tenant that owns the Service Principal.
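
As with the AWS backend, this Secret can also be created with `kubectl`; a sketch, assuming hypothetical values and a local `service-principal.pem` file containing the certificate and private key.

``` bash
kubectl create secret generic azure-backup-secret -n argocd \
  --from-literal=azure.container.name=argocd-backups \
  --from-literal=azure.service.id=00000000-0000-0000-0000-000000000000 \
  --from-literal=azure.storage.account=mystorageaccount \
  --from-literal=azure.tenant.id=00000000-0000-0000-0000-000000000000 \
  --from-file=azure.service.cert=service-principal.pem
```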

#### Azure Example

Once the required Azure credentials are set on the export Secret, create the `ArgoCDExport` resource in the `argocd`
namespace using the included Azure example.

``` bash
kubectl apply -n argocd -f examples/argocdexport-azure.yaml
```

Creating the resource will result in the operator provisioning a Kubernetes Job to perform the export process.

``` bash
kubectl get pods -n argocd -l job-name=example-argocdexport
```

The Job should not take long to complete.

```
NAME                         READY   STATUS      RESTARTS   AGE
example-argocdexport-q92qm   0/1     Completed   0          1m
```

If the Job fails for some reason, view the logs of the Pod to help in troubleshooting.

``` bash
kubectl logs example-argocdexport-q92qm -n argocd
```

Output similar to what is shown below indicates a successful export.

```
exporting argo-cd
creating argo-cd backup
encrypting argo-cd backup
pushing argo-cd backup to azure
[
  {
    "cloudName": "...",
    "homeTenantId": "...",
    "id": "...",
    "isDefault": true,
    "managedByTenants": [],
    "name": "...",
    "state": "Enabled",
    "tenantId": "...",
    "user": {
      "name": "...",
      "type": "servicePrincipal"
    }
  }
]
{
  "created": false
}
Finished[#############################################################]  100.0000%
{
  "etag": "\"0x000000000000000\"",
  "lastModified": "2020-04-20T16:20:00+00:00"
}
argo-cd export complete
```

#### Azure AD Configuration

TODO: Add the required Role and Service Account configuration needed through Azure Active Directory.

### GCP

The operator can use a Google Cloud Storage bucket to store the export data.

``` yaml
apiVersion: argoproj.io/v1alpha1
kind: ArgoCDExport
metadata:
  name: example-argocdexport
  labels:
    example: gcp
spec:
  argocd: example-argocd
  storage:
    backend: gcp
    secretName: gcp-backup-secret
```

#### GCP Secrets

The storage `SecretName` property should reference an existing Secret that contains the GCP credentials and bucket information.

``` yaml
apiVersion: v1
kind: Secret
metadata:
  name: gcp-backup-secret
  labels:
    example: gcp
type: Opaque
data:
  gcp.bucket.name: ...
  gcp.project.id: ...
  gcp.key.file: |
    ...
```

The following properties must exist on the Secret referenced in the `ArgoCDExport` resource when using `gcp` as the storage backend.

**gcp.bucket.name**

The name of the GCP storage bucket. This should be the name of the bucket only; do not prefix the value with `gs://`, as the operator will handle this automatically.

**gcp.project.id**

The project ID to use for authenticating with GCP. This can be the text name or numeric ID for the GCP project.

**gcp.key.file**

The GCP key file that contains the service account authentication credentials. The key file can be in JSON (preferred) or P12 (legacy) format.
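
Here as well, the Secret can be created with `kubectl`; a sketch, assuming hypothetical values and a local `key.json` service account key file.

``` bash
kubectl create secret generic gcp-backup-secret -n argocd \
  --from-literal=gcp.bucket.name=my-argocd-backups \
  --from-literal=gcp.project.id=my-gcp-project \
  --from-file=gcp.key.file=key.json
```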

#### GCP Example

Once the required GCP credentials are set on the export Secret, create the `ArgoCDExport` resource in the `argocd`
namespace using the included GCP example.

``` bash
kubectl apply -n argocd -f examples/argocdexport-gcp.yaml
```

This will result in the operator creating a Job to perform the export process.

``` bash
kubectl get pods -n argocd -l job-name=example-argocdexport
```

The Job should not take long to complete.

```
NAME                         READY   STATUS      RESTARTS   AGE
example-argocdexport-q92qm   0/1     Completed   0          1m
```

If the Job fails for some reason, view the logs of the Pod to help in troubleshooting.

``` bash
kubectl logs example-argocdexport-q92qm -n argocd
```

Output similar to what is shown below indicates a successful export.

```
exporting argo-cd
creating argo-cd backup
encrypting argo-cd backup
pushing argo-cd backup to gcp
Activated service account credentials for: [argocd-export@example-project.iam.gserviceaccount.com]
Creating gs://example-argocdexport/...
Copying file:///backups/argocd-backup.yaml [Content-Type=application/octet-stream]...
/ [1 files][  7.8 KiB/  7.8 KiB]
Operation completed over 1 objects/7.8 KiB.
argo-cd export complete
```

#### GCP IAM Configuration

TODO: Add the required Role and Service Account configuration needed through GCP.

## Import

See the `ArgoCD` [Import Reference][argocd_import] documentation for more information on importing the backup data when starting a new
Argo CD cluster.
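
For example, a new cluster can be pointed at an existing export when it is created; a minimal sketch, assuming the import options are expressed under `spec.import` with the name of the `ArgoCDExport` resource (see the [Import Reference][argocd_import] for the authoritative fields).

``` yaml
apiVersion: argoproj.io/v1alpha1
kind: ArgoCD
metadata:
  name: example-argocd
  labels:
    example: import
spec:
  import:
    # Name of the ArgoCDExport resource whose backup data should be restored
    name: example-argocdexport
```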

[argocdexport_reference]:../reference/argocdexport.md
[storage_reference]:../reference/argocdexport.md#storage-options
[argocd_dr]:https://argoproj.github.io/argo-cd/operator-manual/disaster_recovery/
[argocd_import]:../reference/argocd.md#import-options