---
title: "Cloud Configuration"
weight: 15
---

# Cloud Support

Using a few cloud providers as a storage engine is now supported. When enabled, the local file system is only used for reading configuration files; everything else relies on the data from the cloud provider matching your configuration.

Currently, the following providers are supported:

  - AWS S3
  - Google Storage (GS)
  - Azure
  - Custom (S3-compatible clouds)

{{< callout context="caution" title="Caution" icon="alert-triangle" >}}
[go-cloud](https://github.com/google/go-cloud) was used to support all of these providers. They should all work, but only S3 and Google have been properly tested.
{{< /callout >}}

Most of these providers rely on the system configuration for authentication. Here are some references for each respective environment:

  * Google Storage:
    * [https://cloud.google.com/docs/authentication#service-accounts](https://cloud.google.com/docs/authentication#service-accounts)
    * [https://cloud.google.com/docs/authentication/provide-credentials-adc#local-user-cred](https://cloud.google.com/docs/authentication/provide-credentials-adc#local-user-cred)
  * S3: [https://docs.aws.amazon.com/sdk-for-go/api/aws/session/](https://docs.aws.amazon.com/sdk-for-go/api/aws/session/)
  * Azure: [https://pkg.go.dev/github.com/Azure/azure-sdk-for-go/sdk/storage/azblob](https://pkg.go.dev/github.com/Azure/azure-sdk-for-go/sdk/storage/azblob)

## Cloud Configuration

### General

```yaml
storage_engine:
  any_label:
    kind: cloud
    cloud_type: s3   ## one of: s3, gs, azblob
    bucket_name: ""
    prefix: "dummy"
```

All authentication and authorization are handled outside of GDG.
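
For example, a Google Storage engine entry might look like the following sketch (the `gcs_backups` label, bucket name, and prefix are placeholders, not defaults):

```yaml
storage_engine:
  gcs_backups:           ## arbitrary label, referenced from a context via `storage`
    kind: cloud
    cloud_type: gs       ## Google Storage; credentials come from the environment (e.g. ADC)
    bucket_name: "my-gdg-bucket"
    prefix: "grafana"
```

Because credentials never appear in this block, the environment running GDG must already be authenticated against the provider, per the references above.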

### Custom

Examples of S3-compatible clouds include [MinIO](https://min.io/product/s3-compatibility) and [Ceph](https://docs.ceph.com/en/latest/radosgw/s3/).

```yaml
storage_engine:
  some_label:
    custom: true   ## Required. If not set to true, most of the custom configuration below is disregarded.
    kind: cloud
    cloud_type: s3
    prefix: dummy
    bucket_name: "mybucket"
    access_id: ""   ## Can also be read from the AWS_ACCESS_KEY environment variable; the config file takes precedence.
    secret_key: ""  ## Can also be read from the AWS_SECRET_KEY environment variable; the config file takes precedence.
    init_bucket: "true" ## Only supported for custom workflows. Attempts to create the bucket if it does not exist.
    endpoint: "http://localhost:9000"
    region: us-east-1
    ssl_enabled: "false"
```

For a custom cloud, the `cloud_type` will be `s3`; `access_id` and `secret_key` are required and ONLY supported for custom clouds. Additionally, the `custom` flag needs to be set to `true`. A minimal sketch follows the list below.

 - `init_bucket` is another custom-only feature that will attempt to create the bucket if it does not exist.
 - `endpoint` is a required parameter, though it falls back to `localhost:9000` if unset.
 - `region` defaults to `us-east-1` if not configured.
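
Putting those defaults together, a minimal sketch for a local MinIO instance might look like this (the `minio_local` label and bucket name are placeholders, and the credentials are assumed to come from the `AWS_ACCESS_KEY` / `AWS_SECRET_KEY` environment variables mentioned above):

```yaml
storage_engine:
  minio_local:
    custom: true                       ## required for custom clouds
    kind: cloud
    cloud_type: s3                     ## custom clouds always use the s3 type
    bucket_name: "gdg-backups"
    access_id: ""                      ## left empty; read from AWS_ACCESS_KEY
    secret_key: ""                     ## left empty; read from AWS_SECRET_KEY
    init_bucket: "true"                ## create the bucket on first use if it is missing
    endpoint: "http://localhost:9000"  ## MinIO's default API port
    ssl_enabled: "false"
    ## region omitted: falls back to us-east-1
```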

## Context Configuration

This is applicable to both standard clouds and custom ones. The only additional change to the context is to provide the storage label to use:

```yaml
  testing:
    output_path: testing_data
    ...
    storage: any_label
    ...
```

So, given a bucket named `foo`, a prefix of `bar`, and an `output_path` configured as `testing_data`, connections will be imported to:

`s3://foo/bar/testing_data/connections/` and exported from the same location. If you need the data in a different location, you can update the prefix accordingly, but the destination will still follow the typical app patterns.
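
As a complete sketch, the configuration that produces that layout could look like the following, assuming contexts live under a `contexts` key as elsewhere in the GDG configuration file:

```yaml
storage_engine:
  any_label:
    kind: cloud
    cloud_type: s3
    bucket_name: "foo"
    prefix: "bar"

contexts:
  testing:
    output_path: testing_data
    storage: any_label
```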