
     1  ---
     2  title: "Amazon S3"
     3  description: "Rclone docs for Amazon S3"
     4  versionIntroduced: "v0.91"
     5  ---
     6  
     7  # {{< icon "fab fa-amazon" >}} Amazon S3 Storage Providers
     8  
     9  The S3 backend can be used with a number of different providers:
    10  
    11  {{< provider_list >}}
    12  {{< provider name="AWS S3" home="https://aws.amazon.com/s3/" config="/s3/#configuration" start="true" >}}
    13  {{< provider name="Alibaba Cloud (Aliyun) Object Storage System (OSS)" home="https://www.alibabacloud.com/product/oss/" config="/s3/#alibaba-oss" >}}
    14  {{< provider name="Ceph" home="http://ceph.com/" config="/s3/#ceph" >}}
    15  {{< provider name="China Mobile Ecloud Elastic Object Storage (EOS)" home="https://ecloud.10086.cn/home/product-introduction/eos/" config="/s3/#china-mobile-ecloud-eos" >}}
    16  {{< provider name="Cloudflare R2" home="https://blog.cloudflare.com/r2-open-beta/" config="/s3/#cloudflare-r2" >}}
    17  {{< provider name="Arvan Cloud Object Storage (AOS)" home="https://www.arvancloud.com/en/products/cloud-storage" config="/s3/#arvan-cloud" >}}
    18  {{< provider name="DigitalOcean Spaces" home="https://www.digitalocean.com/products/object-storage/" config="/s3/#digitalocean-spaces" >}}
    19  {{< provider name="Dreamhost" home="https://www.dreamhost.com/cloud/storage/" config="/s3/#dreamhost" >}}
    20  {{< provider name="GCS" home="https://cloud.google.com/storage/docs" config="/s3/#google-cloud-storage" >}}
    21  {{< provider name="Huawei OBS" home="https://www.huaweicloud.com/intl/en-us/product/obs.html" config="/s3/#huawei-obs" >}}
    22  {{< provider name="IBM COS S3" home="http://www.ibm.com/cloud/object-storage" config="/s3/#ibm-cos-s3" >}}
    23  {{< provider name="IDrive e2" home="https://www.idrive.com/e2/?refer=rclone" config="/s3/#idrive-e2" >}}
    24  {{< provider name="IONOS Cloud" home="https://cloud.ionos.com/storage/object-storage" config="/s3/#ionos" >}}
    25  {{< provider name="Leviia Object Storage" home="https://www.leviia.com/object-storage/" config="/s3/#leviia" >}}
    26  {{< provider name="Liara Object Storage" home="https://liara.ir/landing/object-storage" config="/s3/#liara-cloud" >}}
    27  {{< provider name="Linode Object Storage" home="https://www.linode.com/products/object-storage/" config="/s3/#linode" >}}
    28  {{< provider name="Minio" home="https://www.minio.io/" config="/s3/#minio" >}}
    29  {{< provider name="Petabox" home="https://petabox.io/" config="/s3/#petabox" >}}
    30  {{< provider name="Qiniu Cloud Object Storage (Kodo)" home="https://www.qiniu.com/en/products/kodo" config="/s3/#qiniu" >}}
    31  {{< provider name="RackCorp Object Storage" home="https://www.rackcorp.com/" config="/s3/#RackCorp" >}}
    32  {{< provider name="Rclone Serve S3" home="/commands/rclone_serve_http/" config="/s3/#rclone" >}}
    33  {{< provider name="Scaleway" home="https://www.scaleway.com/en/object-storage/" config="/s3/#scaleway" >}}
    34  {{< provider name="Seagate Lyve Cloud" home="https://www.seagate.com/gb/en/services/cloud/storage/" config="/s3/#lyve" >}}
    35  {{< provider name="SeaweedFS" home="https://github.com/chrislusf/seaweedfs/" config="/s3/#seaweedfs" >}}
    36  {{< provider name="StackPath" home="https://www.stackpath.com/products/object-storage/" config="/s3/#stackpath" >}}
    37  {{< provider name="Storj" home="https://storj.io/" config="/s3/#storj" >}}
    38  {{< provider name="Synology C2 Object Storage" home="https://c2.synology.com/en-global/object-storage/overview" config="/s3/#synology-c2" >}}
    39  {{< provider name="Tencent Cloud Object Storage (COS)" home="https://intl.cloud.tencent.com/product/cos" config="/s3/#tencent-cos" >}}
    40  {{< provider name="Wasabi" home="https://wasabi.com/" config="/s3/#wasabi" end="true" >}}
    41  {{< /provider_list >}}
    42  
Paths are specified as `remote:bucket` (or `remote:` for the `lsd`
command). You may put subdirectories in too, e.g. `remote:bucket/path/to/dir`.
    45  
    46  Once you have made a remote (see the provider specific section above)
    47  you can use it like this:
    48  
    49  See all buckets
    50  
    51      rclone lsd remote:
    52  
    53  Make a new bucket
    54  
    55      rclone mkdir remote:bucket
    56  
    57  List the contents of a bucket
    58  
    59      rclone ls remote:bucket
    60  
    61  Sync `/home/local/directory` to the remote bucket, deleting any excess
    62  files in the bucket.
    63  
    64      rclone sync --interactive /home/local/directory remote:bucket
    65  
    66  ## Configuration
    67  
Here is an example of making an s3 configuration for the AWS S3 provider.
Most of this applies to the other providers as well; any differences are described [below](#providers).
    70  
    71  First run
    72  
    73      rclone config
    74  
    75  This will guide you through an interactive setup process.
    76  
    77  ```
    78  No remotes found, make a new one?
    79  n) New remote
    80  s) Set configuration password
    81  q) Quit config
    82  n/s/q> n
    83  name> remote
    84  Type of storage to configure.
    85  Choose a number from below, or type in your own value
    86  [snip]
    87  XX / Amazon S3 Compliant Storage Providers including AWS, ...
    88     \ "s3"
    89  [snip]
    90  Storage> s3
    91  Choose your S3 provider.
    92  Choose a number from below, or type in your own value
    93   1 / Amazon Web Services (AWS) S3
    94     \ "AWS"
    95   2 / Ceph Object Storage
    96     \ "Ceph"
    97   3 / DigitalOcean Spaces
    98     \ "DigitalOcean"
    99   4 / Dreamhost DreamObjects
   100     \ "Dreamhost"
   101   5 / IBM COS S3
   102     \ "IBMCOS"
   103   6 / Minio Object Storage
   104     \ "Minio"
   105   7 / Wasabi Object Storage
   106     \ "Wasabi"
   107   8 / Any other S3 compatible provider
   108     \ "Other"
   109  provider> 1
   110  Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars). Only applies if access_key_id and secret_access_key is blank.
   111  Choose a number from below, or type in your own value
   112   1 / Enter AWS credentials in the next step
   113     \ "false"
   114   2 / Get AWS credentials from the environment (env vars or IAM)
   115     \ "true"
   116  env_auth> 1
   117  AWS Access Key ID - leave blank for anonymous access or runtime credentials.
   118  access_key_id> XXX
   119  AWS Secret Access Key (password) - leave blank for anonymous access or runtime credentials.
   120  secret_access_key> YYY
   121  Region to connect to.
   122  Choose a number from below, or type in your own value
   123     / The default endpoint - a good choice if you are unsure.
   124   1 | US Region, Northern Virginia, or Pacific Northwest.
   125     | Leave location constraint empty.
   126     \ "us-east-1"
   127     / US East (Ohio) Region
   128   2 | Needs location constraint us-east-2.
   129     \ "us-east-2"
   130     / US West (Oregon) Region
   131   3 | Needs location constraint us-west-2.
   132     \ "us-west-2"
   133     / US West (Northern California) Region
   134   4 | Needs location constraint us-west-1.
   135     \ "us-west-1"
   136     / Canada (Central) Region
   137   5 | Needs location constraint ca-central-1.
   138     \ "ca-central-1"
   139     / EU (Ireland) Region
   140   6 | Needs location constraint EU or eu-west-1.
   141     \ "eu-west-1"
   142     / EU (London) Region
   143   7 | Needs location constraint eu-west-2.
   144     \ "eu-west-2"
   145     / EU (Frankfurt) Region
   146   8 | Needs location constraint eu-central-1.
   147     \ "eu-central-1"
   148     / Asia Pacific (Singapore) Region
   149   9 | Needs location constraint ap-southeast-1.
   150     \ "ap-southeast-1"
   151     / Asia Pacific (Sydney) Region
   152  10 | Needs location constraint ap-southeast-2.
   153     \ "ap-southeast-2"
   154     / Asia Pacific (Tokyo) Region
   155  11 | Needs location constraint ap-northeast-1.
   156     \ "ap-northeast-1"
   157     / Asia Pacific (Seoul)
   158  12 | Needs location constraint ap-northeast-2.
   159     \ "ap-northeast-2"
   160     / Asia Pacific (Mumbai)
   161  13 | Needs location constraint ap-south-1.
   162     \ "ap-south-1"
   163     / Asia Pacific (Hong Kong) Region
   164  14 | Needs location constraint ap-east-1.
   165     \ "ap-east-1"
   166     / South America (Sao Paulo) Region
   167  15 | Needs location constraint sa-east-1.
   168     \ "sa-east-1"
   169  region> 1
   170  Endpoint for S3 API.
   171  Leave blank if using AWS to use the default endpoint for the region.
   172  endpoint>
   173  Location constraint - must be set to match the Region. Used when creating buckets only.
   174  Choose a number from below, or type in your own value
   175   1 / Empty for US Region, Northern Virginia, or Pacific Northwest.
   176     \ ""
   177   2 / US East (Ohio) Region.
   178     \ "us-east-2"
   179   3 / US West (Oregon) Region.
   180     \ "us-west-2"
   181   4 / US West (Northern California) Region.
   182     \ "us-west-1"
   183   5 / Canada (Central) Region.
   184     \ "ca-central-1"
   185   6 / EU (Ireland) Region.
   186     \ "eu-west-1"
   187   7 / EU (London) Region.
   188     \ "eu-west-2"
   189   8 / EU Region.
   190     \ "EU"
   191   9 / Asia Pacific (Singapore) Region.
   192     \ "ap-southeast-1"
   193  10 / Asia Pacific (Sydney) Region.
   194     \ "ap-southeast-2"
   195  11 / Asia Pacific (Tokyo) Region.
   196     \ "ap-northeast-1"
   197  12 / Asia Pacific (Seoul)
   198     \ "ap-northeast-2"
   199  13 / Asia Pacific (Mumbai)
   200     \ "ap-south-1"
   201  14 / Asia Pacific (Hong Kong)
   202     \ "ap-east-1"
   203  15 / South America (Sao Paulo) Region.
   204     \ "sa-east-1"
   205  location_constraint> 1
   206  Canned ACL used when creating buckets and/or storing objects in S3.
   207  For more info visit https://docs.aws.amazon.com/AmazonS3/latest/dev/acl-overview.html#canned-acl
   208  Choose a number from below, or type in your own value
   209   1 / Owner gets FULL_CONTROL. No one else has access rights (default).
   210     \ "private"
   211   2 / Owner gets FULL_CONTROL. The AllUsers group gets READ access.
   212     \ "public-read"
   213     / Owner gets FULL_CONTROL. The AllUsers group gets READ and WRITE access.
   214   3 | Granting this on a bucket is generally not recommended.
   215     \ "public-read-write"
   216   4 / Owner gets FULL_CONTROL. The AuthenticatedUsers group gets READ access.
   217     \ "authenticated-read"
   218     / Object owner gets FULL_CONTROL. Bucket owner gets READ access.
   219   5 | If you specify this canned ACL when creating a bucket, Amazon S3 ignores it.
   220     \ "bucket-owner-read"
   221     / Both the object owner and the bucket owner get FULL_CONTROL over the object.
   222   6 | If you specify this canned ACL when creating a bucket, Amazon S3 ignores it.
   223     \ "bucket-owner-full-control"
   224  acl> 1
   225  The server-side encryption algorithm used when storing this object in S3.
   226  Choose a number from below, or type in your own value
   227   1 / None
   228     \ ""
   229   2 / AES256
   230     \ "AES256"
   231  server_side_encryption> 1
   232  The storage class to use when storing objects in S3.
   233  Choose a number from below, or type in your own value
   234   1 / Default
   235     \ ""
   236   2 / Standard storage class
   237     \ "STANDARD"
   238   3 / Reduced redundancy storage class
   239     \ "REDUCED_REDUNDANCY"
   240   4 / Standard Infrequent Access storage class
   241     \ "STANDARD_IA"
   242   5 / One Zone Infrequent Access storage class
   243     \ "ONEZONE_IA"
   244   6 / Glacier storage class
   245     \ "GLACIER"
   246   7 / Glacier Deep Archive storage class
   247     \ "DEEP_ARCHIVE"
   248   8 / Intelligent-Tiering storage class
   249     \ "INTELLIGENT_TIERING"
   250   9 / Glacier Instant Retrieval storage class
   251     \ "GLACIER_IR"
   252  storage_class> 1
   253  Remote config
   254  --------------------
   255  [remote]
   256  type = s3
   257  provider = AWS
   258  env_auth = false
   259  access_key_id = XXX
   260  secret_access_key = YYY
   261  region = us-east-1
   262  endpoint =
   263  location_constraint =
   264  acl = private
   265  server_side_encryption =
   266  storage_class =
   267  --------------------
   268  y) Yes this is OK
   269  e) Edit this remote
   270  d) Delete this remote
   271  y/e/d>
   272  ```
   273  
   274  ### Modification times and hashes
   275  
   276  #### Modification times
   277  
The modified time is stored as metadata on the object as
`X-Amz-Meta-Mtime` as a floating point number of seconds since the epoch, accurate to 1 ns.
   280  
If the modification time needs to be updated rclone will attempt to perform a server
side copy to update the modification time, provided the object can be copied in a single part.
If the object is larger than 5 GiB or is in Glacier or Glacier Deep Archive
storage the object will be uploaded rather than copied.
   285  
   286  Note that reading this from the object takes an additional `HEAD`
   287  request as the metadata isn't returned in object listings.
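
For example, assuming a remote called `remote:` and an illustrative
bucket and file, you can set the stored modification time with `rclone
touch` and read it back with `rclone lsl` (the read costs one `HEAD`
request per object):

    rclone touch --timestamp 2020-01-02T03:04:05 remote:bucket/file.txt
    rclone lsl remote:bucket/file.txt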
   288  
   289  #### Hashes
   290  
   291  For small objects which weren't uploaded as multipart uploads (objects
   292  sized below `--s3-upload-cutoff` if uploaded with rclone) rclone uses
   293  the `ETag:` header as an MD5 checksum.
   294  
However for objects which were uploaded as multipart uploads or with
server side encryption (SSE-KMS or SSE-C) the `ETag` header is no
longer the MD5 sum of the data, so rclone adds an additional piece of
metadata, `X-Amz-Meta-Md5chksum`, which is a base64 encoded MD5 hash (in
the same format as is required for `Content-MD5`). You can use `base64 -d` and `hexdump` to check this value manually:
   300  
   301      echo 'VWTGdNx3LyXQDfA0e2Edxw==' | base64 -d | hexdump
   302  
   303  or you can use `rclone check` to verify the hashes are OK.
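
For example, to verify a local tree against a bucket (the paths here
are illustrative):

    rclone check /path/to/source s3:bucket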
   304  
   305  For large objects, calculating this hash can take some time so the
   306  addition of this hash can be disabled with `--s3-disable-checksum`.
   307  This will mean that these objects do not have an MD5 checksum.
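
For example:

    rclone copy --s3-disable-checksum /path/to/source s3:bucket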
   308  
   309  Note that reading this from the object takes an additional `HEAD`
   310  request as the metadata isn't returned in object listings.
   311  
   312  ### Reducing costs
   313  
   314  #### Avoiding HEAD requests to read the modification time
   315  
   316  By default, rclone will use the modification time of objects stored in
   317  S3 for syncing.  This is stored in object metadata which unfortunately
   318  takes an extra HEAD request to read which can be expensive (in time
   319  and money).
   320  
   321  The modification time is used by default for all operations that
   322  require checking the time a file was last updated. It allows rclone to
   323  treat the remote more like a true filesystem, but it is inefficient on
   324  S3 because it requires an extra API call to retrieve the metadata.
   325  
   326  The extra API calls can be avoided when syncing (using `rclone sync`
   327  or `rclone copy`) in a few different ways, each with its own
   328  tradeoffs.
   329  
   330  - `--size-only`
   331      - Only checks the size of files.
   332      - Uses no extra transactions.
   333      - If the file doesn't change size then rclone won't detect it has
   334        changed.
   335      - `rclone sync --size-only /path/to/source s3:bucket`
   336  - `--checksum`
   337      - Checks the size and MD5 checksum of files.
   338      - Uses no extra transactions.
   339      - The most accurate detection of changes possible.
   340      - Will cause the source to read an MD5 checksum which, if it is a
   341        local disk, will cause lots of disk activity.
   342      - If the source and destination are both S3 this is the
   343        **recommended** flag to use for maximum efficiency.
   344      - `rclone sync --checksum /path/to/source s3:bucket`
   345  - `--update --use-server-modtime`
   346      - Uses no extra transactions.
   347      - Modification time becomes the time the object was uploaded.
   348      - For many operations this is sufficient to determine if it needs
   349        uploading.
    - Using `--update` along with `--use-server-modtime` avoids the
      extra API call and uploads files whose local modification time
      is newer than the time it was last uploaded.
   353      - Files created with timestamps in the past will be missed by the sync.
   354      - `rclone sync --update --use-server-modtime /path/to/source s3:bucket`
   355  
   356  These flags can and should be used in combination with `--fast-list` -
   357  see below.
   358  
If using `rclone mount` or any command using the VFS (e.g. `rclone
serve`) then you might want to consider using the VFS flag
`--no-modtime` which will stop rclone reading the modification time
for every object. You could also use `--use-server-modtime` if you are
happy with the modification times of the objects being the time of
upload.
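
For example (the mountpoint path is illustrative):

    rclone mount --no-modtime remote:bucket /path/to/mountpoint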
   365  
   366  #### Avoiding GET requests to read directory listings
   367  
   368  Rclone's default directory traversal is to process each directory
   369  individually.  This takes one API call per directory.  Using the
   370  `--fast-list` flag will read all info about the objects into
   371  memory first using a smaller number of API calls (one per 1000
   372  objects). See the [rclone docs](/docs/#fast-list) for more details.
   373  
   374      rclone sync --fast-list --checksum /path/to/source s3:bucket
   375  
   376  `--fast-list` trades off API transactions for memory use. As a rough
   377  guide rclone uses 1k of memory per object stored, so using
   378  `--fast-list` on a sync of a million objects will use roughly 1 GiB of
   379  RAM.
   380  
   381  If you are only copying a small number of files into a big repository
   382  then using `--no-traverse` is a good idea. This finds objects directly
   383  instead of through directory listings. You can do a "top-up" sync very
   384  cheaply by using `--max-age` and `--no-traverse` to copy only recent
   385  files, eg
   386  
   387      rclone copy --max-age 24h --no-traverse /path/to/source s3:bucket
   388  
   389  You'd then do a full `rclone sync` less often.
   390  
   391  Note that `--fast-list` isn't required in the top-up sync.
   392  
   393  #### Avoiding HEAD requests after PUT
   394  
   395  By default, rclone will HEAD every object it uploads. It does this to
   396  check the object got uploaded correctly.
   397  
   398  You can disable this with the [--s3-no-head](#s3-no-head) option - see
   399  there for more details.
   400  
Setting this flag increases the chance of undetected upload failures.
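
For example:

    rclone copy --s3-no-head /path/to/source s3:bucket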
   402  
   403  ### Versions
   404  
When bucket versioning is enabled (this can be done with rclone using
the [`rclone backend versioning`](#versioning) command), uploading a
new version of a file with rclone creates a
[new version of it](https://docs.aws.amazon.com/AmazonS3/latest/userguide/Versioning.html).
Likewise, when you delete a file, the old version will be marked hidden
and still be available.
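
For example, to check a bucket's current versioning state and then
enable it (the bucket name is illustrative):

    rclone backend versioning s3:bucket
    rclone backend versioning s3:bucket Enabled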
   411  
   412  Old versions of files, where available, are visible using the
   413  [`--s3-versions`](#s3-versions) flag.
   414  
   415  It is also possible to view a bucket as it was at a certain point in
   416  time, using the [`--s3-version-at`](#s3-version-at) flag. This will
   417  show the file versions as they were at that time, showing files that
   418  have been deleted afterwards, and hiding files that were created
   419  since.
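
For example, to list a bucket as it stood at an (illustrative) point
in time:

    rclone ls --s3-version-at "2023-07-10 15:04:05" remote:bucket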
   420  
   421  If you wish to remove all the old versions then you can use the
   422  [`rclone backend cleanup-hidden remote:bucket`](#cleanup-hidden)
   423  command which will delete all the old hidden versions of files,
   424  leaving the current ones intact. You can also supply a path and only
   425  old versions under that path will be deleted, e.g.
   426  `rclone backend cleanup-hidden remote:bucket/path/to/stuff`.
   427  
   428  When you `purge` a bucket, the current and the old versions will be
   429  deleted then the bucket will be deleted.
   430  
   431  However `delete` will cause the current versions of the files to
   432  become hidden old versions.
   433  
   434  Here is a session showing the listing and retrieval of an old
   435  version followed by a `cleanup` of the old versions.
   436  
   437  Show current version and all the versions with `--s3-versions` flag.
   438  
   439  ```
   440  $ rclone -q ls s3:cleanup-test
   441          9 one.txt
   442  
   443  $ rclone -q --s3-versions ls s3:cleanup-test
   444          9 one.txt
   445          8 one-v2016-07-04-141032-000.txt
   446         16 one-v2016-07-04-141003-000.txt
   447         15 one-v2016-07-02-155621-000.txt
   448  ```
   449  
   450  Retrieve an old version
   451  
   452  ```
   453  $ rclone -q --s3-versions copy s3:cleanup-test/one-v2016-07-04-141003-000.txt /tmp
   454  
   455  $ ls -l /tmp/one-v2016-07-04-141003-000.txt
   456  -rw-rw-r-- 1 ncw ncw 16 Jul  2 17:46 /tmp/one-v2016-07-04-141003-000.txt
   457  ```
   458  
   459  Clean up all the old versions and show that they've gone.
   460  
   461  ```
   462  $ rclone -q backend cleanup-hidden s3:cleanup-test
   463  
   464  $ rclone -q ls s3:cleanup-test
   465          9 one.txt
   466  
   467  $ rclone -q --s3-versions ls s3:cleanup-test
   468          9 one.txt
   469  ```
   470  
   471  #### Versions naming caveat
   472  
When using the `--s3-versions` flag rclone relies on the file name
to work out whether objects are versions or not. Version names
are created by inserting a timestamp between the file name and its extension.
```
        9 file.txt
        8 file-v2023-07-17-161032-000.txt
       16 file-v2023-06-15-141003-000.txt
```
If there are real files present with the same names as versions, then
the behaviour of `--s3-versions` can be unpredictable.
   483  
   484  ### Cleanup
   485  
If you run `rclone cleanup s3:bucket` then it will remove all pending
multipart uploads older than 24 hours. You can use the `--interactive`/`-i`
or `--dry-run` flag to see exactly what it will do. If you want more control over the
expiry date then run `rclone backend cleanup s3:bucket -o max-age=1h`
to expire all uploads older than one hour. You can use `rclone backend
list-multipart-uploads s3:bucket` to see the pending multipart
uploads.
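
For example, to inspect the pending uploads and then expire any older
than one hour:

    rclone backend list-multipart-uploads s3:bucket
    rclone backend cleanup s3:bucket -o max-age=1h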
   493  
   494  ### Restricted filename characters
   495  
   496  S3 allows any valid UTF-8 string as a key.
   497  
   498  Invalid UTF-8 bytes will be [replaced](/overview/#invalid-utf8), as
   499  they can't be used in XML.
   500  
   501  The following characters are replaced since these are problematic when
   502  dealing with the REST API:
   503  
| Character | Value | Replacement |
| --------- |:-----:|:-----------:|
| NUL       | 0x00  | ␀           |
| /         | 0x2F  | ／           |
   508  
   509  The encoding will also encode these file names as they don't seem to
   510  work with the SDK properly:
   511  
| File name | Replacement |
| --------- |:-----------:|
| .         | ．          |
| ..        | ．．         |
   516  
   517  ### Multipart uploads
   518  
   519  rclone supports multipart uploads with S3 which means that it can
   520  upload files bigger than 5 GiB.
   521  
   522  Note that files uploaded *both* with multipart upload *and* through
   523  crypt remotes do not have MD5 sums.
   524  
rclone switches from single part uploads to multipart uploads at the
point specified by `--s3-upload-cutoff`.  This can be a maximum of 5 GiB
and a minimum of 0 (i.e. always upload files as multipart).
   528  
   529  The chunk sizes used in the multipart upload are specified by
   530  `--s3-chunk-size` and the number of chunks uploaded concurrently is
   531  specified by `--s3-upload-concurrency`.
   532  
Multipart uploads will use `--transfers` * `--s3-upload-concurrency` *
`--s3-chunk-size` extra memory.  Single part uploads do not use extra
memory.
   536  
   537  Single part transfers can be faster than multipart transfers or slower
   538  depending on your latency from S3 - the more latency, the more likely
   539  single part transfers will be faster.
   540  
   541  Increasing `--s3-upload-concurrency` will increase throughput (8 would
   542  be a sensible value) and increasing `--s3-chunk-size` also increases
   543  throughput (16M would be sensible).  Increasing either of these will
   544  use more memory.  The default values are high enough to gain most of
   545  the possible performance without using too much memory.
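
For example, a sketch using the values above: with `--transfers 4`,
8-way concurrency and 16M chunks, multipart uploads may buffer up to
4 × 8 × 16 MiB = 512 MiB:

    rclone copy --transfers 4 --s3-upload-concurrency 8 --s3-chunk-size 16M /path/to/source s3:bucket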
   546  
   547  
   548  ### Buckets and Regions
   549  
   550  With Amazon S3 you can list buckets (`rclone lsd`) using any region,
   551  but you can only access the content of a bucket from the region it was
   552  created in.  If you attempt to access a bucket from the wrong region,
   553  you will get an error, `incorrect region, the bucket is not in 'XXX'
   554  region`.
   555  
   556  ### Authentication
   557  
   558  There are a number of ways to supply `rclone` with a set of AWS
   559  credentials, with and without using the environment.
   560  
   561  The different authentication methods are tried in this order:
   562  
   563   - Directly in the rclone configuration file (`env_auth = false` in the config file):
   564     - `access_key_id` and `secret_access_key` are required.
   565     - `session_token` can be optionally set when using AWS STS.
   566   - Runtime configuration (`env_auth = true` in the config file):
   567     - Export the following environment variables before running `rclone`:
   568       - Access Key ID: `AWS_ACCESS_KEY_ID` or `AWS_ACCESS_KEY`
   569       - Secret Access Key: `AWS_SECRET_ACCESS_KEY` or `AWS_SECRET_KEY`
   570       - Session Token: `AWS_SESSION_TOKEN` (optional)
   571     - Or, use a [named profile](https://docs.aws.amazon.com/cli/latest/userguide/cli-configure-files.html):
   572       - Profile files are standard files used by AWS CLI tools
    - By default it will use the credentials file in your home directory (e.g. `~/.aws/credentials` on Unix-based systems) and the "default" profile; to change these, set the following environment variables:
   574           - `AWS_SHARED_CREDENTIALS_FILE` to control which file.
   575           - `AWS_PROFILE` to control which profile to use.
   576     - Or, run `rclone` in an ECS task with an IAM role (AWS only).
   577     - Or, run `rclone` on an EC2 instance with an IAM role (AWS only).
   578     - Or, run `rclone` in an EKS pod with an IAM role that is associated with a service account (AWS only).
   579  
If none of these options actually end up providing `rclone` with AWS
credentials then S3 interaction will be non-authenticated (see below).
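
For example, a minimal environment-based setup for a remote configured
with `env_auth = true` (the key values are placeholders):

```
export AWS_ACCESS_KEY_ID=XXX
export AWS_SECRET_ACCESS_KEY=YYY
rclone lsd remote:
```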
   582  
   583  ### S3 Permissions
   584  
   585  When using the `sync` subcommand of `rclone` the following minimum
   586  permissions are required to be available on the bucket being written to:
   587  
   588  * `ListBucket`
   589  * `DeleteObject`
   590  * `GetObject`
   591  * `PutObject`
   592  * `PutObjectACL`
   593  * `CreateBucket` (unless using [s3-no-check-bucket](#s3-no-check-bucket))
   594  
   595  When using the `lsd` subcommand, the `ListAllMyBuckets` permission is required.
   596  
   597  Example policy:
   598  
   599  ```
   600  {
   601      "Version": "2012-10-17",
   602      "Statement": [
   603          {
   604              "Effect": "Allow",
   605              "Principal": {
   606                  "AWS": "arn:aws:iam::USER_SID:user/USER_NAME"
   607              },
   608              "Action": [
   609                  "s3:ListBucket",
   610                  "s3:DeleteObject",
   611                  "s3:GetObject",
   612                  "s3:PutObject",
   613                  "s3:PutObjectAcl"
   614              ],
   615              "Resource": [
   616                "arn:aws:s3:::BUCKET_NAME/*",
   617                "arn:aws:s3:::BUCKET_NAME"
   618              ]
   619          },
   620          {
   621              "Effect": "Allow",
   622              "Action": "s3:ListAllMyBuckets",
   623              "Resource": "arn:aws:s3:::*"
   624          }
   625      ]
   626  }
   627  ```
   628  
   629  Notes on above:
   630  
1. This is a policy that can be used when creating a bucket. It assumes
   that `USER_NAME` has been created.
   633  2. The Resource entry must include both resource ARNs, as one implies
   634     the bucket and the other implies the bucket's objects.
3. When using [s3-no-check-bucket](#s3-no-check-bucket) and the bucket already exists, the `"arn:aws:s3:::BUCKET_NAME"` doesn't have to be included.
   636  
   637  For reference, [here's an Ansible script](https://gist.github.com/ebridges/ebfc9042dd7c756cd101cfa807b7ae2b)
   638  that will generate one or more buckets that will work with `rclone sync`.
   639  
   640  ### Key Management System (KMS)
   641  
   642  If you are using server-side encryption with KMS then you must make
   643  sure rclone is configured with `server_side_encryption = aws:kms`
   644  otherwise you will find you can't transfer small objects - these will
   645  create checksum errors.
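
A minimal config sketch (the remote name is illustrative, and
`sse_kms_key_id` is only needed if you want a specific key rather than
the default one):

```
[remote]
type = s3
provider = AWS
env_auth = true
server_side_encryption = aws:kms
sse_kms_key_id = arn:aws:kms:us-east-1:*
```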
   646  
   647  ### Glacier and Glacier Deep Archive
   648  
You can upload objects using the Glacier storage class or transition them to Glacier using a [lifecycle policy](http://docs.aws.amazon.com/AmazonS3/latest/user-guide/create-lifecycle.html).
The bucket can still be synced or copied into normally, but if rclone
tries to access data from the Glacier storage class you will see an error like the one below.
   652  
   653      2017/09/11 19:07:43 Failed to sync: failed to open source object: Object in GLACIER, restore first: path/to/file
   654  
   655  In this case you need to [restore](http://docs.aws.amazon.com/AmazonS3/latest/user-guide/restore-archived-objects.html)
   656  the object(s) in question before using rclone.
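
rclone can issue the restore request itself with the `restore` backend
command; for example (the priority and lifetime values here are
illustrative):

    rclone backend restore s3:bucket/path/to/file -o priority=Standard -o lifetime=1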
   657  
Note that rclone only speaks the S3 API; it does not speak the Glacier
Vault API, so rclone cannot directly access Glacier Vaults.
   660  
   661  ### Object-lock enabled S3 bucket
   662  
   663  According to AWS's [documentation on S3 Object Lock](https://docs.aws.amazon.com/AmazonS3/latest/userguide/object-lock-overview.html#object-lock-permission):
   664  
   665  > If you configure a default retention period on a bucket, requests to upload objects in such a bucket must include the Content-MD5 header.
   666  
As mentioned in the [Modification times and hashes](#modification-times-and-hashes) section,
small files that are not uploaded as multipart use a different tag, causing the upload to fail.
A simple solution is to set `--s3-upload-cutoff 0` and force all files to be uploaded as multipart.
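
For example:

    rclone copy --s3-upload-cutoff 0 /path/to/source s3:bucket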
   670  
   671  {{< rem autogenerated options start" - DO NOT EDIT - instead edit fs.RegInfo in backend/s3/s3.go then run make backenddocs" >}}
   672  ### Standard options
   673  
   674  Here are the Standard options specific to s3 (Amazon S3 Compliant Storage Providers including AWS, Alibaba, ArvanCloud, Ceph, ChinaMobile, Cloudflare, DigitalOcean, Dreamhost, GCS, HuaweiOBS, IBMCOS, IDrive, IONOS, LyveCloud, Leviia, Liara, Linode, Minio, Netease, Petabox, RackCorp, Rclone, Scaleway, SeaweedFS, StackPath, Storj, Synology, TencentCOS, Wasabi, Qiniu and others).
   675  
   676  #### --s3-provider
   677  
   678  Choose your S3 provider.
   679  
   680  Properties:
   681  
   682  - Config:      provider
   683  - Env Var:     RCLONE_S3_PROVIDER
   684  - Type:        string
   685  - Required:    false
   686  - Examples:
   687      - "AWS"
   688          - Amazon Web Services (AWS) S3
   689      - "Alibaba"
   690          - Alibaba Cloud Object Storage System (OSS) formerly Aliyun
   691      - "ArvanCloud"
   692          - Arvan Cloud Object Storage (AOS)
   693      - "Ceph"
   694          - Ceph Object Storage
   695      - "ChinaMobile"
   696          - China Mobile Ecloud Elastic Object Storage (EOS)
   697      - "Cloudflare"
   698          - Cloudflare R2 Storage
   699      - "DigitalOcean"
   700          - DigitalOcean Spaces
   701      - "Dreamhost"
   702          - Dreamhost DreamObjects
   703      - "GCS"
   704          - Google Cloud Storage
   705      - "HuaweiOBS"
   706          - Huawei Object Storage Service
   707      - "IBMCOS"
   708          - IBM COS S3
   709      - "IDrive"
   710          - IDrive e2
   711      - "IONOS"
   712          - IONOS Cloud
   713      - "LyveCloud"
   714          - Seagate Lyve Cloud
   715      - "Leviia"
   716          - Leviia Object Storage
   717      - "Liara"
   718          - Liara Object Storage
   719      - "Linode"
   720          - Linode Object Storage
   721      - "Minio"
   722          - Minio Object Storage
   723      - "Netease"
   724          - Netease Object Storage (NOS)
   725      - "Petabox"
   726          - Petabox Object Storage
   727      - "RackCorp"
   728          - RackCorp Object Storage
   729      - "Rclone"
   730          - Rclone S3 Server
   731      - "Scaleway"
   732          - Scaleway Object Storage
   733      - "SeaweedFS"
   734          - SeaweedFS S3
   735      - "StackPath"
   736          - StackPath Object Storage
   737      - "Storj"
   738          - Storj (S3 Compatible Gateway)
   739      - "Synology"
   740          - Synology C2 Object Storage
   741      - "TencentCOS"
   742          - Tencent Cloud Object Storage (COS)
   743      - "Wasabi"
   744          - Wasabi Object Storage
   745      - "Qiniu"
   746          - Qiniu Object Storage (Kodo)
   747      - "Other"
   748          - Any other S3 compatible provider
   749  
   750  #### --s3-env-auth
   751  
   752  Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars).
   753  
Only applies if access_key_id and secret_access_key are blank.
   755  
   756  Properties:
   757  
   758  - Config:      env_auth
   759  - Env Var:     RCLONE_S3_ENV_AUTH
   760  - Type:        bool
   761  - Default:     false
   762  - Examples:
   763      - "false"
   764          - Enter AWS credentials in the next step.
   765      - "true"
   766          - Get AWS credentials from the environment (env vars or IAM).
   767  
   768  #### --s3-access-key-id
   769  
   770  AWS Access Key ID.
   771  
   772  Leave blank for anonymous access or runtime credentials.
   773  
   774  Properties:
   775  
   776  - Config:      access_key_id
   777  - Env Var:     RCLONE_S3_ACCESS_KEY_ID
   778  - Type:        string
   779  - Required:    false
   780  
   781  #### --s3-secret-access-key
   782  
   783  AWS Secret Access Key (password).
   784  
   785  Leave blank for anonymous access or runtime credentials.
   786  
   787  Properties:
   788  
   789  - Config:      secret_access_key
   790  - Env Var:     RCLONE_S3_SECRET_ACCESS_KEY
   791  - Type:        string
   792  - Required:    false
   793  
   794  #### --s3-region
   795  
   796  Region to connect to.
   797  
   798  Properties:
   799  
   800  - Config:      region
   801  - Env Var:     RCLONE_S3_REGION
   802  - Provider:    AWS
   803  - Type:        string
   804  - Required:    false
   805  - Examples:
   806      - "us-east-1"
   807          - The default endpoint - a good choice if you are unsure.
   808          - US Region, Northern Virginia, or Pacific Northwest.
   809          - Leave location constraint empty.
   810      - "us-east-2"
   811          - US East (Ohio) Region.
   812          - Needs location constraint us-east-2.
   813      - "us-west-1"
   814          - US West (Northern California) Region.
   815          - Needs location constraint us-west-1.
   816      - "us-west-2"
   817          - US West (Oregon) Region.
   818          - Needs location constraint us-west-2.
   819      - "ca-central-1"
   820          - Canada (Central) Region.
   821          - Needs location constraint ca-central-1.
   822      - "eu-west-1"
   823          - EU (Ireland) Region.
   824          - Needs location constraint EU or eu-west-1.
   825      - "eu-west-2"
   826          - EU (London) Region.
   827          - Needs location constraint eu-west-2.
   828      - "eu-west-3"
   829          - EU (Paris) Region.
   830          - Needs location constraint eu-west-3.
   831      - "eu-north-1"
   832          - EU (Stockholm) Region.
   833          - Needs location constraint eu-north-1.
   834      - "eu-south-1"
   835          - EU (Milan) Region.
   836          - Needs location constraint eu-south-1.
   837      - "eu-central-1"
   838          - EU (Frankfurt) Region.
   839          - Needs location constraint eu-central-1.
   840      - "ap-southeast-1"
   841          - Asia Pacific (Singapore) Region.
   842          - Needs location constraint ap-southeast-1.
   843      - "ap-southeast-2"
   844          - Asia Pacific (Sydney) Region.
   845          - Needs location constraint ap-southeast-2.
   846      - "ap-northeast-1"
   847          - Asia Pacific (Tokyo) Region.
   848          - Needs location constraint ap-northeast-1.
   849      - "ap-northeast-2"
   850          - Asia Pacific (Seoul).
   851          - Needs location constraint ap-northeast-2.
   852      - "ap-northeast-3"
   853          - Asia Pacific (Osaka-Local).
   854          - Needs location constraint ap-northeast-3.
   855      - "ap-south-1"
   856          - Asia Pacific (Mumbai).
   857          - Needs location constraint ap-south-1.
   858      - "ap-east-1"
   859          - Asia Pacific (Hong Kong) Region.
   860          - Needs location constraint ap-east-1.
   861      - "sa-east-1"
   862          - South America (Sao Paulo) Region.
   863          - Needs location constraint sa-east-1.
   864      - "me-south-1"
   865          - Middle East (Bahrain) Region.
   866          - Needs location constraint me-south-1.
   867      - "af-south-1"
   868          - Africa (Cape Town) Region.
   869          - Needs location constraint af-south-1.
   870      - "cn-north-1"
   871          - China (Beijing) Region.
   872          - Needs location constraint cn-north-1.
   873      - "cn-northwest-1"
   874          - China (Ningxia) Region.
   875          - Needs location constraint cn-northwest-1.
   876      - "us-gov-east-1"
   877          - AWS GovCloud (US-East) Region.
   878          - Needs location constraint us-gov-east-1.
   879      - "us-gov-west-1"
   880          - AWS GovCloud (US) Region.
   881          - Needs location constraint us-gov-west-1.
   882  
   883  #### --s3-endpoint
   884  
   885  Endpoint for S3 API.
   886  
   887  Leave blank if using AWS to use the default endpoint for the region.
   888  
   889  Properties:
   890  
   891  - Config:      endpoint
   892  - Env Var:     RCLONE_S3_ENDPOINT
   893  - Provider:    AWS
   894  - Type:        string
   895  - Required:    false
   896  
   897  #### --s3-location-constraint
   898  
   899  Location constraint - must be set to match the Region.
   900  
   901  Used when creating buckets only.
   902  
   903  Properties:
   904  
   905  - Config:      location_constraint
   906  - Env Var:     RCLONE_S3_LOCATION_CONSTRAINT
   907  - Provider:    AWS
   908  - Type:        string
   909  - Required:    false
   910  - Examples:
   911      - ""
   912          - Empty for US Region, Northern Virginia, or Pacific Northwest
   913      - "us-east-2"
   914          - US East (Ohio) Region
   915      - "us-west-1"
   916          - US West (Northern California) Region
   917      - "us-west-2"
   918          - US West (Oregon) Region
   919      - "ca-central-1"
   920          - Canada (Central) Region
   921      - "eu-west-1"
   922          - EU (Ireland) Region
   923      - "eu-west-2"
   924          - EU (London) Region
   925      - "eu-west-3"
   926          - EU (Paris) Region
   927      - "eu-north-1"
   928          - EU (Stockholm) Region
   929      - "eu-south-1"
   930          - EU (Milan) Region
   931      - "EU"
   932          - EU Region
   933      - "ap-southeast-1"
   934          - Asia Pacific (Singapore) Region
   935      - "ap-southeast-2"
   936          - Asia Pacific (Sydney) Region
   937      - "ap-northeast-1"
   938          - Asia Pacific (Tokyo) Region
   939      - "ap-northeast-2"
   940          - Asia Pacific (Seoul) Region
   941      - "ap-northeast-3"
   942          - Asia Pacific (Osaka-Local) Region
   943      - "ap-south-1"
   944          - Asia Pacific (Mumbai) Region
   945      - "ap-east-1"
   946          - Asia Pacific (Hong Kong) Region
   947      - "sa-east-1"
   948          - South America (Sao Paulo) Region
   949      - "me-south-1"
   950          - Middle East (Bahrain) Region
   951      - "af-south-1"
   952          - Africa (Cape Town) Region
   953      - "cn-north-1"
   954          - China (Beijing) Region
   955      - "cn-northwest-1"
   956          - China (Ningxia) Region
   957      - "us-gov-east-1"
   958          - AWS GovCloud (US-East) Region
   959      - "us-gov-west-1"
   960          - AWS GovCloud (US) Region
   961  
   962  #### --s3-acl
   963  
   964  Canned ACL used when creating buckets and storing or copying objects.
   965  
   966  This ACL is used for creating objects and if bucket_acl isn't set, for creating buckets too.
   967  
   968  For more info visit https://docs.aws.amazon.com/AmazonS3/latest/dev/acl-overview.html#canned-acl
   969  
   970  Note that this ACL is applied when server-side copying objects as S3
   971  doesn't copy the ACL from the source but rather writes a fresh one.
   972  
   973  If the acl is an empty string then no X-Amz-Acl: header is added and
   974  the default (private) will be used.
   975  
   976  
   977  Properties:
   978  
   979  - Config:      acl
   980  - Env Var:     RCLONE_S3_ACL
   981  - Provider:    !Storj,Synology,Cloudflare
   982  - Type:        string
   983  - Required:    false
   984  - Examples:
   985      - "default"
        - Owner gets FULL_CONTROL.
   987          - No one else has access rights (default).
   988      - "private"
   989          - Owner gets FULL_CONTROL.
   990          - No one else has access rights (default).
   991      - "public-read"
   992          - Owner gets FULL_CONTROL.
   993          - The AllUsers group gets READ access.
   994      - "public-read-write"
   995          - Owner gets FULL_CONTROL.
   996          - The AllUsers group gets READ and WRITE access.
   997          - Granting this on a bucket is generally not recommended.
   998      - "authenticated-read"
   999          - Owner gets FULL_CONTROL.
  1000          - The AuthenticatedUsers group gets READ access.
  1001      - "bucket-owner-read"
  1002          - Object owner gets FULL_CONTROL.
  1003          - Bucket owner gets READ access.
  1004          - If you specify this canned ACL when creating a bucket, Amazon S3 ignores it.
  1005      - "bucket-owner-full-control"
  1006          - Both the object owner and the bucket owner get FULL_CONTROL over the object.
  1007          - If you specify this canned ACL when creating a bucket, Amazon S3 ignores it.
  1008      - "private"
  1009          - Owner gets FULL_CONTROL.
  1010          - No one else has access rights (default).
  1011          - This acl is available on IBM Cloud (Infra), IBM Cloud (Storage), On-Premise COS.
  1012      - "public-read"
  1013          - Owner gets FULL_CONTROL.
  1014          - The AllUsers group gets READ access.
  1015          - This acl is available on IBM Cloud (Infra), IBM Cloud (Storage), On-Premise IBM COS.
  1016      - "public-read-write"
  1017          - Owner gets FULL_CONTROL.
  1018          - The AllUsers group gets READ and WRITE access.
  1019          - This acl is available on IBM Cloud (Infra), On-Premise IBM COS.
  1020      - "authenticated-read"
  1021          - Owner gets FULL_CONTROL.
  1022          - The AuthenticatedUsers group gets READ access.
  1023          - Not supported on Buckets.
  1024          - This acl is available on IBM Cloud (Infra) and On-Premise IBM COS.
  1025  
  1026  #### --s3-server-side-encryption
  1027  
  1028  The server-side encryption algorithm used when storing this object in S3.
  1029  
  1030  Properties:
  1031  
  1032  - Config:      server_side_encryption
  1033  - Env Var:     RCLONE_S3_SERVER_SIDE_ENCRYPTION
  1034  - Provider:    AWS,Ceph,ChinaMobile,Minio
  1035  - Type:        string
  1036  - Required:    false
  1037  - Examples:
  1038      - ""
  1039          - None
  1040      - "AES256"
  1041          - AES256
  1042      - "aws:kms"
  1043          - aws:kms
  1044  
  1045  #### --s3-sse-kms-key-id
  1046  
If using KMS ID you must provide the ARN of the Key.
  1048  
  1049  Properties:
  1050  
  1051  - Config:      sse_kms_key_id
  1052  - Env Var:     RCLONE_S3_SSE_KMS_KEY_ID
  1053  - Provider:    AWS,Ceph,Minio
  1054  - Type:        string
  1055  - Required:    false
  1056  - Examples:
  1057      - ""
  1058          - None
  1059      - "arn:aws:kms:us-east-1:*"
  1060          - arn:aws:kms:*
  1061  
  1062  #### --s3-storage-class
  1063  
  1064  The storage class to use when storing new objects in S3.
  1065  
  1066  Properties:
  1067  
  1068  - Config:      storage_class
  1069  - Env Var:     RCLONE_S3_STORAGE_CLASS
  1070  - Provider:    AWS
  1071  - Type:        string
  1072  - Required:    false
  1073  - Examples:
  1074      - ""
  1075          - Default
  1076      - "STANDARD"
  1077          - Standard storage class
  1078      - "REDUCED_REDUNDANCY"
  1079          - Reduced redundancy storage class
  1080      - "STANDARD_IA"
  1081          - Standard Infrequent Access storage class
  1082      - "ONEZONE_IA"
  1083          - One Zone Infrequent Access storage class
  1084      - "GLACIER"
  1085          - Glacier storage class
  1086      - "DEEP_ARCHIVE"
  1087          - Glacier Deep Archive storage class
  1088      - "INTELLIGENT_TIERING"
  1089          - Intelligent-Tiering storage class
  1090      - "GLACIER_IR"
  1091          - Glacier Instant Retrieval storage class
  1092  
  1093  ### Advanced options
  1094  
  1095  Here are the Advanced options specific to s3 (Amazon S3 Compliant Storage Providers including AWS, Alibaba, ArvanCloud, Ceph, ChinaMobile, Cloudflare, DigitalOcean, Dreamhost, GCS, HuaweiOBS, IBMCOS, IDrive, IONOS, LyveCloud, Leviia, Liara, Linode, Minio, Netease, Petabox, RackCorp, Rclone, Scaleway, SeaweedFS, StackPath, Storj, Synology, TencentCOS, Wasabi, Qiniu and others).
  1096  
  1097  #### --s3-bucket-acl
  1098  
  1099  Canned ACL used when creating buckets.
  1100  
  1101  For more info visit https://docs.aws.amazon.com/AmazonS3/latest/dev/acl-overview.html#canned-acl
  1102  
Note that this ACL is applied only when creating buckets.  If it
isn't set then "acl" is used instead.
  1105  
  1106  If the "acl" and "bucket_acl" are empty strings then no X-Amz-Acl:
  1107  header is added and the default (private) will be used.
  1108  
  1109  
  1110  Properties:
  1111  
  1112  - Config:      bucket_acl
  1113  - Env Var:     RCLONE_S3_BUCKET_ACL
  1114  - Type:        string
  1115  - Required:    false
  1116  - Examples:
  1117      - "private"
  1118          - Owner gets FULL_CONTROL.
  1119          - No one else has access rights (default).
  1120      - "public-read"
  1121          - Owner gets FULL_CONTROL.
  1122          - The AllUsers group gets READ access.
  1123      - "public-read-write"
  1124          - Owner gets FULL_CONTROL.
  1125          - The AllUsers group gets READ and WRITE access.
  1126          - Granting this on a bucket is generally not recommended.
  1127      - "authenticated-read"
  1128          - Owner gets FULL_CONTROL.
  1129          - The AuthenticatedUsers group gets READ access.
  1130  
  1131  #### --s3-requester-pays
  1132  
  1133  Enables requester pays option when interacting with S3 bucket.
  1134  
  1135  Properties:
  1136  
  1137  - Config:      requester_pays
  1138  - Env Var:     RCLONE_S3_REQUESTER_PAYS
  1139  - Provider:    AWS
  1140  - Type:        bool
  1141  - Default:     false
  1142  
  1143  #### --s3-sse-customer-algorithm
  1144  
  1145  If using SSE-C, the server-side encryption algorithm used when storing this object in S3.
  1146  
  1147  Properties:
  1148  
  1149  - Config:      sse_customer_algorithm
  1150  - Env Var:     RCLONE_S3_SSE_CUSTOMER_ALGORITHM
  1151  - Provider:    AWS,Ceph,ChinaMobile,Minio
  1152  - Type:        string
  1153  - Required:    false
  1154  - Examples:
  1155      - ""
  1156          - None
  1157      - "AES256"
  1158          - AES256
  1159  
  1160  #### --s3-sse-customer-key
  1161  
  1162  To use SSE-C you may provide the secret encryption key used to encrypt/decrypt your data.
  1163  
  1164  Alternatively you can provide --sse-customer-key-base64.
  1165  
  1166  Properties:
  1167  
  1168  - Config:      sse_customer_key
  1169  - Env Var:     RCLONE_S3_SSE_CUSTOMER_KEY
  1170  - Provider:    AWS,Ceph,ChinaMobile,Minio
  1171  - Type:        string
  1172  - Required:    false
  1173  - Examples:
  1174      - ""
  1175          - None
  1176  
  1177  #### --s3-sse-customer-key-base64
  1178  
  1179  If using SSE-C you must provide the secret encryption key encoded in base64 format to encrypt/decrypt your data.
  1180  
  1181  Alternatively you can provide --sse-customer-key.
  1182  
  1183  Properties:
  1184  
  1185  - Config:      sse_customer_key_base64
  1186  - Env Var:     RCLONE_S3_SSE_CUSTOMER_KEY_BASE64
  1187  - Provider:    AWS,Ceph,ChinaMobile,Minio
  1188  - Type:        string
  1189  - Required:    false
  1190  - Examples:
  1191      - ""
  1192          - None
  1193  
  1194  #### --s3-sse-customer-key-md5
  1195  
  1196  If using SSE-C you may provide the secret encryption key MD5 checksum (optional).
  1197  
  1198  If you leave it blank, this is calculated automatically from the sse_customer_key provided.
  1199  
  1200  
  1201  Properties:
  1202  
  1203  - Config:      sse_customer_key_md5
  1204  - Env Var:     RCLONE_S3_SSE_CUSTOMER_KEY_MD5
  1205  - Provider:    AWS,Ceph,ChinaMobile,Minio
  1206  - Type:        string
  1207  - Required:    false
  1208  - Examples:
  1209      - ""
  1210          - None
  1211  
  1212  #### --s3-upload-cutoff
  1213  
  1214  Cutoff for switching to chunked upload.
  1215  
  1216  Any files larger than this will be uploaded in chunks of chunk_size.
  1217  The minimum is 0 and the maximum is 5 GiB.
  1218  
  1219  Properties:
  1220  
  1221  - Config:      upload_cutoff
  1222  - Env Var:     RCLONE_S3_UPLOAD_CUTOFF
  1223  - Type:        SizeSuffix
  1224  - Default:     200Mi
  1225  
  1226  #### --s3-chunk-size
  1227  
  1228  Chunk size to use for uploading.
  1229  
When uploading files larger than upload_cutoff or files with unknown
size (e.g. from "rclone rcat", or uploaded with "rclone mount", or Google
Photos or Google Docs) they will be uploaded as multipart uploads
using this chunk size.
  1234  
  1235  Note that "--s3-upload-concurrency" chunks of this size are buffered
  1236  in memory per transfer.
  1237  
  1238  If you are transferring large files over high-speed links and you have
  1239  enough memory, then increasing this will speed up the transfers.
  1240  
  1241  Rclone will automatically increase the chunk size when uploading a
  1242  large file of known size to stay below the 10,000 chunks limit.
  1243  
  1244  Files of unknown size are uploaded with the configured
  1245  chunk_size. Since the default chunk size is 5 MiB and there can be at
  1246  most 10,000 chunks, this means that by default the maximum size of
  1247  a file you can stream upload is 48 GiB.  If you wish to stream upload
  1248  larger files then you will need to increase chunk_size.
  1249  
Increasing the chunk size decreases the accuracy of the progress
statistics displayed with the "-P" flag. Rclone treats a chunk as sent when
it's buffered by the AWS SDK, when in fact it may still be uploading.
A bigger chunk size means a bigger AWS SDK buffer and progress
reporting that deviates more from the truth.
  1255  
  1256  
  1257  Properties:
  1258  
  1259  - Config:      chunk_size
  1260  - Env Var:     RCLONE_S3_CHUNK_SIZE
  1261  - Type:        SizeSuffix
  1262  - Default:     5Mi
  1263  
  1264  #### --s3-max-upload-parts
  1265  
  1266  Maximum number of parts in a multipart upload.
  1267  
  1268  This option defines the maximum number of multipart chunks to use
  1269  when doing a multipart upload.
  1270  
  1271  This can be useful if a service does not support the AWS S3
  1272  specification of 10,000 chunks.
  1273  
Rclone will automatically increase the chunk size when uploading a
large file of a known size to stay below this limit on the number of chunks.
  1276  
  1277  
  1278  Properties:
  1279  
  1280  - Config:      max_upload_parts
  1281  - Env Var:     RCLONE_S3_MAX_UPLOAD_PARTS
  1282  - Type:        int
  1283  - Default:     10000
  1284  
  1285  #### --s3-copy-cutoff
  1286  
  1287  Cutoff for switching to multipart copy.
  1288  
  1289  Any files larger than this that need to be server-side copied will be
  1290  copied in chunks of this size.
  1291  
  1292  The minimum is 0 and the maximum is 5 GiB.
  1293  
  1294  Properties:
  1295  
  1296  - Config:      copy_cutoff
  1297  - Env Var:     RCLONE_S3_COPY_CUTOFF
  1298  - Type:        SizeSuffix
  1299  - Default:     4.656Gi
  1300  
  1301  #### --s3-disable-checksum
  1302  
  1303  Don't store MD5 checksum with object metadata.
  1304  
  1305  Normally rclone will calculate the MD5 checksum of the input before
  1306  uploading it so it can add it to metadata on the object. This is great
  1307  for data integrity checking but can cause long delays for large files
  1308  to start uploading.
  1309  
  1310  Properties:
  1311  
  1312  - Config:      disable_checksum
  1313  - Env Var:     RCLONE_S3_DISABLE_CHECKSUM
  1314  - Type:        bool
  1315  - Default:     false
  1316  
  1317  #### --s3-shared-credentials-file
  1318  
  1319  Path to the shared credentials file.
  1320  
  1321  If env_auth = true then rclone can use a shared credentials file.
  1322  
  1323  If this variable is empty rclone will look for the
  1324  "AWS_SHARED_CREDENTIALS_FILE" env variable. If the env value is empty
  1325  it will default to the current user's home directory.
  1326  
  1327      Linux/OSX: "$HOME/.aws/credentials"
  1328      Windows:   "%USERPROFILE%\.aws\credentials"
  1329  
  1330  
  1331  Properties:
  1332  
  1333  - Config:      shared_credentials_file
  1334  - Env Var:     RCLONE_S3_SHARED_CREDENTIALS_FILE
  1335  - Type:        string
  1336  - Required:    false
  1337  
  1338  #### --s3-profile
  1339  
  1340  Profile to use in the shared credentials file.
  1341  
  1342  If env_auth = true then rclone can use a shared credentials file. This
  1343  variable controls which profile is used in that file.
  1344  
  1345  If empty it will default to the environment variable "AWS_PROFILE" or
  1346  "default" if that environment variable is also not set.
  1347  
  1348  
  1349  Properties:
  1350  
  1351  - Config:      profile
  1352  - Env Var:     RCLONE_S3_PROFILE
  1353  - Type:        string
  1354  - Required:    false
  1355  
  1356  #### --s3-session-token
  1357  
  1358  An AWS session token.
  1359  
  1360  Properties:
  1361  
  1362  - Config:      session_token
  1363  - Env Var:     RCLONE_S3_SESSION_TOKEN
  1364  - Type:        string
  1365  - Required:    false
  1366  
  1367  #### --s3-upload-concurrency
  1368  
  1369  Concurrency for multipart uploads and copies.
  1370  
  1371  This is the number of chunks of the same file that are uploaded
  1372  concurrently for multipart uploads and copies.
  1373  
  1374  If you are uploading small numbers of large files over high-speed links
  1375  and these uploads do not fully utilize your bandwidth, then increasing
  1376  this may help to speed up the transfers.
  1377  
  1378  Properties:
  1379  
  1380  - Config:      upload_concurrency
  1381  - Env Var:     RCLONE_S3_UPLOAD_CONCURRENCY
  1382  - Type:        int
  1383  - Default:     4
  1384  
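For example, to upload 16 chunks of 16 MiB concurrently (illustrative
values - note that each transfer will buffer very roughly chunk size
multiplied by concurrency of memory):

    rclone copy --s3-upload-concurrency 16 --s3-chunk-size 16Mi /path/to/bigfile remote:bucket
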
  1385  #### --s3-force-path-style
  1386  
If true use path style access; if false use virtual hosted style.

If this is true (the default) then rclone will use path style access;
if false then rclone will use virtual hosted style. See [the AWS S3
docs](https://docs.aws.amazon.com/AmazonS3/latest/dev/UsingBucket.html#access-bucket-intro)
for more info.

Some providers (e.g. AWS, Aliyun OSS, Netease COS, or Tencent COS)
require this to be set to false - rclone will do this automatically
based on the provider setting.
  1397  
  1398  Properties:
  1399  
  1400  - Config:      force_path_style
  1401  - Env Var:     RCLONE_S3_FORCE_PATH_STYLE
  1402  - Type:        bool
  1403  - Default:     true
  1404  
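Illustratively, for a hypothetical endpoint `s3.example.com` the two
styles address the same object as:

    https://s3.example.com/bucket/object      (path style)
    https://bucket.s3.example.com/object      (virtual hosted style)
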
  1405  #### --s3-v2-auth
  1406  
  1407  If true use v2 authentication.
  1408  
  1409  If this is false (the default) then rclone will use v4 authentication.
  1410  If it is set then rclone will use v2 authentication.
  1411  
  1412  Use this only if v4 signatures don't work, e.g. pre Jewel/v10 CEPH.
  1413  
  1414  Properties:
  1415  
  1416  - Config:      v2_auth
  1417  - Env Var:     RCLONE_S3_V2_AUTH
  1418  - Type:        bool
  1419  - Default:     false
  1420  
  1421  #### --s3-use-dual-stack
  1422  
  1423  If true use AWS S3 dual-stack endpoint (IPv6 support).
  1424  
  1425  See [AWS Docs on Dualstack Endpoints](https://docs.aws.amazon.com/AmazonS3/latest/userguide/dual-stack-endpoints.html)
  1426  
  1427  Properties:
  1428  
  1429  - Config:      use_dual_stack
  1430  - Env Var:     RCLONE_S3_USE_DUAL_STACK
  1431  - Type:        bool
  1432  - Default:     false
  1433  
  1434  #### --s3-use-accelerate-endpoint
  1435  
  1436  If true use the AWS S3 accelerated endpoint.
  1437  
  1438  See: [AWS S3 Transfer acceleration](https://docs.aws.amazon.com/AmazonS3/latest/dev/transfer-acceleration-examples.html)
  1439  
  1440  Properties:
  1441  
  1442  - Config:      use_accelerate_endpoint
  1443  - Env Var:     RCLONE_S3_USE_ACCELERATE_ENDPOINT
  1444  - Provider:    AWS
  1445  - Type:        bool
  1446  - Default:     false
  1447  
  1448  #### --s3-leave-parts-on-error
  1449  
  1450  If true avoid calling abort upload on a failure, leaving all successfully uploaded parts on S3 for manual recovery.
  1451  
  1452  It should be set to true for resuming uploads across different sessions.
  1453  
  1454  WARNING: Storing parts of an incomplete multipart upload counts towards space usage on S3 and will add additional costs if not cleaned up.
  1455  
  1456  
  1457  Properties:
  1458  
  1459  - Config:      leave_parts_on_error
  1460  - Env Var:     RCLONE_S3_LEAVE_PARTS_ON_ERROR
  1461  - Provider:    AWS
  1462  - Type:        bool
  1463  - Default:     false
  1464  
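If you use this flag, remember that the abandoned parts need cleaning
up eventually, for example with the backend commands described later
in this document (bucket name illustrative):

    rclone backend list-multipart-uploads remote:bucket
    rclone backend cleanup -o max-age=7d remote:bucket
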
  1465  #### --s3-list-chunk
  1466  
  1467  Size of listing chunk (response list for each ListObject S3 request).
  1468  
  1469  This option is also known as "MaxKeys", "max-items", or "page-size" from the AWS S3 specification.
Most services truncate the response list to 1000 objects even if more than that is requested.
  1471  In AWS S3 this is a global maximum and cannot be changed, see [AWS S3](https://docs.aws.amazon.com/cli/latest/reference/s3/ls.html).
  1472  In Ceph, this can be increased with the "rgw list buckets max chunk" option.
  1473  
  1474  
  1475  Properties:
  1476  
  1477  - Config:      list_chunk
  1478  - Env Var:     RCLONE_S3_LIST_CHUNK
  1479  - Type:        int
  1480  - Default:     1000
  1481  
  1482  #### --s3-list-version
  1483  
Version of ListObjects to use: 1, 2 or 0 for auto.
  1485  
  1486  When S3 originally launched it only provided the ListObjects call to
  1487  enumerate objects in a bucket.
  1488  
  1489  However in May 2016 the ListObjectsV2 call was introduced. This is
  1490  much higher performance and should be used if at all possible.
  1491  
If set to the default, 0, rclone will guess which list objects method
to call according to the provider setting. If it guesses wrong, then
it may be set manually here.
  1495  
  1496  
  1497  Properties:
  1498  
  1499  - Config:      list_version
  1500  - Env Var:     RCLONE_S3_LIST_VERSION
  1501  - Type:        int
  1502  - Default:     0
  1503  
  1504  #### --s3-list-url-encode
  1505  
Whether to URL encode listings: true/false/unset.
  1507  
Some providers support URL encoding listings, and where this is
available it is more reliable when using control characters in file
names. If this is set to unset (the default) then rclone will choose
according to the provider setting what to apply, but you can override
rclone's choice here.
  1513  
  1514  
  1515  Properties:
  1516  
  1517  - Config:      list_url_encode
  1518  - Env Var:     RCLONE_S3_LIST_URL_ENCODE
  1519  - Type:        Tristate
  1520  - Default:     unset
  1521  
  1522  #### --s3-no-check-bucket
  1523  
  1524  If set, don't attempt to check the bucket exists or create it.
  1525  
  1526  This can be useful when trying to minimise the number of transactions
  1527  rclone does if you know the bucket exists already.
  1528  
  1529  It can also be needed if the user you are using does not have bucket
  1530  creation permissions. Before v1.52.0 this would have passed silently
  1531  due to a bug.
  1532  
  1533  
  1534  Properties:
  1535  
  1536  - Config:      no_check_bucket
  1537  - Env Var:     RCLONE_S3_NO_CHECK_BUCKET
  1538  - Type:        bool
  1539  - Default:     false
  1540  
  1541  #### --s3-no-head
  1542  
  1543  If set, don't HEAD uploaded objects to check integrity.
  1544  
  1545  This can be useful when trying to minimise the number of transactions
  1546  rclone does.
  1547  
  1548  Setting it means that if rclone receives a 200 OK message after
  1549  uploading an object with PUT then it will assume that it got uploaded
  1550  properly.
  1551  
  1552  In particular it will assume:
  1553  
  1554  - the metadata, including modtime, storage class and content type was as uploaded
  1555  - the size was as uploaded
  1556  
  1557  It reads the following items from the response for a single part PUT:
  1558  
  1559  - the MD5SUM
- the uploaded date
  1561  
  1562  For multipart uploads these items aren't read.
  1563  
If a source object of unknown length is uploaded then rclone **will** do a
  1565  HEAD request.
  1566  
  1567  Setting this flag increases the chance for undetected upload failures,
  1568  in particular an incorrect size, so it isn't recommended for normal
  1569  operation. In practice the chance of an undetected upload failure is
  1570  very small even with this flag.
  1571  
  1572  
  1573  Properties:
  1574  
  1575  - Config:      no_head
  1576  - Env Var:     RCLONE_S3_NO_HEAD
  1577  - Type:        bool
  1578  - Default:     false
  1579  
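For example, a transaction-light upload which skips both the bucket
check and the post-upload HEAD (accepting the caveats above) might
look like:

    rclone copy --s3-no-check-bucket --s3-no-head /path/to/files remote:bucket
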
  1580  #### --s3-no-head-object
  1581  
  1582  If set, do not do HEAD before GET when getting objects.
  1583  
  1584  Properties:
  1585  
  1586  - Config:      no_head_object
  1587  - Env Var:     RCLONE_S3_NO_HEAD_OBJECT
  1588  - Type:        bool
  1589  - Default:     false
  1590  
  1591  #### --s3-encoding
  1592  
  1593  The encoding for the backend.
  1594  
  1595  See the [encoding section in the overview](/overview/#encoding) for more info.
  1596  
  1597  Properties:
  1598  
  1599  - Config:      encoding
  1600  - Env Var:     RCLONE_S3_ENCODING
  1601  - Type:        Encoding
  1602  - Default:     Slash,InvalidUtf8,Dot
  1603  
  1604  #### --s3-memory-pool-flush-time
  1605  
  1606  How often internal memory buffer pools will be flushed. (no longer used)
  1607  
  1608  Properties:
  1609  
  1610  - Config:      memory_pool_flush_time
  1611  - Env Var:     RCLONE_S3_MEMORY_POOL_FLUSH_TIME
  1612  - Type:        Duration
  1613  - Default:     1m0s
  1614  
  1615  #### --s3-memory-pool-use-mmap
  1616  
  1617  Whether to use mmap buffers in internal memory pool. (no longer used)
  1618  
  1619  Properties:
  1620  
  1621  - Config:      memory_pool_use_mmap
  1622  - Env Var:     RCLONE_S3_MEMORY_POOL_USE_MMAP
  1623  - Type:        bool
  1624  - Default:     false
  1625  
  1626  #### --s3-disable-http2
  1627  
  1628  Disable usage of http2 for S3 backends.
  1629  
  1630  There is currently an unsolved issue with the s3 (specifically minio) backend
  1631  and HTTP/2.  HTTP/2 is enabled by default for the s3 backend but can be
  1632  disabled here.  When the issue is solved this flag will be removed.
  1633  
  1634  See: https://github.com/rclone/rclone/issues/4673, https://github.com/rclone/rclone/issues/3631
  1635  
  1636  
  1637  
  1638  Properties:
  1639  
  1640  - Config:      disable_http2
  1641  - Env Var:     RCLONE_S3_DISABLE_HTTP2
  1642  - Type:        bool
  1643  - Default:     false
  1644  
  1645  #### --s3-download-url
  1646  
Custom endpoint for downloads.

This is usually set to a CloudFront CDN URL as AWS S3 offers
cheaper egress for data downloaded through the CloudFront network.
  1650  
  1651  Properties:
  1652  
  1653  - Config:      download_url
  1654  - Env Var:     RCLONE_S3_DOWNLOAD_URL
  1655  - Type:        string
  1656  - Required:    false
  1657  
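As a sketch, if a (hypothetical) CloudFront distribution
`d111111abcdef8.cloudfront.net` fronts the bucket, adding this line to
the remote's config section routes downloads through it:

    download_url = https://d111111abcdef8.cloudfront.net
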
  1658  #### --s3-directory-markers
  1659  
Upload an empty object with a trailing slash when a new directory is created.

Empty folders are unsupported for bucket-based remotes; this option creates an
empty object ending with "/" to persist the folder.
  1664  
  1665  
  1666  Properties:
  1667  
  1668  - Config:      directory_markers
  1669  - Env Var:     RCLONE_S3_DIRECTORY_MARKERS
  1670  - Type:        bool
  1671  - Default:     false
  1672  
  1673  #### --s3-use-multipart-etag
  1674  
Whether to use ETag in multipart uploads for verification.
  1676  
  1677  This should be true, false or left unset to use the default for the provider.
  1678  
  1679  
  1680  Properties:
  1681  
  1682  - Config:      use_multipart_etag
  1683  - Env Var:     RCLONE_S3_USE_MULTIPART_ETAG
  1684  - Type:        Tristate
  1685  - Default:     unset
  1686  
  1687  #### --s3-use-presigned-request
  1688  
Whether to use a presigned request or PutObject for single part uploads.
  1690  
  1691  If this is false rclone will use PutObject from the AWS SDK to upload
  1692  an object.
  1693  
  1694  Versions of rclone < 1.59 use presigned requests to upload a single
  1695  part object and setting this flag to true will re-enable that
  1696  functionality. This shouldn't be necessary except in exceptional
  1697  circumstances or for testing.
  1698  
  1699  
  1700  Properties:
  1701  
  1702  - Config:      use_presigned_request
  1703  - Env Var:     RCLONE_S3_USE_PRESIGNED_REQUEST
  1704  - Type:        bool
  1705  - Default:     false
  1706  
  1707  #### --s3-versions
  1708  
  1709  Include old versions in directory listings.
  1710  
  1711  Properties:
  1712  
  1713  - Config:      versions
  1714  - Env Var:     RCLONE_S3_VERSIONS
  1715  - Type:        bool
  1716  - Default:     false
  1717  
  1718  #### --s3-version-at
  1719  
  1720  Show file versions as they were at the specified time.
  1721  
The parameter should be a date, "2006-01-02", a datetime, "2006-01-02
15:04:05", or a duration for that long ago, e.g. "100d" or "1h".
  1724  
  1725  Note that when using this no file write operations are permitted,
  1726  so you can't upload files or delete them.
  1727  
  1728  See [the time option docs](/docs/#time-option) for valid formats.
  1729  
  1730  
  1731  Properties:
  1732  
  1733  - Config:      version_at
  1734  - Env Var:     RCLONE_S3_VERSION_AT
  1735  - Type:        Time
  1736  - Default:     off
  1737  
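For example, to list old versions, or to list the bucket as it was at
some point in the past (bucket name and times illustrative):

    rclone ls remote:bucket --s3-versions
    rclone ls remote:bucket --s3-version-at 2023-01-02
    rclone ls remote:bucket --s3-version-at 100d
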
  1738  #### --s3-version-deleted
  1739  
  1740  Show deleted file markers when using versions.
  1741  
  1742  This shows deleted file markers in the listing when using versions. These will appear
  1743  as 0 size files. The only operation which can be performed on them is deletion.
  1744  
  1745  Deleting a delete marker will reveal the previous version.
  1746  
  1747  Deleted files will always show with a timestamp.
  1748  
  1749  
  1750  Properties:
  1751  
  1752  - Config:      version_deleted
  1753  - Env Var:     RCLONE_S3_VERSION_DELETED
  1754  - Type:        bool
  1755  - Default:     false
  1756  
  1757  #### --s3-decompress
  1758  
  1759  If set this will decompress gzip encoded objects.
  1760  
  1761  It is possible to upload objects to S3 with "Content-Encoding: gzip"
  1762  set. Normally rclone will download these files as compressed objects.
  1763  
  1764  If this flag is set then rclone will decompress these files with
  1765  "Content-Encoding: gzip" as they are received. This means that rclone
  1766  can't check the size and hash but the file contents will be decompressed.
  1767  
  1768  
  1769  Properties:
  1770  
  1771  - Config:      decompress
  1772  - Env Var:     RCLONE_S3_DECOMPRESS
  1773  - Type:        bool
  1774  - Default:     false
  1775  
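As an illustration, such an object can be created with
`--header-upload` and then read back decompressed (file names
illustrative):

    rclone copyto --header-upload "Content-Encoding: gzip" file.txt.gz remote:bucket/file.txt
    rclone copyto --s3-decompress remote:bucket/file.txt /tmp/file.txt
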
  1776  #### --s3-might-gzip
  1777  
  1778  Set this if the backend might gzip objects.
  1779  
  1780  Normally providers will not alter objects when they are downloaded. If
  1781  an object was not uploaded with `Content-Encoding: gzip` then it won't
  1782  be set on download.
  1783  
  1784  However some providers may gzip objects even if they weren't uploaded
  1785  with `Content-Encoding: gzip` (eg Cloudflare).
  1786  
  1787  A symptom of this would be receiving errors like
  1788  
  1789      ERROR corrupted on transfer: sizes differ NNN vs MMM
  1790  
  1791  If you set this flag and rclone downloads an object with
  1792  Content-Encoding: gzip set and chunked transfer encoding, then rclone
  1793  will decompress the object on the fly.
  1794  
  1795  If this is set to unset (the default) then rclone will choose
  1796  according to the provider setting what to apply, but you can override
  1797  rclone's choice here.
  1798  
  1799  
  1800  Properties:
  1801  
  1802  - Config:      might_gzip
  1803  - Env Var:     RCLONE_S3_MIGHT_GZIP
  1804  - Type:        Tristate
  1805  - Default:     unset
  1806  
  1807  #### --s3-use-accept-encoding-gzip
  1808  
  1809  Whether to send `Accept-Encoding: gzip` header.
  1810  
  1811  By default, rclone will append `Accept-Encoding: gzip` to the request to download
  1812  compressed objects whenever possible.
  1813  
  1814  However some providers such as Google Cloud Storage may alter the HTTP headers, breaking
  1815  the signature of the request.
  1816  
  1817  A symptom of this would be receiving errors like
  1818  
  1819  	SignatureDoesNotMatch: The request signature we calculated does not match the signature you provided.
  1820  
  1821  In this case, you might want to try disabling this option.
  1822  
  1823  
  1824  Properties:
  1825  
  1826  - Config:      use_accept_encoding_gzip
  1827  - Env Var:     RCLONE_S3_USE_ACCEPT_ENCODING_GZIP
  1828  - Type:        Tristate
  1829  - Default:     unset
  1830  
  1831  #### --s3-no-system-metadata
  1832  
Suppress setting and reading of system metadata.
  1834  
  1835  Properties:
  1836  
  1837  - Config:      no_system_metadata
  1838  - Env Var:     RCLONE_S3_NO_SYSTEM_METADATA
  1839  - Type:        bool
  1840  - Default:     false
  1841  
  1842  #### --s3-sts-endpoint
  1843  
  1844  Endpoint for STS.
  1845  
  1846  Leave blank if using AWS to use the default endpoint for the region.
  1847  
  1848  Properties:
  1849  
  1850  - Config:      sts_endpoint
  1851  - Env Var:     RCLONE_S3_STS_ENDPOINT
  1852  - Provider:    AWS
  1853  - Type:        string
  1854  - Required:    false
  1855  
  1856  #### --s3-use-already-exists
  1857  
  1858  Set if rclone should report BucketAlreadyExists errors on bucket creation.
  1859  
  1860  At some point during the evolution of the s3 protocol, AWS started
  1861  returning an `AlreadyOwnedByYou` error when attempting to create a
  1862  bucket that the user already owned, rather than a
  1863  `BucketAlreadyExists` error.
  1864  
Unfortunately exactly what has been implemented by s3 clones is a
little inconsistent: some return `AlreadyOwnedByYou`, some return
`BucketAlreadyExists` and some return no error at all.
  1868  
  1869  This is important to rclone because it ensures the bucket exists by
  1870  creating it on quite a lot of operations (unless
  1871  `--s3-no-check-bucket` is used).
  1872  
  1873  If rclone knows the provider can return `AlreadyOwnedByYou` or returns
  1874  no error then it can report `BucketAlreadyExists` errors when the user
  1875  attempts to create a bucket not owned by them. Otherwise rclone
  1876  ignores the `BucketAlreadyExists` error which can lead to confusion.
  1877  
  1878  This should be automatically set correctly for all providers rclone
  1879  knows about - please make a bug report if not.
  1880  
  1881  
  1882  Properties:
  1883  
  1884  - Config:      use_already_exists
  1885  - Env Var:     RCLONE_S3_USE_ALREADY_EXISTS
  1886  - Type:        Tristate
  1887  - Default:     unset
  1888  
  1889  #### --s3-use-multipart-uploads
  1890  
  1891  Set if rclone should use multipart uploads.
  1892  
  1893  You can change this if you want to disable the use of multipart uploads.
  1894  This shouldn't be necessary in normal operation.
  1895  
  1896  This should be automatically set correctly for all providers rclone
  1897  knows about - please make a bug report if not.
  1898  
  1899  
  1900  Properties:
  1901  
  1902  - Config:      use_multipart_uploads
  1903  - Env Var:     RCLONE_S3_USE_MULTIPART_UPLOADS
  1904  - Type:        Tristate
  1905  - Default:     unset
  1906  
  1907  #### --s3-description
  1908  
Description of the remote.
  1910  
  1911  Properties:
  1912  
  1913  - Config:      description
  1914  - Env Var:     RCLONE_S3_DESCRIPTION
  1915  - Type:        string
  1916  - Required:    false
  1917  
  1918  ### Metadata
  1919  
User metadata is stored as `x-amz-meta-` keys. S3 metadata keys are case-insensitive and are always returned in lower case.
  1921  
  1922  Here are the possible system metadata items for the s3 backend.
  1923  
  1924  | Name | Help | Type | Example | Read Only |
  1925  |------|------|------|---------|-----------|
  1926  | btime | Time of file birth (creation) read from Last-Modified header | RFC 3339 | 2006-01-02T15:04:05.999999999Z07:00 | **Y** |
  1927  | cache-control | Cache-Control header | string | no-cache | N |
  1928  | content-disposition | Content-Disposition header | string | inline | N |
  1929  | content-encoding | Content-Encoding header | string | gzip | N |
  1930  | content-language | Content-Language header | string | en-US | N |
  1931  | content-type | Content-Type header | string | text/plain | N |
  1932  | mtime | Time of last modification, read from rclone metadata | RFC 3339 | 2006-01-02T15:04:05.999999999Z07:00 | N |
  1933  | tier | Tier of the object | string | GLACIER | **Y** |
  1934  
  1935  See the [metadata](/docs/#metadata) docs for more info.
  1936  
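For example, to inspect an object's metadata, or to set one of the
writable items above on upload (object names illustrative):

    rclone lsjson --stat -M remote:bucket/object
    rclone copyto -M --metadata-set content-type=text/html index.html remote:bucket/index.html
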
  1937  ## Backend commands
  1938  
  1939  Here are the commands specific to the s3 backend.
  1940  
  1941  Run them with
  1942  
  1943      rclone backend COMMAND remote:
  1944  
  1945  The help below will explain what arguments each command takes.
  1946  
  1947  See the [backend](/commands/rclone_backend/) command for more
  1948  info on how to pass options and arguments.
  1949  
  1950  These can be run on a running backend using the rc command
  1951  [backend/command](/rc/#backend-command).
  1952  
  1953  ### restore
  1954  
  1955  Restore objects from GLACIER to normal storage
  1956  
  1957      rclone backend restore remote: [options] [<arguments>+]
  1958  
  1959  This command can be used to restore one or more objects from GLACIER
  1960  to normal storage.
  1961  
  1962  Usage Examples:
  1963  
  1964      rclone backend restore s3:bucket/path/to/object -o priority=PRIORITY -o lifetime=DAYS
  1965      rclone backend restore s3:bucket/path/to/directory -o priority=PRIORITY -o lifetime=DAYS
  1966      rclone backend restore s3:bucket -o priority=PRIORITY -o lifetime=DAYS
  1967  
This flag also obeys the filters. Test first with the --interactive/-i or --dry-run flags:
  1969  
  1970      rclone --interactive backend restore --include "*.txt" s3:bucket/path -o priority=Standard -o lifetime=1
  1971  
  1972  All the objects shown will be marked for restore, then
  1973  
  1974      rclone backend restore --include "*.txt" s3:bucket/path -o priority=Standard -o lifetime=1
  1975  
  1976  It returns a list of status dictionaries with Remote and Status
  1977  keys. The Status will be OK if it was successful or an error message
  1978  if not.
  1979  
  1980      [
  1981          {
  1982              "Status": "OK",
  1983              "Remote": "test.txt"
  1984          },
  1985          {
  1986              "Status": "OK",
  1987              "Remote": "test/file4.txt"
  1988          }
  1989      ]
  1990  
  1991  
  1992  
  1993  Options:
  1994  
  1995  - "description": The optional description for the job.
  1996  - "lifetime": Lifetime of the active copy in days
  1997  - "priority": Priority of restore: Standard|Expedited|Bulk
  1998  
  1999  ### restore-status
  2000  
  2001  Show the restore status for objects being restored from GLACIER to normal storage
  2002  
  2003      rclone backend restore-status remote: [options] [<arguments>+]
  2004  
  2005  This command can be used to show the status for objects being restored from GLACIER
  2006  to normal storage.
  2007  
  2008  Usage Examples:
  2009  
  2010      rclone backend restore-status s3:bucket/path/to/object
  2011      rclone backend restore-status s3:bucket/path/to/directory
  2012      rclone backend restore-status -o all s3:bucket/path/to/directory
  2013  
  2014  This command does not obey the filters.
  2015  
  2016  It returns a list of status dictionaries.
  2017  
  2018      [
  2019          {
  2020              "Remote": "file.txt",
  2021              "VersionID": null,
  2022              "RestoreStatus": {
  2023                  "IsRestoreInProgress": true,
  2024                  "RestoreExpiryDate": "2023-09-06T12:29:19+01:00"
  2025              },
  2026              "StorageClass": "GLACIER"
  2027          },
  2028          {
  2029              "Remote": "test.pdf",
  2030              "VersionID": null,
  2031              "RestoreStatus": {
  2032                  "IsRestoreInProgress": false,
  2033                  "RestoreExpiryDate": "2023-09-06T12:29:19+01:00"
  2034              },
  2035              "StorageClass": "DEEP_ARCHIVE"
  2036          }
  2037      ]
  2038  
  2039  
  2040  Options:
  2041  
  2042  - "all": if set then show all objects, not just ones with restore status
  2043  
  2044  ### list-multipart-uploads
  2045  
  2046  List the unfinished multipart uploads
  2047  
  2048      rclone backend list-multipart-uploads remote: [options] [<arguments>+]
  2049  
  2050  This command lists the unfinished multipart uploads in JSON format.
  2051  
    rclone backend list-multipart-uploads s3:bucket/path/to/object
  2053  
  2054  It returns a dictionary of buckets with values as lists of unfinished
  2055  multipart uploads.
  2056  
You can call it with no bucket, in which case it lists all buckets,
with a bucket, or with a bucket and path.
  2059  
  2060      {
  2061        "rclone": [
  2062          {
  2063            "Initiated": "2020-06-26T14:20:36Z",
  2064            "Initiator": {
  2065              "DisplayName": "XXX",
  2066              "ID": "arn:aws:iam::XXX:user/XXX"
  2067            },
  2068            "Key": "KEY",
  2069            "Owner": {
  2070              "DisplayName": null,
  2071              "ID": "XXX"
  2072            },
  2073            "StorageClass": "STANDARD",
  2074            "UploadId": "XXX"
  2075          }
  2076        ],
  2077        "rclone-1000files": [],
  2078        "rclone-dst": []
  2079      }
  2080  
  2081  
  2082  
  2083  ### cleanup
  2084  
  2085  Remove unfinished multipart uploads.
  2086  
  2087      rclone backend cleanup remote: [options] [<arguments>+]
  2088  
  2089  This command removes unfinished multipart uploads of age greater than
  2090  max-age which defaults to 24 hours.
  2091  
  2092  Note that you can use --interactive/-i or --dry-run with this command to see what
  2093  it would do.
  2094  
  2095      rclone backend cleanup s3:bucket/path/to/object
  2096      rclone backend cleanup -o max-age=7w s3:bucket/path/to/object
  2097  
  2098  Durations are parsed as per the rest of rclone, 2h, 7d, 7w etc.
  2099  
  2100  
  2101  Options:
  2102  
  2103  - "max-age": Max age of upload to delete
  2104  
  2105  ### cleanup-hidden
  2106  
  2107  Remove old versions of files.
  2108  
  2109      rclone backend cleanup-hidden remote: [options] [<arguments>+]
  2110  
  2111  This command removes any old hidden versions of files
  2112  on a versions enabled bucket.
  2113  
  2114  Note that you can use --interactive/-i or --dry-run with this command to see what
  2115  it would do.
  2116  
  2117      rclone backend cleanup-hidden s3:bucket/path/to/dir
  2118  
  2119  
  2120  ### versioning
  2121  
  2122  Set/get versioning support for a bucket.
  2123  
  2124      rclone backend versioning remote: [options] [<arguments>+]
  2125  
  2126  This command sets versioning support if a parameter is
  2127  passed and then returns the current versioning status for the bucket
  2128  supplied.
  2129  
  2130      rclone backend versioning s3:bucket # read status only
  2131      rclone backend versioning s3:bucket Enabled
  2132      rclone backend versioning s3:bucket Suspended
  2133  
  2134  It may return "Enabled", "Suspended" or "Unversioned". Note that once versioning
  2135  has been enabled the status can't be set back to "Unversioned".
  2136  
  2137  
  2138  ### set
  2139  
  2140  Set command for updating the config parameters.
  2141  
  2142      rclone backend set remote: [options] [<arguments>+]
  2143  
  2144  This set command can be used to update the config parameters
  2145  for a running s3 backend.
  2146  
  2147  Usage Examples:
  2148  
  2149      rclone backend set s3: [-o opt_name=opt_value] [-o opt_name2=opt_value2]
  2150      rclone rc backend/command command=set fs=s3: [-o opt_name=opt_value] [-o opt_name2=opt_value2]
  2151      rclone rc backend/command command=set fs=s3: -o session_token=X -o access_key_id=X -o secret_access_key=X
  2152  
  2153  The option keys are named as they are in the config file.
  2154  
  2155  This rebuilds the connection to the s3 backend when it is called with
  2156  the new parameters. Only new parameters need be passed as the values
  2157  will default to those currently in use.
  2158  
  2159  It doesn't return anything.
  2160  
  2161  
  2162  {{< rem autogenerated options stop >}}
  2163  
  2164  ### Anonymous access to public buckets
  2165  
  2166  If you want to use rclone to access a public bucket, configure with a
  2167  blank `access_key_id` and `secret_access_key`.  Your config should end
  2168  up looking like this:
  2169  
  2170  ```
  2171  [anons3]
  2172  type = s3
  2173  provider = AWS
  2174  env_auth = false
  2175  access_key_id =
  2176  secret_access_key =
  2177  region = us-east-1
  2178  endpoint =
  2179  location_constraint =
  2180  acl = private
  2181  server_side_encryption =
  2182  storage_class =
  2183  ```
  2184  
  2185  Then use it as normal with the name of the public bucket, e.g.
  2186  
  2187      rclone lsd anons3:1000genomes
  2188  
  2189  You will be able to list and copy data but not upload it.
  2190  
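For example, to download something from the bucket (path
illustrative):

    rclone copy anons3:1000genomes/some/path /tmp/1000genomes
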
  2191  ## Providers
  2192  
  2193  ### AWS S3
  2194  
This is the provider used as the main example and is described in the [configuration](#configuration) section above.
  2196  
  2197  ### AWS Snowball Edge
  2198  
  2199  [AWS Snowball](https://aws.amazon.com/snowball/) is a hardware
  2200  appliance used for transferring bulk data back to AWS. Its main
  2201  software interface is S3 object storage.
  2202  
  2203  To use rclone with AWS Snowball Edge devices, configure as standard
  2204  for an 'S3 Compatible Service'.
  2205  
  2206  If using rclone pre v1.59 be sure to set `upload_cutoff = 0` otherwise
  2207  you will run into authentication header issues as the snowball device
  2208  does not support query parameter based authentication.
  2209  
  2210  With rclone v1.59 or later setting `upload_cutoff` should not be necessary.
  2211  
For example:
  2213  ```
  2214  [snowball]
  2215  type = s3
  2216  provider = Other
  2217  access_key_id = YOUR_ACCESS_KEY
  2218  secret_access_key = YOUR_SECRET_KEY
  2219  endpoint = http://[IP of Snowball]:8080
  2220  upload_cutoff = 0
  2221  ```
  2222  
  2223  ### Ceph
  2224  
  2225  [Ceph](https://ceph.com/) is an open-source, unified, distributed
  2226  storage system designed for excellent performance, reliability and
  2227  scalability.  It has an S3 compatible object storage interface.
  2228  
  2229  To use rclone with Ceph, configure as above but leave the region blank
  2230  and set the endpoint.  You should end up with something like this in
  2231  your config:
  2232  
  2233  
  2234  ```
  2235  [ceph]
  2236  type = s3
  2237  provider = Ceph
  2238  env_auth = false
  2239  access_key_id = XXX
  2240  secret_access_key = YYY
  2241  region =
  2242  endpoint = https://ceph.endpoint.example.com
  2243  location_constraint =
  2244  acl =
  2245  server_side_encryption =
  2246  storage_class =
  2247  ```
  2248  
  2249  If you are using an older version of CEPH (e.g. 10.2.x Jewel) and a
  2250  version of rclone before v1.59 then you may need to supply the
  2251  parameter `--s3-upload-cutoff 0` or put this in the config file as
  2252  `upload_cutoff 0` to work around a bug which causes uploading of small
  2253  files to fail.
  2254  
  2255  Note also that Ceph sometimes puts `/` in the passwords it gives
  2256  users.  If you read the secret access key using the command line tools
  2257  you will get a JSON blob with the `/` escaped as `\/`.  Make sure you
  2258  only write `/` in the secret access key.
  2259  
  2260  Eg the dump from Ceph looks something like this (irrelevant keys
  2261  removed).
  2262  
  2263  ```
  2264  {
  2265      "user_id": "xxx",
  2266      "display_name": "xxxx",
  2267      "keys": [
  2268          {
  2269              "user": "xxx",
  2270              "access_key": "xxxxxx",
  2271              "secret_key": "xxxxxx\/xxxx"
  2272          }
  2273      ],
  2274  }
  2275  ```
  2276  
Because this is a JSON dump, it is encoding the `/` as `\/`, so if you
use the secret key as `xxxxxx/xxxx` it will work fine.
  2279  
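A JSON-aware tool will do the unescaping for you. For example,
assuming the dump above has been saved as `user.json`, this `jq`
invocation prints the secret key with the `/` already unescaped:

```
jq -r '.keys[0].secret_key' user.json
```
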
  2280  ### Cloudflare R2 {#cloudflare-r2}
  2281  
  2282  [Cloudflare R2](https://blog.cloudflare.com/r2-open-beta/) Storage
  2283  allows developers to store large amounts of unstructured data without
  2284  the costly egress bandwidth fees associated with typical cloud storage
  2285  services.
  2286  
  2287  Here is an example of making a Cloudflare R2 configuration. First run:
  2288  
  2289      rclone config
  2290  
  2291  This will guide you through an interactive setup process.
  2292  
  2293  Note that all buckets are private, and all are stored in the same
  2294  "auto" region. It is necessary to use Cloudflare workers to share the
  2295  content of a bucket publicly.
  2296  
  2297  ```
  2298  No remotes found, make a new one?
  2299  n) New remote
  2300  s) Set configuration password
  2301  q) Quit config
  2302  n/s/q> n
  2303  name> r2
  2304  Option Storage.
  2305  Type of storage to configure.
  2306  Choose a number from below, or type in your own value.
  2307  ...
  2308  XX / Amazon S3 Compliant Storage Providers including AWS, Alibaba, Ceph, China Mobile, Cloudflare, ArvanCloud, DigitalOcean, Dreamhost, Huawei OBS, IBM COS, Lyve Cloud, Minio, Netease, RackCorp, Scaleway, SeaweedFS, StackPath, Storj, Synology, Tencent COS and Wasabi
  2309     \ (s3)
  2310  ...
  2311  Storage> s3
  2312  Option provider.
  2313  Choose your S3 provider.
  2314  Choose a number from below, or type in your own value.
  2315  Press Enter to leave empty.
  2316  ...
  2317  XX / Cloudflare R2 Storage
  2318     \ (Cloudflare)
  2319  ...
  2320  provider> Cloudflare
  2321  Option env_auth.
  2322  Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars).
  2323  Only applies if access_key_id and secret_access_key is blank.
  2324  Choose a number from below, or type in your own boolean value (true or false).
  2325  Press Enter for the default (false).
  2326   1 / Enter AWS credentials in the next step.
  2327     \ (false)
  2328   2 / Get AWS credentials from the environment (env vars or IAM).
  2329     \ (true)
  2330  env_auth> 1
  2331  Option access_key_id.
  2332  AWS Access Key ID.
  2333  Leave blank for anonymous access or runtime credentials.
  2334  Enter a value. Press Enter to leave empty.
  2335  access_key_id> ACCESS_KEY
  2336  Option secret_access_key.
  2337  AWS Secret Access Key (password).
  2338  Leave blank for anonymous access or runtime credentials.
  2339  Enter a value. Press Enter to leave empty.
  2340  secret_access_key> SECRET_ACCESS_KEY
  2341  Option region.
  2342  Region to connect to.
  2343  Choose a number from below, or type in your own value.
  2344  Press Enter to leave empty.
  2345   1 / R2 buckets are automatically distributed across Cloudflare's data centers for low latency.
  2346     \ (auto)
  2347  region> 1
  2348  Option endpoint.
  2349  Endpoint for S3 API.
  2350  Required when using an S3 clone.
  2351  Enter a value. Press Enter to leave empty.
  2352  endpoint> https://ACCOUNT_ID.r2.cloudflarestorage.com
  2353  Edit advanced config?
  2354  y) Yes
  2355  n) No (default)
  2356  y/n> n
  2357  --------------------
  2358  y) Yes this is OK (default)
  2359  e) Edit this remote
  2360  d) Delete this remote
  2361  y/e/d> y
  2362  ```
  2363  
  2364  This will leave your config looking something like:
  2365  
  2366  ```
  2367  [r2]
  2368  type = s3
  2369  provider = Cloudflare
  2370  access_key_id = ACCESS_KEY
  2371  secret_access_key = SECRET_ACCESS_KEY
  2372  region = auto
  2373  endpoint = https://ACCOUNT_ID.r2.cloudflarestorage.com
  2374  acl = private
  2375  ```
  2376  
  2377  Now run `rclone lsf r2:` to see your buckets and `rclone lsf
  2378  r2:bucket` to look within a bucket.
  2379  
  2380  For R2 tokens with the "Object Read & Write" permission, you may also need to add `no_check_bucket = true` for object uploads to work correctly.
  2381  
  2382  ### Dreamhost
  2383  
  2384  Dreamhost [DreamObjects](https://www.dreamhost.com/cloud/storage/) is
  2385  an object storage system based on CEPH.
  2386  
  2387  To use rclone with Dreamhost, configure as above but leave the region blank
  2388  and set the endpoint.  You should end up with something like this in
  2389  your config:
  2390  
  2391  ```
  2392  [dreamobjects]
  2393  type = s3
  2394  provider = DreamHost
  2395  env_auth = false
  2396  access_key_id = your_access_key
  2397  secret_access_key = your_secret_key
  2398  region =
  2399  endpoint = objects-us-west-1.dream.io
  2400  location_constraint =
  2401  acl = private
  2402  server_side_encryption =
  2403  storage_class =
  2404  ```
  2405  
  2406  ### Google Cloud Storage
  2407  
[Google Cloud Storage](https://cloud.google.com/storage/docs) is an [S3-interoperable](https://cloud.google.com/storage/docs/interoperability) object storage service from Google Cloud Platform.
  2409  
  2410  To connect to Google Cloud Storage you will need an access key and secret key. These can be retrieved by creating an [HMAC key](https://cloud.google.com/storage/docs/authentication/managing-hmackeys).
  2411  
  2412  ```
  2413  [gs]
  2414  type = s3
  2415  provider = GCS
  2416  access_key_id = your_access_key
  2417  secret_access_key = your_secret_key
  2418  endpoint = https://storage.googleapis.com
  2419  ```
  2420  
  2421  **Note** that `--s3-versions` does not work with GCS when it needs to do directory paging. Rclone will return the error:
  2422  
  2423      s3 protocol error: received versions listing with IsTruncated set with no NextKeyMarker
  2424  
  2425  This is Google bug [#312292516](https://issuetracker.google.com/u/0/issues/312292516).
  2426  
  2427  ### DigitalOcean Spaces
  2428  
  2429  [Spaces](https://www.digitalocean.com/products/object-storage/) is an [S3-interoperable](https://developers.digitalocean.com/documentation/spaces/) object storage service from cloud provider DigitalOcean.
  2430  
  2431  To connect to DigitalOcean Spaces you will need an access key and secret key. These can be retrieved on the "[Applications & API](https://cloud.digitalocean.com/settings/api/tokens)" page of the DigitalOcean control panel. They will be needed when prompted by `rclone config` for your `access_key_id` and `secret_access_key`.
  2432  
  2433  When prompted for a `region` or `location_constraint`, press enter to use the default value. The region must be included in the `endpoint` setting (e.g. `nyc3.digitaloceanspaces.com`). The default values can be used for other settings.
  2434  
  2435  Going through the whole process of creating a new remote by running `rclone config`, each prompt should be answered as shown below:
  2436  
  2437  ```
  2438  Storage> s3
  2439  env_auth> 1
  2440  access_key_id> YOUR_ACCESS_KEY
  2441  secret_access_key> YOUR_SECRET_KEY
  2442  region>
  2443  endpoint> nyc3.digitaloceanspaces.com
  2444  location_constraint>
  2445  acl>
  2446  storage_class>
  2447  ```
  2448  
  2449  The resulting configuration file should look like:
  2450  
  2451  ```
  2452  [spaces]
  2453  type = s3
  2454  provider = DigitalOcean
  2455  env_auth = false
  2456  access_key_id = YOUR_ACCESS_KEY
  2457  secret_access_key = YOUR_SECRET_KEY
  2458  region =
  2459  endpoint = nyc3.digitaloceanspaces.com
  2460  location_constraint =
  2461  acl =
  2462  server_side_encryption =
  2463  storage_class =
  2464  ```
  2465  
  2466  Once configured, you can create a new Space and begin copying files. For example:
  2467  
  2468  ```
  2469  rclone mkdir spaces:my-new-space
  2470  rclone copy /path/to/files spaces:my-new-space
  2471  ```
  2472  
  2473  ### Huawei OBS {#huawei-obs}
  2474  
  2475  Object Storage Service (OBS) provides stable, secure, efficient, and easy-to-use cloud storage that lets you store virtually any volume of unstructured data in any format and access it from anywhere.
  2476  
OBS provides an S3 interface; you can copy and modify the following configuration and add it to your rclone configuration file.
  2478  ```
  2479  [obs]
  2480  type = s3
  2481  provider = HuaweiOBS
  2482  access_key_id = your-access-key-id
  2483  secret_access_key = your-secret-access-key
  2484  region = af-south-1
  2485  endpoint = obs.af-south-1.myhuaweicloud.com
  2486  acl = private
  2487  ```
  2488  
  2489  Or you can also configure via the interactive command line:
  2490  ```
  2491  No remotes found, make a new one?
  2492  n) New remote
  2493  s) Set configuration password
  2494  q) Quit config
  2495  n/s/q> n
  2496  name> obs
  2497  Option Storage.
  2498  Type of storage to configure.
  2499  Choose a number from below, or type in your own value.
  2500  [snip]
  2501  XX / Amazon S3 Compliant Storage Providers including AWS, ...
  2502     \ (s3)
  2503  [snip]
  2504  Storage> s3
  2505  Option provider.
  2506  Choose your S3 provider.
  2507  Choose a number from below, or type in your own value.
  2508  Press Enter to leave empty.
  2509  [snip]
  2510   9 / Huawei Object Storage Service
  2511     \ (HuaweiOBS)
  2512  [snip]
  2513  provider> 9
  2514  Option env_auth.
  2515  Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars).
  2516  Only applies if access_key_id and secret_access_key is blank.
  2517  Choose a number from below, or type in your own boolean value (true or false).
  2518  Press Enter for the default (false).
  2519   1 / Enter AWS credentials in the next step.
  2520     \ (false)
  2521   2 / Get AWS credentials from the environment (env vars or IAM).
  2522     \ (true)
  2523  env_auth> 1
  2524  Option access_key_id.
  2525  AWS Access Key ID.
  2526  Leave blank for anonymous access or runtime credentials.
  2527  Enter a value. Press Enter to leave empty.
  2528  access_key_id> your-access-key-id
  2529  Option secret_access_key.
  2530  AWS Secret Access Key (password).
  2531  Leave blank for anonymous access or runtime credentials.
  2532  Enter a value. Press Enter to leave empty.
  2533  secret_access_key> your-secret-access-key
  2534  Option region.
  2535  Region to connect to.
  2536  Choose a number from below, or type in your own value.
  2537  Press Enter to leave empty.
  2538   1 / AF-Johannesburg
  2539     \ (af-south-1)
  2540   2 / AP-Bangkok
  2541     \ (ap-southeast-2)
  2542  [snip]
  2543  region> 1
  2544  Option endpoint.
  2545  Endpoint for OBS API.
  2546  Choose a number from below, or type in your own value.
  2547  Press Enter to leave empty.
  2548   1 / AF-Johannesburg
  2549     \ (obs.af-south-1.myhuaweicloud.com)
  2550   2 / AP-Bangkok
  2551     \ (obs.ap-southeast-2.myhuaweicloud.com)
  2552  [snip]
  2553  endpoint> 1
  2554  Option acl.
  2555  Canned ACL used when creating buckets and storing or copying objects.
  2556  This ACL is used for creating objects and if bucket_acl isn't set, for creating buckets too.
  2557  For more info visit https://docs.aws.amazon.com/AmazonS3/latest/dev/acl-overview.html#canned-acl
  2558  Note that this ACL is applied when server-side copying objects as S3
  2559  doesn't copy the ACL from the source but rather writes a fresh one.
  2560  Choose a number from below, or type in your own value.
  2561  Press Enter to leave empty.
  2562     / Owner gets FULL_CONTROL.
  2563   1 | No one else has access rights (default).
  2564     \ (private)
  2565  [snip]
  2566  acl> 1
  2567  Edit advanced config?
  2568  y) Yes
  2569  n) No (default)
  2570  y/n>
  2571  --------------------
  2572  [obs]
  2573  type = s3
  2574  provider = HuaweiOBS
  2575  access_key_id = your-access-key-id
  2576  secret_access_key = your-secret-access-key
  2577  region = af-south-1
  2578  endpoint = obs.af-south-1.myhuaweicloud.com
  2579  acl = private
  2580  --------------------
  2581  y) Yes this is OK (default)
  2582  e) Edit this remote
  2583  d) Delete this remote
  2584  y/e/d> y
  2585  Current remotes:
  2586  
  2587  Name                 Type
  2588  ====                 ====
  2589  obs                  s3
  2590  
  2591  e) Edit existing remote
  2592  n) New remote
  2593  d) Delete remote
  2594  r) Rename remote
  2595  c) Copy remote
  2596  s) Set configuration password
  2597  q) Quit config
  2598  e/n/d/r/c/s/q> q
  2599  ```
  2600  
  2601  ### IBM COS (S3)
  2602  
Information stored with IBM Cloud Object Storage is encrypted and dispersed across multiple geographic locations, and accessed through an implementation of the S3 API. This service makes use of the distributed storage technologies provided by IBM’s Cloud Object Storage System (formerly Cleversafe). For more information visit http://www.ibm.com/cloud/object-storage.
  2604  
  2605  To configure access to IBM COS S3, follow the steps below:
  2606  
  2607  1. Run rclone config and select n for a new remote.
  2608  ```
  2609  	2018/02/14 14:13:11 NOTICE: Config file "C:\\Users\\a\\.config\\rclone\\rclone.conf" not found - using defaults
  2610  	No remotes found, make a new one?
  2611  	n) New remote
  2612  	s) Set configuration password
  2613  	q) Quit config
  2614  	n/s/q> n
  2615  ```
  2616  
  2617  2. Enter the name for the configuration
  2618  ```
  2619  	name> <YOUR NAME>
  2620  ```
  2621  
  2622  3. Select "s3" storage.
  2623  ```
  2624  Choose a number from below, or type in your own value
  2625  [snip]
  2626  XX / Amazon S3 Compliant Storage Providers including AWS, ...
  2627     \ "s3"
  2628  [snip]
  2629  Storage> s3
  2630  ```
  2631  
  2632  4. Select IBM COS as the S3 Storage Provider.
  2633  ```
  2634  Choose the S3 provider.
  2635  Choose a number from below, or type in your own value
  2636  	 1 / Choose this option to configure Storage to AWS S3
  2637  	   \ "AWS"
  2638   	 2 / Choose this option to configure Storage to Ceph Systems
  2639    	 \ "Ceph"
  2640  	 3 /  Choose this option to configure Storage to Dreamhost
  2641       \ "Dreamhost"
  2642     4 / Choose this option to the configure Storage to IBM COS S3
  2643     	 \ "IBMCOS"
  2644   	 5 / Choose this option to the configure Storage to Minio
  2645       \ "Minio"
  2646  	 Provider>4
  2647  ```
  2648  
  2649  5. Enter the Access Key and Secret.
  2650  ```
  2651  	AWS Access Key ID - leave blank for anonymous access or runtime credentials.
  2652  	access_key_id> <>
  2653  	AWS Secret Access Key (password) - leave blank for anonymous access or runtime credentials.
  2654  	secret_access_key> <>
  2655  ```
  2656  
6. Specify the endpoint for IBM COS. For Public IBM COS, choose from the options below. For On Premise IBM COS, enter an endpoint address.
  2658  ```
  2659  	Endpoint for IBM COS S3 API.
  2660  	Specify if using an IBM COS On Premise.
  2661  	Choose a number from below, or type in your own value
  2662  	 1 / US Cross Region Endpoint
  2663     	   \ "s3-api.us-geo.objectstorage.softlayer.net"
  2664  	 2 / US Cross Region Dallas Endpoint
  2665     	   \ "s3-api.dal.us-geo.objectstorage.softlayer.net"
  2666   	 3 / US Cross Region Washington DC Endpoint
  2667     	   \ "s3-api.wdc-us-geo.objectstorage.softlayer.net"
  2668  	 4 / US Cross Region San Jose Endpoint
  2669  	   \ "s3-api.sjc-us-geo.objectstorage.softlayer.net"
  2670  	 5 / US Cross Region Private Endpoint
  2671  	   \ "s3-api.us-geo.objectstorage.service.networklayer.com"
  2672  	 6 / US Cross Region Dallas Private Endpoint
  2673  	   \ "s3-api.dal-us-geo.objectstorage.service.networklayer.com"
  2674  	 7 / US Cross Region Washington DC Private Endpoint
  2675  	   \ "s3-api.wdc-us-geo.objectstorage.service.networklayer.com"
  2676  	 8 / US Cross Region San Jose Private Endpoint
  2677  	   \ "s3-api.sjc-us-geo.objectstorage.service.networklayer.com"
  2678  	 9 / US Region East Endpoint
  2679  	   \ "s3.us-east.objectstorage.softlayer.net"
  2680  	10 / US Region East Private Endpoint
  2681  	   \ "s3.us-east.objectstorage.service.networklayer.com"
  2682  	11 / US Region South Endpoint
  2683  [snip]
  2684  	34 / Toronto Single Site Private Endpoint
  2685  	   \ "s3.tor01.objectstorage.service.networklayer.com"
  2686  	endpoint>1
  2687  ```
  2688  
  2689  
7. Specify an IBM COS Location Constraint. The location constraint must match the endpoint when using IBM Cloud Public. For on-prem COS, do not make a selection from this list; just press Enter.
  2691  ```
  2692  	 1 / US Cross Region Standard
  2693  	   \ "us-standard"
  2694  	 2 / US Cross Region Vault
  2695  	   \ "us-vault"
  2696  	 3 / US Cross Region Cold
  2697  	   \ "us-cold"
  2698  	 4 / US Cross Region Flex
  2699  	   \ "us-flex"
  2700  	 5 / US East Region Standard
  2701  	   \ "us-east-standard"
  2702  	 6 / US East Region Vault
  2703  	   \ "us-east-vault"
  2704  	 7 / US East Region Cold
  2705  	   \ "us-east-cold"
  2706  	 8 / US East Region Flex
  2707  	   \ "us-east-flex"
  2708  	 9 / US South Region Standard
  2709  	   \ "us-south-standard"
  2710  	10 / US South Region Vault
  2711  	   \ "us-south-vault"
  2712  [snip]
  2713  	32 / Toronto Flex
  2714  	   \ "tor01-flex"
  2715  location_constraint>1
  2716  ```
  2717  
8. Specify a canned ACL. IBM Cloud (Storage) supports "public-read" and "private". IBM Cloud (Infra) supports all the canned ACLs. On-Premise COS supports all the canned ACLs.
  2719  ```
  2720  Canned ACL used when creating buckets and/or storing objects in S3.
  2721  For more info visit https://docs.aws.amazon.com/AmazonS3/latest/dev/acl-overview.html#canned-acl
  2722  Choose a number from below, or type in your own value
  2723        1 / Owner gets FULL_CONTROL. No one else has access rights (default). This acl is available on IBM Cloud (Infra), IBM Cloud (Storage), On-Premise COS
  2724        \ "private"
  2725        2  / Owner gets FULL_CONTROL. The AllUsers group gets READ access. This acl is available on IBM Cloud (Infra), IBM Cloud (Storage), On-Premise IBM COS
  2726        \ "public-read"
  2727        3 / Owner gets FULL_CONTROL. The AllUsers group gets READ and WRITE access. This acl is available on IBM Cloud (Infra), On-Premise IBM COS
  2728        \ "public-read-write"
  2729        4  / Owner gets FULL_CONTROL. The AuthenticatedUsers group gets READ access. Not supported on Buckets. This acl is available on IBM Cloud (Infra) and On-Premise IBM COS
  2730        \ "authenticated-read"
  2731  acl> 1
  2732  ```
  2733  
  2734  
9. Review the displayed configuration and accept to save the "remote", then quit. The config file should look like this:
  2736  ```
  2737  	[xxx]
  2738  	type = s3
	provider = IBMCOS
  2740  	access_key_id = xxx
  2741  	secret_access_key = yyy
  2742  	endpoint = s3-api.us-geo.objectstorage.softlayer.net
  2743  	location_constraint = us-standard
  2744  	acl = private
  2745  ```
  2746  
10. Execute rclone commands:
  2748  ```
  2749  	1)	Create a bucket.
  2750  		rclone mkdir IBM-COS-XREGION:newbucket
  2751  	2)	List available buckets.
  2752  		rclone lsd IBM-COS-XREGION:
  2753  		-1 2017-11-08 21:16:22        -1 test
  2754  		-1 2018-02-14 20:16:39        -1 newbucket
  2755  	3)	List contents of a bucket.
  2756  		rclone ls IBM-COS-XREGION:newbucket
  2757  		18685952 test.exe
  2758  	4)	Copy a file from local to remote.
  2759  		rclone copy /Users/file.txt IBM-COS-XREGION:newbucket
  2760  	5)	Copy a file from remote to local.
  2761  		rclone copy IBM-COS-XREGION:newbucket/file.txt .
  2762  	6)	Delete a file on remote.
  2763  		rclone delete IBM-COS-XREGION:newbucket/file.txt
  2764  ```
  2765  
  2766  ### IDrive e2 {#idrive-e2}
  2767  
  2768  Here is an example of making an [IDrive e2](https://www.idrive.com/e2/)
  2769  configuration.  First run:
  2770  
  2771      rclone config
  2772  
  2773  This will guide you through an interactive setup process.
  2774  
  2775  ```
  2776  No remotes found, make a new one?
  2777  n) New remote
  2778  s) Set configuration password
  2779  q) Quit config
  2780  n/s/q> n
  2781  
  2782  Enter name for new remote.
  2783  name> e2
  2784  
  2785  Option Storage.
  2786  Type of storage to configure.
  2787  Choose a number from below, or type in your own value.
  2788  [snip]
  2789  XX / Amazon S3 Compliant Storage Providers including AWS, ...
  2790     \ (s3)
  2791  [snip]
  2792  Storage> s3
  2793  
  2794  Option provider.
  2795  Choose your S3 provider.
  2796  Choose a number from below, or type in your own value.
  2797  Press Enter to leave empty.
  2798  [snip]
  2799  XX / IDrive e2
  2800     \ (IDrive)
  2801  [snip]
  2802  provider> IDrive
  2803  
  2804  Option env_auth.
  2805  Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars).
  2806  Only applies if access_key_id and secret_access_key is blank.
  2807  Choose a number from below, or type in your own boolean value (true or false).
  2808  Press Enter for the default (false).
  2809   1 / Enter AWS credentials in the next step.
  2810     \ (false)
  2811   2 / Get AWS credentials from the environment (env vars or IAM).
  2812     \ (true)
  2813  env_auth> 
  2814  
  2815  Option access_key_id.
  2816  AWS Access Key ID.
  2817  Leave blank for anonymous access or runtime credentials.
  2818  Enter a value. Press Enter to leave empty.
  2819  access_key_id> YOUR_ACCESS_KEY
  2820  
  2821  Option secret_access_key.
  2822  AWS Secret Access Key (password).
  2823  Leave blank for anonymous access or runtime credentials.
  2824  Enter a value. Press Enter to leave empty.
  2825  secret_access_key> YOUR_SECRET_KEY
  2826  
  2827  Option acl.
  2828  Canned ACL used when creating buckets and storing or copying objects.
  2829  This ACL is used for creating objects and if bucket_acl isn't set, for creating buckets too.
  2830  For more info visit https://docs.aws.amazon.com/AmazonS3/latest/dev/acl-overview.html#canned-acl
  2831  Note that this ACL is applied when server-side copying objects as S3
  2832  doesn't copy the ACL from the source but rather writes a fresh one.
  2833  Choose a number from below, or type in your own value.
  2834  Press Enter to leave empty.
  2835     / Owner gets FULL_CONTROL.
  2836   1 | No one else has access rights (default).
  2837     \ (private)
  2838     / Owner gets FULL_CONTROL.
  2839   2 | The AllUsers group gets READ access.
  2840     \ (public-read)
  2841     / Owner gets FULL_CONTROL.
  2842   3 | The AllUsers group gets READ and WRITE access.
  2843     | Granting this on a bucket is generally not recommended.
  2844     \ (public-read-write)
  2845     / Owner gets FULL_CONTROL.
  2846   4 | The AuthenticatedUsers group gets READ access.
  2847     \ (authenticated-read)
  2848     / Object owner gets FULL_CONTROL.
  2849   5 | Bucket owner gets READ access.
  2850     | If you specify this canned ACL when creating a bucket, Amazon S3 ignores it.
  2851     \ (bucket-owner-read)
  2852     / Both the object owner and the bucket owner get FULL_CONTROL over the object.
  2853   6 | If you specify this canned ACL when creating a bucket, Amazon S3 ignores it.
  2854     \ (bucket-owner-full-control)
  2855  acl> 
  2856  
  2857  Edit advanced config?
  2858  y) Yes
  2859  n) No (default)
  2860  y/n> 
  2861  
  2862  Configuration complete.
  2863  Options:
  2864  - type: s3
  2865  - provider: IDrive
  2866  - access_key_id: YOUR_ACCESS_KEY
  2867  - secret_access_key: YOUR_SECRET_KEY
  2868  - endpoint: q9d9.la12.idrivee2-5.com
  2869  Keep this "e2" remote?
  2870  y) Yes this is OK (default)
  2871  e) Edit this remote
  2872  d) Delete this remote
  2873  y/e/d> y
  2874  ```
  2875  
  2876  ### IONOS Cloud {#ionos}
  2877  
  2878  [IONOS S3 Object Storage](https://cloud.ionos.com/storage/object-storage) is a service offered by IONOS for storing and accessing unstructured data.
  2879  To connect to the service, you will need an access key and a secret key. These can be found in the [Data Center Designer](https://dcd.ionos.com/), by selecting **Manager resources** > **Object Storage Key Manager**.
  2880  
  2881  
  2882  Here is an example of a configuration. First, run `rclone config`. This will walk you through an interactive setup process. Type `n` to add the new remote, and then enter a name:
  2883  
  2884  ```
  2885  Enter name for new remote.
  2886  name> ionos-fra
  2887  ```
  2888  
  2889  Type `s3` to choose the connection type:
  2890  ```
  2891  Option Storage.
  2892  Type of storage to configure.
  2893  Choose a number from below, or type in your own value.
  2894  [snip]
  2895  XX / Amazon S3 Compliant Storage Providers including AWS, ...
  2896     \ (s3)
  2897  [snip]
  2898  Storage> s3
  2899  ```
  2900  
  2901  Type `IONOS`:
  2902  ```
  2903  Option provider.
  2904  Choose your S3 provider.
  2905  Choose a number from below, or type in your own value.
  2906  Press Enter to leave empty.
  2907  [snip]
  2908  XX / IONOS Cloud
  2909     \ (IONOS)
  2910  [snip]
  2911  provider> IONOS
  2912  ```
  2913  
  2914  Press Enter to choose the default option `Enter AWS credentials in the next step`:
  2915  ```
  2916  Option env_auth.
  2917  Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars).
  2918  Only applies if access_key_id and secret_access_key is blank.
  2919  Choose a number from below, or type in your own boolean value (true or false).
  2920  Press Enter for the default (false).
  2921   1 / Enter AWS credentials in the next step.
  2922     \ (false)
  2923   2 / Get AWS credentials from the environment (env vars or IAM).
  2924     \ (true)
  2925  env_auth>
  2926  ```
  2927  
Enter your access key and secret key. These can be retrieved in the [Data Center Designer](https://dcd.ionos.com/) by selecting **Manager resources** > **Object Storage Key Manager**.
  2929  ```
  2930  Option access_key_id.
  2931  AWS Access Key ID.
  2932  Leave blank for anonymous access or runtime credentials.
  2933  Enter a value. Press Enter to leave empty.
  2934  access_key_id> YOUR_ACCESS_KEY
  2935  
  2936  Option secret_access_key.
  2937  AWS Secret Access Key (password).
  2938  Leave blank for anonymous access or runtime credentials.
  2939  Enter a value. Press Enter to leave empty.
  2940  secret_access_key> YOUR_SECRET_KEY
  2941  ```
  2942  
  2943  Choose the region where your bucket is located:
  2944  ```
  2945  Option region.
  2946  Region where your bucket will be created and your data stored.
  2947  Choose a number from below, or type in your own value.
  2948  Press Enter to leave empty.
  2949   1 / Frankfurt, Germany
  2950     \ (de)
  2951   2 / Berlin, Germany
  2952     \ (eu-central-2)
  2953   3 / Logrono, Spain
  2954     \ (eu-south-2)
  2955  region> 2
  2956  ```
  2957  
  2958  Choose the endpoint from the same region:
  2959  ```
  2960  Option endpoint.
  2961  Endpoint for IONOS S3 Object Storage.
  2962  Specify the endpoint from the same region.
  2963  Choose a number from below, or type in your own value.
  2964  Press Enter to leave empty.
  2965   1 / Frankfurt, Germany
  2966     \ (s3-eu-central-1.ionoscloud.com)
  2967   2 / Berlin, Germany
  2968     \ (s3-eu-central-2.ionoscloud.com)
  2969   3 / Logrono, Spain
  2970     \ (s3-eu-south-2.ionoscloud.com)
  2971  endpoint> 1
  2972  ```
  2973  
  2974  Press Enter to choose the default option or choose the desired ACL setting:
  2975  ```
  2976  Option acl.
  2977  Canned ACL used when creating buckets and storing or copying objects.
  2978  This ACL is used for creating objects and if bucket_acl isn't set, for creating buckets too.
  2979  For more info visit https://docs.aws.amazon.com/AmazonS3/latest/dev/acl-overview.html#canned-acl
  2980  Note that this ACL is applied when server-side copying objects as S3
  2981  doesn't copy the ACL from the source but rather writes a fresh one.
  2982  Choose a number from below, or type in your own value.
  2983  Press Enter to leave empty.
  2984     / Owner gets FULL_CONTROL.
  2985   1 | No one else has access rights (default).
  2986     \ (private)
  2987     / Owner gets FULL_CONTROL.
  2988  [snip]
  2989  acl>
  2990  ```
  2991  
  2992  Press Enter to skip the advanced config:
  2993  ```
  2994  Edit advanced config?
  2995  y) Yes
  2996  n) No (default)
  2997  y/n>
  2998  ```
  2999  
  3000  Press Enter to save the configuration, and then `q` to quit the configuration process:
  3001  ```
  3002  Configuration complete.
  3003  Options:
  3004  - type: s3
  3005  - provider: IONOS
  3006  - access_key_id: YOUR_ACCESS_KEY
  3007  - secret_access_key: YOUR_SECRET_KEY
  3008  - endpoint: s3-eu-central-1.ionoscloud.com
  3009  Keep this "ionos-fra" remote?
  3010  y) Yes this is OK (default)
  3011  e) Edit this remote
  3012  d) Delete this remote
  3013  y/e/d> y
  3014  ```
  3015  
Done! Now you can try some commands (if the rclone binary is not in your `PATH`, use `./rclone` instead of `rclone`).

1)  Create a bucket (the name must be unique within the whole of IONOS S3)
```
rclone mkdir ionos-fra:my-bucket
```
2)  List available buckets
```
rclone lsd ionos-fra:
```
3)  Copy a file from local to remote
```
rclone copy /Users/file.txt ionos-fra:my-bucket
```
4)  List contents of a bucket
```
rclone ls ionos-fra:my-bucket
```
5)  Copy a file from remote to local (here into `/tmp`)
```
rclone copy ionos-fra:my-bucket/file.txt /tmp/
```
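
For recurring jobs you will probably want `rclone sync`, which makes the destination match the source. A minimal sketch, assuming the remote above and illustrative paths (add `--dry-run` first to preview the changes):

```
rclone sync --progress /Users/Documents ionos-fra:my-bucket/documents
```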
  3038  
  3039  ### Minio
  3040  
  3041  [Minio](https://minio.io/) is an object storage server built for cloud application developers and devops.
  3042  
It is very easy to install and provides an S3-compatible server which can be used by rclone.

To use it, install Minio following the instructions [here](https://docs.minio.io/docs/minio-quickstart-guide).

When it configures itself, Minio will print something like this:
  3048  
  3049  ```
  3050  Endpoint:  http://192.168.1.106:9000  http://172.23.0.1:9000
  3051  AccessKey: USWUXHGYZQYFYFFIT3RE
  3052  SecretKey: MOJRH0mkL1IPauahWITSVvyDrQbEEIwljvmxdq03
  3053  Region:    us-east-1
  3054  SQS ARNs:  arn:minio:sqs:us-east-1:1:redis arn:minio:sqs:us-east-1:2:redis
  3055  
  3056  Browser Access:
  3057     http://192.168.1.106:9000  http://172.23.0.1:9000
  3058  
  3059  Command-line Access: https://docs.minio.io/docs/minio-client-quickstart-guide
  3060     $ mc config host add myminio http://192.168.1.106:9000 USWUXHGYZQYFYFFIT3RE MOJRH0mkL1IPauahWITSVvyDrQbEEIwljvmxdq03
  3061  
  3062  Object API (Amazon S3 compatible):
  3063     Go:         https://docs.minio.io/docs/golang-client-quickstart-guide
  3064     Java:       https://docs.minio.io/docs/java-client-quickstart-guide
  3065     Python:     https://docs.minio.io/docs/python-client-quickstart-guide
  3066     JavaScript: https://docs.minio.io/docs/javascript-client-quickstart-guide
  3067     .NET:       https://docs.minio.io/docs/dotnet-client-quickstart-guide
  3068  
  3069  Drive Capacity: 26 GiB Free, 165 GiB Total
  3070  ```
  3071  
  3072  These details need to go into `rclone config` like this.  Note that it
  3073  is important to put the region in as stated above.
  3074  
  3075  ```
  3076  env_auth> 1
  3077  access_key_id> USWUXHGYZQYFYFFIT3RE
  3078  secret_access_key> MOJRH0mkL1IPauahWITSVvyDrQbEEIwljvmxdq03
  3079  region> us-east-1
  3080  endpoint> http://192.168.1.106:9000
  3081  location_constraint>
  3082  server_side_encryption>
  3083  ```
  3084  
  3085  Which makes the config file look like this
  3086  
  3087  ```
  3088  [minio]
  3089  type = s3
  3090  provider = Minio
  3091  env_auth = false
  3092  access_key_id = USWUXHGYZQYFYFFIT3RE
  3093  secret_access_key = MOJRH0mkL1IPauahWITSVvyDrQbEEIwljvmxdq03
  3094  region = us-east-1
  3095  endpoint = http://192.168.1.106:9000
  3096  location_constraint =
  3097  server_side_encryption =
  3098  ```
  3099  
  3100  So once set up, for example, to copy files into a bucket
  3101  
  3102  ```
  3103  rclone copy /path/to/files minio:bucket
  3104  ```
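
For one-off operations you don't even need to save a remote: rclone can take the S3 parameters on the command line as a connection string. A sketch using the example endpoint and keys from above:

```
rclone lsd ":s3,provider=Minio,access_key_id=USWUXHGYZQYFYFFIT3RE,secret_access_key=MOJRH0mkL1IPauahWITSVvyDrQbEEIwljvmxdq03,endpoint='http://192.168.1.106:9000':"
```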
  3105  
  3106  ### Qiniu Cloud Object Storage (Kodo) {#qiniu}
  3107  
[Qiniu Cloud Object Storage (Kodo)](https://www.qiniu.com/en/products/kodo) is Qiniu's independently developed core object storage technology. Proven by extensive customer use, it holds a leading market position and can be widely applied to mass data management.
  3109  
  3110  To configure access to Qiniu Kodo, follow the steps below:
  3111  
  3112  1. Run `rclone config` and select `n` for a new remote.
  3113  
  3114  ```
  3115  rclone config
  3116  No remotes found, make a new one?
  3117  n) New remote
  3118  s) Set configuration password
  3119  q) Quit config
  3120  n/s/q> n
  3121  ```
  3122  
  3123  2. Give the name of the configuration. For example, name it 'qiniu'.
  3124  
  3125  ```
  3126  name> qiniu
  3127  ```
  3128  
  3129  3. Select `s3` storage.
  3130  
  3131  ```
  3132  Choose a number from below, or type in your own value
  3133  [snip]
  3134  XX / Amazon S3 Compliant Storage Providers including AWS, ...
  3135     \ (s3)
  3136  [snip]
  3137  Storage> s3
  3138  ```
  3139  
  3140  4. Select `Qiniu` provider.
  3141  ```
  3142  Choose a number from below, or type in your own value
  3143  1 / Amazon Web Services (AWS) S3
  3144     \ "AWS"
  3145  [snip]
  3146  22 / Qiniu Object Storage (Kodo)
  3147     \ (Qiniu)
  3148  [snip]
  3149  provider> Qiniu
  3150  ```
  3151  
5. Enter your AccessKey and SecretKey for Qiniu Kodo.
  3153  
  3154  ```
  3155  Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars).
  3156  Only applies if access_key_id and secret_access_key is blank.
  3157  Enter a boolean value (true or false). Press Enter for the default ("false").
  3158  Choose a number from below, or type in your own value
  3159   1 / Enter AWS credentials in the next step
  3160     \ "false"
  3161   2 / Get AWS credentials from the environment (env vars or IAM)
  3162     \ "true"
  3163  env_auth> 1
  3164  AWS Access Key ID.
  3165  Leave blank for anonymous access or runtime credentials.
  3166  Enter a string value. Press Enter for the default ("").
  3167  access_key_id> AKIDxxxxxxxxxx
  3168  AWS Secret Access Key (password)
  3169  Leave blank for anonymous access or runtime credentials.
  3170  Enter a string value. Press Enter for the default ("").
  3171  secret_access_key> xxxxxxxxxxx
  3172  ```
  3173  
6. Select the region, endpoint and location constraint for Qiniu Kodo. Each region has its own standard endpoint.
  3175  
  3176  ```
  3177     / The default endpoint - a good choice if you are unsure.
  3178   1 | East China Region 1.
  3179     | Needs location constraint cn-east-1.
  3180     \ (cn-east-1)
  3181     / East China Region 2.
  3182   2 | Needs location constraint cn-east-2.
  3183     \ (cn-east-2)
  3184     / North China Region 1.
  3185   3 | Needs location constraint cn-north-1.
  3186     \ (cn-north-1)
  3187     / South China Region 1.
  3188   4 | Needs location constraint cn-south-1.
  3189     \ (cn-south-1)
  3190     / North America Region.
  3191   5 | Needs location constraint us-north-1.
  3192     \ (us-north-1)
  3193     / Southeast Asia Region 1.
  3194   6 | Needs location constraint ap-southeast-1.
  3195     \ (ap-southeast-1)
  3196     / Northeast Asia Region 1.
  3197   7 | Needs location constraint ap-northeast-1.
  3198     \ (ap-northeast-1)
  3199  [snip]
region> 1
  3201  
  3202  Option endpoint.
  3203  Endpoint for Qiniu Object Storage.
  3204  Choose a number from below, or type in your own value.
  3205  Press Enter to leave empty.
  3206   1 / East China Endpoint 1
  3207     \ (s3-cn-east-1.qiniucs.com)
  3208   2 / East China Endpoint 2
  3209     \ (s3-cn-east-2.qiniucs.com)
  3210   3 / North China Endpoint 1
  3211     \ (s3-cn-north-1.qiniucs.com)
  3212   4 / South China Endpoint 1
  3213     \ (s3-cn-south-1.qiniucs.com)
  3214   5 / North America Endpoint 1
  3215     \ (s3-us-north-1.qiniucs.com)
  3216   6 / Southeast Asia Endpoint 1
  3217     \ (s3-ap-southeast-1.qiniucs.com)
  3218   7 / Northeast Asia Endpoint 1
  3219     \ (s3-ap-northeast-1.qiniucs.com)
  3220  endpoint> 1
  3221  
  3222  Option location_constraint.
  3223  Location constraint - must be set to match the Region.
  3224  Used when creating buckets only.
  3225  Choose a number from below, or type in your own value.
  3226  Press Enter to leave empty.
  3227   1 / East China Region 1
  3228     \ (cn-east-1)
  3229   2 / East China Region 2
  3230     \ (cn-east-2)
  3231   3 / North China Region 1
  3232     \ (cn-north-1)
  3233   4 / South China Region 1
  3234     \ (cn-south-1)
  3235   5 / North America Region 1
  3236     \ (us-north-1)
  3237   6 / Southeast Asia Region 1
  3238     \ (ap-southeast-1)
  3239   7 / Northeast Asia Region 1
  3240     \ (ap-northeast-1)
  3241  location_constraint> 1
  3242  ```
  3243  
7. Choose the ACL and storage class.
  3245  
  3246  ```
  3247  Note that this ACL is applied when server-side copying objects as S3
  3248  doesn't copy the ACL from the source but rather writes a fresh one.
  3249  Enter a string value. Press Enter for the default ("").
  3250  Choose a number from below, or type in your own value
  3251     / Owner gets FULL_CONTROL.
  3252   1 | No one else has access rights (default).
  3253     \ (private)
  3254     / Owner gets FULL_CONTROL.
  3255   2 | The AllUsers group gets READ access.
  3256     \ (public-read)
  3257  [snip]
  3258  acl> 2
The storage class to use when storing new objects in Qiniu.
  3260  Enter a string value. Press Enter for the default ("").
  3261  Choose a number from below, or type in your own value
  3262   1 / Standard storage class
  3263     \ (STANDARD)
  3264   2 / Infrequent access storage mode
  3265     \ (LINE)
  3266   3 / Archive storage mode
  3267     \ (GLACIER)
  3268   4 / Deep archive storage mode
  3269     \ (DEEP_ARCHIVE)
  3270  [snip]
  3271  storage_class> 1
  3272  Edit advanced config? (y/n)
  3273  y) Yes
  3274  n) No (default)
  3275  y/n> n
  3276  Remote config
  3277  --------------------
  3278  [qiniu]
  3279  - type: s3
  3280  - provider: Qiniu
  3281  - access_key_id: xxx
  3282  - secret_access_key: xxx
  3283  - region: cn-east-1
  3284  - endpoint: s3-cn-east-1.qiniucs.com
  3285  - location_constraint: cn-east-1
  3286  - acl: public-read
  3287  - storage_class: STANDARD
  3288  --------------------
  3289  y) Yes this is OK (default)
  3290  e) Edit this remote
  3291  d) Delete this remote
  3292  y/e/d> y
  3293  Current remotes:
  3294  
  3295  Name                 Type
  3296  ====                 ====
  3297  qiniu                s3
  3298  ```
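
The storage class chosen above applies to new uploads, but it can also be overridden per command with the generic `--s3-storage-class` flag, e.g. to push a one-off upload into the LINE (infrequent access) class listed above (bucket and path are placeholders):

```
rclone copy --s3-storage-class LINE /path/to/backup qiniu:my-bucket
```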
  3299  
  3300  ### RackCorp {#RackCorp}
  3301  
[RackCorp Object Storage](https://www.rackcorp.com/storage/s3storage) is an S3-compatible object storage platform from your friendly cloud provider RackCorp.
The service is fast, reliable, well priced and located in many strategic locations underserved by others, to ensure you can maintain data sovereignty.

Before you can use RackCorp Object Storage, you'll need to [sign up](https://www.rackcorp.com/signup) for an account on the [portal](https://portal.rackcorp.com).
Next you can create an access key, a secret key and buckets in your location of choice.
These details are required in the next steps of configuration, when `rclone config` asks for your `access_key_id` and `secret_access_key`.
  3308  
  3309  Your config should end up looking a bit like this:
  3310  
  3311  ```
  3312  [RCS3-demo-config]
  3313  type = s3
  3314  provider = RackCorp
  3315  env_auth = true
  3316  access_key_id = YOURACCESSKEY
  3317  secret_access_key = YOURSECRETACCESSKEY
  3318  region = au-nsw
  3319  endpoint = s3.rackcorp.com
  3320  location_constraint = au-nsw
  3321  ```
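
To check that the remote works, list your buckets (assuming you have already created at least one in the portal):

```
rclone lsd RCS3-demo-config:
```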
  3322  
  3323  ### Rclone Serve S3 {#rclone}
  3324  
Rclone can serve any remote over the S3 protocol. For details see the
[rclone serve s3](/commands/rclone_serve_s3/) documentation.
  3327  
  3328  For example, to serve `remote:path` over s3, run the server like this:
  3329  
  3330  ```
  3331  rclone serve s3 --auth-key ACCESS_KEY_ID,SECRET_ACCESS_KEY remote:path
  3332  ```
  3333  
  3334  This will be compatible with an rclone remote which is defined like this:
  3335  
  3336  ```
  3337  [serves3]
  3338  type = s3
  3339  provider = Rclone
  3340  endpoint = http://127.0.0.1:8080/
  3341  access_key_id = ACCESS_KEY_ID
  3342  secret_access_key = SECRET_ACCESS_KEY
  3343  use_multipart_uploads = false
  3344  ```
  3345  
Note that setting `use_multipart_uploads = false` is to work around
[a bug](/commands/rclone_serve_s3/#bugs) which will be fixed in due course.
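
As an end-to-end sketch (the key pair and paths are placeholders): serve a local directory in one terminal, then use the `serves3` remote defined above from another. Each top-level directory of the served path appears as a bucket:

```
# terminal 1: serve /tmp/data over the S3 protocol on 127.0.0.1:8080
rclone serve s3 --auth-key ACCESS_KEY_ID,SECRET_ACCESS_KEY /tmp/data

# terminal 2: list "buckets" and copy files in
rclone lsd serves3:
rclone copy /path/to/files serves3:somedir
```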
  3348  
  3349  ### Scaleway
  3350  
[Scaleway](https://www.scaleway.com/object-storage/) Object Storage allows you to store anything from backups, logs and web assets to documents and photos.
Files can be uploaded from the Scaleway console or transferred through the API, the CLI, or any S3-compatible tool.
  3353  
  3354  Scaleway provides an S3 interface which can be configured for use with rclone like this:
  3355  
  3356  ```
  3357  [scaleway]
  3358  type = s3
  3359  provider = Scaleway
  3360  env_auth = false
  3361  endpoint = s3.nl-ams.scw.cloud
  3362  access_key_id = SCWXXXXXXXXXXXXXX
  3363  secret_access_key = 1111111-2222-3333-44444-55555555555555
  3364  region = nl-ams
  3365  location_constraint = nl-ams
  3366  acl = private
  3367  upload_cutoff = 5M
  3368  chunk_size = 5M
  3369  copy_cutoff = 5M
  3370  ```
  3371  
[C14 Cold Storage](https://www.online.net/en/storage/c14-cold-storage) is the low-cost S3 Glacier alternative from Scaleway, and it works the same way as on S3 by accepting the "GLACIER" `storage_class`.
So you can configure your remote with the `storage_class = GLACIER` option to upload directly to C14. Note that in this state you can't read files back; you will need to restore them to the "STANDARD" storage class first before being able to read them (see the "restore" section above).
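
A minimal sketch, assuming the `scaleway` remote above (bucket, path and restore parameters are illustrative): the storage class can also be overridden per transfer, and an archived object must be restored before it can be read back:

```
# upload straight to C14 (GLACIER class)
rclone copy --s3-storage-class GLACIER /path/to/backup scaleway:my-bucket

# later, restore to STANDARD so it can be read, keeping it restored for 1 day
rclone backend restore scaleway:my-bucket/backup -o priority=Standard -o lifetime=1
```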
  3374  
  3375  ### Seagate Lyve Cloud {#lyve}
  3376  
  3377  [Seagate Lyve Cloud](https://www.seagate.com/gb/en/services/cloud/storage/) is an S3
  3378  compatible object storage platform from [Seagate](https://seagate.com/) intended for enterprise use.
  3379  
  3380  Here is a config run through for a remote called `remote` - you may
  3381  choose a different name of course. Note that to create an access key
  3382  and secret key you will need to create a service account first.
  3383  
  3384  ```
  3385  $ rclone config
  3386  No remotes found, make a new one?
  3387  n) New remote
  3388  s) Set configuration password
  3389  q) Quit config
  3390  n/s/q> n
  3391  name> remote
  3392  ```
  3393  
  3394  Choose `s3` backend
  3395  
  3396  ```
  3397  Type of storage to configure.
  3398  Choose a number from below, or type in your own value.
  3399  [snip]
  3400  XX / Amazon S3 Compliant Storage Providers including AWS, ...
  3401     \ (s3)
  3402  [snip]
  3403  Storage> s3
  3404  ```
  3405  
  3406  Choose `LyveCloud` as S3 provider
  3407  
  3408  ```
  3409  Choose your S3 provider.
  3410  Choose a number from below, or type in your own value.
  3411  Press Enter to leave empty.
  3412  [snip]
  3413  XX / Seagate Lyve Cloud
  3414     \ (LyveCloud)
  3415  [snip]
  3416  provider> LyveCloud
  3417  ```
  3418  
Take the default (just press Enter) to enter the access key and secret key in the config file.
  3420  
  3421  ```
  3422  Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars).
  3423  Only applies if access_key_id and secret_access_key is blank.
  3424  Choose a number from below, or type in your own boolean value (true or false).
  3425  Press Enter for the default (false).
  3426   1 / Enter AWS credentials in the next step.
  3427     \ (false)
  3428   2 / Get AWS credentials from the environment (env vars or IAM).
  3429     \ (true)
  3430  env_auth>
  3431  ```
  3432  
  3433  ```
  3434  AWS Access Key ID.
  3435  Leave blank for anonymous access or runtime credentials.
  3436  Enter a value. Press Enter to leave empty.
  3437  access_key_id> XXX
  3438  ```
  3439  
  3440  ```
  3441  AWS Secret Access Key (password).
  3442  Leave blank for anonymous access or runtime credentials.
  3443  Enter a value. Press Enter to leave empty.
  3444  secret_access_key> YYY
  3445  ```
  3446  
  3447  Leave region blank
  3448  
  3449  ```
  3450  Region to connect to.
  3451  Leave blank if you are using an S3 clone and you don't have a region.
  3452  Choose a number from below, or type in your own value.
  3453  Press Enter to leave empty.
  3454     / Use this if unsure.
  3455   1 | Will use v4 signatures and an empty region.
  3456     \ ()
  3457     / Use this only if v4 signatures don't work.
  3458   2 | E.g. pre Jewel/v10 CEPH.
  3459     \ (other-v2-signature)
  3460  region>
  3461  ```
  3462  
  3463  Choose an endpoint from the list
  3464  
  3465  ```
  3466  Endpoint for S3 API.
  3467  Required when using an S3 clone.
  3468  Choose a number from below, or type in your own value.
  3469  Press Enter to leave empty.
  3470   1 / Seagate Lyve Cloud US East 1 (Virginia)
  3471     \ (s3.us-east-1.lyvecloud.seagate.com)
  3472   2 / Seagate Lyve Cloud US West 1 (California)
  3473     \ (s3.us-west-1.lyvecloud.seagate.com)
  3474   3 / Seagate Lyve Cloud AP Southeast 1 (Singapore)
  3475     \ (s3.ap-southeast-1.lyvecloud.seagate.com)
  3476  endpoint> 1
  3477  ```
  3478  
  3479  Leave location constraint blank
  3480  
  3481  ```
  3482  Location constraint - must be set to match the Region.
  3483  Leave blank if not sure. Used when creating buckets only.
  3484  Enter a value. Press Enter to leave empty.
  3485  location_constraint>
  3486  ```
  3487  
  3488  Choose default ACL (`private`).
  3489  
  3490  ```
  3491  Canned ACL used when creating buckets and storing or copying objects.
  3492  This ACL is used for creating objects and if bucket_acl isn't set, for creating buckets too.
  3493  For more info visit https://docs.aws.amazon.com/AmazonS3/latest/dev/acl-overview.html#canned-acl
  3494  Note that this ACL is applied when server-side copying objects as S3
  3495  doesn't copy the ACL from the source but rather writes a fresh one.
  3496  Choose a number from below, or type in your own value.
  3497  Press Enter to leave empty.
  3498     / Owner gets FULL_CONTROL.
  3499   1 | No one else has access rights (default).
  3500     \ (private)
  3501  [snip]
  3502  acl>
  3503  ```
  3504  
  3505  And the config file should end up looking like this:
  3506  
  3507  ```
  3508  [remote]
  3509  type = s3
  3510  provider = LyveCloud
  3511  access_key_id = XXX
  3512  secret_access_key = YYY
  3513  endpoint = s3.us-east-1.lyvecloud.seagate.com
  3514  ```
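
Once set up you can use the remote like any other, for example to create a bucket and copy files into it:

```
rclone mkdir remote:bucket
rclone copy /path/to/files remote:bucket
```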
  3515  
  3516  ### SeaweedFS
  3517  
[SeaweedFS](https://github.com/chrislusf/seaweedfs/) is a distributed storage system for
blobs, objects, files, and data lakes, with O(1) disk seeks and a scalable file metadata store.
It has an S3-compatible object storage interface. SeaweedFS can also act as a
[gateway to remote S3-compatible object stores](https://github.com/chrislusf/seaweedfs/wiki/Gateway-to-Remote-Object-Storage)
to cache data and metadata with asynchronous write-back, for fast local access and minimized access cost.
  3523  
Assuming SeaweedFS is configured with `weed shell` as follows:
  3525  ```
  3526  > s3.bucket.create -name foo
  3527  > s3.configure -access_key=any -secret_key=any -buckets=foo -user=me -actions=Read,Write,List,Tagging,Admin -apply
  3528  {
  3529    "identities": [
  3530      {
  3531        "name": "me",
  3532        "credentials": [
  3533          {
  3534            "accessKey": "any",
  3535            "secretKey": "any"
  3536          }
  3537        ],
  3538        "actions": [
  3539          "Read:foo",
  3540          "Write:foo",
  3541          "List:foo",
  3542          "Tagging:foo",
  3543          "Admin:foo"
  3544        ]
  3545      }
  3546    ]
  3547  }
  3548  ```
  3549  
To use rclone with SeaweedFS, the above configuration should end up with something like this in
your rclone config:
  3552  
  3553  ```
  3554  [seaweedfs_s3]
  3555  type = s3
  3556  provider = SeaweedFS
  3557  access_key_id = any
  3558  secret_access_key = any
  3559  endpoint = localhost:8333
  3560  ```
  3561  
  3562  So once set up, for example to copy files into a bucket
  3563  
  3564  ```
  3565  rclone copy /path/to/files seaweedfs_s3:foo
  3566  ```
  3567  
  3568  ### Wasabi
  3569  
  3570  [Wasabi](https://wasabi.com) is a cloud-based object storage service for a
  3571  broad range of applications and use cases. Wasabi is designed for
  3572  individuals and organizations that require a high-performance,
  3573  reliable, and secure data storage infrastructure at minimal cost.
  3574  
  3575  Wasabi provides an S3 interface which can be configured for use with
  3576  rclone like this.
  3577  
  3578  ```
  3579  No remotes found, make a new one?
  3580  n) New remote
  3581  s) Set configuration password
  3582  n/s> n
  3583  name> wasabi
  3584  Type of storage to configure.
  3585  Choose a number from below, or type in your own value
  3586  [snip]
  3587  XX / Amazon S3 (also Dreamhost, Ceph, ChinaMobile, ArvanCloud, Minio, Liara)
  3588     \ "s3"
  3589  [snip]
  3590  Storage> s3
  3591  Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars). Only applies if access_key_id and secret_access_key is blank.
  3592  Choose a number from below, or type in your own value
  3593   1 / Enter AWS credentials in the next step
  3594     \ "false"
  3595   2 / Get AWS credentials from the environment (env vars or IAM)
  3596     \ "true"
  3597  env_auth> 1
  3598  AWS Access Key ID - leave blank for anonymous access or runtime credentials.
  3599  access_key_id> YOURACCESSKEY
  3600  AWS Secret Access Key (password) - leave blank for anonymous access or runtime credentials.
  3601  secret_access_key> YOURSECRETACCESSKEY
  3602  Region to connect to.
  3603  Choose a number from below, or type in your own value
  3604     / The default endpoint - a good choice if you are unsure.
  3605   1 | US Region, Northern Virginia, or Pacific Northwest.
  3606     | Leave location constraint empty.
  3607     \ "us-east-1"
  3608  [snip]
  3609  region> us-east-1
  3610  Endpoint for S3 API.
  3611  Leave blank if using AWS to use the default endpoint for the region.
  3612  Specify if using an S3 clone such as Ceph.
  3613  endpoint> s3.wasabisys.com
  3614  Location constraint - must be set to match the Region. Used when creating buckets only.
  3615  Choose a number from below, or type in your own value
  3616   1 / Empty for US Region, Northern Virginia, or Pacific Northwest.
  3617     \ ""
  3618  [snip]
  3619  location_constraint>
  3620  Canned ACL used when creating buckets and/or storing objects in S3.
  3621  For more info visit https://docs.aws.amazon.com/AmazonS3/latest/dev/acl-overview.html#canned-acl
  3622  Choose a number from below, or type in your own value
  3623   1 / Owner gets FULL_CONTROL. No one else has access rights (default).
  3624     \ "private"
  3625  [snip]
  3626  acl>
  3627  The server-side encryption algorithm used when storing this object in S3.
  3628  Choose a number from below, or type in your own value
  3629   1 / None
  3630     \ ""
  3631   2 / AES256
  3632     \ "AES256"
  3633  server_side_encryption>
  3634  The storage class to use when storing objects in S3.
  3635  Choose a number from below, or type in your own value
  3636   1 / Default
  3637     \ ""
  3638   2 / Standard storage class
  3639     \ "STANDARD"
  3640   3 / Reduced redundancy storage class
  3641     \ "REDUCED_REDUNDANCY"
  3642   4 / Standard Infrequent Access storage class
  3643     \ "STANDARD_IA"
  3644  storage_class>
  3645  Remote config
  3646  --------------------
  3647  [wasabi]
  3648  env_auth = false
  3649  access_key_id = YOURACCESSKEY
  3650  secret_access_key = YOURSECRETACCESSKEY
  3651  region = us-east-1
  3652  endpoint = s3.wasabisys.com
  3653  location_constraint =
  3654  acl =
  3655  server_side_encryption =
  3656  storage_class =
  3657  --------------------
  3658  y) Yes this is OK
  3659  e) Edit this remote
  3660  d) Delete this remote
  3661  y/e/d> y
  3662  ```
  3663  
  3664  This will leave the config file looking like this.
  3665  
  3666  ```
  3667  [wasabi]
  3668  type = s3
  3669  provider = Wasabi
  3670  env_auth = false
  3671  access_key_id = YOURACCESSKEY
  3672  secret_access_key = YOURSECRETACCESSKEY
  3673  region =
  3674  endpoint = s3.wasabisys.com
  3675  location_constraint =
  3676  acl =
  3677  server_side_encryption =
  3678  storage_class =
  3679  ```
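
Wasabi also has regional endpoints. If your buckets live in another region, set `region` and `endpoint` to match. A sketch for a remote in Wasabi's `eu-central-1` region (the endpoint host is assumed here from Wasabi's naming scheme, so check it against your Wasabi console):

```
[wasabi-eu]
type = s3
provider = Wasabi
access_key_id = YOURACCESSKEY
secret_access_key = YOURSECRETACCESSKEY
region = eu-central-1
endpoint = s3.eu-central-1.wasabisys.com
```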
  3680  
  3681  ### Alibaba OSS {#alibaba-oss}
  3682  
  3683  Here is an example of making an [Alibaba Cloud (Aliyun) OSS](https://www.alibabacloud.com/product/oss/)
  3684  configuration.  First run:
  3685  
  3686      rclone config
  3687  
  3688  This will guide you through an interactive setup process.
  3689  
  3690  ```
  3691  No remotes found, make a new one?
  3692  n) New remote
  3693  s) Set configuration password
  3694  q) Quit config
  3695  n/s/q> n
  3696  name> oss
  3697  Type of storage to configure.
  3698  Enter a string value. Press Enter for the default ("").
  3699  Choose a number from below, or type in your own value
  3700  [snip]
  3701  XX / Amazon S3 Compliant Storage Providers including AWS, ...
  3702     \ "s3"
  3703  [snip]
  3704  Storage> s3
  3705  Choose your S3 provider.
  3706  Enter a string value. Press Enter for the default ("").
  3707  Choose a number from below, or type in your own value
  3708   1 / Amazon Web Services (AWS) S3
  3709     \ "AWS"
  3710   2 / Alibaba Cloud Object Storage System (OSS) formerly Aliyun
  3711     \ "Alibaba"
  3712   3 / Ceph Object Storage
  3713     \ "Ceph"
  3714  [snip]
  3715  provider> Alibaba
  3716  Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars).
  3717  Only applies if access_key_id and secret_access_key is blank.
  3718  Enter a boolean value (true or false). Press Enter for the default ("false").
  3719  Choose a number from below, or type in your own value
  3720   1 / Enter AWS credentials in the next step
  3721     \ "false"
  3722   2 / Get AWS credentials from the environment (env vars or IAM)
  3723     \ "true"
  3724  env_auth> 1
  3725  AWS Access Key ID.
  3726  Leave blank for anonymous access or runtime credentials.
  3727  Enter a string value. Press Enter for the default ("").
  3728  access_key_id> accesskeyid
  3729  AWS Secret Access Key (password)
  3730  Leave blank for anonymous access or runtime credentials.
  3731  Enter a string value. Press Enter for the default ("").
  3732  secret_access_key> secretaccesskey
  3733  Endpoint for OSS API.
  3734  Enter a string value. Press Enter for the default ("").
  3735  Choose a number from below, or type in your own value
  3736   1 / East China 1 (Hangzhou)
  3737     \ "oss-cn-hangzhou.aliyuncs.com"
  3738   2 / East China 2 (Shanghai)
  3739     \ "oss-cn-shanghai.aliyuncs.com"
  3740   3 / North China 1 (Qingdao)
  3741     \ "oss-cn-qingdao.aliyuncs.com"
  3742  [snip]
  3743  endpoint> 1
  3744  Canned ACL used when creating buckets and storing or copying objects.
  3745  
  3746  Note that this ACL is applied when server-side copying objects as S3
  3747  doesn't copy the ACL from the source but rather writes a fresh one.
  3748  Enter a string value. Press Enter for the default ("").
  3749  Choose a number from below, or type in your own value
  3750   1 / Owner gets FULL_CONTROL. No one else has access rights (default).
  3751     \ "private"
  3752   2 / Owner gets FULL_CONTROL. The AllUsers group gets READ access.
  3753     \ "public-read"
  3754     / Owner gets FULL_CONTROL. The AllUsers group gets READ and WRITE access.
  3755  [snip]
  3756  acl> 1
  3757  The storage class to use when storing new objects in OSS.
  3758  Enter a string value. Press Enter for the default ("").
  3759  Choose a number from below, or type in your own value
  3760   1 / Default
  3761     \ ""
  3762   2 / Standard storage class
  3763     \ "STANDARD"
  3764   3 / Archive storage mode.
  3765     \ "GLACIER"
  3766   4 / Infrequent access storage mode.
  3767     \ "STANDARD_IA"
  3768  storage_class> 1
  3769  Edit advanced config? (y/n)
  3770  y) Yes
  3771  n) No
  3772  y/n> n
  3773  Remote config
  3774  --------------------
  3775  [oss]
  3776  type = s3
  3777  provider = Alibaba
  3778  env_auth = false
  3779  access_key_id = accesskeyid
  3780  secret_access_key = secretaccesskey
  3781  endpoint = oss-cn-hangzhou.aliyuncs.com
  3782  acl = private
  3783  storage_class = Standard
  3784  --------------------
  3785  y) Yes this is OK
  3786  e) Edit this remote
  3787  d) Delete this remote
  3788  y/e/d> y
  3789  ```
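
Once configured, normal rclone commands work as usual. For example, S3-compatible remotes such as OSS can generate a time-limited presigned URL for an object (bucket and file names are placeholders):

```
rclone link --expire 1h oss:my-bucket/file.txt
```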
  3790  
  3791  ### China Mobile Ecloud Elastic Object Storage (EOS) {#china-mobile-ecloud-eos}
  3792  
Here is an example of making a [China Mobile Ecloud Elastic Object Storage (EOS)](https://ecloud.10086.cn/home/product-introduction/eos/)
configuration.  First run:
  3795  
  3796      rclone config
  3797  
  3798  This will guide you through an interactive setup process.
  3799  
  3800  ```
  3801  No remotes found, make a new one?
  3802  n) New remote
  3803  s) Set configuration password
  3804  q) Quit config
  3805  n/s/q> n
  3806  name> ChinaMobile
  3807  Option Storage.
  3808  Type of storage to configure.
  3809  Choose a number from below, or type in your own value.
  3810   ...
  3811  XX / Amazon S3 Compliant Storage Providers including AWS, ...
  3812     \ (s3)
  3813   ...
  3814  Storage> s3
  3815  Option provider.
  3816  Choose your S3 provider.
  3817  Choose a number from below, or type in your own value.
  3818  Press Enter to leave empty.
  3819   ...
  3820   4 / China Mobile Ecloud Elastic Object Storage (EOS)
  3821     \ (ChinaMobile)
  3822   ...
  3823  provider> ChinaMobile
  3824  Option env_auth.
  3825  Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars).
  3826  Only applies if access_key_id and secret_access_key is blank.
  3827  Choose a number from below, or type in your own boolean value (true or false).
  3828  Press Enter for the default (false).
  3829   1 / Enter AWS credentials in the next step.
  3830     \ (false)
  3831   2 / Get AWS credentials from the environment (env vars or IAM).
  3832     \ (true)
  3833  env_auth>
  3834  Option access_key_id.
  3835  AWS Access Key ID.
  3836  Leave blank for anonymous access or runtime credentials.
  3837  Enter a value. Press Enter to leave empty.
  3838  access_key_id> accesskeyid
  3839  Option secret_access_key.
  3840  AWS Secret Access Key (password).
  3841  Leave blank for anonymous access or runtime credentials.
  3842  Enter a value. Press Enter to leave empty.
  3843  secret_access_key> secretaccesskey
  3844  Option endpoint.
  3845  Endpoint for China Mobile Ecloud Elastic Object Storage (EOS) API.
  3846  Choose a number from below, or type in your own value.
  3847  Press Enter to leave empty.
  3848     / The default endpoint - a good choice if you are unsure.
  3849   1 | East China (Suzhou)
  3850     \ (eos-wuxi-1.cmecloud.cn)
  3851   2 / East China (Jinan)
  3852     \ (eos-jinan-1.cmecloud.cn)
  3853   3 / East China (Hangzhou)
  3854     \ (eos-ningbo-1.cmecloud.cn)
  3855   4 / East China (Shanghai-1)
  3856     \ (eos-shanghai-1.cmecloud.cn)
  3857   5 / Central China (Zhengzhou)
  3858     \ (eos-zhengzhou-1.cmecloud.cn)
  3859   6 / Central China (Changsha-1)
  3860     \ (eos-hunan-1.cmecloud.cn)
  3861   7 / Central China (Changsha-2)
  3862     \ (eos-zhuzhou-1.cmecloud.cn)
  3863   8 / South China (Guangzhou-2)
  3864     \ (eos-guangzhou-1.cmecloud.cn)
  3865   9 / South China (Guangzhou-3)
  3866     \ (eos-dongguan-1.cmecloud.cn)
  3867  10 / North China (Beijing-1)
  3868     \ (eos-beijing-1.cmecloud.cn)
  3869  11 / North China (Beijing-2)
  3870     \ (eos-beijing-2.cmecloud.cn)
  3871  12 / North China (Beijing-3)
  3872     \ (eos-beijing-4.cmecloud.cn)
  3873  13 / North China (Huhehaote)
  3874     \ (eos-huhehaote-1.cmecloud.cn)
  3875  14 / Southwest China (Chengdu)
  3876     \ (eos-chengdu-1.cmecloud.cn)
  3877  15 / Southwest China (Chongqing)
  3878     \ (eos-chongqing-1.cmecloud.cn)
  3879  16 / Southwest China (Guiyang)
  3880     \ (eos-guiyang-1.cmecloud.cn)
17 / Northwest China (Xian)
  3882     \ (eos-xian-1.cmecloud.cn)
  3883  18 / Yunnan China (Kunming)
  3884     \ (eos-yunnan.cmecloud.cn)
  3885  19 / Yunnan China (Kunming-2)
  3886     \ (eos-yunnan-2.cmecloud.cn)
  3887  20 / Tianjin China (Tianjin)
  3888     \ (eos-tianjin-1.cmecloud.cn)
  3889  21 / Jilin China (Changchun)
  3890     \ (eos-jilin-1.cmecloud.cn)
  3891  22 / Hubei China (Xiangyan)
  3892     \ (eos-hubei-1.cmecloud.cn)
  3893  23 / Jiangxi China (Nanchang)
  3894     \ (eos-jiangxi-1.cmecloud.cn)
  3895  24 / Gansu China (Lanzhou)
  3896     \ (eos-gansu-1.cmecloud.cn)
  3897  25 / Shanxi China (Taiyuan)
  3898     \ (eos-shanxi-1.cmecloud.cn)
  3899  26 / Liaoning China (Shenyang)
  3900     \ (eos-liaoning-1.cmecloud.cn)
  3901  27 / Hebei China (Shijiazhuang)
  3902     \ (eos-hebei-1.cmecloud.cn)
  3903  28 / Fujian China (Xiamen)
  3904     \ (eos-fujian-1.cmecloud.cn)
  3905  29 / Guangxi China (Nanning)
  3906     \ (eos-guangxi-1.cmecloud.cn)
  3907  30 / Anhui China (Huainan)
  3908     \ (eos-anhui-1.cmecloud.cn)
  3909  endpoint> 1
  3910  Option location_constraint.
  3911  Location constraint - must match endpoint.
  3912  Used when creating buckets only.
  3913  Choose a number from below, or type in your own value.
  3914  Press Enter to leave empty.
  3915   1 / East China (Suzhou)
  3916     \ (wuxi1)
  3917   2 / East China (Jinan)
  3918     \ (jinan1)
  3919   3 / East China (Hangzhou)
  3920     \ (ningbo1)
  3921   4 / East China (Shanghai-1)
  3922     \ (shanghai1)
  3923   5 / Central China (Zhengzhou)
  3924     \ (zhengzhou1)
  3925   6 / Central China (Changsha-1)
  3926     \ (hunan1)
  3927   7 / Central China (Changsha-2)
  3928     \ (zhuzhou1)
  3929   8 / South China (Guangzhou-2)
  3930     \ (guangzhou1)
  3931   9 / South China (Guangzhou-3)
  3932     \ (dongguan1)
  3933  10 / North China (Beijing-1)
  3934     \ (beijing1)
  3935  11 / North China (Beijing-2)
  3936     \ (beijing2)
  3937  12 / North China (Beijing-3)
  3938     \ (beijing4)
  3939  13 / North China (Huhehaote)
  3940     \ (huhehaote1)
  3941  14 / Southwest China (Chengdu)
  3942     \ (chengdu1)
  3943  15 / Southwest China (Chongqing)
  3944     \ (chongqing1)
  3945  16 / Southwest China (Guiyang)
  3946     \ (guiyang1)
17 / Northwest China (Xian)
  3948     \ (xian1)
  3949  18 / Yunnan China (Kunming)
  3950     \ (yunnan)
  3951  19 / Yunnan China (Kunming-2)
  3952     \ (yunnan2)
  3953  20 / Tianjin China (Tianjin)
  3954     \ (tianjin1)
  3955  21 / Jilin China (Changchun)
  3956     \ (jilin1)
  3957  22 / Hubei China (Xiangyan)
  3958     \ (hubei1)
  3959  23 / Jiangxi China (Nanchang)
  3960     \ (jiangxi1)
  3961  24 / Gansu China (Lanzhou)
  3962     \ (gansu1)
  3963  25 / Shanxi China (Taiyuan)
  3964     \ (shanxi1)
  3965  26 / Liaoning China (Shenyang)
  3966     \ (liaoning1)
  3967  27 / Hebei China (Shijiazhuang)
  3968     \ (hebei1)
  3969  28 / Fujian China (Xiamen)
  3970     \ (fujian1)
  3971  29 / Guangxi China (Nanning)
  3972     \ (guangxi1)
  3973  30 / Anhui China (Huainan)
  3974     \ (anhui1)
  3975  location_constraint> 1
  3976  Option acl.
  3977  Canned ACL used when creating buckets and storing or copying objects.
  3978  This ACL is used for creating objects and if bucket_acl isn't set, for creating buckets too.
  3979  For more info visit https://docs.aws.amazon.com/AmazonS3/latest/dev/acl-overview.html#canned-acl
  3980  Note that this ACL is applied when server-side copying objects as S3
  3981  doesn't copy the ACL from the source but rather writes a fresh one.
  3982  Choose a number from below, or type in your own value.
  3983  Press Enter to leave empty.
  3984     / Owner gets FULL_CONTROL.
  3985   1 | No one else has access rights (default).
  3986     \ (private)
  3987     / Owner gets FULL_CONTROL.
  3988   2 | The AllUsers group gets READ access.
  3989     \ (public-read)
  3990     / Owner gets FULL_CONTROL.
  3991   3 | The AllUsers group gets READ and WRITE access.
  3992     | Granting this on a bucket is generally not recommended.
  3993     \ (public-read-write)
  3994     / Owner gets FULL_CONTROL.
  3995   4 | The AuthenticatedUsers group gets READ access.
  3996     \ (authenticated-read)
  3997     / Object owner gets FULL_CONTROL.
  3998  acl> private
  3999  Option server_side_encryption.
  4000  The server-side encryption algorithm used when storing this object in S3.
  4001  Choose a number from below, or type in your own value.
  4002  Press Enter to leave empty.
  4003   1 / None
  4004     \ ()
  4005   2 / AES256
  4006     \ (AES256)
  4007  server_side_encryption>
  4008  Option storage_class.
  4009  The storage class to use when storing new objects in ChinaMobile.
  4010  Choose a number from below, or type in your own value.
  4011  Press Enter to leave empty.
  4012   1 / Default
  4013     \ ()
  4014   2 / Standard storage class
  4015     \ (STANDARD)
  4016   3 / Archive storage mode
  4017     \ (GLACIER)
  4018   4 / Infrequent access storage mode
  4019     \ (STANDARD_IA)
  4020  storage_class>
  4021  Edit advanced config?
  4022  y) Yes
  4023  n) No (default)
  4024  y/n> n
  4025  --------------------
  4026  [ChinaMobile]
  4027  type = s3
  4028  provider = ChinaMobile
  4029  access_key_id = accesskeyid
  4030  secret_access_key = secretaccesskey
  4031  endpoint = eos-wuxi-1.cmecloud.cn
  4032  location_constraint = wuxi1
  4033  acl = private
  4034  --------------------
  4035  y) Yes this is OK (default)
  4036  e) Edit this remote
  4037  d) Delete this remote
  4038  y/e/d> y
  4039  ```
  4040  
  4041  ### Leviia Cloud Object Storage {#leviia}
  4042  
[Leviia Object Storage](https://www.leviia.com/object-storage/) lets you back up and secure your data in a 100% French cloud, independent of GAFAM.
  4044  
  4045  To configure access to Leviia, follow the steps below:
  4046  
  4047  1. Run `rclone config` and select `n` for a new remote.
  4048  
  4049  ```
  4050  rclone config
  4051  No remotes found, make a new one?
  4052  n) New remote
  4053  s) Set configuration password
  4054  q) Quit config
  4055  n/s/q> n
  4056  ```
  4057  
  4058  2. Give the name of the configuration. For example, name it 'leviia'.
  4059  
  4060  ```
  4061  name> leviia
  4062  ```
  4063  
  4064  3. Select `s3` storage.
  4065  
  4066  ```
  4067  Choose a number from below, or type in your own value
  4068  [snip]
  4069  XX / Amazon S3 Compliant Storage Providers including AWS, ...
  4070     \ (s3)
  4071  [snip]
  4072  Storage> s3
  4073  ```
  4074  
  4075  4. Select `Leviia` provider.
  4076  ```
  4077  Choose a number from below, or type in your own value
  4078  1 / Amazon Web Services (AWS) S3
  4079     \ "AWS"
  4080  [snip]
  4081  15 / Leviia Object Storage
  4082     \ (Leviia)
  4083  [snip]
  4084  provider> Leviia
  4085  ```
  4086  
5. Enter your access key ID and secret key for Leviia.
  4088  
  4089  ```
  4090  Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars).
  4091  Only applies if access_key_id and secret_access_key is blank.
  4092  Enter a boolean value (true or false). Press Enter for the default ("false").
  4093  Choose a number from below, or type in your own value
  4094   1 / Enter AWS credentials in the next step
  4095     \ "false"
  4096   2 / Get AWS credentials from the environment (env vars or IAM)
  4097     \ "true"
  4098  env_auth> 1
  4099  AWS Access Key ID.
  4100  Leave blank for anonymous access or runtime credentials.
  4101  Enter a string value. Press Enter for the default ("").
  4102  access_key_id> ZnIx.xxxxxxxxxxxxxxx
  4103  AWS Secret Access Key (password)
  4104  Leave blank for anonymous access or runtime credentials.
  4105  Enter a string value. Press Enter for the default ("").
  4106  secret_access_key> xxxxxxxxxxx
  4107  ```
  4108  
  4109  6. Select endpoint for Leviia.
  4110  
  4111  ```
  4112     / The default endpoint
  4113   1 | Leviia.
  4114     \ (s3.leviia.com)
  4115  [snip]
  4116  endpoint> 1
  4117  ```
7. Choose the ACL.
  4119  
  4120  ```
  4121  Note that this ACL is applied when server-side copying objects as S3
  4122  doesn't copy the ACL from the source but rather writes a fresh one.
  4123  Enter a string value. Press Enter for the default ("").
  4124  Choose a number from below, or type in your own value
  4125     / Owner gets FULL_CONTROL.
  4126   1 | No one else has access rights (default).
  4127     \ (private)
  4128     / Owner gets FULL_CONTROL.
  4129   2 | The AllUsers group gets READ access.
  4130     \ (public-read)
  4131  [snip]
  4132  acl> 1
  4133  Edit advanced config? (y/n)
  4134  y) Yes
  4135  n) No (default)
  4136  y/n> n
  4137  Remote config
  4138  --------------------
  4139  [leviia]
  4140  - type: s3
  4141  - provider: Leviia
  4142  - access_key_id: ZnIx.xxxxxxx
  4143  - secret_access_key: xxxxxxxx
  4144  - endpoint: s3.leviia.com
  4145  - acl: private
  4146  --------------------
  4147  y) Yes this is OK (default)
  4148  e) Edit this remote
  4149  d) Delete this remote
  4150  y/e/d> y
  4151  Current remotes:
  4152  
  4153  Name                 Type
  4154  ====                 ====
  4155  leviia                s3
  4156  ```
  4157  
  4158  ### Liara {#liara-cloud}
  4159  
  4160  Here is an example of making a [Liara Object Storage](https://liara.ir/landing/object-storage)
  4161  configuration.  First run:
  4162  
  4163      rclone config
  4164  
  4165  This will guide you through an interactive setup process.
  4166  
  4167  ```
  4168  No remotes found, make a new one?
  4169  n) New remote
  4170  s) Set configuration password
  4171  n/s> n
  4172  name> Liara
  4173  Type of storage to configure.
  4174  Choose a number from below, or type in your own value
  4175  [snip]
  4176  XX / Amazon S3 (also Dreamhost, Ceph, ChinaMobile, ArvanCloud, Liara, Minio)
  4177     \ "s3"
  4178  [snip]
  4179  Storage> s3
  4180  Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars). Only applies if access_key_id and secret_access_key is blank.
  4181  Choose a number from below, or type in your own value
  4182   1 / Enter AWS credentials in the next step
  4183     \ "false"
  4184   2 / Get AWS credentials from the environment (env vars or IAM)
  4185     \ "true"
  4186  env_auth> 1
  4187  AWS Access Key ID - leave blank for anonymous access or runtime credentials.
  4188  access_key_id> YOURACCESSKEY
  4189  AWS Secret Access Key (password) - leave blank for anonymous access or runtime credentials.
  4190  secret_access_key> YOURSECRETACCESSKEY
  4191  Region to connect to.
  4192  Choose a number from below, or type in your own value
  4193     / The default endpoint
  4194   1 | US Region, Northern Virginia, or Pacific Northwest.
  4195     | Leave location constraint empty.
  4196     \ "us-east-1"
  4197  [snip]
  4198  region>
  4199  Endpoint for S3 API.
  4200  Leave blank if using Liara to use the default endpoint for the region.
  4201  Specify if using an S3 clone such as Ceph.
  4202  endpoint> storage.iran.liara.space
  4203  Canned ACL used when creating buckets and/or storing objects in S3.
  4204  For more info visit https://docs.aws.amazon.com/AmazonS3/latest/dev/acl-overview.html#canned-acl
  4205  Choose a number from below, or type in your own value
  4206   1 / Owner gets FULL_CONTROL. No one else has access rights (default).
  4207     \ "private"
  4208  [snip]
  4209  acl>
  4210  The server-side encryption algorithm used when storing this object in S3.
  4211  Choose a number from below, or type in your own value
  4212   1 / None
  4213     \ ""
  4214   2 / AES256
  4215     \ "AES256"
  4216  server_side_encryption>
  4217  The storage class to use when storing objects in S3.
  4218  Choose a number from below, or type in your own value
  4219   1 / Default
  4220     \ ""
  4221   2 / Standard storage class
  4222     \ "STANDARD"
  4223  storage_class>
  4224  Remote config
  4225  --------------------
  4226  [Liara]
  4227  env_auth = false
  4228  access_key_id = YOURACCESSKEY
  4229  secret_access_key = YOURSECRETACCESSKEY
  4230  endpoint = storage.iran.liara.space
  4231  location_constraint =
  4232  acl =
  4233  server_side_encryption =
  4234  storage_class =
  4235  --------------------
  4236  y) Yes this is OK
  4237  e) Edit this remote
  4238  d) Delete this remote
  4239  y/e/d> y
  4240  ```
  4241  
  4242  This will leave the config file looking like this.
  4243  
  4244  ```
  4245  [Liara]
  4246  type = s3
  4247  provider = Liara
  4248  env_auth = false
  4249  access_key_id = YOURACCESSKEY
  4250  secret_access_key = YOURSECRETACCESSKEY
  4251  region =
  4252  endpoint = storage.iran.liara.space
  4253  location_constraint =
  4254  acl =
  4255  server_side_encryption =
  4256  storage_class =
  4257  ```
  4258  
  4259  ### Linode {#linode}
  4260  
  4261  Here is an example of making a [Linode Object Storage](https://www.linode.com/products/object-storage/)
  4262  configuration.  First run:
  4263  
  4264      rclone config
  4265  
  4266  This will guide you through an interactive setup process.
  4267  
  4268  ```
  4269  No remotes found, make a new one?
  4270  n) New remote
  4271  s) Set configuration password
  4272  q) Quit config
  4273  n/s/q> n
  4274  
  4275  Enter name for new remote.
  4276  name> linode
  4277  
  4278  Option Storage.
  4279  Type of storage to configure.
  4280  Choose a number from below, or type in your own value.
  4281  [snip]
  4282  XX / Amazon S3 Compliant Storage Providers including AWS, ...Linode, ...and others
  4283     \ (s3)
  4284  [snip]
  4285  Storage> s3
  4286  
  4287  Option provider.
  4288  Choose your S3 provider.
  4289  Choose a number from below, or type in your own value.
  4290  Press Enter to leave empty.
  4291  [snip]
  4292  XX / Linode Object Storage
  4293     \ (Linode)
  4294  [snip]
  4295  provider> Linode
  4296  
  4297  Option env_auth.
  4298  Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars).
  4299  Only applies if access_key_id and secret_access_key is blank.
  4300  Choose a number from below, or type in your own boolean value (true or false).
  4301  Press Enter for the default (false).
  4302   1 / Enter AWS credentials in the next step.
  4303     \ (false)
  4304   2 / Get AWS credentials from the environment (env vars or IAM).
  4305     \ (true)
  4306  env_auth> 
  4307  
  4308  Option access_key_id.
  4309  AWS Access Key ID.
  4310  Leave blank for anonymous access or runtime credentials.
  4311  Enter a value. Press Enter to leave empty.
  4312  access_key_id> ACCESS_KEY
  4313  
  4314  Option secret_access_key.
  4315  AWS Secret Access Key (password).
  4316  Leave blank for anonymous access or runtime credentials.
  4317  Enter a value. Press Enter to leave empty.
  4318  secret_access_key> SECRET_ACCESS_KEY
  4319  
  4320  Option endpoint.
  4321  Endpoint for Linode Object Storage API.
  4322  Choose a number from below, or type in your own value.
  4323  Press Enter to leave empty.
  4324   1 / Atlanta, GA (USA), us-southeast-1
  4325     \ (us-southeast-1.linodeobjects.com)
  4326   2 / Chicago, IL (USA), us-ord-1
  4327     \ (us-ord-1.linodeobjects.com)
  4328   3 / Frankfurt (Germany), eu-central-1
  4329     \ (eu-central-1.linodeobjects.com)
  4330   4 / Milan (Italy), it-mil-1
  4331     \ (it-mil-1.linodeobjects.com)
  4332   5 / Newark, NJ (USA), us-east-1
  4333     \ (us-east-1.linodeobjects.com)
  4334   6 / Paris (France), fr-par-1
  4335     \ (fr-par-1.linodeobjects.com)
  4336   7 / Seattle, WA (USA), us-sea-1
  4337     \ (us-sea-1.linodeobjects.com)
  4338   8 / Singapore ap-south-1
  4339     \ (ap-south-1.linodeobjects.com)
  4340   9 / Stockholm (Sweden), se-sto-1
  4341     \ (se-sto-1.linodeobjects.com)
  4342  10 / Washington, DC, (USA), us-iad-1
  4343     \ (us-iad-1.linodeobjects.com)
  4344  endpoint> 3
  4345  
  4346  Option acl.
  4347  Canned ACL used when creating buckets and storing or copying objects.
  4348  This ACL is used for creating objects and if bucket_acl isn't set, for creating buckets too.
  4349  For more info visit https://docs.aws.amazon.com/AmazonS3/latest/dev/acl-overview.html#canned-acl
  4350  Note that this ACL is applied when server-side copying objects as S3
  4351  doesn't copy the ACL from the source but rather writes a fresh one.
  4352  If the acl is an empty string then no X-Amz-Acl: header is added and
  4353  the default (private) will be used.
  4354  Choose a number from below, or type in your own value.
  4355  Press Enter to leave empty.
  4356     / Owner gets FULL_CONTROL.
  4357   1 | No one else has access rights (default).
  4358     \ (private)
  4359  [snip]
  4360  acl> 
  4361  
  4362  Edit advanced config?
  4363  y) Yes
  4364  n) No (default)
  4365  y/n> n
  4366  
  4367  Configuration complete.
  4368  Options:
  4369  - type: s3
  4370  - provider: Linode
  4371  - access_key_id: ACCESS_KEY
  4372  - secret_access_key: SECRET_ACCESS_KEY
  4373  - endpoint: eu-central-1.linodeobjects.com
  4374  Keep this "linode" remote?
  4375  y) Yes this is OK (default)
  4376  e) Edit this remote
  4377  d) Delete this remote
  4378  y/e/d> y
  4379  ```
  4380  
  4381  This will leave the config file looking like this.
  4382  
  4383  ```
  4384  [linode]
  4385  type = s3
  4386  provider = Linode
  4387  access_key_id = ACCESS_KEY
  4388  secret_access_key = SECRET_ACCESS_KEY
  4389  endpoint = eu-central-1.linodeobjects.com
  4390  ```
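
Once set up you can use the remote in the usual way, e.g. to make a bucket and copy files into it:

```
rclone mkdir linode:my-bucket
rclone copy /path/to/files linode:my-bucket
```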
  4391  
  4392  ### ArvanCloud {#arvan-cloud}
  4393  
[ArvanCloud](https://www.arvancloud.com/en/products/cloud-storage) Object Storage goes beyond the limits of traditional file storage.
It gives you access to backup and archived files and allows sharing.
Files such as profile images in apps, images sent by users or scanned documents can be stored securely and easily in the Object Storage service.
  4397  
  4398  ArvanCloud provides an S3 interface which can be configured for use with
  4399  rclone like this.
  4400  
  4401  ```
  4402  No remotes found, make a new one?
  4403  n) New remote
  4404  s) Set configuration password
  4405  n/s> n
  4406  name> ArvanCloud
  4407  Type of storage to configure.
  4408  Choose a number from below, or type in your own value
  4409  [snip]
  4410  XX / Amazon S3 (also Dreamhost, Ceph, ChinaMobile, ArvanCloud, Liara, Minio)
  4411     \ "s3"
  4412  [snip]
  4413  Storage> s3
  4414  Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars). Only applies if access_key_id and secret_access_key is blank.
  4415  Choose a number from below, or type in your own value
  4416   1 / Enter AWS credentials in the next step
  4417     \ "false"
  4418   2 / Get AWS credentials from the environment (env vars or IAM)
  4419     \ "true"
  4420  env_auth> 1
  4421  AWS Access Key ID - leave blank for anonymous access or runtime credentials.
  4422  access_key_id> YOURACCESSKEY
  4423  AWS Secret Access Key (password) - leave blank for anonymous access or runtime credentials.
  4424  secret_access_key> YOURSECRETACCESSKEY
  4425  Region to connect to.
  4426  Choose a number from below, or type in your own value
  4427     / The default endpoint - a good choice if you are unsure.
  4428   1 | US Region, Northern Virginia, or Pacific Northwest.
  4429     | Leave location constraint empty.
  4430     \ "us-east-1"
  4431  [snip]
  4432  region> 
  4433  Endpoint for S3 API.
  4434  Leave blank if using ArvanCloud to use the default endpoint for the region.
  4435  Specify if using an S3 clone such as Ceph.
  4436  endpoint> s3.arvanstorage.com
  4437  Location constraint - must be set to match the Region. Used when creating buckets only.
  4438  Choose a number from below, or type in your own value
  4439   1 / Empty for Iran-Tehran Region.
  4440     \ ""
  4441  [snip]
  4442  location_constraint>
  4443  Canned ACL used when creating buckets and/or storing objects in S3.
  4444  For more info visit https://docs.aws.amazon.com/AmazonS3/latest/dev/acl-overview.html#canned-acl
  4445  Choose a number from below, or type in your own value
  4446   1 / Owner gets FULL_CONTROL. No one else has access rights (default).
  4447     \ "private"
  4448  [snip]
  4449  acl>
  4450  The server-side encryption algorithm used when storing this object in S3.
  4451  Choose a number from below, or type in your own value
  4452   1 / None
  4453     \ ""
  4454   2 / AES256
  4455     \ "AES256"
  4456  server_side_encryption>
  4457  The storage class to use when storing objects in S3.
  4458  Choose a number from below, or type in your own value
  4459   1 / Default
  4460     \ ""
  4461   2 / Standard storage class
  4462     \ "STANDARD"
  4463  storage_class>
  4464  Remote config
  4465  --------------------
  4466  [ArvanCloud]
  4467  env_auth = false
  4468  access_key_id = YOURACCESSKEY
  4469  secret_access_key = YOURSECRETACCESSKEY
region =
  4471  endpoint = s3.arvanstorage.com
  4472  location_constraint =
  4473  acl =
  4474  server_side_encryption =
  4475  storage_class =
  4476  --------------------
  4477  y) Yes this is OK
  4478  e) Edit this remote
  4479  d) Delete this remote
  4480  y/e/d> y
  4481  ```
  4482  
  4483  This will leave the config file looking like this.
  4484  
  4485  ```
  4486  [ArvanCloud]
  4487  type = s3
  4488  provider = ArvanCloud
  4489  env_auth = false
  4490  access_key_id = YOURACCESSKEY
  4491  secret_access_key = YOURSECRETACCESSKEY
  4492  region =
  4493  endpoint = s3.arvanstorage.com
  4494  location_constraint =
  4495  acl =
  4496  server_side_encryption =
  4497  storage_class =
  4498  ```
  4499  
  4500  ### Tencent COS {#tencent-cos}
  4501  
[Tencent Cloud Object Storage (COS)](https://intl.cloud.tencent.com/product/cos) is a distributed storage service offered by Tencent Cloud for unstructured data. It is secure, stable, and low-cost, and is designed for massive scale, convenience, and low latency.
  4503  
  4504  To configure access to Tencent COS, follow the steps below:
  4505  
  4506  1. Run `rclone config` and select `n` for a new remote.
  4507  
  4508  ```
  4509  rclone config
  4510  No remotes found, make a new one?
  4511  n) New remote
  4512  s) Set configuration password
  4513  q) Quit config
  4514  n/s/q> n
  4515  ```
  4516  
  4517  2. Give the name of the configuration. For example, name it 'cos'.
  4518  
  4519  ```
  4520  name> cos
  4521  ```
  4522  
  4523  3. Select `s3` storage.
  4524  
  4525  ```
  4526  Choose a number from below, or type in your own value
  4527  [snip]
  4528  XX / Amazon S3 Compliant Storage Providers including AWS, ...
  4529     \ "s3"
  4530  [snip]
  4531  Storage> s3
  4532  ```
  4533  
  4534  4. Select `TencentCOS` provider.
  4535  ```
  4536  Choose a number from below, or type in your own value
  4537  1 / Amazon Web Services (AWS) S3
  4538     \ "AWS"
  4539  [snip]
  4540  11 / Tencent Cloud Object Storage (COS)
  4541     \ "TencentCOS"
  4542  [snip]
  4543  provider> TencentCOS
  4544  ```
  4545  
5. Enter your Tencent Cloud SecretId and SecretKey.
  4547  
  4548  ```
  4549  Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars).
  4550  Only applies if access_key_id and secret_access_key is blank.
  4551  Enter a boolean value (true or false). Press Enter for the default ("false").
  4552  Choose a number from below, or type in your own value
  4553   1 / Enter AWS credentials in the next step
  4554     \ "false"
  4555   2 / Get AWS credentials from the environment (env vars or IAM)
  4556     \ "true"
  4557  env_auth> 1
  4558  AWS Access Key ID.
  4559  Leave blank for anonymous access or runtime credentials.
  4560  Enter a string value. Press Enter for the default ("").
  4561  access_key_id> AKIDxxxxxxxxxx
  4562  AWS Secret Access Key (password)
  4563  Leave blank for anonymous access or runtime credentials.
  4564  Enter a string value. Press Enter for the default ("").
  4565  secret_access_key> xxxxxxxxxxx
  4566  ```
  4567  
6. Select an endpoint for Tencent COS. These are the standard endpoints for the different regions.
  4569  
  4570  ```
  4571   1 / Beijing Region.
  4572     \ "cos.ap-beijing.myqcloud.com"
  4573   2 / Nanjing Region.
  4574     \ "cos.ap-nanjing.myqcloud.com"
  4575   3 / Shanghai Region.
  4576     \ "cos.ap-shanghai.myqcloud.com"
  4577   4 / Guangzhou Region.
  4578     \ "cos.ap-guangzhou.myqcloud.com"
  4579  [snip]
  4580  endpoint> 4
  4581  ```
  4582  
  4583  7. Choose acl and storage class.
  4584  
  4585  ```
  4586  Note that this ACL is applied when server-side copying objects as S3
  4587  doesn't copy the ACL from the source but rather writes a fresh one.
  4588  Enter a string value. Press Enter for the default ("").
  4589  Choose a number from below, or type in your own value
 1 / Owner gets FULL_CONTROL. No one else has access rights (default).
  4591     \ "default"
  4592  [snip]
  4593  acl> 1
  4594  The storage class to use when storing new objects in Tencent COS.
  4595  Enter a string value. Press Enter for the default ("").
  4596  Choose a number from below, or type in your own value
  4597   1 / Default
  4598     \ ""
  4599  [snip]
  4600  storage_class> 1
  4601  Edit advanced config? (y/n)
  4602  y) Yes
  4603  n) No (default)
  4604  y/n> n
  4605  Remote config
  4606  --------------------
  4607  [cos]
  4608  type = s3
  4609  provider = TencentCOS
  4610  env_auth = false
  4611  access_key_id = xxx
  4612  secret_access_key = xxx
  4613  endpoint = cos.ap-guangzhou.myqcloud.com
  4614  acl = default
  4615  --------------------
  4616  y) Yes this is OK (default)
  4617  e) Edit this remote
  4618  d) Delete this remote
  4619  y/e/d> y
  4620  Current remotes:
  4621  
  4622  Name                 Type
  4623  ====                 ====
  4624  cos                  s3
  4625  ```
  4626  
  4627  ### Netease NOS
  4628  
For Netease NOS, configure as normal with `rclone config`, setting the
provider to `Netease`. This will automatically set
`force_path_style = false`, which is necessary for it to run properly.
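
For illustration only, a remote configured this way might look
something like this in the config file. The remote name, keys and
endpoint below are placeholders - substitute the NOS endpoint for your
region:

```
[nos]
type = s3
provider = Netease
access_key_id = YOUR_ACCESS_KEY
secret_access_key = YOUR_SECRET_KEY
endpoint = YOUR_REGION_ENDPOINT
```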
  4632  
  4633  ### Petabox
  4634  
  4635  Here is an example of making a [Petabox](https://petabox.io/)
  4636  configuration. First run:
  4637  
  4638  ```bash
  4639  rclone config
  4640  ```
  4641  
  4642  This will guide you through an interactive setup process.
  4643  
  4644  ```
  4645  No remotes found, make a new one?
  4646  n) New remote
  4647  s) Set configuration password
  4648  n/s> n
  4649  
  4650  Enter name for new remote.
  4651  name> My Petabox Storage
  4652  
  4653  Option Storage.
  4654  Type of storage to configure.
  4655  Choose a number from below, or type in your own value.
  4656  [snip]
  4657  XX / Amazon S3 Compliant Storage Providers including AWS, ...
  4658     \ "s3"
  4659  [snip]
  4660  Storage> s3
  4661  
  4662  Option provider.
  4663  Choose your S3 provider.
  4664  Choose a number from below, or type in your own value.
  4665  Press Enter to leave empty.
  4666  [snip]
  4667  XX / Petabox Object Storage
  4668     \ (Petabox)
  4669  [snip]
  4670  provider> Petabox
  4671  
  4672  Option env_auth.
  4673  Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars).
  4674  Only applies if access_key_id and secret_access_key is blank.
  4675  Choose a number from below, or type in your own boolean value (true or false).
  4676  Press Enter for the default (false).
  4677   1 / Enter AWS credentials in the next step.
  4678     \ (false)
  4679   2 / Get AWS credentials from the environment (env vars or IAM).
  4680     \ (true)
  4681  env_auth> 1
  4682  
  4683  Option access_key_id.
  4684  AWS Access Key ID.
  4685  Leave blank for anonymous access or runtime credentials.
  4686  Enter a value. Press Enter to leave empty.
  4687  access_key_id> YOUR_ACCESS_KEY_ID
  4688  
  4689  Option secret_access_key.
  4690  AWS Secret Access Key (password).
  4691  Leave blank for anonymous access or runtime credentials.
  4692  Enter a value. Press Enter to leave empty.
  4693  secret_access_key> YOUR_SECRET_ACCESS_KEY
  4694  
  4695  Option region.
  4696  Region where your bucket will be created and your data stored.
  4697  Choose a number from below, or type in your own value.
  4698  Press Enter to leave empty.
  4699   1 / US East (N. Virginia)
  4700     \ (us-east-1)
  4701   2 / Europe (Frankfurt)
  4702     \ (eu-central-1)
  4703   3 / Asia Pacific (Singapore)
  4704     \ (ap-southeast-1)
  4705   4 / Middle East (Bahrain)
  4706     \ (me-south-1)
  4707   5 / South America (São Paulo)
  4708     \ (sa-east-1)
  4709  region> 1
  4710  
  4711  Option endpoint.
  4712  Endpoint for Petabox S3 Object Storage.
  4713  Specify the endpoint from the same region.
  4714  Choose a number from below, or type in your own value.
  4715   1 / US East (N. Virginia)
  4716     \ (s3.petabox.io)
  4717   2 / US East (N. Virginia)
  4718     \ (s3.us-east-1.petabox.io)
  4719   3 / Europe (Frankfurt)
  4720     \ (s3.eu-central-1.petabox.io)
  4721   4 / Asia Pacific (Singapore)
  4722     \ (s3.ap-southeast-1.petabox.io)
  4723   5 / Middle East (Bahrain)
  4724     \ (s3.me-south-1.petabox.io)
  4725   6 / South America (São Paulo)
  4726     \ (s3.sa-east-1.petabox.io)
  4727  endpoint> 1
  4728  
  4729  Option acl.
  4730  Canned ACL used when creating buckets and storing or copying objects.
  4731  This ACL is used for creating objects and if bucket_acl isn't set, for creating buckets too.
  4732  For more info visit https://docs.aws.amazon.com/AmazonS3/latest/dev/acl-overview.html#canned-acl
  4733  Note that this ACL is applied when server-side copying objects as S3
  4734  doesn't copy the ACL from the source but rather writes a fresh one.
  4735  If the acl is an empty string then no X-Amz-Acl: header is added and
  4736  the default (private) will be used.
  4737  Choose a number from below, or type in your own value.
  4738  Press Enter to leave empty.
  4739     / Owner gets FULL_CONTROL.
  4740   1 | No one else has access rights (default).
  4741     \ (private)
  4742     / Owner gets FULL_CONTROL.
  4743   2 | The AllUsers group gets READ access.
  4744     \ (public-read)
  4745     / Owner gets FULL_CONTROL.
  4746   3 | The AllUsers group gets READ and WRITE access.
  4747     | Granting this on a bucket is generally not recommended.
  4748     \ (public-read-write)
  4749     / Owner gets FULL_CONTROL.
  4750   4 | The AuthenticatedUsers group gets READ access.
  4751     \ (authenticated-read)
  4752     / Object owner gets FULL_CONTROL.
  4753   5 | Bucket owner gets READ access.
  4754     | If you specify this canned ACL when creating a bucket, Amazon S3 ignores it.
  4755     \ (bucket-owner-read)
  4756     / Both the object owner and the bucket owner get FULL_CONTROL over the object.
  4757   6 | If you specify this canned ACL when creating a bucket, Amazon S3 ignores it.
  4758     \ (bucket-owner-full-control)
  4759  acl> 1
  4760  
  4761  Edit advanced config?
  4762  y) Yes
  4763  n) No (default)
y/n> n
  4765  
  4766  Configuration complete.
  4767  Options:
  4768  - type: s3
  4769  - provider: Petabox
  4770  - access_key_id: YOUR_ACCESS_KEY_ID
  4771  - secret_access_key: YOUR_SECRET_ACCESS_KEY
  4772  - region: us-east-1
  4773  - endpoint: s3.petabox.io
  4774  Keep this "My Petabox Storage" remote?
  4775  y) Yes this is OK (default)
  4776  e) Edit this remote
  4777  d) Delete this remote
  4778  y/e/d> y
  4779  ```
  4780  
  4781  This will leave the config file looking like this.
  4782  
  4783  ```
  4784  [My Petabox Storage]
  4785  type = s3
  4786  provider = Petabox
  4787  access_key_id = YOUR_ACCESS_KEY_ID
  4788  secret_access_key = YOUR_SECRET_ACCESS_KEY
  4789  region = us-east-1
  4790  endpoint = s3.petabox.io
  4791  ```
  4792  
  4793  ### Storj
  4794  
Storj is a decentralized cloud storage service which can be used
through its native protocol or an S3 compatible gateway.
  4797  
  4798  The S3 compatible gateway is configured using `rclone config` with a
  4799  type of `s3` and with a provider name of `Storj`. Here is an example
  4800  run of the configurator.
  4801  
  4802  ```
  4803  Type of storage to configure.
  4804  Storage> s3
  4805  Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars).
  4806  Only applies if access_key_id and secret_access_key is blank.
  4807  Choose a number from below, or type in your own boolean value (true or false).
  4808  Press Enter for the default (false).
  4809   1 / Enter AWS credentials in the next step.
  4810     \ (false)
  4811   2 / Get AWS credentials from the environment (env vars or IAM).
  4812     \ (true)
  4813  env_auth> 1
  4814  Option access_key_id.
  4815  AWS Access Key ID.
  4816  Leave blank for anonymous access or runtime credentials.
  4817  Enter a value. Press Enter to leave empty.
  4818  access_key_id> XXXX (as shown when creating the access grant)
  4819  Option secret_access_key.
  4820  AWS Secret Access Key (password).
  4821  Leave blank for anonymous access or runtime credentials.
  4822  Enter a value. Press Enter to leave empty.
  4823  secret_access_key> XXXX (as shown when creating the access grant)
  4824  Option endpoint.
  4825  Endpoint of the Shared Gateway.
  4826  Choose a number from below, or type in your own value.
  4827  Press Enter to leave empty.
  4828   1 / EU1 Shared Gateway
  4829     \ (gateway.eu1.storjshare.io)
  4830   2 / US1 Shared Gateway
  4831     \ (gateway.us1.storjshare.io)
  4832   3 / Asia-Pacific Shared Gateway
  4833     \ (gateway.ap1.storjshare.io)
  4834  endpoint> 1 (as shown when creating the access grant)
  4835  Edit advanced config?
  4836  y) Yes
  4837  n) No (default)
  4838  y/n> n
  4839  ```
  4840  
Note that S3 credentials are generated when you [create an access
  4842  grant](https://docs.storj.io/dcs/api-reference/s3-compatible-gateway#usage).
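
Once configured, the gateway remote behaves like any other S3 remote.
For example, assuming the remote was named `storj-gw` (the bucket name
is a placeholder):

```
rclone lsd storj-gw:
rclone copy /path/to/files storj-gw:mybucket
```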
  4843  
  4844  #### Backend quirks
  4845  
  4846  - `--chunk-size` is forced to be 64 MiB or greater. This will use more
  4847    memory than the default of 5 MiB.
- Server-side copy is disabled as it isn't currently supported in the
  4849    gateway.
  4850  - GetTier and SetTier are not supported.
  4851  
  4852  #### Backend bugs
  4853  
Due to [issue #39](https://github.com/storj/gateway-mt/issues/39)
uploading multipart files via the S3 gateway causes them to lose their
metadata. For rclone's purposes this means that the modification time
is not stored, nor is any MD5SUM (if one is available from the
source).
  4859  
  4860  This has the following consequences:
  4861  
- Using `rclone rcat` will fail as the metadata doesn't match after upload
- Uploading files with `rclone mount` will fail for the same reason
    - This can be worked around by using `--vfs-cache-mode writes` or `--vfs-cache-mode full` or setting `--s3-upload-cutoff` large
- Files uploaded via a multipart upload won't have their modtimes stored
  4866      - This will mean that `rclone sync` will likely keep trying to upload files bigger than `--s3-upload-cutoff`
  4867      - This can be worked around with `--checksum` or `--size-only` or setting `--s3-upload-cutoff` large
  4868      - The maximum value for `--s3-upload-cutoff` is 5GiB though
  4869  
  4870  One general purpose workaround is to set `--s3-upload-cutoff 5G`. This
  4871  means that rclone will upload files smaller than 5GiB as single parts.
  4872  Note that this can be set in the config file with `upload_cutoff = 5G`
  4873  or configured in the advanced settings. If you regularly transfer
  4874  files larger than 5G then using `--checksum` or `--size-only` in
  4875  `rclone sync` is the recommended workaround.
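
For example, either of these invocations applies the workarounds
described above (remote and bucket names are placeholders):

```
# Upload files below 5 GiB as single parts so their metadata survives
rclone copy --s3-upload-cutoff 5G /path/to/files storj-gw:mybucket

# Or compare by checksum rather than modtime when syncing
rclone sync --checksum /path/to/files storj-gw:mybucket
```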
  4876  
  4877  #### Comparison with the native protocol
  4878  
Use [the native protocol](/storj) to take advantage of
client-side encryption as well as to achieve the best possible
download performance. Uploads will be erasure-coded locally, thus a
1 GB upload will result in 2.68 GB of data being uploaded to storage
nodes across the network.
  4884  
  4885  Use this backend and the S3 compatible Hosted Gateway to increase
  4886  upload performance and reduce the load on your systems and network.
Uploads will be encrypted and erasure-coded server-side, thus a 1 GB
upload will result in only 1 GB of data being uploaded to storage
nodes across the network.
  4890  
For a more detailed comparison please check the documentation of the
[storj](/storj) backend.
  4893  
  4894  ## Limitations
  4895  
  4896  `rclone about` is not supported by the S3 backend. Backends without
  4897  this capability cannot determine free space for an rclone mount or
  4898  use policy `mfs` (most free space) as a member of an rclone union
  4899  remote.
  4900  
See [List of backends that do not support rclone about](https://rclone.org/overview/#optional-features) and [rclone about](https://rclone.org/commands/rclone_about/).
  4902  
  4903  
  4904  
  4905  ### Synology C2 Object Storage {#synology-c2}
  4906  
[Synology C2 Object Storage](https://c2.synology.com/en-global/object-storage/overview) provides a secure, S3-compatible, and cost-effective cloud storage solution without API request fees, download fees, or deletion penalties.
  4908  
Synology C2 is configured using `rclone config` with a type of `s3`
and a provider name of `Synology`. Here is an example run of the
configurator.
  4912  
  4913  First run:
  4914  
  4915  ```
  4916  rclone config
  4917  ```
  4918  
  4919  This will guide you through an interactive setup process.
  4920  
  4921  ```
  4922  No remotes found, make a new one?
  4923  n) New remote
  4924  s) Set configuration password
  4925  q) Quit config
  4926  
  4927  n/s/q> n
  4928  
Enter name for new remote.
  4930  name> syno
  4931  
  4932  Type of storage to configure.
  4933  Enter a string value. Press Enter for the default ("").
  4934  Choose a number from below, or type in your own value
  4935  
  4936  XX / Amazon S3 Compliant Storage Providers including AWS, ...
  4937     \ "s3"
  4938  
  4939  Storage> s3
  4940  
  4941  Choose your S3 provider.
  4942  Enter a string value. Press Enter for the default ("").
  4943  Choose a number from below, or type in your own value
  4944   24 / Synology C2 Object Storage
  4945     \ (Synology)
  4946  
  4947  provider> Synology
  4948  
  4949  Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars).
  4950  Only applies if access_key_id and secret_access_key is blank.
  4951  Enter a boolean value (true or false). Press Enter for the default ("false").
  4952  Choose a number from below, or type in your own value
  4953   1 / Enter AWS credentials in the next step
  4954     \ "false"
  4955   2 / Get AWS credentials from the environment (env vars or IAM)
  4956     \ "true"
  4957  
  4958  env_auth> 1
  4959  
  4960  AWS Access Key ID.
  4961  Leave blank for anonymous access or runtime credentials.
  4962  Enter a string value. Press Enter for the default ("").
  4963  
  4964  access_key_id> accesskeyid
  4965  
  4966  AWS Secret Access Key (password)
  4967  Leave blank for anonymous access or runtime credentials.
  4968  Enter a string value. Press Enter for the default ("").
  4969  
  4970  secret_access_key> secretaccesskey
  4971  
Region where your data is stored.
  4973  Choose a number from below, or type in your own value.
  4974  Press Enter to leave empty.
  4975   1 / Europe Region 1
  4976     \ (eu-001)
  4977   2 / Europe Region 2
  4978     \ (eu-002)
  4979   3 / US Region 1
  4980     \ (us-001)
  4981   4 / US Region 2
  4982     \ (us-002)
  4983   5 / Asia (Taiwan)
  4984     \ (tw-001)
  4985  
region> 1
  4987  
  4988  Option endpoint.
  4989  Endpoint for Synology C2 Object Storage API.
  4990  Choose a number from below, or type in your own value.
  4991  Press Enter to leave empty.
  4992   1 / EU Endpoint 1
  4993     \ (eu-001.s3.synologyc2.net)
  4994   2 / US Endpoint 1
  4995     \ (us-001.s3.synologyc2.net)
  4996   3 / TW Endpoint 1
  4997     \ (tw-001.s3.synologyc2.net)
  4998  
  4999  endpoint> 1
  5000  
  5001  Option location_constraint.
  5002  Location constraint - must be set to match the Region.
  5003  Leave blank if not sure. Used when creating buckets only.
  5004  Enter a value. Press Enter to leave empty.
  5005  location_constraint>
  5006  
  5007  Edit advanced config? (y/n)
  5008  y) Yes
  5009  n) No
  5010  y/n> y
  5011  
  5012  Option no_check_bucket.
  5013  If set, don't attempt to check the bucket exists or create it.
  5014  This can be useful when trying to minimise the number of transactions
  5015  rclone does if you know the bucket exists already.
  5016  It can also be needed if the user you are using does not have bucket
  5017  creation permissions. Before v1.52.0 this would have passed silently
  5018  due to a bug.
  5019  Enter a boolean value (true or false). Press Enter for the default (true).
  5020  
  5021  no_check_bucket> true
  5022  
  5023  Configuration complete.
  5024  Options:
  5025  - type: s3
  5026  - provider: Synology
  5027  - region: eu-001
  5028  - endpoint: eu-001.s3.synologyc2.net
  5029  - no_check_bucket: true
  5030  Keep this "syno" remote?
  5031  y) Yes this is OK (default)
  5032  e) Edit this remote
  5033  d) Delete this remote
  5034  
  5035  y/e/d> y
  5036  ```