
---
title: "Amazon S3"
description: "Rclone docs for Amazon S3"
---

{{< icon "fab fa-amazon" >}} Amazon S3 Storage Providers
--------------------------------------------------------

The S3 backend can be used with a number of different providers:

{{< provider_list >}}
{{< provider name="AWS S3" home="https://aws.amazon.com/s3/" config="/s3/#amazon-s3" start="true" >}}
{{< provider name="Alibaba Cloud (Aliyun) Object Storage System (OSS)" home="https://www.alibabacloud.com/product/oss/" config="/s3/#alibaba-oss" >}}
{{< provider name="Ceph" home="http://ceph.com/" config="/s3/#ceph" >}}
{{< provider name="DigitalOcean Spaces" home="https://www.digitalocean.com/products/object-storage/" config="/s3/#digitalocean-spaces" >}}
{{< provider name="Dreamhost" home="https://www.dreamhost.com/cloud/storage/" config="/s3/#dreamhost" >}}
{{< provider name="IBM COS S3" home="http://www.ibm.com/cloud/object-storage" config="/s3/#ibm-cos-s3" >}}
{{< provider name="Minio" home="https://www.minio.io/" config="/s3/#minio" >}}
{{< provider name="Scaleway" home="https://www.scaleway.com/en/object-storage/" config="/s3/#scaleway" >}}
{{< provider name="StackPath" home="https://www.stackpath.com/products/object-storage/" config="/s3/#stackpath" >}}
{{< provider name="Wasabi" home="https://wasabi.com/" config="/s3/#wasabi" end="true" >}}
{{< /provider_list >}}

Paths are specified as `remote:bucket` (or `remote:` for the `lsd`
command). You may put subdirectories in too, eg `remote:bucket/path/to/dir`.

Once you have made a remote (see the provider specific section above)
you can use it like this:

See all buckets

    rclone lsd remote:

Make a new bucket

    rclone mkdir remote:bucket

List the contents of a bucket

    rclone ls remote:bucket

Sync `/home/local/directory` to the remote bucket, deleting any excess
files in the bucket.

    rclone sync /home/local/directory remote:bucket

## AWS S3 {#amazon-s3}

Here is an example of making an s3 configuration.  First run

    rclone config

This will guide you through an interactive setup process.

```
No remotes found - make a new one
n) New remote
s) Set configuration password
q) Quit config
n/s/q> n
name> remote
Type of storage to configure.
Choose a number from below, or type in your own value
[snip]
XX / Amazon S3 Compliant Storage Providers (AWS, Ceph, Dreamhost, IBM COS, Minio)
   \ "s3"
[snip]
Storage> s3
Choose your S3 provider.
Choose a number from below, or type in your own value
 1 / Amazon Web Services (AWS) S3
   \ "AWS"
 2 / Ceph Object Storage
   \ "Ceph"
 3 / Digital Ocean Spaces
   \ "DigitalOcean"
 4 / Dreamhost DreamObjects
   \ "Dreamhost"
 5 / IBM COS S3
   \ "IBMCOS"
 6 / Minio Object Storage
   \ "Minio"
 7 / Wasabi Object Storage
   \ "Wasabi"
 8 / Any other S3 compatible provider
   \ "Other"
provider> 1
Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars). Only applies if access_key_id and secret_access_key are blank.
Choose a number from below, or type in your own value
 1 / Enter AWS credentials in the next step
   \ "false"
 2 / Get AWS credentials from the environment (env vars or IAM)
   \ "true"
env_auth> 1
AWS Access Key ID - leave blank for anonymous access or runtime credentials.
access_key_id> XXX
AWS Secret Access Key (password) - leave blank for anonymous access or runtime credentials.
secret_access_key> YYY
Region to connect to.
Choose a number from below, or type in your own value
   / The default endpoint - a good choice if you are unsure.
 1 | US Region, Northern Virginia or Pacific Northwest.
   | Leave location constraint empty.
   \ "us-east-1"
   / US East (Ohio) Region
 2 | Needs location constraint us-east-2.
   \ "us-east-2"
   / US West (Oregon) Region
 3 | Needs location constraint us-west-2.
   \ "us-west-2"
   / US West (Northern California) Region
 4 | Needs location constraint us-west-1.
   \ "us-west-1"
   / Canada (Central) Region
 5 | Needs location constraint ca-central-1.
   \ "ca-central-1"
   / EU (Ireland) Region
 6 | Needs location constraint EU or eu-west-1.
   \ "eu-west-1"
   / EU (London) Region
 7 | Needs location constraint eu-west-2.
   \ "eu-west-2"
   / EU (Frankfurt) Region
 8 | Needs location constraint eu-central-1.
   \ "eu-central-1"
   / Asia Pacific (Singapore) Region
 9 | Needs location constraint ap-southeast-1.
   \ "ap-southeast-1"
   / Asia Pacific (Sydney) Region
10 | Needs location constraint ap-southeast-2.
   \ "ap-southeast-2"
   / Asia Pacific (Tokyo) Region
11 | Needs location constraint ap-northeast-1.
   \ "ap-northeast-1"
   / Asia Pacific (Seoul)
12 | Needs location constraint ap-northeast-2.
   \ "ap-northeast-2"
   / Asia Pacific (Mumbai)
13 | Needs location constraint ap-south-1.
   \ "ap-south-1"
   / Asia Pacific (Hong Kong) Region
14 | Needs location constraint ap-east-1.
   \ "ap-east-1"
   / South America (Sao Paulo) Region
15 | Needs location constraint sa-east-1.
   \ "sa-east-1"
region> 1
Endpoint for S3 API.
Leave blank if using AWS to use the default endpoint for the region.
endpoint>
Location constraint - must be set to match the Region. Used when creating buckets only.
Choose a number from below, or type in your own value
 1 / Empty for US Region, Northern Virginia or Pacific Northwest.
   \ ""
 2 / US East (Ohio) Region.
   \ "us-east-2"
 3 / US West (Oregon) Region.
   \ "us-west-2"
 4 / US West (Northern California) Region.
   \ "us-west-1"
 5 / Canada (Central) Region.
   \ "ca-central-1"
 6 / EU (Ireland) Region.
   \ "eu-west-1"
 7 / EU (London) Region.
   \ "eu-west-2"
 8 / EU Region.
   \ "EU"
 9 / Asia Pacific (Singapore) Region.
   \ "ap-southeast-1"
10 / Asia Pacific (Sydney) Region.
   \ "ap-southeast-2"
11 / Asia Pacific (Tokyo) Region.
   \ "ap-northeast-1"
12 / Asia Pacific (Seoul)
   \ "ap-northeast-2"
13 / Asia Pacific (Mumbai)
   \ "ap-south-1"
14 / Asia Pacific (Hong Kong)
   \ "ap-east-1"
15 / South America (Sao Paulo) Region.
   \ "sa-east-1"
location_constraint> 1
Canned ACL used when creating buckets and/or storing objects in S3.
For more info visit https://docs.aws.amazon.com/AmazonS3/latest/dev/acl-overview.html#canned-acl
Choose a number from below, or type in your own value
 1 / Owner gets FULL_CONTROL. No one else has access rights (default).
   \ "private"
 2 / Owner gets FULL_CONTROL. The AllUsers group gets READ access.
   \ "public-read"
   / Owner gets FULL_CONTROL. The AllUsers group gets READ and WRITE access.
 3 | Granting this on a bucket is generally not recommended.
   \ "public-read-write"
 4 / Owner gets FULL_CONTROL. The AuthenticatedUsers group gets READ access.
   \ "authenticated-read"
   / Object owner gets FULL_CONTROL. Bucket owner gets READ access.
 5 | If you specify this canned ACL when creating a bucket, Amazon S3 ignores it.
   \ "bucket-owner-read"
   / Both the object owner and the bucket owner get FULL_CONTROL over the object.
 6 | If you specify this canned ACL when creating a bucket, Amazon S3 ignores it.
   \ "bucket-owner-full-control"
acl> 1
The server-side encryption algorithm used when storing this object in S3.
Choose a number from below, or type in your own value
 1 / None
   \ ""
 2 / AES256
   \ "AES256"
server_side_encryption> 1
The storage class to use when storing objects in S3.
Choose a number from below, or type in your own value
 1 / Default
   \ ""
 2 / Standard storage class
   \ "STANDARD"
 3 / Reduced redundancy storage class
   \ "REDUCED_REDUNDANCY"
 4 / Standard Infrequent Access storage class
   \ "STANDARD_IA"
 5 / One Zone Infrequent Access storage class
   \ "ONEZONE_IA"
 6 / Glacier storage class
   \ "GLACIER"
 7 / Glacier Deep Archive storage class
   \ "DEEP_ARCHIVE"
 8 / Intelligent-Tiering storage class
   \ "INTELLIGENT_TIERING"
storage_class> 1
Remote config
--------------------
[remote]
type = s3
provider = AWS
env_auth = false
access_key_id = XXX
secret_access_key = YYY
region = us-east-1
endpoint =
location_constraint =
acl = private
server_side_encryption =
storage_class =
--------------------
y) Yes this is OK
e) Edit this remote
d) Delete this remote
y/e/d>
```
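
Alternatively, a remote like the one above can be created non-interactively with `rclone config create`; the key values here are placeholders:

```
rclone config create remote s3 provider AWS env_auth false \
    access_key_id XXX secret_access_key YYY region us-east-1 acl private
```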

### --fast-list ###

This remote supports `--fast-list` which allows you to use fewer
transactions in exchange for more memory. See the [rclone
docs](/docs/#fast-list) for more details.

### --update and --use-server-modtime ###

As noted below, the modified time is stored as metadata on the object. It is
used by default for all operations that require checking the time a file was
last updated. It allows rclone to treat the remote more like a true filesystem,
but it is inefficient because it requires an extra API call to retrieve the
metadata.

For many operations, the time the object was last uploaded to the remote is
sufficient to determine whether it is "dirty". By using `--update` along with
`--use-server-modtime`, you can avoid the extra API call and simply upload
files whose local modtime is newer than the time they were last uploaded.
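
For example, a sync that skips the metadata check and relies only on the server's upload time (paths are illustrative):

```
rclone sync --update --use-server-modtime /home/local/directory remote:bucket
```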
   268  
### Modified time ###

The modified time is stored as metadata on the object as
`X-Amz-Meta-Mtime` as floating point since the epoch accurate to 1 ns.

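For example, an object's metadata might carry an entry of this shape (the timestamp shown is illustrative):

```
X-Amz-Meta-Mtime: 1567271056.123456789
```
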
If the modification time needs to be updated rclone will attempt to perform a server
side copy to update the modification time if the object can be copied in a single part.
If the object is larger than 5GB or is in Glacier or Glacier Deep Archive
storage, the object will be uploaded rather than copied.

#### Restricted filename characters

S3 allows any valid UTF-8 string as a key.

Invalid UTF-8 bytes will be [replaced](/overview/#invalid-utf8), as
they can't be used in XML.

The following characters are replaced since these are problematic when
dealing with the REST API:

| Character | Value | Replacement |
| --------- |:-----:|:-----------:|
| NUL       | 0x00  | ␀           |
| /         | 0x2F  | ／           |

The encoding will also encode these file names as they don't seem to
work with the SDK properly:

| File name | Replacement |
| --------- |:-----------:|
| .         | ．          |
| ..        | ．．         |

### Multipart uploads ###

rclone supports multipart uploads with S3 which means that it can
upload files bigger than 5GB.

Note that files uploaded *both* with multipart upload *and* through
crypt remotes do not have MD5 sums.

rclone switches from single part uploads to multipart uploads at the
point specified by `--s3-upload-cutoff`.  This can be a maximum of 5GB
and a minimum of 0 (ie always upload multipart files).

The chunk sizes used in the multipart upload are specified by
`--s3-chunk-size` and the number of chunks uploaded concurrently is
specified by `--s3-upload-concurrency`.

Multipart uploads will use `--transfers` * `--s3-upload-concurrency` *
`--s3-chunk-size` extra memory.  Single part uploads do not use extra
memory.

Single part transfers can be faster than multipart transfers or slower
depending on your latency from S3 - the more latency, the more likely
single part transfers will be faster.

Increasing `--s3-upload-concurrency` will increase throughput (8 would
be a sensible value) and increasing `--s3-chunk-size` also increases
throughput (16M would be sensible).  Increasing either of these will
use more memory.  The default values are high enough to gain most of
the possible performance without using too much memory.
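
As a worked example, the following copy (paths illustrative) would reserve at most 4 * 8 * 16M = 512M of extra buffer memory for multipart uploads:

```
rclone copy --transfers 4 --s3-upload-concurrency 8 --s3-chunk-size 16M /path/to/local remote:bucket
```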


### Buckets and Regions ###

With Amazon S3 you can list buckets (`rclone lsd`) using any region,
but you can only access the content of a bucket from the region it was
created in.  If you attempt to access a bucket from the wrong region,
you will get an error, `incorrect region, the bucket is not in 'XXX'
region`.

### Authentication ###

There are a number of ways to supply `rclone` with a set of AWS
credentials, with and without using the environment.

The different authentication methods are tried in this order:

 - Directly in the rclone configuration file (`env_auth = false` in the config file):
   - `access_key_id` and `secret_access_key` are required.
   - `session_token` can be optionally set when using AWS STS.
 - Runtime configuration (`env_auth = true` in the config file):
   - Export the following environment variables before running `rclone`:
     - Access Key ID: `AWS_ACCESS_KEY_ID` or `AWS_ACCESS_KEY`
     - Secret Access Key: `AWS_SECRET_ACCESS_KEY` or `AWS_SECRET_KEY`
     - Session Token: `AWS_SESSION_TOKEN` (optional)
   - Or, use a [named profile](https://docs.aws.amazon.com/cli/latest/userguide/cli-multiple-profiles.html):
     - Profile files are standard files used by AWS CLI tools
     - By default it will use the "default" profile from the file in your home directory (eg `~/.aws/credentials` on unix based systems); to change this, set these environment variables:
         - `AWS_SHARED_CREDENTIALS_FILE` to control which file.
         - `AWS_PROFILE` to control which profile to use.
   - Or, run `rclone` in an ECS task with an IAM role (AWS only).
   - Or, run `rclone` on an EC2 instance with an IAM role (AWS only).
   - Or, run `rclone` in an EKS pod with an IAM role that is associated with a service account (AWS only).

If none of these options actually ends up providing `rclone` with AWS
credentials then S3 interaction will be non-authenticated (see below).
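
For example, with `env_auth = true` set on the remote, credentials can be supplied entirely from the environment (the key values here are placeholders):

```
export AWS_ACCESS_KEY_ID=XXX
export AWS_SECRET_ACCESS_KEY=YYY
rclone lsd remote:
```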

### S3 Permissions ###

When using the `sync` subcommand of `rclone` the following minimum
permissions are required to be available on the bucket being written to:

* `ListBucket`
* `DeleteObject`
* `GetObject`
* `PutObject`
* `PutObjectACL`

When using the `lsd` subcommand, the `ListAllMyBuckets` permission is required.

Example policy:

```
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": {
                "AWS": "arn:aws:iam::USER_SID:user/USER_NAME"
            },
            "Action": [
                "s3:ListBucket",
                "s3:DeleteObject",
                "s3:GetObject",
                "s3:PutObject",
                "s3:PutObjectAcl"
            ],
            "Resource": [
              "arn:aws:s3:::BUCKET_NAME/*",
              "arn:aws:s3:::BUCKET_NAME"
            ]
        },
        {
            "Effect": "Allow",
            "Action": "s3:ListAllMyBuckets",
            "Resource": "arn:aws:s3:::*"
        }
    ]
}
```

Notes on above:

1. This is a policy that can be used when creating a bucket. It assumes
   that `USER_NAME` has been created.
2. The Resource entry must include both resource ARNs, as one implies
   the bucket and the other implies the bucket's objects.

For reference, [here's an Ansible script](https://gist.github.com/ebridges/ebfc9042dd7c756cd101cfa807b7ae2b)
that will generate one or more buckets that will work with `rclone sync`.

### Key Management System (KMS) ###

If you are using server side encryption with KMS then you will find
you can't transfer small objects.  As a work-around you can use the
`--ignore-checksum` flag.
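
For example (paths illustrative):

```
rclone copy --ignore-checksum /path/to/local remote:bucket
```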

A proper fix is being worked on in [issue #1824](https://github.com/rclone/rclone/issues/1824).

### Glacier and Glacier Deep Archive ###

You can upload objects using the glacier storage class or transition them to glacier using a [lifecycle policy](http://docs.aws.amazon.com/AmazonS3/latest/user-guide/create-lifecycle.html).
The bucket can still be synced or copied into normally, but if rclone
tries to access data from the glacier storage class you will see an error like below.

    2017/09/11 19:07:43 Failed to sync: failed to open source object: Object in GLACIER, restore first: path/to/file

In this case you need to [restore](http://docs.aws.amazon.com/AmazonS3/latest/user-guide/restore-archived-objects.html)
the object(s) in question before using rclone.
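
rclone cannot issue the restore itself; one way to do it is with the AWS CLI (bucket, key, and restore duration are illustrative):

```
aws s3api restore-object --bucket BUCKET_NAME --key path/to/file \
    --restore-request '{"Days":7,"GlacierJobParameters":{"Tier":"Standard"}}'
```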

Note that rclone only speaks the S3 API; it does not speak the Glacier
Vault API, so rclone cannot directly access Glacier Vaults.

{{< rem autogenerated options start" - DO NOT EDIT - instead edit fs.RegInfo in backend/s3/s3.go then run make backenddocs" >}}
### Standard Options

Here are the standard options specific to s3 (Amazon S3 Compliant Storage Provider (AWS, Alibaba, Ceph, Digital Ocean, Dreamhost, IBM COS, Minio, etc)).

#### --s3-provider

Choose your S3 provider.

- Config:      provider
- Env Var:     RCLONE_S3_PROVIDER
- Type:        string
- Default:     ""
- Examples:
    - "AWS"
        - Amazon Web Services (AWS) S3
    - "Alibaba"
        - Alibaba Cloud Object Storage System (OSS) formerly Aliyun
    - "Ceph"
        - Ceph Object Storage
    - "DigitalOcean"
        - Digital Ocean Spaces
    - "Dreamhost"
        - Dreamhost DreamObjects
    - "IBMCOS"
        - IBM COS S3
    - "Minio"
        - Minio Object Storage
    - "Netease"
        - Netease Object Storage (NOS)
    - "StackPath"
        - StackPath Object Storage
    - "Wasabi"
        - Wasabi Object Storage
    - "Other"
        - Any other S3 compatible provider

#### --s3-env-auth

Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars).
Only applies if access_key_id and secret_access_key are blank.

- Config:      env_auth
- Env Var:     RCLONE_S3_ENV_AUTH
- Type:        bool
- Default:     false
- Examples:
    - "false"
        - Enter AWS credentials in the next step
    - "true"
        - Get AWS credentials from the environment (env vars or IAM)

#### --s3-access-key-id

AWS Access Key ID.
Leave blank for anonymous access or runtime credentials.

- Config:      access_key_id
- Env Var:     RCLONE_S3_ACCESS_KEY_ID
- Type:        string
- Default:     ""

#### --s3-secret-access-key

AWS Secret Access Key (password).
Leave blank for anonymous access or runtime credentials.

- Config:      secret_access_key
- Env Var:     RCLONE_S3_SECRET_ACCESS_KEY
- Type:        string
- Default:     ""

#### --s3-region

Region to connect to.

- Config:      region
- Env Var:     RCLONE_S3_REGION
- Type:        string
- Default:     ""
- Examples:
    - "us-east-1"
        - The default endpoint - a good choice if you are unsure.
        - US Region, Northern Virginia or Pacific Northwest.
        - Leave location constraint empty.
    - "us-east-2"
        - US East (Ohio) Region
        - Needs location constraint us-east-2.
    - "us-west-2"
        - US West (Oregon) Region
        - Needs location constraint us-west-2.
    - "us-west-1"
        - US West (Northern California) Region
        - Needs location constraint us-west-1.
    - "ca-central-1"
        - Canada (Central) Region
        - Needs location constraint ca-central-1.
    - "eu-west-1"
        - EU (Ireland) Region
        - Needs location constraint EU or eu-west-1.
    - "eu-west-2"
        - EU (London) Region
        - Needs location constraint eu-west-2.
    - "eu-north-1"
        - EU (Stockholm) Region
        - Needs location constraint eu-north-1.
    - "eu-central-1"
        - EU (Frankfurt) Region
        - Needs location constraint eu-central-1.
    - "ap-southeast-1"
        - Asia Pacific (Singapore) Region
        - Needs location constraint ap-southeast-1.
    - "ap-southeast-2"
        - Asia Pacific (Sydney) Region
        - Needs location constraint ap-southeast-2.
    - "ap-northeast-1"
        - Asia Pacific (Tokyo) Region
        - Needs location constraint ap-northeast-1.
    - "ap-northeast-2"
        - Asia Pacific (Seoul)
        - Needs location constraint ap-northeast-2.
    - "ap-south-1"
        - Asia Pacific (Mumbai)
        - Needs location constraint ap-south-1.
    - "ap-east-1"
        - Asia Pacific (Hong Kong) Region
        - Needs location constraint ap-east-1.
    - "sa-east-1"
        - South America (Sao Paulo) Region
        - Needs location constraint sa-east-1.

#### --s3-region

Region to connect to.
Leave blank if you are using an S3 clone and you don't have a region.

- Config:      region
- Env Var:     RCLONE_S3_REGION
- Type:        string
- Default:     ""
- Examples:
    - ""
        - Use this if unsure. Will use v4 signatures and an empty region.
    - "other-v2-signature"
        - Use this only if v4 signatures don't work, eg pre Jewel/v10 CEPH.

#### --s3-endpoint

Endpoint for S3 API.
Leave blank if using AWS to use the default endpoint for the region.

- Config:      endpoint
- Env Var:     RCLONE_S3_ENDPOINT
- Type:        string
- Default:     ""

#### --s3-endpoint

Endpoint for IBM COS S3 API.
Specify if using an IBM COS On Premise.

- Config:      endpoint
- Env Var:     RCLONE_S3_ENDPOINT
- Type:        string
- Default:     ""
- Examples:
    - "s3-api.us-geo.objectstorage.softlayer.net"
        - US Cross Region Endpoint
    - "s3-api.dal.us-geo.objectstorage.softlayer.net"
        - US Cross Region Dallas Endpoint
    - "s3-api.wdc-us-geo.objectstorage.softlayer.net"
        - US Cross Region Washington DC Endpoint
    - "s3-api.sjc-us-geo.objectstorage.softlayer.net"
        - US Cross Region San Jose Endpoint
    - "s3-api.us-geo.objectstorage.service.networklayer.com"
        - US Cross Region Private Endpoint
    - "s3-api.dal-us-geo.objectstorage.service.networklayer.com"
        - US Cross Region Dallas Private Endpoint
    - "s3-api.wdc-us-geo.objectstorage.service.networklayer.com"
        - US Cross Region Washington DC Private Endpoint
    - "s3-api.sjc-us-geo.objectstorage.service.networklayer.com"
        - US Cross Region San Jose Private Endpoint
    - "s3.us-east.objectstorage.softlayer.net"
        - US Region East Endpoint
    - "s3.us-east.objectstorage.service.networklayer.com"
        - US Region East Private Endpoint
    - "s3.us-south.objectstorage.softlayer.net"
        - US Region South Endpoint
    - "s3.us-south.objectstorage.service.networklayer.com"
        - US Region South Private Endpoint
    - "s3.eu-geo.objectstorage.softlayer.net"
        - EU Cross Region Endpoint
    - "s3.fra-eu-geo.objectstorage.softlayer.net"
        - EU Cross Region Frankfurt Endpoint
    - "s3.mil-eu-geo.objectstorage.softlayer.net"
        - EU Cross Region Milan Endpoint
    - "s3.ams-eu-geo.objectstorage.softlayer.net"
        - EU Cross Region Amsterdam Endpoint
    - "s3.eu-geo.objectstorage.service.networklayer.com"
        - EU Cross Region Private Endpoint
    - "s3.fra-eu-geo.objectstorage.service.networklayer.com"
        - EU Cross Region Frankfurt Private Endpoint
    - "s3.mil-eu-geo.objectstorage.service.networklayer.com"
        - EU Cross Region Milan Private Endpoint
    - "s3.ams-eu-geo.objectstorage.service.networklayer.com"
        - EU Cross Region Amsterdam Private Endpoint
    - "s3.eu-gb.objectstorage.softlayer.net"
        - Great Britain Endpoint
    - "s3.eu-gb.objectstorage.service.networklayer.com"
        - Great Britain Private Endpoint
    - "s3.ap-geo.objectstorage.softlayer.net"
        - APAC Cross Regional Endpoint
    - "s3.tok-ap-geo.objectstorage.softlayer.net"
        - APAC Cross Regional Tokyo Endpoint
    - "s3.hkg-ap-geo.objectstorage.softlayer.net"
        - APAC Cross Regional HongKong Endpoint
    - "s3.seo-ap-geo.objectstorage.softlayer.net"
        - APAC Cross Regional Seoul Endpoint
    - "s3.ap-geo.objectstorage.service.networklayer.com"
        - APAC Cross Regional Private Endpoint
    - "s3.tok-ap-geo.objectstorage.service.networklayer.com"
        - APAC Cross Regional Tokyo Private Endpoint
    - "s3.hkg-ap-geo.objectstorage.service.networklayer.com"
        - APAC Cross Regional HongKong Private Endpoint
    - "s3.seo-ap-geo.objectstorage.service.networklayer.com"
        - APAC Cross Regional Seoul Private Endpoint
    - "s3.mel01.objectstorage.softlayer.net"
        - Melbourne Single Site Endpoint
    - "s3.mel01.objectstorage.service.networklayer.com"
        - Melbourne Single Site Private Endpoint
    - "s3.tor01.objectstorage.softlayer.net"
        - Toronto Single Site Endpoint
    - "s3.tor01.objectstorage.service.networklayer.com"
        - Toronto Single Site Private Endpoint

#### --s3-endpoint

Endpoint for OSS API.

- Config:      endpoint
- Env Var:     RCLONE_S3_ENDPOINT
- Type:        string
- Default:     ""
- Examples:
    - "oss-cn-hangzhou.aliyuncs.com"
        - East China 1 (Hangzhou)
    - "oss-cn-shanghai.aliyuncs.com"
        - East China 2 (Shanghai)
    - "oss-cn-qingdao.aliyuncs.com"
        - North China 1 (Qingdao)
    - "oss-cn-beijing.aliyuncs.com"
        - North China 2 (Beijing)
    - "oss-cn-zhangjiakou.aliyuncs.com"
        - North China 3 (Zhangjiakou)
    - "oss-cn-huhehaote.aliyuncs.com"
        - North China 5 (Huhehaote)
    - "oss-cn-shenzhen.aliyuncs.com"
        - South China 1 (Shenzhen)
    - "oss-cn-hongkong.aliyuncs.com"
        - Hong Kong (Hong Kong)
    - "oss-us-west-1.aliyuncs.com"
        - US West 1 (Silicon Valley)
    - "oss-us-east-1.aliyuncs.com"
        - US East 1 (Virginia)
    - "oss-ap-southeast-1.aliyuncs.com"
        - Southeast Asia Southeast 1 (Singapore)
    - "oss-ap-southeast-2.aliyuncs.com"
        - Asia Pacific Southeast 2 (Sydney)
    - "oss-ap-southeast-3.aliyuncs.com"
        - Southeast Asia Southeast 3 (Kuala Lumpur)
    - "oss-ap-southeast-5.aliyuncs.com"
        - Asia Pacific Southeast 5 (Jakarta)
    - "oss-ap-northeast-1.aliyuncs.com"
        - Asia Pacific Northeast 1 (Japan)
    - "oss-ap-south-1.aliyuncs.com"
        - Asia Pacific South 1 (Mumbai)
    - "oss-eu-central-1.aliyuncs.com"
        - Central Europe 1 (Frankfurt)
    - "oss-eu-west-1.aliyuncs.com"
        - West Europe (London)
    - "oss-me-east-1.aliyuncs.com"
        - Middle East 1 (Dubai)

#### --s3-endpoint

Endpoint for StackPath Object Storage.

- Config:      endpoint
- Env Var:     RCLONE_S3_ENDPOINT
- Type:        string
- Default:     ""
- Examples:
    - "s3.us-east-2.stackpathstorage.com"
        - US East Endpoint
    - "s3.us-west-1.stackpathstorage.com"
        - US West Endpoint
    - "s3.eu-central-1.stackpathstorage.com"
        - EU Endpoint

#### --s3-endpoint

Endpoint for S3 API.
Required when using an S3 clone.

- Config:      endpoint
- Env Var:     RCLONE_S3_ENDPOINT
- Type:        string
- Default:     ""
- Examples:
    - "objects-us-east-1.dream.io"
        - Dream Objects endpoint
    - "nyc3.digitaloceanspaces.com"
        - Digital Ocean Spaces New York 3
    - "ams3.digitaloceanspaces.com"
        - Digital Ocean Spaces Amsterdam 3
    - "sgp1.digitaloceanspaces.com"
        - Digital Ocean Spaces Singapore 1
    - "s3.wasabisys.com"
        - Wasabi US East endpoint
    - "s3.us-west-1.wasabisys.com"
        - Wasabi US West endpoint
    - "s3.eu-central-1.wasabisys.com"
        - Wasabi EU Central endpoint

   769  #### --s3-location-constraint
   770  
   771  Location constraint - must be set to match the Region.
   772  Used when creating buckets only.
   773  
   774  - Config:      location_constraint
   775  - Env Var:     RCLONE_S3_LOCATION_CONSTRAINT
   776  - Type:        string
   777  - Default:     ""
   778  - Examples:
   779      - ""
   780          - Empty for US Region, Northern Virginia or Pacific Northwest.
   781      - "us-east-2"
   782          - US East (Ohio) Region.
   783      - "us-west-2"
   784          - US West (Oregon) Region.
   785      - "us-west-1"
   786          - US West (Northern California) Region.
   787      - "ca-central-1"
   788          - Canada (Central) Region.
   789      - "eu-west-1"
   790          - EU (Ireland) Region.
   791      - "eu-west-2"
   792          - EU (London) Region.
   793      - "eu-north-1"
   794          - EU (Stockholm) Region.
   795      - "EU"
   796          - EU Region.
   797      - "ap-southeast-1"
   798          - Asia Pacific (Singapore) Region.
   799      - "ap-southeast-2"
   800          - Asia Pacific (Sydney) Region.
   801      - "ap-northeast-1"
   802          - Asia Pacific (Tokyo) Region.
   803      - "ap-northeast-2"
   804          - Asia Pacific (Seoul)
   805      - "ap-south-1"
   806          - Asia Pacific (Mumbai)
   807      - "ap-east-1"
   808          - Asia Pacific (Hong Kong)
   809      - "sa-east-1"
   810          - South America (Sao Paulo) Region.
   811  
   812  #### --s3-location-constraint
   813  
   814  Location constraint - must match endpoint when using IBM Cloud Public.
   815  For on-prem COS, do not make a selection from this list, hit enter
   816  
   817  - Config:      location_constraint
   818  - Env Var:     RCLONE_S3_LOCATION_CONSTRAINT
   819  - Type:        string
   820  - Default:     ""
   821  - Examples:
   822      - "us-standard"
   823          - US Cross Region Standard
   824      - "us-vault"
   825          - US Cross Region Vault
   826      - "us-cold"
   827          - US Cross Region Cold
   828      - "us-flex"
   829          - US Cross Region Flex
   830      - "us-east-standard"
   831          - US East Region Standard
   832      - "us-east-vault"
   833          - US East Region Vault
   834      - "us-east-cold"
   835          - US East Region Cold
   836      - "us-east-flex"
   837          - US East Region Flex
   838      - "us-south-standard"
   839          - US South Region Standard
   840      - "us-south-vault"
   841          - US South Region Vault
   842      - "us-south-cold"
   843          - US South Region Cold
   844      - "us-south-flex"
   845          - US South Region Flex
   846      - "eu-standard"
   847          - EU Cross Region Standard
   848      - "eu-vault"
   849          - EU Cross Region Vault
   850      - "eu-cold"
   851          - EU Cross Region Cold
   852      - "eu-flex"
   853          - EU Cross Region Flex
   854      - "eu-gb-standard"
   855          - Great Britain Standard
   856      - "eu-gb-vault"
   857          - Great Britain Vault
   858      - "eu-gb-cold"
   859          - Great Britain Cold
   860      - "eu-gb-flex"
   861          - Great Britain Flex
   862      - "ap-standard"
   863          - APAC Standard
   864      - "ap-vault"
   865          - APAC Vault
   866      - "ap-cold"
   867          - APAC Cold
   868      - "ap-flex"
   869          - APAC Flex
   870      - "mel01-standard"
   871          - Melbourne Standard
   872      - "mel01-vault"
   873          - Melbourne Vault
   874      - "mel01-cold"
   875          - Melbourne Cold
   876      - "mel01-flex"
   877          - Melbourne Flex
   878      - "tor01-standard"
   879          - Toronto Standard
   880      - "tor01-vault"
   881          - Toronto Vault
   882      - "tor01-cold"
   883          - Toronto Cold
   884      - "tor01-flex"
   885          - Toronto Flex
   886  
   887  #### --s3-location-constraint
   888  
   889  Location constraint - must be set to match the Region.
   890  Leave blank if not sure. Used when creating buckets only.
   891  
   892  - Config:      location_constraint
   893  - Env Var:     RCLONE_S3_LOCATION_CONSTRAINT
   894  - Type:        string
   895  - Default:     ""
   896  
   897  #### --s3-acl
   898  
   899  Canned ACL used when creating buckets and storing or copying objects.
   900  
   901  This ACL is used for creating objects and if bucket_acl isn't set, for creating buckets too.
   902  
   903  For more info visit https://docs.aws.amazon.com/AmazonS3/latest/dev/acl-overview.html#canned-acl
   904  
   905  Note that this ACL is applied when server side copying objects as S3
   906  doesn't copy the ACL from the source but rather writes a fresh one.
   907  
   908  - Config:      acl
   909  - Env Var:     RCLONE_S3_ACL
   910  - Type:        string
   911  - Default:     ""
   912  - Examples:
   913      - "private"
   914          - Owner gets FULL_CONTROL. No one else has access rights (default).
   915      - "public-read"
   916          - Owner gets FULL_CONTROL. The AllUsers group gets READ access.
   917      - "public-read-write"
   918          - Owner gets FULL_CONTROL. The AllUsers group gets READ and WRITE access.
   919          - Granting this on a bucket is generally not recommended.
   920      - "authenticated-read"
   921          - Owner gets FULL_CONTROL. The AuthenticatedUsers group gets READ access.
   922      - "bucket-owner-read"
   923          - Object owner gets FULL_CONTROL. Bucket owner gets READ access.
   924          - If you specify this canned ACL when creating a bucket, Amazon S3 ignores it.
   925      - "bucket-owner-full-control"
   926          - Both the object owner and the bucket owner get FULL_CONTROL over the object.
   927          - If you specify this canned ACL when creating a bucket, Amazon S3 ignores it.
   928      - "private"
   929          - Owner gets FULL_CONTROL. No one else has access rights (default). This acl is available on IBM Cloud (Infra), IBM Cloud (Storage), On-Premise COS
   930      - "public-read"
   931          - Owner gets FULL_CONTROL. The AllUsers group gets READ access. This acl is available on IBM Cloud (Infra), IBM Cloud (Storage), On-Premise IBM COS
   932      - "public-read-write"
   933          - Owner gets FULL_CONTROL. The AllUsers group gets READ and WRITE access. This acl is available on IBM Cloud (Infra), On-Premise IBM COS
   934      - "authenticated-read"
   935          - Owner gets FULL_CONTROL. The AuthenticatedUsers group gets READ access. Not supported on Buckets. This acl is available on IBM Cloud (Infra) and On-Premise IBM COS
   936  
   937  #### --s3-server-side-encryption
   938  
   939  The server-side encryption algorithm used when storing this object in S3.
   940  
   941  - Config:      server_side_encryption
   942  - Env Var:     RCLONE_S3_SERVER_SIDE_ENCRYPTION
   943  - Type:        string
   944  - Default:     ""
   945  - Examples:
   946      - ""
   947          - None
   948      - "AES256"
   949          - AES256
   950      - "aws:kms"
   951          - aws:kms
   952  
   953  #### --s3-sse-kms-key-id
   954  
If using KMS ID you must provide the ARN of the Key.
   956  
   957  - Config:      sse_kms_key_id
   958  - Env Var:     RCLONE_S3_SSE_KMS_KEY_ID
   959  - Type:        string
   960  - Default:     ""
   961  - Examples:
   962      - ""
   963          - None
   964      - "arn:aws:kms:us-east-1:*"
   965          - arn:aws:kms:*
   966  
   967  #### --s3-storage-class
   968  
   969  The storage class to use when storing new objects in S3.
   970  
   971  - Config:      storage_class
   972  - Env Var:     RCLONE_S3_STORAGE_CLASS
   973  - Type:        string
   974  - Default:     ""
   975  - Examples:
   976      - ""
   977          - Default
   978      - "STANDARD"
   979          - Standard storage class
   980      - "REDUCED_REDUNDANCY"
   981          - Reduced redundancy storage class
   982      - "STANDARD_IA"
   983          - Standard Infrequent Access storage class
   984      - "ONEZONE_IA"
   985          - One Zone Infrequent Access storage class
   986      - "GLACIER"
   987          - Glacier storage class
   988      - "DEEP_ARCHIVE"
   989          - Glacier Deep Archive storage class
   990      - "INTELLIGENT_TIERING"
   991          - Intelligent-Tiering storage class
   992  
   993  #### --s3-storage-class
   994  
   995  The storage class to use when storing new objects in OSS.
   996  
   997  - Config:      storage_class
   998  - Env Var:     RCLONE_S3_STORAGE_CLASS
   999  - Type:        string
  1000  - Default:     ""
  1001  - Examples:
  1002      - ""
  1003          - Default
  1004      - "STANDARD"
  1005          - Standard storage class
  1006      - "GLACIER"
  1007          - Archive storage mode.
  1008      - "STANDARD_IA"
  1009          - Infrequent access storage mode.
  1010  
  1011  ### Advanced Options
  1012  
  1013  Here are the advanced options specific to s3 (Amazon S3 Compliant Storage Provider (AWS, Alibaba, Ceph, Digital Ocean, Dreamhost, IBM COS, Minio, etc)).
  1014  
  1015  #### --s3-bucket-acl
  1016  
  1017  Canned ACL used when creating buckets.
  1018  
  1019  For more info visit https://docs.aws.amazon.com/AmazonS3/latest/dev/acl-overview.html#canned-acl
  1020  
Note that this ACL is applied only when creating buckets.  If it
isn't set then "acl" is used instead.
  1023  
  1024  - Config:      bucket_acl
  1025  - Env Var:     RCLONE_S3_BUCKET_ACL
  1026  - Type:        string
  1027  - Default:     ""
  1028  - Examples:
  1029      - "private"
  1030          - Owner gets FULL_CONTROL. No one else has access rights (default).
  1031      - "public-read"
  1032          - Owner gets FULL_CONTROL. The AllUsers group gets READ access.
  1033      - "public-read-write"
  1034          - Owner gets FULL_CONTROL. The AllUsers group gets READ and WRITE access.
  1035          - Granting this on a bucket is generally not recommended.
  1036      - "authenticated-read"
  1037          - Owner gets FULL_CONTROL. The AuthenticatedUsers group gets READ access.
  1038  
  1039  #### --s3-sse-customer-algorithm
  1040  
  1041  If using SSE-C, the server-side encryption algorithm used when storing this object in S3.
  1042  
  1043  - Config:      sse_customer_algorithm
  1044  - Env Var:     RCLONE_S3_SSE_CUSTOMER_ALGORITHM
  1045  - Type:        string
  1046  - Default:     ""
  1047  - Examples:
  1048      - ""
  1049          - None
  1050      - "AES256"
  1051          - AES256
  1052  
  1053  #### --s3-sse-customer-key
  1054  
  1055  If using SSE-C you must provide the secret encryption key used to encrypt/decrypt your data.
  1056  
  1057  - Config:      sse_customer_key
  1058  - Env Var:     RCLONE_S3_SSE_CUSTOMER_KEY
  1059  - Type:        string
  1060  - Default:     ""
  1061  - Examples:
  1062      - ""
  1063          - None
  1064  
  1065  #### --s3-sse-customer-key-md5
  1066  
  1067  If using SSE-C you must provide the secret encryption key MD5 checksum.
  1068  
  1069  - Config:      sse_customer_key_md5
  1070  - Env Var:     RCLONE_S3_SSE_CUSTOMER_KEY_MD5
  1071  - Type:        string
  1072  - Default:     ""
  1073  - Examples:
  1074      - ""
  1075          - None
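
For SSE-C as specified by S3, this value is the base64-encoded MD5 digest of the encryption key itself (not a hex digest). As a sketch, assuming `openssl` and `base64` are available, it can be computed like this (the key string is a placeholder):

```shell
# Compute the base64-encoded MD5 digest of an SSE-C key.
# "your-32-byte-secret-encryption-key" is a placeholder - use your real key.
printf '%s' "your-32-byte-secret-encryption-key" \
    | openssl dgst -md5 -binary | base64
```

The result can then be supplied via `--s3-sse-customer-key-md5` or `RCLONE_S3_SSE_CUSTOMER_KEY_MD5`.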
  1076  
  1077  #### --s3-upload-cutoff
  1078  
  1079  Cutoff for switching to chunked upload
  1080  
  1081  Any files larger than this will be uploaded in chunks of chunk_size.
  1082  The minimum is 0 and the maximum is 5GB.
  1083  
  1084  - Config:      upload_cutoff
  1085  - Env Var:     RCLONE_S3_UPLOAD_CUTOFF
  1086  - Type:        SizeSuffix
  1087  - Default:     200M
  1088  
  1089  #### --s3-chunk-size
  1090  
  1091  Chunk size to use for uploading.
  1092  
  1093  When uploading files larger than upload_cutoff or files with unknown
  1094  size (eg from "rclone rcat" or uploaded with "rclone mount" or google
  1095  photos or google docs) they will be uploaded as multipart uploads
  1096  using this chunk size.
  1097  
  1098  Note that "--s3-upload-concurrency" chunks of this size are buffered
  1099  in memory per transfer.
  1100  
  1101  If you are transferring large files over high speed links and you have
  1102  enough memory, then increasing this will speed up the transfers.
  1103  
  1104  Rclone will automatically increase the chunk size when uploading a
  1105  large file of known size to stay below the 10,000 chunks limit.
  1106  
  1107  Files of unknown size are uploaded with the configured
  1108  chunk_size. Since the default chunk size is 5MB and there can be at
  1109  most 10,000 chunks, this means that by default the maximum size of
  1110  file you can stream upload is 48GB.  If you wish to stream upload
  1111  larger files then you will need to increase chunk_size.
  1112  
  1113  - Config:      chunk_size
  1114  - Env Var:     RCLONE_S3_CHUNK_SIZE
  1115  - Type:        SizeSuffix
  1116  - Default:     5M
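
As a sketch of the arithmetic above (the remote name and paths in the comments are placeholders):

```shell
# Streamed uploads are capped at chunk_size x 10,000 parts:
#   5M (the default) caps streaming at 5 * 10000 / 1024 ~ 48GB
#   64M raises the cap to 64 * 10000 / 1024 = 625GB
# so to stream a file bigger than 48GB you might use something like:
#   tar czf - /path/to/dir | rclone rcat remote:bucket/dir.tar.gz --s3-chunk-size 64M
echo $(( 64 * 10000 / 1024 ))   # prints 625
```

Remember that `--s3-upload-concurrency` chunks of the chosen size are buffered in memory per transfer, so larger chunks cost more memory.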
  1117  
  1118  #### --s3-copy-cutoff
  1119  
  1120  Cutoff for switching to multipart copy
  1121  
  1122  Any files larger than this that need to be server side copied will be
  1123  copied in chunks of this size.
  1124  
  1125  The minimum is 0 and the maximum is 5GB.
  1126  
  1127  - Config:      copy_cutoff
  1128  - Env Var:     RCLONE_S3_COPY_CUTOFF
  1129  - Type:        SizeSuffix
  1130  - Default:     5G
  1131  
  1132  #### --s3-disable-checksum
  1133  
  1134  Don't store MD5 checksum with object metadata
  1135  
  1136  Normally rclone will calculate the MD5 checksum of the input before
  1137  uploading it so it can add it to metadata on the object. This is great
  1138  for data integrity checking but can cause long delays for large files
  1139  to start uploading.
  1140  
  1141  - Config:      disable_checksum
  1142  - Env Var:     RCLONE_S3_DISABLE_CHECKSUM
  1143  - Type:        bool
  1144  - Default:     false
  1145  
  1146  #### --s3-session-token
  1147  
  1148  An AWS session token
  1149  
  1150  - Config:      session_token
  1151  - Env Var:     RCLONE_S3_SESSION_TOKEN
  1152  - Type:        string
  1153  - Default:     ""
  1154  
  1155  #### --s3-upload-concurrency
  1156  
  1157  Concurrency for multipart uploads.
  1158  
  1159  This is the number of chunks of the same file that are uploaded
  1160  concurrently.
  1161  
If you are uploading small numbers of large files over high speed links
  1163  and these uploads do not fully utilize your bandwidth, then increasing
  1164  this may help to speed up the transfers.
  1165  
  1166  - Config:      upload_concurrency
  1167  - Env Var:     RCLONE_S3_UPLOAD_CONCURRENCY
  1168  - Type:        int
  1169  - Default:     4
  1170  
  1171  #### --s3-force-path-style
  1172  
If true use path style access, if false use virtual hosted style.
  1174  
  1175  If this is true (the default) then rclone will use path style access,
if false then rclone will use virtual hosted style. See [the AWS S3
  1177  docs](https://docs.aws.amazon.com/AmazonS3/latest/dev/UsingBucket.html#access-bucket-intro)
  1178  for more info.
  1179  
  1180  Some providers (eg AWS, Aliyun OSS or Netease COS) require this set to
  1181  false - rclone will do this automatically based on the provider
  1182  setting.
  1183  
  1184  - Config:      force_path_style
  1185  - Env Var:     RCLONE_S3_FORCE_PATH_STYLE
  1186  - Type:        bool
  1187  - Default:     true
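
For reference, the two addressing styles look like this for a hypothetical bucket `mybucket` in `us-east-1` on AWS:

```
Path style:            https://s3.us-east-1.amazonaws.com/mybucket/path/to/object
Virtual hosted style:  https://mybucket.s3.us-east-1.amazonaws.com/path/to/object
```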
  1188  
  1189  #### --s3-v2-auth
  1190  
  1191  If true use v2 authentication.
  1192  
  1193  If this is false (the default) then rclone will use v4 authentication.
  1194  If it is set then rclone will use v2 authentication.
  1195  
  1196  Use this only if v4 signatures don't work, eg pre Jewel/v10 CEPH.
  1197  
  1198  - Config:      v2_auth
  1199  - Env Var:     RCLONE_S3_V2_AUTH
  1200  - Type:        bool
  1201  - Default:     false
  1202  
  1203  #### --s3-use-accelerate-endpoint
  1204  
  1205  If true use the AWS S3 accelerated endpoint.
  1206  
  1207  See: [AWS S3 Transfer acceleration](https://docs.aws.amazon.com/AmazonS3/latest/dev/transfer-acceleration-examples.html)
  1208  
  1209  - Config:      use_accelerate_endpoint
  1210  - Env Var:     RCLONE_S3_USE_ACCELERATE_ENDPOINT
  1211  - Type:        bool
  1212  - Default:     false
  1213  
  1214  #### --s3-leave-parts-on-error
  1215  
  1216  If true avoid calling abort upload on a failure, leaving all successfully uploaded parts on S3 for manual recovery.
  1217  
  1218  It should be set to true for resuming uploads across different sessions.
  1219  
  1220  WARNING: Storing parts of an incomplete multipart upload counts towards space usage on S3 and will add additional costs if not cleaned up.
  1221  
  1222  
  1223  - Config:      leave_parts_on_error
  1224  - Env Var:     RCLONE_S3_LEAVE_PARTS_ON_ERROR
  1225  - Type:        bool
  1226  - Default:     false
  1227  
  1228  #### --s3-list-chunk
  1229  
  1230  Size of listing chunk (response list for each ListObject S3 request).
  1231  
  1232  This option is also known as "MaxKeys", "max-items", or "page-size" from the AWS S3 specification.
Most services truncate the response list to 1000 objects even if more than that is requested.
  1234  In AWS S3 this is a global maximum and cannot be changed, see [AWS S3](https://docs.aws.amazon.com/cli/latest/reference/s3/ls.html).
  1235  In Ceph, this can be increased with the "rgw list buckets max chunk" option.
  1236  
  1237  
  1238  - Config:      list_chunk
  1239  - Env Var:     RCLONE_S3_LIST_CHUNK
  1240  - Type:        int
  1241  - Default:     1000
  1242  
  1243  #### --s3-encoding
  1244  
  1245  This sets the encoding for the backend.
  1246  
  1247  See: the [encoding section in the overview](/overview/#encoding) for more info.
  1248  
  1249  - Config:      encoding
  1250  - Env Var:     RCLONE_S3_ENCODING
  1251  - Type:        MultiEncoder
  1252  - Default:     Slash,InvalidUtf8,Dot
  1253  
  1254  #### --s3-memory-pool-flush-time
  1255  
  1256  How often internal memory buffer pools will be flushed.
Uploads which require additional buffers (e.g. multipart) will use the memory pool for allocations.
  1258  This option controls how often unused buffers will be removed from the pool.
  1259  
  1260  - Config:      memory_pool_flush_time
  1261  - Env Var:     RCLONE_S3_MEMORY_POOL_FLUSH_TIME
  1262  - Type:        Duration
  1263  - Default:     1m0s
  1264  
  1265  #### --s3-memory-pool-use-mmap
  1266  
  1267  Whether to use mmap buffers in internal memory pool.
  1268  
  1269  - Config:      memory_pool_use_mmap
  1270  - Env Var:     RCLONE_S3_MEMORY_POOL_USE_MMAP
  1271  - Type:        bool
  1272  - Default:     false
  1273  
  1274  {{< rem autogenerated options stop >}}
  1275  
  1276  ### Anonymous access to public buckets ###
  1277  
  1278  If you want to use rclone to access a public bucket, configure with a
  1279  blank `access_key_id` and `secret_access_key`.  Your config should end
  1280  up looking like this:
  1281  
  1282  ```
  1283  [anons3]
  1284  type = s3
  1285  provider = AWS
  1286  env_auth = false
  1287  access_key_id = 
  1288  secret_access_key = 
  1289  region = us-east-1
  1290  endpoint = 
  1291  location_constraint = 
  1292  acl = private
  1293  server_side_encryption = 
  1294  storage_class = 
  1295  ```
  1296  
  1297  Then use it as normal with the name of the public bucket, eg
  1298  
  1299      rclone lsd anons3:1000genomes
  1300  
  1301  You will be able to list and copy data but not upload it.
  1302  
  1303  ### Ceph ###
  1304  
  1305  [Ceph](https://ceph.com/) is an open source unified, distributed
  1306  storage system designed for excellent performance, reliability and
  1307  scalability.  It has an S3 compatible object storage interface.
  1308  
  1309  To use rclone with Ceph, configure as above but leave the region blank
  1310  and set the endpoint.  You should end up with something like this in
  1311  your config:
  1312  
  1313  
  1314  ```
  1315  [ceph]
  1316  type = s3
  1317  provider = Ceph
  1318  env_auth = false
  1319  access_key_id = XXX
  1320  secret_access_key = YYY
  1321  region =
  1322  endpoint = https://ceph.endpoint.example.com
  1323  location_constraint =
  1324  acl =
  1325  server_side_encryption =
  1326  storage_class =
  1327  ```
  1328  
  1329  If you are using an older version of CEPH, eg 10.2.x Jewel, then you
  1330  may need to supply the parameter `--s3-upload-cutoff 0` or put this in
  1331  the config file as `upload_cutoff 0` to work around a bug which causes
  1332  uploading of small files to fail.
  1333  
  1334  Note also that Ceph sometimes puts `/` in the passwords it gives
  1335  users.  If you read the secret access key using the command line tools
  1336  you will get a JSON blob with the `/` escaped as `\/`.  Make sure you
  1337  only write `/` in the secret access key.
  1338  
  1339  Eg the dump from Ceph looks something like this (irrelevant keys
  1340  removed).
  1341  
  1342  ```
  1343  {
  1344      "user_id": "xxx",
  1345      "display_name": "xxxx",
  1346      "keys": [
  1347          {
  1348              "user": "xxx",
  1349              "access_key": "xxxxxx",
  1350              "secret_key": "xxxxxx\/xxxx"
  1351          }
  1352      ],
  1353  }
  1354  ```
  1355  
Because this is a JSON dump, it encodes the `/` as `\/`, so if you
use the secret key as `xxxxxx/xxxx` it will work fine.
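
Assuming `jq` is installed, one way to avoid the escaping issue entirely is to let `jq` decode the JSON string for you, since `jq -r` prints strings with escapes resolved:

```shell
# The literal here stands in for the `radosgw-admin user info` dump;
# jq -r resolves \/ back to / on output.
printf '%s' '{"keys":[{"user":"xxx","access_key":"xxxxxx","secret_key":"xxxxxx\/xxxx"}]}' \
    | jq -r '.keys[0].secret_key'
# prints xxxxxx/xxxx
```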
  1358  
  1359  ### Dreamhost ###
  1360  
  1361  Dreamhost [DreamObjects](https://www.dreamhost.com/cloud/storage/) is
  1362  an object storage system based on CEPH.
  1363  
  1364  To use rclone with Dreamhost, configure as above but leave the region blank
  1365  and set the endpoint.  You should end up with something like this in
  1366  your config:
  1367  
  1368  ```
  1369  [dreamobjects]
  1370  type = s3
  1371  provider = DreamHost
  1372  env_auth = false
  1373  access_key_id = your_access_key
  1374  secret_access_key = your_secret_key
  1375  region =
  1376  endpoint = objects-us-west-1.dream.io
  1377  location_constraint =
  1378  acl = private
  1379  server_side_encryption =
  1380  storage_class =
  1381  ```
  1382  
  1383  ### DigitalOcean Spaces ###
  1384  
  1385  [Spaces](https://www.digitalocean.com/products/object-storage/) is an [S3-interoperable](https://developers.digitalocean.com/documentation/spaces/) object storage service from cloud provider DigitalOcean.
  1386  
  1387  To connect to DigitalOcean Spaces you will need an access key and secret key. These can be retrieved on the "[Applications & API](https://cloud.digitalocean.com/settings/api/tokens)" page of the DigitalOcean control panel. They will be needed when prompted by `rclone config` for your `access_key_id` and `secret_access_key`.
  1388  
  1389  When prompted for a `region` or `location_constraint`, press enter to use the default value. The region must be included in the `endpoint` setting (e.g. `nyc3.digitaloceanspaces.com`). The default values can be used for other settings.
  1390  
When going through the whole process of creating a new remote with `rclone config`, each prompt should be answered as shown below:
  1392  
  1393  ```
  1394  Storage> s3
  1395  env_auth> 1
  1396  access_key_id> YOUR_ACCESS_KEY
  1397  secret_access_key> YOUR_SECRET_KEY
  1398  region>
  1399  endpoint> nyc3.digitaloceanspaces.com
  1400  location_constraint>
  1401  acl>
  1402  storage_class>
  1403  ```
  1404  
  1405  The resulting configuration file should look like:
  1406  
  1407  ```
  1408  [spaces]
  1409  type = s3
  1410  provider = DigitalOcean
  1411  env_auth = false
  1412  access_key_id = YOUR_ACCESS_KEY
  1413  secret_access_key = YOUR_SECRET_KEY
  1414  region =
  1415  endpoint = nyc3.digitaloceanspaces.com
  1416  location_constraint =
  1417  acl =
  1418  server_side_encryption =
  1419  storage_class =
  1420  ```
  1421  
  1422  Once configured, you can create a new Space and begin copying files. For example:
  1423  
  1424  ```
  1425  rclone mkdir spaces:my-new-space
  1426  rclone copy /path/to/files spaces:my-new-space
  1427  ```
  1428  
  1429  ### IBM COS (S3) ###
  1430  
Information stored with IBM Cloud Object Storage is encrypted and dispersed across multiple geographic locations, and accessed through an implementation of the S3 API. This service makes use of the distributed storage technologies provided by IBM’s Cloud Object Storage System (formerly Cleversafe). For more information visit http://www.ibm.com/cloud/object-storage
  1432  
  1433  To configure access to IBM COS S3, follow the steps below:
  1434  
1. Run `rclone config` and select `n` for a new remote.
  1436  ```
  1437  	2018/02/14 14:13:11 NOTICE: Config file "C:\\Users\\a\\.config\\rclone\\rclone.conf" not found - using defaults
  1438  	No remotes found - make a new one
  1439  	n) New remote
  1440  	s) Set configuration password
  1441  	q) Quit config
  1442  	n/s/q> n
  1443  ```
  1444  
  1445  2. Enter the name for the configuration
  1446  ```
  1447  	name> <YOUR NAME>
  1448  ```
  1449  
  1450  3. Select "s3" storage.
  1451  ```
  1452  Choose a number from below, or type in your own value
  1453   	1 / Alias for an existing remote
  1454     	\ "alias"
  1455   	2 / Amazon Drive
  1456     	\ "amazon cloud drive"
 	3 / Amazon S3 Compliant Storage Providers (Dreamhost, Ceph, Minio, IBM COS)
  1458     	\ "s3"
  1459   	4 / Backblaze B2
  1460     	\ "b2"
  1461  [snip]
  1462  	23 / http Connection
  1463      \ "http"
  1464  Storage> 3
  1465  ```
  1466  
  1467  4. Select IBM COS as the S3 Storage Provider.
  1468  ```
  1469  Choose the S3 provider.
  1470  Choose a number from below, or type in your own value
  1471  	 1 / Choose this option to configure Storage to AWS S3
  1472  	   \ "AWS"
  1473   	 2 / Choose this option to configure Storage to Ceph Systems
  1474    	 \ "Ceph"
  1475  	 3 /  Choose this option to configure Storage to Dreamhost
  1476       \ "Dreamhost"
   4 / Choose this option to configure Storage to IBM COS S3
  1478     	 \ "IBMCOS"
 	 5 / Choose this option to configure Storage to Minio
  1480       \ "Minio"
  1481  	 Provider>4
  1482  ```
  1483  
  1484  5. Enter the Access Key and Secret.
  1485  ```
  1486  	AWS Access Key ID - leave blank for anonymous access or runtime credentials.
  1487  	access_key_id> <>
  1488  	AWS Secret Access Key (password) - leave blank for anonymous access or runtime credentials.
  1489  	secret_access_key> <>
  1490  ```
  1491  
6. Specify the endpoint for IBM COS. For Public IBM COS, choose from the options below. For On Premise IBM COS, enter an endpoint address.
  1493  ```
  1494  	Endpoint for IBM COS S3 API.
  1495  	Specify if using an IBM COS On Premise.
  1496  	Choose a number from below, or type in your own value
  1497  	 1 / US Cross Region Endpoint
  1498     	   \ "s3-api.us-geo.objectstorage.softlayer.net"
  1499  	 2 / US Cross Region Dallas Endpoint
  1500     	   \ "s3-api.dal.us-geo.objectstorage.softlayer.net"
  1501   	 3 / US Cross Region Washington DC Endpoint
  1502     	   \ "s3-api.wdc-us-geo.objectstorage.softlayer.net"
  1503  	 4 / US Cross Region San Jose Endpoint
  1504  	   \ "s3-api.sjc-us-geo.objectstorage.softlayer.net"
  1505  	 5 / US Cross Region Private Endpoint
  1506  	   \ "s3-api.us-geo.objectstorage.service.networklayer.com"
  1507  	 6 / US Cross Region Dallas Private Endpoint
  1508  	   \ "s3-api.dal-us-geo.objectstorage.service.networklayer.com"
  1509  	 7 / US Cross Region Washington DC Private Endpoint
  1510  	   \ "s3-api.wdc-us-geo.objectstorage.service.networklayer.com"
  1511  	 8 / US Cross Region San Jose Private Endpoint
  1512  	   \ "s3-api.sjc-us-geo.objectstorage.service.networklayer.com"
  1513  	 9 / US Region East Endpoint
  1514  	   \ "s3.us-east.objectstorage.softlayer.net"
  1515  	10 / US Region East Private Endpoint
  1516  	   \ "s3.us-east.objectstorage.service.networklayer.com"
  1517  	11 / US Region South Endpoint
  1518  [snip]
  1519  	34 / Toronto Single Site Private Endpoint
  1520  	   \ "s3.tor01.objectstorage.service.networklayer.com"
  1521  	endpoint>1
  1522  ```
  1523  
  1524  
7. Specify an IBM COS Location Constraint. The location constraint must match the endpoint when using IBM Cloud Public. For on-prem COS, do not make a selection from this list - hit enter.
  1526  ```
  1527  	 1 / US Cross Region Standard
  1528  	   \ "us-standard"
  1529  	 2 / US Cross Region Vault
  1530  	   \ "us-vault"
  1531  	 3 / US Cross Region Cold
  1532  	   \ "us-cold"
  1533  	 4 / US Cross Region Flex
  1534  	   \ "us-flex"
  1535  	 5 / US East Region Standard
  1536  	   \ "us-east-standard"
  1537  	 6 / US East Region Vault
  1538  	   \ "us-east-vault"
  1539  	 7 / US East Region Cold
  1540  	   \ "us-east-cold"
  1541  	 8 / US East Region Flex
  1542  	   \ "us-east-flex"
  1543  	 9 / US South Region Standard
  1544  	   \ "us-south-standard"
  1545  	10 / US South Region Vault
  1546  	   \ "us-south-vault"
  1547  [snip]
  1548  	32 / Toronto Flex
  1549  	   \ "tor01-flex"
  1550  location_constraint>1
  1551  ```
  1552  
8. Specify a canned ACL. IBM Cloud (Storage) supports "public-read" and "private". IBM Cloud (Infra) supports all the canned ACLs. On-Premise COS supports all the canned ACLs.
  1554  ```
  1555  Canned ACL used when creating buckets and/or storing objects in S3.
  1556  For more info visit https://docs.aws.amazon.com/AmazonS3/latest/dev/acl-overview.html#canned-acl
  1557  Choose a number from below, or type in your own value
  1558        1 / Owner gets FULL_CONTROL. No one else has access rights (default). This acl is available on IBM Cloud (Infra), IBM Cloud (Storage), On-Premise COS
  1559        \ "private"
  1560        2  / Owner gets FULL_CONTROL. The AllUsers group gets READ access. This acl is available on IBM Cloud (Infra), IBM Cloud (Storage), On-Premise IBM COS
  1561        \ "public-read"
  1562        3 / Owner gets FULL_CONTROL. The AllUsers group gets READ and WRITE access. This acl is available on IBM Cloud (Infra), On-Premise IBM COS
  1563        \ "public-read-write"
  1564        4  / Owner gets FULL_CONTROL. The AuthenticatedUsers group gets READ access. Not supported on Buckets. This acl is available on IBM Cloud (Infra) and On-Premise IBM COS
  1565        \ "authenticated-read"
  1566  acl> 1
  1567  ```
  1568  
  1569  
9. Review the displayed configuration and accept to save the "remote", then quit. The config file should look like this:
  1571  ```
  1572  	[xxx]
  1573  	type = s3
	provider = IBMCOS
  1575  	access_key_id = xxx
  1576  	secret_access_key = yyy
  1577  	endpoint = s3-api.us-geo.objectstorage.softlayer.net
  1578  	location_constraint = us-standard
  1579  	acl = private
  1580  ```
  1581  
10. Execute rclone commands.
  1583  ```
  1584  	1)	Create a bucket.
  1585  		rclone mkdir IBM-COS-XREGION:newbucket
  1586  	2)	List available buckets.
  1587  		rclone lsd IBM-COS-XREGION:
  1588  		-1 2017-11-08 21:16:22        -1 test
  1589  		-1 2018-02-14 20:16:39        -1 newbucket
  1590  	3)	List contents of a bucket.
  1591  		rclone ls IBM-COS-XREGION:newbucket
  1592  		18685952 test.exe
  1593  	4)	Copy a file from local to remote.
  1594  		rclone copy /Users/file.txt IBM-COS-XREGION:newbucket
  1595  	5)	Copy a file from remote to local.
  1596  		rclone copy IBM-COS-XREGION:newbucket/file.txt .
  1597  	6)	Delete a file on remote.
  1598  		rclone delete IBM-COS-XREGION:newbucket/file.txt
  1599  ```
  1600  
  1601  ### Minio ###
  1602  
  1603  [Minio](https://minio.io/) is an object storage server built for cloud application developers and devops.
  1604  
  1605  It is very easy to install and provides an S3 compatible server which can be used by rclone.
  1606  
  1607  To use it, install Minio following the instructions [here](https://docs.minio.io/docs/minio-quickstart-guide).
  1608  
When it configures itself, Minio will print something like this:
  1610  
  1611  ```
  1612  Endpoint:  http://192.168.1.106:9000  http://172.23.0.1:9000
  1613  AccessKey: USWUXHGYZQYFYFFIT3RE
  1614  SecretKey: MOJRH0mkL1IPauahWITSVvyDrQbEEIwljvmxdq03
  1615  Region:    us-east-1
  1616  SQS ARNs:  arn:minio:sqs:us-east-1:1:redis arn:minio:sqs:us-east-1:2:redis
  1617  
  1618  Browser Access:
  1619     http://192.168.1.106:9000  http://172.23.0.1:9000
  1620  
  1621  Command-line Access: https://docs.minio.io/docs/minio-client-quickstart-guide
  1622     $ mc config host add myminio http://192.168.1.106:9000 USWUXHGYZQYFYFFIT3RE MOJRH0mkL1IPauahWITSVvyDrQbEEIwljvmxdq03
  1623  
  1624  Object API (Amazon S3 compatible):
  1625     Go:         https://docs.minio.io/docs/golang-client-quickstart-guide
  1626     Java:       https://docs.minio.io/docs/java-client-quickstart-guide
  1627     Python:     https://docs.minio.io/docs/python-client-quickstart-guide
  1628     JavaScript: https://docs.minio.io/docs/javascript-client-quickstart-guide
  1629     .NET:       https://docs.minio.io/docs/dotnet-client-quickstart-guide
  1630  
  1631  Drive Capacity: 26 GiB Free, 165 GiB Total
  1632  ```
  1633  
  1634  These details need to go into `rclone config` like this.  Note that it
  1635  is important to put the region in as stated above.
  1636  
  1637  ```
  1638  env_auth> 1
  1639  access_key_id> USWUXHGYZQYFYFFIT3RE
  1640  secret_access_key> MOJRH0mkL1IPauahWITSVvyDrQbEEIwljvmxdq03
  1641  region> us-east-1
  1642  endpoint> http://192.168.1.106:9000
  1643  location_constraint>
  1644  server_side_encryption>
  1645  ```
  1646  
  1647  Which makes the config file look like this
  1648  
  1649  ```
  1650  [minio]
  1651  type = s3
  1652  provider = Minio
  1653  env_auth = false
  1654  access_key_id = USWUXHGYZQYFYFFIT3RE
  1655  secret_access_key = MOJRH0mkL1IPauahWITSVvyDrQbEEIwljvmxdq03
  1656  region = us-east-1
  1657  endpoint = http://192.168.1.106:9000
  1658  location_constraint =
  1659  server_side_encryption =
  1660  ```
  1661  
Once set up, it can be used like any other remote, for example to copy files into a bucket:
  1663  
  1664  ```
  1665  rclone copy /path/to/files minio:bucket
  1666  ```
  1667  
  1668  ### Scaleway {#scaleway}
  1669  
[Scaleway](https://www.scaleway.com/object-storage/) Object Storage allows you to store anything from backups, logs and web assets to documents and photos.
Files can be uploaded from the Scaleway console or transferred through the Scaleway API and CLI, or using any S3-compatible tool.
  1672  
  1673  Scaleway provides an S3 interface which can be configured for use with rclone like this:
  1674  
  1675  ```
  1676  [scaleway]
  1677  type = s3
  1678  env_auth = false
  1679  endpoint = s3.nl-ams.scw.cloud
  1680  access_key_id = SCWXXXXXXXXXXXXXX
  1681  secret_access_key = 1111111-2222-3333-44444-55555555555555
  1682  region = nl-ams
  1683  location_constraint =
  1684  acl = private
  1685  force_path_style = false
  1686  server_side_encryption =
  1687  storage_class =
  1688  ```
  1689  
  1690  ### Wasabi ###
  1691  
  1692  [Wasabi](https://wasabi.com) is a cloud-based object storage service for a
  1693  broad range of applications and use cases. Wasabi is designed for
  1694  individuals and organizations that require a high-performance,
  1695  reliable, and secure data storage infrastructure at minimal cost.
  1696  
  1697  Wasabi provides an S3 interface which can be configured for use with
rclone like this:
  1699  
  1700  ```
  1701  No remotes found - make a new one
  1702  n) New remote
  1703  s) Set configuration password
  1704  n/s> n
  1705  name> wasabi
  1706  Type of storage to configure.
  1707  Choose a number from below, or type in your own value
  1708  [snip]
  1709  XX / Amazon S3 (also Dreamhost, Ceph, Minio)
  1710     \ "s3"
  1711  [snip]
  1712  Storage> s3
  1713  Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars). Only applies if access_key_id and secret_access_key is blank.
  1714  Choose a number from below, or type in your own value
  1715   1 / Enter AWS credentials in the next step
  1716     \ "false"
  1717   2 / Get AWS credentials from the environment (env vars or IAM)
  1718     \ "true"
  1719  env_auth> 1
  1720  AWS Access Key ID - leave blank for anonymous access or runtime credentials.
  1721  access_key_id> YOURACCESSKEY
  1722  AWS Secret Access Key (password) - leave blank for anonymous access or runtime credentials.
  1723  secret_access_key> YOURSECRETACCESSKEY
  1724  Region to connect to.
  1725  Choose a number from below, or type in your own value
  1726     / The default endpoint - a good choice if you are unsure.
  1727   1 | US Region, Northern Virginia or Pacific Northwest.
  1728     | Leave location constraint empty.
  1729     \ "us-east-1"
  1730  [snip]
  1731  region> us-east-1
  1732  Endpoint for S3 API.
  1733  Leave blank if using AWS to use the default endpoint for the region.
  1734  Specify if using an S3 clone such as Ceph.
  1735  endpoint> s3.wasabisys.com
  1736  Location constraint - must be set to match the Region. Used when creating buckets only.
  1737  Choose a number from below, or type in your own value
  1738   1 / Empty for US Region, Northern Virginia or Pacific Northwest.
  1739     \ ""
  1740  [snip]
  1741  location_constraint>
  1742  Canned ACL used when creating buckets and/or storing objects in S3.
  1743  For more info visit https://docs.aws.amazon.com/AmazonS3/latest/dev/acl-overview.html#canned-acl
  1744  Choose a number from below, or type in your own value
  1745   1 / Owner gets FULL_CONTROL. No one else has access rights (default).
  1746     \ "private"
  1747  [snip]
  1748  acl>
  1749  The server-side encryption algorithm used when storing this object in S3.
  1750  Choose a number from below, or type in your own value
  1751   1 / None
  1752     \ ""
  1753   2 / AES256
  1754     \ "AES256"
  1755  server_side_encryption>
  1756  The storage class to use when storing objects in S3.
  1757  Choose a number from below, or type in your own value
  1758   1 / Default
  1759     \ ""
  1760   2 / Standard storage class
  1761     \ "STANDARD"
  1762   3 / Reduced redundancy storage class
  1763     \ "REDUCED_REDUNDANCY"
  1764   4 / Standard Infrequent Access storage class
  1765     \ "STANDARD_IA"
  1766  storage_class>
  1767  Remote config
  1768  --------------------
  1769  [wasabi]
  1770  env_auth = false
  1771  access_key_id = YOURACCESSKEY
  1772  secret_access_key = YOURSECRETACCESSKEY
  1773  region = us-east-1
  1774  endpoint = s3.wasabisys.com
  1775  location_constraint =
  1776  acl =
  1777  server_side_encryption =
  1778  storage_class =
  1779  --------------------
  1780  y) Yes this is OK
  1781  e) Edit this remote
  1782  d) Delete this remote
  1783  y/e/d> y
  1784  ```
  1785  
  1786  This will leave the config file looking like this.
  1787  
  1788  ```
  1789  [wasabi]
  1790  type = s3
  1791  provider = Wasabi
  1792  env_auth = false
  1793  access_key_id = YOURACCESSKEY
  1794  secret_access_key = YOURSECRETACCESSKEY
region = us-east-1
  1796  endpoint = s3.wasabisys.com
  1797  location_constraint =
  1798  acl =
  1799  server_side_encryption =
  1800  storage_class =
  1801  ```
  1802  
  1803  ### Alibaba OSS {#alibaba-oss}
  1804  
  1805  Here is an example of making an [Alibaba Cloud (Aliyun) OSS](https://www.alibabacloud.com/product/oss/)
  1806  configuration.  First run:
  1807  
  1808      rclone config
  1809  
  1810  This will guide you through an interactive setup process.
  1811  
  1812  ```
  1813  No remotes found - make a new one
  1814  n) New remote
  1815  s) Set configuration password
  1816  q) Quit config
  1817  n/s/q> n
  1818  name> oss
  1819  Type of storage to configure.
  1820  Enter a string value. Press Enter for the default ("").
  1821  Choose a number from below, or type in your own value
  1822  [snip]
  1823   4 / Amazon S3 Compliant Storage Provider (AWS, Alibaba, Ceph, Digital Ocean, Dreamhost, IBM COS, Minio, etc)
  1824     \ "s3"
  1825  [snip]
  1826  Storage> s3
  1827  Choose your S3 provider.
  1828  Enter a string value. Press Enter for the default ("").
  1829  Choose a number from below, or type in your own value
  1830   1 / Amazon Web Services (AWS) S3
  1831     \ "AWS"
  1832   2 / Alibaba Cloud Object Storage System (OSS) formerly Aliyun
  1833     \ "Alibaba"
  1834   3 / Ceph Object Storage
  1835     \ "Ceph"
  1836  [snip]
  1837  provider> Alibaba
  1838  Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars).
  1839  Only applies if access_key_id and secret_access_key is blank.
  1840  Enter a boolean value (true or false). Press Enter for the default ("false").
  1841  Choose a number from below, or type in your own value
  1842   1 / Enter AWS credentials in the next step
  1843     \ "false"
  1844   2 / Get AWS credentials from the environment (env vars or IAM)
  1845     \ "true"
  1846  env_auth> 1
  1847  AWS Access Key ID.
  1848  Leave blank for anonymous access or runtime credentials.
  1849  Enter a string value. Press Enter for the default ("").
  1850  access_key_id> accesskeyid
  1851  AWS Secret Access Key (password)
  1852  Leave blank for anonymous access or runtime credentials.
  1853  Enter a string value. Press Enter for the default ("").
  1854  secret_access_key> secretaccesskey
  1855  Endpoint for OSS API.
  1856  Enter a string value. Press Enter for the default ("").
  1857  Choose a number from below, or type in your own value
  1858   1 / East China 1 (Hangzhou)
  1859     \ "oss-cn-hangzhou.aliyuncs.com"
  1860   2 / East China 2 (Shanghai)
  1861     \ "oss-cn-shanghai.aliyuncs.com"
  1862   3 / North China 1 (Qingdao)
  1863     \ "oss-cn-qingdao.aliyuncs.com"
  1864  [snip]
  1865  endpoint> 1
  1866  Canned ACL used when creating buckets and storing or copying objects.
  1867  
  1868  Note that this ACL is applied when server side copying objects as S3
  1869  doesn't copy the ACL from the source but rather writes a fresh one.
  1870  Enter a string value. Press Enter for the default ("").
  1871  Choose a number from below, or type in your own value
  1872   1 / Owner gets FULL_CONTROL. No one else has access rights (default).
  1873     \ "private"
  1874   2 / Owner gets FULL_CONTROL. The AllUsers group gets READ access.
  1875     \ "public-read"
  1876     / Owner gets FULL_CONTROL. The AllUsers group gets READ and WRITE access.
  1877  [snip]
  1878  acl> 1
  1879  The storage class to use when storing new objects in OSS.
  1880  Enter a string value. Press Enter for the default ("").
  1881  Choose a number from below, or type in your own value
  1882   1 / Default
  1883     \ ""
  1884   2 / Standard storage class
  1885     \ "STANDARD"
  1886   3 / Archive storage mode.
  1887     \ "GLACIER"
  1888   4 / Infrequent access storage mode.
  1889     \ "STANDARD_IA"
  1890  storage_class> 1
  1891  Edit advanced config? (y/n)
  1892  y) Yes
  1893  n) No
  1894  y/n> n
  1895  Remote config
  1896  --------------------
  1897  [oss]
  1898  type = s3
  1899  provider = Alibaba
  1900  env_auth = false
  1901  access_key_id = accesskeyid
  1902  secret_access_key = secretaccesskey
  1903  endpoint = oss-cn-hangzhou.aliyuncs.com
  1904  acl = private
  1905  storage_class = Standard
  1906  --------------------
  1907  y) Yes this is OK
  1908  e) Edit this remote
  1909  d) Delete this remote
  1910  y/e/d> y
  1911  ```
  1912  
### Netease NOS ###
  1914  
For Netease NOS configure as normal with the configurator `rclone config`,
setting the provider to `Netease`.  This will automatically set
`force_path_style = false`, which is necessary for it to work properly.
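The resulting config file should then look something like this (the access keys are placeholders, and the endpoint is whatever you entered for your NOS region):

```
[nos]
type = s3
provider = Netease
env_auth = false
access_key_id = XXX
secret_access_key = YYY
endpoint = your-nos-endpoint
```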