
     1  ---
     2  title: "Amazon S3"
     3  description: "Rclone docs for Amazon S3"
     4  date: "2016-07-11"
     5  ---
     6  
     7  <i class="fab fa-amazon"></i> Amazon S3 Storage Providers
     8  --------------------------------------------------------
     9  
    10  The S3 backend can be used with a number of different providers:
    11  
    12  * {{< provider name="AWS S3" home="https://aws.amazon.com/s3/" config="/s3/#amazon-s3" >}}
    13  * {{< provider name="Alibaba Cloud (Aliyun) Object Storage System (OSS)" home="https://www.alibabacloud.com/product/oss/" config="/s3/#alibaba-oss" >}}
    14  * {{< provider name="Ceph" home="http://ceph.com/" config="/s3/#ceph" >}}
    15  * {{< provider name="DigitalOcean Spaces" home="https://www.digitalocean.com/products/object-storage/" config="/s3/#digitalocean-spaces" >}}
    16  * {{< provider name="Dreamhost" home="https://www.dreamhost.com/cloud/storage/" config="/s3/#dreamhost" >}}
    17  * {{< provider name="IBM COS S3" home="http://www.ibm.com/cloud/object-storage" config="/s3/#ibm-cos-s3" >}}
    18  * {{< provider name="Minio" home="https://www.minio.io/" config="/s3/#minio" >}}
    19  * {{< provider name="Scaleway" home="https://www.scaleway.com/en/object-storage/" config="/s3/#scaleway" >}}
    20  * {{< provider name="StackPath" home="https://www.stackpath.com/products/object-storage/" config="/s3/#stackpath" >}}
    21  * {{< provider name="Wasabi" home="https://wasabi.com/" config="/s3/#wasabi" >}}
    22  
    23  Paths are specified as `remote:bucket` (or `remote:` for the `lsd`
    24  command).  You may put subdirectories in too, eg `remote:bucket/path/to/dir`.
    25  
    26  Once you have made a remote (see the provider specific section above)
    27  you can use it like this:
    28  
    29  See all buckets
    30  
    31      rclone lsd remote:
    32  
    33  Make a new bucket
    34  
    35      rclone mkdir remote:bucket
    36  
    37  List the contents of a bucket
    38  
    39      rclone ls remote:bucket
    40  
    41  Sync `/home/local/directory` to the remote bucket, deleting any excess
    42  files in the bucket.
    43  
    44      rclone sync /home/local/directory remote:bucket
    45  
    46  ## AWS S3 {#amazon-s3}
    47  
    48  Here is an example of making an s3 configuration.  First run
    49  
    50      rclone config
    51  
    52  This will guide you through an interactive setup process.
    53  
    54  ```
    55  No remotes found - make a new one
    56  n) New remote
    57  s) Set configuration password
    58  q) Quit config
    59  n/s/q> n
    60  name> remote
    61  Type of storage to configure.
    62  Choose a number from below, or type in your own value
    63  [snip]
    64  XX / Amazon S3 Compliant Storage Providers (AWS, Ceph, Dreamhost, IBM COS, Minio)
    65     \ "s3"
    66  [snip]
    67  Storage> s3
    68  Choose your S3 provider.
    69  Choose a number from below, or type in your own value
    70   1 / Amazon Web Services (AWS) S3
    71     \ "AWS"
    72   2 / Ceph Object Storage
    73     \ "Ceph"
    74   3 / Digital Ocean Spaces
    75     \ "DigitalOcean"
    76   4 / Dreamhost DreamObjects
    77     \ "Dreamhost"
    78   5 / IBM COS S3
    79     \ "IBMCOS"
    80   6 / Minio Object Storage
    81     \ "Minio"
    82   7 / Wasabi Object Storage
    83     \ "Wasabi"
    84   8 / Any other S3 compatible provider
    85     \ "Other"
    86  provider> 1
    87  Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars). Only applies if access_key_id and secret_access_key is blank.
    88  Choose a number from below, or type in your own value
    89   1 / Enter AWS credentials in the next step
    90     \ "false"
    91   2 / Get AWS credentials from the environment (env vars or IAM)
    92     \ "true"
    93  env_auth> 1
    94  AWS Access Key ID - leave blank for anonymous access or runtime credentials.
    95  access_key_id> XXX
    96  AWS Secret Access Key (password) - leave blank for anonymous access or runtime credentials.
    97  secret_access_key> YYY
    98  Region to connect to.
    99  Choose a number from below, or type in your own value
   100     / The default endpoint - a good choice if you are unsure.
   101   1 | US Region, Northern Virginia or Pacific Northwest.
   102     | Leave location constraint empty.
   103     \ "us-east-1"
   104     / US East (Ohio) Region
   105   2 | Needs location constraint us-east-2.
   106     \ "us-east-2"
   107     / US West (Oregon) Region
   108   3 | Needs location constraint us-west-2.
   109     \ "us-west-2"
   110     / US West (Northern California) Region
   111   4 | Needs location constraint us-west-1.
   112     \ "us-west-1"
   113     / Canada (Central) Region
   114   5 | Needs location constraint ca-central-1.
   115     \ "ca-central-1"
   116     / EU (Ireland) Region
   117   6 | Needs location constraint EU or eu-west-1.
   118     \ "eu-west-1"
   119     / EU (London) Region
   120   7 | Needs location constraint eu-west-2.
   121     \ "eu-west-2"
   122     / EU (Frankfurt) Region
   123   8 | Needs location constraint eu-central-1.
   124     \ "eu-central-1"
   125     / Asia Pacific (Singapore) Region
   126   9 | Needs location constraint ap-southeast-1.
   127     \ "ap-southeast-1"
   128     / Asia Pacific (Sydney) Region
   129  10 | Needs location constraint ap-southeast-2.
   130     \ "ap-southeast-2"
   131     / Asia Pacific (Tokyo) Region
   132  11 | Needs location constraint ap-northeast-1.
   133     \ "ap-northeast-1"
   134     / Asia Pacific (Seoul)
   135  12 | Needs location constraint ap-northeast-2.
   136     \ "ap-northeast-2"
   137     / Asia Pacific (Mumbai)
   138  13 | Needs location constraint ap-south-1.
   139     \ "ap-south-1"
   140     / Asia Pacific (Hong Kong) Region
   141  14 | Needs location constraint ap-east-1.
   142     \ "ap-east-1"
   143     / South America (Sao Paulo) Region
   144  15 | Needs location constraint sa-east-1.
   145     \ "sa-east-1"
   146  region> 1
   147  Endpoint for S3 API.
   148  Leave blank if using AWS to use the default endpoint for the region.
   149  endpoint> 
   150  Location constraint - must be set to match the Region. Used when creating buckets only.
   151  Choose a number from below, or type in your own value
   152   1 / Empty for US Region, Northern Virginia or Pacific Northwest.
   153     \ ""
   154   2 / US East (Ohio) Region.
   155     \ "us-east-2"
   156   3 / US West (Oregon) Region.
   157     \ "us-west-2"
   158   4 / US West (Northern California) Region.
   159     \ "us-west-1"
   160   5 / Canada (Central) Region.
   161     \ "ca-central-1"
   162   6 / EU (Ireland) Region.
   163     \ "eu-west-1"
   164   7 / EU (London) Region.
   165     \ "eu-west-2"
   166   8 / EU Region.
   167     \ "EU"
   168   9 / Asia Pacific (Singapore) Region.
   169     \ "ap-southeast-1"
   170  10 / Asia Pacific (Sydney) Region.
   171     \ "ap-southeast-2"
   172  11 / Asia Pacific (Tokyo) Region.
   173     \ "ap-northeast-1"
   174  12 / Asia Pacific (Seoul)
   175     \ "ap-northeast-2"
   176  13 / Asia Pacific (Mumbai)
   177     \ "ap-south-1"
   178  14 / Asia Pacific (Hong Kong)
   179     \ "ap-east-1"
   180  15 / South America (Sao Paulo) Region.
   181     \ "sa-east-1"
   182  location_constraint> 1
   183  Canned ACL used when creating buckets and/or storing objects in S3.
   184  For more info visit https://docs.aws.amazon.com/AmazonS3/latest/dev/acl-overview.html#canned-acl
   185  Choose a number from below, or type in your own value
   186   1 / Owner gets FULL_CONTROL. No one else has access rights (default).
   187     \ "private"
   188   2 / Owner gets FULL_CONTROL. The AllUsers group gets READ access.
   189     \ "public-read"
   190     / Owner gets FULL_CONTROL. The AllUsers group gets READ and WRITE access.
   191   3 | Granting this on a bucket is generally not recommended.
   192     \ "public-read-write"
   193   4 / Owner gets FULL_CONTROL. The AuthenticatedUsers group gets READ access.
   194     \ "authenticated-read"
   195     / Object owner gets FULL_CONTROL. Bucket owner gets READ access.
   196   5 | If you specify this canned ACL when creating a bucket, Amazon S3 ignores it.
   197     \ "bucket-owner-read"
   198     / Both the object owner and the bucket owner get FULL_CONTROL over the object.
   199   6 | If you specify this canned ACL when creating a bucket, Amazon S3 ignores it.
   200     \ "bucket-owner-full-control"
   201  acl> 1
   202  The server-side encryption algorithm used when storing this object in S3.
   203  Choose a number from below, or type in your own value
   204   1 / None
   205     \ ""
   206   2 / AES256
   207     \ "AES256"
   208  server_side_encryption> 1
   209  The storage class to use when storing objects in S3.
   210  Choose a number from below, or type in your own value
   211   1 / Default
   212     \ ""
   213   2 / Standard storage class
   214     \ "STANDARD"
   215   3 / Reduced redundancy storage class
   216     \ "REDUCED_REDUNDANCY"
   217   4 / Standard Infrequent Access storage class
   218     \ "STANDARD_IA"
   219   5 / One Zone Infrequent Access storage class
   220     \ "ONEZONE_IA"
   221   6 / Glacier storage class
   222     \ "GLACIER"
   223   7 / Glacier Deep Archive storage class
   224     \ "DEEP_ARCHIVE"
   225   8 / Intelligent-Tiering storage class
   226     \ "INTELLIGENT_TIERING"
   227  storage_class> 1
   228  Remote config
   229  --------------------
   230  [remote]
   231  type = s3
   232  provider = AWS
   233  env_auth = false
   234  access_key_id = XXX
   235  secret_access_key = YYY
   236  region = us-east-1
   237  endpoint = 
   238  location_constraint = 
   239  acl = private
   240  server_side_encryption = 
   241  storage_class = 
   242  --------------------
   243  y) Yes this is OK
   244  e) Edit this remote
   245  d) Delete this remote
   246  y/e/d> 
   247  ```
   248  
   249  ### --fast-list ###
   250  
   251  This remote supports `--fast-list` which allows you to use fewer
   252  transactions in exchange for more memory. See the [rclone
   253  docs](/docs/#fast-list) for more details.
   254  
   255  ### --update and --use-server-modtime ###
   256  
   257  As noted below, the modified time is stored as metadata on the object. It is
   258  used by default for all operations that require checking the time a file was
   259  last updated. It allows rclone to treat the remote more like a true filesystem,
   260  but it is inefficient because it requires an extra API call to retrieve the
   261  metadata.
   262  
   263  For many operations, the time the object was last uploaded to the remote is
   264  sufficient to determine if it is "dirty". By using `--update` along with
   265  `--use-server-modtime`, you can avoid the extra API call and simply upload
   266  files whose local modtime is newer than the time it was last uploaded.
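
The comparison these two flags enable can be sketched as follows. This is a simplified illustration of the decision described above, not rclone's actual Go implementation:

```python
def needs_upload(local_mtime: float, server_upload_time: float) -> bool:
    """With --update and --use-server-modtime, a file is transferred only
    if its local modification time is newer than the time the object was
    last uploaded (the server's LastModified), which avoids the extra
    HEAD request needed to read the X-Amz-Meta-Mtime metadata."""
    return local_mtime > server_upload_time

# A file edited after its last upload is transferred again:
print(needs_upload(local_mtime=1700000100.0, server_upload_time=1700000000.0))  # True
# An untouched file is skipped:
print(needs_upload(local_mtime=1699999900.0, server_upload_time=1700000000.0))  # False
```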
   267  
   268  ### Modified time ###
   269  
   270  The modified time is stored as metadata on the object as
   271  `X-Amz-Meta-Mtime` as floating point since the epoch accurate to 1 ns.
   272  
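A metadata value in this format can be produced and read back as shown below. The helper names are hypothetical, for illustration only; only the string format (decimal seconds since the epoch with up to nanosecond precision) comes from the description above:

```python
def format_mtime(sec: int, nsec: int) -> str:
    """Render a modification time in the format stored in the
    X-Amz-Meta-Mtime metadata: seconds since the Unix epoch as a
    decimal with nanosecond precision, eg "1490017839.123456789"."""
    return f"{sec}.{nsec:09d}"

def parse_mtime(value: str) -> tuple[int, int]:
    """Split the metadata string back into (seconds, nanoseconds);
    a value with no fractional part parses as zero nanoseconds."""
    whole, _, frac = value.partition(".")
    return int(whole), int(frac.ljust(9, "0"))

print(format_mtime(1490017839, 123456789))  # 1490017839.123456789
print(parse_mtime("1490017839.123456789"))  # (1490017839, 123456789)
```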
   273  If the modification time needs to be updated, rclone will attempt to perform a
   274  server side copy to update it, which is only possible if the object can be
   275  copied in a single part. If the object is larger than 5GB or is in Glacier or
   276  Glacier Deep Archive storage, the object will be uploaded again rather than copied.
   277  
   278  #### Restricted filename characters
   279  
   280  S3 allows any valid UTF-8 string as a key.
   281  
   282  Invalid UTF-8 bytes will be [replaced](/overview/#invalid-utf8), as
   283  they can't be used in XML.
   284  
   285  The following characters are replaced since these are problematic when
   286  dealing with the REST API:
   287  
   288  | Character | Value | Replacement |
   289  | --------- |:-----:|:-----------:|
   290  | NUL       | 0x00  | ␀           |
   291  | /         | 0x2F  | ／          |
   292  
   293  The encoding will also encode these file names as they don't seem to
   294  work with the SDK properly:
   295  
   296  | File name | Replacement |
   297  | --------- |:-----------:|
   298  | .         | ．          |
   299  | ..        | ．．         |
   300  
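A minimal sketch of the substitutions in the two tables above, mapping the problematic characters and file names to their fullwidth Unicode lookalikes. This is an illustration of the encoding, not rclone's actual encoder:

```python
# Characters replaced inside any key name:
CHAR_MAP = {"\x00": "\u2400", "/": "\uff0f"}   # NUL -> ␀, / -> ／
# Whole file names replaced because they misbehave in the SDK:
NAME_MAP = {".": "\uff0e", "..": "\uff0e\uff0e"}  # . -> ．, .. -> ．．

def encode_name(name: str) -> str:
    """Apply the replacements described in the tables above."""
    if name in NAME_MAP:
        return NAME_MAP[name]
    return "".join(CHAR_MAP.get(c, c) for c in name)

print(encode_name("a/b"))        # a／b
print(encode_name(".."))         # ．．
print(encode_name("hello.txt"))  # hello.txt (unchanged)
```
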
   301  ### Multipart uploads ###
   302  
   303  rclone supports multipart uploads with S3 which means that it can
   304  upload files bigger than 5GB.
   305  
   306  Note that files uploaded *both* with multipart upload *and* through
   307  crypt remotes do not have MD5 sums.
   308  
   309  rclone switches from single part uploads to multipart uploads at the
   310  point specified by `--s3-upload-cutoff`.  This can be a maximum of 5GB
   311  and a minimum of 0 (ie always upload multipart files).
   312  
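The cutoff decision can be sketched like this. The 200M default used here is an assumption (check `rclone help flags` for your version); the valid range of the flag is as stated above:

```python
MiB = 1024 ** 2

def use_multipart(size: int, upload_cutoff: int = 200 * MiB) -> bool:
    """Files at or above --s3-upload-cutoff are sent as multipart
    uploads; smaller files go up in a single PUT. A cutoff of 0
    means every file is uploaded multipart."""
    return size >= upload_cutoff

print(use_multipart(10 * MiB))            # False: single part upload
print(use_multipart(6 * 1024 * MiB))      # True: above the cutoff
print(use_multipart(1, upload_cutoff=0))  # True: always multipart
```
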
   313  The chunk sizes used in the multipart upload are specified by
   314  `--s3-chunk-size` and the number of chunks uploaded concurrently is
   315  specified by `--s3-upload-concurrency`.
   316  
   317  Multipart uploads will use `--transfers` * `--s3-upload-concurrency` *
   318  `--s3-chunk-size` extra memory.  Single part uploads do not use extra
   319  memory.
   320  
   321  Single part transfers can be faster than multipart transfers or slower
   322  depending on your latency from S3 - the more latency, the more likely
   323  single part transfers will be faster.
   324  
   325  Increasing `--s3-upload-concurrency` will increase throughput (8 would
   326  be a sensible value) and increasing `--s3-chunk-size` also increases
   327  throughput (16M would be sensible).  Increasing either of these will
   328  use more memory.  The default values are high enough to gain most of
   329  the possible performance without using too much memory.
   330  
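The memory formula above works out as follows. The default values used here (`--transfers 4`, `--s3-upload-concurrency 4`, `--s3-chunk-size 5M`) are assumptions matching rclone's usual defaults:

```python
MiB = 1024 ** 2

def multipart_extra_memory(transfers: int = 4,
                           upload_concurrency: int = 4,
                           chunk_size: int = 5 * MiB) -> int:
    """Extra memory used by multipart uploads, in bytes:
    --transfers * --s3-upload-concurrency * --s3-chunk-size."""
    return transfers * upload_concurrency * chunk_size

print(multipart_extra_memory() // MiB)  # 80 (MiB) with the defaults
# Tuning for throughput as suggested above costs more memory:
print(multipart_extra_memory(upload_concurrency=8,
                             chunk_size=16 * MiB) // MiB)  # 512
```
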
   331  
   332  ### Buckets and Regions ###
   333  
   334  With Amazon S3 you can list buckets (`rclone lsd`) using any region,
   335  but you can only access the content of a bucket from the region it was
   336  created in.  If you attempt to access a bucket from the wrong region,
   337  you will get an error, `incorrect region, the bucket is not in 'XXX'
   338  region`.
   339  
   340  ### Authentication ###
   341  
   342  There are a number of ways to supply `rclone` with a set of AWS
   343  credentials, with and without using the environment.
   344  
   345  The different authentication methods are tried in this order:
   346  
   347   - Directly in the rclone configuration file (`env_auth = false` in the config file):
   348     - `access_key_id` and `secret_access_key` are required.
   349     - `session_token` can be optionally set when using AWS STS.
   350   - Runtime configuration (`env_auth = true` in the config file):
   351     - Export the following environment variables before running `rclone`:
   352       - Access Key ID: `AWS_ACCESS_KEY_ID` or `AWS_ACCESS_KEY`
   353       - Secret Access Key: `AWS_SECRET_ACCESS_KEY` or `AWS_SECRET_KEY`
   354       - Session Token: `AWS_SESSION_TOKEN` (optional)
   355     - Or, use a [named profile](https://docs.aws.amazon.com/cli/latest/userguide/cli-multiple-profiles.html):
   356       - Profile files are standard files used by AWS CLI tools
   357       - By default it will use the credentials file in your home directory (eg `~/.aws/credentials` on unix based systems) and the "default" profile. To change this, set these environment variables:
   358           - `AWS_SHARED_CREDENTIALS_FILE` to control which file.
   359           - `AWS_PROFILE` to control which profile to use.
   360     - Or, run `rclone` in an ECS task with an IAM role (AWS only).
   361     - Or, run `rclone` on an EC2 instance with an IAM role (AWS only).
   362     - Or, run `rclone` in an EKS pod with an IAM role that is associated with a service account (AWS only).
   363  
   364  If none of these options actually ends up providing `rclone` with AWS
   365  credentials then S3 interaction will be non-authenticated (see below).
   366  
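The lookup order above can be sketched as a small resolver. This is a deliberately simplified illustration (the real resolution is performed by the AWS SDK inside rclone, and a shared profile is consulted even without these variables set); the function name is hypothetical:

```python
import os

def credential_source(env_auth: bool, access_key_id: str,
                      secret_access_key: str, environ=os.environ) -> str:
    """Return which source, in the order described above, would
    supply rclone's AWS credentials."""
    if not env_auth:
        # env_auth = false: use the keys from the config file, if any.
        if access_key_id and secret_access_key:
            return "config file"
        return "anonymous"
    if environ.get("AWS_ACCESS_KEY_ID") or environ.get("AWS_ACCESS_KEY"):
        return "environment variables"
    if environ.get("AWS_SHARED_CREDENTIALS_FILE") or environ.get("AWS_PROFILE"):
        return "shared profile"
    return "instance role (EC2/ECS/EKS) or anonymous"

print(credential_source(False, "XXX", "YYY"))                       # config file
print(credential_source(True, "", "", {"AWS_ACCESS_KEY_ID": "X"}))  # environment variables
```
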
   367  ### S3 Permissions ###
   368  
   369  When using the `sync` subcommand of `rclone` the following minimum
   370  permissions are required to be available on the bucket being written to:
   371  
   372  * `ListBucket`
   373  * `DeleteObject`
   374  * `GetObject`
   375  * `PutObject`
   376  * `PutObjectACL`
   377  
   378  When using the `lsd` subcommand, the `ListAllMyBuckets` permission is required.
   379  
   380  Example policy:
   381  
   382  ```
   383  {
   384      "Version": "2012-10-17",
   385      "Statement": [
   386          {
   387              "Effect": "Allow",
   388              "Principal": {
   389                  "AWS": "arn:aws:iam::USER_SID:user/USER_NAME"
   390              },
   391              "Action": [
   392                  "s3:ListBucket",
   393                  "s3:DeleteObject",
   394                  "s3:GetObject",
   395                  "s3:PutObject",
   396                  "s3:PutObjectAcl"
   397              ],
   398              "Resource": [
   399                "arn:aws:s3:::BUCKET_NAME/*",
   400                "arn:aws:s3:::BUCKET_NAME"
   401              ]
   402          },
   403          {
   404              "Effect": "Allow",
   405              "Action": "s3:ListAllMyBuckets",
   406              "Resource": "arn:aws:s3:::*"
   407          }
   408      ]
   409  }
   410  ```
   411  
   412  Notes on above:
   413  
   414  1. This is a policy that can be used when creating a bucket. It assumes
   415     that `USER_NAME` has been created.
   416  2. The Resource entry must include both resource ARNs, as one implies
   417     the bucket and the other implies the bucket's objects.
   418  
   419  For reference, [here's an Ansible script](https://gist.github.com/ebridges/ebfc9042dd7c756cd101cfa807b7ae2b)
   420  that will generate one or more buckets that will work with `rclone sync`.
   421  
   422  ### Key Management System (KMS) ###
   423  
   424  If you are using server side encryption with KMS then you will find
   425  you can't transfer small objects.  As a work-around you can use the
   426  `--ignore-checksum` flag.
   427  
   428  A proper fix is being worked on in [issue #1824](https://github.com/rclone/rclone/issues/1824).
   429  
   430  ### Glacier and Glacier Deep Archive ###
   431  
   432  You can upload objects using the glacier storage class or transition them to glacier using a [lifecycle policy](http://docs.aws.amazon.com/AmazonS3/latest/user-guide/create-lifecycle.html).
   433  The bucket can still be synced or copied into normally, but if rclone
   434  tries to access data from the glacier storage class you will see an error like below.
   435  
   436      2017/09/11 19:07:43 Failed to sync: failed to open source object: Object in GLACIER, restore first: path/to/file
   437  
   438  In this case you need to [restore](http://docs.aws.amazon.com/AmazonS3/latest/user-guide/restore-archived-objects.html)
   439  the object(s) in question before using rclone.
   440  
   441  Note that rclone only speaks the S3 API; it does not speak the Glacier
   442  Vault API, so rclone cannot directly access Glacier Vaults.
   443  
   444  <!--- autogenerated options start - DO NOT EDIT, instead edit fs.RegInfo in backend/s3/s3.go then run make backenddocs -->
   445  ### Standard Options
   446  
   447  Here are the standard options specific to s3 (Amazon S3 Compliant Storage Provider (AWS, Alibaba, Ceph, Digital Ocean, Dreamhost, IBM COS, Minio, etc)).
   448  
   449  #### --s3-provider
   450  
   451  Choose your S3 provider.
   452  
   453  - Config:      provider
   454  - Env Var:     RCLONE_S3_PROVIDER
   455  - Type:        string
   456  - Default:     ""
   457  - Examples:
   458      - "AWS"
   459          - Amazon Web Services (AWS) S3
   460      - "Alibaba"
   461          - Alibaba Cloud Object Storage System (OSS) formerly Aliyun
   462      - "Ceph"
   463          - Ceph Object Storage
   464      - "DigitalOcean"
   465          - Digital Ocean Spaces
   466      - "Dreamhost"
   467          - Dreamhost DreamObjects
   468      - "IBMCOS"
   469          - IBM COS S3
   470      - "Minio"
   471          - Minio Object Storage
   472      - "Netease"
   473          - Netease Object Storage (NOS)
   474      - "StackPath"
   475          - StackPath Object Storage
   476      - "Wasabi"
   477          - Wasabi Object Storage
   478      - "Other"
   479          - Any other S3 compatible provider
   480  
   481  #### --s3-env-auth
   482  
   483  Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars).
   484  Only applies if access_key_id and secret_access_key is blank.
   485  
   486  - Config:      env_auth
   487  - Env Var:     RCLONE_S3_ENV_AUTH
   488  - Type:        bool
   489  - Default:     false
   490  - Examples:
   491      - "false"
   492          - Enter AWS credentials in the next step
   493      - "true"
   494          - Get AWS credentials from the environment (env vars or IAM)
   495  
   496  #### --s3-access-key-id
   497  
   498  AWS Access Key ID.
   499  Leave blank for anonymous access or runtime credentials.
   500  
   501  - Config:      access_key_id
   502  - Env Var:     RCLONE_S3_ACCESS_KEY_ID
   503  - Type:        string
   504  - Default:     ""
   505  
   506  #### --s3-secret-access-key
   507  
   508  AWS Secret Access Key (password)
   509  Leave blank for anonymous access or runtime credentials.
   510  
   511  - Config:      secret_access_key
   512  - Env Var:     RCLONE_S3_SECRET_ACCESS_KEY
   513  - Type:        string
   514  - Default:     ""
   515  
   516  #### --s3-region
   517  
   518  Region to connect to.
   519  
   520  - Config:      region
   521  - Env Var:     RCLONE_S3_REGION
   522  - Type:        string
   523  - Default:     ""
   524  - Examples:
   525      - "us-east-1"
   526          - The default endpoint - a good choice if you are unsure.
   527          - US Region, Northern Virginia or Pacific Northwest.
   528          - Leave location constraint empty.
   529      - "us-east-2"
   530          - US East (Ohio) Region
   531          - Needs location constraint us-east-2.
   532      - "us-west-2"
   533          - US West (Oregon) Region
   534          - Needs location constraint us-west-2.
   535      - "us-west-1"
   536          - US West (Northern California) Region
   537          - Needs location constraint us-west-1.
   538      - "ca-central-1"
   539          - Canada (Central) Region
   540          - Needs location constraint ca-central-1.
   541      - "eu-west-1"
   542          - EU (Ireland) Region
   543          - Needs location constraint EU or eu-west-1.
   544      - "eu-west-2"
   545          - EU (London) Region
   546          - Needs location constraint eu-west-2.
   547      - "eu-north-1"
   548          - EU (Stockholm) Region
   549          - Needs location constraint eu-north-1.
   550      - "eu-central-1"
   551          - EU (Frankfurt) Region
   552          - Needs location constraint eu-central-1.
   553      - "ap-southeast-1"
   554          - Asia Pacific (Singapore) Region
   555          - Needs location constraint ap-southeast-1.
   556      - "ap-southeast-2"
   557          - Asia Pacific (Sydney) Region
   558          - Needs location constraint ap-southeast-2.
   559      - "ap-northeast-1"
   560          - Asia Pacific (Tokyo) Region
   561          - Needs location constraint ap-northeast-1.
   562      - "ap-northeast-2"
   563          - Asia Pacific (Seoul)
   564          - Needs location constraint ap-northeast-2.
   565      - "ap-south-1"
   566          - Asia Pacific (Mumbai)
   567          - Needs location constraint ap-south-1.
   568      - "ap-east-1"
        - Asia Pacific (Hong Kong) Region
   570          - Needs location constraint ap-east-1.
   571      - "sa-east-1"
   572          - South America (Sao Paulo) Region
   573          - Needs location constraint sa-east-1.
   574  
   575  #### --s3-region
   576  
   577  Region to connect to.
   578  Leave blank if you are using an S3 clone and you don't have a region.
   579  
   580  - Config:      region
   581  - Env Var:     RCLONE_S3_REGION
   582  - Type:        string
   583  - Default:     ""
   584  - Examples:
   585      - ""
   586          - Use this if unsure. Will use v4 signatures and an empty region.
   587      - "other-v2-signature"
   588          - Use this only if v4 signatures don't work, eg pre Jewel/v10 CEPH.
   589  
   590  #### --s3-endpoint
   591  
   592  Endpoint for S3 API.
   593  Leave blank if using AWS to use the default endpoint for the region.
   594  
   595  - Config:      endpoint
   596  - Env Var:     RCLONE_S3_ENDPOINT
   597  - Type:        string
   598  - Default:     ""
   599  
   600  #### --s3-endpoint
   601  
   602  Endpoint for IBM COS S3 API.
   603  Specify if using an IBM COS On Premise.
   604  
   605  - Config:      endpoint
   606  - Env Var:     RCLONE_S3_ENDPOINT
   607  - Type:        string
   608  - Default:     ""
   609  - Examples:
   610      - "s3-api.us-geo.objectstorage.softlayer.net"
   611          - US Cross Region Endpoint
   612      - "s3-api.dal.us-geo.objectstorage.softlayer.net"
   613          - US Cross Region Dallas Endpoint
   614      - "s3-api.wdc-us-geo.objectstorage.softlayer.net"
   615          - US Cross Region Washington DC Endpoint
   616      - "s3-api.sjc-us-geo.objectstorage.softlayer.net"
   617          - US Cross Region San Jose Endpoint
   618      - "s3-api.us-geo.objectstorage.service.networklayer.com"
   619          - US Cross Region Private Endpoint
   620      - "s3-api.dal-us-geo.objectstorage.service.networklayer.com"
   621          - US Cross Region Dallas Private Endpoint
   622      - "s3-api.wdc-us-geo.objectstorage.service.networklayer.com"
   623          - US Cross Region Washington DC Private Endpoint
   624      - "s3-api.sjc-us-geo.objectstorage.service.networklayer.com"
   625          - US Cross Region San Jose Private Endpoint
   626      - "s3.us-east.objectstorage.softlayer.net"
   627          - US Region East Endpoint
   628      - "s3.us-east.objectstorage.service.networklayer.com"
   629          - US Region East Private Endpoint
   630      - "s3.us-south.objectstorage.softlayer.net"
   631          - US Region South Endpoint
   632      - "s3.us-south.objectstorage.service.networklayer.com"
   633          - US Region South Private Endpoint
   634      - "s3.eu-geo.objectstorage.softlayer.net"
   635          - EU Cross Region Endpoint
   636      - "s3.fra-eu-geo.objectstorage.softlayer.net"
   637          - EU Cross Region Frankfurt Endpoint
   638      - "s3.mil-eu-geo.objectstorage.softlayer.net"
   639          - EU Cross Region Milan Endpoint
   640      - "s3.ams-eu-geo.objectstorage.softlayer.net"
   641          - EU Cross Region Amsterdam Endpoint
   642      - "s3.eu-geo.objectstorage.service.networklayer.com"
   643          - EU Cross Region Private Endpoint
   644      - "s3.fra-eu-geo.objectstorage.service.networklayer.com"
   645          - EU Cross Region Frankfurt Private Endpoint
   646      - "s3.mil-eu-geo.objectstorage.service.networklayer.com"
   647          - EU Cross Region Milan Private Endpoint
   648      - "s3.ams-eu-geo.objectstorage.service.networklayer.com"
   649          - EU Cross Region Amsterdam Private Endpoint
   650      - "s3.eu-gb.objectstorage.softlayer.net"
   651          - Great Britain Endpoint
   652      - "s3.eu-gb.objectstorage.service.networklayer.com"
   653          - Great Britain Private Endpoint
   654      - "s3.ap-geo.objectstorage.softlayer.net"
   655          - APAC Cross Regional Endpoint
   656      - "s3.tok-ap-geo.objectstorage.softlayer.net"
   657          - APAC Cross Regional Tokyo Endpoint
   658      - "s3.hkg-ap-geo.objectstorage.softlayer.net"
   659          - APAC Cross Regional HongKong Endpoint
   660      - "s3.seo-ap-geo.objectstorage.softlayer.net"
   661          - APAC Cross Regional Seoul Endpoint
   662      - "s3.ap-geo.objectstorage.service.networklayer.com"
   663          - APAC Cross Regional Private Endpoint
   664      - "s3.tok-ap-geo.objectstorage.service.networklayer.com"
   665          - APAC Cross Regional Tokyo Private Endpoint
   666      - "s3.hkg-ap-geo.objectstorage.service.networklayer.com"
   667          - APAC Cross Regional HongKong Private Endpoint
   668      - "s3.seo-ap-geo.objectstorage.service.networklayer.com"
   669          - APAC Cross Regional Seoul Private Endpoint
   670      - "s3.mel01.objectstorage.softlayer.net"
   671          - Melbourne Single Site Endpoint
   672      - "s3.mel01.objectstorage.service.networklayer.com"
   673          - Melbourne Single Site Private Endpoint
   674      - "s3.tor01.objectstorage.softlayer.net"
   675          - Toronto Single Site Endpoint
   676      - "s3.tor01.objectstorage.service.networklayer.com"
   677          - Toronto Single Site Private Endpoint
   678  
   679  #### --s3-endpoint
   680  
   681  Endpoint for StackPath Object Storage API.
   682  
   683  - Config:      endpoint
   684  - Env Var:     RCLONE_S3_ENDPOINT
   685  - Type:        string
   686  - Default:     ""
   687  - Examples:
   688      - "s3.us-east-2.stackpathstorage.com"
   689          - US East Endpoint
   690      - "s3.us-west-1.stackpathstorage.com"
   691          - US West Endpoint
   692      - "s3.eu-central-1.stackpathstorage.com"
   693          - EU Endpoint
   694  
   695  #### --s3-endpoint
   696  
   697  Endpoint for OSS API.
   698  
   699  - Config:      endpoint
   700  - Env Var:     RCLONE_S3_ENDPOINT
   701  - Type:        string
   702  - Default:     ""
   703  - Examples:
   704      - "oss-cn-hangzhou.aliyuncs.com"
   705          - East China 1 (Hangzhou)
   706      - "oss-cn-shanghai.aliyuncs.com"
   707          - East China 2 (Shanghai)
   708      - "oss-cn-qingdao.aliyuncs.com"
   709          - North China 1 (Qingdao)
   710      - "oss-cn-beijing.aliyuncs.com"
   711          - North China 2 (Beijing)
   712      - "oss-cn-zhangjiakou.aliyuncs.com"
   713          - North China 3 (Zhangjiakou)
   714      - "oss-cn-huhehaote.aliyuncs.com"
   715          - North China 5 (Huhehaote)
   716      - "oss-cn-shenzhen.aliyuncs.com"
   717          - South China 1 (Shenzhen)
   718      - "oss-cn-hongkong.aliyuncs.com"
   719          - Hong Kong (Hong Kong)
   720      - "oss-us-west-1.aliyuncs.com"
   721          - US West 1 (Silicon Valley)
   722      - "oss-us-east-1.aliyuncs.com"
   723          - US East 1 (Virginia)
   724      - "oss-ap-southeast-1.aliyuncs.com"
   725          - Southeast Asia Southeast 1 (Singapore)
   726      - "oss-ap-southeast-2.aliyuncs.com"
   727          - Asia Pacific Southeast 2 (Sydney)
   728      - "oss-ap-southeast-3.aliyuncs.com"
   729          - Southeast Asia Southeast 3 (Kuala Lumpur)
   730      - "oss-ap-southeast-5.aliyuncs.com"
   731          - Asia Pacific Southeast 5 (Jakarta)
   732      - "oss-ap-northeast-1.aliyuncs.com"
   733          - Asia Pacific Northeast 1 (Japan)
   734      - "oss-ap-south-1.aliyuncs.com"
   735          - Asia Pacific South 1 (Mumbai)
   736      - "oss-eu-central-1.aliyuncs.com"
   737          - Central Europe 1 (Frankfurt)
   738      - "oss-eu-west-1.aliyuncs.com"
   739          - West Europe (London)
   740      - "oss-me-east-1.aliyuncs.com"
   741          - Middle East 1 (Dubai)
   742  
   743  #### --s3-endpoint
   744  
   745  Endpoint for S3 API.
   746  Required when using an S3 clone.
   747  
   748  - Config:      endpoint
   749  - Env Var:     RCLONE_S3_ENDPOINT
   750  - Type:        string
   751  - Default:     ""
   752  - Examples:
   753      - "objects-us-east-1.dream.io"
   754          - Dream Objects endpoint
   755      - "nyc3.digitaloceanspaces.com"
   756          - Digital Ocean Spaces New York 3
   757      - "ams3.digitaloceanspaces.com"
   758          - Digital Ocean Spaces Amsterdam 3
   759      - "sgp1.digitaloceanspaces.com"
   760          - Digital Ocean Spaces Singapore 1
   761      - "s3.wasabisys.com"
   762          - Wasabi US East endpoint
   763      - "s3.us-west-1.wasabisys.com"
   764          - Wasabi US West endpoint
   765      - "s3.eu-central-1.wasabisys.com"
   766          - Wasabi EU Central endpoint
   767  
   768  #### --s3-location-constraint
   769  
   770  Location constraint - must be set to match the Region.
   771  Used when creating buckets only.
   772  
   773  - Config:      location_constraint
   774  - Env Var:     RCLONE_S3_LOCATION_CONSTRAINT
   775  - Type:        string
   776  - Default:     ""
   777  - Examples:
   778      - ""
   779          - Empty for US Region, Northern Virginia or Pacific Northwest.
   780      - "us-east-2"
   781          - US East (Ohio) Region.
   782      - "us-west-2"
   783          - US West (Oregon) Region.
   784      - "us-west-1"
   785          - US West (Northern California) Region.
   786      - "ca-central-1"
   787          - Canada (Central) Region.
   788      - "eu-west-1"
   789          - EU (Ireland) Region.
   790      - "eu-west-2"
   791          - EU (London) Region.
   792      - "eu-north-1"
   793          - EU (Stockholm) Region.
   794      - "EU"
   795          - EU Region.
   796      - "ap-southeast-1"
   797          - Asia Pacific (Singapore) Region.
   798      - "ap-southeast-2"
   799          - Asia Pacific (Sydney) Region.
   800      - "ap-northeast-1"
   801          - Asia Pacific (Tokyo) Region.
   802      - "ap-northeast-2"
   803          - Asia Pacific (Seoul)
   804      - "ap-south-1"
   805          - Asia Pacific (Mumbai)
   806      - "ap-east-1"
   807          - Asia Pacific (Hong Kong)
   808      - "sa-east-1"
   809          - South America (Sao Paulo) Region.
   810  
   811  #### --s3-location-constraint
   812  
   813  Location constraint - must match endpoint when using IBM Cloud Public.
For on-prem COS, do not make a selection from this list; hit enter
   815  
   816  - Config:      location_constraint
   817  - Env Var:     RCLONE_S3_LOCATION_CONSTRAINT
   818  - Type:        string
   819  - Default:     ""
   820  - Examples:
   821      - "us-standard"
   822          - US Cross Region Standard
   823      - "us-vault"
   824          - US Cross Region Vault
   825      - "us-cold"
   826          - US Cross Region Cold
   827      - "us-flex"
   828          - US Cross Region Flex
   829      - "us-east-standard"
   830          - US East Region Standard
   831      - "us-east-vault"
   832          - US East Region Vault
   833      - "us-east-cold"
   834          - US East Region Cold
   835      - "us-east-flex"
   836          - US East Region Flex
   837      - "us-south-standard"
   838          - US South Region Standard
   839      - "us-south-vault"
   840          - US South Region Vault
   841      - "us-south-cold"
   842          - US South Region Cold
   843      - "us-south-flex"
   844          - US South Region Flex
   845      - "eu-standard"
   846          - EU Cross Region Standard
   847      - "eu-vault"
   848          - EU Cross Region Vault
   849      - "eu-cold"
   850          - EU Cross Region Cold
   851      - "eu-flex"
   852          - EU Cross Region Flex
   853      - "eu-gb-standard"
   854          - Great Britain Standard
   855      - "eu-gb-vault"
   856          - Great Britain Vault
   857      - "eu-gb-cold"
   858          - Great Britain Cold
   859      - "eu-gb-flex"
   860          - Great Britain Flex
   861      - "ap-standard"
   862          - APAC Standard
   863      - "ap-vault"
   864          - APAC Vault
   865      - "ap-cold"
   866          - APAC Cold
   867      - "ap-flex"
   868          - APAC Flex
   869      - "mel01-standard"
   870          - Melbourne Standard
   871      - "mel01-vault"
   872          - Melbourne Vault
   873      - "mel01-cold"
   874          - Melbourne Cold
   875      - "mel01-flex"
   876          - Melbourne Flex
   877      - "tor01-standard"
   878          - Toronto Standard
   879      - "tor01-vault"
   880          - Toronto Vault
   881      - "tor01-cold"
   882          - Toronto Cold
   883      - "tor01-flex"
   884          - Toronto Flex
   885  
   886  #### --s3-location-constraint
   887  
   888  Location constraint - must be set to match the Region.
   889  Leave blank if not sure. Used when creating buckets only.
   890  
   891  - Config:      location_constraint
   892  - Env Var:     RCLONE_S3_LOCATION_CONSTRAINT
   893  - Type:        string
   894  - Default:     ""
   895  
   896  #### --s3-acl
   897  
   898  Canned ACL used when creating buckets and storing or copying objects.
   899  
   900  This ACL is used for creating objects and if bucket_acl isn't set, for creating buckets too.
   901  
   902  For more info visit https://docs.aws.amazon.com/AmazonS3/latest/dev/acl-overview.html#canned-acl
   903  
   904  Note that this ACL is applied when server side copying objects as S3
   905  doesn't copy the ACL from the source but rather writes a fresh one.
   906  
   907  - Config:      acl
   908  - Env Var:     RCLONE_S3_ACL
   909  - Type:        string
   910  - Default:     ""
   911  - Examples:
   912      - "private"
   913          - Owner gets FULL_CONTROL. No one else has access rights (default).
   914      - "public-read"
   915          - Owner gets FULL_CONTROL. The AllUsers group gets READ access.
   916      - "public-read-write"
   917          - Owner gets FULL_CONTROL. The AllUsers group gets READ and WRITE access.
   918          - Granting this on a bucket is generally not recommended.
   919      - "authenticated-read"
   920          - Owner gets FULL_CONTROL. The AuthenticatedUsers group gets READ access.
   921      - "bucket-owner-read"
   922          - Object owner gets FULL_CONTROL. Bucket owner gets READ access.
   923          - If you specify this canned ACL when creating a bucket, Amazon S3 ignores it.
   924      - "bucket-owner-full-control"
   925          - Both the object owner and the bucket owner get FULL_CONTROL over the object.
   926          - If you specify this canned ACL when creating a bucket, Amazon S3 ignores it.
   927      - "private"
   928          - Owner gets FULL_CONTROL. No one else has access rights (default). This acl is available on IBM Cloud (Infra), IBM Cloud (Storage), On-Premise COS
   929      - "public-read"
   930          - Owner gets FULL_CONTROL. The AllUsers group gets READ access. This acl is available on IBM Cloud (Infra), IBM Cloud (Storage), On-Premise IBM COS
   931      - "public-read-write"
   932          - Owner gets FULL_CONTROL. The AllUsers group gets READ and WRITE access. This acl is available on IBM Cloud (Infra), On-Premise IBM COS
   933      - "authenticated-read"
   934          - Owner gets FULL_CONTROL. The AuthenticatedUsers group gets READ access. Not supported on Buckets. This acl is available on IBM Cloud (Infra) and On-Premise IBM COS
   935  
   936  #### --s3-server-side-encryption
   937  
   938  The server-side encryption algorithm used when storing this object in S3.
   939  
   940  - Config:      server_side_encryption
   941  - Env Var:     RCLONE_S3_SERVER_SIDE_ENCRYPTION
   942  - Type:        string
   943  - Default:     ""
   944  - Examples:
   945      - ""
   946          - None
   947      - "AES256"
   948          - AES256
   949      - "aws:kms"
   950          - aws:kms
   951  
   952  #### --s3-sse-kms-key-id
   953  
If using KMS you must provide the ARN of the key.
   955  
   956  - Config:      sse_kms_key_id
   957  - Env Var:     RCLONE_S3_SSE_KMS_KEY_ID
   958  - Type:        string
   959  - Default:     ""
   960  - Examples:
   961      - ""
   962          - None
   963      - "arn:aws:kms:us-east-1:*"
   964          - arn:aws:kms:*
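
As an illustrative sketch (the remote name, account ID, and key ID are placeholders), a remote using SSE-KMS sets `server_side_encryption` and `sse_kms_key_id` together in the config:

```
[kms-example]
type = s3
provider = AWS
region = us-east-1
server_side_encryption = aws:kms
sse_kms_key_id = arn:aws:kms:us-east-1:123456789012:key/your-key-id
```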
   965  
   966  #### --s3-storage-class
   967  
   968  The storage class to use when storing new objects in S3.
   969  
   970  - Config:      storage_class
   971  - Env Var:     RCLONE_S3_STORAGE_CLASS
   972  - Type:        string
   973  - Default:     ""
   974  - Examples:
   975      - ""
   976          - Default
   977      - "STANDARD"
   978          - Standard storage class
   979      - "REDUCED_REDUNDANCY"
   980          - Reduced redundancy storage class
   981      - "STANDARD_IA"
   982          - Standard Infrequent Access storage class
   983      - "ONEZONE_IA"
   984          - One Zone Infrequent Access storage class
   985      - "GLACIER"
   986          - Glacier storage class
   987      - "DEEP_ARCHIVE"
   988          - Glacier Deep Archive storage class
   989      - "INTELLIGENT_TIERING"
   990          - Intelligent-Tiering storage class
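
As an illustrative fragment (the remote name is a placeholder), the storage class can also be pinned in the remote's config rather than passed as a flag:

```
[s3-ia]
type = s3
provider = AWS
region = us-east-1
storage_class = STANDARD_IA
```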
   991  
   992  #### --s3-storage-class
   993  
   994  The storage class to use when storing new objects in OSS.
   995  
   996  - Config:      storage_class
   997  - Env Var:     RCLONE_S3_STORAGE_CLASS
   998  - Type:        string
   999  - Default:     ""
  1000  - Examples:
  1001      - ""
  1002          - Default
  1003      - "STANDARD"
  1004          - Standard storage class
  1005      - "GLACIER"
  1006          - Archive storage mode.
  1007      - "STANDARD_IA"
  1008          - Infrequent access storage mode.
  1009  
  1010  ### Advanced Options
  1011  
  1012  Here are the advanced options specific to s3 (Amazon S3 Compliant Storage Provider (AWS, Alibaba, Ceph, Digital Ocean, Dreamhost, IBM COS, Minio, etc)).
  1013  
  1014  #### --s3-bucket-acl
  1015  
  1016  Canned ACL used when creating buckets.
  1017  
  1018  For more info visit https://docs.aws.amazon.com/AmazonS3/latest/dev/acl-overview.html#canned-acl
  1019  
Note that this ACL is applied only when creating buckets.  If it
isn't set then "acl" is used instead.
  1022  
  1023  - Config:      bucket_acl
  1024  - Env Var:     RCLONE_S3_BUCKET_ACL
  1025  - Type:        string
  1026  - Default:     ""
  1027  - Examples:
  1028      - "private"
  1029          - Owner gets FULL_CONTROL. No one else has access rights (default).
  1030      - "public-read"
  1031          - Owner gets FULL_CONTROL. The AllUsers group gets READ access.
  1032      - "public-read-write"
  1033          - Owner gets FULL_CONTROL. The AllUsers group gets READ and WRITE access.
  1034          - Granting this on a bucket is generally not recommended.
  1035      - "authenticated-read"
  1036          - Owner gets FULL_CONTROL. The AuthenticatedUsers group gets READ access.
  1037  
  1038  #### --s3-upload-cutoff
  1039  
  1040  Cutoff for switching to chunked upload
  1041  
  1042  Any files larger than this will be uploaded in chunks of chunk_size.
  1043  The minimum is 0 and the maximum is 5GB.
  1044  
  1045  - Config:      upload_cutoff
  1046  - Env Var:     RCLONE_S3_UPLOAD_CUTOFF
  1047  - Type:        SizeSuffix
  1048  - Default:     200M
  1049  
  1050  #### --s3-chunk-size
  1051  
  1052  Chunk size to use for uploading.
  1053  
  1054  When uploading files larger than upload_cutoff or files with unknown
  1055  size (eg from "rclone rcat" or uploaded with "rclone mount" or google
  1056  photos or google docs) they will be uploaded as multipart uploads
  1057  using this chunk size.
  1058  
  1059  Note that "--s3-upload-concurrency" chunks of this size are buffered
  1060  in memory per transfer.
  1061  
  1062  If you are transferring large files over high speed links and you have
  1063  enough memory, then increasing this will speed up the transfers.
  1064  
  1065  Rclone will automatically increase the chunk size when uploading a
  1066  large file of known size to stay below the 10,000 chunks limit.
  1067  
  1068  Files of unknown size are uploaded with the configured
  1069  chunk_size. Since the default chunk size is 5MB and there can be at
  1070  most 10,000 chunks, this means that by default the maximum size of
  1071  file you can stream upload is 48GB.  If you wish to stream upload
  1072  larger files then you will need to increase chunk_size.
  1073  
  1074  - Config:      chunk_size
  1075  - Env Var:     RCLONE_S3_CHUNK_SIZE
  1076  - Type:        SizeSuffix
  1077  - Default:     5M
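
To see how chunk_size interacts with the 10,000 chunk limit, here is a small shell sketch that computes the minimum chunk size needed to stream upload a file of a given size (the 1000 GiB target is just an example):

```shell
# Minimum chunk size (in MiB) to stream a file of target_gib GiB
# within the 10,000 chunk limit, using ceiling division.
target_gib=1000
min_chunk_mib=$(( (target_gib * 1024 + 9999) / 10000 ))
echo "Use at least --s3-chunk-size ${min_chunk_mib}M"
```

With the default 5M chunk size the same arithmetic gives the roughly 48GB stream-upload ceiling described above.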
  1078  
  1079  #### --s3-copy-cutoff
  1080  
  1081  Cutoff for switching to multipart copy
  1082  
  1083  Any files larger than this that need to be server side copied will be
  1084  copied in chunks of this size.
  1085  
  1086  The minimum is 0 and the maximum is 5GB.
  1087  
  1088  - Config:      copy_cutoff
  1089  - Env Var:     RCLONE_S3_COPY_CUTOFF
  1090  - Type:        SizeSuffix
  1091  - Default:     5G
  1092  
  1093  #### --s3-disable-checksum
  1094  
  1095  Don't store MD5 checksum with object metadata
  1096  
  1097  - Config:      disable_checksum
  1098  - Env Var:     RCLONE_S3_DISABLE_CHECKSUM
  1099  - Type:        bool
  1100  - Default:     false
  1101  
  1102  #### --s3-session-token
  1103  
  1104  An AWS session token
  1105  
  1106  - Config:      session_token
  1107  - Env Var:     RCLONE_S3_SESSION_TOKEN
  1108  - Type:        string
  1109  - Default:     ""
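
As a sketch of using temporary STS credentials (all values below are placeholders), the session token goes in the config alongside the matching temporary key pair:

```
[s3-temp]
type = s3
provider = AWS
access_key_id = ASIAXXXXXXXXXXXXXXXX
secret_access_key = xxxxxxxx
session_token = xxxxxxxx
region = us-east-1
```

Note that temporary credentials expire, so a config like this needs refreshing when they do.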
  1110  
  1111  #### --s3-upload-concurrency
  1112  
  1113  Concurrency for multipart uploads.
  1114  
  1115  This is the number of chunks of the same file that are uploaded
  1116  concurrently.
  1117  
If you are uploading small numbers of large files over a high speed link
and these uploads do not fully utilize your bandwidth, then increasing
this may help to speed up the transfers.
  1121  
  1122  - Config:      upload_concurrency
  1123  - Env Var:     RCLONE_S3_UPLOAD_CONCURRENCY
  1124  - Type:        int
  1125  - Default:     4
  1126  
  1127  #### --s3-force-path-style
  1128  
If true use path style access; if false use virtual hosted style.
  1130  
If this is true (the default) then rclone will use path style access;
if false then rclone will use virtual hosted style. See [the AWS S3
  1133  docs](https://docs.aws.amazon.com/AmazonS3/latest/dev/UsingBucket.html#access-bucket-intro)
  1134  for more info.
  1135  
  1136  Some providers (eg AWS, Aliyun OSS or Netease COS) require this set to
  1137  false - rclone will do this automatically based on the provider
  1138  setting.
  1139  
  1140  - Config:      force_path_style
  1141  - Env Var:     RCLONE_S3_FORCE_PATH_STYLE
  1142  - Type:        bool
  1143  - Default:     true
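
The difference between the two styles is only where the bucket name appears in the URL. This shell sketch prints both forms for a hypothetical bucket and endpoint:

```shell
# Path style puts the bucket in the URL path; virtual hosted style
# puts it in the hostname.
bucket=mybucket
endpoint=s3.example.com
key=path/to/object
path_style="https://${endpoint}/${bucket}/${key}"
virtual_style="https://${bucket}.${endpoint}/${key}"
echo "path style:           ${path_style}"
echo "virtual hosted style: ${virtual_style}"
```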
  1144  
  1145  #### --s3-v2-auth
  1146  
  1147  If true use v2 authentication.
  1148  
  1149  If this is false (the default) then rclone will use v4 authentication.
  1150  If it is set then rclone will use v2 authentication.
  1151  
  1152  Use this only if v4 signatures don't work, eg pre Jewel/v10 CEPH.
  1153  
  1154  - Config:      v2_auth
  1155  - Env Var:     RCLONE_S3_V2_AUTH
  1156  - Type:        bool
  1157  - Default:     false
  1158  
  1159  #### --s3-use-accelerate-endpoint
  1160  
  1161  If true use the AWS S3 accelerated endpoint.
  1162  
  1163  See: [AWS S3 Transfer acceleration](https://docs.aws.amazon.com/AmazonS3/latest/dev/transfer-acceleration-examples.html)
  1164  
  1165  - Config:      use_accelerate_endpoint
  1166  - Env Var:     RCLONE_S3_USE_ACCELERATE_ENDPOINT
  1167  - Type:        bool
  1168  - Default:     false
  1169  
  1170  #### --s3-leave-parts-on-error
  1171  
  1172  If true avoid calling abort upload on a failure, leaving all successfully uploaded parts on S3 for manual recovery.
  1173  
  1174  It should be set to true for resuming uploads across different sessions.
  1175  
  1176  WARNING: Storing parts of an incomplete multipart upload counts towards space usage on S3 and will add additional costs if not cleaned up.
  1177  
  1178  
  1179  - Config:      leave_parts_on_error
  1180  - Env Var:     RCLONE_S3_LEAVE_PARTS_ON_ERROR
  1181  - Type:        bool
  1182  - Default:     false
  1183  
  1184  #### --s3-list-chunk
  1185  
  1186  Size of listing chunk (response list for each ListObject S3 request).
  1187  
  1188  This option is also known as "MaxKeys", "max-items", or "page-size" from the AWS S3 specification.
Most services truncate the response list to 1000 objects even if more are requested.
  1190  In AWS S3 this is a global maximum and cannot be changed, see [AWS S3](https://docs.aws.amazon.com/cli/latest/reference/s3/ls.html).
  1191  In Ceph, this can be increased with the "rgw list buckets max chunk" option.
  1192  
  1193  
  1194  - Config:      list_chunk
  1195  - Env Var:     RCLONE_S3_LIST_CHUNK
  1196  - Type:        int
  1197  - Default:     1000
  1198  
  1199  #### --s3-encoding
  1200  
  1201  This sets the encoding for the backend.
  1202  
  1203  See: the [encoding section in the overview](/overview/#encoding) for more info.
  1204  
  1205  - Config:      encoding
  1206  - Env Var:     RCLONE_S3_ENCODING
  1207  - Type:        MultiEncoder
  1208  - Default:     Slash,InvalidUtf8,Dot
  1209  
  1210  <!--- autogenerated options stop -->
  1211  
  1212  ### Anonymous access to public buckets ###
  1213  
  1214  If you want to use rclone to access a public bucket, configure with a
  1215  blank `access_key_id` and `secret_access_key`.  Your config should end
  1216  up looking like this:
  1217  
  1218  ```
  1219  [anons3]
  1220  type = s3
  1221  provider = AWS
  1222  env_auth = false
  1223  access_key_id = 
  1224  secret_access_key = 
  1225  region = us-east-1
  1226  endpoint = 
  1227  location_constraint = 
  1228  acl = private
  1229  server_side_encryption = 
  1230  storage_class = 
  1231  ```
  1232  
  1233  Then use it as normal with the name of the public bucket, eg
  1234  
  1235      rclone lsd anons3:1000genomes
  1236  
  1237  You will be able to list and copy data but not upload it.
  1238  
  1239  ### Ceph ###
  1240  
  1241  [Ceph](https://ceph.com/) is an open source unified, distributed
  1242  storage system designed for excellent performance, reliability and
  1243  scalability.  It has an S3 compatible object storage interface.
  1244  
  1245  To use rclone with Ceph, configure as above but leave the region blank
  1246  and set the endpoint.  You should end up with something like this in
  1247  your config:
  1248  
  1249  
  1250  ```
  1251  [ceph]
  1252  type = s3
  1253  provider = Ceph
  1254  env_auth = false
  1255  access_key_id = XXX
  1256  secret_access_key = YYY
  1257  region =
  1258  endpoint = https://ceph.endpoint.example.com
  1259  location_constraint =
  1260  acl =
  1261  server_side_encryption =
  1262  storage_class =
  1263  ```
  1264  
  1265  If you are using an older version of CEPH, eg 10.2.x Jewel, then you
  1266  may need to supply the parameter `--s3-upload-cutoff 0` or put this in
  1267  the config file as `upload_cutoff 0` to work around a bug which causes
  1268  uploading of small files to fail.
  1269  
  1270  Note also that Ceph sometimes puts `/` in the passwords it gives
  1271  users.  If you read the secret access key using the command line tools
  1272  you will get a JSON blob with the `/` escaped as `\/`.  Make sure you
  1273  only write `/` in the secret access key.
  1274  
  1275  Eg the dump from Ceph looks something like this (irrelevant keys
  1276  removed).
  1277  
  1278  ```
  1279  {
  1280      "user_id": "xxx",
  1281      "display_name": "xxxx",
  1282      "keys": [
  1283          {
  1284              "user": "xxx",
  1285              "access_key": "xxxxxx",
  1286              "secret_key": "xxxxxx\/xxxx"
  1287          }
  1288      ],
  1289  }
  1290  ```
  1291  
Because this is a JSON dump, it is encoding the `/` as `\/`, so if you
use the secret key as `xxxxxx/xxxx` it will work fine.
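
As a sketch of the un-escaping (using the placeholder values from above), parsing the dump as JSON yields the secret in the correct form:

```shell
# JSON parsing converts the escaped \/ back to a plain / automatically.
json='{"keys":[{"access_key":"xxxxxx","secret_key":"xxxxxx\/xxxx"}]}'
secret=$(printf '%s' "$json" | python3 -c 'import json,sys; print(json.load(sys.stdin)["keys"][0]["secret_key"])')
echo "$secret"
```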
  1294  
  1295  ### Dreamhost ###
  1296  
  1297  Dreamhost [DreamObjects](https://www.dreamhost.com/cloud/storage/) is
  1298  an object storage system based on CEPH.
  1299  
  1300  To use rclone with Dreamhost, configure as above but leave the region blank
  1301  and set the endpoint.  You should end up with something like this in
  1302  your config:
  1303  
  1304  ```
  1305  [dreamobjects]
  1306  type = s3
  1307  provider = DreamHost
  1308  env_auth = false
  1309  access_key_id = your_access_key
  1310  secret_access_key = your_secret_key
  1311  region =
  1312  endpoint = objects-us-west-1.dream.io
  1313  location_constraint =
  1314  acl = private
  1315  server_side_encryption =
  1316  storage_class =
  1317  ```
  1318  
  1319  ### DigitalOcean Spaces ###
  1320  
  1321  [Spaces](https://www.digitalocean.com/products/object-storage/) is an [S3-interoperable](https://developers.digitalocean.com/documentation/spaces/) object storage service from cloud provider DigitalOcean.
  1322  
To connect to DigitalOcean Spaces you will need an access key and secret key. These can be retrieved on the "[Applications & API](https://cloud.digitalocean.com/settings/api/tokens)" page of the DigitalOcean control panel. They will be needed when prompted by `rclone config` for your `access_key_id` and `secret_access_key`.
  1324  
  1325  When prompted for a `region` or `location_constraint`, press enter to use the default value. The region must be included in the `endpoint` setting (e.g. `nyc3.digitaloceanspaces.com`). The default values can be used for other settings.
  1326  
  1327  Going through the whole process of creating a new remote by running `rclone config`, each prompt should be answered as shown below:
  1328  
  1329  ```
  1330  Storage> s3
  1331  env_auth> 1
  1332  access_key_id> YOUR_ACCESS_KEY
  1333  secret_access_key> YOUR_SECRET_KEY
  1334  region>
  1335  endpoint> nyc3.digitaloceanspaces.com
  1336  location_constraint>
  1337  acl>
  1338  storage_class>
  1339  ```
  1340  
  1341  The resulting configuration file should look like:
  1342  
  1343  ```
  1344  [spaces]
  1345  type = s3
  1346  provider = DigitalOcean
  1347  env_auth = false
  1348  access_key_id = YOUR_ACCESS_KEY
  1349  secret_access_key = YOUR_SECRET_KEY
  1350  region =
  1351  endpoint = nyc3.digitaloceanspaces.com
  1352  location_constraint =
  1353  acl =
  1354  server_side_encryption =
  1355  storage_class =
  1356  ```
  1357  
  1358  Once configured, you can create a new Space and begin copying files. For example:
  1359  
  1360  ```
  1361  rclone mkdir spaces:my-new-space
  1362  rclone copy /path/to/files spaces:my-new-space
  1363  ```
  1364  
  1365  ### IBM COS (S3) ###
  1366  
Information stored with IBM Cloud Object Storage is encrypted and dispersed across multiple geographic locations, and accessed through an implementation of the S3 API. This service makes use of the distributed storage technologies provided by IBM’s Cloud Object Storage System (formerly Cleversafe). For more information visit http://www.ibm.com/cloud/object-storage
  1368  
  1369  To configure access to IBM COS S3, follow the steps below:
  1370  
  1371  1. Run rclone config and select n for a new remote.
  1372  ```
  1373  	2018/02/14 14:13:11 NOTICE: Config file "C:\\Users\\a\\.config\\rclone\\rclone.conf" not found - using defaults
  1374  	No remotes found - make a new one
  1375  	n) New remote
  1376  	s) Set configuration password
  1377  	q) Quit config
  1378  	n/s/q> n
  1379  ```
  1380  
  1381  2. Enter the name for the configuration
  1382  ```
  1383  	name> <YOUR NAME>
  1384  ```
  1385  
  1386  3. Select "s3" storage.
  1387  ```
  1388  Choose a number from below, or type in your own value
  1389   	1 / Alias for an existing remote
  1390     	\ "alias"
  1391   	2 / Amazon Drive
  1392     	\ "amazon cloud drive"
 	3 / Amazon S3 Compliant Storage Providers (Dreamhost, Ceph, Minio, IBM COS)
  1394     	\ "s3"
  1395   	4 / Backblaze B2
  1396     	\ "b2"
  1397  [snip]
  1398  	23 / http Connection
  1399      \ "http"
  1400  Storage> 3
  1401  ```
  1402  
  1403  4. Select IBM COS as the S3 Storage Provider.
  1404  ```
  1405  Choose the S3 provider.
  1406  Choose a number from below, or type in your own value
  1407  	 1 / Choose this option to configure Storage to AWS S3
  1408  	   \ "AWS"
  1409   	 2 / Choose this option to configure Storage to Ceph Systems
  1410    	 \ "Ceph"
  1411  	 3 /  Choose this option to configure Storage to Dreamhost
  1412       \ "Dreamhost"
  1413     4 / Choose this option to the configure Storage to IBM COS S3
  1414     	 \ "IBMCOS"
  1415   	 5 / Choose this option to the configure Storage to Minio
  1416       \ "Minio"
  1417  	 Provider>4
  1418  ```
  1419  
  1420  5. Enter the Access Key and Secret.
  1421  ```
  1422  	AWS Access Key ID - leave blank for anonymous access or runtime credentials.
  1423  	access_key_id> <>
  1424  	AWS Secret Access Key (password) - leave blank for anonymous access or runtime credentials.
  1425  	secret_access_key> <>
  1426  ```
  1427  
6. Specify the endpoint for IBM COS. For Public IBM COS, choose from the options below. For On Premise IBM COS, enter an endpoint address.
  1429  ```
  1430  	Endpoint for IBM COS S3 API.
  1431  	Specify if using an IBM COS On Premise.
  1432  	Choose a number from below, or type in your own value
  1433  	 1 / US Cross Region Endpoint
  1434     	   \ "s3-api.us-geo.objectstorage.softlayer.net"
  1435  	 2 / US Cross Region Dallas Endpoint
  1436     	   \ "s3-api.dal.us-geo.objectstorage.softlayer.net"
  1437   	 3 / US Cross Region Washington DC Endpoint
  1438     	   \ "s3-api.wdc-us-geo.objectstorage.softlayer.net"
  1439  	 4 / US Cross Region San Jose Endpoint
  1440  	   \ "s3-api.sjc-us-geo.objectstorage.softlayer.net"
  1441  	 5 / US Cross Region Private Endpoint
  1442  	   \ "s3-api.us-geo.objectstorage.service.networklayer.com"
  1443  	 6 / US Cross Region Dallas Private Endpoint
  1444  	   \ "s3-api.dal-us-geo.objectstorage.service.networklayer.com"
  1445  	 7 / US Cross Region Washington DC Private Endpoint
  1446  	   \ "s3-api.wdc-us-geo.objectstorage.service.networklayer.com"
  1447  	 8 / US Cross Region San Jose Private Endpoint
  1448  	   \ "s3-api.sjc-us-geo.objectstorage.service.networklayer.com"
  1449  	 9 / US Region East Endpoint
  1450  	   \ "s3.us-east.objectstorage.softlayer.net"
  1451  	10 / US Region East Private Endpoint
  1452  	   \ "s3.us-east.objectstorage.service.networklayer.com"
  1453  	11 / US Region South Endpoint
  1454  [snip]
  1455  	34 / Toronto Single Site Private Endpoint
  1456  	   \ "s3.tor01.objectstorage.service.networklayer.com"
  1457  	endpoint>1
  1458  ```
  1459  
  1460  
7. Specify an IBM COS Location Constraint. The location constraint must match the endpoint when using IBM Cloud Public. For on-prem COS, do not make a selection from this list; hit enter
  1462  ```
  1463  	 1 / US Cross Region Standard
  1464  	   \ "us-standard"
  1465  	 2 / US Cross Region Vault
  1466  	   \ "us-vault"
  1467  	 3 / US Cross Region Cold
  1468  	   \ "us-cold"
  1469  	 4 / US Cross Region Flex
  1470  	   \ "us-flex"
  1471  	 5 / US East Region Standard
  1472  	   \ "us-east-standard"
  1473  	 6 / US East Region Vault
  1474  	   \ "us-east-vault"
  1475  	 7 / US East Region Cold
  1476  	   \ "us-east-cold"
  1477  	 8 / US East Region Flex
  1478  	   \ "us-east-flex"
  1479  	 9 / US South Region Standard
  1480  	   \ "us-south-standard"
  1481  	10 / US South Region Vault
  1482  	   \ "us-south-vault"
  1483  [snip]
  1484  	32 / Toronto Flex
  1485  	   \ "tor01-flex"
  1486  location_constraint>1
  1487  ```
  1488  
9. Specify a canned ACL. IBM Cloud (Storage) supports "public-read" and "private". IBM Cloud (Infra) supports all the canned ACLs. On-Premise COS supports all the canned ACLs.
  1490  ```
  1491  Canned ACL used when creating buckets and/or storing objects in S3.
  1492  For more info visit https://docs.aws.amazon.com/AmazonS3/latest/dev/acl-overview.html#canned-acl
  1493  Choose a number from below, or type in your own value
  1494        1 / Owner gets FULL_CONTROL. No one else has access rights (default). This acl is available on IBM Cloud (Infra), IBM Cloud (Storage), On-Premise COS
  1495        \ "private"
  1496        2  / Owner gets FULL_CONTROL. The AllUsers group gets READ access. This acl is available on IBM Cloud (Infra), IBM Cloud (Storage), On-Premise IBM COS
  1497        \ "public-read"
  1498        3 / Owner gets FULL_CONTROL. The AllUsers group gets READ and WRITE access. This acl is available on IBM Cloud (Infra), On-Premise IBM COS
  1499        \ "public-read-write"
  1500        4  / Owner gets FULL_CONTROL. The AuthenticatedUsers group gets READ access. Not supported on Buckets. This acl is available on IBM Cloud (Infra) and On-Premise IBM COS
  1501        \ "authenticated-read"
  1502  acl> 1
  1503  ```
  1504  
  1505  
12. Review the displayed configuration and accept to save the remote, then quit. The config file should look like this
  1507  ```
  1508  	[xxx]
  1509  	type = s3
  1510  	Provider = IBMCOS
  1511  	access_key_id = xxx
  1512  	secret_access_key = yyy
  1513  	endpoint = s3-api.us-geo.objectstorage.softlayer.net
  1514  	location_constraint = us-standard
  1515  	acl = private
  1516  ```
  1517  
  1518  13. Execute rclone commands
  1519  ```
  1520  	1)	Create a bucket.
  1521  		rclone mkdir IBM-COS-XREGION:newbucket
  1522  	2)	List available buckets.
  1523  		rclone lsd IBM-COS-XREGION:
  1524  		-1 2017-11-08 21:16:22        -1 test
  1525  		-1 2018-02-14 20:16:39        -1 newbucket
  1526  	3)	List contents of a bucket.
  1527  		rclone ls IBM-COS-XREGION:newbucket
  1528  		18685952 test.exe
  1529  	4)	Copy a file from local to remote.
  1530  		rclone copy /Users/file.txt IBM-COS-XREGION:newbucket
  1531  	5)	Copy a file from remote to local.
  1532  		rclone copy IBM-COS-XREGION:newbucket/file.txt .
  1533  	6)	Delete a file on remote.
  1534  		rclone delete IBM-COS-XREGION:newbucket/file.txt
  1535  ```
  1536  
  1537  ### Minio ###
  1538  
  1539  [Minio](https://minio.io/) is an object storage server built for cloud application developers and devops.
  1540  
  1541  It is very easy to install and provides an S3 compatible server which can be used by rclone.
  1542  
  1543  To use it, install Minio following the instructions [here](https://docs.minio.io/docs/minio-quickstart-guide).
  1544  
When it configures itself, Minio will print something like this
  1546  
  1547  ```
  1548  Endpoint:  http://192.168.1.106:9000  http://172.23.0.1:9000
  1549  AccessKey: USWUXHGYZQYFYFFIT3RE
  1550  SecretKey: MOJRH0mkL1IPauahWITSVvyDrQbEEIwljvmxdq03
  1551  Region:    us-east-1
  1552  SQS ARNs:  arn:minio:sqs:us-east-1:1:redis arn:minio:sqs:us-east-1:2:redis
  1553  
  1554  Browser Access:
  1555     http://192.168.1.106:9000  http://172.23.0.1:9000
  1556  
  1557  Command-line Access: https://docs.minio.io/docs/minio-client-quickstart-guide
  1558     $ mc config host add myminio http://192.168.1.106:9000 USWUXHGYZQYFYFFIT3RE MOJRH0mkL1IPauahWITSVvyDrQbEEIwljvmxdq03
  1559  
  1560  Object API (Amazon S3 compatible):
  1561     Go:         https://docs.minio.io/docs/golang-client-quickstart-guide
  1562     Java:       https://docs.minio.io/docs/java-client-quickstart-guide
  1563     Python:     https://docs.minio.io/docs/python-client-quickstart-guide
  1564     JavaScript: https://docs.minio.io/docs/javascript-client-quickstart-guide
  1565     .NET:       https://docs.minio.io/docs/dotnet-client-quickstart-guide
  1566  
  1567  Drive Capacity: 26 GiB Free, 165 GiB Total
  1568  ```
  1569  
  1570  These details need to go into `rclone config` like this.  Note that it
  1571  is important to enter the region as printed above.
  1572  
  1573  ```
  1574  env_auth> 1
  1575  access_key_id> USWUXHGYZQYFYFFIT3RE
  1576  secret_access_key> MOJRH0mkL1IPauahWITSVvyDrQbEEIwljvmxdq03
  1577  region> us-east-1
  1578  endpoint> http://192.168.1.106:9000
  1579  location_constraint>
  1580  server_side_encryption>
  1581  ```
  1582  
  1583  Which makes the config file look like this
  1584  
  1585  ```
  1586  [minio]
  1587  type = s3
  1588  provider = Minio
  1589  env_auth = false
  1590  access_key_id = USWUXHGYZQYFYFFIT3RE
  1591  secret_access_key = MOJRH0mkL1IPauahWITSVvyDrQbEEIwljvmxdq03
  1592  region = us-east-1
  1593  endpoint = http://192.168.1.106:9000
  1594  location_constraint =
  1595  server_side_encryption =
  1596  ```
  1597  
  1598  Once set up, you can then, for example, copy files into a bucket
  1599  
  1600  ```
  1601  rclone copy /path/to/files minio:bucket
  1602  ```
  1603  
  1604  ### Scaleway {#scaleway}
  1605  
  1606  [Scaleway](https://www.scaleway.com/object-storage/) Object Storage allows you to store anything from backups, logs and web assets to documents and photos.
  1607  Files can be uploaded from the Scaleway console or transferred through the API and CLI, or using any S3-compatible tool.
  1608  
  1609  Scaleway provides an S3 interface which can be configured for use with rclone like this:
  1610  
  1611  ```
  1612  [scaleway]
  1613  type = s3
  1614  env_auth = false
  1615  endpoint = s3.nl-ams.scw.cloud
  1616  access_key_id = SCWXXXXXXXXXXXXXX
  1617  secret_access_key = 1111111-2222-3333-44444-55555555555555
  1618  region = nl-ams
  1619  location_constraint =
  1620  acl = private
  1621  force_path_style = false
  1622  server_side_encryption =
  1623  storage_class =
  1624  ```
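
Once configured, the remote can be used like any other S3 remote. For example (the bucket name `my-bucket` below is just a placeholder):

```
rclone mkdir scaleway:my-bucket
rclone copy /path/to/files scaleway:my-bucket
rclone ls scaleway:my-bucket
```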
  1625  
  1626  ### Wasabi ###
  1627  
  1628  [Wasabi](https://wasabi.com) is a cloud-based object storage service for a
  1629  broad range of applications and use cases. Wasabi is designed for
  1630  individuals and organizations that require a high-performance,
  1631  reliable, and secure data storage infrastructure at minimal cost.
  1632  
  1633  Wasabi provides an S3 interface which can be configured for use with
  1634  rclone like this.
  1635  
  1636  ```
  1637  No remotes found - make a new one
  1638  n) New remote
  1639  s) Set configuration password
  1640  n/s> n
  1641  name> wasabi
  1642  Type of storage to configure.
  1643  Choose a number from below, or type in your own value
  1644  [snip]
  1645  XX / Amazon S3 (also Dreamhost, Ceph, Minio)
  1646     \ "s3"
  1647  [snip]
  1648  Storage> s3
  1649  Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars). Only applies if access_key_id and secret_access_key is blank.
  1650  Choose a number from below, or type in your own value
  1651   1 / Enter AWS credentials in the next step
  1652     \ "false"
  1653   2 / Get AWS credentials from the environment (env vars or IAM)
  1654     \ "true"
  1655  env_auth> 1
  1656  AWS Access Key ID - leave blank for anonymous access or runtime credentials.
  1657  access_key_id> YOURACCESSKEY
  1658  AWS Secret Access Key (password) - leave blank for anonymous access or runtime credentials.
  1659  secret_access_key> YOURSECRETACCESSKEY
  1660  Region to connect to.
  1661  Choose a number from below, or type in your own value
  1662     / The default endpoint - a good choice if you are unsure.
  1663   1 | US Region, Northern Virginia or Pacific Northwest.
  1664     | Leave location constraint empty.
  1665     \ "us-east-1"
  1666  [snip]
  1667  region> us-east-1
  1668  Endpoint for S3 API.
  1669  Leave blank if using AWS to use the default endpoint for the region.
  1670  Specify if using an S3 clone such as Ceph.
  1671  endpoint> s3.wasabisys.com
  1672  Location constraint - must be set to match the Region. Used when creating buckets only.
  1673  Choose a number from below, or type in your own value
  1674   1 / Empty for US Region, Northern Virginia or Pacific Northwest.
  1675     \ ""
  1676  [snip]
  1677  location_constraint>
  1678  Canned ACL used when creating buckets and/or storing objects in S3.
  1679  For more info visit https://docs.aws.amazon.com/AmazonS3/latest/dev/acl-overview.html#canned-acl
  1680  Choose a number from below, or type in your own value
  1681   1 / Owner gets FULL_CONTROL. No one else has access rights (default).
  1682     \ "private"
  1683  [snip]
  1684  acl>
  1685  The server-side encryption algorithm used when storing this object in S3.
  1686  Choose a number from below, or type in your own value
  1687   1 / None
  1688     \ ""
  1689   2 / AES256
  1690     \ "AES256"
  1691  server_side_encryption>
  1692  The storage class to use when storing objects in S3.
  1693  Choose a number from below, or type in your own value
  1694   1 / Default
  1695     \ ""
  1696   2 / Standard storage class
  1697     \ "STANDARD"
  1698   3 / Reduced redundancy storage class
  1699     \ "REDUCED_REDUNDANCY"
  1700   4 / Standard Infrequent Access storage class
  1701     \ "STANDARD_IA"
  1702  storage_class>
  1703  Remote config
  1704  --------------------
  1705  [wasabi]
  1706  env_auth = false
  1707  access_key_id = YOURACCESSKEY
  1708  secret_access_key = YOURSECRETACCESSKEY
  1709  region = us-east-1
  1710  endpoint = s3.wasabisys.com
  1711  location_constraint =
  1712  acl =
  1713  server_side_encryption =
  1714  storage_class =
  1715  --------------------
  1716  y) Yes this is OK
  1717  e) Edit this remote
  1718  d) Delete this remote
  1719  y/e/d> y
  1720  ```
  1721  
  1722  This will leave the config file looking like this.
  1723  
  1724  ```
  1725  [wasabi]
  1726  type = s3
  1727  provider = Wasabi
  1728  env_auth = false
  1729  access_key_id = YOURACCESSKEY
  1730  secret_access_key = YOURSECRETACCESSKEY
  1731  region =
  1732  endpoint = s3.wasabisys.com
  1733  location_constraint =
  1734  acl =
  1735  server_side_encryption =
  1736  storage_class =
  1737  ```
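
Once configured, the remote works like any other S3 remote, for example (again with a placeholder bucket name):

```
rclone mkdir wasabi:my-bucket
rclone copy /path/to/files wasabi:my-bucket
```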
  1738  
  1739  ### Alibaba OSS {#alibaba-oss}
  1740  
  1741  Here is an example of making an [Alibaba Cloud (Aliyun) OSS](https://www.alibabacloud.com/product/oss/)
  1742  configuration.  First run:
  1743  
  1744      rclone config
  1745  
  1746  This will guide you through an interactive setup process.
  1747  
  1748  ```
  1749  No remotes found - make a new one
  1750  n) New remote
  1751  s) Set configuration password
  1752  q) Quit config
  1753  n/s/q> n
  1754  name> oss
  1755  Type of storage to configure.
  1756  Enter a string value. Press Enter for the default ("").
  1757  Choose a number from below, or type in your own value
  1758  [snip]
  1759   4 / Amazon S3 Compliant Storage Provider (AWS, Alibaba, Ceph, Digital Ocean, Dreamhost, IBM COS, Minio, etc)
  1760     \ "s3"
  1761  [snip]
  1762  Storage> s3
  1763  Choose your S3 provider.
  1764  Enter a string value. Press Enter for the default ("").
  1765  Choose a number from below, or type in your own value
  1766   1 / Amazon Web Services (AWS) S3
  1767     \ "AWS"
  1768   2 / Alibaba Cloud Object Storage System (OSS) formerly Aliyun
  1769     \ "Alibaba"
  1770   3 / Ceph Object Storage
  1771     \ "Ceph"
  1772  [snip]
  1773  provider> Alibaba
  1774  Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars).
  1775  Only applies if access_key_id and secret_access_key is blank.
  1776  Enter a boolean value (true or false). Press Enter for the default ("false").
  1777  Choose a number from below, or type in your own value
  1778   1 / Enter AWS credentials in the next step
  1779     \ "false"
  1780   2 / Get AWS credentials from the environment (env vars or IAM)
  1781     \ "true"
  1782  env_auth> 1
  1783  AWS Access Key ID.
  1784  Leave blank for anonymous access or runtime credentials.
  1785  Enter a string value. Press Enter for the default ("").
  1786  access_key_id> accesskeyid
  1787  AWS Secret Access Key (password)
  1788  Leave blank for anonymous access or runtime credentials.
  1789  Enter a string value. Press Enter for the default ("").
  1790  secret_access_key> secretaccesskey
  1791  Endpoint for OSS API.
  1792  Enter a string value. Press Enter for the default ("").
  1793  Choose a number from below, or type in your own value
  1794   1 / East China 1 (Hangzhou)
  1795     \ "oss-cn-hangzhou.aliyuncs.com"
  1796   2 / East China 2 (Shanghai)
  1797     \ "oss-cn-shanghai.aliyuncs.com"
  1798   3 / North China 1 (Qingdao)
  1799     \ "oss-cn-qingdao.aliyuncs.com"
  1800  [snip]
  1801  endpoint> 1
  1802  Canned ACL used when creating buckets and storing or copying objects.
  1803  
  1804  Note that this ACL is applied when server side copying objects as S3
  1805  doesn't copy the ACL from the source but rather writes a fresh one.
  1806  Enter a string value. Press Enter for the default ("").
  1807  Choose a number from below, or type in your own value
  1808   1 / Owner gets FULL_CONTROL. No one else has access rights (default).
  1809     \ "private"
  1810   2 / Owner gets FULL_CONTROL. The AllUsers group gets READ access.
  1811     \ "public-read"
  1812     / Owner gets FULL_CONTROL. The AllUsers group gets READ and WRITE access.
  1813  [snip]
  1814  acl> 1
  1815  The storage class to use when storing new objects in OSS.
  1816  Enter a string value. Press Enter for the default ("").
  1817  Choose a number from below, or type in your own value
  1818   1 / Default
  1819     \ ""
  1820   2 / Standard storage class
  1821     \ "STANDARD"
  1822   3 / Archive storage mode.
  1823     \ "GLACIER"
  1824   4 / Infrequent access storage mode.
  1825     \ "STANDARD_IA"
  1826  storage_class> 1
  1827  Edit advanced config? (y/n)
  1828  y) Yes
  1829  n) No
  1830  y/n> n
  1831  Remote config
  1832  --------------------
  1833  [oss]
  1834  type = s3
  1835  provider = Alibaba
  1836  env_auth = false
  1837  access_key_id = accesskeyid
  1838  secret_access_key = secretaccesskey
  1839  endpoint = oss-cn-hangzhou.aliyuncs.com
  1840  acl = private
  1841  storage_class = Standard
  1842  --------------------
  1843  y) Yes this is OK
  1844  e) Edit this remote
  1845  d) Delete this remote
  1846  y/e/d> y
  1847  ```
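
The remote can then be used as usual, eg (the bucket name is a placeholder):

```
rclone lsd oss:
rclone mkdir oss:my-bucket
rclone copy /path/to/files oss:my-bucket
```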
  1848  
  1849  ### Netease NOS ###
  1850  
  1851  For Netease NOS, configure as normal with `rclone config`, setting the
  1852  provider to `Netease`.  This will automatically set
  1853  `force_path_style = false`, which is necessary for it to work properly.
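
As a rough sketch, the resulting config file section might look something like this. The access keys are placeholders, and the endpoint shown is not a real one - use the endpoint for your NOS region from your NOS console:

```
[nos]
type = s3
provider = Netease
env_auth = false
access_key_id = YOURACCESSKEY
secret_access_key = YOURSECRETACCESSKEY
endpoint = YOURNOSENDPOINT
```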